Jaqaur – Last Project

Motion Tracer

For my last project (no more projects–it’s so sad to think about), I decided to combine aspects from two previous ones: the motion capture project and the plotter project. For my plotter project, I had used a paintbrush with the Axidraw instead of a pen, and I really liked the result, but the biggest criticism I got was that the content itself (binary trees) was not very compelling. So, for this project, I chose to paint more interesting material: motion over time.
I came up with the idea to trace the paths of various body parts pretty early, but it wasn’t until I recorded BVH data and wrote some sample code that I could determine how many and which body parts to trace. Originally, I had thought that tracing the hands, feet, elbows, knees, and mid-back would make for a good, somewhat “legible” image, but as Golan and literally everyone else I talked to told me: less is more. So, I ultimately decided to trace only the hands and the feet. This makes the images a bit harder to decipher (as far as figuring out what the movement was), but they look better, and I guess that’s the point.
One more change I made from my old project was the addition of multiple colors. Golan advised me against this, but I elected to completely ignore him, and I really like how the multi-colored images turned out. I mixed different watercolors (my first time using watercolors since middle school art class) in a tray, and put those coordinates into my code. I added instructions between each line of color for the Axidraw to go dip the brush in water, wipe it off on a paper towel, and dip itself in a new color. I think that the different colored lines make the images a little easier to understand, and give them a bit more depth.
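To give a sense of what those color-change instructions amount to, here is a minimal sketch (not the project's actual code; the tray coordinates and helper names are made up) of how the dip/wipe/dip moves can be queued as extra waypoints in the brush path:

// Hypothetical sketch of the brush-change routine described above.
// The tray coordinates and helper names are illustrative; the real positions were measured by hand.
PVector waterWell = new PVector(20, 180);    // water cup
PVector towelSpot = new PVector(60, 180);    // paper towel
PVector[] paintWells = {                     // the mixed watercolors
  new PVector(100, 180), new PVector(130, 180), new PVector(160, 180)
};

// The brush path is queued as waypoints; z = 0 means brush down, z = 1 means brush up.
ArrayList<PVector> brushPath = new ArrayList<PVector>();

void addMove(PVector p, boolean brushDown) {
  brushPath.add(new PVector(p.x, p.y, brushDown ? 0 : 1));
}

// Rinse, wipe, and load a new color before the next line of the drawing.
void changeColor(int wellIndex) {
  addMove(waterWell, false);              // travel with the brush raised
  addMove(waterWell, true);               // dip in the water
  addMove(towelSpot, false);
  addMove(towelSpot, true);               // wipe off on the paper towel
  addMove(paintWells[wellIndex], false);
  addMove(paintWells[wellIndex], true);   // pick up the new color
}

void setup() {
  changeColor(1);
  println(brushPath.size() + " color-change waypoints queued");
}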

I tried to record a wide variety of motion capture data for this project (thanks to several more talented volunteers) including ballet, other dance, gymnastics, parkour, martial arts, and me tripping over things. Unfortunately, I had some technical difficulties the first night of MoCap recording, so most of that data ended up unusable (extremely low frame rate). The next night, I got much better data, but I discovered later that Breckle really is not good with upside-down (or even somewhat contorted) people. This made a lot of my parkour/martial arts data come out a bit weird, and I had to select only the best ones to print. If I were to do this project again, I would like to record motion capture data somewhere like Hunt Library, or just with a slightly better system than the one I used for this project. I think I would get somewhat nicer pictures that way.

One more aspect of my code that I want to point out is the little routine that maps the data to an appropriate size for the paper. It runs at the beginning, and finds the maximum and minimum x and y values reached by any body part. Then, it scales that data to be as large as possible (without messing up its original proportions) while still fitting inside the paper’s margins. This means that a really tall motion will be scaled down to the right height, and then have its width shrunk accordingly, and a really wide motion will be scaled by its width, and then have its height shrunk accordingly. I think that this was an important feature.
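In rough terms, that scaling step looks like the sketch below (simplified and with illustrative names; the real version is in the GitHub repo linked at the end of this post):

// Simplified sketch of the fit-to-margins scaling described above.
// "points" holds every (x, y) reached by any traced body part over the whole motion.
float fitScale(ArrayList<PVector> points, float paperW, float paperH, float margin) {
  float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
  float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
  for (PVector p : points) {
    minX = min(minX, p.x);
    maxX = max(maxX, p.x);
    minY = min(minY, p.y);
    maxY = max(maxY, p.y);
  }
  float usableW = paperW - 2 * margin;
  float usableH = paperH - 2 * margin;
  // One uniform factor keeps the original proportions: a tall motion is limited
  // by the paper's height, a wide motion by the paper's width.
  return min(usableW / (maxX - minX), usableH / (maxY - minY));
}

Every traced point is then multiplied by that single factor (and shifted so the scaled bounding box sits inside the margins) before being turned into brush strokes.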

Here are some of the images generated by my code:

Above are three pictures of the same motion capture data: a pirouette. It was the first motion I painted, and it took me a few tries to get the paper’s size coordinates right, and to mix the paint dark enough.


That’s an image generated by a series of martial arts movements, mostly punches. Note the dark spot where some paint dripped on the paper; I think little “mistakes” like that give these works character, as if they weren’t painted by a robot.


This one was generated by a somersault. I think when he went upside down, the data got a bit messed up, but I like the end result nonetheless.


Here is a REALLY messed up image that was supposed to be a front walkover. You can see her hands and feet on the right side, but I think when she went upside down, Breckle didn’t know what to do, and put her body parts all over the place. I don’t really consider this one part of my final series, and since I knew the data was messy, I wasn’t going to paint it, but I had paint/paper left over so I figured, why not? It’s interesting anyway.


I really like these. The bottom two are actually paintings of the same data, just with different paint, but all four are paintings of the same dance move, a “Pas De Chat.” I got three separate BVH recordings of the dancer doing the same move, and painted all of them. I think it’s really interesting to note the similarities between them, especially the top two.

All in all, I am super happy with how this project turned out. I would have liked to get a little more variety in (usable) motion capture data, because I love trying to trace where every limb goes during a movement (you can see some of this in my documentation video above). I also think that a more advanced way of capturing motion capture data would have been helpful, but what can you do?

Thanks for a great semester, Golan.

Here is a link to my code on Github: https://github.com/JacquiwithaQ/Interactivity-and-Computation/tree/master/Motion_Tracer

Jaqaur – LookingOutwards 09

I first saw “Traces” (the above video) over a year ago, and I remember that when I did, I was simultaneously impressed with the technology and disappointed that the artwork created by the shoes was not as representative of the dancer’s movement as it could have been. There’s something to be said for an abstract interpretation, obviously, but I wished there had been more than just circles, lines, and clumps. One reason that “Traces” looks this way is, of course, that its data comes from sensors in the dancer’s shoes rather than external motion capture. This is very impressive, but limits the motion recorded to just that of the feet.

For my project, I hope to create plotter artwork that is similar in style but different in appearance, because I want to use the whole body as the basis for the brush strokes. Ideally, I want to analyze which four or five joints on the body move the most in a portion of BVH data, and then have the plotter paint only those. In this way, I think the artwork will more closely resemble the motion of the dancer/person than if I painted only the hands and feet (or any fixed set of points). For example, in a pirouette, the dancer’s spinning knee would be more worth painting than her fairly stationary foot. I’ll see if I can find a way of effectively choosing which joints to paint using only code.
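A minimal sketch of one way to do that ranking (assuming each joint's position has already been parsed out of the BVH, frame by frame; the names here are illustrative) is to total up how far every joint travels and keep the biggest movers:

// Rough sketch: rank joints by total distance traveled across all frames.
// "frames" is assumed to hold one PVector position per joint per frame,
// and "howMany" is assumed to be no larger than the number of joints.
int[] mostActiveJoints(PVector[][] frames, int howMany) {
  int nJoints = frames[0].length;
  float[] travel = new float[nJoints];
  for (int f = 1; f < frames.length; f++) {
    for (int j = 0; j < nJoints; j++) {
      travel[j] += PVector.dist(frames[f][j], frames[f - 1][j]);
    }
  }
  // Simple selection of the top "howMany" movers.
  int[] best = new int[howMany];
  boolean[] used = new boolean[nJoints];
  for (int k = 0; k < howMany; k++) {
    int argmax = -1;
    for (int j = 0; j < nJoints; j++) {
      if (!used[j] && (argmax == -1 || travel[j] > travel[argmax])) argmax = j;
    }
    best[k] = argmax;
    used[argmax] = true;
  }
  return best;
}

For a pirouette, this kind of ranking would favor the orbiting knee and arms over the planted foot, which is the behavior described above.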

Here is another video that I discovered more recently. Unlike “Traces,” the software here doesn’t generate 2D images, but rather it makes a variety of animated shapes that it places on top of the video of the dancer. This isn’t quite what my project will be doing, but I think it uses similar technology, and I really like the end result. You’ll notice that several of the animations only involve select points on the body.

Jaqaur – Object

Screaming Monster Alarm

Okay, yes. This project was done fairly last minute, while I had no creative ideas, and it depends on the use of a CloudBit, which I didn’t have. So, all in all, I wouldn’t really call it a success.
Still, here is a little video I made showing my Baby Monster in action:

Basically, the screaming monster alarm is an alarm clock that is active between 8:00 and 8:30 (that’s the networked part) and will scream unless it sees motion with its motion sensors. I thought it would be good because it would stop me from accidentally falling back asleep after turning it off. I made the tube to cover it so it wouldn’t pick up miscellaneous other motion, like that of my roommate, because the sensor is very sensitive. I like how you can make it be quiet by covering its mouth (it was that covering action that gave me the idea to make the tube into a creature). But in retrospect, it’s really not much different from a regular alarm clock.

I got the sensor/buzzer part working, but I couldn’t actually get the timed aspect to work because, as I mentioned above, I didn’t really have access to a CloudBit and didn’t want to wait around for one for the sake of this project. I did create some events on “If This Then That” that WOULD activate and deactivate the CloudBit at 8:00 and 8:30, respectively, if I had one.

I have very few sketches for this project and no code to embed, due to all of the reasons above. Not my best work, but I’m glad I learned about “If This Then That,” and it’s always fun to play with Little Bits. Had networking our object not been part of the assignment, I would have loved to experiment with the Makey Makey. That device was definitely my favorite thing from this last week.

Jaqaur-Proposal

For my final project, I want to redo my plotter project, once again with ink and brush, but with more compelling content than the binary trees from before. I would like to replace those trees with lines generated from motion capture data. I could make a series of pictures with different colors, each one from a different motion (including ballet, martial arts, and break-dancing; I know a few people who are willing to help).

Ideally, each picture would feature a few different colors (with multiple trays of ink), perhaps each one corresponding to a joint whose progressive x and y coordinates will direct the line on the paper. I plan to experiment with which joints are best to draw; it might be different for different motions.

This project will require setting up and using the Kinect and also the Axidraw. So, it will take some time. But I’m very excited about it!

Jaqaur – Looking Outwards 08

Conditional Lover

conditioner_lover_saurabh_datta_09-800x500

I chose to write about “Conditional Lover” because I thought it was absolutely charming. It’s a robot that uses data it gathers from the pictures on your phone to figure out what sort of facial features you would find attractive. Then, it uses its camera and “fingers” to use Tinder for you, deciding which users you would like and swiping left or right accordingly.

conditioner_lover_saurabh_datta_14

I love this idea (as an art piece more than a practical tool), because it makes an objective, impersonal process out of dating, which should be very personal. However, when you think about it, Tinder has already done that, replacing meaningful connections with “Do I find him/her attractive at first glance?” If Tinder is going to take most of the humanity out of dating, why not just hand the whole thing over to a robot? This piece really made me think about our superficial culture surrounding relationships, if only for a little while, so I think it has succeeded not only as a work of technology, but as a work of art.

Conditional_Lover – A physical bot that automates your Tinder

Jaqaur – Manifesto

One part of the manifesto that stuck out to me was tenet number 4: The Critical Engineer looks beyond the “awe of implementation” to determine methods of influence and their specific effects. This is basically saying that it’s important to consider exactly why you are making the choices you are, and why you are developing the things you are, and if the answer is “because we can,” maybe you should reconsider. It reminded me of Jurassic Park, when Ian says, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” Just because something can be developed doesn’t mean it’s worth the time and resources, even if it would be really impressive or cool. Ultimately, the point of engineering is to improve lives, not to impress others.

Jaqaur – MoCap

PIXEL DANCER

For this project, I knew from the start that I wanted my motion capture data to be my friend Katia dancing ballet. We actually recorded that data before I had coded much of anything. Hopefully I can record more types of motion and apply this animation to them in the future.

Anyhow, for this project, I wanted something a little more abstract looking than shapes attached to a skeleton. So I decided to make an animation in which nothing actually moves. There is a 3D grid of “pixels” (which can be any shape, color, or size) that choose their size and/or opacity based on whether or not they occupy the space where the dancer is. They appear and disappear, and collectively this creates the figure of a person and the illusion of movement.

I decided to work in Processing, because I had the most experience in it, but 3D was still new to me. Initially, I had my pixels calculate their distance from each joint and decide how big to be based on that. It worked, but was just a series of sphere-ish clumps of pixels moving around, and I wanted it to look less default-y and more like a real person. So, I looked up how to calculate the distance from a point to a line segment, and used that for my distance formula instead (making line segments out of the connections between joints). This resulted in a sort of 3D stick figure that I was pretty happy with.

I played around a lot with different shapes, sizes, and colors for the pixels. I also tried to find the best speed for them to appear and disappear, but this was hard to do. Different people I showed it to had different opinions on how long the pixels should last. Some really liked it when they lasted a long time, because it looked more interesting and abstract, but others liked the pixels to disappear quickly so that the dancer’s figure was not obscured. Deciding how quickly the pixels should appear was less difficult. While I initially wanted them to fade in somewhat slowly, this did not look good at all. The skeleton simply moved too fast for the pixels ever to reach full size/opacity, so it was hard to tell what was going on. As a result, I made the pixels pop into existence, and I think that looks as good as it could. The motion capture data still looks a bit jumpy in places, but I think that’s the data and not the animation.

Since there was such a wide variety in the types of pixels I could use for this project, I decided to make a whole bunch of them. Here is how some of my favorites look.

The original pink cube pixels:
dance_mocap

Like the original, but with spheres instead of cubes (and they’re blue!):
teal_mocap

Back to cubes, but this time, they fade out instead of shrinking out. I think it looks sort of flame-like:
fire_mocap

Back to shrinking out, but the cubes’ colors change. I know rainbows are sort of obnoxious, but I thought it was worth a shot. I also played with some extreme camera angles on this one:
rainbow_mocap

One final example, pretty much the opposite of the last one. Spheres, with a fixed color, that fade out. I think it looks kind of like smoke, especially from a distance. But I like how it looks up close, too:
white_mocap

I didn’t really know how to sketch this concept, so I didn’t (and I’m kind of hoping that all of my variations above can make up for my somewhat lacking documentation of the process). In general, I’m happy with how this turned out, but I wish I had had the time to code it before we recorded any motion, so I could really tailor the movement to the animation. Like I said, I hope to do more with this project in the future. Maybe I can make a little music video…

Here is a link to my code on github (the pink cube version): https://github.com/JacquiwithaQ/Interactivity-and-Computation/tree/master/Pixel_Dancer

And here is my code. I am only embedding the files I edited, which do not include the parser.

//Adapted by Jacqui Fashimpaur from in-class example

BvhParser parserA = new BvhParser();
PBvh bvh1, bvh2, bvh3;
final int maxSide = 200;

ArrayList<Piece> allPieces;
	
public void setup()
{
  size( 1280, 720, P3D );
  background(0);
  noStroke();
  frameRate( 70 );
  //noSmooth();
  
  bvh1 = new PBvh( loadStrings( "Katia_Dance_1_body1.bvh" ) );
  allPieces = new ArrayList<Piece>();
  for (int x=-400; x<100; x+=8){
    for (int y=-50; y<500; y+=8){
       for (int z=-400; z<100; z+=8){
         Piece myPiece = new Piece(x,y,z,bvh1);
         allPieces.add(myPiece);
       }
    }
  }
  loop();
}

public void draw()
{
  background(0);
  float t = millis()/5000.0f;
  float xCenter = width/2.0 + 150;
  float zCenter = 300;
  float camX = (xCenter - 200);// + 400*cos(t));
  float camZ = (zCenter + 400 + 300*sin(t));
  //moving camera
  camera(camX, height/2.0 - 200, camZ, width/2.0 + 150, height/2.0 - 200, 300, 0, 1, 0);
  //still camera
  //camera(xCenter, height/2.0 - 300, -300, width/2.0 + 150, height/2.0 - 200, 300, 0, 1, 0);
  
  pushMatrix();
  translate( width/2, height/2-10, 0);
  scale(-1, -1, -1);
 
  ambientLight(250, 250, 250);
  bvh1.update( millis() );
  //bvh1.draw();
  for (int i=0; i<allPieces.size(); i++){
    Piece p = allPieces.get(i);
    p.draw();
  }
  popMatrix();
}
//This code by Jacqui Fashimpaur for Golan Levin's class
//November 2016

public class Piece {
  float xPos;
  float yPos;
  float zPos;
  float side;
  PBvh bones;

  public Piece(float startX, float startY, float startZ, PBvh bone_file) {
    xPos = startX;
    yPos = startY;
    zPos = startZ;
    side = 0.01;
    bones = bone_file;
  }

  void draw() {
    set_side();
    if (side > 0.01) {
      noStroke();
      fill(255, 255, 255, side);
      translate(xPos, yPos, zPos);
      sphereDetail(5);
      sphere(9);
      translate(-xPos, -yPos, -zPos);
    }
  }

  void set_side() {

    //LINE-BASED FIGURE IMPLEMENTATION
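    // Note: get_dist() and get_line_dist() return *squared* distances,
    // so the thresholds below are in squared units (e.g. 100 means within
    // 10 units of that limb segment).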
    float head_dist = get_dist(bones.parser.getBones().get(48));
    float left_shin_dist = get_line_dist(bones.parser.getBones().get(5), bones.parser.getBones().get(6));
    float right_shin_dist = get_line_dist(bones.parser.getBones().get(3), bones.parser.getBones().get(2));
    float left_thigh_dist = get_line_dist(bones.parser.getBones().get(5), bones.parser.getBones().get(4));
    float right_thigh_dist = get_line_dist(bones.parser.getBones().get(3), bones.parser.getBones().get(4));
    float left_forearm_dist = get_line_dist(bones.parser.getBones().get(30), bones.parser.getBones().get(31));
    float right_forearm_dist = get_line_dist(bones.parser.getBones().get(11), bones.parser.getBones().get(12));
    float left_arm_dist = get_line_dist(bones.parser.getBones().get(29), bones.parser.getBones().get(30));
    float right_arm_dist = get_line_dist(bones.parser.getBones().get(10), bones.parser.getBones().get(11));
    float torso_dist = get_line_dist(bones.parser.getBones().get(0), bones.parser.getBones().get(8));

    boolean close_enough = ((head_dist<700) || (left_shin_dist<100) || (right_shin_dist<100) ||
                            (left_thigh_dist<150) || (right_thigh_dist<150) || (left_forearm_dist<100) ||
                            (right_forearm_dist<100) || (left_arm_dist<150) || (right_arm_dist<150) ||
                            (torso_dist<370));
  
    //LINE-BASED OR POINT-ONLY IMPLEMENTATION
    if (!close_enough) {
      side *= 0.91;
    } else {
      //side *= 200;
      side = maxSide;
    }
    /*if (side < 0.01) {
      side = 0.01;
    }*/
    if (side < 1) {
      side = 0.01;
    }
    if (side >= maxSide) {
      side = maxSide;
    }
  } 

  float get_dist(BvhBone b) {
    float x1 = b.absPos.x;
    float y1 = b.absPos.y;
    float z1 = b.absPos.z;
    float dist1 = abs(x1-xPos);
    float dist2 = abs(y1-yPos);
    float dist3 = abs(z1-zPos);
    return (dist1*dist1)+(dist2*dist2)+(dist3*dist3);
  }

  float get_line_dist(BvhBone b1, BvhBone b2) {
    float x1 = b1.absPos.x;
    float y1 = b1.absPos.y;
    float z1 = b1.absPos.z;
    float x2 = b2.absPos.x;
    float y2 = b2.absPos.y;
    float z2 = b2.absPos.z;
    float x3 = xPos;
    float y3 = yPos;
    float z3 = zPos;
    float dx = abs(x1-x2);
    float dy = abs(y1-y2);
    float dz = abs(z1-z2);
    float otherDist = sq(dx)+sq(dy)+sq(dz);
    if (otherDist == 0) otherDist = 0.001;
    float u = (((x3 - x1)*(x2 - x1)) + ((y3 - y1)*(y2 - y1)) + ((z3 - z1)*(z2 - z1)))/otherDist;
    if ((u >=0) && (u <= 1)) {
      float x = x1 + u*(x2 - x1);
      float y = y1 + u*(y2 - y1);
      float z = z1 + u*(z2 - z1);
      float dist4 = abs(x - xPos);
      float dist5 = abs(y - yPos);
      float dist6 = abs(z - zPos);
      return sq(dist4) + sq(dist5) + sq(dist6);
    }
    return 999999;
  }

  float getRed() {
    //FOR PINK 1: 
    return map(xPos, -400, 100, 100, 200);
    //FOR TEAL: return map(yPos, 350, 0, 2, 250);
    /* FOR RAINBOW:
    if ((millis()%30000) < 18000){
      return map((millis()%30000), 0, 18000, 255, 0);
    } else if ((millis()%30000) < 20000){
      return 0;
    } else {
      return map((millis()%30000), 20000, 30000, 0, 255);
    } */
    //return 255; // fallback for variants with no active return above (unreachable here, so commented out)
  }

  float getGreen() {
    //return map(xPos, -400, 100, 50, 150);
    //FOR PINK 1: 
    return 100;
    //FOR TEAL: return map(yPos, 350, 0, 132, 255);
    /* FOR RAINBOW:
    if ((millis()%30000) < 18000){
      return map((millis()%30000), 0, 18000, 0, 255);
    } else if ((millis()%30000) < 20000){
      return map((millis()%30000), 18000, 20000, 255, 0);
    } else {
      return 0;
    } */
    //return 255; // fallback for variants with no active return above (unreachable here, so commented out)
  }

  float getBlue() {
    //FOR PINK 1: 
    return map(yPos, -50, 600, 250, 50);
    //FOR TEAL: return map(yPos, 350, 0, 130, 255);
    /* FOR RAINBOW:
    if (millis()%30000 < 18000){
      return 0;
    } else if ((millis()%30000) < 20000){
      return map((millis()%30000), 18000, 20000, 0, 255);
    } else {
      return map((millis()%30000), 20000, 30000, 255, 0);
    } */
    //return 255; // fallback for variants with no active return above (unreachable here, so commented out)
  }
}

Jaqaur – LookingOutwards07

http://lifewinning.com/projects/center-for-missed-connections/

I chose to write about Ingrid Burrington primarily because I love her website, resume, and general sense of humor. That being said, I also really appreciate the cool projects she has done, many of which are not really data visualization, but just humorous or nice-looking things. One project that is a little more data-visualization-y is “The Center for Missed Connections.”

5

It’s an art project disguised as a think tank dedicated to the study of loneliness in cities. It comes in the form of a booklet full of maps, charts, and forms, and while these may be a bit fictionalized, they are still presented as one would present real data, and it almost takes more artistic and critical thinking to fictionalize and plot data than to just plot data you got from somewhere else. Perhaps a little off-the-mark for a data-vis Looking Outwards, but I think it is still a nice project, whose real value comes not from the “data” but from the idea behind studying loneliness in the first place.

9

Jaqaur – Visualization

All right, all right. This is not my best work. However, it was really interesting to get to work with D3 for the first time, and I’m glad I know a bit more about it now.

My plan was to map trips from every bike stop to every other bike stop in a chord diagram. I chose a chord diagram because it reminded me of a wheel, and I thought “Hey, maybe I can make it spin, or decorate it to look like a wheel!” That all went out the window very soon.

I used a chord diagram from the D3 blocks website to achieve this, and honestly changed very little about it except for the colors, the scale of the little marks around the circle, and of course the data. The main code that I wrote was the Processing file that turned the data in the .csv file we had into data I could work with. It created a two-dimensional array, then incremented elements (A, B) and (B, A) by one for every trip from station A to B, or station B to A. I chose to make the matrix symmetrical, and treat trips from A to B as equivalent to trips from B to A. Perhaps the other way would have been a bit more precise, but it also made the diagram even less readable. When the chords were thicker at one end than the other, I didn’t really know what that meant, so I wanted to just keep the matrix symmetrical.

The Processing file generated a .txt file containing the matrix that I needed. After I generated it, I pasted it into the D3 in my HTML file, and then I displayed it as a graphic. It all went according to plan (I guess), but I hadn’t really thought about just how unrealistic it was to make a chord diagram of over fifty bike stops. As you can see in the image below, it was pretty much totally unreadable and unhelpful.

all-data-graph

So, I looked at that first trial, picked out the ten busiest stops (I did this manually, not by writing code, just for time’s sake), and altered my code so that I could get a new matrix that only dealt with the ride data for the top ten busiest stops. You can see that iteration below. The eleventh, largest section of the circle in the upper left is the “other” section, representing rides from one of the ten busiest stations to or from one of the less busy stations. I chose to not display rides from “other” to “other,” because it wasn’t relevant to the ten busiest stops, and it dominated the circle when it was included.

green-top-10-graph

Here is another diagram representing the same data, just with a different color scheme. I don’t find this one as pretty, but it gives every stop its own unique color, which makes it slightly more readable. As you can see, every stop’s connectors are ordered from largest to smallest (going clockwise).

One thing I found quite interesting is that none of the ten busiest stops had its largest connector going to another one of the ten busiest stops. Most of them had the most rides to or from “other,” which is to be expected, I suppose, considering just how many other stations there are. Still, several of the stops have more rides to themselves than to any other stop, more even than to all of the “other” stops! And a lot of the stops whose largest connector was to “other” still had more rides to themselves than to any of the other ten busiest stops.
I was surprised to see how common that was. I guess it isn’t all that weird to check out a bike for a while, ride it around for fun or for errands, and then bring it back where you found it without putting it away in between. Still, if I were to visualize more data I would like to look exclusively at rides that start and end in the same place, and see if there is any pattern there regarding the type of user that does this, or the time of day it is done.

colored-top-10-graph

All in all, this is a very, very minimum viable product. I spent most of the week just struggling with D3, and while time constraints are no real excuse for poor work, I would like the record to reflect that I am very aware that this project is flawed. My biggest frustration is that the names of the stops in question are not displayed by the portion of the circle that represents them. I couldn’t figure out how to do that, but if I were to work more on this visualization, that would be my next priority.

Here’s a link to my code on github: https://github.com/JacquiwithaQ/60212/tree/master/Bike%20Data%20Visualization

Here is my Processing code for the version that only cares about the ten busiest stops:

int[][] matrix;
Table allRidesTable;

PrintWriter output; 

void makeMatrix(){
  matrix = new int[11][11];
  for (int i=0; i<11; i++){
    for (int j=0; j<11; j++){
      matrix[i][j] = 0;
    }
  }
  //Now our matrix is set up, but it's all zero. Now we need to fill it with values.
  allRidesTable = loadTable("HealthyRide Rentals 2016 Q3.csv", "header");
  //Trip iD,Starttime,Stoptime,Bikeid,Tipduration,From station id, From station name,To station id,To station name, Usertype
  int totalRides = allRidesTable.getRowCount();
  for (int row=0; row < totalRides; row++){
    TableRow thisRow = allRidesTable.getRow(row);
    int startStationID = thisRow.getInt("From station id");
    int endStationID = thisRow.getInt("To station id");
    println("Start ID = " + startStationID + ", End ID = " + endStationID);
    //We only want to map the 10 busiest stations, which are:
    //1000, 1001, 1010, 1012, 1013, 1016, 1017, 1045, 1048, 1049
    int startStationNumber= 10;
    int endStationNumber = 10;
    if (startStationID==1000) startStationNumber = 0;
    if (startStationID==1001) startStationNumber = 1;
    if (startStationID==1010) startStationNumber = 2;
    if (startStationID==1012) startStationNumber = 3;
    if (startStationID==1013) startStationNumber = 4;
    if (startStationID==1016) startStationNumber = 5;
    if (startStationID==1017) startStationNumber = 6;
    if (startStationID==1045) startStationNumber = 7;
    if (startStationID==1048) startStationNumber = 8;
    if (startStationID==1049) startStationNumber = 9;
    if (endStationID==1000) endStationNumber = 0;
    if (endStationID==1001) endStationNumber = 1;
    if (endStationID==1010) endStationNumber = 2;
    if (endStationID==1012) endStationNumber = 3;
    if (endStationID==1013) endStationNumber = 4;
    if (endStationID==1016) endStationNumber = 5;
    if (endStationID==1017) endStationNumber = 6;
    if (endStationID==1045) endStationNumber = 7;
    if (endStationID==1048) endStationNumber = 8;
    if (endStationID==1049) endStationNumber = 9;
    //println("Start Number = " + startStationNumber + ", End Number = " + endStationNumber);
    if (startStationNumber == endStationNumber){
      matrix[startStationNumber][endStationNumber] += 1;
    } else {
      //I will treat trips from station A->B and B->A as the same.
      //Direction does not matter for this data visualization.
      //So, the matrix will be symmetric.
      matrix[startStationNumber][endStationNumber] += 1;
      matrix[endStationNumber][startStationNumber] += 1;
    }
  }
  //Now the matrix is full of the number of rides from place to place.
}

void setup() {
  makeMatrix();
  output = createWriter("myMatrix.txt"); 
  int nRows = matrix.length; 
  int nCols = nRows;

  output.println("["); 
  for (int row = 0; row < nRows; row++) {
    String aRowString = "[";
    for (int col = 0; col< nCols; col++) {
      aRowString += matrix[row][col];
      if (col != (nCols -1)){
        aRowString += ", ";
      }
    }
    aRowString += "]";
    if (row != (nRows -1)) {
      aRowString += ", ";
    }
    output.println(aRowString); 
  }
  output.println("];"); 
  

  output.flush();  // Writes the remaining data to the file
  output.close();  // Finishes the file
  exit();  // Stops the program
}

And here is my Processing Code for the version that maps all stops:

int[][] matrix;
Table allRidesTable;

PrintWriter output; 

void makeMatrix(){
  matrix = new int[53][53];
  for (int i=0; i<53; i++){
    for (int j=0; j<53; j++){
      matrix[i][j] = 0;
    }
  }
  //Now our matrix is set up, but it's all zero. Now we need to fill it with values.
  allRidesTable = loadTable("HealthyRide Rentals 2016 Q3.csv", "header");
  //Trip iD,Starttime,Stoptime,Bikeid,Tipduration,From station id, From station name,To station id,To station name, Usertype
  int totalRides = allRidesTable.getRowCount();
  for (int row=0; row < totalRides; row++){
    TableRow thisRow = allRidesTable.getRow(row);
    int startStationID = thisRow.getInt("From station id");
    int endStationID = thisRow.getInt("To station id");
    println("Start ID = " + startStationID + ", End ID = " + endStationID);
    //Note that the station IDs range from 1000 to 1051, inclusive
    int startStationNumber = startStationID - 1000;
    int endStationNumber = endStationID - 1000;
    if (startStationNumber < 0 || startStationNumber > 51){
      //The Start Station number was invalid, and all invalid Stations will be called 52.
      startStationNumber = 52;
    }
    if (endStationNumber < 0 || endStationNumber > 51){
      //The End Station number was invalid, and all invalid Stations will be called 52.
      endStationNumber = 52;
    }
    println("Start Number = " + startStationNumber + ", End Number = " + endStationNumber);
    if (startStationNumber == endStationNumber){
      matrix[startStationNumber][endStationNumber] += 1;
    } else {
      //I will treat trips from station A->B and B->A as the same.
      //Direction does not matter for this data visualization.
      //So, the matrix will be symmetric.
      matrix[startStationNumber][endStationNumber] += 1;
      matrix[endStationNumber][startStationNumber] += 1;
    }
  }
  //Now the matrix is full of the number of rides from place to place.
}

void setup() {
  makeMatrix();
  output = createWriter("myMatrix.txt"); 
  int nRows = matrix.length; 
  int nCols = nRows;

  output.println("["); 
  for (int row = 0; row < nRows; row++) {
    String aRowString = "[";
    for (int col = 0; col< nCols; col++) {
      aRowString += matrix[row][col];
      if (col != (nCols -1)){
        aRowString += ", ";
      }
    }
    aRowString += "]";
    if (row != (nRows -1)) {
      aRowString += ", ";
    }
    output.println(aRowString); 
  }
  output.println("];"); 
  

  output.flush();  // Writes the remaining data to the file
  output.close();  // Finishes the file
  exit();  // Stops the program
}

And here is my HTML/D3:














Jaqaur – Looking Outwards 06

MOVIE SCRIPT CAPS

I chose to write about the bot “Movie Script Caps” by Thrice Dotted. This is a bot that tweets portions of movie scripts that are in all caps.

screen-shot-2016-10-29-at-9-34-09-pm

To me, this bot is more interesting than it is funny. I have researched proper screenplay formatting rules and conventions, and written a few screenplays myself, so I know generally what should and should not be written in all caps, but seeing just those portions without any context makes me want to know more. For example, shot headers and descriptions are always written in caps (like “JACQUI’S POV” or “MED. SHOT”), as are camera movements (“PAN TO:”), first-time character appearances (“We see GOLAN, a middle-aged college professor with a black T-shirt and a beard”), on-screen text (like “SUPERIMPOSE: WRITTEN BY JACQUI FASHIMPAUR” or “A sign says ‘CARNEGIE MELLON UNIVERSITY'”) and editing effects (“FADE THROUGH BLACK” or “EERIE MUSIC STARTS”). These are pretty standard, and one can find examples of them in the series of tweets above, but there are plenty of more unusual phrases, too. After all, whether or not to capitalize something is really at the screenwriter’s discretion, and sometimes he or she will just capitalize something that is particularly relevant to him or her. I find it fun to go through the tweets and try to figure out which phrase is which type of thing (shot header, editing effect, camera movement), and try to imagine a context in which many of them could work together.

screen-shot-2016-10-29-at-10-19-36-pm

It’s not the twitter bot that’s going to save the world or anything, but I was particularly entertained by “Movie Script Caps,” and I think it’s a good out-of-the-box example of what this medium can be used for.

Jaqaur – Book

    GENESONG

img_3175

For my generative book project, I decided to make a song book. I called it “Genesong” (like “Genesis” or “Generate” and “Song”), and it contains 30 songs, each with rhyming lyrics, musical notes, and guitar chords. I had a lot of ideas for this book that I ended up not being able to execute, but I am very proud of how much I was able to do. I’ll go through the steps that are part of generating each song (so this is done thirty times).

song_book_flip

    GENERATION PROCESS

1. A key is chosen. I decided to limit the book’s choices to major keys, particularly Ab major, Eb major, Bb major, D major, C major, F major, and G major. This was mostly because minor keys would require different chord progressions, and would mean a lot more work and conditional statements. Throughout the book, every pitch was represented by an integer, with middle C being 0 and each half step up being an increment of 1. The keys for each song were represented by the integer of their base note, so Ab was -5, Eb was 3, etc. For this project, I disallowed changing keys mid-song.

2. A chord progression is chosen. I limited the options to progressions with four steps, just so that the music would all fit a general form. I picked progressions that are pleasant/generally popular: I-ii-IV-V , I-vi-ii-V, I-IV-V-V, I-V-vi-IV, and I-vi-IV-V.

3. A rhyme scheme is chosen. I wanted to keep these very simple, so that the rhymes can be identified despite what will be fairly nonsense lyrics. So, the only options are “ABAB”, “AAAA”, and “AABB.” The rhyme schemes affect not only the lyrics, but the music, because in general, rhyming lines in songs tend to feature parallel rhythms. That is not a rule in music, but for this project, I decided to make it one.

4. A “rhythm line” is generated. When a rhythm line is created, it is told how many beats it needs to fill; this amount is four times the number of measures in the line (a random int between 3 and 5, inclusive). All songs are in 4/4 time, just to keep things simple. A rhythm line knows how many elements (notes and rests) there are, how long each one lasts, and whether each element is a note or a rest. This is all randomly generated, but with a few constraints: the total length must be the length that was passed in (12, 16, or 20); the only possible lengths for elements are 0.5, 1, 2, 3, and 4 (I didn’t want to deal with dotted quarter notes, ties, etc., but as you’ll see, I failed in that); the line cannot start with a rest; the line cannot end with a rest; and a rest cannot be longer than 1 beat (with such short lines and so little musical complexity, I didn’t want to make anyone sit through a long rest). A rough sketch of this step appears after this list.

5. More rhythm lines are generated such that there are four per verse and four per chorus, and lines that rhyme have identical rhythm lines.

6. Each note is given an integer pitch based on what chord it is from (which is calculated from the key combined with where we are in the chord progression). Each line starts with the base note of said chord, just to have a nice musical landmark every few measures. Rests are given a pitch value of 100 (which is invalid).

7. Lyrics are generated via RiTa’s random word generator. I had hoped to make these coherent Markov-based sentences, but making those fit syllabically with the music and also fit the rhyme scheme was too difficult to achieve (I tried a loop that kept re-generating lines until one worked, and repeated this process for each song, but it often ran for several minutes with no positive results. This happened no matter what texts I put into the Markov chain, and for time’s sake I had to simplify things). So, each note gets one one-syllable word, about 80% of which are from RiTa’s lexicon, and 20% of which are randomly pieced together from an array I made of word beginnings (e.g. “br”, “ch”, “st”) and word endings (e.g. “ess”, “ind”, “ay”). If a word is at the end of a line, it will always be one of the piece-meal words; then, if it needs to rhyme with something else, it is simply given the same word ending and a different beginning. When a line is complete, the first word is capitalized and a random punctuation mark is added. Random hyphens are also added (only after non-lexicon words) to imitate multi-syllable words. (Three sets of lyrics are generated for each line in the verses, one for the chorus.)

8. A title of length 1-4 words is chosen from the beginning of the chorus.
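To make step 4 a bit more concrete, here is a minimal sketch of a rhythm-line generator under those constraints (the structure and the rest probability are illustrative, not the exact logic of the real generator):

// Illustrative sketch of step 4: fill "beats" (12, 16, or 20) with note/rest
// elements whose lengths come from {0.5, 1, 2, 3, 4}, where the line cannot
// start or end with a rest and no rest is longer than 1 beat.
float[] allowedLengths = {0.5, 1, 2, 3, 4};

ArrayList<float[]> makeRhythmLine(float beats) {
  // each element is {length, isRest (0 or 1)}
  ArrayList<float[]> line = new ArrayList<float[]>();
  float filled = 0;
  while (filled < beats) {
    float remaining = beats - filled;
    float len = allowedLengths[int(random(allowedLengths.length))];
    if (len > remaining) continue;               // try again; the total must come out exact
    boolean firstOrLast = (filled == 0) || (filled + len == beats);
    boolean rest = !firstOrLast && len <= 1 && random(1) < 0.2;  // occasional short rest
    line.add(new float[] { len, rest ? 1 : 0 });
    filled += len;
  }
  return line;
}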

An early rough-draft of a verses page.

Once all 30 songs are generated, we have to display each one. Here are those steps.

    DISPLAY PROCESS

1. Print the title at the top of the page (if this is a verse page). Then, find your starting point (passed in as a parameter). Calculate how far apart every element in this line needs to be, based on how wide the page is and how many elements there are.

2. For each element, figure out what image you need (based on whether or not it’s a rest, how long the element is, and what pitch it has). If you have room in the measure (kept track of with a measure-beat counter; each measure needs exactly four beats), just put that image on the page, calculating its x position from which element in the line this is, and its y position from its pitch and the distance between the staff lines. If you do NOT have room, put down however much you have room for, draw the measure line, then put down the rest, and draw a tie between the two. (A condensed sketch of this splitting logic appears after this list.) This part of the code was much more difficult than I expected, because it forced me to use all the things I was trying to avoid before, like dotted quarter notes, ties, and expressing things like “3.5 beats” with as little superfluous notation as possible. When I used multiple note symbols for one element, I sometimes had to cram them together (see below) so as not to exceed the horizontal space that one element was given. I should have done more to avoid situations like this in my original generation of the rhythms; ultimately, I accounted for it by reducing the maximum number of measures per line from 7 to 5, just to ensure a reasonable amount of space. Also, the lyrics for each note were added to the document at the same time as their note and at the same x position.

screen-shot-2016-10-23-at-11-08-01-pm

screen-shot-2016-10-23-at-11-53-34-pm
Some squished notes in early iterations of “Genesong”

3. Do step 2 for every line. Then, add things like the staff lines themselves, the treble clef, the time signature, the repeat signs, the page number, and (this was a late addition) the chord names above the lines.

4. A fun addition: add a randomly generated adverb from RiTa’s lexicon at the beginning of the piece. I did this mostly to poke fun at sheet music that demands to be played “nonchalantly” or “youthfully” or other vague terms like that. Some of the generated adverbs made some sense in context, and others (see below) did not, like “architecturally,” or “whitely.” I prefer it this way.
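And here is a condensed sketch of the barline-splitting idea from display step 2 (the drawing helpers are placeholders standing in for the image-placement code described above):

// Condensed sketch of the measure-splitting idea from display step 2:
// if an element does not fit in the current measure, draw what fits,
// cross the barline, draw the remainder, and tie the two pieces together.
float beatsPerMeasure = 4;

void placeElement(float beatInMeasure, float len) {
  float room = beatsPerMeasure - beatInMeasure;
  if (len <= room) {
    drawNote(len);                    // fits: one symbol (possibly dotted)
  } else {
    drawNote(room);                   // first piece fills out the current measure
    drawBarline();
    drawNote(len - room);             // remainder starts the next measure
    drawTie();                        // tie marks the two pieces as one sustained note
  }
}

// Placeholders standing in for the image-placement code described above.
void drawNote(float len) { /* look up the right note image for "len" and place it */ }
void drawBarline()       { /* draw the measure line */ }
void drawTie()           { /* draw a tie between the two note images */ }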

screen-shot-2016-10-25-at-1-49-29-am
screen-shot-2016-10-26-at-11-04-36-am
screen-shot-2016-10-25-at-1-51-04-am
screen-shot-2016-10-25-at-1-49-19-am
screen-shot-2016-10-25-at-1-49-09-am
screen-shot-2016-10-24-at-8-43-48-pm
screen-shot-2016-10-26-at-11-11-33-am
screen-shot-2016-10-25-at-1-51-34-am

Once all of the songs are displayed (actually, this is done first, but I didn’t want to wreck the narrative I had going), a table of contents is generated using the title of every song and its respective page number. This is all put together and exported as one .pdf file. The cover was made separately.

SOME MORE THINGS TO SAY

I chose not to give too much explanation in the book of what it was. On the cover, it mentions that the songs are computationally generated, but otherwise I wanted Genesong to present itself as a real song book. I tried to use fonts that conveyed a slightly old-fashioned piano book vibe, and (in theory) formatted it as if it could be “real” sheet music.

img_3177

There were a lot of ideas I had for this project that I was not able to complete in the two-week time frame, including some I mentioned above (like lyrics that have actual meaning) and some others (like using Markov chains to place notes that “go” together next to each other more often). I would like to return to this project in the future, and hopefully improve upon it. One thing I would really like to add is a bass clef with chords and notes to be played alongside the melody. Dynamics, fermatas, and other such musical things would be great, too. Below are some pages of my notes/plans before I began coding.

img_3188

img_3187

My original notes/plans before any coding was done.

Even as it is, my project is imperfect. Sometimes, an unusually long syllable and an unusually packed line came together to produce lyrics printed on top of each other (see below). Also, the resolution did not turn out as well as I had hoped, and you can see that the notes were often a bit pixelated on the paper copy (see below). Despite these little things, I am very, very proud of Genesong, and I hope to have it played and sung aloud in the near future so people who don’t read music can appreciate it!

img_3181 img_3183

Here’s a video of Professor Levin flipping through my book:

Here is a link to my code on Github: https://github.com/JacquiwithaQ/60212/tree/master/Song_Book

Here is the PDF of the iteration I chose to print: song_book-38-A-good.pdf

Here are some more/alternative iterations of the book:
song_book-22
song_book-32
song_book-36

(Here is where I would embed my code, if it weren’t 1000+ lines across 6 Processing files. Golan, let me know if you really want me to embed it anyway, and I will.)

And, finally, here is a picture of Processing being sassy with me:
img_2653

Jaqaur – LookingOutwards05

https://www.metavision.com/

I am writing about the presentation at Weird Reality that had the greatest impact on me: Meta. Meta is a company that works in augmented reality, and they strive to create an interface that is natural and gestureless. For example, you could select something by picking it up rather than by pointing your face at it and awkwardly pinching the air (like you have to do with some other AR systems). What really blew my mind was how fast VR/AR technology is advancing in ways I didn’t know about. For example, Meta currently has a working (if somewhat rough and not totally safe) way to let users feel virtual items just by touching them with their naked hand. And they (“they” not necessarily meaning Meta, but rather the ambiguous VR/AR scientists who develop this stuff) have wearable technology that can read a person’s brain waves and determine from that alone what the person is looking at. Similarly, they (the same ambiguous “they”) can transmit an image (or at least a few colors) directly to a person’s brain, and it will be as if that person is seeing it with his or her eyes. Like, what?! That’s crazy! And once it develops further, it could have huge applications in medicine and psychology, not just entertainment.

The presenter said that by 2030, he thinks we will have transcended AR goggles to the point that most people just wear devices that put the AR into their brains. That would be a huge advancement over the AR goggles available now, which are clunky and a bit awkward. All in all, Weird Reality was a great experience, but Meta’s presentation in particular really reminded me just how FREAKING AWESOME this technology could be.

Check out a video of theirs (and this is from three years ago):

Jaqaur – FaceOSC

For my Face OSC Project, I made a 2-D baby that reacts to the user’s face rather than directly being puppeteered by it. When it doesn’t see a face, it will become scared. When it sees a face, it is neutral/happy, and when the user makes a “funny face” (a.k.a. raised eyebrows and/or an open mouth), it will smile and laugh. Its eyes also follow the user from side to side. It’s pretty simplistic, and if I had more time with it, I would have liked to add more ways to interact with the baby (you can see some of these commented out in my code below). Still, I think it’s cute and it’s fun to play with for a few minutes.

From the start, I knew I didn’t want to make a direct puppet, but rather something that the user could interact with in some way. I also wanted to make the design in 3-D, but I had a lot of trouble figuring out the 3-D version of Processing, and I didn’t think I had enough time to devote to this project to both learn 3-D Processing and make something half decent. So, I settled for 2D. My first idea was that the user could be a sort of king or queen looking out over an army of tiny people. Different commands could be given to the army via facial expressions, and that could cause them to do different things. While I still like this idea in theory, I am not very good at animation, and didn’t know how to get 100 tiny people to move and interact naturally. My next idea, and the one that I actually made, was “Bad Blocks,” a program in which the user acts as the baby-sitter for some randomly-generated block people. When the user isn’t looking (a.k.a. when no face is found), the blocks run around, but when the user looks, they freeze and their facial expressions change. The user can also open his/her mouth to send the blocks back into their proper place.

screen-shot-2016-10-13-at-10-44-54-pm

This program worked okay, but the interactions didn’t feel very natural, and the blocks were pretty simplistic. Also, FaceOSC sometimes blinks out when the user’s face is moving quickly, contorted strangely, or poorly lit. My block program did not respond well to this: the blocks abruptly changed facial expression and started running around the second a face went away. It looked jumpy and awkward, so I decided to start over with similar interactions, but one single character that would be more detailed and have smoother facial transitions.

image-1

That’s when I came up with the giant baby head. It seemed fairly easy to make out of geometric shapes (and it was), and it could use similar interactions to the blocks, since both have baby-sitting premises. It was important to me that the baby didn’t just jump between its three facial expressions, because that doesn’t look natural. So, I made the baby’s features be based on a float variable called “happiness” that is changed by various Face OSC input. I made sure that all of the transitions were smooth, and I am pretty proud of how that aspect of this turned out. All in all, I am content with this project. It fulfills my initial expectations for it, but I know it’s not as unique or exciting as it could be.

Here is a link to the code on Github. The code is also below:

//
// FaceOSC Baby written by Jacqueline Fashimpaur
// October 2016
//
// Based on a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
import oscP5.*;
OscP5 oscP5;

int[] colors;// = new int[6]; 
/*
colors[0] = 0xfeebe2;
 colors[1] = 0xfcc5c0;
 colors[2] = 0xfa9fb5;
 colors[3] = 0xf768a1;
 colors[4] = 0xc51b8a;
 colors[5] = 0x7a0177;
 */

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;
int skin_color_index;
int eye_color_index;
int gender_index;
float happiness;
float eye_displacement = 0;
boolean eye_right;
float baby_center_x;
float baby_center_y;

void setup() {
  size(640, 640);
  frameRate(30);
  //sets all colors
  /*colors = new int[6]; 
  colors[0] = #feebe2;
  colors[1] = #fcc5c0;
  colors[2] = #fa9fb5;
  colors[3] = #f768a1;
  colors[4] = #c51b8a;
  colors[5] = #7a0177;*/
  skin_color_index = int(random(0,4));
  eye_color_index = int(random(0,4));
  gender_index = int(random(0,2));
  happiness = 0;
  eye_displacement = 0;
  eye_right = true;
  baby_center_x = 320;
  baby_center_y = 320;
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}

void display() {
  eye_displacement = (((70+baby_center_x)-posePosition.x)/25)+2;
  /*if (watched()){
    eye_displacement = ((250-posePosition.x)/25)+2;
  } else {
    if (eye_right){
      eye_displacement += 1;
    }
    else {
      eye_displacement -= 1;
    }
    if (eye_displacement<-15) {eye_displacement = -15; eye_right = true;}
    if (eye_displacement>15) {eye_displacement = 15; eye_right = false;}
  }*/
  int skin_r = 141;
  int skin_g = 85;
  int skin_b = 36;
  int eye_r = 00;
  int eye_g = 128;
  int eye_b = 192;
  int clothing_r = 255;
  int clothing_g = 187;
  int clothing_b = 218;
  if (skin_color_index == 0) {
    skin_r = 255;
    skin_g = 224;
    skin_b = 186;
  } else if (skin_color_index == 1) {
    skin_r = 241;
    skin_g = 194;
    skin_b = 125;
  } else if (skin_color_index == 2) {
    skin_r = 198;
    skin_g = 134;
    skin_b = 66;
  }
  if (eye_color_index == 0) {
    eye_r = 0;
    eye_g = 192;
    eye_b = 255;
  } else if (eye_color_index == 1) {
    eye_r = 0;
    eye_g = 192;
    eye_b = 0;
  } else if (eye_color_index == 2) {
    eye_r = 83;
    eye_g = 61;
    eye_b = 53;
  }
  if (gender_index == 1){
    clothing_r = 168;
    clothing_g = 204;
    clothing_b = 232;
  }
  //draw the body
  fill(clothing_r, clothing_g, clothing_b);
  noStroke();
  ellipse(baby_center_x, (210+baby_center_y), 500, 200);
  rect(baby_center_x-(500/2), (210+baby_center_y), 500, 300);
  //draw the face
  fill(skin_r, skin_g, skin_b);
  ellipse(baby_center_x,baby_center_y-40, 350, 350);
  ellipse(baby_center_x,baby_center_y+60, 300, 220);
  beginShape();
  vertex(baby_center_x-(350/2), baby_center_y-40);
  vertex(baby_center_x-(300/2), baby_center_y+60);
  vertex(baby_center_x+(300/2), baby_center_y+60);
  vertex(baby_center_x+(350/2), baby_center_y-40);
  endShape(CLOSE);
  //draw the eyes
  fill(#eeeeee);
  ellipse(baby_center_x - 60, baby_center_y - 40, 80, 80);
  ellipse(baby_center_x + 60, baby_center_y - 40, 80, 80);
  fill(eye_r, eye_g, eye_b);
  ellipse(baby_center_x-65+eye_displacement, baby_center_y -40, 50, 50);
  ellipse(baby_center_x+55+eye_displacement, baby_center_y -40, 50, 50);
  fill(0);
  ellipse(baby_center_x-65+eye_displacement, baby_center_y -40, 25, 25);
  ellipse(baby_center_x+55+eye_displacement, baby_center_y -40, 25, 25);
  //draw the nose
  noFill();
  strokeCap(ROUND);
  stroke(skin_r - 20, skin_g - 20, skin_b - 20);
  strokeWeight(3);
  arc(baby_center_x, baby_center_y + 20, 50, 30, 0, PI, OPEN);
  //draw the mouth
  strokeWeight(10);
  if (skin_color_index == 0) stroke(#ffcccc);
  if (happiness<0){
    //unhappy
    fill(#cc6666);
    ellipse(baby_center_x, baby_center_y+80, 60-(happiness/8), 0-happiness);
  } else if (happiness<=40){
    //happy
    noFill();
    arc(baby_center_x, baby_center_y+80-(happiness/5), 60+(happiness/4), happiness/2, 0, PI, OPEN);
  } else {
    strokeWeight(8);
    fill(#cc6666);
    arc(baby_center_x, baby_center_y+81-(happiness/5), 60+(happiness/4), happiness-20, 0, PI, OPEN);
    fill(skin_r, skin_g, skin_b);
    arc(baby_center_x, baby_center_y+79-(happiness/5), 60+(happiness/4), 20+((happiness-40)/10), 0, PI, OPEN);
  }
  //draw the cheeks (range 340-380)
  noStroke();
  fill(skin_r, skin_g, skin_b);
  if (happiness>30){
    ellipse(baby_center_x-90, baby_center_y+60-(happiness/2), 100, 70);
    ellipse(baby_center_x+90, baby_center_y+60-(happiness/2), 100, 70);
  }
  //draw the eyelids (range 200-240)
  if (happiness<0){
    ellipse(baby_center_x-90, baby_center_y-120-(happiness/3), 100, 90);
    ellipse(baby_center_x+90, baby_center_y-120-(happiness/3), 100, 90);
  }
  //draw a hair
  stroke(0);
  noFill();
  strokeWeight(2);
  curve(400,10,baby_center_x,baby_center_y-200,baby_center_x-20,baby_center_y-270,0,0);
  //draw a bow? If time...
  /* fill(clothing_r, clothing_g, clothing_b);
  noStroke();
  ellipse(320,120,60,60); */
}

void draw() {  
  background(#ccffff);
  happiness += 0.5;
  if (!watched()){
    happiness-= 2;
  } else if (funnyFace()){
    happiness++;
  } else if (happiness > 40){
    happiness-=2;
    if (happiness<40) happiness = 40;
  } else {
    happiness++;
    if (happiness>40) happiness = 40;
  }
  if (happiness>90) happiness = 90;
  if (happiness<-70) happiness = -70;
  stroke(0);
  baby_center_x += 1-(2*noise(millis()/1000));
  baby_center_y += 1-(2*noise((millis()+500)/800));
  if (baby_center_x < 260) baby_center_x = 260;
  if (baby_center_x > 380) baby_center_x = 380;
  if (baby_center_y < 300) baby_center_y = 300;
  if (baby_center_y > 340) baby_center_y = 340;
  display();
  println(eyebrowLeft);
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
  //println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  //println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
 // println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  //println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  //println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  //println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  //println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  //println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  //println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  //println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  //println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  //println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (!m.isPlugged()) {
    println("UNPLUGGED: " + m);
  }
}

boolean watched() {
  if (found == 0) {
    return false;
  }
  // Note: the eye-height check below is stubbed out with constant values, so in
  // practice watched() returns true whenever FaceOSC has found a face.
  float left_eye_height = 10;
  float right_eye_height = 10;
  if (left_eye_height < 10 && right_eye_height < 10) {
    return false;
  }
  return true;
}

boolean funnyFace(){
  // raised eyebrows or a wide-open mouth both count as a funny face
  if (eyebrowLeft > 8 || eyebrowRight > 8) return true;
  if (mouthHeight > 3) return true;
  return false;
}

boolean mouthOpen(){
  return mouthHeight > 2;
}

Here is a gif of my demo:
faceosc_baby_demo

And here is a weird thing FaceOSC did while I was testing!
screen-shot-2016-10-13-at-6-07-21-pm

Jaqaur – LookingOutwards04

https://www.tiltbrush.com/

Okay, I know this isn't technically an art project, but it's still a form of interactivity that I find interesting, so it counts, right? Tilt Brush combines my unboundedly increasing obsession with Google and my long-term love for virtual reality. Basically, it's an environment in which you can "paint" in three-dimensional space using their special Tilt Brush tool. You can choose color, stroke width, and all that good stuff, and then… just draw. In air. Technically, you need a VR headset to be able to see what you're drawing, but that's a small price to pay for the ability to instantly create 3D objects around you (either for fun or to plan out a future project). There have been 3D modeling tools out there for a while now, but Tilt Brush is different. It's not as good if you want to run physics simulations on your creations, but it's so much better for abstract brainstorming of ideas. You can create all sorts of shapes fairly quickly, and then actually walk around them and see how they would look in the real world. This is an idea I've dreamed about since I was a kid, and something I think could be really useful to all kinds of artists in the future.

This video illustrates how Tilt Brush works, and while it looks a little simplistic now, the possibilities are endless. I bet that, in the not-so-distant future, it could be possible for users to smooth out the surfaces of the shapes they draw, because right now, that's the main thing that bothers me about the drawings shown in the video below: they're rough, and look a bit like they were put together with colorful strips of papier-mâché. Anyway, this whole project really excites me, and I'm looking forward to seeing where it goes from here.

Jaqaur – Plot

I loved the idea of plotter-made artwork when I first heard about it, but as it turned out, I didn’t have any great plotter ideas. Unlike screens, plotters can’t create moving images (which I usually like to make). They also don’t have nearly the range of colors, gradients, and opacities that screens do. Basically everything had to be line-based. I spent the first half of this week trying to make a maze generator, and while I still like that idea in theory, I struggled to come up with a creative way to design a maze that wouldn’t be too hard to generate with a program. Then, one night I was at Resnik and thought “What about binary trees?” I remembered from 15-051 that real numbers can be represented as line-based “trees”, and quickly drew out on a receipt (see below) all eight possible trees for the number 3. I thought that with a little randomization, those trees could look really nice, so I changed my plan.

img_3075

First, I wrote up code that would just generate eight random trees (checking each time to make sure it didn't repeat an arrangement it had already drawn) and turn them into a pdf. To be safe, I printed this on the Axidraw with a pen:

img_3070
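
In case it helps, the skeleton of that first pdf version looked roughly like this (a simplified sketch rather than my actual code; the file name is made up, and drawTree() just stands in for the part that draws one of the eight arrangements):

import processing.pdf.*;

IntList used = new IntList();               // arrangement ids we've already drawn

void setup() {
  size(800, 200, PDF, "three_trees.pdf");   // render straight into a PDF file (placeholder name)
}

void draw() {
  for (int i = 0; i < 8; i++) {
    int id = int(random(8));                // pick one of the eight arrangements...
    while (used.hasValue(id)) {             // ...re-rolling until it's one we haven't used yet
      id = int(random(8));
    }
    used.append(id);
    drawTree(id, 50 + i * 90, 150);
  }
  exit();                                   // closes the sketch and finishes writing the PDF
}

void drawTree(int id, float x, float y) {
  // placeholder: the real project draws the tree for arrangement `id` here
  line(x, y, x, y - 60);
}

Using the PDF renderer as the main surface means everything drawn before exit() ends up in the file.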

However, I still wanted to do more. So, I restructured my code entirely such that instead of generating a pdf, it would control the Axidraw in real time. I did this so I could have the Axidraw paint the trees. I added some code so that it would go back for more ink before every tree, and after a few messy trial runs, it worked as well as I imagined it would! The video at the beginning of this post shows the Axidraw painting my design, which I named “Three Trees.” Some pictures of my final paintings are below, followed by pictures of the digital images the computer generated while making them.

img_3115
screen-shot-2016-09-29-at-11-33-03-pm
img_3114
img_3109

They actually turned out more artistic-looking than I originally anticipated. Several people have told me they look like characters in a foreign language, but I prefer to think they look like little dancing people (with stretched out bodies, I guess). Sometimes the ink splashed or dripped a bit, and I think that makes it look a little more hand-painted. I wish I could have made a maze generator work, but I really do like my trees!

Here is a link to the pdf generating code on Github.
Here is a link to the paintbrush version.

Jaqaur – Clock-Feedback

In general, people seemed to like my clock (or they're all just very nice), which is good because it's my favorite of all the projects we've done so far. I'm really proud of it, and I haven't yet gotten tired of watching it. Still, there were some flaws, as people noticed. The main complaint was about my physics: some didn't like how the balls overlapped at the bottom, and some didn't like the jostle-y-ness. I was well aware of this before I turned the clock in, and most of my time working on the project was spent trying to fix it. There was basically an inverse relationship between these two issues. I could raise the sensitivity, which would reduce overlap but lead to more vibration and shaking throughout the whole sea of marbles. I could also reduce sensitivity, which would make the marbles bounce around less but also make them more likely to overlap at the bottom, where there was a lot of pressure on them. I ended up erring on the side of high sensitivity, so as to reduce overlap as much as possible, and then put a velocity cap on the marbles so they couldn't shake past the point of legibility. I think I did the best I could without implementing a seriously more complicated physics system. So, in short, I agree with the complaints against my physics, but I couldn't do much to totally get rid of those issues.

Feedback on my color palette was a bit more across the board. Some people said they really liked the monochrome, and in general I do too. I especially like how I prevented the generated color from being too gray/brown. However, Tega suggested I do more with interesting color palettes, and that's not a bad idea. I knew I didn't want totally random colors, but I did want a lot of potential color options. Implementing particular color palettes would take a lot of work if I really wanted there to be as many options as there are now, but maybe it would be worth it? It might make the whole thing more dynamic if minutes, hours, and seconds were more differentiated. Some people suggested an accent color, too, but I actually thought about that and decided I preferred the aesthetic when the only thing distinguishing the active marbles from the inactive marbles was size.

All in all, I wasn’t surprised by most of my feedback, and I’m still pretty happy with my clock!

Jaqaur – Looking Outwards 03

incendianext_2
I think Incendia is really interesting because, rather than being an individual art project, it's a piece of software that allows users to make designs with fractals. The examples on their website vary wildly, from traditional swirls to pictures made up of stone columns and suns (shown below) or hot air balloons. In this project, the computer seems to have little autonomy, because users get to control most of the factors, like color, type of fractal, and anything that would really affect the image. Still, there is a high amount of complexity that just comes with all fractals–they continue infinitely (in theory), so there are a lot of intricate details. Even though everything is extremely orderly, the pictures created by Incendia appear very complex to viewers, and are sometimes very beautiful.
suns

Jaqaur – Reading03

1a. I think that mathematics, particularly mathematical proofs, exhibits effective complexity. Technically, there isn't much disorder (or any at all), but proofs show a flawless transition from complexity to simplicity, or vice versa. They can be incredibly complicated, but still beautiful. Below is a picture of a proof of Euler's identity, which is often called "the most beautiful equation in mathematics." I guess on a scale from total order to total disorder, this would definitely be on the total order side, but there is still a great deal of variation from one proof to another, such that some don't even look like math anymore.
euler-derivations
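
For reference, the standard derivation (which I believe is what the picture shows) expands e^{i\theta} as a power series and regroups it:

e^{i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!}
            = \sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k}}{(2k)!} \;+\; i \sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k+1}}{(2k+1)!}
            = \cos\theta + i\sin\theta

Plugging in \theta = \pi gives e^{i\pi} = -1, i.e. e^{i\pi} + 1 = 0.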

1b. As much as the Problem of Meaning intrigues me, enough of my classmates have already answered it the same way I would have, so I'll discuss another problem instead: the Problem of Uniqueness, another question that has bothered me in the past. Before I was even exposed to computer-based art as I know it, I was shown prints: pictures carved into a piece of wax, metal, or wood, covered in ink, and then pressed onto paper after paper. This made me a little uncomfortable. No matter how good the art was, the fact that it was mass-produced made it feel less real to me, and certainly less valuable than something that was painted by hand. I remember just a few years ago I saw a rack of 20 copies of the same painting in a "World Market," and I said "Wow. Before I saw how many there were, I thought it was a real painting."
So, even before I was presented with this question in the reading, I guess I had an answer: to me, mass-produced artwork is less valuable than one-of-a-kind artwork (even if there are slight variations in some generative art), if for no other reason than supply-and-demand economics. That being said, I don't want to dismiss the actual artistic thinking that goes into writing the code or carving the initial print block that leads to the mass-produced art–that act can be incredibly creative and skillful. Rather, I think that the actual "art" in this scenario is the code or the block rather than the pieces that get generated. When a musical is performed every night, the true art is in the writing, direction, choreography, and design rather than the individual performance. Similarly, each piece of generated "art" is but a child of the original artwork, like a poster of a Picasso painting. While they can be beautiful, I think their intrinsic "value" is reduced by their quantity.

Jaqaur – AnimatedLoop

Here is the code running in p5.js (the actual gif is below):
jaqaur_animated_loop

I didn't have any really great ideas for this project. I thought I could do something about my love of math, like something with a Klein bottle, or something infinite that shows the set representation of all natural numbers. But I didn't know how to implement either of those in a realistic, interesting way. When I started sketching things, one of the first simple geometric ideas I had was the one I ended up going with: "Ladder Squares." It's the fourth sketch on the sheet below. I thought little lines could climb down horizontal "rungs," forming squares as they go. It's one of my simpler ideas, but I really like how it turned out. It's easier to look at without getting a headache than some of my other ideas would have been, I'm sure. I decided to add a slight pause when the lines are forming squares and when they are forming horizontal lines, as those are the most interesting positions to me. I added the blue-green gradient at the last minute to make it a little more interesting.

jaqaur loop sketches

Here is the gif itself. Sometimes it runs really slowly; I don’t know why:
Jaqaur - Loop

It also looks interesting when you modify the code a little bit so some lines climb backwards, although I didn’t like this version quite as much. I wanted the squares.
jaqaur_animated_loop-backwards

Here’s a link to the code on Github.

Jaqaur – Interruptions

Click to generate a new set of interruptions.

jaqaur_interruptions

Here is a link to my code on Github.

The easy part of this assignment was drawing the lines. I counted that Molnar’s “Interruptions” were about fifty lines by fifty lines, so I did the same. Using nested for loops, I drew lines at randomly generated angles, with their centers all the same distance apart.

The hard part was making the interruptions (the "holes" in the picture) look like Molnar's. At first, I had every line decide, based on a random number, whether to exist or not (with odds of about 9/10 of existence). That distribution was too even, though, and didn't make the big holes I wanted. That led me to create my more complicated "interrupt" method, which creates a random number of "interruptions," each of a random size. An "interruption," in this method, is a clump of lines that collectively decide not to exist. An interruption starts at a random spot (indexed from 1-2500), turns that line's angle to 100 (rather than something between -PI and PI), and then moves left (-49), right (+49), up (-1), or down (+1), and continues this until it runs out of size. If a line has an angle greater than 10 (e.g. 100), it will not be drawn later.
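
Here is a stripped-down sketch of that whole process (not my actual code; the 50x50 grid matches, but the spacing, clump counts, and clump sizes are placeholder values I picked for illustration):

int GRID = 50;                              // 50 x 50 grid of short lines
float[] angles = new float[GRID * GRID];
float SPACING = 12;                         // distance between line centers (placeholder)
float LEN = 14;                             // line length (placeholder)

void setup() {
  size(640, 640);
  for (int i = 0; i < angles.length; i++) {
    angles[i] = random(-PI, PI);            // every line gets a random angle
  }
  int clumps = int(random(30, 60));         // many smallish clumps (placeholder range)
  for (int c = 0; c < clumps; c++) {
    interrupt(int(random(angles.length)), int(random(5, 40)));
  }
  noLoop();
}

// Mark a clump of lines as "interrupted" by giving them an out-of-range angle,
// wandering from a random starting index using the offsets described above.
void interrupt(int index, int clumpSize) {
  int[] steps = { -49, +49, -1, +1 };
  for (int s = 0; s < clumpSize; s++) {
    angles[index] = 100;                    // sentinel: angle > 10 means "don't draw"
    index = constrain(index + steps[int(random(4))], 0, angles.length - 1);
  }
}

void draw() {
  background(255);
  stroke(0);
  for (int row = 0; row < GRID; row++) {
    for (int col = 0; col < GRID; col++) {
      float a = angles[row * GRID + col];
      if (a > 10) continue;                 // interrupted line: skip it
      float cx = 40 + col * SPACING;
      float cy = 40 + row * SPACING;
      line(cx - cos(a) * LEN/2, cy - sin(a) * LEN/2,
           cx + cos(a) * LEN/2, cy + sin(a) * LEN/2);
    }
  }
}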

This creates irregularly shaped clumps. When I first tried this, I had a few big clumps that looked like Molnar’s, but I didn’t have enough little ones throughout the image. So, I upped the number of clumps, and reduced the clump size. Most of the time, this leads to a few big “clumps” (actually made out of smaller clumps) and then some smaller ones. I think there’s a strong resemblance between mine and the original, although it’s a little hard to judge from only five examples.

Jaqaur – LookingOutwards2

I'll be honest; I don't really keep up with the computational art scene. When I first heard this assignment, no particular project came to mind. Still, I love computational art when I see it; when I was a kid and my family visited a museum, I would always spend an unreasonably long time playing with the interactive wall projections, catching colorful raindrops in my hand or stretching out my arms to see how many digital birds I could get to land on them. While I love this sort of thing, and was really excited by the idea of this class, I can't point to any specific project and say "That's what inspired me."

So, what am I going to write about? Only the latest, greatest, interactive augmented reality project that basically took over the world in less than a week. That’s right: Pokemon Go. Yeah, yeah, I know it’s not as purely artistic as many other computational art projects, but it’s an excellent example of emerging technologies coming together to form something that’s interactive, entertaining, and all-around pretty impressive.

For those of you who don’t know what Pokemon Go is, it’s a new(-ish) mobile game that allows users to collect and battle virtual animals called “Pokemon” in the real world. Players need to physically walk around to earn points, find Pokemon, and hatch eggs. It may seem pretty simplistic, but there’s a lot going on. The app uses GPS technology to find out where you are and how far you’ve walked (I tend to agree with the people who say that the phone’s pedometer would have been a better way to measure the latter). It uses data or WiFi to access game information, like what Pokemon and Pokestops are in your area. Finally, it uses your camera to display an augmented version of reality: one that includes little animals all over the place.

pokemon-go-in-action

None of these technologies are particularly new, but never before has there been a game that used all of them to this degree, on this scale. Through this app, players have access to an entire virtual world that Niantic (the company behind Pokemon Go) has created. Real-world locations are used as Poke-Gyms and Poke-stops, and you can watch the digital avatar you design for yourself walking through your town. It's a massive project that has gotten nerds everywhere out walking, exercising, and socializing, and its success could mean more augmented reality games like this in the future.

pokemon-go-action-pcadvisor-co-uk

It may not be a traditional art project, but I find Pokemon Go pretty inspiring. Sure, it could use some improvements (*cough* tracking *cough*), but what’s more important than the gameplay itself is the fact that augmented reality is making its way into our everyday life. Niantic has even said that they are working on making it work on smart glasses! Pokemon Go is the first step in what is hopefully a massive entertainment revolution. If people had the opportunity to view the world around them through well-implemented augmented reality that wasn’t hugely inconvenient, it would make gaming much more immersive, exciting, and (for what it’s worth) healthy. That’s the sort of thing I want to work on in the future, and the sort of world I want to see.

Jaqaur – Clock

Click to re-set with a new color!
jaqaur_clock

I had a lot of ideas when I started working on this project, and as usual, I went with none of them. I considered an analog-style clock with a single hand that pointed to the minutes and whose length corresponded to the hours. I considered a clock that would make the user do work, either by solving a riddle or a math problem, in order to figure out the time, but I wanted to do something a little more interactive and fun to look at. I considered some complicated Rube-Goldberg-esque setups, but those would have been too complicated to make, I think. Some pictures of my initial sketches are below. My final idea is closest to the second sketch, which was a screen full of all the numbers in a random order, and hands pointing from the mouse to the correct one, but it's still not much like that.

Anyway, the idea that I actually went with is definitely my favorite. It’s a pile of marbles, each with a number on it. There is a marble for every second, every minute, and every hour, and they are all the same size except for when they are active. If the time is the number displayed on a marble, then that marble will grow to the appropriate size (hours get really big, minutes get kind of big, and seconds don’t grow much). It will shrink back again when it’s no longer needed.

The hardest part was getting the “physics” to work, and it’s still not perfect (most of the code that works was borrowed from other people; see my code’s comments). There was a sort of inverse relationship between the amount of overlap the circles could have (which is not good, and would lead to a bunch getting squished in the corners) and the amount of vibration throughout the pile (which is also not good because it can make marbles hard to read). Ultimately, I leaned towards the “More Vibration” side, and just put a cap on the marbles’ speed so they can’t shake too much.
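
The cap itself is the simple part. It amounts to something like this (just a sketch, assuming each marble keeps its velocity in a PVector, which isn't necessarily how the borrowed physics code stores it):

float MAX_SPEED = 3;   // placeholder value; the real cap was tuned by eye

void capAndMove(PVector position, PVector velocity) {
  velocity.limit(MAX_SPEED);   // clamp the magnitude so a marble can't jitter far in a single frame
  position.add(velocity);
}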

I also had to add a function that tells the numbers themselves how big they need to be. I decided to make this a function of their marble's width, and while it works pretty well, it makes the "display" function that much more complicated. Another problem I faced was turning it into 12-hour time. I knew the "hours" function returns 24-hour time, so I added an "if" statement that would subtract 12 if it returned something greater than 12. However, I forgot that, in 24-hour time, midnight is 0:00! I only recently accounted for this, but now it works fine.
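
The fix ended up being tiny. Roughly (a sketch using Processing-style hour(), which returns 0-23):

int displayHour() {
  int h = hour();          // 0-23
  if (h > 12) h -= 12;     // afternoon and evening hours
  if (h == 0) h = 12;      // the case I originally missed: midnight comes back as 0
  return h;
}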

I really like how it looks now. The big marbles tend to slowly rise to the top, like they would in real life, but there is always movement. Personally, I can watch it for a really long time! I wouldn't really change anything about it, except maybe patching up the physics. I think it's really pretty and very readable. To make it even more interactive, I added a feature where, when the user clicks on the canvas, all of the balls get re-dropped in a new color. The color is always randomly generated, but I added a formula that prevents the color from being too dark or too ugly (in my opinion). Have fun playing with it!

My initial sketches:
img_2635 img_2636

I would like to point out that this clock does not always work as intended when run alongside everyone else's clock; its functions are slowed down such that the seconds can't reach their full size before it is time for them to shrink again. This makes for less marble movement than intended. I highly recommend viewing this post in its own tab for maximum effect!

Jaqaur – LookingOutwards01

“Google to buy Syria in $3.2 Billion Deal”
“Selena Gomez: There’s a Big Difference Between Yasser Arafat and Me”
Those are just two of the headlines generated by Darius Kazemi's "Two Headlines" Twitter bot, which pulls real headlines from the news and splices them together to create rarely accurate and often funny "headlines," which it then tweets. Kazemi makes what he calls "Weird Internet Stuff": small coding projects that usually take less than 5 hours to complete and often generate fairly useless images, phrases, and information.

In his 2014 Eyeo talk, Kazemi discussed making art with code, and what it takes to be successful in this. He displayed a mathematical formula and called it elegant, but then warned that the things that make equations elegant–compactness, infinite expressiveness–are a red flag for procedurally generated art, and “the computer art equivalent of a Thomas Kinkade.” He showed a few fractal landscapes generated by a computer, and declared that while they were impressive, they were boring: very little was different between the landscapes, and a viewer could quickly spot the patterns going on. According to Kazemi, making good computer generated art takes more than just spitting out some image or phrase based on a randomly generated number; it takes Templated Authorship, Random Input, and Context.

Templated Authorship means not just displaying random information, but rather interpreting it in some way, like letting a random number be the x coordinate of a shape, or having a random word be searched in Google. This is simple enough, and something most creators of digital art already make use of. Filtering or finding the Random Input in a unique way is a great way to make one's art more relevant. Like the "Two Headlines" bot, which takes its input from real news headlines, you should get your "randomness" from the world itself if you want your artwork to be a reflection of something meaningful rather than a series of random numbers. Finally, Context is what makes art mean something to viewers. Sure, you can make a bot that randomly generates a word and then displays its definition. But who would want to look at that? If, however, you put that information in the context of Ryan Reynolds saying "Hey girl, you must be a ___ because you are ___," as Kazemi did, you have a pickup line generator that people could play with for hours, even if half of what it says makes no sense.

As a speaker, Kazemi made excellent use of examples, both by showing the audience pictures of what he was talking about and by actually running some of the random-generator programs he spoke of right in front of them. He spoke about video games and tweets, things many young people today can relate to, but what really stuck out to me was the way he talked about elegance, coherence, and even general quality: these things must be considered undesirable when trying to make procedural art. Despite the fact that these are often considered positive traits of an individual work, striving for them when writing your code will yield results that are constrained, unoriginal, and boring–everything art shouldn’t be. I know I’ll have to fight my basic instinct as an artist in order to follow Kazemi’s advice, but hopefully my work is all the better for it.

Here is a link to Kazemi’s website, which contains many examples of his “Weird Internet Stuff”.

And here is the video I watched of his presentation:

Jaqaur – FirstWordLastWord

I’ve never thought of myself as particularly creative; as much as I love making art, I struggle to come up with anything truly novel. My approach to improving as an artist usually involves studying technique and theory, and most of my best work (in my opinion) is actively based on someone else’s. So, I’d have to say I’m more of a Last Word Artist, even though it’s hard to say now whether or not my work will stand the test of time.
That being said, I've always appreciated First Word Art most: seeing a form of art that I've never seen before feels like a new discovery, like I'm a tiny bit smarter or more worldly than before. I don't get that same excitement looking at an oil painting of fruit, no matter how well-painted it is.
Ultimately, First Word Art is a better example of creation–a major part of what art is all about–because it not only creates an individual work, but potentially a new genre, style, or idea. Last Word Art, on the other hand, provides a better means of expression–another key part of art–because an idea is more likely to reach an audience if they know what they’re looking at in the first place.