takos-mocap

 

Originally I was collaborating with Keali, but as we worked on separate parts of our project, our paths diverged. I started by messing around with the three-dimensional representation of motion capture, using PeasyCam to make the camera view controllable with the mouse. Then, inspired by Étienne-Jules Marey, I experimented with showing all the frames of the motion capture rather than drawing only the current one. The result was interesting at first, but it became incomprehensible over time because everything ended up the same color.
human_motion_capture_running_a-1dds

Then I decided to cycle the color of the motion capture through all the different shades of gray, and to add opacity. I then recorded a motion capture (thanks Kander) of someone moving very slowly, so that the motion could be better shown with the lines closer together. The video and gif are at a smaller screen resolution, but my screenshots below were taken on a 3840 x 2160 pixel screen. I also made it so that you can reset the program by pressing ‘x’ – this redraws the background in whatever the current line color is.
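A minimal sketch of that accumulation trick, with a hypothetical moving dot standing in for a mocap joint (the real sketch draws the PBvh skeleton): the key pieces are the never-cleared background, the slowly cycling gray, and the ‘x’ reset.

float gray = 0;
float px, py;

void setup() {
  size(600, 600);
  background(0);
  px = width/2;
  py = height/2;
}

void draw() {
  // never clear the background, so every frame's drawing accumulates
  gray = (gray + 0.2) % 256;   // slowly cycle through the shades of gray
  stroke(gray, 40);            // low opacity keeps older frames legible
  float x = width/2  + 200 * sin(frameCount * 0.031);  // stand-in for a joint
  float y = height/2 + 200 * sin(frameCount * 0.047);
  line(px, py, x, y);
  px = x;
  py = y;
}

void keyPressed() {
  // 'x' resets the canvas to the current line color, as described above
  if (key == 'x') background(gray);
}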

[Screenshots of the gray-cycling sketch at various stages, plus the sketch itself]

 

takos-manifesto

The Critical Engineer recognizes that each work of engineering engineers its user, proportional to that user’s dependency upon it.

 

This tenet means that even though things are made for people to use, which implies that they should bend to human wants and needs, that is not quite the case. People adapt to their technology; the technology, constructed for them, also adapts in response to how people use it; and people then adapt to that change in turn. Critical engineering, in other words, describes a never-ending cycle that alters the person and the engineered object over and over again. I think this is interesting because it shows how nothing designed for people can ever truly be ‘finished’, nor can it really be optimized, since people are constantly adapting to their environments (which include the technology itself), to personal tastes, and to the differences between people that are either bridged or expanded.

 

 

Keali-Mocap

Beep boop!


beepboopfinal

First iteration of ideas: a body as particles. Initially I conjured ideas that could be attributed to some sort of radiation or atmospheric effect: a figure of star particles, pixel dust, wisps of smoke; something simple, seamless, but with admirable aesthetics. This desire to represent some aura-based atmosphere also led to indirect models of the form, such as a delicate rain scene, where the body is invisible and the viewer can only see it from where the rain bounces off. Another exploration involved soul butterflies, i.e. butterflies of death, a common trope in anime where glittering butterflies fly near lost souls or people close to death. (So, perhaps some iteration where, if the model makes abrupt movements or shakes, he or she could shake the butterflies, and death, off; this shaking and loosening effect could be applied to any of my particle-based ideas.)

I originally partnered with Takos on this assignment to toy with some of these ideas, and hers; but as we assigned ourselves parts to develop further, we continually drifted apart in our coding approaches and end goals… which eventually led to separate projects haha.
Ironically, my final product was an idea that she gave to me, including the link to the video below (thanks Takos!). Once she presented the idea, I could already picture all the attributes that would make the execution successful, and I went with it, while she developed a completely different idea (one that, ironically, was closer to my usual aesthetic of seamless monochromatic visuals…). The cool thing is, I’m glad I explored something different anyway, and I’m actually very happy with how well-rounded the result became: even though it is a visually simple simulation, I feel that all the details and characteristics were well considered and complement each other with purpose.

The result is a walking-signal simulator, in which a plane of circular bulbs lights up according to the human figure: while the figure is moving, the lights are green, and ideally, when the figure stops moving, they go red. I included audio of the walking signal at Morewood and Forbes (commonly nicknamed the “beep boop” by CMU students), and the audio pauses while the red stop signal is on. The lights are lit according to an isBoneNear function that computes the segment between each Bvh bone and its neighbor and compares it to the point (x, y) at the center of each circle on the bulb plane; if the distance is within my hardcoded epsilon, the circle turns green or red instead of the default gradient of grays.

Final: Troubleshooting the head was interesting. I assumed the head would be the bone without a parent (a conditional I had to include anyway so that there wouldn’t be a null-pointer exception), but when I raised the epsilon I saw no change, so I… guess the head wasn’t it. Golan then taught me about the function that lets me check bone names directly (“Head”), which made the process easier, and raising the epsilon for that bone succeeded in making the head a little more prominent. The default Head bone itself is still very close to the torso, though, so the final figure looks like it has a very short neck… (still the best improvement, because the figure originally looked headless… also, thank you Golan.) I even had an iteration where, because I couldn’t yet identify and isolate the head bone, my increase in epsilon accidentally made the model look pregnant (it turned out the bone I affected was at the waist, I guess…).

I could not fathom how to make the red stop signal trigger at random pauses, as I found it difficult to calculate whether the Bvh model moved between frames, so I ended up coding the file to pause at the end of every loop, a bit longer than usual, before relooping; at that moment of pause, I change the lit color to red and the audio amp to 0. I also added two frames to the borders to give the effect of the walking signal’s yellow box frame. Originally I made the plane flat, but I decided to give it a top-down gradient of gray rather than flat grays, to mimic some sort of shadow being cast from the top of the walking-signal box. The top four pictures in the screencaps below are from the initial tinkering stages of making the colors work and align well (as you can see, I had some debugging to do).

I also found it particularly fitting that the model is stretching, as if taking a break from a jog or pedestrian stroll 🙂 Take care of yourself, exercise, and remember that it’s the little things that count! (I should really take that advice…) Overall, I’m really pleased that, although the result appears uncomplicated, all its parts combine very well… it made me really happy that the class laughed once they realized exactly what my mocap was mimicking in real life. (The beep boop audio helped immensely, I believe… by the way, credit to this CMU remix, which is where I cropped the audio from!)

finaldoc

15034103_1366314526817589_188347404_o

GitHub repository

import processing.sound.*;
SoundFile file; 

// Originally from http://perfume-dev.github.io/

import java.util.ArrayList;
import java.util.List;

BvhParser parserA = new BvhParser();
PBvh bvh1, bvh2, bvh3;

long totalFrameTime;
long loopCounter;
long loopTime;

void setup()
{
  size( 600, 600, P3D );
  background( 0 );
  noStroke();
  frameRate( 30 );
  file = new SoundFile(this, "beepboop.wav");
  file.loop();

  bvh1 = new PBvh( loadStrings( "A_test.bvh" ) ); // testing w this one
  //bvh2 = new PBvh( loadStrings( "B_test.bvh" ) );
  //bvh3 = new PBvh( loadStrings( "C_test.bvh" ) );

  totalFrameTime = bvh1.parser.totalLoopTimeMillis();
  
  loop();
  
}

long lastMillis = -1;
long setToMillis = 0;

public void draw()
{
  if (lastMillis == -1) {
    lastMillis = millis();
  }
  background( 0 );
  fill(209,181,56);
  rect(0,0,width,height);
  fill(150,129,36);
  rect(20,20,width-40,height-40,8);
  fill(0);
  rect(30,30,width-60,height-60,18);

  //camera
  float _cos = 0.0;
  float _sin = 0.0;
  //camera(width/4.f + width/4.f * _cos +200, height/2.0f-100, 550 + 150 * _sin, width/2.0f, height/2.0f, -400, 0, 1, 0);
  camera(width/2, height/2, 510.0, width/2, height/2, 0.0, 0, 1, 0); 
  
  //ground 
  fill( color( 255 ));
  stroke(127);
  //line(width/2.0f, height/2.0f, -30, width/2.0f, height/2.0f, 30);
  stroke(127);
  //line(width/2.0f-30, height/2.0f, 0, width/2.0f + 30, height/2.0f, 0);
  stroke(255);

  pushMatrix();
  translate( width/2, height/2-10, 0);
  scale(-1, -1, -1);
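  // Loop-pause trick (described above): when millis() wraps past the clip's
  // total loop time, hold playback at ~200 ms into the clip for 150 draw
  // frames; while held, the bulbs turn red and the audio is muted (see the
  // grid loop below).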

  long currMillis = millis() % totalFrameTime;
  long elapsedMillis = currMillis - lastMillis;
  long savedCurrMillis = currMillis;
  if (currMillis < lastMillis) {
    loopCounter = 150;
    loopTime = setToMillis;
  }
  
  if (loopCounter > 0) {
    loopCounter--;
    setToMillis = 200;
  } else {
    setToMillis += elapsedMillis;
  }
    

  //model
  bvh1.update( (int)setToMillis );
  //bvh2.update( millis() );
  //bvh3.update( millis() );
  
  //bvh1.draw();
  //bvh2.draw();
  //bvh3.draw();
  
  lastMillis = savedCurrMillis;
  
  popMatrix();
  
  pushMatrix();
  int num = 54;
  int r = width / num; 
  noStroke();
  fill(64,64,64);
  //int count = 0;
  /*for (float i = 40; i < width-40; i = i+r) {
    count++;
    fill(0+count*2);
    for (float j = 40; j < height-40; j = j+r) {
      ellipse(j,i,r,r);
    }
  }*/
  
  
  fill(64,64,64); // 34
  
  for (float i = 40; i < width-40; i = i+r) {
    int count = 0;
    for (float j = 40; j < height-40; j = j+r) {
      count++;
      if (isBoneNear(bvh1.getBones(),i,j)) {
        if (loopCounter > 0) {
          fill(214,73,73);
          file.amp(0);
        } else {
          fill(182,232,169);
          file.amp(1);
        }
        ellipse(i,j,r,r);
      } else {
        fill(0+count*2);
        ellipse(i,j,r,r);
      }
    }
  }
  
  
  //ellipse(0,0,200,200);
  popMatrix();
      
}

boolean isBoneNear(List<BvhBone> bones, float x, float y) {
  float epsilon = 6.8;
  float scale = 2.7;
  x = x / scale;
  y = -y / scale;
  float xOffset = -105.0;
  float yOffset = 201.0;
  x += xOffset;
  y += yOffset;
  for (BvhBone bone : bones) {
    PVector start = bone.absPos;
    PVector end;
    epsilon = 6.8;
    if (bone.getName().equals("Head")) {
      epsilon = 12;
    }
    if (bone.getParent() == null) {
      end = bone.getChildren().get(0).absPos;
    } else {
      end = bone.getParent().absPos;
    }
    //PVector end = bone.absEndPos;
    float x1 = start.x;
    float y1 = start.y;
    float x2 = end.x;
    float y2 = end.y;
    double dist = lineDist(x1, y1, x2, y2, x, y);
    if (dist < epsilon) return true;
  }
  return false; 
}

double lineDist(float x1, float y1, float x2, float y2, float x3, float y3) {
  float px=x2-x1;
  float py=y2-y1;
  float temp=(px*px)+(py*py);
  if (temp == 0) return dist(x1, y1, x3, y3); // guard: zero-length segment
  float u=((x3 - x1) * px + (y3 - y1) * py) / (temp);
  if(u>1){
    u=1;
  }
  else if(u<0){
    u=0;
  }
  float x = x1 + u * px;
  float y = y1 + u * py;
  float dx = x - x3;
  float dy = y - y3;
  double dist = Math.sqrt(dx*dx + dy*dy);
  return dist;
}

Guodu-Mocap

In collaboration with Lumar, we explored displaying the kinetic energy of the body’s movements. The spheres of the body grow and shrink depending on how much kinetic energy there is in the body part we chose.
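A minimal sketch of the idea, assuming the PBvh/BvhBone parser API used in the other posts on this page; the file name and constants are placeholders. With unit mass, kinetic energy is proportional to speed squared, so each joint’s sphere radius tracks how far that joint moved since the previous frame:

PBvh bvh;
HashMap<BvhBone, PVector> lastPos = new HashMap<BvhBone, PVector>();

void setup() {
  size(600, 600, P3D);
  bvh = new PBvh(loadStrings("dance.bvh")); // placeholder file name
  noStroke();
}

void draw() {
  background(0);
  fill(255);
  translate(width/2, height/2, 0);
  scale(-1, -1, -1);
  bvh.update(millis());

  for (BvhBone b : bvh.parser.getBones()) {
    PVector p = b.absPos.copy();
    PVector prev = lastPos.containsKey(b) ? lastPos.get(b) : p;
    // KE ~ 1/2*m*v^2; with unit mass the radius just tracks speed squared
    float v = PVector.dist(p, prev);
    float r = constrain(2 + 0.5 * v * v, 2, 40);
    pushMatrix();
    translate(p.x, p.y, p.z);
    sphere(r);
    popMatrix();
    lastPos.put(b, p); // remember this frame's position for the next one
  }
}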

ballguychickendrumstickrainbowcube    rainbowopacoty


hizlik-lookingoutward08

Hiroshi Ishii – Materiable

This series of creations/projects by Hiroshi Ishii and the other members of the Tangible Media Group is perhaps the most famous, and therefore possibly the most cliché, Looking Outwards pick. I personally connect to this project because I saw the documentation long before this class, possibly even before college. I had never heard of physical computing or interactive art at the time. I just knew, when I saw it, that this was the future!

The Materiable “tables” are complex yet simple designs that allow you to “form” shapes using blocks on motors. As seen in the documentation, you can use programmatic designs (such as 3D graphs), interactive reactions (such as motion sensing), and “moldable” forms, which react to direct physical actions like pushing down on the blocks. I really loved the visualizations of the 3D graphs, and the “real life” example of the phone that moves into view.

My main critique, I suppose, is that it is still a bit too static (limited to a square of area on a table). I would love to see a room-sized version that you can walk on and interact with that way, perhaps with a 2-3 foot height when each block is fully extended. And, of course, the “resolution” could be increased, although with each additional “pixel”/block the complexity would only increase. But maybe in time.

Aliot-mocap

Fig. 1. Lil Wayne and Fat Joe making it rain.

Fig. 2. A closeup of Lil Wayne.

Fig. 3. Bender, a sentient robot from Futurama, making it rain, presumably in the club.

These gifs (Fig. 1-Fig. 3), while not my own personal sketches, illustrate the concept I was going for very well.

I wrote a sketch in Unity that receives OSC data from KinectV2OSC. The sketch identifies the hand position and state (open or closed) and allows users to make it rain. I am pretty pleased with what I accomplished, although the video and Kinect feeds must be manually synced up (i.e., you have to place the camera/laptop on top of, or very near, the Kinect camera). I would also have liked to introduce some random movement into the dolla dolla bills as they fall. While I do have a cloth simulator on them, some values need to be tweaked for the money-rain to feel more realistic.
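The project itself is in Unity, but the receiving end can be sketched in Processing with the oscP5 library; the port and the “/hand” address and argument layout below are assumptions for illustration, not KinectV2OSC’s actual schema:

// Hypothetical OSC receiver sketch (the real project is in Unity).
import oscP5.*;

OscP5 osc;
float handX, handY;
boolean handClosed = false;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 12345); // port must match the sender's
}

void oscEvent(OscMessage m) {
  // assumed message layout: /hand <x> <y> <state>, state 1 = closed fist
  if (m.checkAddrPattern("/hand")) {
    handX = m.get(0).floatValue();
    handY = m.get(1).floatValue();
    handClosed = m.get(2).intValue() == 1;
  }
}

void draw() {
  background(0);
  // a closed hand would be the trigger to spawn falling bills
  fill(handClosed ? color(0, 200, 0) : color(200));
  ellipse(map(handX, -1, 1, 0, width), map(handY, -1, 1, height, 0), 30, 30);
}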


kander – mocap

 

I spent a lot of time thinking about what I wanted to do with this project. My initial idea was to create little monsters (see sketchbook), but the level of detail I wanted was hard to achieve in Processing’s 3D environment. I then investigated Perlin noise, which had some interesting results (I wanted to have multiple characters, and have the noise of each affect the other), but I wasn’t super into that project either.

img_1743: the sketching for the monster
perlin, perlin2: some of my experiments with Perlin noise to generate forms

Then, while looking at some of my preliminary code for the monsters idea, I came up with the idea of making “corn people” (perhaps this idea subconsciously stemmed from missing the Midwest?). I adapted the Bvh data to make a corn person class, shown below, and my final product is a pair of corn people dancing in sync to techno music (the original data comes from my dancing). An earlier version had different bvh files, and the flailing corn people remind me of awkward dances from my high school days.

sketches of my ideas
corn_primiitve: the OG ear of corn

corn295 corn425

While dancing, I had to be cognizant of the leaf-like way in which the arms should move, and I was careful not to hop about, because corn people would be unlikely to move their feet that way. Overall, I am quite happy with how this assignment turned out, especially considering the late start I got due to investigating other topics. I really thought it was crucial to have the leaf-like limbs, and I’m glad I was able to accomplish that. I also think it’s pretty funny.

Github Repository

 


Jaqaur – MoCap

PIXEL DANCER

For this project, I knew from the start that I wanted my motion capture data to be my friend Katia dancing ballet. We actually recorded that data before I had coded much of anything. Hopefully I can record more types of motion and apply this animation to them in the future.

Anyhow, for this project, I wanted something a little more abstract looking than shapes attached to a skeleton. So I decided to make an animation in which nothing actually moves. There is a 3D grid of “pixels” (which can be any shape, color, or size) that choose their size and/or opacity based on whether or not they occupy the space where the dancer is. They appear and disappear, and collectively this creates the figure of a person and the illusion of movement.

I decided to work in Processing, because I had the most experience in it, but 3D was still new to me. Initially, I had my pixels calculate their distance from each joint and decide how big to be based on that. It worked, but was just a series of sphere-ish clumps of pixels moving around, and I wanted it to look less default-y and more like a real person. So, I looked up how to calculate the distance from a point to a line segment, and used that for my distance formula instead (making line segments out of the connections between joints). This resulted in a sort of 3D stick figure that I was pretty happy with.

I played around a lot with different shapes, sizes, and colors for the pixels. I also tried to find the best speed for them to appear and disappear, but this was hard to do. Different people I showed it to had different opinions on how long the pixels should last. Some really liked it when they lasted a long time, because it looked more interesting and abstract, but others liked the pixels to disappear quickly so that the dancer’s figure was not obscured. Deciding how quickly the pixels should appear was less difficult. While I initially wanted them to fade in somewhat slowly, this did not look good at all. The skeleton simply moved too fast for the pixels ever to reach full size/opacity, so it was hard to tell what was going on. As a result, I made the pixels pop into existence, and I think that looks as good as it could. The motion capture data still looks a bit jumpy in places, but I think that’s the data and not the animation.

Since there was such a wide variety in the types of pixels I could use for this project, I decided to make a whole bunch of them. Here are how some of my favorites look.

The original pink cube pixels:
dance_mocap

Like the original, but with spheres instead of cubes (and they’re blue!):
teal_mocap

Back to cubes, but this time, they fade out instead of shrinking out. I think it looks sort of flame-like:
fire_mocap

Back to shrinking out, but the cubes’ colors change. I know rainbows are sort of obnoxious, but I thought it was worth a shot. I also played with some extreme camera angles on this one:
rainbow_mocap

One final example, pretty much the opposite of the last one. Spheres, with a fixed color, that fade out. I think it looks kind of like smoke, especially from a distance. But I like how it looks up close, too:
white_mocap

I didn’t really know how to sketch this concept, so I didn’t (and I’m kind of hoping that all of my variations above make up for my somewhat lacking documentation of the process). In general, I’m happy with how this turned out, but I wish I had had time to code it before we recorded any motion, so I could really tailor the movement to the animation. Like I said, I hope to do more with this project in the future. Maybe I can make a little music video…

Here is a link to my code on github (the pink cube version): https://github.com/JacquiwithaQ/Interactivity-and-Computation/tree/master/Pixel_Dancer

And here is my code. I am only embedding the files I edited, which do not include the parser.

//Adapted by Jacqui Fashimpaur from in-class example

BvhParser parserA = new BvhParser();
PBvh bvh1, bvh2, bvh3;
final int maxSide = 200;

ArrayList<Piece> allPieces;
	
public void setup()
{
  size( 1280, 720, P3D );
  background(0);
  noStroke();
  frameRate( 70 );
  //noSmooth();
  
  bvh1 = new PBvh( loadStrings( "Katia_Dance_1_body1.bvh" ) );
  allPieces = new ArrayList<Piece>();
  for (int x=-400; x<100; x+=8){
    for (int y=-50; y<500; y+=8){
       for (int z=-400; z<100; z+=8){
         Piece myPiece = new Piece(x,y,z,bvh1);
         allPieces.add(myPiece);
       }
    }
  }
  loop();
}

public void draw()
{
  background(0);
  float t = millis()/5000.0f;
  float xCenter = width/2.0 + 150;
  float zCenter = 300;
  float camX = (xCenter - 200);// + 400*cos(t));
  float camZ = (zCenter + 400 + 300*sin(t));
  //moving camera
  camera(camX, height/2.0 - 200, camZ, width/2.0 + 150, height/2.0 - 200, 300, 0, 1, 0);
  //still camera
  //camera(xCenter, height/2.0 - 300, -300, width/2.0 + 150, height/2.0 - 200, 300, 0, 1, 0);
  
  pushMatrix();
  translate( width/2, height/2-10, 0);
  scale(-1, -1, -1);
 
  ambientLight(250, 250, 250);
  bvh1.update( millis() );
  //bvh1.draw();
  for (int i=0; i<allPieces.size(); i++){
    Piece p = allPieces.get(i);
    p.draw();
  }
  popMatrix();
}
//This code by Jacqui Fashimpaur for Golan Levin's class
//November 2016

public class Piece {
  float xPos;
  float yPos;
  float zPos;
  float side;
  PBvh bones;

  public Piece(float startX, float startY, float startZ, PBvh bone_file) {
    xPos = startX;
    yPos = startY;
    zPos = startZ;
    side = 0.01;
    bones = bone_file;
  }

  void draw() {
    set_side();
    if (side > 0.01) {
      noStroke();
      fill(255, 255, 255, side);
      translate(xPos, yPos, zPos);
      sphereDetail(5);
      sphere(9);
      translate(-xPos, -yPos, -zPos);
    }
  }

  void set_side() {

    //LINE-BASED FIGURE IMPLEMENTATION
    float head_dist = get_dist(bones.parser.getBones().get(48));
    float left_shin_dist = get_line_dist(bones.parser.getBones().get(5), bones.parser.getBones().get(6));
    float right_shin_dist = get_line_dist(bones.parser.getBones().get(3), bones.parser.getBones().get(2));
    float left_thigh_dist = get_line_dist(bones.parser.getBones().get(5), bones.parser.getBones().get(4));
    float right_thigh_dist = get_line_dist(bones.parser.getBones().get(3), bones.parser.getBones().get(4));
    float left_forearm_dist = get_line_dist(bones.parser.getBones().get(30), bones.parser.getBones().get(31));
    float right_forearm_dist = get_line_dist(bones.parser.getBones().get(11), bones.parser.getBones().get(12));
    float left_arm_dist = get_line_dist(bones.parser.getBones().get(29), bones.parser.getBones().get(30));
    float right_arm_dist = get_line_dist(bones.parser.getBones().get(10), bones.parser.getBones().get(11));
    float torso_dist = get_line_dist(bones.parser.getBones().get(0), bones.parser.getBones().get(8));
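    // Note: get_dist() and get_line_dist() below return *squared* distances,
    // so the thresholds that follow are in squared units.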

    boolean close_enough = ((head_dist<700) || (left_shin_dist<100) || (right_shin_dist<100) ||
                            (left_thigh_dist<150) || (right_thigh_dist<150) || (left_forearm_dist<100) ||
                            (right_forearm_dist<100) || (left_arm_dist<150) || (right_arm_dist<150) ||
                            (torso_dist<370));
  
    //LINE-BASED OR POINT-ONLY IMPLEMENTATION
    if (!close_enough) {
      side *= 0.91;
    } else {
      //side *= 200;
      side = maxSide;
    }
    /*if (side < 0.01) {
      side = 0.01;
    }*/
    if (side < 1) {
      side = 0.01;
    }
    if (side >= maxSide) {
      side = maxSide;
    }
  } 

  float get_dist(BvhBone b) {
    float x1 = b.absPos.x;
    float y1 = b.absPos.y;
    float z1 = b.absPos.z;
    float dist1 = abs(x1-xPos);
    float dist2 = abs(y1-yPos);
    float dist3 = abs(z1-zPos);
    return (dist1*dist1)+(dist2*dist2)+(dist3*dist3);
  }

  float get_line_dist(BvhBone b1, BvhBone b2) {
    float x1 = b1.absPos.x;
    float y1 = b1.absPos.y;
    float z1 = b1.absPos.z;
    float x2 = b2.absPos.x;
    float y2 = b2.absPos.y;
    float z2 = b2.absPos.z;
    float x3 = xPos;
    float y3 = yPos;
    float z3 = zPos;
    float dx = abs(x1-x2);
    float dy = abs(y1-y2);
    float dz = abs(z1-z2);
    float otherDist = sq(dx)+sq(dy)+sq(dz);
    if (otherDist == 0) otherDist = 0.001;
    float u = (((x3 - x1)*(x2 - x1)) + ((y3 - y1)*(y2 - y1)) + ((z3 - z1)*(z2 - z1)))/otherDist;
    if ((u >=0) && (u <= 1)) {
      float x = x1 + u*(x2 - x1);
      float y = y1 + u*(y2 - y1);
      float z = z1 + u*(z2 - z1);
      float dist4 = abs(x - xPos);
      float dist5 = abs(y - yPos);
      float dist6 = abs(z - zPos);
      return sq(dist4) + sq(dist5) + sq(dist6);
    }
    return 999999;
  }

  float getRed() {
    //FOR PINK 1: 
    return map(xPos, -400, 100, 100, 200);
    //FOR TEAL: return map(yPos, 350, 0, 2, 250);
    /* FOR RAINBOW:
    if ((millis()%30000) < 18000){
      return map((millis()%30000), 0, 18000, 255, 0);
    } else if ((millis()%30000) < 20000){
      return 0;
    } else {
      return map((millis()%30000), 20000, 30000, 0, 255);
    } */
    //return 255; // (commented out: unreachable after the return above)
  }

  float getGreen() {
    //return map(xPos, -400, 100, 50, 150);
    //FOR PINK 1: 
    return 100;
    //FOR TEAL: return map(yPos, 350, 0, 132, 255);
    /* FOR RAINBOW:
    if ((millis()%30000) < 18000){
      return map((millis()%30000), 0, 18000, 0, 255);
    } else if ((millis()%30000) < 20000){
      return map((millis()%30000), 18000, 20000, 255, 0);
    } else {
      return 0;
    } */
    //return 255; // (commented out: unreachable after the return above)
  }

  float getBlue() {
    //FOR PINK 1: 
    return map(yPos, -50, 600, 250, 50);
    //FOR TEAL: return map(yPos, 350, 0, 130, 255);
    /* FOR RAINBOW:
    if (millis()%30000 < 18000){
      return 0;
    } else if ((millis()%30000) < 20000){
      return map((millis()%30000), 18000, 20000, 0, 255);
    } else {
      return map((millis()%30000), 20000, 30000, 255, 0);
    } */
    //return 255; // (commented out: unreachable after the return above)
  }
}

Guodu-LookingOutwards08

How Do You Design the Future?

“Transform Beyond Pixels, Towards Radical Atoms” by Hiroshi Ishii

Intro

  • Last time Hiroshi was in this room was Randy Pausch’s Last Lecture September 18, 2007
  • Ars Electronica
  • Students are the future, how do you inspire them?

Timeline

  • 1992: ClearBoard: Seamless Collaboration Media
  • 1995: TRANS-Disciplinary: Finding opportunity in conflict between disciplines & Breaking  down old paradigms to create new archetypes
  • Ideas Colliding, Opportunities Emerging, Disciplines Transcending, Arts + Sciences
  • Music Technology MirrorFugue III by Xiao Xiao – embodied interaction to artistic interaction
  • Lexus Design in Milan 2014 – Transform f1
  • 1. Visions >100 years 2. Needs ~10 years 3. Technologies ~1 year
  • Tangible Bits embody digital information to interact with directly with hands
  • Origin: Weather Bottle – the sound of weather coming out of a soy sauce bottle in her kitchen
  • I/O Brush by Kimiko Ryokai, Stefan Marti & Hiroshi Ishii 2004
    • It looks like a painting but goes beyond that
    • Capturing and weaving history
  • PingPongPlus
  • Audio pad by James Patten and Ben Recht (Physics & Media)
  • Urp: Urban Planning Workbench
  • Sandscape
  • Two Materials:
    • 1. Frozen Atoms
    • 2. Intangible Pixels
  • Third Material
    • 3. Radical Atoms
  • Time Scape: based on relief, manipulate in real time
  • TRANSFORM
    • inFORM 2013: http://tangible.media.mit.edu/project/inform ART NOT UTILITY
    • Sean Follmer, Phillip Scholl, Amit Zoran,
    • Opposing Elements / Design vs Technology / Stillness vs Motion / Atoms vs Bits
    • Materiable is an interaction framework that builds a perspective
    • Flexibility, Elasticity, Viscosity
  • Biologic: “Bio is the new interface” http://tangible.media.mit.edu/project/biologic/
  • “Making Material Dance”
  • Why do you have to obey?
  • The Future is not to predict but to invent – Alan Kay 1971 “This is the century in which you can be proactive about the future; you don’t have to be reactive. The whole idea of having scientists and technology is that those things you can envision and describe can actually be built”
  • Envision — Art and philosophy,
  • Embody — Design and Technology,
  • Inspire — Art and Aesthetics
  • Eye –> Telescope –> Observatories –> Hubble Space Telescope –> Voyager 1
  • People could only see the world from their own perspective
    • Towards Holistic Worldview
    • Holistic Perspective –> Heuristic Focus –> (“Life is short”)
    • Inspiration: Douglas Engelbart, Mark Weiser, William Mitchell, Bill Buxton, Alan Kay, Nicholas Negroponte (Heroes and Gurus)
  • Who are friends? Bouncing ideas back, this tension is friendship
    • Golan Levin – Director of Studio for Creative Inquiry, CMU 🙂
    • Austin Lee
    • Lining Yao
  • Technology soon becomes obsolete
  • How do you focus on vision? What is the most exciting
    • Abacus – a physical embodiment of a digit
    • Abacus – sound of accounting
    • What do I care about?
    • Get more legs to your chair so people understand because art is abstract
  • Virtual Reality is completely opposite of Randy Pausch’s Dream and what I do, but I’m nice and I just say let them do it
  • Your one hour listening to me is beyond art, design, and technology
  • What do you want to communicate, and influence?
  • Reacting to Failure, sometimes the floor gets so low, the ceiling gets so high, but what’s the new potential?
  • Try not to think of Art, Design, Science, and Technology as boundaries

hizlik-mocap

screengrab

Created with Processing, project available on GitHub.

Creating this was an interesting process. I’m not sure if there is a real way to access bone-by-bone data through the API, so I ended up modifying the BVH parser code directly, outside of my main Processing file. I used each bone as a starting point, randomly drew lines to two other randomly picked bones in the body, and connected them to create a triangle (with a semi-opaque fill). I chose to have the triangles change position every frame rather than retain their assigned bones throughout, because I thought it was more abstract and exciting than a static triangle shape moving with the bones.
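A rough sketch of the triangle idea, written against the PBvh/BvhBone API used elsewhere on this page (the author actually patched the parser directly); the file name is a placeholder:

import java.util.List;

PBvh bvh;

void setup() {
  size(600, 600, P3D);
  bvh = new PBvh(loadStrings("walk.bvh")); // placeholder file name
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  scale(-1, -1, -1);
  bvh.update(millis());

  List<BvhBone> bones = bvh.parser.getBones();
  noStroke();
  fill(255, 60); // semi-opaque fill
  for (BvhBone b : bones) {
    // re-pick the two partner bones every frame, so no triangle persists
    PVector a = b.absPos;
    PVector c = bones.get((int) random(bones.size())).absPos;
    PVector d = bones.get((int) random(bones.size())).absPos;
    beginShape();
    vertex(a.x, a.y, a.z);
    vertex(c.x, c.y, c.z);
    vertex(d.x, d.y, d.z);
    endShape(CLOSE);
  }
}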

Some issues I ran into: I found it easier to access bone data (again, by modifying the parser library) than to use the three.js library, so my final project stayed in Processing. I also found it to be very glitchy with other BVH/mocap recordings found online. I have a folder full of hundreds of recordings, my favorites being martial-arts movements. However, even though these all work in the three.js demo file, they did not work at all in the Processing version, and I’m unsure why. It was not a normal crash (array index out of bounds or some other error); instead, it would just look weird and glitchy and move all over the place, with no actual code errors.

loop

I have no planning/sketches, as I created this project experimentally rather than planning it out like some of my other projects.


Lumar-LookingOutwards 8

Hiroshi Ishii described materials that translate the intangible yet versatile digital ‘pixel’, or atom, into the physical. He proposes radical, interactive, auto-adaptive materials.

I am very much excited about moving towards tangible media! A particular sentiment that Ishii expressed: “digital pixel, you can’t touch it… it sucks”. The direction he proposes for technology is one wherein more technology will feel like less. Technology will be used to make technology ‘invisible’, in the sense that most of it will be translatable to the physical world.

The little motorized tile ‘pixels’ presented at the Lexus Design Conference in Milan are a perfect example.

Here’s a little bits/tangible media inspired collaboration project of mine from earlier:

 

This is an Arduino project that I really enjoyed. The sound generated from motion, when applied with full-body motion capture, could give a whole new depth to the phrase “percussive dance”.

DODECAUDION (2011) from ◥ panGenerator on Vimeo.

Lumar- Reading-Response #08: Two Readings about Things

“1. The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.”

“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”

I am looking forward to the design lecture tomorrow. I am both excited about and incredibly wary of the booming rise of the Internet of Things. I can imagine it could very easily be another subject that would fit perfectly as an expansion of the documentary “Death by Design”, which explores the question:

“What is the cost of our digital dependency?”

It uncovers a global story of damaged lives, environmental destruction, and devices that are designed to die.

Of the engineering principles, I find #1 to be something that’s emphasized in my design studios. When designers include ‘fancy’ futuristic tech as solutions in their concept pitches, the biggest warning is always that the designer has to consider carefully how to make the technology work for people, finding the gaps within the system that the tech could fill rather than making gaps in the existing system to necessitate a convoluted solution. I liken using technology that one doesn’t fully understand as a design solution to the federal government pouring money into a flopping program; it doesn’t really help. Well… I mean, it does. Money will inherently make the wheels spin faster, but the amount of money certainly isn’t proportional to the net benefit; the money simply isn’t being used effectively. Technology, if not understood well, is much the same.

For number 5, I agree! Tools shape you, you shape tools!

“tools

shape

you

shape

tools”

That’s a quote directly from Graphic #37, a graphic design magazine that introduces computation as a medium for graphic design.

An analog example of this sentiment would be in the evolution of symbols. When a designer first makes a symbol for an object, it tends to be more literal and representative. But as the public gets used to this association, the next generation of designers would redesign the symbol to be a simplified version of the previous. If this second symbol was used at the very beginning, it might not be nearly as effective. (think of the symbols for phone)

 

 

Ngdon-mocap

 

snip20161108_17

https://github.com/LingDong-/60-212/tree/master/decSkel

I am most interested in the dynamism of bodily movements in motion capture, and in the idea of abstract human forms composed of objects. I wanted to develop a way to present the abstract impression of humans in energetic motion.

The rotating tapes evolved from the ones in my animated loop assignment. I am fascinated by the way they can depict a form when, and only when, they’re in motion, and by how flexible and sensitive to movement they are. So I decided to push on these concepts.

There are two layers of tapes around the bodies. The first, more densely populated, relates more closely to the human form, while the second responds mostly to the human motion. Therefore, when the actors are waving their hands crazily, or kicking and jumping, the second layer of tapes flies around wildly, exaggerating their movements, while the first layer still sticks closely to the bodies, outlining their forms.

To achieve this, I first calculate a series of evenly spaced points on the skeleton from the motion capture data. These serve as the centers of rotation for the tapes. Then I figure out the directions of the bones at these points, which become the normals of the tapes’ planes of rotation. I also store the previous frame’s information, so I know how much everything has moved since the last frame.

After this, through trigonometry, translation and rotation, I can draw each of the segments that make up a tape rotating over time.

Since I received comments about how the tapes in my animated loop assignment had questionable colors, I decided to develop a better color scheme for this one.

At a single frame, for either of the two layers of tapes, the colors consist mainly of three different shades of the same hue, plus one accent color. The colors of the second layer neighbor those of the first. When in motion, every color shifts in the same direction in hue. I then wrote a function to redistribute the hues on the color wheel, based on my discovery that some hues look nicer than others on the tapes.

I used the mocap data from Perfume, since I found that their data has the most decent quality compared to others I could find on the internet. But I really wonder what my program would look like visualizing motion in real time.

ezgif-com-optimize-2  ezgif-com-optimize-4

 


 

BvhParser parserA = new BvhParser();
PBvh[] bvhs = new PBvh[3];
Loop[] loops = new Loop[512];

int dmode = 0;

int[][] palette1 = new int[][]{{0,100,80},{5,100,60},{0,100,40},{30,100,80}};  
int[][] palette2= new int[][]{{45,17,95},{48,50,80},{60,50,80},{60,20,100}}; 
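// bettercolor() below implements the hue redistribution mentioned above: it
// remaps the 0-360 hue wheel piecewise, giving more of the wheel to the
// hues that read well on the tapes.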


int bettercolor(int c0){
  if ( c0 < 120){
    return floor(lerp(0,50,c0/120.0));
  }else if (c0 < 170){
    return floor(lerp(50,170,(c0-120.0)/50.0));
  }else if (c0 < 230){
    return floor(lerp(170,200,(c0-170.0)/60.0));
  }else if (c0 < 260){
    return floor(lerp(200,260,(c0-230.0)/30.0));
  }
  return c0;
}


float[] lerpcoord(float[] p0, float[] p1, float r){
  return new float[]{
    lerp(p0[0],p1[0],r),
    lerp(p0[1],p1[1],r),
    lerp(p0[2],p1[2],r)
  };
}
float dist3d(float[] p0, float[] p1){
  return sqrt(
    sq(p0[0]-p1[0])+
    sq(p0[1]-p1[1])+
    sq(p0[2]-p1[2])
  );
  
}


class Loop{
  float x0;
  float y0;
  float z0;
  float[] lxyz = new float[3];
  float a;
  float w = 4;
  float[] dirv = new float[3];
  float[] dirv2 = new float[3];
  float r;
  float r1;
  float r2;
  float rp1=1;
  float rp2=1;
  float[][] cl = new float[32][4];
  int cll = 16;
  float spd = 0.1;
  int id;
  int[] col = new int[3];
  public Loop(float x,float y,float z){
    this.x0 = x;
    this.y0 = y;
    this.z0 = z;
    id = floor(random(100000));
    a = random(PI*2);
  } 
  public void update(){
    
    r1 = lerp(r1,dist3d(new float[]{x0,y0,z0},lxyz),0.25);
    r2 = noise(id,frameCount*0.1)*10;
    
    r = r1*rp1+r2*rp2;
    a+=PI*spd;
    
    dirv2 = new float[]{x0-lxyz[0],y0-lxyz[1],z0-lxyz[2]};

    cl[0][0] = r*cos(a);
    cl[0][1] = r*sin(a);

    for (int i = 1; i < cll; i++){
      pushMatrix();
      translate(x0,y0,z0);
      rotateX(atan2(dirv[2],dirv[1]));
      rotateZ(atan2(dirv[1],dirv[0]));

      //translate(10,0,0);
      //box(20,5,5);
      
      
      cl[i][0] = r*cos(a+i*0.05*PI);
      cl[i][1] = r*sin(a+i*0.05*PI);
      //cl[i] = lerpcoord(cl[i],cl[i-1],spd);
      
      rotateY(PI/2);
      noStroke();
      fill(col[0],col[1],col[2]);
      beginShape();
        vertex(cl[i][0],cl[i][1],-w/2);
        vertex(cl[i][0],cl[i][1],w/2);
        vertex(cl[i-1][0],cl[i-1][1],w/2);
        vertex(cl[i-1][0],cl[i-1][1],-w/2);      
      endShape();
      if (dmode == 0){
        stroke(0,0,10);
      }
      line(cl[i][0],cl[i][1],-w/2,cl[i-1][0],cl[i-1][1],-w/2);
      line(cl[i][0],cl[i][1],w/2,cl[i-1][0],cl[i-1][1],w/2);
      //line(cl[i][0],cl[i][1],cl[i][2],cl[i-1][0],cl[i-1][1],cl[i-1][2]);
      
      popMatrix();
    }
    
    a += PI*0.1;
    
  }
}


public void setup()
{
  size( 1200, 720, P3D );
  background( 0 );
  noStroke();
  frameRate( 30 );
  
  bvhs[0] = new PBvh( loadStrings( "aachan.bvh" ) );
  bvhs[1] = new PBvh( loadStrings( "nocchi.bvh" ) );
  bvhs[2] = new PBvh( loadStrings( "kashiyuka.bvh" ) );
  for (int i = 0; i < loops.length; i++){
    loops[i] = new Loop(0.0,0.0,0.0);
  }
  if (dmode == 1){
    palette1 = new int[][]{{255,255,255}};
    palette2 = new int[][]{{100,255,255}};
  }else{
    colorMode(HSB,360,100,100);
  }
  //noLoop();
}

public void draw()
{
  background(0,0,10);

  //camera
  float rr = 600;
  float ra = PI/2.75;
  camera(width/2+rr*cos(ra),height/2,rr*sin(ra),width/2,height/2,0,0,1,0);

  pushMatrix();
  translate( width/2+50, height/2+150, 0);
  scale(-2, -2, -2);

  if (dmode > 0){
    background(230);
    directionalLight(160,160,160, 0.5, -1, 0.5);
    //pointLight(255,255,255,0,-300,-200);
    //pointLight(255,255,255,0,-300,0);
    ambientLight(160,160,160);
    //shininess(5.0); 
    fill(250);
    pushMatrix();
    //rotateX(frameCount*0.1);
    box(500,10,500);
    popMatrix();
    
  }
  //model
  int j = 0;
  int e = 0;
  for (int i = 0; i < bvhs.length; i++){
    bvhs[i].update( 2000+frameCount*25 );
 
    for( BvhBone b : bvhs[i].parser.getBones())
    {
      
      
      if (b.getParent()!= null){
        float px = b.getParent().absPos.x;
        float py = b.getParent().absPos.y;
        float pz = b.getParent().absPos.z;
        
        float[] p1 =  new float[]{b.absPos.x,b.absPos.y,b.absPos.z};
        float[] p0 = new float[]{px,py,pz};
        float d =  dist3d(p0,p1);
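        // first (dense) layer: a rotation center every 4 units along the
        // bone; the k += 100 loop below places the sparse second layer
        // that exaggerates motion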

        for (float k = 0; k < d; k+= 4){
          
          float[] c = lerpcoord(p0,p1,k/d);
          loops[j].lxyz = new float[]{loops[j].x0,loops[j].y0,loops[j].z0};
          loops[j].x0 = c[0];
          loops[j].y0 = c[1];
          loops[j].z0 = c[2];

          loops[j].rp1 = 0.5;
          loops[j].rp2 = 1.7;
          loops[j].dirv = new float[]{ px-b.absPos.x, py-b.absPos.y, pz-b.absPos.z};
          int[] col = palette1[j%palette1.length];
          loops[j].col[0] = bettercolor(floor(col[0]+320+frameCount*0.15)%360);
          loops[j].col[1] = col[1]; loops[j].col[2] = col[2];
          loops[j].cll = 24;
          j++;
        }
        for (float k = 0; k < d; k+= 100){
          
          float[] c = lerpcoord(p0,p1,k/d);
          loops[j].lxyz = new float[]{loops[j].x0,loops[j].y0,loops[j].z0};
          loops[j].x0 = c[0];
          loops[j].y0 = c[1];
          loops[j].z0 = c[2];
          loops[j].dirv = new float[]{ px-b.absPos.x, py-b.absPos.y, pz-b.absPos.z};
          loops[j].rp1 = 10;
          loops[j].rp2 = 2;
          int[] col = palette2[j%palette2.length];
          loops[j].col[0] = floor(col[0]+320+frameCount*0.15)%360;
          loops[j].col[1] = col[1]; loops[j].col[2] = col[2];
          loops[j].cll = 24;
          loops[j].cll = 16;
          loops[j].spd = 0.01;
          j++;
        }

        //line(b.absPos.x,b.absPos.y,b.absPos.z,px,py,pz);
      }

      pushMatrix();
      translate(b.absPos.x, b.absPos.y, b.absPos.z);
      fill(0,0,100);
      if (dmode <= 0){rotateY(PI/2-PI/2.75);ellipse(0, 0, 2, 2);}
      popMatrix();
      if (!b.hasChildren())
      {
        pushMatrix();
        translate( b.absEndPos.x, b.absEndPos.y, b.absEndPos.z);
        if (dmode <= 0){
          rotateY(PI/2-PI/2.75);
          ellipse(0,0,5,5);
        }
        popMatrix();
      }
    }
  }

  for (int i = 0; i < j; i++){
    loops[i].update();
  }

  popMatrix();
  //saveFrame("frames/"+nf(frameCount,6)+".png");

}

photo-on-11-10-16-at-3-27-pm


hizlik-lookingoutward07

138 Years of Popular Science, by Jer Thorp

This project caught my eye as I searched through the works of the several data-vis artists and designers listed on the lecture page. I have personally read many Pop-Sci articles in the past and, along with Wired, value their views, opinions and news on the technological and scientific world: past, present and future. So I was interested in which parameters Jer considered as the data for his visuals (there are so many things you can compare across 138 years of magazine data). He decided to create a visualization based on terms used throughout the years (“Radio-Television”, for example). Although I personally think this is an okay but not the best choice, I can’t think of anything cooler. I really enjoyed his method of visualizing this massive amount of data, though. According to his detailed documentation, he wanted a DNA-like structure for the decades, with clusters of years surrounding each decade; within each year is a circle that represents a separate issue of the magazine (with the circle color derived from the dominant color of that issue’s cover).

I personally love the final structure, but am still unsure about the colors. Although they are a culmination of all the dominant cover colors over the decades, the overall Pop-Sci page feels very dull/muted. Perhaps a pure-white background would help the cover colors pop more, or using color in the metadata/term text surrounding the structure? Not sure. I do love his many variations on the final structure, including this one “diversion”, as he calls it.

You can view more of this project at his blog post. I have been looking at more of his other works too and really enjoy his visualizations. I would love to actually pick up and read this page from the Pop-Sci magazine, though.

kander – manifesto

It seemed that most of the “critical engineer” points were concerned with considering the implications of technological developments on the world. They were mainly about acknowledging that technology doesn’t exist in a bubble, and implied that things like ethics and impact shouldn’t be disregarded for the sake of “progress.” I found item #6 particularly interesting:

“The Critical Engineer expands “machine” to describe interrelationships encompassing devices, bodies, agents, forces and networks.”

In other words, technology shouldn’t be defined exclusively as an item comprised solely of physical elements/hardware. The definition of technology and machines needs to take into account the space the item occupies in the world: an iPhone would have very different implications if it were owned by only the wealthiest, or if everyone in the world had access to one. To continue with the iPhone example, its uses would be much different if it couldn’t connect to the Internet.

This last example brings up the interesting idea that part of the iPhone’s invention is the Internet. If we extend this principle — that all technologies that interact with other innovations include said innovations in their own makeup — then we begin to see modern technology not as a set of individual components, but a web of connected ideas and devices that build upon each other.

 

 

Drewch – ManifestoReading

2. The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.

This tenet of the manifesto is interesting to me because I have seen its effects in action. The stereotype of the computer-illiterate parent or grandparent is based on this: the huge, sweeping adoption and evolution of computers left generations of people, born and raised in technologically simpler times, in the dust. Then came phones and social media, with warnings about becoming slaves to the instruments. This trend isn’t limited to electronic inventions, however. The printing press was violently opposed when it was first adopted, and even further back, writing itself was condemned: Plato warned that it would “create forgetfulness in the learners’ souls, because they will not use their memories,” or something like that. Whatever the next step may be, be it AR or AI, I’m ready to hear all about the moral and social outcries.

anson-Visualization

screen-shot-2016-11-07-at-4-23-35-pm

So, this was interesting, and an important lesson for me about parsing strings and creating TSVs. I adapted (admittedly minimally) a Mike Bostock bar graph, and plotted the number of rides in each of the 24 hours of the day. I think with a lot more practice I could come to like JavaScript quite a lot. When you hover over an individual bar, it changes color; here it’s shown in the web color “rebeccapurple”, which is a fun name for a color. I’d like to keep working with JavaScript in future projects.

Table allRidesTable;

int ridesPerHour[];
  
void setup() {

  ridesPerHour = new int[24]; 
  for (int s=0; s<24; s++) {
    ridesPerHour[s] = 0; // initialized to zero yo
  }


  allRidesTable = loadTable("HealthyRideRentals2015Q4.csv", "header"); 
  // Trip id,Starttime,Stoptime,Bikeid,Tripduration,From station id,From station name,To station id,To station name,Usertype

  int nRows = allRidesTable.getRowCount(); 
  for (int i=0; i<nRows; i++) {
    // (reconstructed: the original post truncates here; the loop presumably
    // parses the hour out of each ride's Starttime and tallies it)
    TableRow row = allRidesTable.getRow(i);
    String startTime = row.getString("Starttime"); // e.g. "10/4/2015 14:35"
    int spacePos = startTime.lastIndexOf(' ');
    int colonPos = startTime.lastIndexOf(':');
    int hour = Integer.parseInt(startTime.substring(spacePos+1, colonPos));
    ridesPerHour[hour]++;
  }
  // (the 24 hourly totals were then exported for the D3 bar graph)
}

Healthy Ride: A Day of Pittsburgh Bike Rides

kadoin-lookingoutwards08

I chose to look at Rachel Binx because she had worked for NASA, and that seemed pretty cool; but that was a while ago, and she’s moved on to different things since then. Those things are still pretty cool, though. The work that reeled me in was her set of visualizations of viral Facebook posts. They weren’t very readable without an explanation, but watching them was still mesmerizing. The structures formed have a lot of energy as they explode every which way, and have a very organic form. I also think the data she chose to base this project on was funny: three of George Takei’s Facebook posts. I see people sharing his posts all the time, so I totally believe the explosive virality shown in the time-lapse video; but at the same time, I’ve always wondered why George Takei is so active on social media. I get that he’s big into social activism and all that, but he posts a lot of memes for an old man. I’ve read that he has other people posting for him sometimes, but I still find the whole thing odd. Unfortunately, all the links to the original posts are broken now.

All in all, these visualizations don’t really resolve any confusion about George Takei’s social media activity, but they’re beautifully done and fascinating to watch.

LLAP Mr. Sulu

 

hizlik-lookingoutward06

Restroom Genderator by  & 

This Twitter bot is a unique take on the conversation about gender equality, gender fluidity and gender identity that has been taking over the political and social spectrum for the last few years. The bot generates and pairs random words to create unique, unheard-of genders, and pairs them with a symbol and a Braille translation. The result is formatted to look like a bathroom sign that you would hang in a corridor or on a bathroom door. The background colors, genders, symbols and Braille are all generated randomly.

I personally love the completeness and absurdity of the project. In terms of completeness, I respect a project that rounds out all the edges of an idea; in this case, he did not just generate genders, but fully constructed signs with all their required parts. And the results look clean and good, all of them.


The absurdity comes from the sometimes nonexistent link to human behavior or appearance (such as “minerals” in a gender name), the often unrelatable symbols for the human form (ordinarily you see a generic man or woman stick figure, not an alien symbol), and, finally, the fact that, in the end, this bot is sort of segregating genders even more. Instead of promoting a single bathroom for all to use, it generates an infinite number of segregated bathrooms, one for each gender. Whether you are for or against gender-neutral bathrooms, I believe the Restroom Genderator is a piece for all to enjoy and remark on.

cambu-visualization

click (on image) for interactive version

For this project, I decided to analyze the number of concurrent bicyclists using the Healthy Ride system at any one moment in time. To visualize this, I used Tom May’s Day/Hour Heatmap.

processing

Table allTimes;
IntDict grid; //thanks to gautam for the idea of an intdict
String gridKey;

//"this is about you having a car crash with D3" ~Golan 

void setup() {
  // change this if you add a new file 
  int dayOfMonthStarting = 7; 
  grid = new IntDict();

  //allTimes = loadTable("startStopTimes_sep19to25.csv", "header");
  allTimes = loadTable("startStopTimes_aug10to16.csv", "header");
  
  //header is Starttime, Stoptime

  int numRows = allTimes.getRowCount();
  for (int i = 0; i < numRows; i++) {
    TableRow curRow = allTimes.getRow(i);
    //M/D/YEAR 24HR:60MIN

    //PARAM ON START HOUR
    String startTime = curRow.getString("Starttime");
    String Str = startTime;
    int startChar = Str.lastIndexOf( ' ' );
    int endChar = Str.lastIndexOf( ':' );
    int startHourInt = Integer.parseInt(startTime.substring(startChar+1, endChar));

    //PARAM ON END HOUR
    String stopTime = curRow.getString("Stoptime"); //9/19/2015 0:01
    String StrR = stopTime;
    int startCharR = StrR.lastIndexOf( ' ' );
    int endCharR = StrR.lastIndexOf( ':' );
    int stopHourInt = Integer.parseInt(stopTime.substring(startCharR+1, endCharR));

    //PARAM ON DAY
    int curDay = Integer.parseInt(startTime.substring(2, 4)) - (dayOfMonthStarting - 1); //1-7

    println("-->> " + startTime + " to " + stopTime);
    //println("Place this in day: " + curDay + ", with an hour range of: "); 
    //println("start hour: " + startHourInt);
    //println("stop hour: " + stopHourInt);

    int rideDur;

    if (startHourInt - stopHourInt == 0) {
      //place one hour of usage at the startHourInt location
      rideDur = 1;
      //println(rideDur);
    } else {
      rideDur = stopHourInt - startHourInt + 1;
      //println(rideDur);
      //d3_export(i);
    }
    startHourInt = startHourInt + 1;
    gridKey = "D" + curDay + "H" + startHourInt;
    println(gridKey + " -> " + rideDur);

    if (rideDur == 1) { //only incrementing or making a single hour change
      keyCreate(gridKey);
    } else { //ranged creation
      println(rideDur + " @ " + startHourInt);
      for (int n = startHourInt; n <= startHourInt + rideDur; n++) {
        gridKey = "D" + curDay + "H" + n;
        if (n > 24) {
          println("warning");
          //do nothing
        } else {
          keyCreate(gridKey);
        }
        
        println(n + " -> " + gridKey);
      }
    }
  }
  println(grid);
  d3_export();
}

void keyCreate(String gridKey) {
  if (grid.hasKey(gridKey) == true) {
    grid.increment(gridKey);
  } else {
    grid.set(gridKey, 1);
  }
}

void d3_export() {
  Table d3_data;
  d3_data = new Table();
  d3_data.addColumn("day");
  d3_data.addColumn("hour");
  d3_data.addColumn("value");

  for (int days = 1; days <= 7; days++) {
    for (int hours = 1; hours <= 24; hours++) {
      String keyComb ="D" + days + "H" + hours; 
      //println(keyComb);
      TableRow newRow = d3_data.addRow();    
      newRow.setInt("day", days);        
      newRow.setInt("hour", hours);
      if (grid.hasKey(keyComb) == false) {
        newRow.setInt("value", 0);
      } else {
        newRow.setInt("value", grid.get(keyComb));
      }
    }
  }
  saveTable(d3_data, "data/sep7-13.tsv", "tsv");
}

kadoin-visualization

Data vis can be cool, and isolating data isn’t so bad, but d3 is a punk, and I’d need a bit more practice with it before I could make anything nice. I tried to see if any of the bikes went to all the stations, and sadly the answer is no. The most worldly of the bikes have been to only 36 stations, while some have stayed in one place the entire time.

This graph is pretty meh, but it gets the point across, I think. A continuation of this project might be a graph that shows a bell curve of the average number of stations visited by a bike.

Overall, data vis ain’t my favorite.

capture

link to the full graph here

Catlu – Visualization

datavisualization

On this project, I was initially curious how many times each bike returned “home,” based on the station it was parked at at the beginning of the year. I soon realized I wouldn’t be able to do this, because the Healthy Ride data did not come with dates. I then focused on the idea of bike “diversity”: I wondered how many different bikes had been to each station. I thought this information would be best shown in a bar graph, for clear comparison. More than information, I guess I was trying to draw out a story. First, I pulled the Healthy Ride file into Processing and used Processing to calculate the “diversity” per station, which I then saved as a TSV file (a sketch of this step is below). As for making the visualization, D3 proved a bit confusing. I tried to load the TSV in my code, but just couldn’t get it to show up. In the end, since the data I wanted to use in D3 wasn’t that long, I hard-coded it into the code as two arrays. I changed and tested a lot of things in the code (taken from the Workergnome D3 bar graph example), and ended up with my graph.
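A sketch of what the per-station “diversity” count might look like, assuming the Healthy Ride column names quoted earlier on this page (Bikeid, From/To station name); the original code is on GitHub, so this is only an illustration of the idea:

import java.util.HashSet;

void setup() {
  Table rides = loadTable("HealthyRideRentals2015Q4.csv", "header");
  HashMap<String, HashSet<Integer>> bikesAtStation = new HashMap<String, HashSet<Integer>>();

  // record every distinct bike seen at each station (either endpoint of a trip)
  for (TableRow row : rides.rows()) {
    int bike = row.getInt("Bikeid");
    for (String station : new String[] {
         row.getString("From station name"), row.getString("To station name") }) {
      if (!bikesAtStation.containsKey(station)) {
        bikesAtStation.put(station, new HashSet<Integer>());
      }
      bikesAtStation.get(station).add(bike);
    }
  }

  // write one row per station: name + number of distinct bikes seen there
  Table out = new Table();
  out.addColumn("station");
  out.addColumn("diversity");
  for (String station : bikesAtStation.keySet()) {
    TableRow r = out.addRow();
    r.setString("station", station);
    r.setInt("diversity", bikesAtStation.get(station).size());
  }
  saveTable(out, "data/diversity.tsv", "tsv");
  exit();
}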

The screenshot is a little blurry for some reason. Here is a link to a clearer version you can zoom in on:

localhost

Here is the Processing code for the bike diversity calculations (github):

bike calculations

Here is the D3 code used to make my bar graph (github):

bike D3 code

Drewch – Mocap

holyshi

Partly inspired by Unnamed SoundSculpture by Daniel Franke & Cedric Kiefer, I made a mo-cap marble-party-man. The bone ends of the motion-captured body spew out marbles of assorted colors (size and darkness depending on Z position). I wish I had worked with Lumar, since Lumar figured out how to calculate the kinetic energy of every motion-captured point, which could have determined how the marbles are spawned (for example, you could stand still but fling marbles with your arm, while other marbles just drop to the floor). I also could not do collision detection (unlike what I saw in Unnamed SoundSculpture), because the process would be incredibly slow to render; however, I recognize that that is a route I could have taken.
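A minimal sketch of the marble spawning, assuming the PBvh/BvhBone API used elsewhere on this page (absEndPos for bone tips); the file name, colors, and constants are placeholders, and there is no collision detection, as noted:

PBvh bvh;
ArrayList<PVector> pos = new ArrayList<PVector>();
ArrayList<PVector> vel = new ArrayList<PVector>();

void setup() {
  size(800, 600, P3D);
  bvh = new PBvh(loadStrings("capture.bvh")); // placeholder file name
}

void draw() {
  background(255);
  translate(width/2, height/2, 0);
  scale(-1, -1, -1);
  bvh.update(millis());

  // spawn a marble at every bone tip, every frame
  for (BvhBone b : bvh.parser.getBones()) {
    if (!b.hasChildren()) {
      pos.add(b.absEndPos.copy());
      vel.add(PVector.random3D().mult(0.5));
    }
  }

  // integrate gravity and draw; darkness tracks Z, as in the final piece
  for (int i = 0; i < pos.size(); i++) {
    PVector p = pos.get(i);
    PVector v = vel.get(i);
    v.y -= 0.2; // gravity (the scale() above flips the axes)
    p.add(v);
    pushMatrix();
    translate(p.x, p.y, p.z);
    noStroke();
    fill(map(p.z, -200, 200, 0, 200));
    sphere(3);
    popMatrix();
  }
}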

github: https://github.com/AndyChanglee/60-212/tree/readme-edits/brekel_mocap

 


Drewch – LookingOutwards08

AI is a big topic nowadays, but sometimes I like to take a step back from super-intelligence and instead look at the progress being made in endearing robots. I love the idea of people having empathy for robots. I always hear the arguments “but it doesn’t have feelings” or “it’s not like it actually cares”, but at what point does this stop holding true? It’s practically the same discussion as an AI’s capacity for emotion. Pinokio, by Adam Ben-Dror, is one of those projects that pushes this discussion just a little bit further. It isn’t groundbreaking, but I find it endearing nonetheless.

ngdon-LookingOutwards07

All Streets by Ben Fry

http://3rdfloor.fathom.info/products/all-streets

This visualization simply draws all of the streets in the U.S., without any other information such as terrain or boundaries. However, we can clearly see where the cities are, and what the terrain probably looks like, from the density and shape of the streets.

I’m particularly interested in this type of visualization because it does not attempt to extract information for the reader, but instead lets the reader explore the data themselves. It has a very simple idea (just drawing all the streets) but a very complex effect. Different people with different interests can find different things in the data, and the more you look at it, the more you see.

I also find the visualization aesthetically pleasing. The way the delicate thin black lines divide the cells when you look up close, and the texture of the image when seen from afar, are really beautiful.