chromsan-arsculpture

When set free, this gnome is not such a friendly fellow.

Each time you turn away from the gnome, he moves towards you and gets larger and scarier. After you initially place him down, you can walk around him, but as you start to turn away, he rotates to always be facing you. Given that this was a scary gnome, I found a place that was particularly fitting and straight out of a horror movie: a long, bleak, fluorescent-lit hallway. The gnome would have been more at home outside, but I wanted to remove him from his normal environment and place him somewhere new and off-putting. It certainly does not feel as natural to have a gnome in a hallway, but I think this adds to the overall effect.

 

chromsan-UnityTutorial

I wasn't sure what to screenshot, so I chose a few of the more interesting ones.

Loops

Scope

Turning on/off a light

Driving around a sphere

Camera following a sphere as it falls

Spawning/deleting instances of a prefab

 

chromsan-Book

The Extended Encyclopedia of Philosophy: A collection of twelve new -isms. A sample chapter can be viewed here.

For this project, I wanted to create some sort of encyclopedia of new terms. I was originally thinking about doing made-up wars, but in the end shifted to creating new philosophies.

Since I was making an encyclopedia, it was fitting to source the text from Wikipedia. This has a few advantages. First, Wikipedia has a nice API that makes searching for pages and getting the text of any page very easy in JavaScript. Second, I could get any page Wikipedia has to offer, which means I could create many interesting combinations. Finally, all text on Wikipedia is explicitly free to be remixed.
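Roughly, a single request looks like this (the parameters and the example title are illustrative, not pulled directly from my scripts):

// Fetch the plain-text introduction of a Wikipedia article in one API call.
// 'origin=*' enables CORS so the request works from a browser sketch.
function fetchIntro(title, onDone) {
  const url = 'https://en.wikipedia.org/w/api.php' +
    '?action=query&prop=extracts&exintro&explaintext&format=json&origin=*' +
    '&titles=' + encodeURIComponent(title);
  loadJSON(url, (json) => {
    const pages = json.query.pages;
    const first = pages[Object.keys(pages)[0]];
    onDone(first.extract); // the article's intro paragraphs as plain text
  });
}

function setup() {
  noCanvas();
  fetchIntro('Stoicism', (text) => console.log(text));
}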

I began by collecting a list of all the -isms that Wikipedia has articles on. There's a nice list here. When you request a page, Wikipedia returns the entire unparsed HTML structure, so JavaScript is well suited to the task. I wrote a short script using p5 to do this, which left me with a JSON file of all the -isms Wikipedia has to offer, structured to include the URLs from the hyperlinks on the page, so I could fetch each article with a single API call.
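The gist of that script, sketched here with illustrative names (the list page title and output filename are placeholders), is to request the rendered HTML, pull out the -ism links, and save them with saveJSON:

// Request the rendered HTML of the list page, collect every wiki link whose
// text ends in "ism", and save the results as a JSON file.
function setup() {
  noCanvas();
  const url = 'https://en.wikipedia.org/w/api.php' +
    '?action=parse&prop=text&format=json&origin=*' +
    '&page=' + encodeURIComponent('List of philosophies');
  loadJSON(url, (json) => {
    const html = json.parse.text['*']; // the unparsed page HTML
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const isms = [];
    doc.querySelectorAll('a[href^="/wiki/"]').forEach((a) => {
      const name = a.textContent.trim();
      if (name.endsWith('ism')) {
        isms.push({ name: name, url: a.getAttribute('href') });
      }
    });
    saveJSON(isms, 'isms.json'); // one entry per -ism, with its article URL
  });
}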

I then started to make the made-up philosophies. Some are created by mashing two together: dropping the -ism from the first and replacing it with -ist. Others are created by adding a Latin prefix from this list (which I parsed into a JSON file in the same manner as before) to the first term. This yields an interesting list of new philosophies like:
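Under the hood the mashing is just string manipulation; a simplified sketch of the two rules (the example terms and prefix are placeholders):

// Rule 1: combine two -isms by turning the first into an -ist modifier,
// e.g. "empiricism" + "nihilism" -> "empiricist nihilism".
function mashTwo(ismA, ismB) {
  return ismA.replace(/ism$/i, 'ist') + ' ' + ismB.toLowerCase();
}

// Rule 2: glue a Latin prefix onto a single term,
// e.g. "contra" + "empiricism" -> "contraempiricism".
function addPrefix(prefix, ism) {
  return prefix + ism.toLowerCase();
}

console.log(mashTwo('empiricism', 'nihilism')); // "empiricist nihilism"
console.log(addPrefix('contra', 'empiricism')); // "contraempiricism"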

I then collected the pages for each of the original philosophies used and parsed them to get just the first couple of paragraphs that summarize the terms. These were then mashed together to get around 4-6 sentences describing the new term. I did a little work to replace mentions of the original terms with the new term to make it feel a bit more natural. This entire process was done in another p5 script. It results in descriptions such as:
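The replacement step was essentially a case-insensitive find-and-replace over the combined summary, along these lines (the term names are just examples):

// Swap mentions of the source philosophies for the new term so the mashed
// summary reads a little more naturally.
function relabel(summary, originalTerms, newTerm) {
  let out = summary;
  for (const term of originalTerms) {
    out = out.replace(new RegExp(term, 'gi'), newTerm);
  }
  return out;
}

console.log(relabel('Empiricism holds that knowledge comes from experience.',
                    ['empiricism', 'nihilism'], 'empiricist nihilism'));
// -> "empiricist nihilism holds that knowledge comes from experience."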

I wasn't a huge fan of the fact that the original philosophers were kept in the descriptions of the new terms, as it roots them a bit too much in the two original philosophies. So, I wrote a short script in Python to mix up the names and places a bit. For this, I needed a way to identify individuals like Karl Popper in the example above. There is a really nice package built on NLTK that I've worked with before which does exactly that. This created text like:

Finally, I typeset the pages using basil.js in InDesign. I'd never worked with basil.js before and really liked the amount of freedom it affords you.
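Roughly, a minimal basil.js layout script looks something like this (using the b.-prefixed API; the data and measurements here are placeholders rather than an excerpt of my actual script):

#include "../../bundle/basil.js";

// One made-up term per page: a large heading followed by its mashed summary.
// In the real script the terms came from the generated JSON file.
var terms = [
  { name: "Contraempiricism", summary: "..." },
  { name: "Empiricist nihilism", summary: "..." }
];

function draw() {
  for (var i = 0; i < terms.length; i++) {
    if (i > 0) b.addPage();
    b.textSize(30);
    b.text(terms[i].name, 60, 60, b.width - 120, 60);                 // term heading
    b.textSize(11);
    b.text(terms[i].summary, 60, 140, b.width - 120, b.height - 220); // description
  }
}

b.go();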

The resulting text can have some interesting combinations of philosophies. Oftentimes there are clear contradictions between the different elements of the terms, which makes it even better. I do think, however, that the text is a bit too long, especially when there are 12 pages of it; I probably should have made each summary about 3-4 sentences. There was also more work to do in ensuring the text was well formatted, with appropriate spaces and periods, as some of this got mangled during parsing. It's readable, but not perfect in this regard. I'm otherwise happy with the result.

A sample chapter can be viewed here.

The full set of PDFs can be downloaded here.

All the code for this project can be found here.

chromsan-parrish

The part of the lecture that stuck with me the most was the discussion of mapping the WordNet hierarchy onto the cortex of the brain. The fact that the brain has this topographic representation of words and concepts is quite incredible. There are other mappings in the brain too; visual and auditory information is represented in a sort of hierarchy throughout the cortex. The fact that this also extends to language is even more impressive given that language is evolutionarily newer, and therefore less ingrained in the structure of the brain. These findings suggest a sort of innate mapping of the abstract meanings of words that affects how they are perceived and processed. Very interesting bit of research presented there.

chromsan-weirdie-Automaton

The development of our automaton was mainly driven by the variety of interesting movements produced by the objects we found at the Center for Creative Reuse. We played around by combining some of the objects and seeing how their interactions changed. We started putting these objects together without one cohesive idea about what the final product would be. The idea was to develop as much character as possible.

The first major element that we liked was the springy antennae. We wanted them to respond to the motion and distance of the observer, so we mapped the antennae's angular movement to the observer's movement. Since we had a limited number of motors, we wanted to use the two motors that powered the antennae for other movements as well, so we attached them to the body of the creature. This produced a pleasing flapping motion that made it seem a bit like a penguin. In the same spirit, we added a variety of springs to the inside of the creature, which bounce around and grind against each other.

The final result is fairly simple, but we got some interesting motion and character out of it. It would have been nice to do something with linkages, which we thought about a few times, but never implemented. We also thought about doing something with LEDs on the inside, but decided against it as well. We could have painted it yellow.

The hot glue gun was handled evenly on this project. Weirdie made the hat and wrote the code. Chromsan made the inside and hooked up the wiring. We both made the antennae.

In progress pictures:

Code can be found here.

 

chromsan-LookingOutwards04

This is a project by Adam Harvey and Surya Mattu called SkyLift. It's a WiFi geolocation device that tricks users' phones into reporting that they are at the Ecuadorian Embassy in London, the current residence of Julian Assange. It broadcasts a WiFi signal that exploits the WiFi geolocation services of a cellphone, mimicking the MAC address used at the Ecuadorian Embassy, which the two collaborators captured from the street outside. It uses a Raspberry Pi and a WiFi module and can be built for under $50, though Harvey has recently released plans for a second version that can be built for less than $5. All the software that runs on it and the steps for setting up your own can be found on his GitHub.

The goal of this project, according to Harvey, is to call attention to aspects of the surveillance state, especially around the particularly charged aura of Julian Assange. In addition, the project allows you to disappear from one geographical location and appear at a completely different one, at least digitally. There's certainly an interesting combination created by the merger of the digital and physical in this project, both conceptually and literally.

chromsan-Body

For this project, I created a visualization of the space where the gaze of two characters in conversation meet. As the characters look at or away from one another, a shape is created between them that grows over the course of a scene.

I used ml5.js and PoseNet to track the characters in the videos and p5.js to draw the shape between them. The field of view for each character is determined by a line drawn from their ear to a point between their eyes and nose. The line is extended outward at two different angles to create a triangle. Both characters' fields of vision are checked for intersecting points and the shape drawn between them is progressively built from these points, layered over and over.

I decided to use scenes from movies mainly because of the one scene from There Will Be Blood where Daniel and Eli engage in an intense standoff, with Daniel talking down to Eli in both the dialogue and his body language. Eli's gaze falls to the ground while Daniel's stays fixed on Eli. This scene worked particularly well, as there is a good amount of change in their gazes throughout, and the shape that emerges between the two is quite interesting. I searched for more scenes with dialogue shot from the side, and came up with a number of them, mostly from P.T. Anderson and Tarantino, the only two directors who seem to use it regularly. I ended up using clips from Kill Bill Vol. 1, There Will Be Blood, Boogie Nights, The Big Lebowski, and Inglourious Basterds.

I initially wanted to create one shape between the characters and have their gazes pull it around the screen, but I didn't like the result. So instead I filled the area where the gazes intersected, layering fill upon fill to build up a complex shape that ends up looking like a brushstroke. There are definitely limitations to this method though; the camera has to be relatively still, there can't be too many characters, and the scene has to be pretty short. Drawing all the shapes on every draw call, in addition to running PoseNet in realtime, is quite computationally demanding. Even with a good graphics card the video can start to lag; the frames definitely start to drop by the end of some of the longer scenes. Overall, I'm pretty happy with the result, but there is certainly more optimization to be done.

A few more:

Some debug views: 

The code can be found on GitHub.

let poseNet, poses = [];
let video, videoIsPlaying; 
let left = [], right = [], points = [];
let initialAlpha = 100, FOVheight = 150;
let time;
let show = true, debug = true;
 
let vidName = 'milkshake'; // base name of the clip to load (vidName + '.mp4')
let rgb = '#513005';       // fill color for the accumulated gaze shapes
 
// One edge of a field-of-view triangle, stored as a line segment.
class FOVedge {
 
  constructor(_x1, _y1, _x2, _y2) {
    this.x1 = _x1;
    this.x2 = _x2;
    this.y1 = _y1;
    this.y2 = _y2;
  }
  draw(r, g, b, a){
    stroke(r, g, b, a);
    line(this.x1, this.y1, this.x2, this.y2);
  }
  // segment-segment intersection test; returns the intersection point or false
  intersects(l2) {
    let denom = ((l2.y2 - l2.y1) * (this.x2 - this.x1) - (l2.x2 - l2.x1) * (this.y2 - this.y1));
    let ua = ((l2.x2 - l2.x1) * (this.y1 - l2.y1) - (l2.y2 - l2.y1) * (this.x1 - l2.x1)) / denom;
    let ub = ((this.x2 - this.x1) * (this.y1 - l2.y1) - (this.y2 - this.y1) * (this.x1 - l2.x1)) / denom;
    if (ua < 0 || ua > 1 || ub < 0 || ub > 1) return false;
    let x = this.x1 + ua * (this.x2 - this.x1);
    let y = this.y1 + ua * (this.y2 - this.y1);
    return {x, y};
  }
}
 
// A triangular field of view: two edges fan out from the eye point (x1, y1)
// toward the extended gaze point (x2, y2), and a third edge closes the triangle.
class FOV {
  constructor(x1, y1, x2, y2) {
    this.s1 = new FOVedge(x1, y1, x2, y2 - FOVheight); // upper edge
    this.s2 = new FOVedge(x1, y1, x2, y2 + FOVheight); // lower edge
    this.s3 = new FOVedge(x2, y2 + FOVheight, x2, y2 - FOVheight); // closing edge
    this.col = color(191, 191, 191, initialAlpha);
    this.show = true;
    // right = 1, left = -1
    if (x2 > x1) {
      this.direction = 1;
    } else {
      this.direction = -1;
    }
  }
  // collect points where this FOV's edges cross another FOV's edges;
  // if exactly two edges cross, close the shape with this FOV's origin point
  checkForIntersections(fov) {
    let intersections = [];
    for (let j = 1; j < 3; j++){
      let sattr = 's' + j.toString();
      for (let i = 1; i < 3; i++) {
        let attr = 's' + i.toString();
        let ints = this[sattr].intersects(fov[attr]);
        if (ints != false) {
          intersections.push(ints);
        }
      }
    }
    if (intersections.length == 2){
      intersections.push({x: this.s1.x1, y: this.s1.y1});
    }
    return intersections;
  }
  // gradually reduce the alpha; once fully transparent, flag for removal
  fade(){
    this.col.levels[3] = this.col.levels[3] - 3;
    if (this.col.levels[3] < 0) {
      this.show = false;
    }
  } 
  draw() {
    this.s1.draw(this.col.levels[0], this.col.levels[1], this.col.levels[2], this.col.levels[3]);
    this.s2.draw(this.col.levels[0], this.col.levels[1], this.col.levels[2], this.col.levels[3]);
    this.s3.draw(this.col.levels[0], this.col.levels[1], this.col.levels[2], this.col.levels[3]);
  }
}
 
function setup() {
  videoIsPlaying = false; 
  createCanvas(1280, 720, P2D);
  //createCanvas(1920, 1080);
  video = createVideo( vidName + '.mp4', vidLoad);
  video.size(width, height);
  poseNet = ml5.poseNet(video, modelReady);
  poseNet.on('pose', function(results) {
    poses = results;
  });
  video.hide();
}
 
function modelReady() {
  select('#status').html('Model Loaded');
}
 
function mousePressed(){
  vidLoad();
}
 
function draw() {
  if (show) {
    image(video, 0, 0, width, height);
  } else {
    background(214, 214, 214);
  }
 
  drawKeypoints();
}
 
function drawKeypoints()  {
 
  for (let i = 0; i < poses.length; i++) {
 
    let pose = poses[i].pose;
    // iterate over the first five keypoints: nose, eyes, and ears
    for (let j = 0; j < 5; j++) {
      let keypoint = pose.keypoints[j];
      if ((j == 3 || j == 4) && keypoint.score > 0.7) { // left or right ear
        // use the eye position (right eye if detected confidently, otherwise left)
        let earX = 0, earY = 0;
        if (pose.keypoints[2].score > 0.7){
          earX = pose.keypoints[2].position.x;
          earY = pose.keypoints[2].position.y;
        } else {
          earX = pose.keypoints[1].position.x;
          earY = pose.keypoints[1].position.y;
        }
         // gaze direction: from the ear (x1, y1) through the eye (x2, y2)
         let x1 = keypoint.position.x;
         let y1 = keypoint.position.y;
         let x2 = earX;
         let y2 = earY;
         //let x2 = (earX + pose.keypoints[0].position.x) / 2;
         //let y2 = (earY + pose.keypoints[0].position.y) / 2;
         // extend the ear-to-eye vector 1200 px past the eye to form the gaze line
         let length = Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
         let newX = x2 + (x2 - x1) / length * 1200;
         let newY = y2 + (y2 - y1) / length * 1200;
 
         let look = new FOV(x2, y2, newX, newY);
 
         if (look.direction == -1) {
          let lastR = right.pop();
          if (lastR != undefined){
            let ints = look.checkForIntersections(lastR)
            if (ints.length >= 3 && show) {
              points.push(ints);
            }
            right.push(lastR);
          }
          left.push(look)
         } else {
          let lastL = left.pop();
          if (lastL != undefined) {
            let ints = look.checkForIntersections(lastL)
            if (ints.length >= 3 && show) {
              points.push(ints);
            }
            left.push(lastL);
          }
          look.col =  color(56, 56, 56, initialAlpha); 
          right.push(look)
         }
      }
      let col = color(rgb);
      col.levels[3] = 2;
      fill(col.levels[0], col.levels[1], col.levels[2], col.levels[3]);
      noStroke();
      if (!debug){
      for (let i = 0; i < points.length; i++) {
        beginShape();
        for (let j = 0; j < points[i].length; j++) {
          vertex(points[i][j].x, points[i][j].y);
        }
        endShape(CLOSE)
      }
    }
      for (let i = 0; i < left.length; i++) {
        if (debug){left[i].draw();}
        left[i].fade();
        if (!left[i].show) {
          left.splice(i, 1);
        }
      }
      for (let i = 0; i < right.length; i++) {
        if (debug){right[i].draw();}
        right[i].fade();
        if (!right[i].show) {
          right.splice(i, 1);
        }
      }
    }
  }
}
 
function vidLoad() {
  time = video.duration();
  video.stop();
  video.loop();
  videoIsPlaying = true;
  // in non-debug mode, hide the video once it has played through once
  if (!debug) {
    setTimeout(function () {
      show = false;
      video.volume(0);
      video.hide();
    }, time * 1000);
  }
}
function keyPressed(){
  if (videoIsPlaying) {
    video.pause();
    videoIsPlaying = false;
  } else {
    video.loop();
    videoIsPlaying = true;
  }
}

chromsan-LookingOutwards03

We Make the Weather is an interactive installation made as a collaboration between Greg Borenstein, Karolina Sobecka, and Sofy Yuditskaya. It was made in the aftermath of Hurricane Sandy and uses breath detection, motion capture, and the Unity game engine. The user controls a figure crossing a virtual bridge with their breath, where each breath makes the bridge longer and further from the figure. The user wears a headset that sends the sound of their breath to Unity, which controls the 3D environment projected on a screen. This is a particularly clever and unique way of interacting with the environment, and it plays perfectly into their concept and its environmental themes. It's also notable for both its visual simplicity and conceptual complexity.