I really struggled to come up with a compelling idea for this project. Initially, I wanted to do something with hands using OpenPose. Unfortunately, it was too slow on my computer to use live.

After unsuccessfully attempting OpenPose, I shifted my focus to FaceOSC. I explored a couple of different options, including a rolling ping pong ball, before settling on this. Initially inspired by a project Marisa Lu made in the class, I wanted to create a drawing tool where the controller is one's face. Early functionality brainstorms included spewing particles out of one's mouth and 'licking' such particles to move them around. Unfortunately, FaceOSC's tongue detection is not great, so I had to shift directions.

Thinking back to the image processing project for 15-104, I thought it would be fun if the particles 'revealed' the user's face. Overall, I'm happy with the elegant simplicity of the piece. I like Char's observation when she described it as a 'modern age peephole'. However, I'm still working on a debug mode in case FaceOSC doesn't read the mouth correctly.


import processing.video.*;
import oscP5.*;

ParticleSystem ps;
Capture cam;
OscP5 oscP5;
Mouth mouth;
int found;
float[] rawArray;
int particleSize = 3;
int numParticles = 20;

void setup() {
  size(640, 480);
  setupOSC();
  setupCam();
  ps = new ParticleSystem(new PVector(width/2, 50));
  mouth = new Mouth();
}

void setupOSC() {
  rawArray = new float[132];
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");
}

void setupCam() {
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    // The camera can be initialized directly using an
    // element from the array returned by list():
    cam = new Capture(this, cameras[0]);
    cam.start();
  }
}

void draw() {
  if (cam == null) return;
  if (cam.available() == true) {
    cam.read();
  }
  // Mirror the sketch horizontally so it behaves like a mirror
  translate(width, 0);
  scale(-1, 1);
  mouth.update();
  ps.origin = new PVector(mouth.x, mouth.y);
  if (mouth.isOpen || mouth.isBlowing) {
    ps.addParticle();
  }
  ps.run();
  //image(cam, 0, 0);
}

// Debug helper: draw every raw face point reported by FaceOSC
void drawFacePoints() {
  int nData = rawArray.length;
  for (int val = 0; val < nData; val += 2) {
    fill(100, 100, 100);
    ellipse(rawArray[val], rawArray[val+1], 11, 11);
  }
}
// A class to describe a group of Particles
// An ArrayList is used to manage the list of Particles
class ParticleSystem {
  ArrayList<Particle> particles;
  PVector origin;

  ParticleSystem(PVector position) {
    origin = position.copy();
    particles = new ArrayList<Particle>();
  }

  // Emit a burst of particles from the current origin
  void addParticle() {
    for (int i = 0; i < numParticles; i++) {
      particles.add(new Particle(origin));
    }
  }

  // Update and draw every particle, pruning dead ones
  void run() {
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);
      p.run();
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }
}
class Mouth {
  boolean isOpen;
  boolean isBlowing;
  float h;
  float w;
  float x;
  float y;
  float xv;
  float yv;

  void update() {
    // Mouth landmarks from FaceOSC's raw point array ((x, y) pairs)
    PVector leftEdge = new PVector(rawArray[96], rawArray[97]);
    PVector rightEdge = new PVector(rawArray[108], rawArray[109]);
    PVector upperLipTop = new PVector(rawArray[102], rawArray[103]);
    PVector upperLipBottom = new PVector(rawArray[122], rawArray[123]);
    PVector lowerLipTop = new PVector(rawArray[128], rawArray[129]);
    PVector lowerLipBottom = new PVector(rawArray[114], rawArray[115]);
    float lastx = x;
    float lasty = y;
    w = rightEdge.x - leftEdge.x;
    x = (rightEdge.x - leftEdge.x)/2 + leftEdge.x;
    y = (lowerLipBottom.y - upperLipTop.y)/2 + upperLipTop.y;
    h = lowerLipBottom.y - upperLipTop.y;
    // The mouth counts as open when the gap between the lips
    // exceeds the average lip thickness
    float distOpen = lowerLipTop.y - upperLipBottom.y;
    float avgLipThickness = ((lowerLipBottom.y - lowerLipTop.y) +
                             (upperLipBottom.y - upperLipTop.y))/2;
    isOpen = distOpen > avgLipThickness;
    // A rounded, 'o'-shaped mouth reads as blowing
    isBlowing = w/h <= 1.5;
    xv = x - lastx;
    yv = y - lasty;
  }

  void drawDebug() {
    if (isOpen || isBlowing) {
      stroke(255, 255, 255, 150);
      ellipse(x, y, w, h);
    }
  }
}
// A simple Particle class
class Particle {
  PVector position;
  PVector velocity;
  PVector acceleration;
  float lifespan;

  Particle(PVector l) {
    acceleration = new PVector(0, 0.00);
    velocity = new PVector(random(-1, 1), random(-2, 0));
    position = l.copy();
    lifespan = 255.0;
  }

  void run() {
    update();
    display();
  }

  // Method to update position
  void update() {
    //lifespan -= 1.0;
    velocity.x = velocity.x * .99;
    velocity.y = velocity.y * .99;
    velocity.add(acceleration);
    position.add(velocity);
  }

  // Method to display: each particle is painted with the camera's
  // color at its position, 'revealing' the user's face
  void display() {
    //stroke(255, lifespan);
    float[] col = getColor(position.x, position.y);
    fill(col[0], col[1], col[2]);
    ellipse(position.x, position.y, particleSize, particleSize);
  }

  // Is the particle still useful?
  boolean isDead() {
    return lifespan < 0.0;
  }
}

// Sample the camera's pixel color at (x, y)
public float[] getColor(float x, float y) {
  cam.loadPixels();
  int index = int(y) * width + int(x);
  float[] col = {0, 0, 0};
  if (index > 0 && index < cam.pixels.length) {
    col[0] = red(cam.pixels[index]);
    col[1] = green(cam.pixels[index]);
    col[2] = blue(cam.pixels[index]);
  }
  return col;
}

// OSC callback for /found
public void found(int i) {
  found = i;
}

public void rawData(float[] raw) {
  rawArray = raw; // stash data in array
}
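For reference, FaceOSC's /raw message is one flat array of landmark coordinates (132 floats here, i.e. 66 points), so landmark k sits at indices 2k and 2k+1. That is why the mouth's left corner, landmark 48 in the tracker's point layout, is read as rawArray[96] and rawArray[97]. A minimal sketch of that indexing in plain Java (the helper and class names are my own, for illustration):

```java
// Illustrates how a flat (x, y) landmark array is indexed:
// landmark k is stored at offsets 2k and 2k+1.
public class LandmarkIndex {
    // Return the {x, y} pair for landmark k from a flat coordinate array.
    static float[] landmark(float[] raw, int k) {
        return new float[] { raw[2 * k], raw[2 * k + 1] };
    }

    public static void main(String[] args) {
        float[] raw = new float[132];    // 66 landmarks * 2 coordinates
        raw[96] = 250f;                  // landmark 48, x (left mouth corner)
        raw[97] = 300f;                  // landmark 48, y
        float[] p = landmark(raw, 48);
        System.out.println(p[0] + "," + p[1]); // → 250.0,300.0
    }
}
```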


I was really intrigued by Lauren McCarthy and Kyle McDonald's "How We Act Together," both by the interaction between individuals and by the interaction between computer and individual. The project shows you a stream of pictures of other people who interacted with the project before, for as long as you continue performing certain facial movements. It explores modern interaction, in both positive and negative ways.

I think it's interesting that it poses these questions about modern interaction, which involves lots of virtual interaction, through virtual interaction itself, with very distant people. The whole setup is already "awkward and intimate," and it's cool that they chose to accentuate that with a kind of interaction that induces a bodily response.



A Journey, Seoul by Mimi Son and Elliot Woods.

Seoul is a piece of interactive art that consists of numerous clear acrylic plates with white detailing added to them. The viewer can place the plates into a box that illuminates them and projects colors and patterns onto them. The project focuses on the viewer's memories, calling on the viewer to assemble a box that means something to them.

I like how self-defined the piece is. It doesn't try to be meaningful to the viewer by making a statement on its own; instead, its meaning comes from the meaning the viewer puts into it. This is interactive art at a very basic level, i.e., the user has a hand in creating their own version of the piece. In fact, no two viewers will view the same piece. While two viewers might assemble the same plates in the same order, nobody will assemble the box for the same reason, thus fundamentally changing the piece.

A Journey, Seoul is part of a series created by the artists that includes a couple of other cities. Each city provides different interactions -- for example, A Journey, London uses changing sounds and lights to tell different stories through the same physical model. Each iteration of the project involves the user more and more in the personalization (A Journey, Dublin is the last in the series and allows the viewer to actually draw on the panels to create their own narrative completely).

Of the series, though, I think Seoul is the strongest. London leaves little up to the viewer, making its interaction a mostly passive experience. Dublin, while quite visually pleasing, seems to give too much freedom to the viewer; the piece becomes more of a whiteboard than an art piece. Seoul, on the other hand, gives the viewer enough freedom to make their own memories from the predefined plates, while still keeping some control over what the viewer is seeing. There is value in limiting what a viewer can do, as it forces the viewer's imagination to fill in the gaps, rather than giving the viewer the freedom to fill in those gaps in the physical world.




Nova Jiang's "Figurative Drawing Device" (link here) was exhibited at the New Wight Gallery in Los Angeles. The device requires two people, a designated tracer and a person to be outlined, and draws the outline with clearly evident imperfections. I was drawn to this piece because of its personal and irregular nature: no two outlines would be the same, and each also depends on the tracer. The device seems to be made up of a series of metal bars that scale the tracer's larger outline down to something that fits on a sketchbook page. Although overall I love this piece, I wonder how the traces would look if the outline were completely black, creating a stronger contrast with the white paper background. I respect the social and psychological elements of this piece, the relationships it creates, and the fact that a single outline is not something that can be done quickly and perfectly. The poser must stay relatively still in poses that may be hard to maintain, while the outliner must focus on doing his/her best to create the drawing. Upon completion, the drawing serves as an interpretation of the participants' combined efforts, which is something I find exciting.


483 Lines Second Edition (2015) by Mimi Son explores how light and image can create a surreal digital environment. The interactivity lies in how the viewer views the piece. Today, I viewed one of Memo Akten's pieces, which explores creating different environments in each eye in virtual reality; the user explores the space by moving their head in the VR environment and by attempting to focus on different parts. Son's work attempts to create similarly surreal environments in reality through projection. Standing closer to or farther from the lines creates a sense of motion through the plane of lines, while looking at the piece down its length, as through a tunnel, creates a sense of motion along the plane of lines.


Delicate Boundaries - Chris Sugrue

This project interests me not because of the artistic concept behind it (which I find a bit simplistic), but because of the novel use of media. The way the artist uses projection mapping to bridge the divide between the digital and the physical is incredible, giving digital objects a "presence" without much effort. This served as a reference point for my "Augmented Body" project.



Recently I have been making more use of my Fitbit, and one of the social "games" I have been playing on it is the step challenge. Counting steps as a form of interactivity is a fairly old concept, with the first pedometers showing up from Japanese manufacturers in 1985 (interestingly, Leonardo da Vinci had envisioned a mechanical step-counting gadget centuries earlier!).

What I found unique about Fitbit's spin on the "step challenge" concept is the virtual races you can hold with your friends. Users can pick an iconic expanse to race across, like the Alps or the Appalachian or Rocky Mountains, and can see in real time where their friends stand along these trails. The Fitbit will (if permitted) use GPS tracking to figure out how much distance users cover, or use their gait and steps taken to generate a reasonably accurate estimate of the distance covered on these trails. Furthermore, walking these trails lets users unlock 180-degree views of these locations as well as in-game "treasures" and unlockables.
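The step-based fallback described above boils down to multiplying steps taken by an estimated stride length. A back-of-the-envelope sketch (the stride value and class name are illustrative, not Fitbit's actual model):

```java
// Rough step-based distance estimation: distance = steps * stride length.
// The 0.75 m stride below is a common illustrative average, not a real device constant.
public class StepDistance {
    // Estimate distance in kilometers from a step count and stride length in meters.
    static double distanceKm(int steps, double strideMeters) {
        return steps * strideMeters / 1000.0;
    }

    public static void main(String[] args) {
        // 10,000 steps at a 0.75 m stride
        System.out.println(distanceKm(10000, 0.75)); // → 7.5
    }
}
```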

The second and somewhat less obvious effect of this interactivity is that I find myself feeling closer to the people who do these challenges with me, regardless of the thousands of miles between us. The idea of catching up to and being able to overtake your friends helps me feel closer to them. I am not sure the developers realized this aspect of their product, but I think it is something special, and I see the potential in a game that makes you feel closer to people by moving relative to them.


This is an interactive installation work by Camille Utterback from 2013 entitled Flourish. It is a series of 7 glass panels, each with 2 layers, 3 of which are interactive. The combination of colors and textures creates a sense of depth, which is heightened by lights that respond to viewers' movements and travel between the panels. I'm inspired by the combination of materials and ideas in this piece: painting, sculpture, interactivity, time-based media, and glasswork are all combined to create what I see as a living painting with an incredible sense of depth. It's hard to know without seeing the piece in person, but I wish all of the panels were interactive, though perhaps it is more surprising if only a few are. I think the image of the tree is a bit cliché, and that the more abstract but still very natural elements of the rest of the panels are much more compelling. The idea of creating interactive paintings that change over time is exciting to me, particularly since I come from a painting background myself.




Daniel Rozin has created many mechanical "mirrors" that use video cameras, motion sensors, and motors to display people's reflections. I had seen the popular pompom mirror before, but I was interested to see the other mirrors he has created. One that I found interesting was the penguin mirror. Rather than facing the user directly, this mirror lies flat on the ground and takes the shape of a projected shadow. As the user moves, the stuffed penguins turn so that their white bellies show. I really enjoy how Rozin uses his mirrors to turn a simple shadow into a huge mechanized process.

I think penguins were very fitting for this mirror, because their coloring allows for a transition between black and white as they turn. A huge group of penguins together also gives the appearance of a penguin huddle. The sound of this mirror is also very pleasant: as you move more, the clicking of all the penguins turning increases. There is something very soothing about listening to an army of penguins follow your movements.



We Make the Weather is an interactive installation made as a collaboration between Greg Borenstein, Karolina Sobecka, and Sofy Yuditskaya. It was made in the aftermath of Hurricane Sandy and uses breath detection, motion capture, and the Unity game engine. The user controls a figure crossing a virtual bridge with their breath, where each breath makes the bridge longer and its end further from the figure. The user wears a headset that sends the sound of their breath to Unity, which controls the 3D environment projected on a screen. This is a particularly clever and unique way of interacting with the environment, and it plays perfectly into the concept and its environmental themes. It's also notable for both its visual simplicity and conceptual complexity.