Stillness (Assignment 7/9)

Overview

stillness cover photo

I made a projection of virtual butterflies which will come land on you (well, on your projected silhouette) if you hold still, and will fly away if you move.

Inspiration

This semester, a friend of mine successfully lobbied for the creation of a “Mindfulness Room” in one of the dorms on campus. The room is meant to be a place where students go to relax, meditate, and, as the name implies, be more mindful.

For my final project, I wanted to create something that was for a particular place, and so I chose the Mindfulness Room. Having tried to meditate in the past, I know it can be very challenging to clear your mind and sit entirely still for very long. So, the core of this project was to make something that would make you want to be still (and that would also fit in with the overall look and feel of the room.)

Technical Aspects

Some of the technical hurdles in this project:

  • Capturing a silhouette from a Kinect cam image. I tried to DIY this initially, which didn’t go well. Instead, I ended up finding this tutorial about integrating a Kinect and PBox2D. I fixed the tutorial code so that it would run in the most recent version of Processing and with the most recent version of the SimpleOpenNI library.
  • Integrating assorted libraries: SimpleOpenNI, blobDetection, PBox2D, ToxicLibs, and standard Java libraries. I almost certainly didn’t actually need to use all of them, but figured that out too late.
  • Dealing with janky parts of those libraries (e.g., jitteriness in the blobDetection library, fussiness of SimpleOpenNI). Using the libraries made my project possible, but I also couldn’t fix some things about them. I did, however, manage to improve blob detection from the Kinect cam image by filtering out all non-blue pixels (the Kinect highlights a User in blue).
  • Trying to simulate butterflies flying—with physics. Simulating a whimsical flight path using forces in PBox2D gave only okay results. I think it would be easier to create the butterflies’ paths in vanilla Processing or with another library (though that might make collision detection far more challenging); that idea is also sketched below.
  • Finding a computationally cheap way to do motion tracking. When I tried simple motion tracking, my program ate all my computer’s memory and still didn’t run. I ended up taking the “center of mass” that Kinect/SimpleOpenNI provides and using that to track motion, which worked well for my purposes (sketched below).
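For reference, a minimal sketch of the center-of-mass stillness check. This isn’t my project code: the thresholds are invented placeholders, and it assumes a SimpleOpenNI version that provides enableUser(), getUsers(), and getCoM(userId, PVector).

import SimpleOpenNI.*;

SimpleOpenNI context;
PVector lastCoM = new PVector();
int stillFrames = 0;              // consecutive frames the user has held still
float stillnessThreshold = 10.0;  // max center-of-mass movement (roughly mm) per frame
int framesUntilLanding = 90;      // ~3 seconds at 30 fps

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();
}

void draw() {
  context.update();
  int[] users = context.getUsers();
  if (users.length == 0) return;

  PVector com = new PVector();
  context.getCoM(users[0], com);  // current center of mass of the first user

  // compare against the last frame's center of mass
  if (PVector.dist(com, lastCoM) < stillnessThreshold) {
    stillFrames++;
  } else {
    stillFrames = 0;              // any large movement resets the count
  }
  lastCoM.set(com);

  if (stillFrames > framesUntilLanding) {
    // in the real project: steer the butterflies toward the silhouette here
  }
}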
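And a rough sketch of the “vanilla Processing” flight-path idea mentioned above: steer a butterfly with Perlin noise for wander plus a sine term for flutter. All constants are invented, and there’s no collision detection.

float bx, by;  // butterfly position
float t = 0;   // noise time

void setup() {
  size(640, 480);
  bx = width/2;
  by = height/2;
}

void draw() {
  background(255);
  // noise() gives a smoothly wandering heading; sin() adds wingbeat jitter
  float heading = noise(t) * TWO_PI * 2;
  float flutter = sin(t * 40) * 1.5;
  bx += cos(heading) * 2;
  by += sin(heading) * 2 + flutter;
  bx = constrain(bx, 0, width);
  by = constrain(by, 0, height);
  ellipse(bx, by, 8, 8);  // stand-in for the butterfly drawing
  t += 0.01;
}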

Critical Reflection

As I worked on this project, I was unsure throughout whether all the pieces (butterflies, Kinect, etc.) would come together and work well. I think they came together fairly well in the end. Even though the project right now doesn’t live up to what I imagined in my head at the beginning, it still does what I essentially wanted it to do—make you want to stay still.

When people saw the project, their general response was “that’s really cool”, which was rewarding. The person in charge of the Mindfulness Room also liked it enough that she wanted me to figure out how to make it work there long term. (That could be logistically difficult in terms of setup and security: the room is always open and unsupervised, and drilling into the walls to mount things isn’t allowed.)

So, though there’s a list of things I think should be better about this project (see below), I think I realized my concept simply, and realized it well given that simplicity.

Things that could be better about this:

  • Butterflies’ visual appeal. Ideally, the wings would be hinged-together PBox2D objects. And antennae/other details would add a lot.
  • Butterflies’ movement. It could be more butterfly-like.
  • Attraction to person should probably be more gradual/a few butterflies at a time.
  • Code cleanliness: not good.
  • Ragged edge of person’s silhouette should be smooth.
  • Better capture of user. Sometimes the Kinect refuses to recognize a person as a User, or stops tracking it. This could have to do with how I treat the cam image, or placement, or lighting, or just be part of how I was doing Kinect/SimpleOpenNI. After talking with Golan, I think ditching OpenNI altogether and doing thresholding on the depth image would work best.

Video

Code

https://github.com/juliat/stillness2

Assignment 9 Looking Outwards + Sketches

Looking Outwards

Discovery 1: Triple Geek Showcase

The Triple Geek Showcase is a montage of projects by Stefan Goodchild, a “freelance creative coder, motion graphics & interaction designer”. Most of his projects are concert visuals (largely light-based) made for a particular musical artist and venue. Those visuals, especially in montage form, are really dynamic and aesthetically pleasing. Seeing his work made me want to experiment with large-scale projections or music visualization.

Discovery 2: Equil Jot Pen

My second discovery is more of a product and less of an art piece, but stay with me. The Equil Jot consists of a receiver and a pen which sends out a signal as a person writes. It allows a person to create on an analog medium—paper—and immediately have the marks appear and become manipulable onscreen. That blurring of the border between analog and digital input is the part that interests me. It reminds me thematically of a project I’ve referenced before by Bret Victor—a gestural animation app. I like that these projects change how input works to make it simpler to create more expressive digital images or animations.

Discovery 3: Rubik’s Cube Building

I can’t find the link for this one again as I write this, but I think the project video I watched was filmed at the building pictured above, the Stuttgart library. The library is modeled after a Rubik’s Cube and has lights all over the outside. The project was to create a custom, normal-sized Rubik’s Cube which people could solve, and as they moved the small-scale cube, the LEDs on the building would mirror the changes. I liked the whimsy of this project, and how it took an existing light installation and made it interactive.

Discovery 4(ish): Expandos Digital Packaging

This project doesn’t involve circuits or code (at least as a visible output), but I wanted to include it anyway. The items pictured above are ExpanDOs, which are basically cardboard packing chips. The fun thing about them is that they are also Lego-like—once pulled apart, you can use them to build all kinds of structures (see below). What I like about this product/project is how it takes what would otherwise be trash and makes something of it. Looking at these reminded me of an idea I’ve had for a while, which is to take scrap cardboard and laser cut it to make larger building-block toys (similar to ones I had as a kid).

Final Project Ideas

Idea 1: Meditation Butterflies

One of my friends at CMU has been working with the university to create a Mindfulness Room, “a room where there is no technology or homework allowed, somewhere to go to relax, be inspired, and breathe.” My idea is to use computer vision (through a webcam, Kinect, whatever) to create butterflies in the room which respond to people who enter the environment. Basically, the butterflies flutter around when you walk into the room, but as you sit and meditate longer, the butterflies come and “land” on you/your shadow. To me, the most important part of this concept is creating a simulated natural environment (fitting the mood of the room) that rewards people for sitting/being still.

Idea 2: Sunrise Blinds

This concept is to link a motor to the internet and use information about the time of sunrise and sunset in a given place to control the raising and lowering of a set of blinds. People have done very similar projects before, though those are largely just automatically controllable blinds. The role I want my blinds to play is to make people more aware of and in tune with the rising and setting of the sun (as a counter to constant artificial lighting).

The other motivation I have here is selfish—there are bright outdoor lights outside my window, so I close my blinds to block out as much light as possible at night. However, in the morning, waking up is easier if light gradually brightens (like the sunrise). But since I’m asleep, I can’t open the blinds to let the light in—this project solves that problem for me.

Idea 3: Whimsical Cheery Creature

The last project is the most whimsical. It would basically use code to control a cheery mechanical figurine, either modeled after the above motivational penguin or built as a little creature which wags its tail in different ways in response to someone’s interactions with it. One benefit of this project is that it seems relatively simple, and that it would let me gain technical experience with creating basic robotics (I have none).

Are you sitting yourself to death? (Arduino Measuring Device – Julia & Dave)

Project Description

To start this assignment, we brainstormed interesting things to measure, interesting ways to measure them, and possible locations for our measurement device. We came up with all kinds of ideas—from a person’s repetitive thoughts (through self-report) to boredom in class (seeing if we could look at network data and count how many people were on Facebook). In the end, we narrowed our ideas down to these three:

  1. Measuring light in dark areas where plants grow nonetheless—seeing how much light is actually there.

  2. Measuring the effect of an installation piece on altruism—if we had a countdown that would motivate people to be aware of or donate to distant issues.

  3. (on the quantified-self side of things) measuring how long we spent sitting during the day. This was inspired by a study that indicated that excessive sitting increases health risks and decreases lifespan.

We chose to measure sitting as our final idea because it fit better with the assignment—measuring something that existed already—rather than idea 2, which was very deliberately changing people’s behavior. We also chose to measure sitting because we felt the measurement could be a provocative one, encouraging a wearer to think about their lifestyle choices.

Sensors Used

We used a pressure sensor (a.k.a. round force-sensitive resistor) placed in a back pocket to tell if a person was sitting.

Sketch

sketch photo

Photos

Video

Code

Github repo: https://github.com/juliat/ems2a8

#include <Wire.h>
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"

// initialize the 7 segment number display
Adafruit_7segment matrix = Adafruit_7segment();

// pressure sensor
const int sensorPin = 0;
int pressureReading; // the analog reading from the FSR voltage divider
int sittingThreshold = 900;

// cell phone vibrator
int vibratorPin = 7;

#define minutesToMillisFactor 60000

boolean currentlySitting = false;
float overallTimeSitting = 360*minutesToMillisFactor;
float sitStartTime = 0;
float timeBefore = 0;

boolean warningNow;

// 0.1 minutes for testing, but this would really be about 30 minutes (for practical use)
const float sitWarningThresholdInMinutes = 0.1;

void setup() {
  // get the serial port running for debugging
  Serial.begin(9600);
  Serial.println("Start");

  // setup the 7 segment number display
  matrix.begin(0x70);

  // initialize vibratorPin
  pinMode(vibratorPin, OUTPUT);
}

void loop() {
  pressureReading = analogRead(sensorPin);

  checkSitting();
  Serial.println("overallTimeSitting");
  Serial.println(overallTimeSitting);

  // how long have I currently been sitting?
  float currentSitDurationInMillis = millis() - sitStartTime;

  // warn if I've been sitting too long
  if ((currentSitDurationInMillis > sitWarningThresholdInMinutes * minutesToMillisFactor) 
     && currentlySitting) {
       warningNow = true;
  }

  if (warningNow == true) {
    digitalWrite(vibratorPin, HIGH);
  }

  // if I'm sitting, update the display, adding the time delta

  matrix.print((long)overallTimeSitting/minutesToMillisFactor, DEC);
  matrix.writeDisplay();
  delay(50);
}

void checkSitting() {
  Serial.print("pressure: ");
  Serial.println(pressureReading);

  float timeNow = millis();
  // are you sitting?
  if (pressureReading < sittingThreshold) {
    // were you sitting last time I checked?
    if (currentlySitting == false) {
      sitStartTime = millis();
      currentlySitting = true;
      Serial.println("started sitting");
    }
    else {
      Serial.println("still sitting");
      // update overall sitting time
      float thisSitDuration = timeNow - timeBefore;
      overallTimeSitting += thisSitDuration;
    }
  }
  // are you sitting now?
  if (pressureReading > sittingThreshold) {
    // did you just get up?
    if (currentlySitting == true) {
      currentlySitting = false; 
      Serial.println("got up");
      warningNow = false;
      digitalWrite(vibratorPin, LOW);
    }
  }
  timeBefore = timeNow;
}

Diagram

Looking Outwards: Shields

Discovery 1: Electric Imp Shield

The Electric Imp is, basically, a wifi shield, but it has two things that make it more appealing than a standard wifi shield.

  1. It is cheaper than a standard wifi shield (at least the one I found on SparkFun). I’m not sure why that is.
  2. It comes with software/a system that makes connecting the shield and Arduino to the internet simpler. I haven’t used a plain wifi shield, but I have used the Electric Imp for a project, and it did abstract away most basic I/O and networking concerns, so I could just program behavior for it.

Last time I used the Electric Imp, it was so that a user could adjust the behavior of a device from a web interface (the adjustments were then transmitted through the Electric Imp system/device). Some ideas for how I could use the Electric Imp in other projects are:

  • Have two installations that are connected to each other, where the state or viewers of one is reflected in the state or behavior of the other (with the two communicating via the imp)
  • Because the Electric Imp is scalable, it could be a part of a project with dozens of installations (geographically spread out) that respond to online data or sensor data dynamically. (Beacons, kind of.)

Discovery 2: CMUcam v4

This is an expensive little shield, but pretty amazing. I wouldn’t have imagined that this existed before I found it. The CMUcam v4 “is a fully programmable embedded computer vision sensor” which is dedicated to handling motion and color input information and pipes that data to the Arduino to which it is attached.

Some ideas for how to use this computer vision shield would be:

  • Having a robot which responds “emotionally” to its environment’s light circumstances. (Basically a creature which can see the real world and react.) I’m imagining a cute blob-type creature with an LED inside where the blob’s color changes in response to the light and motion around it. (This could also branch off into a “chameleon”-type project.)
  • Have an environment/objects in an environment that can see and respond to people in it. For example, there could be a box that runs away from the people around it, or several which chase or surround people that come in a room.

Discovery 3: GPS Shield

This GPS shield can locate your position within a few meters. I know I’m almost certainly not the only one including this in my post, but it just has a ton of potential. Some ideas I have for it are:

  • Having something which floats, like a mini hot air balloon, with a GPS tracker in it, then recording the GPS coordinates to capture the path that the balloon takes. Then I could use the path data to draw the wind or something (or this could be combined with the computer vision one).
  • Have an object which might get passed around (e.g., a rubber ball with an Arduino suspended inside) and use GPS with the first Electric Imp shield to track its path through the world. I don’t think the rubber ball is the best idea. Maybe a “lost” smartphone instead (although in that case you could “lose” a smartphone and then use its built-in sensors to track where it goes and how it’s used).

Looking Outwards: Sensors and Actuators

Discovery 1: IR Distance Sensor

IR distance sensor includes cable (10cm-80cm)


This SHARP distance sensor bounces [infrared light] off objects to determine how far away they are. It returns an analog voltage that can be used to determine how close the nearest object is.

This IR sensor seems like it has interesting possibilities because it would let you have virtual creations respond to the proximity of their viewers. Since personal space is a pretty meaningful social cue, allowing virtual creations to respond to it seems like it would have evocative potential. Some ideas off the top of my head for how to use this would be:

  • A virtual creature which shies away depending on how the viewer approaches it (this would be similar to my creature/ecosystem design, but using the actual proximity of a viewer rather than mouse proximity)
  • A virtual siren who would sing and beckon viewers closer.
  • A mobile robot-creature that would change its behavior depending on how near it was to a person. (e.g., interacting with that person when they are within a foot or less of him or her.)

Discovery 2: Coin Acceptor

Coin Acceptor - Programmable 1 Coin Type


When a valid coin is inserted, the output line will pulse for 20-60ms (configurable)

This coin acceptor seems interesting because using it in a project would let you play with the dynamics of payment, and what people are willing to pay for, etc. Some quick ideas for possible uses are:

  • Submitting 25 cents to get access for a few seconds to a webcam somewhere interesting. One example would be a “viewfinder” (like the kind they have on top of the Empire State Building or the Space Needle) that shows you a view from somewhere else in the world.
  • A social commentary piece where you put in coins and they “trickle down” through a bunch of obstacles, ending up distributed unevenly at the bottom (a mockery of trickle-down economics). Alternatively, you could let people interact with it and control where things trickle down with a knob or knobs representing economic variables.


Discovery 3: Toy Motor


Gear motors allow the use of economical low-horsepower motors to provide great motive force at low speed such as in lifts, winches, medical tables, jacks and robotics. They can be large enough to lift a building or small enough to drive a tiny clock. (src)


This toy motor seems interesting, first, because it is cheap, so using many of them together would not be expensive, and second, because its small size opens up interesting possibilities for hiding the motor itself. Some ideas for this motor are:

  • Making everyday objects move. Maybe recreating the “Be Our Guest” scene from Beauty and the Beast with standard dishware (this might not be feasible, and definitely echoes Adam’s Pixar lamp project)
  • Doing animatronics with small dolls or stuffed animals. Maybe you could have a version of the Sims in which people control actual dolls.

Confetti? (Lasercut Screen)

Screen

http://cmuems.com/2013/a/wp-content/uploads/sites/2/2013/10/frame-0070.pdf
Confetti Screen 1

http://cmuems.com/2013/a/wp-content/uploads/sites/2/2013/10/frame-0082.pdf
Confetti Screen 2

Description

This screen design was mostly just an interesting mistake that came about while I was working on my original screen concept. The basic way the sketch works is that a set of uniform particles is placed at random locations in the window. Then, a repulsive force between the particles gradually pushes them apart. The particles draw their own trails as paths, and the stroke outliner helper function turns those paths into wider strokes. The sketch stops when the user presses ‘d’ and is recorded when the user presses ‘r’.

Code

Main

import oscP5.*;
import netP5.*;
import processing.pdf.*;

ArrayList<Particle> myParticles;
boolean doneDrawing = false;

int margin;
boolean record = false;

void setup() {
  size(864, 864);
  myParticles = new ArrayList<Particle>();

  margin = 50;

  for (int i=0; i<900; i++) {
    float rx = random(margin, width-margin);
    float ry = random(margin, height-margin);
    myParticles.add( new Particle(rx, ry));
  }
  smooth();
}

void mousePressed() {
  noLoop();
}
void mouseReleased() {
  loop();
}
void keyPressed() {
  if (key == 'd') {
    doneDrawing = true;
  }
  if (key == 'r') {
    record = true;
  }
}

void draw() {
  if (record) {
    // Note that #### will be replaced with the frame number. Fancy!
    beginRecord(PDF, "frame-####.pdf");
  }

  // background (255);
  float gravityForcex = 0;
  float gravityForcey = 0.02;
  float mutualRepulsionAmount = 3.0;

  if (doneDrawing == false) {
    // calculating repulsion and updating particles
    for (int i=0; i<myParticles.size(); i++) {
      Particle ithParticle = myParticles.get(i);

      for (int j=0; j<i; j++) {
        Particle jthParticle = myParticles.get(j);

        float dx = ithParticle.currentPosition.x - jthParticle.currentPosition.x;
        float dy = ithParticle.currentPosition.y - jthParticle.currentPosition.y;
        float dh = sqrt(dx*dx + dy*dy);

        if (dh > 1.0) {

          float componentInX = dx/dh;
          float componentInY = dy/dh;
          float proportionToDistanceSquared = 1.0/(dh*dh);

          float repulsionForcex = mutualRepulsionAmount * componentInX * proportionToDistanceSquared;
          float repulsionForcey = mutualRepulsionAmount * componentInY * proportionToDistanceSquared;

          ithParticle.addForce( repulsionForcex, repulsionForcey); // add in forces
          jthParticle.addForce(-repulsionForcex, -repulsionForcey); // add in forces
        }
      }
    }

    // update particles (apply gravity, then integrate)
    for (int i=0; i<myParticles.size(); i++) {
      Particle p = myParticles.get(i);
      p.addForce(gravityForcex, gravityForcey);
      p.update();
    }
  }

  // render all particle trails
  for (int i=0; i<myParticles.size(); i++) {
    myParticles.get(i).render();
  }

  if (record) {
    endRecord();
    record = false;
  }
}

Particle

class Particle {
  //float px;
  //float py;
  float vx;
  float vy;
  PVector currentPosition;
  ArrayList<PVector> trail;
  int trailWidth;
  float damping;
  float mass;
  boolean bLimitVelocities = true;
  boolean bPeriodicBoundaries = false;

  // Constructor for the Particle
  Particle (float x, float y) {
    currentPosition = new PVector(x, y);
    vx = vy = 0;
    damping = 0.96;
    mass = 1.0;
    trail = new ArrayList<PVector>();
    trailWidth = 5;
  }

  // Add a force in. One step of Euler integration.
  void addForce (float fx, float fy) {
    float ax = fx / mass;
    float ay = fy / mass;
    vx += ax;
    vy += ay;
  }

  // Update the position. Another step of Euler integration.
  void update() {
    vx *= damping;
    vy *= damping;
    limitVelocities();
    handleBoundaries();
    currentPosition.x += vx;
    currentPosition.y += vy;
    PVector logPosition = new PVector(currentPosition.x, currentPosition.y);
    trail.add(logPosition);
    println(trail.size());
  }

  void limitVelocities(){
    if (bLimitVelocities){
      float speed = sqrt(vx*vx + vy*vy);
      float maxSpeed = 10;
      if (speed > maxSpeed){
        vx *= maxSpeed/speed;
        vy *= maxSpeed/speed;
      }
    }
  }

  void handleBoundaries() {
    // wraparound
    if (bPeriodicBoundaries) {
      if (currentPosition.x > width - margin ) currentPosition.x -= width;
      if (currentPosition.x < margin     ) currentPosition.x += width;
      if (currentPosition.y > height - margin) currentPosition.y -= height;
      if (currentPosition.y < margin     ) currentPosition.y += height;
    }
    // bounce
    else {
      if (currentPosition.x > width - margin ) vx = -vx;
      if (currentPosition.x < margin     ) vx = -vx;
      if (currentPosition.y > height - margin) vy = -vy;
      if (currentPosition.y < margin     ) vy = -vy;
    }
  }

 /* I want my particles to draw their trails but can't figure out how. Thoughts? */
  void render() {
    drawStrokeOutline(trail, trailWidth);
  }
}

Simple Stroke Outliner

void drawStrokeOutline(ArrayList<PVector> points, int strokeWidth) {
  noFill();
  stroke(0);
  strokeWeight(1);

  beginShape();
  // iterate over points in array going from front to back, drawing shape
  for (int i=0; i < points.size(); i++) {
    PVector currentPoint = points.get(i);
    vertex(currentPoint.x, currentPoint.y);
  }
  endShape();

  beginShape();
  // then go backwards, and shift all points down by strokeWidth, 
  // continuing the same shape
  int lastIndex = points.size() - 1;
  for (int i=lastIndex; i >=0; i--) {
    PVector currentPoint = points.get(i);
    vertex(currentPoint.x - strokeWidth, currentPoint.y + (strokeWidth*0.5));
  }
  endShape();

  /* delete later
   stroke(0);
   PVector sPoint = points.get(points.size() - 1);
   line(sPoint.x, sPoint.y, sPoint.x - strokeWidth, sPoint.y + (strokeWidth*0.5));
  */

  if (doneDrawing) {  // draw start cap
    stroke(0);
    PVector startPoint = points.get(0);
    line(startPoint.x, startPoint.y, startPoint.x - strokeWidth, startPoint.y + (strokeWidth*0.5));

    // close line by drawing caps
    PVector endPoint = points.get(lastIndex);
    line(endPoint.x, endPoint.y, endPoint.x - strokeWidth, endPoint.y + (strokeWidth*0.5));
  }
}

Looking Outwards: Arduino

Discovery 1: Printer Orchestra

“Printer Orchestra”  was created by Chris Cairns and the team at “is this good?” for the printer manufacturer Brother.

I liked the Printer Orchestra off the bat for its charm. In the about section on Vimeo, the team explains that they were inspired by “Tristram Cary, James Houston, BD594 and other radical tinkerers” and comment that “Making cold stuff warm is fun.” I love the last part of that—I think the orchestra is a huge success in taking mundane, cold pieces of technology and making them warm and expressive. (I also think it’s a good idea to carry forward in this course/in electronic media art in general.)

Discovery 2: BMW Museum Kinetic Sculpture

This kinetic sculpture in the BMW museum was created by the firm ART+COM, a German firm which “design[s] and develop[s] innovative media installations, environments, and architecture.” According to ART+COM, the sculpture visualizes “the process of form-finding in different variations.”

This project’s documentation does not explicitly state that it uses Arduino, but it did come up in a YouTube search for “arduino mediaarttube” and it looks like an Arduino project. Anyway, I enjoyed this project first for its aesthetic and second for its concept. The suspended spheres look like they are floating, and watching them move gracefully and gradually into sync is mesmerizing. I think the project successfully expresses the exploration involved in form-finding.

Given that this project is not highly interactive, I think it’s remarkably engaging. I also appreciate that in this piece, it seems clear that the technology was supporting a larger vision—to create these floating, synchronized spheres—rather than just being an experimental “gizmo”.

Discovery 3: un-melt

“un-melt” is a video created by Tony Round, an architect and filmmaker. The video was created for Gizmodo’s monthly video challenge. The particular challenge he was responding to was to “play with video reversal—backwards playback”.

Round used Arduino in this project to drive a homemade timelapse dolly rig. I liked this project because the video seemed beautiful and magical, showing me a process (un-melting) that I would not normally perceive. I also really enjoyed the cinematography of the piece; it had really beautiful shots. Round’s use of the Arduino to steer his dolly enabled him to take those shots, and I think that this use of Arduino was interesting because it was not all about the Arduino itself; it was about what the Arduino could support.


Light on Water (Lasercut Screen Attempt)

Concept

My concept for the lasercut screen was to have a screen with a pattern of cutouts similar to the pattern of highlights on water/waves.

reference photos of light on water


A trace of a wave image in Illustrator.

The Tale of Many Dead Ends

I did not manage to create the kind of pattern that I wanted to. I tried a bunch of things that didn’t work, then ended up using one of my more interesting mistakes for the screen. These are some of the things that didn’t work:

Creating Ripple Force Field

ripples
I couldn’t manage to turn ripples into a flow field.

One failed approach was to create a flow field (like the one described in Chapter 6 of The Nature of Code), using ripples to determine the strong and weak points of the field. I think this failed because I didn’t spend enough time thinking about how to take circles—graphic ripples—and translate them into a flow field.
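For what it’s worth, here is a rough sketch of one way the translation could work: give each grid cell a vector pointing away from a single ripple center, scaled by a sinusoid of the distance so the field has strong and weak rings like a ripple. All constants are made up.

int cell = 20;   // grid spacing in pixels
PVector center;  // single ripple center

void setup() {
  size(640, 480);
  center = new PVector(width/2, height/2);
}

void draw() {
  background(255);
  stroke(0);
  for (int gx = cell/2; gx < width; gx += cell) {
    for (int gy = cell/2; gy < height; gy += cell) {
      PVector dir = new PVector(gx - center.x, gy - center.y);
      float d = dir.mag();
      dir.normalize();
      float strength = sin(d * 0.05) * cell * 0.4;  // rings of strong/weak flow
      line(gx, gy, gx + dir.x * strength, gy + dir.y * strength);
    }
  }
}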

Note: I also couldn’t find the chapter in The Nature of Code while I was working on this—I refound it while writing this post. It would have been helpful to review while trying to code the ripples.

Playing with Physics

After I figured out that the early universe had patterns similar to light on water, I decided to try to simulate distributed particles with certain masses and gravity forces. I spent a fair amount of time just messing around with repulsion forces, masses, and velocities, trying to see if I could get the particles to attract/repel into the right pattern (with the particles drawing trails behind them).

Things that happened while I played with the constants in my particle simulations.

Clicking to Disturb Particles

My next attempt was to just set up a field of particles, then apply a repulsive force when and where the mouse was clicked (sketched below). I figured I could create my own waves until a pattern I liked emerged, then trace through or around the particles to create my pattern. The first part of this approach worked, but I could not find a good enough approach to tracing.
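For illustration, the click-to-disturb step could look something like this, reusing the addForce() method of the Particle class posted with the confetti screen above. The push amount is a made-up constant, not the value I actually used.

void disturbParticles(float mx, float my) {
  float pushAmount = 500.0;
  for (int i = 0; i < myParticles.size(); i++) {
    Particle p = myParticles.get(i);
    float dx = p.currentPosition.x - mx;
    float dy = p.currentPosition.y - my;
    float dh = max(sqrt(dx*dx + dy*dy), 1.0);  // avoid divide-by-zero
    float f = pushAmount / (dh * dh);          // falls off with distance squared
    p.addForce(f * dx/dh, f * dy/dh);          // push away from the click point
  }
}

void mousePressed() {
  disturbParticles(mouseX, mouseY);
}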

Connect Closest

The first thing I tried was having particles connect to the ones closest to them (a rough sketch follows the screenshot below). This approach was not well thought out, and that became apparent very quickly when I implemented it. Connecting the particles created overlapping geometric shapes that weren’t really in the pattern I wanted, and which would probably have left me with a shredded piece of board rather than a cut-out screen if I had used it with the laser cutter. (Since the approach’s flaws were apparent, I didn’t fix the bugs in it, which explains some of the random-seeming lines in the screenshot below.)

Closest Connect
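A sketch of the connect-closest pass, for reference: brute-force nearest neighbor (O(n²)), assuming the same myParticles list as the confetti code.

void drawClosestConnections() {
  stroke(0);
  for (int i = 0; i < myParticles.size(); i++) {
    Particle a = myParticles.get(i);
    Particle closest = null;
    float bestDist = Float.MAX_VALUE;
    for (int j = 0; j < myParticles.size(); j++) {
      if (i == j) continue;
      Particle b = myParticles.get(j);
      float d = dist(a.currentPosition.x, a.currentPosition.y,
                     b.currentPosition.x, b.currentPosition.y);
      if (d < bestDist) {
        bestDist = d;
        closest = b;
      }
    }
    if (closest != null) {
      line(a.currentPosition.x, a.currentPosition.y,
           closest.currentPosition.x, closest.currentPosition.y);
    }
  }
}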

Blob Tracing

I decided to try tracing the darkest areas of the particles in order to create the forms I wanted. I looked at many different blob tracing libraries, some of which worked and some of which didn’t:

  • BlobDetection – didn’t work
  • openCV blobs – didn’t work (meant for video)
  • dieWaldCV – kind of worked, but didn’t smooth the edges of my blobs enough for my taste. This might have ended up working if I had played with the color/gradients of the blobs themselves.

screenshots of blob-tracing attempts


One approach I didn’t try was making the blob tracing libraries trace the white areas rather than the black dots. That might have gotten me closer to what I wanted, but it also might have just produced ugly blobs.

Code

Code: https://github.com/juliat/lasercut-screen

Side Note: Hopefully Helpful Helper Function

I wrote a really basic helper function that takes in a normal stroke and draws an outlined version of it. This may be useful to others: helper function code

FaceOSC

Defense Mechanisms

My initial idea was to create an onscreen character which takes a neutral/unfriendly expression and exaggerates it, literally reflecting the person’s “prickliness” or unapproachability. When a person smiles, then the character becomes rounded and happier.

concept sketch

In the end, while I worked on the sketch, I modified the concept. It became more of a creature and less of a puppet. The character becomes rounder and more visible when you smile, and pricklier and less visible the longer you frown. It also moves away toward the corner of the window the longer you frown. Altogether, the character reacts to your expressions (or reflects your emotions, depending on how you interpret it) by becoming more or less defensive (reflected in visibility, proximity, and prickliness).

If I had more time to spend on this sketch, I would have experimented with moving it back toward my original concept—adding back facial features and making this more like a puppet.

Code

Github Repo

pricklyFace (main)

import oscP5.*;
OscP5 oscP5;

// our FaceOSC tracked face data
Face face = new Face();
float faceScale = 1; // default - no resizing of face
ArrayList<PVector> faceOutline = new ArrayList<PVector>();
int numPoints = 100;
float initialPrickliness = 0.2;
float prickliness = initialPrickliness;
float maxPrickliness = 0.7;
float minPrickliness = 0;

float closeness = 0.3;
float maxCloseness = 0.7;
float minCloseness = 0.15;

void setup() {
  // default size is 640 by 480
  int defaultWidth = 640;
  int defaultHeight = 480;

  faceScale = 1; // 1 = no resizing

  int realWidth = (int)(defaultWidth * faceScale);
  int realHeight = (int)(defaultHeight * faceScale);
  size(realWidth, realHeight, OPENGL);

  frameRate(10);

  oscP5 = new OscP5(this, 8338);
}

void draw() {  
  background(250);
  noStroke();

  updatePrickliness();

  if (face.found > 0) {
    
    // draw such that the center of the face is at 0,0
    translate(face.posePosition.x*faceScale*closeness, face.posePosition.y*faceScale*closeness);

    // scale things down to the size of the tracked face,
    // adjusted by closeness (pricklier = less close = smaller)
    
    closeness = map(prickliness, minPrickliness, maxPrickliness, maxCloseness, minCloseness);
    scale(face.poseScale*closeness);

    // rotate the drawing based on the orientation of the face
    rotateY (0 - face.poseOrientation.y); 
    rotateX (0 - face.poseOrientation.x); 
    // rotateZ (    face.poseOrientation.z); 

    float fill = map(prickliness, minPrickliness, maxPrickliness, 100, 200);
    fill((int)fill);
    
    // drawEyes();
    // drawMouth();
    // print(face.toString());

    faceOutline = new ArrayList<PVector>();
    getFaceOutlinePoints();
    drawOutline();
    
    /*if (face.isBlinking()) {
      println("BLINKED");
    }

    face.lastEyeHeight = face.eyeLeft;
    face.lastEyebrowHeight = face.eyeRight;
    */
  }
}

// OSC CALLBACK FUNCTIONS

void oscEvent(OscMessage m) {
  face.parseOSC(m);
}

void drawOutline() {
  float x = 0;
  float y = 0;

  if (faceOutline.size() != (numPoints + 1)) {
    getFaceOutlinePoints();
    return;
  }
  else {
    beginShape();
    for (int i=0; i <= numPoints; i++) {
      x = faceOutline.get(i).x;
      y = faceOutline.get(i).y;
      vertex(x, y);
    }  
    endShape();
  }

}

void updatePrickliness() {
  float antiPrickliness = 0;
  int transitionTime = 30000;

  if (!face.isSmiling()) {
    prickliness = constrain(face.timeSinceSmile, 0, transitionTime);
    prickliness = map(prickliness, 0, transitionTime, minPrickliness, maxPrickliness);
  }
  
  antiPrickliness = constrain(face.smilingTime, 0, transitionTime);
  antiPrickliness = -1 * map(antiPrickliness, 0, transitionTime, minPrickliness, maxPrickliness);
  
  prickliness = prickliness + antiPrickliness;
  prickliness = constrain(prickliness, minPrickliness, maxPrickliness);
  if (prickliness < 0) {
    prickliness = 0;
  }
}

void getFaceOutlinePoints() {
  int xCenter = 0;
  int yCenter = 0;
  
  for (int i=0; i <= numPoints; i++) {
    float radius = 30;
  
    // iterate and draw points around circle
    float theta = 0;
    float x;
    float y; 
    float oldRadius = -1;
  
    theta = map(i, 0, numPoints, 0, 2*PI);
  
    if (i%2 == 0) {
      oldRadius = radius;
      radius = radius * random(1+prickliness, 1+(prickliness*2));
    }
  
    x = radius*cos(theta) + xCenter;
    y = radius*sin(theta) + yCenter;
  
    if (i == numPoints) { // last iteration: close the outline with the first point
      PVector firstPoint = faceOutline.get(0);
      PVector circlePoint = new PVector(firstPoint.x, firstPoint.y);
      faceOutline.add(circlePoint);
    } 
    else {
      PVector circlePoint = new PVector(x, y);
      faceOutline.add(circlePoint);
    }
  
    if (oldRadius > 0) {
      radius = oldRadius;
      oldRadius = -1;
    }
  }
}

void drawEyes() {
  int distanceFromCenterOfFace = 14;
  int heightOnFace = -4;
  int eyeWidth = 6;
  int eyeHeight = 4;
  ellipse(-1*distanceFromCenterOfFace, face.eyeLeft * heightOnFace, eyeWidth, eyeHeight);
  ellipse(distanceFromCenterOfFace, face.eyeRight * heightOnFace, eyeWidth, eyeHeight);
}

void drawMouth() {
  float mouthWidth = 30;
  int heightOnFace = 14;
  int mouthHeightFactor = 3;

  float mLeftCornerX = 0;
  float mLeftCornerY = heightOnFace;

  float pointX = mLeftCornerX + ((mouthWidth/2));

  float mouthHeight = face.mouthHeight * mouthHeightFactor;
  ellipse(mLeftCornerX, mLeftCornerY, mouthWidth, mouthHeight);
}

Face class

import oscP5.*;

// a single tracked face from FaceOSC
class Face {

  // num faces found
  int found;

  // pose
  float poseScale;
  PVector posePosition = new PVector();
  PVector poseOrientation = new PVector();

  // gesture
  float mouthHeight, mouthWidth;
  float eyeLeft, eyeRight;
  float eyebrowLeft, eyebrowRight;
  float jaw;
  float nostrils;

  // past
  float lastEyeHeight;
  float lastEyebrowHeight;
  
  boolean wasSmiling = false;
  float startedSmilingTime = 0;
  float smilingTime = 0;
  
  float stoppedSmilingTime = 0;
  float timeSinceSmile = 10000;

  Face() {
  }

  boolean isSmiling() {

    if (mouthIsSmiling()) {
      if (wasSmiling == false) {
        wasSmiling = true;
        startedSmilingTime = millis();
        timeSinceSmile = 0;
      }
      else {
        smilingTime = millis() - startedSmilingTime;
        println("smilingTime: ");
        print(smilingTime);
        println("");
      }
      return true;
    }
    else {
      if (wasSmiling == false) {
        timeSinceSmile = millis() - stoppedSmilingTime;
      }
      else {
        wasSmiling = false;
        stoppedSmilingTime = millis();
        smilingTime = 0;
      }
      return false;
    }
  }
  
  boolean mouthIsSmiling() {
    float minSmileWidth = 15;
    float minSmileHeight = 2;
    return ((mouthWidth > minSmileWidth) && (mouthHeight > minSmileHeight));
  }
  
  boolean isBlinking() {
    float eyeHeight = (face.eyeLeft + face.eyeRight) / 2;
    float eyebrowHeight = (face.eyebrowLeft + face.eyebrowRight) / 2;

    if ((eyeHeight < lastEyeHeight) &&
      (eyebrowHeight > lastEyebrowHeight)) {
      return true;
    }
    return false;
  }

  boolean isSpeaking() {
    int speakingMouthHeightThreshold = 2;
    if (face.mouthHeight > speakingMouthHeightThreshold) {
      return true;
    } 
    else {
      return false;
    }
  }

  // parse an OSC message from FaceOSC
  // returns true if a message was handled
  boolean parseOSC(OscMessage m) {

    if (m.checkAddrPattern("/found")) {
      found = m.get(0).intValue();
      return true;
    }      

    // pose
    else if (m.checkAddrPattern("/pose/scale")) {
      poseScale = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/pose/position")) {
      posePosition.x = m.get(0).floatValue();
      posePosition.y = m.get(1).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/pose/orientation")) {
      poseOrientation.x = m.get(0).floatValue();
      poseOrientation.y = m.get(1).floatValue();
      poseOrientation.z = m.get(2).floatValue();
      return true;
    }

    // gesture
    else if (m.checkAddrPattern("/gesture/mouth/width")) {
      mouthWidth = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/mouth/height")) {
      mouthHeight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eye/left")) {
      eyeLeft = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eye/right")) {
      eyeRight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eyebrow/left")) {
      eyebrowLeft = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eyebrow/right")) {
      eyebrowRight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/jaw")) {
      jaw = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/nostrils")) {
      nostrils = m.get(0).floatValue();
      return true;
    }

    return false;
  }

  // get the current face values as a string (includes end lines)
  String toString() {
    return "found: " + found + "\n"
      + "pose" + "\n"
      + " scale: " + poseScale + "\n"
      + " position: " + posePosition.toString() + "\n"
      + " orientation: " + poseOrientation.toString() + "\n"
      + "gesture" + "\n"
      + " mouth: " + mouthWidth + " " + mouthHeight + "\n"
      + " eye: " + eyeLeft + " " + eyeRight + "\n"
      + " eyebrow: " + eyebrowLeft + " " + eyebrowRight + "\n"
      + " jaw: " + jaw + "\n"
      + " nostrils: " + nostrils + "\n";
  }
};

Being Shushed

shhhhh face

My second idea was to create a character and, to some degree, an environment/game mechanic. When you open your mouth, a small speech bubble appears and begins to grow. However, as soon as you open your mouth, the word “shhhhh” begins to appear all around, and the words cluster around the speech bubble, as though they are squishing it. If you close your mouth, the speech bubble disappears and the face onscreen looks somewhat unhappy. But if you keep your mouth open long enough, the bubble grows and pushes the shhh’es out of the frame. If you succeed, the word “applause” appears all around.

I attempted to implement this idea and got part of the way. I created (as shown in the video above) a speech bubble which grows based on how long you have been “speaking” (crudely measured by how long your mouth has been open). However, I had trouble figuring out how to position the face and speech bubble onscreen so that they wouldn’t overlap awkwardly. I also realized that implementing the “shhh”es that put pressure on the speech bubble (most likely as some sort of particle system) was going to make fully realizing this take a ton more time.

If I had more time to spend on this, I would probably stop drawing the face temporarily and work on the speech bubble’s interaction with a particle system of “shhh”es (a rough sketch of that interaction is below), then come back to the issue of the speaker’s face.
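A rough sketch of how that pressure could work: “shhh” text particles that drift toward the bubble but get pushed off its growing radius. The class name and all constants are invented.

class Shush {
  PVector pos = new PVector(random(width), random(height));

  void update(PVector bubbleCenter, float bubbleRadius) {
    PVector toCenter = PVector.sub(bubbleCenter, pos);
    float d = toCenter.mag();
    toCenter.normalize();
    if (d > bubbleRadius + 10) {
      pos.add(PVector.mult(toCenter, 1.5));  // drift toward the bubble
    } else {
      pos.sub(PVector.mult(toCenter, 3.0));  // get pushed back by the growing bubble
    }
  }

  void render() {
    fill(0);
    text("shhhhh", pos.x, pos.y);
  }
}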

Code

Github Repo

shhhFace

//
// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// this example includes a class to abstract the Face data
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230

import oscP5.*;
OscP5 oscP5;

// our FaceOSC tracked face data
Face face = new Face();
SpeechBubble speechBubble = new SpeechBubble();
float faceScale = 1;

// for additions


void setup() {
  // default size is 640 by 480
  int defaultWidth = 640;
  int defaultHeight = 480;
  
  int realWidth = (int)(defaultWidth * faceScale);
  int realHeight = (int)(defaultHeight * faceScale);
  size(realWidth, realHeight, OPENGL);
  
  frameRate(30);

  oscP5 = new OscP5(this, 8338);
}

void draw() {  
  background(255);
  stroke(0);

  if (face.found > 0) {
    
    // draw such that the center of the face is at 0,0
    translate(face.posePosition.x*faceScale, face.posePosition.y*faceScale);
    
    // scale things down to the size of the tracked face
    // then shrink again by half for convenience
    scale(face.poseScale*0.5);
    
    // rotate the drawing based on the orientation of the face
    rotateY (0 - face.poseOrientation.y); 
    rotateX (0 - face.poseOrientation.x); 
    rotateZ (    face.poseOrientation.z); 
    
    noFill();
    drawEyes();
    drawMouth();
    
    face.isSpeaking();
    int sbX = 7;
    int sbY = -15;
    speechBubble.draw(sbX, sbY);
      
    //}
    
    //drawNose();
    //drawEyebrows();
    print(face.toString());
    
    if (face.isSmiling()) {
      println("SMILING");
    }
    if (face.isBlinking()) {
      println("BLINKED");
    }
    
    face.lastEyeHeight = face.eyeLeft;
    face.lastEyebrowHeight = face.eyeRight;
    println("lastEyeHeight " + face.lastEyeHeight);
    println("lastEyebrowHeight " + face.lastEyebrowHeight);
  }
}

// OSC CALLBACK FUNCTIONS

void oscEvent(OscMessage m) {
  face.parseOSC(m);
}

void drawEyes() {
  int distanceFromCenterOfFace = 20;
  int heightOnFace = -9;
  int eyeWidth = 6;
  int eyeHeight =5;
  ellipse(-1*distanceFromCenterOfFace, face.eyeLeft * heightOnFace, eyeWidth, eyeHeight);
  ellipse(distanceFromCenterOfFace, face.eyeRight * heightOnFace, eyeWidth, eyeHeight);
}
void drawEyebrows() {
  rectMode(CENTER);
  fill(0);
  int distanceFromCenterOfFace = 20;
  int heightOnFace = -5;
  int eyebrowWidth = 23;
  int eyebrowHeight = 2;
  rect(-1*distanceFromCenterOfFace, face.eyebrowLeft * heightOnFace, eyebrowWidth, eyebrowHeight);
  rect(distanceFromCenterOfFace, face.eyebrowRight * heightOnFace, eyebrowWidth, eyebrowHeight);
}
void drawMouth() {
  float mouthWidth = 30;
  int heightOnFace = 14;
  int mouthHeightFactor = 6;
  
  float mLeftCornerX = 0;
  float mLeftCornerY = heightOnFace;
 
  float pointX = mLeftCornerX + ((mouthWidth/2));
  
  float mouthHeight = face.mouthHeight * mouthHeightFactor;
  ellipse(mLeftCornerX, mLeftCornerY, mouthWidth, mouthHeight);
}

void drawNose() {
  int distanceFromCenterOfFace = 5;
  int heightOnFace = -1;
  int nostrilWidth = 4;
  int nostrilHeight = 3;
  ellipse(-1*distanceFromCenterOfFace, face.nostrils * heightOnFace, nostrilWidth, nostrilHeight);
  ellipse(distanceFromCenterOfFace, face.nostrils * heightOnFace, nostrilWidth, nostrilHeight);
}

SpeechBubble

class SpeechBubble {
  float xPos; 
  float yPos; 

  float sbHeight = 150*0.25;
  float sbWidth = 250*0.25;
 
  float initialRadius = (sbHeight/3);
  float radius = initialRadius;
 
  int numPoints = 30;
  // http://math.rice.edu/~pcmi/sphere/degrad.gif
  float extrusionTheta = (5*PI)/6;
  float epsilon = PI/25;
  
  void draw(float xPosition, float yPosition) {
    xPos = xPosition;
    yPos = yPosition;
    
    float timeRadiusFactor = face.totalTime/10000;
    
    radius = radius + timeRadiusFactor;
    
    if (radius < 10) {
      return;
    }
    
    float xCenter = xPos+sbWidth/2 + timeRadiusFactor;
    float yCenter = yPos+sbHeight/2 - (timeRadiusFactor/2);
    
    println("DRAWN");
    beginShape();
    
      // variables for calculating each point
      float x;
      float y;
      float theta;   
      
      // iterate and draw points around circle.
      for (int i = 0; i <= numPoints; i++) {
        
        theta = map(i, 0, numPoints-2, 0, 2*PI);
        // this minus-2 is a hack to make the circle close
        x = radius*cos(theta) + xCenter;
        y = radius*sin(theta) + yCenter;
        
        // check to see if we're at the point in the circle where 
        // we want to draw the part of the speech bubble that sticks out
        if (((theta - epsilon) < extrusionTheta) && 
            ((theta + epsilon) > extrusionTheta)){
             
              float extrusionRadius = PI/25;
              
              float startTheta = extrusionTheta - extrusionRadius;
              float endTheta = extrusionTheta + extrusionRadius;
              
              float startX = radius*cos(startTheta) + xCenter;
              float startY = radius*sin(startTheta) + yCenter;
  
              float endX = radius*cos(endTheta) + xCenter;
              float endY = radius*sin(endTheta) + yCenter;
            
              curveVertex(startX, startY);
              vertex(startX, startY);
              vertex(x - (radius/1.5), y+ (radius/3));
              vertex(endX, endY);
              curveVertex(endX, endY);
        }
        else {
          curveVertex(x, y);
        }
      }
    endShape();
  }
}

Face class

import oscP5.*;

// a single tracked face from FaceOSC
class Face {

  // num faces found
  int found;

  // pose
  float poseScale;
  PVector posePosition = new PVector();
  PVector poseOrientation = new PVector();

  // gesture
  float mouthHeight, mouthWidth;
  float eyeLeft, eyeRight;
  float eyebrowLeft, eyebrowRight;
  float jaw;
  float nostrils;

  // past
  float lastEyeHeight;
  float lastEyebrowHeight;
  boolean wasSpeaking = false;
  float startSpeakingTime = 0;
  float totalTime = 0;
  float stoppedSpeakingTime = 0;

  Face() {
  }

  boolean isSmiling() {
    float minSmileWidth = 15;
    float minSmileHeight = 2;

    if ((mouthWidth > minSmileWidth) &&
      (mouthHeight > minSmileHeight)) {
      return true;
    }
    return false;
  }

  boolean isBlinking() {
    float eyeHeight = (face.eyeLeft + face.eyeRight) / 2;
    float eyebrowHeight = (face.eyebrowLeft + face.eyebrowRight) / 2;

    if ((eyeHeight < lastEyeHeight) &&
      (eyebrowHeight > lastEyebrowHeight)) {
      return true;
    }
    return false;
  }

  boolean isSpeaking() {
    int speakingMouthHeightThreshold = 2;
    /* Debug: 
     println("MOUTHHEIGHT");
     println(face.mouthHeight);
     */
     println("totalTime: ");
     print(totalTime);
     println("");
     
    if (face.mouthHeight > speakingMouthHeightThreshold) {
      if (!wasSpeaking) {
        totalTime = 0;
        startSpeakingTime = millis();
        wasSpeaking = true;
      }
      else {
        totalTime = millis() - startSpeakingTime;
      }
      println("SPEAKING");
      return true;
    } 
    else {
      if (wasSpeaking) {
        println("NOT SPEAKING");
        stoppedSpeakingTime = millis();
        wasSpeaking = false;
        totalTime = 0;
      }
      else {
        totalTime = -1*(millis() - stoppedSpeakingTime);
      }
      return false;
    }
  }

  // parse an OSC message from FaceOSC
  // returns true if a message was handled
  boolean parseOSC(OscMessage m) {

    if (m.checkAddrPattern("/found")) {
      found = m.get(0).intValue();
      return true;
    }      

    // pose
    else if (m.checkAddrPattern("/pose/scale")) {
      poseScale = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/pose/position")) {
      posePosition.x = m.get(0).floatValue();
      posePosition.y = m.get(1).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/pose/orientation")) {
      poseOrientation.x = m.get(0).floatValue();
      poseOrientation.y = m.get(1).floatValue();
      poseOrientation.z = m.get(2).floatValue();
      return true;
    }

    // gesture
    else if (m.checkAddrPattern("/gesture/mouth/width")) {
      mouthWidth = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/mouth/height")) {
      mouthHeight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eye/left")) {
      eyeLeft = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eye/right")) {
      eyeRight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eyebrow/left")) {
      eyebrowLeft = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/eyebrow/right")) {
      eyebrowRight = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/jaw")) {
      jaw = m.get(0).floatValue();
      return true;
    }
    else if (m.checkAddrPattern("/gesture/nostrils")) {
      nostrils = m.get(0).floatValue();
      return true;
    }

    return false;
  }

  // get the current face values as a string (includes end lines)
  String toString() {
    return "found: " + found + "\n"
      + "pose" + "\n"
      + " scale: " + poseScale + "\n"
      + " position: " + posePosition.toString() + "\n"
      + " orientation: " + poseOrientation.toString() + "\n"
      + "gesture" + "\n"
      + " mouth: " + mouthWidth + " " + mouthHeight + "\n"
      + " eye: " + eyeLeft + " " + eyeRight + "\n"
      + " eyebrow: " + eyebrowLeft + " " + eyebrowRight + "\n"
      + " jaw: " + jaw + "\n"
      + " nostrils: " + nostrils + "\n";
  }
};

Other Idea: Feeling Misinterpreted

concept sketch

One of my initial ideas was to create a face/character that would mirror your expressions but be…off. The face itself would be distorted, somewhat ugly, with some features upside-down or asymmetrical. As you looked at the face, it would mirror your expressions somewhat—if you smile, it would smile too, but crookedly, awkwardly.

The concept for this was to create a sort of mirror that evokes the feeling of being misinterpreted, not being able to say the right thing, or express it effectively.

I abandoned this idea because, after initial experimentation, I decided it would be too difficult to accurately mirror a face’s expressions to the point where I could deliberately distort parts of that mirroring.