Interactive Shadow Box

I knew for this project that I wanted to create something that specifically responded to the face. I was interested in how the movement of the face could be translated to control something non-human, such as a butterfly. For this reason I chose to use FaceOSC, as I wanted to utilize gestures such as head tilt and eye openness to control movement. The goal was to create a sort of interactive shadow box, where the image appears still until it detects a face. I would be interested in creating a whole display set of them in the future with other insects and other control gestures. The wings were drawn in Photoshop, created as two separate images which are rotated about the y-axis based on how open your eyes are.

After being frustrated with how jittery the movement appeared, I added a circular buffer that keeps a running average of the most recent readings and uses that average to drive the actual movement of the butterfly. This helped significantly, though it could certainly be refined further. Additionally, I would like to add a cast shadow from the butterfly to add depth.

GIF of some normal blinking:

GIF with really aggressive squinting:

Still image of just the shadow box:

import oscP5.*;
OscP5 oscP5;
int     found; // global variable, indicates if a face is found
PVector poseOrientation = new PVector(); // stores an (x,y,z)
float leftOpen;
float rightOpen;
PImage wingRight;
PImage wingLeft;
PImage bckgrd;
CircularBuffer leftBuff = new CircularBuffer(10);
CircularBuffer rightBuff = new CircularBuffer(10);
void setup() {
  size(800, 800, OPENGL);
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "leftOpen", "/gesture/eye/left");
  oscP5.plug(this, "rightOpen", "/gesture/eye/right");
  wingRight = loadImage("wingr.png");
  wingLeft = loadImage("wingl.png");
  bckgrd = loadImage("background.png");
}
void draw() {
  background(178, 163, 149);
  image(bckgrd, 0, 0, width, height);
  float scl = 250;
  if (found != 0) {
    // feed the smoothing buffers with the latest eye-openness values
    leftBuff.store(leftOpen);
    rightBuff.store(rightOpen);
    // right wing, rotated about the y-axis by the smoothed eye openness
    pushMatrix();
    translate(width/2, height/2, 0);
    rotateZ(poseOrientation.z);
    float rightRotate = filter(rightBuff);
    rotateY(constrain(map(rightRotate, 2.7, 3.7, -PI/2, PI/6), -PI/2+0.1, -0.05));
    image(wingRight, 0, -200, scl, 1.4*scl);
    popMatrix();
    // left wing
    pushMatrix();
    translate(width/2, height/2, 0);
    rotateZ(poseOrientation.z);
    float leftRotate = filter(leftBuff);
    rotateY(constrain(map(leftRotate, 2.7, 3.7, PI/2, -PI/6), 0.05, PI/2-0.1));
    image(wingLeft, 0, -200, -scl, 1.4*scl);
    popMatrix();
  } else {
    // no face detected: the butterfly sits still
    pushMatrix();
    translate(width/2, height/2, 0);
    image(wingRight, 0, -200, scl, 1.4*scl);
    image(wingLeft, 0, -200, -scl, 1.4*scl);
    popMatrix();
  }
  // frame of the shadow box
  fill(37, 34, 27);
  rect(0, 0, width, 30);
  rect(0, 0, 30, height);
  rect(0, height-30, width, 30);
  rect(width-30, 0, 30, height);
}
// Event handlers for receiving FaceOSC data
public void found(int i) { found = i; }
public void poseOrientation(float x, float y, float z) {
  poseOrientation.set(x, y, z);
}
public void leftOpen(float i) { leftOpen = i; }
public void rightOpen(float i) { rightOpen = i; }
public float filter(CircularBuffer buff) {
  if (buff.available() == 0) return 0;
  float filt = 0;
  for (int i = 0; i < buff.available(); i++) {
    filt = filt +[i];
  }
  return filt / buff.available();
}
//CIRCULAR BUFFER CLASS -- keeps past datapoints and calculates average
//to help with smoothing the movement
public class CircularBuffer {
    public float[] data = null;
    private int capacity  = 0;
    private int writePos  = 0;
    private int available = 0;

    public CircularBuffer(int capacity) {
        this.capacity = capacity; = new float[capacity];
    }

    public void reset() {
        this.writePos = 0;
        this.available = 0;
    }

    public int capacity() { return this.capacity; }
    public int available() { return this.available; }

    public int remainingCapacity() {
        return this.capacity - this.available;
    }

    public void store(float element) {
        if (writePos >= capacity) {
            writePos = 0; // wrap around and overwrite the oldest value
        }
        data[writePos] = element;
        writePos++;
        if (available < capacity) available++;
    }
}


This is an interactive installation work by Camille Utterback from 2013 entitled Flourish. It is a series of 7 glass panels, each with 2 layers, and 3 of which are interactive. The combination of colors and textures creates a sense of depth which is heightened by lights that respond to viewers' movements and travel between the panels. I'm inspired by the combination of materials and ideas in this piece. Painting, sculpture, interactivity, time-based media, and glass-work are all being combined to create what I see as a living painting with an incredible sense of depth. It's hard to know without seeing the piece in person, but I wish all of the panels were interactive, though perhaps it is more surprising if only a few are.  I think the image of the tree is a bit cliche, and that the more abstract but still very natural elements of the rest of the panels are much more compelling. The idea of creating interactive paintings that change over time is one that is exciting to me, particularly coming from a painting background myself.




Spectacle is the use of a medium to show off the latest technology, often by large companies and in advertising.

Speculation focuses not on craft, but on the relationship between technology and art-making, often in a way that is meant not to be visually appealing but conceptually interesting.

The project A Hole in Space is one that I think could be argued as both spectacle and speculation, and for that reason I view it as sitting somewhere in the middle. It absolutely has elements of being a spectacle - it is showing off new, grand technology in capturing and broadcasting video in a way that is meant to be technically amazing to the viewer. However, it also has elements of speculation, where it is commenting on growing technology's ability to impart a sense of togetherness and to make the world a little smaller. This is something less about the technically impressive aspect of the project and more about what it is "about" conceptually.

This piece falls very strongly towards technological acceleration in acceleration vs drag, as it promotes a future with increased telecommunication capabilities. Clearly it is more about visibility than invisibility, particularly in that viewers cannot see themselves on the screens, as with a typical video-chat, but can only see the other city. I would argue this piece leans more towards surplus than waste, as it emphasizes the positives of the development of technology. Finally, I think this piece is exclusively in the category of art rather than commerce, as it was done unannounced with no brand, advertisement, or promotion of any kind attached to it.


Rain Ghosts

Interactive: Hover and move your mouse around the screen in the rain.

This is a simple interactive environment based on the idea of being together with someone even though you cannot see them.


Below is a GIF of 2 people interacting with it.


The project is a rainy forest in which the only indication that anyone else is there with you is where the rain is falling. If there is a gap, there must be another person present. After testing it out with a few friends, it has become apparent that not only is this something you must be looking and waiting for, but that you may find yourself seeing gaps or "ghosts" that aren't really there. It attempts to combine anonymity and intimacy through the concept of simply being there with another person. You may or may not know who they are, but you share a space and environment that reacts to you together regardless of how far apart you actually are.
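The core of the gap effect can be sketched roughly like this (hypothetical names and radius; the actual implementation differs): a raindrop is suppressed whenever it falls within some "presence radius" of any participant's cursor, so each person appears only as a dry patch in the rain.

```javascript
// Assumed size of the dry patch around each person (illustrative value).
const GAP_RADIUS = 60;

// A drop at x-position dropX is "shielded" (not drawn) when any
// participant's cursor is horizontally close enough to it.
function isDry(dropX, cursors, radius = GAP_RADIUS) {
  return cursors.some(c => Math.abs(dropX - c.x) < radius);
}

// Two people in the rain: drops vanish over both of them,
// leaving two gaps — the only evidence that anyone is there.
const cursors = [{ x: 100 }, { x: 400 }];
console.log(isDry(110, cursors)); // → true  (inside the first gap)
console.log(isDry(250, cursors)); // → false (rain falls here)
```

In the real sketch each drop would be tested every frame as it falls, so the gaps follow the cursors around the screen.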

Originally, I had many grand ideas for this project. Perhaps a collaborative garden where individuals planted trees and helped care for other people's saplings, or a drawing program where individuals had different "parts" - branches, leaves, and flowers.

In the end, this is what I had time for, and while it is simple, I think it has potential. I would like to expand this first to have the drops impact and make small splashes on the ground and on the tops of the entities in the environment, and to create leaves on the bushes that would react to being "brushed past".


Rain code derived from:

Interactivity based on Shar Stiles' drawing program:


The original idea for this piece was something that represented growth and decay in terms of nature. This evolved into a clock that drew generative artwork - one that changed throughout the day and grew more complex the longer you looked at it. For this, I started off with the code from The Coding Train's Perlin Noise Flow Field as a base template of sorts. The seed for the Perlin noise field is generated based on the day and year. Every second there is a white particle added, every minute a blue, and every hour a red, each traveling at different speeds to show how time "flies by". One can tell how many minutes or hours they have been staring at the clock, either watching or wasting time, by how many particles there are.

To continue this project, there are two things I would like to expand upon. First, there is currently a maximum number of each type of particle, as having an unlimited number results in a continuously lower framerate, and I would like to find a way to remove this limitation. Additionally, I would want to see this displayed so that it is continuously generating - that is, it would show how many seconds, minutes, and hours had passed in the day thus far.

12:04-12:05, with very little time elapsed:

3:13-3:14, with around 60 minutes passed:

8:32-8:33, with way too many hours passed:



Code (it's really ugly I know):

var prevSec;
var millisRolloverTime;
var inc = 0.1;
var scl = 20;
var cols, rows;
var numpart = 0;
var zoff = 0;
var radius = 250;
var particles = [];
var flowfield;
function setup() {
  createCanvas(700, 700);
  cols = floor(width/scl);
  rows = floor(height/scl);
  millisRolloverTime = 0;
  //seconds particles
  for (var i = 0; i < 60; i++) {
    var angle = 3*PI/2 + (TWO_PI/60*i);
    particles[i] = new Particle(radius*cos(angle)+width/2, radius*sin(angle)+height/2);
  }
  //minute particles
  for (var i = 60; i < 120; i++) {
    var angle = 3*PI/2 + (TWO_PI/60*(i-60));
    particles[i] = new Particle(radius*cos(angle)+width/2, radius*sin(angle)+height/2);
    particles[i].type = 1;
  }
  //hour particles
  for (var i = 120; i < 132; i++) {
    var angle = 3*PI/2 + (TWO_PI/12*(i-120));
    particles[i] = new Particle(radius*cos(angle)+width/2, radius*sin(angle)+height/2);
    particles[i].type = 2;
  }
  flowfield = new Array(cols*rows);
}
function draw() {
  //draw the circle of circles
  for (var i = 0; i < 12; i++) {
    var angle = 3*PI/2 + (TWO_PI/12*i);
    stroke(255, 50);
    ellipse(radius*cos(angle)+width/2, radius*sin(angle)+height/2, 30, 30);
  }
  //activate the particle for the current second; reset the next one to the rim
  var nextsec = abs((second()+1)%60);
  var angle = 3*PI/2 + (TWO_PI/60*nextsec);
  particles[second()].active = true;
  particles[nextsec] = new Particle(radius*cos(angle)+width/2, radius*sin(angle)+height/2);
  particles[second()].maxspeed = 5;
  //same for the current minute
  var nextmin = abs((minute()+1)%60);
  var angle = 3*PI/2 + (TWO_PI/60*nextmin);
  particles[60+minute()].active = true;
  particles[nextmin+60] = new Particle(radius*cos(angle)+width/2, radius*sin(angle)+height/2);
  particles[60+minute()].type = 1;
  particles[60+minute()].maxspeed = 3;
  //and the current hour (12 hour particles, so wrap with %12)
  var h = hour()%12;
  particles[120+h].active = true;
  particles[120+h].type = 2;
  particles[120+h].maxspeed = 1.5;
  //recompute the Perlin noise flow field
  var yoff = 0;
  for (var y = 0; y < rows; y++) {
    var xoff = 0;
    for (var x = 0; x < cols; x++) {
      var index = (x+y*cols);
      var angle = noise(xoff, yoff, zoff)*TWO_PI*4;
      var v = p5.Vector.fromAngle(angle);
      flowfield[index] = v;
      xoff += inc;
    }
    yoff += inc;
    zoff += 0.0005;
  }
  //move and draw every active particle
  for (var i = 0; i < particles.length; i++) {
    if (particles[i].active) {
      particles[i].follow(flowfield);
      particles[i].update();
      particles[i].edges();
      particles[i].show();
    }
  }
}
function Particle(startx, starty) { = false;
  this.pos = createVector(startx, starty);
  this.vel = createVector(0, 0);
  this.acc = createVector(0, 0);
  this.maxspeed = 2;
  this.type = 0;
  this.h = 0;
  this.prevPos = this.pos.copy();

  this.update = function() {
    this.vel.add(this.acc);
    this.vel.limit(this.maxspeed);
    this.pos.add(this.vel);
    this.acc.mult(0);
  }

  this.follow = function(vectors) {
    var x = floor(this.pos.x / scl);
    var y = floor(this.pos.y / scl);
    var index = x + y * cols;
    var force = vectors[index];
    this.applyForce(force);
  }

  this.applyForce = function(force) {
    this.acc.add(force);
  } = function() {
    if ( {
      if (this.type == 0) {
        stroke(255); // seconds: white
        var angle = 3 * PI / 2 + (TWO_PI / 60 * (second()));
        point(radius * cos(angle) + width / 2, radius * sin(angle) + height / 2);
      } else if (this.type == 2) {
        stroke(255, 111, 86); // hours: red
        var angle = 3 * PI / 2 + (TWO_PI / 12 * (hour()));
        point(radius * cos(angle) + width / 2, radius * sin(angle) + height / 2);
      } else if (this.type == 1) {
        stroke(86, 255, 238); // minutes: blue
        var angle = 3 * PI / 2 + (TWO_PI / 60 * (minute()));
        point(radius * cos(angle) + width / 2, radius * sin(angle) + height / 2);
      }
      line(this.pos.x, this.pos.y, this.prevPos.x, this.prevPos.y);
      this.updatePrev();
    }
  }

  this.updatePrev = function() {
    this.prevPos.x = this.pos.x;
    this.prevPos.y = this.pos.y;
  }

  this.edges = function() {
    if (this.pos.x > width) {
      this.pos.x = 0;
      this.updatePrev();
    }
    if (this.pos.x < 0) {
      this.pos.x = width;
      this.updatePrev();
    }
    if (this.pos.y > height) {
      this.pos.y = 0;
      this.updatePrev();
    }
    if (this.pos.y < 0) {
      this.pos.y = height;
      this.updatePrev();
    }
  }
}


Politics of Power by automato

Politics of Power is an interactive installation piece that uses different "models" of plugs - Model D, M, and T - to simulate different ideological structures in the politics of networks by using different algorithms to distribute power between the lights which are plugged in. For example, in Model M, power is distributed hierarchically, where the topmost light gets the most power, and the bottom row gets very little. The "monarch" may randomly die, but this does not give any more power to those that were beneath it.

I was intrigued by this project because it played with a dual meaning of the word "power" and was able to convey such a complex issue in a simple and entertaining way. I think this project alludes to a greater link between generative art and sculpture, one that can react to viewers and change its behavior accordingly. To take this piece even further, I think it would be exciting to be able to combine different hierarchical structures and see how they interact with one another - how a monarchy influences a democracy and the vying for power between them.

The artists behind automato are Simone Rebaudengo, Matthieu Cherubini, Saurabh Datta and Lorenzo Romagnoli. Their work often deals with the ethics of technology, and the idea of how technology can be used to reflect human ethics.


I think that categorizing things as either "First Word Art" or "Last Word Art" is difficult, if not impossible, to do except in hindsight. I think every artist strives to make one or the other, but there is no real way to know other than to see what follows, and as such I believe most projects will fall somewhere in the middle.

Particularly the idea of "Last Word Art" is troublesome to me. It seems to require a universal understanding that the best of the best has already been created, and to attempt to refine the idea any further would be ridiculous, but I think that overlooks the fact that the most innovative ideas draw inspiration from the past as part of the process.

Since technology is constantly evolving, one might anticipate that a lot of "First Word Art" can stem from the development of technology. However, just because something is new and different doesn't necessarily mean it's valuable simply because it utilizes a new material or concept (though it certainly can be).


My initial inspiration for the piece was simply thinking about how objects could visually track movement, and because I'm me I wanted to do that in kind of a weird way. The first step was creating the eyes that would be able to track movement, initially using the mouse for testing. After that, I tried a bunch of functions to find a path for the fly that I liked, eventually picking a 3-petaled polar graph "flower" because of the way it looped around 3 of the eyes. Getting the fly to face the direction it was traveling was a little tricky, as that involved calculating the fly's position in the next frame and rotating it towards that. I chose the Double Exponential Ogee function because I wanted the fly to slow down slightly going around corners and speed up when it was traveling in more of a straight line, and the Ogee function had that pattern - fast, slow, fast - so I used it while phase-shifting it a bit.

Overall, I'm pretty pleased with the result, as I came pretty close to the concept I had in mind and I learned a lot about mapping and direction changes in the process. I initially thought about having some of the eyes blink randomly or blink in response to the fly coming too close. However, with the short length of the GIF I chose not to, but it could be something to try in the future. I also think the addition of motion blur to the fly would have made the appearance more smooth.



function renderMyDesign(percent) {
  // here, I set the background
  background(255, 147, 140);
  // coordinates of the fly
  var flyx = 0;
  var flyy = 0;
  var p = map(percent, 0, 1, 0, 3.14);
  // re-map p through the Ogee function so the fly speeds up and
  // slows down once per petal
  if (percent >= 0 && percent <= 0.3333) {
    var frac = map(percent, 0, 0.3333, 0, 1);
    var speed = function_DoubleExponentialOgee(frac, 0.15);
    p = map(speed, 0, 1, 0, PI/3);
  } else if (percent > 0.333 && percent <= 0.666) {
    var frac = map(percent, 0.3333, 0.666, 0, 1);
    var speed = function_DoubleExponentialOgee(frac, 0.15);
    p = map(speed, 0, 1, PI/3, 2*PI/3);
  } else {
    var frac = map(percent, 0.666, 1, 0, 1);
    var speed = function_DoubleExponentialOgee(frac, 0.15);
    p = map(speed, 0, 1, 2*PI/3, PI);
  }
  // 3-petaled polar rose for the fly's path, plus a point slightly
  // ahead of it to find the direction the fly should face
  var r = cos(3*(p+PI/2));
  var nr = cos(3*(p+0.0157+PI/2));
  var jousx = map(r*cos((p+PI/3+PI/2)), -1, 1, 30, 630);
  var jousy = map(r*sin((p+PI/3+PI/2)), -1, 1, 30, 630);
  var nextx = map(nr*cos((p+0.0157 + PI/3+PI/2)), -1, 1, 30, 630);
  var nexty = map(nr*sin((p+0.0157 + PI/3+PI/2)), -1, 1, 30, 630);
  var jx = map(jousx, 30, 630, -1, 1);
  var jy = map(jousy, 30, 630, -1, 1);
  var nx = map(nextx, 30, 630, -1, 1);
  var ny = map(nexty, 30, 630, -1, 1);
  var direction = atan2((ny - jy), (nx - jx)) + PI/2.2;
  flyx = jousx;
  flyy = jousy;
  //grid of eyes, offset on alternating rows
  for (var r = 0; r < 5; r++) {
    for (var c = 0; c < 4; c++) {
      var midx = 0;
      var midy = 0;
      if (r % 2 == 0) {
        midx = 106.66 + c * 213.33;
        midy = 160 * r;
      } else {
        midx = 213.333 * c;
        midy = 160 * r;
      }
      //direction from this eye to the fly
      var fx = map(flyx, 0, width, -1, 1);
      var fy = map(flyy, 0, height, -1, 1);
      var mx = map(midx, 0, width, -1, 1);
      var my = map(midy, 0, height, -1, 1);
      var dir = atan2((fy - my), (fx - mx));
      var amp = 30;
      if (dist(midx, midy, flyx, flyy) <= 30) {
        amp = dist(midx, midy, flyx, flyy);
      }
      //white of eye
      fill(244, 244, 233);
      ellipse(midx, midy, 130, 130);
      //pupil follows the fly; push/pop keeps the translate local to this eye
      push();
      translate(midx + amp * cos(dir), midy + amp * sin(dir));
      //distortion value
      var d = constrain(dist(midx, midy, flyx, flyy), 0, 30);
      var squish = map(d, 0, 30, 1, 0.9);
      stroke(20, 114, 80);
      fill(61, 249, 187);
      ellipse(0, 0, 70 * squish, 70);
      fill(19, 20, 45);
      ellipse(0, 0, 30 * squish, 30);
      pop();
    }
  }
  //the fly itself, rotated to face its direction of travel
  push();
  translate(flyx, flyy);
  rotate(direction);
  fill(19, 20, 45);
  stroke(19, 20, 45);
  ellipse(0, 0, 20, 30);
  ellipse(0, -15, 16, 10);
  line(0, 0, -17, -17);
  line(0, 0, 17, -17);
  line(-17, -17, -20, -15);
  line(17, -17, 20, -15);
  line(0, -10, 17, 3);
  line(0, -10, -17, 3);
  line(17, 3, 20, 6);
  line(-17, 3, -20, 6);
  line(0, 0, 17, 17);
  line(0, 0, -17, 17);
  //eyes of the fly
  fill(255, 89, 0);
  ellipse(6, -17, 6, 8);
  ellipse(-6, -17, 6, 8);
  //translucent wings
  fill(160, 255, 223, 150);
  beginShape();
  vertex(0, -10);
  vertex(-15, 0);
  vertex(-15, 25);
  vertex(-4, 15);
  vertex(0, -10);
  endShape(CLOSE);
  beginShape();
  vertex(0, -10);
  vertex(15, 0);
  vertex(15, 25);
  vertex(4, 15);
  vertex(0, -10);
  endShape(CLOSE);
  pop();
}
// Double Exponential Ogee function ('_a' is the slope)
//(it goes fast, slowwww, fast)
// See
// From:
function function_DoubleExponentialOgee (x, a) {
  functionName = "Double-Exponential Ogee";
  var min_param_a = 0.0 + Number.EPSILON;
  var max_param_a = 1.0 - Number.EPSILON;
  a = constrain(a, min_param_a, max_param_a);
  var y = 0;
  if (x <= 0.5) {
    y = (pow(2.0*x, 1.0-a))/2.0;
  } else {
    y = 1.0 - (pow(2.0*(1.0-x), 1.0-a))/2.0;
  }
  return y;
}


For the praxinoscope I just wanted to make some little bugs crawling around. There are two bugs coded that start from opposite ends and walk along a sine curve. It was a good learning process in making the legs draw properly and for all of the pieces to move together. Pretty simple, but this partially inspired my final GIF design.

function drawArtFrame ( whichFrame ) {
// Draw the artwork for a generic frame of the Praxinoscope,
// given the framenumber (whichFrame) out of nFrames.
// NOTE #1: The "origin" for the frame is in the center of the wedge.
// NOTE #2: Remember that everything will appear upside-down!
//bug one!
var moveY = whichFrame * 17;
var a = 20*cos(whichFrame*0.5 + 50);
ellipse(0+a, 80-moveY, 12, 20);
ellipse(0+a, 70-moveY, 5, 5);
line(0+a, 80-moveY, 10+a, 70-moveY);
line(0+a, 80-moveY, -10+a, 70-moveY);
line(0+a, 75-moveY, 10+a, 90-moveY);
line(0+a, 75-moveY, -10+a, 90-moveY);
line(0+a, 75-moveY, 12+a, 80-moveY);
line(0+a, 75-moveY, -12+a, 80-moveY);
line(0+a, 70-moveY, 5+a, 65-moveY);
line(0+a, 70-moveY, -5+a, 65-moveY);
//bug two!
var a = -20*cos(whichFrame*0.2 + 50);
ellipse(0+a, 60-moveY, 12, 20);
ellipse(0+a, 50-moveY, 5, 5);
line(0+a, 60-moveY, 10+a, 50-moveY);
line(0+a, 60-moveY, -10+a, 50-moveY);
line(0+a, 55-moveY, 10+a, 70-moveY);
line(0+a, 55-moveY, -10+a, 70-moveY);
line(0+a, 55-moveY, 12+a, 60-moveY);
line(0+a, 55-moveY, -12+a, 60-moveY);
line(0+a, 50-moveY, 5+a, 45-moveY);
line(0+a, 50-moveY, -5+a, 45-moveY);
}



1A. Something I've always been fascinated with that exhibits effective complexity is the appearance of the Fibonacci sequence in nature. Not only do (most) flowers have a number of petals which is a Fibonacci number, but the arrangement of the seeds in the centers of flowers is determined by Fibonacci numbers. For example, the flower below has seeds arranged in spirals. There are 55 spirals going to the right, and 34 spirals going to the left, both numbers in the Fibonacci sequence. This is an example of almost total order - while different flowers have different numbers, flowers of the same type are virtually indistinguishable from one another.
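That spiral arrangement can be generated from a single rule, usually formalized as Vogel's model of phyllotaxis: each new seed sits at a radius proportional to √n and is rotated from the previous one by the golden angle, roughly 137.5°. Because the golden angle comes from the golden ratio, the counts of visible left- and right-hand spirals come out as consecutive Fibonacci numbers. A quick sketch in plain JavaScript (illustrative only; the scaling constant is arbitrary):

```javascript
// Golden ratio and the golden angle derived from it (≈ 2.39996 rad ≈ 137.5°).
const PHI = (1 + Math.sqrt(5)) / 2;
const GOLDEN_ANGLE = 2 * Math.PI * (1 - 1 / PHI);

// Position of the n-th seed under Vogel's model: the angle advances by
// the golden angle each seed, while the radius grows like sqrt(n).
function seed(n, c = 4) {
  const theta = n * GOLDEN_ANGLE;
  return {
    x: c * Math.sqrt(n) * Math.cos(theta),
    y: c * Math.sqrt(n) * Math.sin(theta),
  };
}

console.log((GOLDEN_ANGLE * 180 / Math.PI).toFixed(1)); // → "137.5"
```

Plotting a few hundred of these points reproduces the sunflower-head pattern, spirals and all.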








1B. I relate to Galanter's idea of the problem of meaning. While I value creating works that are visually or technically interesting and enjoyable to see, there is a big difference for me between that and a piece that holds a lot of meaning. I struggle with whether the process or the product is more important to me as well as to the viewer, but I think they are equally valuable in different ways. Intent and concept are important to me in making art, but I'm certainly open to "happy accidents" that are part of the process of making generative art.