For our last project, Kate and I both wanted to work with projections. We chose to augment a common, frequently used but often overlooked object: the water cooler. We first discussed how to create a “flooded” water effect, as if digital water were flowing from the tap. We attempted to work with the Box2D physics engine, but ended up writing our own particle system in Processing to generate the waterfall. We added some floating kids to the waterfall to create an unexpected sense of playfulness.

Here’s our video: 

To create the projection mapping, we used the Keystone library for Processing to correct the perspective from the projector throw. In the final documentation, we used Millumin to add further control over the warping of the projections, fitting the waterfall precisely to the water cooler tap and floor level. This allowed us to use bezier curves and segmenting to enhance our projection mapping accuracy.
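Under the hood, a corner-pin warp like Keystone's maps points from the flat sketch into the dragged quad on screen. Here is a minimal sketch of that idea in plain Java, using simple bilinear interpolation between the four corners rather than the true homography Keystone computes (class and method names are mine, not Keystone's):

```java
// Sketch of a corner-pin warp: map a normalized point (u, v) in the
// source image to a point inside an arbitrary quad, by bilinear
// interpolation between the quad's four corners.
public class QuadWarp {
    // corners in order: top-left, top-right, bottom-right, bottom-left
    final double[][] corners;

    public QuadWarp(double[][] corners) {
        this.corners = corners;
    }

    // u, v in [0, 1]; returns {x, y} in screen space
    public double[] map(double u, double v) {
        double[] tl = corners[0], tr = corners[1], br = corners[2], bl = corners[3];
        // interpolate along the top and bottom edges, then between them
        double topX = tl[0] + u * (tr[0] - tl[0]);
        double topY = tl[1] + u * (tr[1] - tl[1]);
        double botX = bl[0] + u * (br[0] - bl[0]);
        double botY = bl[1] + u * (br[1] - bl[1]);
        return new double[] { topX + v * (botX - topX), topY + v * (botY - topY) };
    }

    public static void main(String[] args) {
        // a quad leaning to one side, like a projector throw hitting a wall at an angle
        QuadWarp w = new QuadWarp(new double[][] {
            {100, 0}, {640, 50}, {600, 480}, {0, 400}
        });
        System.out.println(java.util.Arrays.toString(w.map(0.5, 0.5)));
    }
}
```

Bilinear interpolation is the crudest possible version (straight lines stay straight only along the edges); Keystone and Millumin both solve the same correspondence problem with proper perspective math, and Millumin adds the bezier segmenting we used.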

Here’s some code:

Water[] drops = new Water[500];
Mist[] bubbles = new Mist[500];
Ball[] balls = new Ball[200];

int numBalls = 200;
float spring = 0.05;
float gravity = 0.2;
float friction = -.1;

int numFrames = 81;  // The number of frames in the animation
int currentFrame = 0;
PImage[] images = new PImage[numFrames];
//ArrayList mistClouds;

float[] p1 = {237, 0};
float[] p2 = {320, 0};
float[] p3 = {320, 0};
float[] p4 = {320, 0};
float[] p5 = {320, 0};
float[] p6 = {320, 0};
float[] p7 = {320, 0};
float[] p8 = {320, 0};
float[] p9 = {337, 0};

int mouseR = 25;

void setup() {
  size(640, 640);

  //animation1 = new Animation("Witch Flying_2_", 81);
  //animation2 = new Animation("PT_Teddy_", 60);

  //for (int j = 0; j < numFrames; j++) {
  //  String imageName = "Witch Flying_2_" + nf(j, 5) + ".png";
  //  images[j] = loadImage(imageName);
  //}

  // initialize the particle arrays
  for (int i = 0; i < drops.length; i++) {
    drops[i] = new Water();
  }
  for (int i = 0; i < bubbles.length; i++) {
    bubbles[i] = new Mist();
  }
  for (int i = 0; i < numBalls; i++) {
    balls[i] = new Ball();
  }
}

void draw() {
  background(0, 0, 0);

  //currentFrame = (currentFrame+1) % numFrames;  // Use % to cycle through frames
  /*
   int offset = 0;
   for (int x = -100; x < width; x += images[0].width) {
     image(images[(currentFrame+offset) % numFrames], x, -20);
     image(images[(currentFrame+offset) % numFrames], x, height/2);
   }
   */
  //                    draw pool 

  //fill(150, 180, 255);



  //translate(0, height/2);

  //curveVertex(p1[0], p1[1]);
  //curveVertex(p1[0], p1[1]);
  //curveVertex(p2[0], p2[1]);
  //curveVertex(p3[0], p3[1]);
  //curveVertex(p4[0], p4[1]);
  //curveVertex(p5[0], p5[1]);
  //curveVertex(p6[0], p6[1]);
  //curveVertex(p7[0], p7[1]);
  //curveVertex(p8[0], p8[1]);
  //curveVertex(p9[0], p9[1]);
  //curveVertex(p9[0], p9[1]);

  ////ellipse(p1[0], p1[1], 10, 10);
  ////ellipse(p2[0], p2[1], 10, 10);
  ////ellipse(p3[0], p3[1], 10, 10);
  ////ellipse(p4[0], p4[1], 10, 10);
  ////ellipse(p5[0], p5[1], 10, 10);
  ////ellipse(p6[0], p6[1], 10, 10);
  ////ellipse(p7[0], p7[1], 10, 10);
  ////ellipse(p8[0], p8[1], 10, 10);
  ////ellipse(p9[0], p9[1], 10, 10);



  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p2[1] > p1[1]) { //shrink up
  //    p2[1] -= .2;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p3[1] > p1[1]) { //shrink up
  //    p3[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p4[1] > p1[1]) { //shrink up
  //    p4[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p5[1] > p1[1]) { //shrink up
  //    p5[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p6[0] > width/2) { //shrink left
  //    p6[0] -= .5;
  //  }
  //  if (p6[1] > p1[1]) { //shrink up
  //    p6[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p7[0] > width/2) { //shrink left
  //    p7[0] -= .5;
  //  }
  //  if (p7[1] > p1[1]) { //shrink up
  //    p7[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p8[0] > width/2) { //shrink left
  //    p8[0] -= .5;
  //  }
  //  if (p8[1] > p1[1]) { //shrink up
  //    p8[1] -= .5;
  //  }
  //}

  //for (int dot = 0; dot < drops.length; dot++) {
  //  if (p9[0] > width/2+25) { //shrink left
  //    p9[0] -= .5;
  //  }
  //}

  for (int drop = 0; drop < drops.length; drop++) {
    drops[drop].display();  // update and draw each water particle
  }
}

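The Water class itself isn't shown above; the heart of a falling-drop particle system like ours is just gravity plus recycling. Below is a stripped-down sketch in plain Java, with the Processing drawing calls omitted and all names my own, not the actual class from our sketch:

```java
import java.util.Random;

// Minimal falling-drop particle: spawn at the "tap", accelerate under
// gravity, and respawn once it falls past the floor line. The real
// sketch also draws each drop with ellipse(); here we keep just the physics.
public class Drop {
    static final float GRAVITY = 0.2f;   // matches the sketch's gravity constant
    static final Random RNG = new Random();

    float x, y, vx, vy;
    final float tapX, tapY, floorY;

    public Drop(float tapX, float tapY, float floorY) {
        this.tapX = tapX; this.tapY = tapY; this.floorY = floorY;
        respawn();
    }

    void respawn() {
        x = tapX + RNG.nextFloat() * 10 - 5;  // a little horizontal jitter
        y = tapY;
        vx = RNG.nextFloat() * 0.5f - 0.25f;
        vy = 0;
    }

    public void update() {
        vy += GRAVITY;   // gravity accelerates the drop each frame
        x += vx;
        y += vy;
        if (y > floorY) respawn();  // recycle the particle back to the tap
    }
}
```

With a few hundred of these recycled every frame, the stream reads as a continuous waterfall even though it's the same small pool of particles.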

It is very difficult to choose just one tenet of the Critical Engineering Manifesto. Many of the tenets are interrelated, and they feed directly into much of the reading I’ve been doing recently on human-machine entanglement.

Therefore, I pick three tenets, which I believe to be highly interrelated:

1. The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.

2. The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.

9. The Critical Engineer notes that written code expands into social and psychological realms, regulating behaviour between people and the machines they interact with. By understanding this, the Critical Engineer seeks to reconstruct user-constraints and social action through means of digital excavation.

These concepts all hinge on the power and value hierarchy wielded by those who create the “black box” around new technological developments. We have seen, with vivid and brutal clarity, what happens when we depend on a technology and allow it to “regulate our behavior” – remaining within its tightly controlled constructs and not questioning the legitimacy of this dependency. The “echo chamber” around social media, the propagation of fake news, the threats to cybersecurity – all of these relate to our “techno-political literacy.” When we depend on a technology, we render ourselves vulnerable to its exploitation. As the perpetual “forward march of progress” in technology continues, we are challenged to understand new developments as they affect our liberty, communication, and access to information. The more these technological developments remain underneath the “black box” veil, the more we must apply the tenets of the Critical Engineer: “expose its inner workings” and “reconstruct user-constraints and social action through means of digital excavation.”

Lucy Suchman articulates the importance of “Critical Technical Practice, in which attention to the rhetorics and technologies through which a field constructs its research objects becomes an integral part of its research practice” (Suchman, Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition, 2007). As Critical Engineers and Practitioners, we must be self-aware and critical of our own rhetoric surrounding the technology we develop and work with – thus continually unmasking the “black box” and refusing to become the robots of our own design.



The project I’ve chosen is The Architecture of Radio, by Richard Vijgen. I’m very interested in projects which take the site as an important element in the experience. I’m less interested in visualizations or experiences that take place within a closed loop of user-to-device, and more engaged with hybrid works which connect directly to the physical environment / actual time / specific location of the user.

This project is a site-specific iPad app that visualizes what is normally invisible to the naked eye – the network of networks that surrounds us all the time: cell towers, wifi routers, and satellites for navigation, communication, and observation. The project was created using Three.js and the Ionic Framework (for apps), and uses GPS together with the OpenCellID database to find the cell towers within reach.

I believe that this project taps into an important gap in many people’s knowledge – understanding just how intertwined we are, how surveilled we are, and how we depend on this invisible network of information pathways to inhabit our contemporary always-on, always-connected, and always-observed society. As data privacy and cybersecurity become increasingly problematized, educating the public about the “invisible” structures around them, and how to navigate them safely, will become (has already become) paramount.

This sentiment is alluded to in Business Insider’s review, which the artist quotes in the project documentation (and therefore presumably finds important): “Both beautiful and slightly disturbing.”

The physical act of holding up an iPad in public space is a bit ridiculous, though, so the project could consider alternate, more seamless ways of engaging with the information. To this end I would suggest Augmented Reality or Mixed Reality, specifically the HoloLens – though to posit that the HoloLens is a “seamless” or non-intrusive experience is a fallacy. If Magic Leap releases what they claim to be developing, this experience would fit right in.




Claire and I made a trampoline that can dial any phone number, with each digit input by a user’s jumps. The trampoline essentially functions as a keyboard that types the numbers for you. Phones are often used absentmindedly; we may feel tethered to them to stay connected, and we often lead sedentary lives sitting at laptops all day. So here comes TrampolineDial, breaking up the monotony and turning phone calls into an active, fun interaction!

We wrote the code using Arduino. When a user jumps, the littleBits sensor acts as a cursor, typing in the digits that build up the phone number. If we used java.awt.Robot, we could automate the calling, so that the call would be placed once the 10 digits were generated by the jumps.
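The accumulation logic is simple to sketch: collect one digit per detected jump and report a complete number once ten have arrived. Here is a plain-Java sketch (the class and method names are hypothetical; in the real build the digits arrive as keystrokes, and the auto-dial step would use java.awt.Robot):

```java
// Collects one digit per detected jump; reports a complete number
// after 10 digits, at which point the real build would place the call.
public class TrampolineDialer {
    private final StringBuilder digits = new StringBuilder();

    // returns the full phone number once 10 digits have been jumped in,
    // otherwise null
    public String onJump(int digit) {
        if (digit < 0 || digit > 9) throw new IllegalArgumentException("digit must be 0-9");
        digits.append(digit);
        if (digits.length() == 10) {
            String number = digits.toString();
            digits.setLength(0);  // reset for the next caller
            return number;
        }
        return null;
    }
}
```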



Here’s the diagram:


Here’s a video of TrampDial in action! :


For my final project, I think it would be useful to do something using p5.js, and also string parsing. I am not sure what this project would be – maybe some more mocap or a better FaceOSC – but I could use a review and a more in-depth project involving StringLists, matrices, etc. I don’t have a hugely artistic vision for this yet, but I know the technical review will be very beneficial to me. I’m interested in three.js, but that may be far beyond my scope currently.


Mocap is cool. This project was fun just to get my hands on 3D software and also to actually see a mocap setup for the first time. Being my own model was not so great (my ‘performance’ is not very compelling, though I did try to do one of the dances from my generative book – just the foot pattern without much flourishing). Doing this reminds me I need to expand my network in Pittsburgh of performers, dancers, etc. – which I will do.

I didn’t write code for my final output, but I did get Golan’s example code working in Processing with my BVH. Then I moved onto exploring the 3D animation software, Cinema 4D. I’d learned a little of this program about two years ago, so it was great to get back into it a little. I think I’ll try more things with this software now. I know that scripting in Python is possible in Cinema 4D. I didn’t script in this project, but would try this on the second iteration.


The project was fun. My output isn’t thrilling, but I’m glad I got to play with 3D (and remember why I love editing animation/video) and learn about cloners, physics tags (rigid body, collider body, force, friction, etc.), lighting effects, and using the mocap skeleton.




Here’s the video:



So, this was interesting, and an important lesson for me about parsing strings and creating TSVs. I adapted (admittedly minimally) a Mike Bostock bar graph, and plotted the number of rides at each of the 24 hours in a day. I think with a lot more practice, I could come to like JavaScript quite a lot. When you hover over an individual bar, the bar changes color – here shown in the web color “rebeccapurple,” which is a fun name for a color. I’d like to keep working with JavaScript in future projects.

Table allRidesTable;

int ridesPerHour[];
void setup() {

  ridesPerHour = new int[24]; 
  for (int s = 0; s < 24; s++) {
    ridesPerHour[s] = 0; // initialized to zero yo
  }

  allRidesTable = loadTable("HealthyRideRentals2015Q4.csv", "header"); 
  // Trip id,Starttime,Stoptime,Bikeid,Tripduration,From station id,From station name,To station id,To station name,Usertype

  int nRows = allRidesTable.getRowCount(); 
  for (int i = 0; i < nRows; i++) {
    // count each ride in its starting hour; Starttime looks like "10/1/2015 6:23"
    String startTime = allRidesTable.getString(i, "Starttime");
    String clock = startTime.substring(startTime.indexOf(' ') + 1);
    int hour = int(clock.substring(0, clock.indexOf(':')));
    ridesPerHour[hour]++;
  }
}


Healthy Ride

A Day of Pittsburgh Bike Rides
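For the “creating TSVs” half of the lesson: the d3 bar chart just loads a header row plus one hour/count line per hour. Writing that file out can be sketched in plain Java like this (the column names are my choice, not necessarily those from Bostock's example):

```java
// Build the TSV that a d3 bar chart can load: one "hour\tcount" header
// row, then 24 data rows, one per hour of the day.
public class RidesTsv {
    public static String toTsv(int[] ridesPerHour) {
        StringBuilder sb = new StringBuilder("hour\tcount\n");
        for (int h = 0; h < ridesPerHour.length; h++) {
            sb.append(h).append('\t').append(ridesPerHour[h]).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] counts = new int[24];
        counts[8] = 120;   // made-up morning spike
        counts[17] = 160;  // made-up evening spike
        System.out.print(toTsv(counts));
    }
}
```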


Here’s the full Spirit Animals PDF: SpiritAnimals_print.pdf

From the Artist’s Note at the beginning of the book:

“Spirit Animals is a computationally generated book of dances, created with Processing 3, an open-source software for creative coding. The dances can be done solo or in groups, with or without music, and are meant for novices and professionals alike. They aim, though do not promise, to be physically possible.

Spirit Animals draws influence from the Fluxus art movement, John Cage’s compositions, Andy Warhol’s Dance Diagrams, and traditional instructional dance diagrams such as the Fox Trot, Tango, and Lindy Hop. The dance names are derived using a random generator combining an adjective and an animal, producing variations on the “Funky Chicken.”

I hope you’ll use Spirit Animals to dance anywhere, at any time, and with anyone. Why walk, when you can dance?”

Spirit Animals marks a new step for me towards creating participatory computational artworks and experiences. I’ve been reading and thinking a lot about the ideas of participation, interaction, and the moment of encounter. I’m very interested in artworks which create situations for people to interact in some physical manner with themselves, the world, and others around them. I’m reading about relational aesthetics, “emancipated spectatorship,” and the writings of Claire Bishop, Nicolas Bourriaud, Roland Barthes, and Umberto Eco. Conceptually, Spirit Animals is hopefully fun, silly, and has a low barrier to entry for anyone to pick up the book and play. I like that this book/PDF format is easily distributable, and doesn’t require any technology to “perform.” Spirit Animals is also a visual experiment in computationally generated movement patterns. Secretly/not so secretly I have a wild desire to be a choreographer – and have a long history of collaborating with dancers and performers. However, not being a dancer, I’m a bit hopeless at actually creating the choreography, so I enjoy the “computer as collaborator” aspect of the dance creation here. I’m not sure where this goes, but I’m interested in exploring these ideas further.

In terms of the technical aspects of creating this book – let me count the ways… First of all, this is by far the most complex and longest program I’ve ever written. Still being a beginner with coding and Processing, this was actually a mammoth task. I’ve learned a lot, and feel that the biggest technical skill I worked on here was graphical – generating the foot patterns, figuring out the layout that made the most sense, and making those pesky curved arrows go where they should. I definitely didn’t work with Basil.js or InDesign, but did put some of the text pages together in Illustrator. RiTa.js is interesting to me, but will have to be tackled at a later date.

Here are some images from the book:

Here is the code:

// Processing 3.0x code
import processing.pdf.*;

float ax;
float ay;
float bx;
float by;
float cx;
float cy;
float dx;
float dy;

float f;
float g; 

float px;
float py;
float qx;
float qy;

int pageWidth = 72 * 8;
int pageHeight = 72 * 10;

float cellMarginX;
float cellMarginY;

int rightFoot_col = 4;
int rightFoot_row = 7;
int leftFoot_col = 4;
int leftFoot_row = 7;

final int pixelsPerInch = 72;

int nCols = 5;
int nRows = 7;
int nSteps = 7;

float cellSpaceW = pageWidth / nCols;
float cellSpaceH = pageHeight / nRows;

final int gridLineWidth = 3;

IntList usedCells;

PShape leftFoot;
PShape rightFoot;

float footScale = 0.07;
float gridMargin;
float gridWidth;
float gridHeight;
float cellWidth, cellHeight;

float leftFootWidth;
float leftFootHeight;
float rightFootWidth;
float rightFootHeight;

PFont myFont;
PFont myTitles;

int currentCell = (nRows * nCols) /2;

int cellNumber; 

int cellCol = cellNumber % nCols;
int cellRow = cellNumber / nCols;

StringList pageTitles;


void setup() {
  size(612, 792); // 8.5 x 11  pixelsPerInch * 8.5 and PixelsPerInch * 11

  beginRecord(PDF, "spiritanimals174.pdf");
  //PGraphicsPDF pdf = (PGraphicsPDF) g;

  leftFoot = loadShape("foot_L.svg"); 
  rightFoot = loadShape("foot_R.svg"); 

  leftFootWidth = leftFoot.width * footScale;
  leftFootHeight = leftFoot.height * footScale;
  rightFootWidth = rightFoot.width * footScale;
  rightFootHeight = rightFoot.height * footScale;

  // indent the grid by a half inch all around
  gridMargin = pixelsPerInch * 0.90;

  gridWidth = 612 - (gridMargin * 2);
  gridHeight = 792 - (gridMargin * 2);

  cellWidth = gridWidth / nCols;
  cellHeight = gridHeight / nRows;

  myFont = createFont("Helvetica", 15, true); 

  myTitles = createFont("Helvetica", 40);

  usedCells = new IntList();

  createPageTitles();  // build the page titles before draw() reads them
}


void draw() {

  background(255); // white

  // dance some steps!

  for (int i = 0; i < nSteps; i++) {
    drawLeftFootInCell(currentCell, i+1);

    int nextCell = getPossibleNextCell();
    while (usedCells.hasValue(nextCell)) {
      nextCell = getPossibleNextCell();
    }

    drawLineFromCellToCell(currentCell, nextCell, i);

    currentCell = nextCell;

    drawRightFootInCell(currentCell, i+1);   
    nextCell = getPossibleNextCell();
    while (usedCells.hasValue(nextCell)) {
      nextCell = getPossibleNextCell();
    }

    if (i < nSteps - 1) {
      stroke(0, 100, 255, 100);
      drawLineFromCellToCell(currentCell, nextCell, 1);

      currentCell = nextCell;
    }
  }

  text(pageTitles.get(0), 306, 65);

  //textFont(myTitles, 75);
}

void drawLeftFootInCell(int cellNumber, int numberToDraw) {
  int cellCol = cellNumber % nCols;
  int cellRow = cellNumber / nCols;

  float drawLX = (cellCol * cellWidth) + (cellWidth / 2) - (leftFootWidth / 2) + gridMargin;
  float drawLY = (cellRow * cellHeight) + (cellHeight / 2) - (leftFootHeight / 2) + gridMargin;

  shape(leftFoot, drawLX, drawLY, leftFootWidth, leftFootHeight);
  text(numberToDraw, drawLX - 10, drawLY + 40);
}

void drawRightFootInCell(int cellNumber, int numberToDraw) {
  int cellCol = cellNumber % nCols;
  int cellRow = cellNumber / nCols;

  float drawRX = (cellCol * cellWidth) + (cellWidth / 2) - (rightFootWidth / 2) + gridMargin;
  float drawRY = (cellRow * cellHeight) + (cellHeight / 2) - (rightFootHeight / 2) + gridMargin;

  shape(rightFoot, drawRX, drawRY, rightFootWidth, rightFootHeight);  // use the right foot's own dimensions
  text(numberToDraw, drawRX - 10, drawRY + 40);
}

void drawLineFromCellToCell(int fromCell, int toCell, int whichStep) {
  float fromCellX = topOfCellXCoordinate(fromCell); // ax
  float fromCellY = topOfCellYCoordinate(fromCell); // ay
  float toCellX = topOfCellXCoordinate(toCell); // dx
  float toCellY = topOfCellYCoordinate(toCell); //dy

  float fromCellX_qx = lerp(fromCellX, toCellX, 0.3333); // px
  float fromCellY_qy = lerp(fromCellY, toCellY, 0.3333); // py
  float toCellX_px = lerp(fromCellX, toCellX, 0.6666); // qx
  float toCellY_py = lerp(fromCellY, toCellY, 0.6666); // qy

  float tx = toCellX-fromCellX;
  float ty = toCellY-fromCellY;
  float th = sqrt((tx*tx) + (ty*ty));

  f = 0.13; 
  g = 0.11; 
  if (whichStep%2 == 1) {
    f = 0-f;
  }
  if (whichStep%2 == 1) {
    g = 0-g;
  }
  if (th < cellSpaceW) { 
    f = g = 0; 
  } 
  println("Hey: " + whichStep + " " + th + " " + cellSpaceW); 

  ax = fromCellX; 
  ay = fromCellY; 
  bx = fromCellX_qx - f*ty; 
  by = fromCellY_qy + f*tx; 
  cx = toCellX_px - g*ty; 
  cy = toCellY_py + g*tx; 
  dx = toCellX; 
  dy = toCellY; 

  // direction-sensitive offsets to the line coordinates
  float offsetx = 15; 
  float offsety = 30; 
  if (tx > 0) { // then I'm going from Left to Right
    ax += offsetx; 
    dx -= offsetx;
  } else if (tx < 0) { // then I'm going from right to left
    ax -= offsetx; 
    dx += offsetx; 
  } else { // tx == 0
    if (ty > 0) { // going down
      ay += offsety;
      dy -= offsety;
    } else if (ty < 0) { // going up
      ay -= offsety; 
      dy += offsety; 
    } 
  } 

  float sepY = 5; 
  if (ty > 0) { // going down
    ay += sepY;
    dy -= sepY;
  } else if (ty < 0) { // going up
    ay -= sepY; 
    dy += sepY; 
  } 

  boolean bDrawColoredEllipses = false; 
  if (bDrawColoredEllipses) { 
    float eR = 7; 
    noStroke(); //stroke(0,0,0); strokeWeight(1);
    fill(255, 0, 0); // A = red
    ellipse(ax, ay, eR, eR); 
    fill(0, 255, 0); // B = green
    ellipse(bx, by, eR, eR); 
    fill(0, 0, 255); // C = blue
    ellipse(cx, cy, eR, eR); 
    fill(255, 255, 0); // D = yellow
    ellipse(dx, dy, eR, eR); 
  } 

  /* 
   fill(255, 200, 255); // Q = light pink
   ellipse(fromCellX_qx, fromCellY_qy, 10, 10); 
   fill(100, 100, 255); // P = dark purple
   ellipse(toCellX_px, toCellY_py, 10, 10); 
   */ 

  // draw the curve
  noFill(); 
  stroke(255); // white
  strokeWeight(4); // thick
  beginShape(); 
  curveVertex(ax, ay); 
  curveVertex(ax, ay); 
  curveVertex(bx, by); 
  curveVertex(cx, cy); 
  curveVertex(dx, dy); 
  curveVertex(dx, dy); 
  endShape(); 

  fill(0); 
  noStroke(); 
  ellipse(ax, ay, 5, 5); 

  noFill(); 
  stroke(0, 0, 0); // black
  strokeWeight(1); // thin
  beginShape(); 
  curveVertex(ax, ay); 
  curveVertex(ax, ay); 
  curveVertex(bx, by); 
  curveVertex(cx, cy); 
  curveVertex(dx, dy); 
  curveVertex(dx, dy); 
  endShape(); 

  drawArrowhead(cx, cy, dx, dy); 
  //line(fromCellX, fromCellY, toCellX, toCellY);
}

//---------------------------------------------------------
float topOfCellXCoordinate(int cellNumber) { 
  int cellCol = cellNumber % nCols; 
  return (cellCol * cellWidth) + (cellWidth / 2) + gridMargin; 
} 

//---------------------------------------------------------
float topOfCellYCoordinate(int cellNumber) { 
  int cellRow = cellNumber / nCols; 
  return ((cellRow * cellHeight) + (cellHeight / 2)) + gridMargin; 
} 

//---------------------------------------------------------
int getPossibleNextCell() { 
  // both moveX and moveY cannot be zero
  int cellCol = currentCell % nCols; 
  int cellRow = currentCell / nCols; 

  int moveX = (round(random(-2, 2))); 
  while ((moveX + cellCol >= nCols) | (moveX + cellCol < 0)) { 
    moveX = (round(random(-2, 2))); 
  } 

  int moveY = (round(random(-2, 2))); 
  while ((moveY + cellRow >= nRows - 1) | (moveY + cellRow < 0)) {
    moveY = (round(random(-2, 2)));
  }

  if (moveX == 0 && moveY == 0) {
    // arbitrary
    if (cellCol <= 5) {
      moveX = 3;
    } else {
      moveX -= 2;
    }
  }

  cellCol = cellCol + moveX;
  cellRow = cellRow + moveY;

  return cellRow * nCols + cellCol;
}


void drawArrowhead(float fromCellX, float fromCellY, float fromCellX_halfway, float fromCellY_halfway) {
  float hx = fromCellX_halfway - fromCellX; 
  float hy = fromCellY_halfway - fromCellY; 

  float len = dist(fromCellX, fromCellY, fromCellX_halfway, fromCellY_halfway);
  float dh = sqrt(hx*hx + hy*hy); //same!
  float ang = atan2(hy, hx); // hy first!! just is. 
  if (abs(hx) < 0.0001) {
    ang = 0-ang;
  }

  float arrowSize = constrain(len/4, 10, 20); 

  pushMatrix();  // isolate the transform so the two strokes don't compound
  translate(fromCellX_halfway, fromCellY_halfway);
  rotate(ang + radians(70)); 
  line(0, 0, 0, arrowSize); 
  popMatrix();

  pushMatrix();
  translate(fromCellX_halfway, fromCellY_halfway);
  rotate(ang - radians(70)); 
  line(0, 0, 0, -arrowSize); 
  popMatrix();
}

void drawDanceFloor() {
  stroke(255, 255, 255, 30); 

  // draw horizontal lines
  // number of lines to draw is (nRows + 1)

  // set initial draw point
  float drawX = gridMargin;
  float drawY = gridMargin;

  for (int row = 0; row <= (nRows + 1); row++) {
    line(drawX, drawY, drawX + gridWidth, drawY);
    // bump drawY down to next row
    drawY += cellHeight;
  }

  // draw vertical lines

  // set initial draw point
  drawX = gridMargin;
  drawY = gridMargin;

  for (int col = 0; col <= (nCols + 1); col++) {
    line(drawX, drawY, drawX, drawY + gridHeight);
    // bump drawX over to next column
    drawX += cellWidth;
  }
}
// animals and adjectives collected at random from with 7 numbers generated by integer generator
void createPageTitles() {
  StringList myAnimals = new StringList(new String[]{"cougar", "elk", "skunk", "tapir", "whale", "wolf", "newt"});
  StringList myAdjectives = new StringList(new String[]{"vaulting", "sparing", "one-eyed", "gleaming", "doctrinal", "designing", "brimstone"});

  pageTitles = new StringList();
  for (int k = 0; k < nSteps; k++) {
    String animal;
    String adjective;
    if (myAnimals.size() == 1) {
      animal = myAnimals.get(0);
      adjective = myAdjectives.get(0);
    } else {
      int randomAnimal = (int)random(myAnimals.size());
      animal = myAnimals.get(randomAnimal);
      int randomAdjective = (int)random(myAdjectives.size());
      adjective = myAdjectives.get(randomAdjective);
    }

    pageTitles.append(adjective + " " + animal);
  }
}





I used Rebecca Fiebrink’s Machine Learning tool, the Wekinator, to connect FaceOSC to Isadora (software used most often in live performance for video cueing / playback and some interactivity). My face is controlling the speed of the video playback, and in a sense becomes a remote control. This project for me was about plumbing – aka connecting different pieces of software to each other, enabling communication. It was also a chance for me to work with Dr. Fiebrink’s Wekinator again – a tool I am very interested in exploring further. I enjoyed this process.
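All of that plumbing rides on OSC over UDP, and a single-float OSC message is a surprisingly small packet: a NUL-padded address string, a NUL-padded type-tag string ",f", and four big-endian float bytes. Here is a sketch of the encoder in plain Java (the address below is just an example, not necessarily the exact one FaceOSC emits):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Encode a one-float OSC message: padded address, padded type-tag
// string ",f", then the float as 4 big-endian bytes. This is the whole
// wire format that FaceOSC -> Wekinator -> Isadora traffic rides on.
public class OscFloat {
    // OSC strings are NUL-terminated and padded to a multiple of 4 bytes
    static byte[] paddedString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int padded = (raw.length / 4 + 1) * 4;  // always at least one NUL
        byte[] out = new byte[padded];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    public static byte[] encode(String address, float value) {
        byte[] addr = paddedString(address);
        byte[] tags = paddedString(",f");
        ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 4);
        buf.put(addr).put(tags).putFloat(value);  // ByteBuffer is big-endian by default
        return buf.array();
    }
}
```

The resulting bytes would then go out over a DatagramSocket to the listener's port (Wekinator listens on 6448 by default, if I remember right).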


This is my plotter drawing, “Cocktail Olive Generator.”


I’m pleased with this drawing, though it is by no means complex. It is very simple code, created with a double for loop and some random parameters to generate a little variety. Honestly, the reason for the simplicity is twofold: one, my time this week was extremely short due to my schedule, so I created something simple that I knew I could execute in a reasonable amount of time (both to write the code and to draw with the plotter). And two, I’m still at the early stages of creating generative drawings, so I stuck within my knowledge base on this particular assignment.

When I wrote the code, I originally intended to create something that looked like bubbles. However, what appeared had the immediate and distinct appearance of cocktail olives. Such simple shapes took on a little bit of humor and playfulness, which was pleasantly surprising. It was a lot of fun to draw these with the plotter, and I would like to try to create a more complex drawing with this machine again. In addition to being fun to watch, it was very satisfying to have something that had been entirely digital take on physical form, using nice paper, a good pen, and a tangible output.

I think the pedagogic value of this assignment is multi-layered, but this connection between the digital and the physical is paramount. Witnessing a machine make marks with a pen has an uncanny effect of seeming both animate and mechanistic at the same time. It also displayed the actual process the computer uses to draw the lines we generate in Processing – breaking them down into separate actions instead of having them appear immediately on screen as they do in the browser (play window). This helped us dissect our code, seeing how we might logically change it to make the plotter process faster or more streamlined. One odd thing was that the plotter actually drew each circle twice before lifting up and drawing the next circle. There must be a reason in my code, but I wasn’t able to determine it.

I think it would be fun to create a drawing that combines the plotter with hand drawing, using the machine as a collaborator to create an artwork.

Below is the code:



Interactive installations. There are too many; it’s hard to choose one to focus on. Do you go with the commercial / advertising projects? Artwork in galleries? Performance? I could pick Hakanai by the French artists Adrien M and Claire B… or the complete advertising coup of the decade, the Museum of Feelings in New York City (created by Ogilvy / Radical Media), or Kyle McDonald’s Sharing Faces… or Social Galaxy by Black Egg (with Kyle McDonald and Lauren McCarthy, and some code by our own Dan Moore), which pulls in the user’s Instagram feed and takes you inside your own images and hashtags, floating among the feeds of other participants, inside an infinite mirror tunnel. It lives inside the Samsung store in Chelsea in NYC. Having participated in it, I can say it is moderately uncomfortable, a little embarrassing, a little thrilling, a little ego trip, and a little 2001: A Space Odyssey.

One of the most well-known interactive installations is Chris Milk’s The Treachery of Sanctuary, which I have seen many people lay claim to and spread around the internet with abandon.

I like all these installations. I wonder how to grow beyond the “interactive installation pose” – aka spreading your arms and waving them around in front of a projection that responds to your (graceful movement) (flailing). Gesture-based interaction is very compelling to me, but it is also a little repetitive. How can we push this method further? What new technology can we use to allow our natural body language to come through?

I have to also shout out Golan’s list of installations that include a large majority of work done by women. I clearly have more research to do.

Image above: Museum of Feelings


This is not so much a “clock” as it is a minute timer. It’s also a video, because I wrote it in Processing, not p5.js.

//triangle made of three points and an angle
PVector anchor; 
PVector trianglePt1;
PVector trianglePt2;
float theta;

PImage img1, img2, maskImage;
//float offset = 0;
//float easing = 0.05;

//class of circles called Spot
//declare the class, the array, and the object
Spot[] sp = new Spot[12];

void setup() {

  size(640, 480);
  anchor = new PVector(width/2, height/2);
  theta = 0;

  //construct the object
  for (int i = 0; i < 12; i++) {
    sp[i] = new Spot((i+1)*width/12-(width/12/2), 40, width/12);
  }

  img1 = loadImage("profile_cutout.png");
}
void draw() {

  background(250, 215, 210, 0);

  image(img1, 170, 288, 300, 300); 
  tint(255, 185);  // display at partial opacity
  fill(30, 125, 175);

  theta += 360/60/frameRate;
  trianglePt1 = calcPointonCircle(radians(theta), 150);
  trianglePt2 = calcPointonCircle(radians(theta+90), 150);
  triangle(anchor.x, anchor.y, trianglePt1.x, trianglePt1.y, trianglePt2.x, trianglePt2.y);

  fill(190, 100, 200);

  for (int i = 0; i < sp.length; i++) {
    ellipse(sp[i].x, sp[i].y, sp[i].diameter, sp[i].diameter);
  }

  for (int j = 0; j < 12; j++) {
    float thetaCoordinates = radians(j*30);
    int radius = 150;
    float x = radius*cos(thetaCoordinates) + width/2;
    float y = radius*sin(thetaCoordinates) + height/2;
    ellipse(x, y, sp[j].diameter/2.65, sp[j].diameter/2.65);
  }

  /* Below is an effort to get the array of Spots sp[i] to move down the screen. I wanted them to move down 
   one per hour, and map this to time, but I didn't figure it out. 
   valid = True 
   for (int i=0; i<12; i++) {
   if (random() 0.5) {
   then sp[i] [sp[i].y]-1
   if sp[i][sp[i].y]==height {
   valid = false;
   */
}
//float h = map(hour() + norm(minute(), 0, 60), 0, 24, 0, TWO_PI * 2) - HALF_PI;

PVector calcPointonCircle(float _theta, float radius) {
  PVector tempPoint = new PVector(anchor.x+radius*cos(_theta), anchor.y+radius*sin(_theta));

  return tempPoint;
}
class Spot {
  float x, y, diameter;

  Spot(float xpos, float ypos, float dia) {
    x = xpos;
    y = ypos;
    diameter = dia;
  }
}

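The mapping I couldn't figure out is, in hindsight, just Processing's map() – a linear interpolation from the hour of the day to a row index. Here's a plain-Java sketch of that idea (rowForHour is my own name, and the row scheme is one possible design, not the sketch's actual behavior):

```java
// Processing's map() is a one-line linear interpolation; with it, the
// "move one spot down per hour" idea becomes:
//   row = map(hour, 0, 24, 0, nRows)
public class ClockMap {
    public static float map(float value, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    // row (0..nRows-1) that a spot should occupy at a given hour of day
    public static int rowForHour(int hour, int nRows) {
        return (int) map(hour, 0, 24, 0, nRows);
    }

    public static void main(String[] args) {
        for (int h = 0; h < 24; h += 6) {
            System.out.println(h + "h -> row " + rowForHour(h, 12));
        }
    }
}
```

The same map() call, with TWO_PI as the output range, is what the commented-out hour-hand formula near the bottom of the sketch is doing.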

I have been inspired by Karolina Sobecka’s project Sniff since I first heard about it four years ago. This project was, as far as I’m concerned, extremely innovative for its time – and it is surprising to look at the documentation and realize it was made seven years ago. It is an example of a site-specific interactive projection in public space, built with computer vision, openFrameworks, and the real-time graphics game engine Unity 3D.

The project took place on a storefront window and a sidewalk in New York City. An IR camera monitored people’s movements, and an animated dog “read” the gestural responses of users to inform its artificial intelligence system, forming a relationship with the person interacting with it (one that could read as friendly, excited, aggressive, or standoffish). I feel this was innovative in many ways – the artists, Karolina Sobecka and James George, wrote custom software to create the project. But more importantly, they explored, in an effective way, the emotional and psychological impacts that can take place within an interaction with a digital “presence” – in this case, a dog. Many questions are raised here. Who is affecting whom? Where is cause and effect? Can you have an embodied experience with a digital work? Can a digital presence summon genuine emotion? These are questions I ask in my own work and would like to explore in my research: creating interactive, digital experiences that explore our relationship to space, examine our connection to each other, and focus on embodiment. And I believe embodied experiences with a direct link to the spatial conditions around us – site-specific work – are the most powerful.

Sobecka’s work was inspired by the essay “The Body We Care For” by Vinciane Despret, which discusses Hans, a horse believed to be able to learn mathematics. In fact, the horse was simply responding to physical and emotional cues from its handlers. She quotes Despret: “Who influences and who is influenced, in this story, are questions that can no longer receive a clear answer. Both, human and horse, are cause and effect of each other’s movements. Both induce and are induced, affect and are affected. Both embody each other’s mind.”

These questions, which I believe are becoming increasingly imperative, are at the root of my research.

Here’s a video of someone interacting with the work.




First Word / Last Word

I believe a very important part of the discussion regarding first word / last word art is accessibility. I feel this is especially relevant to our high-tech moment. Who is granted access to the tools to experiment with this art form? When tools / technology are only accessible to a small group of people, or a limited demographic, the concept of “first word art” has an equally narrow scope and cannot be described as a global statement regarding a new art form. The conversation takes place inside an echo chamber, speaking only to its reflection. Working to expand the availability and knowledge base of high tech tools, we may muddy the waters of “pure experimentation,” and increase the timeline of “first word artworks,” but we vastly increase the scope of the dialogue. New, broad investigations lead us to expressive breakthroughs that would not be heard or possible within a group limited to those who may predictably “claim” to be the innovators of the moment.

Personally, I aim to strike a balance between experimentation that pushes the boundaries of the tools available (for the excitement of this process and the headache that comes with it), and the communicative ability of these experimentations to effectively tell a story, convey a feeling, or express an idea.