My last project for this class shifted many times as I realized the limits of my capabilities and, quite honestly, left me with a trail of newly minted skills instead of a clearly defined project. This blog post is the tale of that trail of wandering…

I started off with the intention of continuing my explorations in tangible computing with Guodu, but we eventually scrapped this plan. Instead, I decided I would try to learn the skills necessary to add ‘location awareness’ to a project I’d worked on in another class. To do this, I would need to learn at least the following:

  • Soldering and making basic circuits [learning resource]
  • Some level of Arduino programming
  • RFID tag hardware and software [learning resource], including reading and writing to and from RFID tags
  • Wireless communication between computers and Arduino
  • How to control physical devices (fans, lights, etc.) with an Arduino [learning resource]

Before I had the Adafruit RFID Shield, I decided to explore another RFID reader: the Phidget 1023 RFID tag reader (borrowed from IDeaTe). After extensive work, though, I found I could only control it via a USB host device. I spent a night exploring a Raspberry Pi approach wherein I would script control of the Phidget reader via Processing on the Pi. I learned how to flash a Pi with a Processing image, but driver issues with the Phidget ultimately doomed this approach.

I then moved back to an Arduino approach, which required learning physical computing basics: how to solder, how to communicate with the Arduino board over serial from the terminal (‘screen tty’), and how to understand baud rates, PWM, digital vs. analog in/out, and more. The true highlight of my Arduino adventure was triggering a physical lamp via a digital RFID trigger:
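The lamp-trigger logic itself is simple: read a tag ID from the reader over serial and flip an output pin if the tag is recognized. Here's a minimal Python sketch of just that decision step; the tag format and IDs are hypothetical, and a real setup would read lines with something like pyserial while the Arduino drives a relay.

```python
# Hypothetical sketch: turning an RFID read into a lamp on/off decision.
# The tag format and AUTHORIZED_TAGS values are made up for illustration;
# a real reader's serial protocol will differ.

AUTHORIZED_TAGS = {"4E00AB1C99"}  # hypothetical tag IDs

def parse_tag(serial_line: bytes) -> str:
    """Strip framing whitespace from one line of reader output."""
    return serial_line.decode("ascii", errors="ignore").strip().upper()

def lamp_state(serial_line: bytes) -> bool:
    """True -> drive the relay pin HIGH (lamp on) for a known tag."""
    return parse_tag(serial_line) in AUTHORIZED_TAGS

print(lamp_state(b"4e00ab1c99\r\n"))  # known tag -> lamp on (True)
print(lamp_state(b"DEADBEEF00\r\n"))  # unknown tag -> lamp stays off (False)
```

In the real project this comparison happens on the Arduino itself, with `digitalWrite()` switching the lamp's relay.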

All that said, at one point I realized that extending my previous project the way I originally intended was impossible in the time given. At that point, I completely shifted gears… This new direction was based on a few inspirations:

  1. Golan’s BlinkyTapes
  2.‘s Physical Computing Trailer
  3. Noodl’s External Hardware and MQTT Guide

My next goal was to control physical hardware through some type of digital control. To achieve this, I used BlinkyTape’s Processing library to render MQTT messages sent from Noodl’s slider modules. See the video below:
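The plumbing here is straightforward: Noodl publishes a slider value over MQTT, and the sketch turns that value into LED colors for the strip. As a rough illustration, here's a Python sketch of the mapping step; the payload format and the blue-to-red fade are my own assumptions, not Noodl's or BlinkyTape's actual behavior.

```python
# Hedged sketch of the Noodl -> MQTT -> LED-strip idea: a slider value
# arriving as an MQTT payload string gets mapped to one RGB color that is
# applied to every LED. Payload format and color ramp are assumptions.

def slider_to_frame(payload: str, num_leds: int = 60) -> list:
    """Map a '0.0'..'1.0' slider payload to a frame of identical RGB tuples."""
    value = min(max(float(payload), 0.0), 1.0)  # clamp to [0, 1]
    # fade from blue (slider low) to red (slider high)
    color = (int(255 * value), 0, int(255 * (1.0 - value)))
    return [color] * num_leds

frame = slider_to_frame("0.5", num_leds=4)
print(frame[0])  # (127, 0, 127)
```

In the actual pipeline, an MQTT client (e.g. paho-mqtt in Python, or Processing's MQTT library) would call something like this from its message callback and push the frame to the BlinkyTape.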


In the end, despite not pulling together a singular cohesive project, I learned a great deal about Arduino, hardware programming, soldering, and other tools for communication between hardware and software systems.


Update (Nov 29, 2016)

I (cambu) continued to work on this project with another classmate, Lucas Ochoa, in Environments Design Studio/Lab (51-265/7). See the below video for an iterated version of the project. The entire demo is working except the environmental controls (fan & projector screen). Guodu and I (cambu) will be moving forward from this second iteration for our final project. See the process blog posts for the other course at the below links (password: Environments).

For our (guodu + cambu) project, we prototyped various examples of tangible media interactions for computer input. We began our project buzzing with inspiration from Hiroshi Ishii’s lecture on Radical Atoms, intrigued to play with some basic ideas from the domain of tangible media. Our early ideas focused on the notion of creating a flip-based “cubular” interaction that would allow the traversal of linear or two-dimensional information spaces.




System Diagram


Code (github)

Note: there seem to be issues with the XML file getting mangled by the WordPress code-embedder plugin; please look at the code on GitHub to see it correctly.

    App Switch Right

    KeyCode::TAB, ModifierFlag::COMMAND_L

    App Switch Left

    KeyCode::TAB, ModifierFlag::COMMAND_L, ModifierFlag::SHIFT_L



Update: I am still going to be exploring tangible computing but am no longer working with Guodu. Instead, I’ll be iterating on a project I worked on in Environments Design Studio (51-265/7). 

For the remainder of the course I am planning to continue my exploration into physical and tangible computing, in partnership with Guodu. This proposal builds on our last project.

The exact direction our investigation will take is unclear, but I’ve been looking at a lot of existing work [see below] and the goal will be to think of ways of computing that are distinct from traditionally “click-y” (mouse and keyboard) interactions.

Other Works



The Story

When I was about 12, I visited the North American Veterinary Conference (NAVC) with my mom in Orlando, Florida. I was walking around the show floor with her when we decided to stop at the Bayer booth. In the middle of the booth was an original Microsoft Surface table — many people were congregating around it to see what it was all about. My mom and I played with it for a while and then she left to enjoy the rest of the conference, but I stayed in the Bayer booth for easily 3 or 4 more hours, becoming good friends with the booth attendants. I think it was the first highly responsive touch interface I’d ever used, and it played on in my dreams for weeks. When I returned home, I tried to get my dad to buy one for our house, but at the time it was ~10-15K to install and you had to be a commercial partner…



60-212: cambu-mocap demo





//include statements for the library
import oscP5.*;
import netP5.*;

img image1; //Constructor for Image
hand leftHand; //the object that will contain all of the leftHand Data 
hand rightHand; //the object that will contain all of the rightHand Data
OscP5 oscP5; //name the oscP5 object
NetAddress serverAddress; //name the addresses you'll send and receive @
PImage imageFill1;

int listeningPort; //server and client ports

float rectX = 200;
float rectY =  200;
float rectWidth = 350;
float rectHeight = 250;

//now set the addresses, etc
void setup() {
  imageFill1 = loadImage("IMG_1087.JPG");
  //if listening and sending are the same then messages will be sent back to this sketch
  listeningPort = 12345;
  oscP5 = new OscP5(this, listeningPort);

  size(1200, 700);
  background(rectX, rectY, rectWidth, rectHeight);

  // create image object
  image1 = new img(rectX, rectY, rectWidth, rectHeight);

  // create hand objects
  leftHand = new hand();
  rightHand = new hand();
}

void oscEvent(OscMessage receivedMessage) {
  String[] message = receivedMessage.addrPattern().split("/");

  //ripping out all joint:hand data
  boolean isHand = message[4].equals("HandLeft") || message[4].equals("HandRight");
  if (message[3].equals("joints") && isHand == true) {

    if (message[4].equals("HandLeft")) {
      float handLeftXPos = receivedMessage.get(0).floatValue();
      float handLeftYPos = receivedMessage.get(1).floatValue();
      String tracked = receivedMessage.get(3).stringValue();

      leftHand.updateXYC(handLeftXPos, handLeftYPos, tracked);
    }

    if (message[4].equals("HandRight")) {
      float handRightXPos = receivedMessage.get(0).floatValue();
      float handRightYPos = receivedMessage.get(1).floatValue();
      String tracked = receivedMessage.get(3).stringValue();

      rightHand.updateXYC(handRightXPos, handRightYPos, tracked);
    }
  }

  //ripping out all hand:closed data
  if (message[3].equals("hands")) {
    String leftOrRight = message[4];
    String grabVar = (receivedMessage.get(0).stringValue() + "/" + leftOrRight);

    if (grabVar.contains("Left")) { //change something about the left hand
      if (grabVar.contains("Open")) {
        leftHand.updateIsClosed(false);
      } else {
        leftHand.updateIsClosed(true);
      }
    }
    if (grabVar.contains("Right")) { //change something about the right hand
      if (grabVar.contains("Open")) {
        rightHand.updateIsClosed(false);
      } else {
        rightHand.updateIsClosed(true);
      }
    }
  }

  //println("rectX" + rectX);
  //println("rectY" + rectY);
  //println("rectWidth" + rectWidth);
  //println("rectHeight" + rectHeight);
}
void hoverCheck() {
  //check if right hand is hovering over the object
  if (rightHand.xPos >= image1.xPosition && rightHand.xPos <= image1.xPosition + image1.rectWidth && rightHand.yPos >= image1.yPosition && rightHand.yPos <= image1.yPosition + image1.rectHeight) {
    //println(rightHand.xPos + " >= " + rectX + " && " + rightHand.xPos + " <= " + (rectX + rectWidth));
    image1.updateHoverState(true);
    if (rightHand.closed == true) {
      println("hoverGrab");
      image1.move(rightHand.xPos, rightHand.yPos);
      toScale();
    }
  } else {
    image1.updateHoverState(false);
  }
}

void toScale() {
  if (leftHand.xPos >= image1.xPosition && leftHand.xPos <= image1.xPosition + image1.rectWidth && leftHand.yPos >= image1.yPosition && leftHand.yPos <= image1.yPosition + image1.rectHeight) {
    //left hand also hovering

    if (leftHand.closed == true) {
      //get distance
      float rightToLeftDist = dist(rightHand.xPos, rightHand.yPos, leftHand.xPos, leftHand.yPos);
      float scaleVar = map(rightToLeftDist, 0, 0.5*image1.rectWidth, 0, 1.5);
      image1.rectWidth = image1.rectWidth*scaleVar;
      image1.rectHeight = image1.rectHeight*scaleVar;
      //scale by some multiplier
    }
  }
}
void draw() {
  fill(255, 255, 255, 100);
  rect(0, 0, width, height);

  image(imageFill1, image1.xPosition, image1.yPosition);
  imageFill1.resize(int(image1.rectWidth), int(image1.rectHeight));

  hoverCheck();
  image1.render();
  leftHand.render();
  rightHand.render();
}
class hand { //class that allows the creation of any hand object

  boolean closed;
  float xPos;
  float yPos;
  color fillColor;
  String trackingConfidence; //is either Tracked, Inferred, or (maybe something else)

  hand() {
    closed = false;
    xPos = 200;
    yPos = 200;
    fillColor = color(200, 200, 200);
  }

  void updateXYC(float newXPos, float newYPos, String trackedState) { // update x position, y position, and tracking confidence

    //direct map
    //xPos = map(newXPos, -1, 1, 0, width);
    //yPos = map(newYPos, 1, -1, 0, height);

    //smoothed map
    float mappedNewXPos = map(newXPos, -1, 1, 0, width);
    xPos = 0.5 * xPos + 0.5 * mappedNewXPos;
    float mappedNewYPos = map(newYPos, 1, -1, 0, height);
    //println(mappedNewXPos + "," + mappedNewYPos);
    yPos = 0.5 * yPos + 0.5 * mappedNewYPos;

    trackingConfidence = trackedState;
  }

  void updateIsClosed(boolean openOrClose) {
    if (openOrClose == true) { // closed
      fillColor = color(230, 50, 100);
      closed = true;
    } else { // open
      fillColor = color(200, 200, 200);
      closed = false;
    }
  }

  void render() {
    fill(fillColor);
    ellipse(xPos, yPos, 25, 25);
  }
}
class img {

  color c;
  float xPosition;
  float yPosition;
  float rectWidth;
  float rectHeight;
  boolean isHovering;

  img(float xPos, float yPos, float rWidth, float rHeight) {
    c = color(200, 200, 200, 0);
    xPosition = xPos;
    yPosition = yPos;
    rectWidth = rWidth;
    rectHeight = rHeight;
    isHovering = false;
  }

  void render() {
    fill(c);
    rect(xPosition, yPosition, rectWidth, rectHeight);
  }

  void updateHoverState(boolean hoverState) {
    isHovering = hoverState;
    if (isHovering) {
      c = color(245, 50, 100, 50);
    } else {
      c = color(245, 50, 100, 0);
    }
  }

  void move(float x, float y) {
    //xPosition = xPosition + deltaX;
    //yPosition = yPosition + deltaY;
    xPosition = x - rectWidth/2;
    yPosition = y - rectHeight/2;
  }
}

After attending Hiroshi Ishii’s lecture in McConomy Auditorium last week, I was totally blown away by the breadth of work that, despite being so varied, all seemed to share a similar spirit: working with computational and digital ideas in a physical manner.

What I found missing from his work in its current form was practicality: a way to imagine how more complex interfaces and experiences would be enabled within his worldview. On the other hand, the above project by the Fluid Interfaces Group uses a series of existing technologies to make their prototypes completely possible today.

That’s not to say I think the interaction of holding a lens to everything and manipulating the physical world with little to no non-visual feedback is necessarily a good idea, though. Having used and made software and interfaces like this, I know they can be exceedingly frustrating and not overly enjoyable.


3. The Critical Engineer deconstructs and incites suspicion of rich user experiences.
from CE

“Any sufficiently advanced technology is indistinguishable from magic.” is the third of Arthur C. Clarke’s three laws. In many cases, the work of artists, designers, and others who deliver ‘human experiences’ uses technology to create this ‘rich user experience’ magic. Of course, those doing this art & design work are often their own engineers, doing a type of engineering themselves (even if they’re using art-engineering toolkits like openFrameworks or Processing). If by extension we then say they should adopt the mindset of Critical Engineering, the point at which the show is over and the trick can be ‘revealed’ is one of contention.

I think the very existence of the open source software movement in conjunction with GitHub has shown that many artist-engineers are freely willing to share what they make and how they do it. Even companies like Disney and Microsoft reveal a great many of their tricks through their large research organizations which publish findings regularly.


click (on image) for interactive version

For this project, I decided to analyze the number of concurrent bicyclists using the EasyRide system at any one moment in time. To visualize this, I used Tom May’s Day/Hour Heatmap.


Table allTimes;
IntDict grid; //thanks to gautam for the idea of an intdict
String gridKey;

//"this is about you having a car crash with D3" ~Golan 

void setup() {
  // change this if you add a new file
  int dayOfMonthStarting = 7;
  grid = new IntDict();

  //allTimes = loadTable("startStopTimes_sep19to25.csv", "header");
  allTimes = loadTable("startStopTimes_aug10to16.csv", "header");
  //header is Starttime, Stoptime

  int numRows = allTimes.getRowCount();
  for (int i = 0; i < numRows; i++) {
    TableRow curRow = allTimes.getRow(i);
    //M/D/YEAR 24HR:60MIN

    //PARAM ON START HOUR
    String startTime = curRow.getString("Starttime");
    String Str = startTime;
    int startChar = Str.lastIndexOf(' ');
    int endChar = Str.lastIndexOf(':');
    int startHourInt = Integer.parseInt(startTime.substring(startChar + 1, endChar));

    //PARAM ON END HOUR
    String stopTime = curRow.getString("Stoptime"); //9/19/2015 0:01
    String StrR = stopTime;
    int startCharR = StrR.lastIndexOf(' ');
    int endCharR = StrR.lastIndexOf(':');
    int stopHourInt = Integer.parseInt(stopTime.substring(startCharR + 1, endCharR));

    //PARAM ON DAY
    int curDay = Integer.parseInt(startTime.substring(2, 4)) - (dayOfMonthStarting - 1); //1-7
    println("-->> " + startTime + " to " + stopTime);
    //println("Place this in day: " + curDay + ", with an hour range of: "); 
    //println("start hour: " + startHourInt);
    //println("stop hour: " + stopHourInt);

    int rideDur;

    if (startHourInt - stopHourInt == 0) {
      //place one hour of usage at the startHourInt location
      rideDur = 1;
    } else {
      rideDur = stopHourInt - startHourInt + 1;
    }
    startHourInt = startHourInt + 1;
    gridKey = "D" + curDay + "H" + startHourInt;
    println(gridKey + " -> " + rideDur);

    if (rideDur == 1) { //only incrementing or making a single hour change
      keyCreate(gridKey);
    } else { //ranged creation
      println(rideDur + " @ " + startHourInt);
      for (int n = startHourInt; n <= startHourInt + rideDur; n++) {
        gridKey = "D" + curDay + "H" + n;
        if (n > 24) {
          //do nothing
        } else {
          keyCreate(gridKey);
        }
        println(n + " -> " + gridKey);
      }
    }
  }

  d3_export();
}

void keyCreate(String gridKey) {
  if (grid.hasKey(gridKey) == true) {
    grid.increment(gridKey);
  } else {
    grid.set(gridKey, 1);
  }
}

void d3_export() {
  Table d3_data;
  d3_data = new Table();

  for (int days = 1; days <= 7; days++) {
    for (int hours = 1; hours <= 24; hours++) {
      String keyComb = "D" + days + "H" + hours;
      TableRow newRow = d3_data.addRow();
      newRow.setInt("day", days);
      newRow.setInt("hour", hours);
      if (grid.hasKey(keyComb) == false) {
        newRow.setInt("value", 0);
      } else {
        newRow.setInt("value", grid.get(keyComb));
      }
    }
  }
  saveTable(d3_data, "data/sep7-13.tsv", "tsv");
}



click above to play around with it 

The prompt for this Looking Outwards is especially difficult for me because there are so many great examples out there, and I’m very aware of where/how to find many more. I recently found out the co-creator of Processing and founder of Fathom, Ben Fry, attended CMU for Communication Design and CS.

Hence, to make it easier, I decided to scope in and select something from Fathom’s incredible portfolio of work. One of their pieces, The Measure of a Nation, stuck out in particular for both its interesting mobile and web interactions (click here for video) and the ease by which it allows the comparison of complex information.

The piece isn’t immediately ‘understandable,’ and normally that would bother me, but I like that it becomes more comprehensible the longer you play with it. This reminds me of something Dan Boyarski told our C-Studio I class last mini; it went something like this:

‘Have a conversation with the viewer […] provide them the respect to believe they can come to understand your message […] it’s not necessarily a bad thing if your work asks something of the viewer’




For my bots investigation, I looked mainly at visual-computational bots that consume an image and reply with a manipulated version. My favourite of these was @pixelsorter (see above results), but I also tried Img Rays, Img Shredder, and IMG2ASCII. I particularly enjoyed two aspects of these types of bots:

  1. You send a tweet and know you’ll get a reply within 30–90 seconds. In contrast to how Twitter is normally used (where you don’t know whether or not someone will reply), this is interesting.
  2. At the same time, you don’t know what the reply will be, and the waiting period makes it hard to brute-force an understanding of the algorithm.

Together, this creates an interesting, human-like interaction, a lot like talking to a real person on Twitter. Unfortunately, sometimes the bots didn’t reply, which was very depressing.
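For a sense of what a bot like @pixelsorter does under the hood, here is a toy pixel-sorting sketch in Python: each row of pixels gets reordered by brightness, which smears the image into streaks. Real bots are fancier (they sort thresholded runs, use different sort keys, etc.), so treat this as an illustration of the core idea only.

```python
# Toy pixel sort: reorder each row of an image by pixel brightness.
# An "image" here is just a list of rows of (r, g, b) tuples.

def brightness(px):
    r, g, b = px
    return r + g + b

def pixel_sort_rows(image):
    """Return a new image with every row sorted darkest-to-brightest."""
    return [sorted(row, key=brightness) for row in image]

tiny = [[(255, 0, 0), (10, 10, 10), (200, 200, 200)]]
print(pixel_sort_rows(tiny))
# darkest-to-brightest: [[(10, 10, 10), (255, 0, 0), (200, 200, 200)]]
```

Swapping the sort key (hue, a single channel, etc.) or only sorting runs above a brightness threshold gives the characteristic glitchy looks these bots reply with.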

A few other fun things I came across while exploring:




This small square book tells the story of my summer in Los Angeles in 2014. After sophomore year of high school, I flew from Toronto, ON to live with extended family in LA while working at a small ‘startup’ company. The data to plot these moments of my summer was retrieved from Google’s location tracking database and chosen in a semi-computational manner that involved an imperfect human input system.


Going into this project, I knew I wanted to work with maps and location data. I’d been playing around with Mapbox Studio and I wanted an excuse to really dig into it. The next ingredient was to use my personal location data from Google My Activity — this made it easy for me to export all of my location data as a massive JSON blob. My initial idea revolved around using that data in conjunction with some longitude/latitude math to trace a line through the earth and create a book that had my location(s) and the projected location(s) of a doppelganger version of me in a different city.

I decided against this direction in the end because I wasn’t convinced there was any particular meaning to tracing a direct (or even canted/distorted) line through the planet. Instead, I decided to focus on exploring my own data rather than trying to make artificial juxtapositions. See the image below for more of the project’s process.



For me personally, looking at the book has a lot of meaning and feels very ‘deja vu’-ish because I can viscerally remember being at all of the highlighted locations. It also has a deep ‘uncanny valley’ feeling because, of course, the images are from Google street view and not things I personally captured — but, I can imagine having captured something similar. After all, I was there!

That said, something I didn’t account for when starting with the idea of using my personal location data was that it wouldn’t have the same meaning to other people. All of the psychological triggers that are working on me just aren’t being experienced the same way by other people. If I were to iterate further on this concept, I would create a fully automated pipeline for taking anyone’s location data + selected date range and converting that into their own book. I think then people would be able to feel the same way I did about my book.


Here’s a video of Professor Levin flipping through my book:

Full Book


python node transformer app:

import json
from datetime import datetime as dt

myFile = open("LocationHistory.json")
# myFile = open("testLocHist.json")
js = json.load(myFile)
#js = [2012,2013,2014,2015,2016,2017]

# Vars 
nLocations = len(js["locations"])
deleteIndexes = set()  # must be a set so it can be searched quickly; gautam bose (andrew: gbose) helped me with the logic behind switching this from a list to a set

print("years in main loop: ")
for i in range(nLocations):
    curTimeStamp = js["locations"][i]["timestampMs"]
    ## converting milliseconds to seconds   
    humanDate = dt.fromtimestamp(float(curTimeStamp)/1000.0)
    curYear = humanDate.year
    curMon = humanDate.month 
    curDay =

    #print(str(curMon) + " " + str(curDay))

    if curYear == 2014:  # anything that fails the checks below gets deleted
        # keep these: July 12 through August 18
        if curMon >= 7 and curMon <= 8:
            # print(curYear)
            if curMon == 7 and curDay < 12:
                deleteIndexes.add(i)
            if curMon == 8 and curDay > 18:
                deleteIndexes.add(i)
        else:
            deleteIndexes.add(i)
    else:
        deleteIndexes.add(i)

    # @Cam added this conditional to remove a brief trip to SF
    if js["locations"][i]["latitudeE7"] < 350000000:
        pass
    else:
        # print('case1')
        deleteIndexes.add(i)

# print("## start delete loop from ##")
# print(deleteIndexes)
# print("w/ LENGTH OF ---> ")
# print(len(deleteIndexes))
# print("______")

# print(nLocations)
# print("&&")
# print(deleteIndexes)

# walk backwards so earlier deletions don't shift the remaining indexes
for x in range(nLocations - 1, -1, -1):
    if x in deleteIndexes:
        del js["locations"][x]

# for y in range(len(js["locations"])):
#     curAgTimeStamp = js["locations"][y]["timestampMs"]
#     # print(curAgTimeStamp)

#     print(dt.fromtimestamp(float(curAgTimeStamp)/1000.0))

open("updated-file.json", "w").write(
    json.dumps(js, sort_keys=True, indent=4, separators=(',', ': '))
)

# print(deleteIndexes)
# print(js["locations"])
# date = {}

# date = datetime.fromtimestamp(int("unixTimeVar"))
# year = date.year
# hour = date.hour

## etc. unpack
# print("a total of" + nLocations + "exist in this JSON File")  # TypeError: Can't convert 'int' object to str implicitly
# print("Locs:")
# print(nLocations)
# print(humanDate.year)
# year = humanDate.year
# print(js["locations"][0]["timestampMs"])
# print(js["locations"][1]["timestampMs"])

javascript web app to call APIs and download images to local directory:

A Simple Map



//Load JSON File
var locations = [] // all data

window.onload = function rdy() {
    newStreetImage(34.1187624, -118.2751063, 640, 640, 90, 235, 10)
}

var i = 0;
function again(jumpVal, newHeading) {
    var longitude = locations[i][0];
    var latitude = locations[i][1];
    var zoom = 12 //locations[0][2]
    var width = 640;
    var height = 640;
    var fov = 90;
    var S = null;
    var pitch = 10;

    if (newHeading == null) {
        var heading = 235;
    } else {
        var heading = newHeading;
    }

    if (S == null) {
      newMapImage(longitude,latitude,zoom, width, height, 0);
      newStreetImage(longitude,latitude, width, height, fov, heading, pitch, 0);
    } else {
      newMapImage(longitude,latitude,zoom, width, height, 1);
      newStreetImage(longitude,latitude, width, height, fov, heading, pitch, 1);
    }

    console.log(longitude + " " + latitude)
    console.log(i + " / " + locations.length)
    //setTimeout(function(){alert(i)}, 500);
    // newMapImage(longitude,latitude,zoom, width, height, save);
    // newStreetImage(longitude,latitude, width, height, fov, heading, pitch, save);
    i = i + jumpVal;
}


function newMapImage(long, lat, zoom, width, height, save) { // map image
  var linkMap = "" + lat + "," + long + "," + zoom + "/" + width + "x" + height + "?access_token=pk.eyJ1Ijoic3VwZXJjZ2VlayIsImEiOiJjaWZxMzV6NnFhb3pjaXVseDQ1dm84Z2RkIn0.T5qZqiB_JanRezs012Zppw";
  document.getElementById('myMap_image').src = linkMap;
  document.getElementById('myMap_link').href = linkMap;
  // document.getElementById('myMap_link').download = 1
  if (save == 1){document.getElementById('myMap_link').click();}
}

function newStreetImage(long, lat, width, height, fov, heading, pitch, save) { // street image
    var link = "" + width + "x" + height + "&location=" + long + "," + lat + "&fov=" + fov + "&heading=" + heading + "&pitch=" + pitch + "&key=AIzaSyAAhrTirgQBQJH88rpw6LpOfp3oMRTMzqg";
    document.getElementById('myStreetView_image').src = link;
    document.getElementById('myStreetView_link').href = link;
    // document.getElementById('plugBoi2').download = 1
    if (save == 1){document.getElementById('myStreetView_link').click();}
}




Of all the games I played at the VR Salon, none had the craft and quality of SuperHyperCube. Of course this isn’t a knock against the lesser funded and more artisan efforts — some of them were thoroughly interesting (and thought provoking) experiences.

But, I do think it’s important to recognize the difference between VR experiences that are fun for a 5-minute demo and ones that I could really imagine spending hours in. SuperHyperCube certainly falls into the latter camp. After playing for only a few moments, I was enamored with the slick graphics and slowly building complexity, yet, it had enough similarities to existing game metaphors that it wasn’t overwhelming. Also, the game was simply really fun — never underestimate fun!

I really hope we continue to see VR experiences that focus on both being fun and being of quality — not everything has to be an artist statement.


Humans look at real faces in the real world every day. But since the advent of smartphone technology, people have been spending increasing amounts of time looking at phone screens while out in public, even around other ‘real’ people. This “issue” has been the subject of a series of artist investigations and conversations within popular culture. I’ve found many of these pieces contain truth, but they often whitewash or don’t delve into the actual reason(s) we’re so interested in looking at our phones. I was interested in tackling that situation in my piece.





To enable the live capturing of my iPhone Screen, I constructed a multi-piece graphics pipeline. The structure is as follows:


The work below is an interesting piece of ‘selective focus’ that Golan pointed out to me when I showed him an iteration of the project. The work is an entire year of New York Times covers where everything except people’s faces is blacked out.

Full Code Embed

// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC

// further adapted by Marisa Lu
// adapted by Kaleb Crawford 
// 2012 Dan Wilcox
// for the IACD Spring 2012 class at the CMU School of Art

// adapted from from Greg Borenstein's 2011 example

import oscP5.*;
OscP5 oscP5;

import gab.opencv.*;
import*; // needed for the Capture camera class
import java.awt.Rectangle;

Capture cam;
//Movie cam;
// num faces found
int found;
float[] rawArray;

//which point is selected
int highlighted;

int liveOrNot = 1 ; //0 for recorded video, 1 for live
int shiftVal = 465;
int xOffset = round(0.1*width);
int yOffset = round(0.1*height);

void setup() {
  size(1280, 730);
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");

  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    //println("There are no cameras available for capture.");
  } else {
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
  }

  cam = new Capture(this, 1024, 576, cameras[0]);
  cam.start();
}

//void keyPressed() {
// if (keyCode == RIGHT) {
// highlighted = (highlighted + 2) % rawArray.length;
// }
// if (keyCode == LEFT) {
// highlighted = (highlighted - 2) % rawArray.length;
// if (highlighted < 0) {
// highlighted = rawArray.length-1;
// }
// }
void draw() {
  //background(255, 255, 255, 50);
  fill(255, 255, 255, 7);
  int border = 5;
  rect(border, border, width - border*2, height - border*2);
  int timeNowSinceStart = millis() % 2555;
  float curColMod = map(timeNowSinceStart, 0, 2555, 0, 255);

  //if (cam.available() == true) {
  //  set(xOffset, yOffset, cam);
  //}

  drawPhoneFrame();

  if (found > 0) {
    chnageColIfBreak(curColMod); //changes stroke color if it has been more than X time since last called
    for (int val = 0; val < rawArray.length - 1; val += 2) {
      ellipse(rawArray[val], rawArray[val+1], 1, 1);
    }
  }
}

void drawPhoneFrame() {
  int phoneWidth = 345;
  int phoneHeight = 675;
  int screenWidth = 315;
  int screenHeight = 570;

  stroke(0, 0, 0);

  rect(width/2 - phoneWidth*0.5, 45-15, phoneWidth, phoneHeight, 45); //phone frame
  rect(width/2 - 0.5*screenWidth, 45+15+15, screenWidth, screenHeight, 15); //phone screen
  rect(width/2 - 0.5*100, 45, 100, 15, 5); //earpiece
  ellipse(width/2, 675, 35, 35); //home
}

float currentMilVal = 0;
float prevMilVal = 0;
float someVal = 285; //threshold in milliseconds between OSC face grabs
int faceIncre;

void chnageColIfBreak(float curColMod) {
  currentMilVal = millis();
  if (currentMilVal - prevMilVal < someVal) {
    //the time between OSC face grabs has not been long enough to change the colour
    //aka, it just relocked a face, didn't switch to a new face
  } else {
    faceIncre = faceIncre + 1; //a new face: advance to the next colour
    int curSelect = faceIncre % 3;
    if (curSelect == 1) { // RED
      stroke(17, 45, 200 * (millis()%1000)/100);
      //stroke(curColMod*1.2, curColMod*0.8, curColMod*0.5);
      //println(curColMod*1.2 + "//" + curColMod + "//" + curColMod);
    } else if (curSelect == 2) { // GREEN
      stroke(32, 165, 50 * (millis()%1000)/100);
      //stroke(curColMod*0.1, curColMod*1.2, curColMod*0.3);
    } else { // curSelect == 0, in this case BLUE
      stroke(120, 78, 245 * (millis()%1000)/100);
      //stroke(curColMod/8, curColMod/2, curColMod*1.65);
      //println(faceIncre + " " + curSelect);
    }
  }

  prevMilVal = currentMilVal;
}
/////////////////////////////////// OSC CALLBACK FUNCTIONS//////////////////////////////////

public void found(int i) {
  //println("found: " + i);
  found = i;
}

public void rawData(float[] raw) {
  //println("raw data saved to rawArray");
  rawArray = raw;
  if (liveOrNot == 0) {
    for (int x = 0; x < rawArray.length; x = x + 2) {
      rawArray[x] = rawArray[x] + shiftVal;
    }
  }
}

My Looking Outwards for this week scratches at the confluence of two ideas I’ve been thinking about for the past few days:

  1. Interaction as Challenge
  2. Blurring the Physical and Digital (sparked by James & Josh’s talk on Wednesday)

With regard to the first idea, it’s very common within the School of Design to talk about what makes something hard/bad, not human-friendly, etc. This isn’t surprising, because almost always, the goal of ‘design’ is to get out of the way and reduce the friction between the human and the built-world/designed-artifact/etc. But, during How People Work (51-271) on September 28th, the idea of making things ‘difficult on purpose’ came up within the context of learning and video game design. To the second point, after listening to James Tichenor and Joshua Walton speak on the need to create ‘richer blurs’ between digital and physical spaces, I’ve been on the lookout for good examples of this in the status quo.

When I first saw Mylène Dreyer’s interactive drawing game on Creative Applications, I felt like it was really hard to understand and would probably confuse users. But I also tried to think about how that could benefit her within the context of ‘Interaction as Challenge.’ It also reminded me of some discussions [1, 2] within the UX community a while back about how Snapchat’s bad user experience is actually to its benefit. Also: double points for cute music and simple graphics — it really makes the game pop!



Table of Contents:
  1. Process
  2. Technical Details
  3. Photo & Video Gallery
  4. Full Code Embed


I kicked off my thought process for this project thinking about this font specimen and how I liked the seemingly arbitrary selection of numbers used. I wanted to create a system that would allow the continual generation of such values and also the spatial placement of such (typographic) numbers.

In contrast to my clock project, where I did a lot of drawing in Sketch App to define my design before beginning to code, with this project, I jumped into Processing almost right away. I did this to ‘embrace the symptom’ and let the limitations of the tool guide my way through the design problem.

To be completely honest, this is a foreign way of working for me. I’m very used to asking the traditional question of “What should be designed?”, not the engineering question of “What is designable?” Playing with this question was an ongoing game of discovery during the project and one I’m learning to better understand. Furthermore, the way I set up my code using the provided template made my program consistently export PDFs throughout my working process, so I have a good amount of iteration history. [see above and below]

Technical Details

From an engineering point of view, this project involved more complexity than I’ve dealt with in the past. To start, I used a dynamic-grid-column drawing system to define the regions of my canvas. Then I used those predefined rectangular shapes as containers, writing numbers inside of them with the createFont() method. Importantly, instead of drawing the type to the canvas, I drew it to a JAVA2D PGraphics ‘offscreen’ canvas. I remixed some of the code for this from this github project. This means all of the numbers are drawn in my custom font, GT Walsheim, directly onto an image object instead of onto the primary canvas. I do this to allow for easy distortion and warping of the pixels and elements without having to convert text to outlines and deal with bezier curves.
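Since the sketch itself only runs inside Processing, here’s a minimal standalone sketch of the same offscreen-raster idea in plain Java2D (the renderer Processing’s JAVA2D mode sits on top of). The BufferedImage stands in for the PGraphics `pg`; the class and method names are illustrative, not from the project.

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OffscreenText {
    // Render digits into an offscreen raster (this BufferedImage plays the
    // role of the sketch's PGraphics 'pg'); nothing is drawn to a window.
    public static BufferedImage renderDigits(String digits, int w, int h) {
        BufferedImage pg = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = pg.createGraphics();
        g.setColor(Color.WHITE);                               // background(255)
        g.fillRect(0, 0, w, h);
        g.setColor(Color.BLACK);                               // pg.fill(0, 0, 0)
        g.setFont(new Font(Font.SANS_SERIF, Font.PLAIN, 40));  // stand-in for GT Walsheim
        g.drawString(digits, 10, h - 20);                      // pg.text(a, xDes, yDes)
        g.dispose();
        return pg;
    }

    // Once the type lives in a raster, its pixels are free to be sampled,
    // thresholded, or warped without touching bezier outlines.
    public static boolean hasInk(BufferedImage img) {
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++)
                if ((img.getRGB(x, y) & 0xFFFFFF) != 0xFFFFFF) return true;
        return false;
    }

    public static void main(String[] args) {
        BufferedImage pg = renderDigits("1729", 200, 80);
        System.out.println("ink present: " + hasInk(pg));
    }
}
```

The payoff of this design choice is the same as in the sketch: the text becomes ordinary pixels the moment it is drawn, so any later distortion pass only has to move colours around, never curve control points.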

The follow-up question was how to get my design back out of raster format and into ‘vector/object’ format, so I could use an exported PDF with the AkiDraw device. I scan the pixels of the raster with the get() method; wherever the colour values register within certain ranges, I ‘etch the drawing’ back out of the pixels by placing objects that will export into the PDF. [see the video below for an example]
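The scan-and-etch step can be sketched as standalone logic (this is an assumed reconstruction in plain Java, not the sketch’s Processing source): walk the raster row by row, unpack each pixel’s channels, and wherever all three fall under the tolerance, record a point that the sketch would re-place as a vector ellipse in the recorded PDF.

```java
import java.util.ArrayList;
import java.util.List;

public class Etcher {
    // Scan a packed-RGB raster row by row, like looping over get(x, y) in
    // Processing, and collect the coordinates of every 'dark ink' pixel.
    public static List<int[]> etch(int[][] rgb, int tolerance) {
        List<int[]> dots = new ArrayList<>();
        for (int y = 0; y < rgb.length; y++) {
            for (int x = 0; x < rgb[y].length; x++) {
                int c = rgb[y][x];
                int r = (c >> 16) & 0xFF;
                int g = (c >> 8) & 0xFF;
                int b = c & 0xFF;
                if (r < tolerance && g < tolerance && b < tolerance) {
                    // in the sketch, this is where ellipse(...) drops a
                    // vector dot into the PDF being recorded
                    dots.add(new int[] {x, y});
                }
            }
        }
        return dots;
    }

    public static void main(String[] args) {
        int[][] img = {
            {0xFFFFFF, 0x000000},   // white, black
            {0x101010, 0xFFFFFF},   // near-black, white
        };
        for (int[] d : etch(img, 150)) {
            System.out.println(d[0] + "," + d[1]); // prints 1,0 then 0,1
        }
    }
}
```

The tolerance of 150 mirrors the value in the code embed below: anything comfortably darker than the white page counts as ink.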

Etching Method Test

Photo & Video Gallery


Full Code Embed

import processing.pdf.*;
boolean bRecordingPDF;
int pdfOutputCount = 0; 
PFont myFont;
Gridmaker newGrid;
PGraphics pg;
int textSize = 40;
float magicYimpactor;
float amount; 
float currentChaos;

void setup() {
  size(612, 792);
  bRecordingPDF = true;
  myFont = createFont("GT-Walsheim-Thin-Trial.otf", textSize);

  newGrid = new Gridmaker();
  pg = createGraphics(width, height, JAVA2D); // create a PGraphics the same size as the main sketch display window
}

void draw() {
  if (bRecordingPDF) {
    background(255); // this should come BEFORE beginRecord()
    beginRecord(PDF, "cambu_" + pdfOutputCount + ".pdf");

    pg.beginDraw(); // start drawing to the PGraphics
    // (grid/number drawing functions elided in the original embed)
    pg.endDraw(); // finish drawing to the PGraphics
    //END -- -- -- CAMBU FUNCTIONS
    image(pg, 0, 0);
    rasterToNotVector(); // reads all of pg and places points/ellipses at certain values of a certain brightness
    endRecord(); // close out the numbered PDF
    pdfOutputCount++;
    bRecordingPDF = false;
  }
}

void keyPressed() {
  //magicYimpactor = mouseX*0.0005;
  magicYimpactor = mouseX*0.05;
  //magicXXX = mouseX;
  //magicXimpactor = mouseY*0.0005;
  //amount = mouseX*0.0005;
  bRecordingPDF = true;
}

void chaosRepresentation() {
  float chaosStart = 1;
  int startX = 0;
  int startY = 0;

  int chaosIndex = 0;
  for (int y = 0; y < newGrid.numberOfRows; y++) { //verticalDivisor, x amount
    startX = 0;
    for (int x = 0; x < newGrid.numberOfCols; x++) { // horizontalDivisor, y amount
      fill((255/newGrid.numberOfCols)*(y/2), (255/newGrid.numberOfRows)*x, 200);
      //rect(startX,startY,newGrid.horizontalDivisor,newGrid.verticalDivisor); //within the domain & range of this rectangle, transform the pixels on pg 
      chaosIndex = chaosIndex + 1;
      currentChaos = chaosStart * chaosIndex;
      charsHere(startX, startY, currentChaos);
      startX = startX + newGrid.horizontalDivisor;
    }
    startY = startY + newGrid.verticalDivisor;
  }
}

void charsHere(int x, int y, float currentChaos) {
  int a = round((x + y)*.5);

  pg.fill(0, 0, 0);

  int xDes = x+(newGrid.horizontalDivisor/16);
  int yDes = y-(newGrid.verticalDivisor/4);

  pg.text(a, xDes, yDes);
  quadrantDestoryer(xDes, yDes, currentChaos); // operates between (startX, startY, newGrid.horizontalDivisor, newGrid.verticalDivisor)
}

void quadrantDestoryer(int xToDes, int yToDes, float currentChaos) {
  float xA = xToDes + 0.6*newGrid.horizontalDivisor - noise(currentChaos, yToDes, xToDes);
  float yA = yToDes - 0.2*newGrid.verticalDivisor;

  pg.fill(255, 235, 250);
  //pg.ellipse(xToDes + 0.5*newGrid.horizontalDivisor * noise(currentChaos, yToDes), yToDes - 0.2*newGrid.verticalDivisor, noise(currentChaos, yToDes)*0.5*currentChaos, 0.05*currentChaos);
  //pg.ellipse(xA, yA, random(0, newGrid.horizontalDivisor)*0.8, noise(50, newGrid.horizontalDivisor)*2);
  //pg.rect(xA-8, yA, xA+ 30, yA + newGrid.verticalDivisor * 0.5);
  //pg.ellipse(xToDes, yToDes, currentChaos*noise(xToDes, yToDes), noise(currentChaos+currentChaos));
}

void rasterToNotVector() {//y down
  for (int y = 0; y < height; y ++) {
    for (int x = 0; x < width; x++) { //x across              
      color cp = get(x, y);
      int b = (int)blue(cp);
      int g = (int)green(cp); 
      int r = (int)red(cp);
      int tolerance = 150;

      float noised = 30;

      if (r < tolerance && g < tolerance && b < tolerance) { 

        float amount = 30;

        float nx = noise(x/noised, y/noised); 
        float ny = noise(magicYimpactor + x/noised, magicYimpactor + y/noised); 

        nx = map(nx, 0, 1, -amount, amount); //cc to Golan for explaining distortion fields.

        ny = map(ny, 0, 1, -amount, amount*magicYimpactor); 

        //line(x, y, x+nx, y+ny);
        fill(34, 78, 240);
        ellipse(x + nx*0.5, y + ny/2, 4, 3);
      }
    }
  }
}

void drawGrid() {
  int i = 0;
  for (int y = 0; y < newGrid.totalHeight; y = y + newGrid.verticalDivisor) { //squares down
    if (i % 2 == 0) {
      fill(140, 140, 140, 80);
    } else {
      fill(240, 240, 240, 80);
    } //if even, else odd
    rect(0, y, newGrid.totalWidth, newGrid.verticalDivisor); // row band (drawing call missing from the truncated embed)
    i++;
  }

  int j = 0;
  for (int x = 0; x < newGrid.totalWidth; x = x + newGrid.horizontalDivisor) { //squares across
    if (j % 2 == 0) {
      fill(140, 140, 140, 80);
    } else {
      fill(240, 240, 240, 80);
    } //if even, else odd
    rect(x, 0, newGrid.horizontalDivisor, newGrid.totalHeight); // column band (drawing call missing from the truncated embed)
    j++;
  }
}
class Gridmaker {
  int totalHeight = height;
  int totalWidth = width;
  int numberOfRows = 12;
  int numberOfCols = 54;
  int verticalDivisor = round(totalHeight/(float)numberOfRows); // cast so round() operates on a float, not an already-truncated int
  int horizontalDivisor = totalWidth/numberOfCols;
}
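One subtlety worth flagging in Gridmaker: the divisors come from integer division, so at the sketch’s 612 px width, 54 columns of 11 px cover only 594 px and the grid leaves an 18 px remainder at the right edge. The arithmetic, checked standalone:

```java
public class GridMath {
    public static void main(String[] args) {
        int totalWidth = 612;    // sketch width, from size(612, 792)
        int numberOfCols = 54;   // from Gridmaker
        int horizontalDivisor = totalWidth / numberOfCols; // integer division truncates 11.33... to 11
        int covered = horizontalDivisor * numberOfCols;
        System.out.println(horizontalDivisor);    // 11
        System.out.println(totalWidth - covered); // 18 px the grid never reaches
    }
}
```

Note that round(totalHeight/numberOfRows) with two int operands rounds a value that has already been truncated; casting one operand to float before dividing is what makes round() meaningful.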


I was quite surprised to see the feedback I received; in general, it was more positive than I expected. My personal feeling when finishing the project was that the idea had potential, but that I hadn’t executed it very well. This notion was reflected in the feedback to a degree, but I expected to hear more of it. If I were doing the project again, I would make sure to make the sub-hour time frame feel more dynamic. I would also make sure it rendered as a gradient.



Cut, May 2015, Leander Herzog

I’m really glad I stumbled across Leander’s website; it’s filled with beautifully executed, leaning-conceptual explorations of colour and data. One of my favourite pieces is his ‘Cut’ project, a 3D spinning rectilinear shape of red and white. It’s completely interactive and renders with a mix of cube-ish small shadows and larger straight-edge shadows.

The form feels like a combination of random and human-defined elements, but going beyond that to say what type of algorithm defined it is hard to do. Even after a little searching, I couldn’t find the artist talking about the work.

In terms of Effective Complexity, I think it falls more on the side of ordered (than chaotic), especially in how it almost feels like a spaceship or structural form, with the ‘randomness’ around the edges where it seems most believable. Take a look at the Star Destroyer below to see how it uses a similar effect to seem more ‘real.’ Also, the shapes reminded me of drawing rectilinear cube-forms in Visualizing (51-121), freshman year.



Question 1A.

I believe much of Andreas Gursky’s photography exhibits effective complexity, especially his work involving mass-arrangements and macro-level overviews. They capture the confusion and overwhelming force of the modern world resultant from a confluence of both individualism/differentiation and consistency/smoothness. With regard to the ‘scale of order’ (order to chaos), I think Andreas’ work exhibits two types of placements. On the one hand, some of his work shows how uniformity and calmness can result from many specialized and unique items, but, in contrast, it also shows how erratic and nervous emotion can be the result of a mass of sameness.

Question 1B. — The Problem of Locality, Code, and Malleability; The Problem of Creativity

I find the debate present in this problem of Generative Art to be one of confusion between principle and practice. I fall on the side of saying that the eased copy-ability of digital/rule-based/generative art does not fundamentally change the nature of the art. I also believe the nature of object truth exists in the analogue experience available to (and experienced by) viewers, beholders, and consumers, and not in the system that creates those truths. Though it’s true that some viewers may now become remixers by looking at the source code, I view this similarly to the fact that some museum-goers enjoy copying paintings into their sketchbooks. On the whole, the ‘nature of the real’ has not changed enough to shift principles, only enough to change practice, but this is ever evolving.


Animated Loop [github]



When looking through the examples on the deliverables 03 page, I was particularly impressed by the simple bees and bombs dot explorations. I began by sketching dots flying onto and then off of the canvas and also explored colour and gradient notions. To create interesting motion, I used a combination of mapping, pattern functions, and sin() waves, but was again frustrated at my inability to easily visualize, in my head, all of the pieces going into the animation. This made it hard to troubleshoot small inconsistencies in the motion, because too many compound operations were forming it. One idea I have for a project later this semester is a graphing module that makes it easier to see all of these various operations while coding; I think this would greatly reduce the need to constantly pepper my program with print statements.



Interruptions [github]


  • The work has a white border all the way around the edge.
  • The canvas is a square.
  • Most of the lines face up-ish and down-ish
  • The holes are quite small.
  • The holes don’t often occur close to each other.
  • Sometimes the holes are more nuanced and don’t even form “complete holes” in the fabric.
  • The background is off-white, not white-white.
  • Near the holes, the lines become somewhat magnetized and get kinda ‘messed up.’


I really enjoyed executing this project; the mix of programming and math was fun, and I’m glad I got to practice more modularization techniques. With regard to the math, the ‘shaping of randomness’ via a function, so that most of the lines face up or down instead of to the sides, was particularly interesting.* I believe my final result is quite close, save for a few deficiencies. Here are two big problems:

  • The lines near the edges occasionally stick out awkwardly, which doesn’t happen in the provided examples.
  • The lines do not seem to ‘barely touch’ each other as nicely as they do in these [1, 2] examples.

* == Golan programmed this in office hours as an illustration for random-shaping.


clock [github]

My clock was originally inspired by an animated GIF I quite enjoy that shows a series of squares entering and leaving the frame. To begin my process, I did some sketching around ‘gridded’ and ‘cubular’ shapes. Then, I began drawing in Sketch to work out more of the visual fidelity. The units in my clock combine both traditional and non-standard ideas. I’m a big fan of AM/PM clocks, which I integrate using blue gradient colours for PM and yellow for AM. But I also wanted to explore the notion of non-standard sub-hour time units; to achieve this, the ‘hour squares’ fill in at a rate that’s not quite a second, but far from a minute. As far as the actual building was concerned, the experience was consistently frustrating. A lot of the features and tweaks I expected to be easy to build were actually quite tricky 😉




My first exposure to computational design and art was through Steven Wittens in my senior year of high school, while taking a Calculus and Vectors class. I had become irritated with the way my teacher was approaching the topic, never letting us explore the material or do projects, instead forming the entire semester around rote tests and quizzes. Around the same time, I also became aware of Bret Victor, whose projects inspire me immensely to this day.

One of my favourite pieces of Bret’s work is Drawing Dynamic Visualizations (video, additional notes), a concept for a hybrid direct-manipulation/programmatic information visualizer.

In his talk, Bret introduces the problems of spreadsheets only creating pre-fab visualizations, of drawing programs like Illustrator not being able to deal with dynamic data, and of the output of coded artifacts not being continuously “seeable”: you can’t see what you’re making until after you render, which creates a feedback loop where errors can creep in. To express this idea, he posits that programming is “Blindly Manipulating Symbols,” a feeling I relate to very strongly when I don’t know exactly what my code is doing and can’t recreate the entire structure in my mind’s eye.

As a solution to this problem, Bret presents a concept for a program that combines the idea of direct manipulation with the ability to process and handle dynamic data.


This prototype was created wholly by Bret, but is not his first attempt at creating programmatic drawing tools or concepts. For prior art, see his works Substroke, Dynamic Drawing, and ‘Stop Drawing Dead Fish.’ In terms of the future, I see the possibility for tools like this to change how many people work with the computational display of information and ideas. Personally, I’ve never taken immense joy from the act of programming, but rather from the results it produces, and I believe tools like this could make that power far more accessible and enjoyable.


The question posed by Mr. Naimark is one that goes beyond the World of Art. In fact, I’ve found that the parallel question in design is, more often than not, answered with a bias towards the Last Word. Famously, Paul Rand said: “Don’t try to be original. Just try to be good.”

While new paradigms and designs that buck traditional patterns (Apple 3D Touch, rap music in the 80s, the Yale graphic style, etc.) are very often scrutinized (perhaps even unfairly), I do not personally believe there’s a choice to be made between the First and Last Word. As Naimark mentions, the latter cannot exist without the former. That said, I think it is fair to speak honestly about the generally poor craft and quality of First Word items while also being critical of the rote-ness and lack of innovation present in Last Word works.

It’s a convention in many places where critique is practiced (I’ve experienced it here at the School of Design and while working in industry) to first ask what type of feedback is desired [and will be delivered]. Though I don’t believe permission should have to be granted for critique to be given, I do think it’s an astute question to ask oneself before giving feedback. If we can master our consciousness and context-awareness, we may just be able to see the value the First Word can bring while appreciating the level of mastery the Last Word has achieved.


Eyeo 2015 – Giorgia Lupi and Stefanie Posavec

For the purposes of this blog post, I’ll be talking about both Giorgia and Stefanie.

Giorgia and Stefanie are both ‘data visualizers’ by trade, but in contrast to many others in similar roles, they are not hybrid creatives/artists coming from a computer science angle. They are both traditionally trained in the design and architecture disciplines and do not program or develop digital artifacts directly. Both are also public figures who have spoken at many conferences and festivals.

Both Giorgia and Stefanie have produced impressive results in their efforts to focus on the human quality and material reality of quantitative information and data, especially in traditionally technical contexts. Stefanie worked at Facebook as an artist-in-residence, communicating behaviour through people’s relationship statuses. Similarly, Giorgia has spoken about the human meaning of data, and how the numbers are only a proxy for people and their actions: “[we both] work with data in a handcrafted way, trying to add a human touch to the world of computing and algorithms.”

Something I found especially resonant about their work was how they clearly positioned themselves as differentiated and unlike others in the business of dataviz. I believe it’s a particularly good example of embracing one’s areas of passion instead of following the trends of the mainstream alone.

Speaking to their presentation style, they both present real ‘in the moment’ documentation of what happened instead of process-fictions or overbuilt case studies. This is something I’d like to begin to incorporate more into my own work.