For my last project, I took the opportunity to learn Unity by prototyping a simple 3D local-multiplayer game.

In the game, each player controls a ‘Shepherd’. The objective of the game is to eliminate all of the enemy’s ‘vassals’, a herd of small soldiers. The only control a shepherd has over its vassals is toggling them between following their shepherd and seeking out and attacking the nearest enemy vassal.
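The core vassal decision can be sketched in a few lines. This is a hypothetical Python analogue of the idea, not the actual Unity C# scripts; names like `Vassal` and `pick_target` are my own:

```python
from dataclasses import dataclass
import math

@dataclass
class Vassal:
    x: float
    y: float
    team: int
    attacking: bool = False  # toggled by the shepherd

def nearest_enemy(v, vassals):
    """Return the closest vassal on the other team, or None."""
    enemies = [o for o in vassals if o.team != v.team]
    if not enemies:
        return None
    return min(enemies, key=lambda o: math.hypot(o.x - v.x, o.y - v.y))

def pick_target(v, shepherd_pos, vassals):
    """Follow the shepherd, or seek the nearest enemy when attacking."""
    if v.attacking:
        enemy = nearest_enemy(v, vassals)
        if enemy is not None:
            return (enemy.x, enemy.y)
    return shepherd_pos
```

In the actual game, the chosen target position would be handed to a Nav Mesh Agent each frame, so pathfinding around obstacles comes for free.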

This makes the gameplay about positioning, choosing the right time to attack, and using the environment to your advantage. If your units attack as a group, or at a choke point, they will eliminate the enemy with ease.

Because I had never worked with Unity before, the vast majority of this two-week project was spent familiarizing myself with Unity, C#, and much of the built-in functionality that Unity provides. This included:

• Nav Meshes & Nav Mesh Agents to control the flocking/pathfinding behavior of the AI vassals.

• Delegates, Events, and Event Subscription to allow GameObjects to relay their position / health / etc. to other GameObjects

• Instantiation, Parent/Child relationships, and how to safely destroy GameObjects to allow for modular setup/reset of the level

• Camera Viewports and how to set up splitscreen for multiplayer

• Layers, Tagging, and physics interactions / collisions
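Of these, delegates and events were the biggest conceptual shift for me. As a rough sketch of the pattern, not the actual C# code, event subscription looks something like this in Python:

```python
class Health:
    """Publisher: broadcasts changes to subscribed callbacks,
    loosely mirroring C# delegates/events in Unity."""
    def __init__(self, hp):
        self.hp = hp
        self._subscribers = []

    def subscribe(self, callback):
        # analogous to `healthChanged += callback` in C#
        self._subscribers.append(callback)

    def take_damage(self, amount):
        self.hp -= amount
        for cb in self._subscribers:
            cb(self.hp)  # notify every listener

log = []
h = Health(10)
h.subscribe(lambda hp: log.append(hp))
h.take_damage(3)
h.take_damage(4)
# log is now [7, 3]
```

The payoff is that a health bar, a death handler, or an AI can all react to the same event without the health script knowing any of them exist.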

The scripts I wrote for unit control, health, combat interactions, and other systems are on GitHub here

As I wrapped up the programming for the basic gameplay interactions (about one night before the due date), I decided to quickly create some low-fidelity 3D assets to build more interesting environments to battle in.

A bridge to connect islands and act as a choke point

Some islands, to create a more divided playspace that forced players to choose when to cross between them.

Some palm trees, to add to the atmosphere and provide an additional small obstacle.

Here’s an earlier iteration of the map, with a previous bridge design that was replaced because its geometry interacted problematically with the vassals.

Additionally, I originally tried a top-down camera, but I couldn’t find a balance between showing the entire map and giving enough attention to the seeking behavior of the vassals.

I ran into several roadblocks along the way, but learned even more than I imagined I would in the given time. Unfortunately, much to my dismay as someone from a primarily visual background, I was left with very little time to focus on learning lighting, materials, and camera effects. The result was a very awkwardly colored, poorly lit prototype.

However, I loved working with Unity. The separation of functionality into separate scripts and objects let me compartmentalize code in a way that was refreshing, and building up components of my program as prefab objects felt intuitive. I will certainly work with Unity again; mastering lighting and materials will be my next goal.



For my final project, I’d like to recreate and polish a game that I prototyped in my sophomore year (during 15-104) in Unity.

The mechanics are fairly simple: each of the 2+ players controls a ‘shepherd’, and each shepherd has a ‘flock’ of identical units. Each player can move their shepherd freely, but the only way the shepherd can control the flock is by switching between a “follow” state, where the flock tries to reach the position of the shepherd, and an “attack” state, where the flock runs to attack the nearest enemy flockling.

The goal of the game is to eliminate the opposing player’s shepherd, or their entire flock. This makes the game very much about positioning, timing, prediction, and understanding how to control a mindless mob at a meta level.



extra right arm on right knee


extra set of arms on hips


extra set of legs attached at elbows


extra left arm on right hand and right arm on left hand


extra right arm on left knee and left arm on right knee


‘antler arms’ extra arms attached at hand


mouth grabbers, extra arms attached at face

This project was a collaboration with the wonderful Kadoin, in which we used arbitrary limb duplication to explore strange, sometimes horrific, semi-human skeletons.

By imagining the skeleton as a collection of bones, we looked at what could happen if skeletons could augment themselves by collecting more bones, adding extra limbs, and building new bodies.

The frankensteined results are uncanny but familiar forms that make us wonder about what each of these creatures might do with those extra limbs, how they might walk and interact, and what their unique structure allows them to do.

This project was created in Processing, using sample code from Golan’s BVH player. We ran into an unfortunate heap of technical trouble executing this conceptually simple project, including bugs that prevented rendering anything other than lines for the limbs and kept us from rendering the terminals of the duplicated bones.

Ideally, we would have loved to attach more fleshy, deforming geometry to the skeletons and refine the visual presentation beyond just wireframes, but we were satisfied with how compelling the simple output was.

GitHub here

Processing PBvh class below:

(majority of the custom code is here)

import java.util.List;

class PBvh {
  BvhParser parser;

  PBvh(String[] data) {
    parser = new BvhParser();
    parser.parse( data );
  }

  void update( int ms ) {
    parser.moveMsTo( ms ); //30-sec loop
  }

  void drawBones(boolean renderExtras) {

    List<BvhBone> theBvhBones = parser.getBones();
    int nBones = theBvhBones.size();       // How many bones are there?
    BvhBone aBone;

    /////////MAIN BONE LOOP/////////
    for (int i = 0; i < nBones; i++) {     // Loop over all the bones
      PVector anchorTranslation = new PVector(0, 0, 0);
      aBone = theBvhBones.get(i);

      // Manual duplicate adding
      if (aBone.getName().equals("LeftForeArm") && renderExtras) {
        // draw root bone in original position

        // Look through the duplicates array and find the matching translation
        // vectors (where to attach the duplicated limb)
        for (String dupe : aBone.duplicates) {
          for (int l = 0; l < theBvhBones.size(); l++) {
            if (theBvhBones.get(l)._name.equals(dupe)) {
              // save the translation in preparation for drawing the duplicate
              anchorTranslation = new PVector(theBvhBones.get(l).absPos.x, theBvhBones.get(l).absPos.y, theBvhBones.get(l).absPos.z);
            }//end name-match if
          }//end bone-search loop
        }//end dupe loop

        // Walk down the chain of children one level at a time
        BvhBone currentBone = aBone;
        float modifier = 4.0;
        while (currentBone.hasChildren()) {
          List<BvhBone> currentChildren = currentBone.getChildren();

          for (int j = 0; j < currentChildren.size(); j++) {
            List<BvhBone> grandchildren = currentChildren.get(j).getChildren();

            for (int k = 0; k < grandchildren.size(); k++) {
              //  currentChildren.get(j).absEndPos.x*0,
              //  currentChildren.get(j).absEndPos.y*0,
              //  currentChildren.get(j).absEndPos.z*0,
              //  grandchildren.get(0).absPos.x,
              //  grandchildren.get(0).absPos.y,
              //  grandchildren.get(0).absPos.z);
            }//end grandchildren for
          }//end current children for

          currentBone = currentChildren.get(0);
        }//end of while loop
      }//end LeftForeArm if

      // Manual duplicate adding
      if (aBone.getName().equals("RightForeArm") && renderExtras) {
        // draw root bone in original position

        // Look through the duplicates array and find the matching translation
        // vectors (where to attach the duplicated limb)
        for (String dupe : aBone.duplicates) {
          for (int l = 0; l < theBvhBones.size(); l++) {
            if (theBvhBones.get(l)._name.equals(dupe)) {
              // save the translation in preparation for drawing the duplicate
              anchorTranslation = new PVector(theBvhBones.get(l).absPos.x, theBvhBones.get(l).absPos.y, theBvhBones.get(l).absPos.z);
            }//end name-match if
          }//end bone-search loop
        }//end dupe loop

        // Walk down the chain of children one level at a time
        BvhBone currentBone = aBone;
        float modifier = 4.0;
        while (currentBone.hasChildren()) {
          List<BvhBone> currentChildren = currentBone.getChildren();

          for (int j = 0; j < currentChildren.size(); j++) {
            List<BvhBone> grandchildren = currentChildren.get(j).getChildren();

            for (int k = 0; k < grandchildren.size(); k++) {
              //  currentChildren.get(j).absEndPos.x*0,
              //  currentChildren.get(j).absEndPos.y*0,
              //  currentChildren.get(j).absEndPos.z*0,
              //  grandchildren.get(0).absPos.x,
              //  grandchildren.get(0).absPos.y,
              //  grandchildren.get(0).absPos.z);
            }//end grandchildren for
          }//end current children for

          currentBone = currentChildren.get(0);
        }//end of while loop
      }//end RightForeArm if

      ////////////////////////////////STUFF THAT DRAWS THE ORIGINAL SKELETON/////////////////////////////////

      PVector boneCoord0 = aBone.absPos;   // Get its start point
      float x0 = boneCoord0.x;             // Get the (x,y,z) values
      float y0 = boneCoord0.y;             // of its start point
      float z0 = boneCoord0.z;

      if (aBone.hasChildren()) {
        // If this bone has children,
        // draw a line from this bone to each of its children
        List<BvhBone> childBvhBones = aBone.getChildren();
        int nChildren = childBvhBones.size();
        for (int j = 0; j < nChildren; j++) {
          BvhBone aChildBone = childBvhBones.get(j);
          PVector boneCoord1 = aChildBone.absPos;

          float x1 = boneCoord1.x;
          float y1 = boneCoord1.y;
          float z1 = boneCoord1.z;

          line(x0, y0, z0, x1, y1, z1);
        }//end children loop
      } else {
        // Otherwise, if this bone has no children (it's a terminus),
        // draw it differently.

        PVector boneCoord1 = aBone.absEndPos;  // Get its end point
        float x1 = boneCoord1.x;
        float y1 = boneCoord1.y;
        float z1 = boneCoord1.z;

        line(x0, y0, z0, x1, y1, z1);

        String boneName = aBone.getName();
        if (boneName.equals("Head")) {
          pushMatrix();
          translate(x1, y1, z1);
          ellipse(0, 0, 30, 30);
          popMatrix();
        } //end if head
      } //end else
    }//end loop over all bones
  } //end drawBones
} //end class PBvh



Cilllia – MIT Tangible Media Group

After attending Ishii’s wonderful lecture, I was looking through the collection of projects from the Tangible Media Group and was particularly struck by Cilllia. Over the last year or so, I’ve become increasingly interested in biomimicry and the design insights that can be gleaned from studying natural systems. At the same time, I’ve become increasingly skeptical of 3D printing as a medium for genuine innovation, since so much of the hype surrounding it boils down to little more than overpriced on-demand desk decorations.

However, this project thoroughly impressed me. By framing 3D printing not as the end medium but as a method of synthesizing a unique material with new properties of its own, the TMG explores many compelling use-cases for these furry plastic doodads. Additionally, the output is astoundingly low-tech: aside from the complex production method, it doesn’t require electricity or hardware to function, yet reveals new possibilities when combined with tech.

Some of the scenarios presented are rooted mostly in aesthetics and unique textures, and some are even a little goofy, but others, such as the directional touch recognition, are beautifully functional. Overall, this project and its documentation are a phenomenal example of what exploratory, experimental design should do: open doors for new ideas and provoke the audience into questioning what is possible, in a way that invites response and collaboration.


The Cyber


So we had to get very, very tough on cyber and cyber warfare. It is a huge problem. I have a son—he’s 10 years old. He has computers. He is so good with these computers. It’s unbelievable. The security aspect of cyber is very, very tough. And maybe, it’s hardly doable. But I will say, we are not doing the job we should be doing.

2. The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.

This second point in the Critical Engineer’s Manifesto was the most thought-provoking for me, because it underscores a point that goes too often unacknowledged in discussions of new technologies: advancements in technology not only pull us farther from understanding the mechanics of the new technology, but also abstract and obscure our ability to discuss the ethical, political, and social implications of its implementation.

One of the most significant examples of this is politicians discussing the relatively recent phenomenon of cyber-warfare. Listening to almost any politician discuss their opinions or policy on cyber-warfare, it becomes apparent that they usually lack even a shaky understanding of cryptography or hacking techniques, much less of how the internet fundamentally works. But the thing is, you can’t blame them! The average citizen doesn’t have this knowledge either; hardly anyone does except discipline experts, and even then, you would need a panel of specialists to explain every part of it.

While these layers of technological abstraction, building blocks on building blocks, afford us everything that the internet offers on an essentially hourly basis, they leave us as a society, policy makers and citizens alike, largely unable to have any sort of informed or sophisticated conversation about the ethics, limitations, and boundaries of these systems. Arguments get boiled down to meaningless phrases and rhetoric that lack any real substance.

While I don’t see an exact solution to this growing divide between knowledge of tech systems and the legislation relating to them, I think those who work and study in tech need to be brought much more firmly into conversations about ethics, and to understand the immense power and scale of their field.


For this week’s Looking Outwards, I’ve chosen to write about Stefanie Posavec, someone whose work I’d encountered before, and which really changed the way I thought about visualization and introduced me to code.

I first stumbled across her work about two years ago and was immediately drawn in by the intricacy and artistry of her visualizations, which treated the product as a piece meant to be equally beautiful and informative. I was struck in particular by her project Writing Without Words, which quantified and visualized text in a way I’d never seen before.

The visualizations she created were not so much exact and measured as they were textural and interpretive, providing comparisons that could be felt between texts. They created a sort of visual persona for different pieces of literature, and even recognizable visual styles for different authors.

Ironically, her work was created ‘by hand’ in Illustrator rather than being coded. Seeing how powerful this form of visualization could be only underscored, for me, the importance of programming as a design tool for exploring and communicating information.



This is a graph of all the rides in their most recent quarter. The Y axis lists rides chronologically, with the first ride at the top and the last at the bottom.

The X axis is the time of day, from early in the morning to late at night.

The size of each circle indicates the duration of the ride, and color corresponds to the type of rider: pink is subscriber, blue is customer, and yellow (very rare) is daily pass.

The number labels are misrepresentative, as I had to do some hacky conversions to make this work. 😉
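For reference, the encoding can be sketched as a simple mapping from ride records to circle attributes. The record format and field names here are hypothetical, not the actual dataset:

```python
# hypothetical ride records: (start_hour, duration_minutes, rider_type)
rides = [
    (7.5, 12, "subscriber"),
    (12.0, 45, "customer"),
    (18.25, 30, "daily"),
]

COLORS = {"subscriber": "pink", "customer": "blue", "daily": "yellow"}

def encode(rides):
    """Map each ride to the plot attributes described above:
    x = time of day, y = chronological index (first ride at top),
    size ~ ride duration, color = rider type."""
    points = []
    for i, (hour, duration, rider) in enumerate(rides):
        points.append({
            "x": hour,         # time of day, early to late
            "y": i,            # chronological position
            "size": duration,  # circle size scales with duration
            "color": COLORS[rider],
        })
    return points
```

The resulting list of dicts could then be handed to any plotting library as scatter-point attributes.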


(more documentation coming soon!)


By Any Other Name: experiments in synonymous corruption

PDF (600 kb):




My book is a collection of 14 pieces of writing that have been transformed and corrupted by a synonym-swapping algorithm. The source text comes from monologues, song lyrics, and poems, each printed in its original form and then recursively synonym-swapped 5 times. The resulting text is a twisted, often out-of-context, but somewhat related derivative of the original. Inspired by the famous Shakespeare line “a rose by any other name would smell as sweet”, I explored what happens when we use ‘other names’ for things to the point of destruction.

The software pipeline for this project begins with a Python Mode for Processing program that parses .txt files of both the original source content and a large .txt thesaurus. The program looks at each word, finds words with synonyms in the thesaurus, and replaces them. The output is written to CSV, tagging each word that was swapped in each iteration, as well as tagging ends of lines to preserve the line breaks.

These CSVs are then fed into a Basil.js program that iterates over an array of CSV files, printing the text of each iteration on a new page, and italicizing each swapped word in each recursive iteration.

Ultimately, this project was an interesting experiment, but it ended up a bit unpolished and buggy as I ran out of time approaching the deadline. Some words are swapped into non-English synonyms, and some synonyms appear nonsensical or unrelated. In the chaos, however, there are moments of beauty: sometimes the text is morphed into humorous or beautiful new ideas through the process.

Additionally, the visual form is very minimal. I would have liked to develop it more, but the Basil.js workflow is rather unfriendly to rapid iteration, so I kept things as clean as possible for the final. That said, I do like the austerity of the small, undecorated text on otherwise blank pages.

GitHub here

PDFs of cover and spreads

Code below:

import random
from random import randrange
import re
import csv
import sys

textName = 'loveCallsUs'

source = open('texts/'+textName+'.txt', 'r')
ts = open('texts/synAntNoHyph.txt', 'r')
ts2 = open('texts/MobyTs.txt', 'r')

export = open('export.txt', 'w')
csvExport = open('exports/'+textName+'.csv', 'wb')

def setup():
    size(100, 100)

    rawTs = ts.readlines()
    rawTs2 = ts2.readlines()
    rawlines = source.readlines()
    lines = []
    iterations = []
    numIterations = 5 

    for raw in range(len(rawlines)):
        aLine = rawlines[raw].split()
        lines.append(aLine) #split each line into a list of words
    #initialize csv with source
    writer = csv.writer(csvExport)
    writer.writerow( ('word', 'isNew', 'iteration', 'endLine') )
    print lines
    for x in range(len(lines)):
        isEnd = 0 #is it the end of a line?
        for y in range(len(lines[x])):
            word = lines[x][y]
            if y == len(lines[x])-1: #adjusted for index
                isEnd = 1
            writer.writerow( (word, '0' , '0' , isEnd ) )
    #first, print original text
    for line in lines:
        joined = ' '.join(line)
        print>>export, joined
    print>>export, '\n'

    for z in range(numIterations):
        # Find Synonyms
        for i in range(len(lines)):  # loop over lines
            for j in range(len(lines[i])):  # loop through words
                currentWord = re.sub(r'[^\w\s]','',lines[i][j]).title() #remove punctuation
                # print currentWord
                found = False #reset found (in first thesaurus)
                for k in range(len(rawTs)):  # loop through ts (first thesaurus)
                    if random.random() > 0.0:
                        if rawTs[k].startswith(currentWord+'.'): #if it's a period, grab the word right after
                                # print rawTs[k]
                                index = random.randint(1,len(rawTs[k].split())-1)
                                synonym = rawTs[k].split()[index]
                                synonym = str('_'+synonym)
                                found = True
                        if rawTs[k].startswith(currentWord+','): #if it's a comma, grab the second word
                                # print rawTs[k]
                                if len(rawTs[k].split())-1 >= 2:
                                    index = 1 #grab the second word
                                    synonym = rawTs[k].split()[index]
                                    synonym = str('_'+synonym)
                                    found = True
                if found == True:
                    if j > 0:
                        synonym = synonym.lower()
                        synonym = re.sub(r'[^\w\s]','',synonym)
                        synonym = synonym.strip(';')
                    lines[i][j] = synonym
        #write to CSV 
        print lines                                     
        for x in range(len(lines)):
            for y in range(len(lines[x])):
                isSyn = 0 #is it a synonym?
                isEnd = 0 #is it the end of a line?
                if lines[x][y].startswith('_'): #detect if its a synonym
                    lines[x][y] = lines[x][y][1:] #strip synonym identifier
                    isSyn = 1
                if y == len(lines[x])-1: #adjusted for index
                    isEnd = 1
                word = lines[x][y]
                writer.writerow( (word, isSyn , (z+1) , isEnd) )
                # print word
            # print str(lines[x])
            joined = ' '.join(lines[x])  
            print>>export, joined
        print>>export, '\n'

Basil/InDesign code:

#includepath "~/Documents/;%USERPROFILE%Documents";
#include "../../../bundle/basil.js";

var csvArray = [
'spottieOttie.csv' ];

var csvData;

var genCover = false;
var reset = true;

function setup() {

  // Clear the document at the very start.
  if (reset == true){
    b.clear (b.doc());
  } //end reset

  if (genCover == true){

  // Make a title page.
  b.text("By Any Other Name", 72,72,360,36);
  b.text("Kaleb Crawford, Fall 2016", 72,108,360,36);

  } //end Gen Cover

  //72 points in an inch
  var margin = 72;

  var titleX = 72; 
  var titleY = 72;
  var titleW = 72;
  var titleH = 72;

  var passageX = margin + 18;
  var passageY = margin * 2;
  var passageW = margin * 3;
  var passageH = b.height-margin*2;

  var innerText;
  var mainFrame;
  var tempFrame;

  //////////////////////CSV Array Loop//////////////////////
  for (var texts = 0; texts < csvArray.length; texts++){

  var csvString = b.loadString(csvArray[texts]);
  csvData = b.CSV.decode( csvString );
  b.println("Number of elements in CSV: "+csvData.length);

  var totalIterations = parseInt(csvData[csvData.length-1].iteration);
  var currentIteration = 0;

  ////////////////////// Iterations Loop////////////////////
  for (var i = 0; i <= totalIterations; i++) {

    b.addPage(); // Create the next page.

    //Create the frame
    b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN );
    mainFrame = b.text("", passageX, passageY, passageW, passageH);

    currentIteration = i;
    innerText = ''; //initialize inner text

    ////////////////////Word Loop///////////////////
    for (var l = 0; l < csvData.length ; l ++ ){
      if (csvData[l].iteration == currentIteration){

        innerText = csvData[l].word+" ";

        // italicize words that were swapped in this iteration
        if (csvData[l].isNew == 1){ b.textFont("Garamond","Italic"); }
        else{ b.textFont("Garamond","Regular"); }

        if (csvData[l].endLine == 1){
          innerText += "\n";
        } //end endLine if

        tempFrame = b.text(innerText, 0, 0, 100, 100);

        // mainFrame.contents += tempFrame.contents;



      } //end if currentIteration check
    } // end line loop
    b.println('text '+(texts+1)+" iteration "+currentIteration);
  } //end iteration loop
}//end csvArray loop
} // end setup

// This makes it all happen:
b.go();

The presentation that most impressed me (of those I attended, which were unfortunately fewer than I would have liked) was Stefan Welker’s talk about Google’s ‘Daydream Labs’. I appreciated how they approached their timelines, prototyping from the ground up on a weekly basis and valuing diversity of techniques over refining a single larger project. I think this approach to VR development, an incredibly small team on a quick turnaround, is more honest to a medium that is arguably still in its nascency. Unlike many projects I’ve seen that appear to be little more than ‘immersive’ ports of screen-based interactions, their prototypes focused on testing interactions unique to room-scale VR, finding successes and failures in both techniques and social contexts. As someone who is as interested in the interaction methods and context of VR as in its content, Welker’s role sounds immensely exciting: working broadly to explore new types of interaction for what many (myself included) believe will evolve into an increasingly prominent medium.

He also mentioned that they frequently make blog posts summarizing their findings here, working to build a community of best practices and patternized interactions, which is exactly the sort of early-stage interaction design that VR needs right now.


My project is a skill crane that you control with your face. I call it SkillCranium.

It uses the face position (specifically, the point between the eyes) to control the position of a skill crane. Opening your mouth opens the crane’s claw, and closing your mouth closes it. The objective of the game is to get a box into the container on the left of the screen.
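The mouth-open test can be sketched as a scale-invariant ratio: the gap between the lips compared against a stable face length, so the threshold still works as the face moves toward or away from the camera. This is a simplified Python version of the idea, not the actual Processing code, and the landmark names are placeholders:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_open(upper_lip, lower_lip, nose, chin, threshold=0.5):
    """Open when the lip gap exceeds a fraction of the nose-to-chin
    distance, making the test independent of face size on screen."""
    return dist(upper_lip, lower_lip) > dist(nose, chin) * threshold
```

In the game, this boolean is checked every frame and fed to the claw, so holding your mouth open holds the claw open.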

It’s remarkably hard. And it forces people playing to make a lot of silly faces.

I was interested in making something not face-like at all, exploring the awkwardness of the face as an interaction method and forcing unnatural expressions as gestures. The result was entertainingly uncomfortable: a system that requires remarkable patience and attention to head position in a way we aren’t accustomed to.

Additionally, this project was a pretty steep technical learning curve for me, as I had never implemented any sort of physics engine beyond some simple hand-coded forces, and wrapping my head around Box2D, joints, and the world coordinate system was quite a challenge.

Ultimately, I would have liked to explore the fidelity of the interaction more, making the ‘game’ elements more compelling and spending more time on colors, graphics, and overall visual polish, but the physics slowed me down far more than I expected.

Because my code uses many different classes in separate tabs, I’ve pasted just the content of the primary tab; the classes are on GitHub:


import shiffman.box2d.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.joints.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.collision.shapes.Shape;
import org.jbox2d.dynamics.*;
import org.jbox2d.dynamics.contacts.*;
import oscP5.*;

// A reference to our box2d world
Box2DProcessing box2d;

// A list we'll use to track fixed objects
ArrayList<Boundary> boundaries;
// A list for all of our rectangles
ArrayList<Box> boxes;
// A list of claw arm units
ArrayList claws;

//Initialize OSC object
OscP5 oscP5;

//declare new crane
Crane crane;

// face boolean and raw data array
int found;
float[] rawArray;

//which point is selected
int highlighted;

//Mouth open boolean
boolean mouthOpen;
float jawThreshhold = 0.5;

//Did you win
boolean youWin = false;
PFont proxima;

void setup() {
  size(960, 720);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");
  // Initialize box2d physics and create the world
  box2d = new Box2DProcessing(this);
  // We are setting a custom gravity
  box2d.setGravity(0, -30);
  //Create ArrayLists
  boxes = new ArrayList<Box>();
  boundaries = new ArrayList<Boundary>();
  claws = new ArrayList();
  // Add a bunch of fixed boundaries
  boundaries.add(new Boundary(width/2,height-10,width-20,10));
  //add a collection box
  boundaries.add(new Boundary(width/8-45,height-180,10,50)); //left
  boundaries.add(new Boundary(width/8-5,height-150,90,10)); //bottom
  boundaries.add(new Boundary(width/8 + 35,height-180,10,50)); //right
  //create our crane object
  crane = new Crane();
  proxima = createFont("data/ProximaNova-Black.otf",150);
}//end setup

void draw() {  
  //Box2D Step
  //Check mouth
  //Draw Objects
  if (rawArray!=null) {
    if ( found > 0 ) {
    for (int val = 0; val < rawArray.length -1; val+=2){ //if (val == highlighted){ fill(255,0,0);} //else{fill(100);} fill(240); noStroke(); ellipse(rawArray[val], rawArray[val+1],10,10); }//end points array loop //GUI AND DEBUGGING //debugging(); }//end face found check crane.update(); crane.drawCrane(); }//end rawArray length check // Display all the boundaries for (Boundary wall: boundaries) { wall.display(); }//end boundary draw // Display all the boxes for (Box b: boxes) { b.display(); didYouWin(b); }//end box draw // Boxes that leave the screen, we delete them // (note they have to be deleted from both the box2d world and our list for (int i = boxes.size()-1; i >= 0; i--) {
    Box b = boxes.get(i);
    if (b.done()) {
  //Display the cord
  // Display all the claws
  for (int i = 0; i < claws.size(); i++) { claws.get(i).display(); }//end claw draw if (youWin == true){ textAlign(CENTER); textSize(100); text("YOU WIN!",width/2,height/2); } }//end Draw ///////////////////////////////Crane Class////////////////////////////////// class Crane { Float glideSpeed = 0.02; PVector pulley,cross,base; Float baseWidth = width*0.05; Float baseHeight = height*0.9; Float crossWidth = width*0.875; Float crossHeight = height*0.04; Float pulleyWidth = 50.0; Float pulleyHeight = 50.0; Bridge cord; Crane(){ pulley = new PVector (width/2,height/2); cross = new PVector (width*0.1,height*0.5); base = new PVector(width*0.9,height*0.1); //length , number , anchorX, anchorY cord = new Bridge(width/5,width/40,pulley.x,pulley.y); } //update method void update(){ //update crossbar with top of nose Y value cross.y = cross.y - (cross.y - rawArray[55])*glideSpeed; //update pulley with top of nose X value pulley.x = (pulley.x - (pulley.x - rawArray[54])*glideSpeed); //update pulley Y with same as crossbar pulley.y = cross.y; //update the cord position Vec2 pulleyWorld = new Vec2(box2d.coordPixelsToWorld(pulley)); Vec2 anchorVec = cord.particles.get(0).body.getPosition(); cord.particles.get(0).body.setLinearVelocity(new Vec2( (pulleyWorld.x-anchorVec.x)*8, (pulleyWorld.y-anchorVec.y)*8)); } //drawCrane method void drawCrane(){ //stroke(0); noStroke(); fill(0); rectMode(CORNER); //Base rect(base.x,base.y,baseWidth,baseHeight); //Crossbar rect(cross.x,cross.y,crossWidth,crossHeight); //Pulley rectMode(CENTER); rect(pulley.x,pulley.y,pulleyWidth,pulleyHeight); //Claws drawn in draw loop } }//end crane class //////////////////////////////Did You Win///////////////////////////// void didYouWin(Box box) { Vec2 boxPos = new Vec2 (box2d.getBodyPixelCoord(box.body)); if ( boxPos.x > width/8-35 && boxPos.x < width/8 + 35 && boxPos.y > height-180 && boxPos.y < height - 130){ youWin = true; } } //////////////////////////////Create New Box///////////////////////////// void 
newBoxes() { Box p = new Box(crane.pulley.x,height-40); boxes.add(p); }//end new boxes function //////////////////////////////Is Mouth Open///////////////////////////// void isMouthOpen() { if (dist(rawArray[102],rawArray[103],rawArray[114],rawArray[115]) > dist(rawArray[108],rawArray[109],rawArray[96],rawArray[97])*jawThreshhold){
  mouthOpen = true;
  else{mouthOpen = false; crane.cord.toggleClaw(mouthOpen);}

void debugging() {
   text( "current index = [" + highlighted + "," + int(highlighted + 1) + "]", 10, 20);
}//end debugging function

void mousePressed() {
}//end mousePressed
void keyPressed(){
  if (keyCode == RIGHT){
    highlighted = (highlighted + 2) % rawArray.length;
  }//end right key
  if (keyCode == LEFT){
    highlighted = (highlighted - 2);
    if (highlighted < 0){
      //wrap to the last x/y pair (even index)
      highlighted = rawArray.length - 2;
    }//end highlighted if
  }//end left key
  if (keyCode == UP){
  }//end up key
}//end keypressed
////////////////////////////// OSC CALLBACK FUNCTIONS//////////////////////////////
public void found(int i) {
  println("found: " + i);
  found = i;
}//end found
public void rawData(float[] raw) {
  println("raw data saved to rawArray");
  rawArray = raw;
  for (int i = 0; i < rawArray.length; i++) {
    //scale the tracked points up
    rawArray[i] *= 1.5;
    //mirror the X coordinates (even indices)
    if (i%2 == 0){
      rawArray[i] = map(rawArray[i], 0, width, width, 0);
    }//end if
  }//end for loop
}//end rawData

There are quite a few hacky magic numbers in here. Sorry coding style gods.
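The mouth-open toggle above boils down to a single distance comparison: the inner-lip gap against the mouth width scaled by a threshold. Here is that check isolated as a standalone sketch in plain Java; the landmark coordinates and threshold are made-up example values, not real FaceOSC output.

```java
// Sketch of the isMouthOpen() check, isolated from the FaceOSC plumbing.
// All coordinates and the threshold below are illustrative assumptions.
public class MouthOpenCheck {
    static double dist(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    // Open when the vertical lip gap exceeds mouth width times a threshold.
    static boolean isMouthOpen(double topX, double topY, double botX, double botY,
                               double leftX, double leftY, double rightX, double rightY,
                               double jawThreshold) {
        double gap = dist(topX, topY, botX, botY);
        double width = dist(leftX, leftY, rightX, rightY);
        return gap > width * jawThreshold;
    }

    public static void main(String[] args) {
        // closed mouth: small gap relative to mouth width
        System.out.println(isMouthOpen(0, 0, 0, 5, -20, 2, 20, 2, 0.33));  // false
        // open mouth: gap grows past width * threshold
        System.out.println(isMouthOpen(0, 0, 0, 20, -20, 10, 20, 10, 0.33)); // true
    }
}
```

Scaling the gap by the mouth width (rather than using an absolute pixel value) keeps the toggle stable as the face moves closer to or farther from the camera.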


[images: krawlebplotsedit-1, krawlebplotsedit-2, krawlebplotsedit-3]

This week, I wanted to implement new techniques or processes that I had never tried before, and to find something interesting that emerged from exploration rather than going in with a set vision.

The two concepts I wanted to explore were voronoi diagrams and polar coordinates. Neither is very complex alone, but I'd never really worked with either, and I wanted to see what happened when they were combined.

I began by implementing a very ‘naive’ voronoi diagram: it stepped across the canvas at a set X and Y interval and drew a line from each sample point to the nearest node, with the nodes placed randomly across the canvas.
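That naive approach is just a per-sample nearest-neighbor search. A minimal sketch of the core lookup in plain Java, with arbitrary example node positions:

```java
import java.util.List;

// Nearest-node lookup at the heart of the naive voronoi: for each sampled
// (x, y) on the grid, find which randomly-placed node is closest, then draw
// a line to it. Node coordinates here are arbitrary example values.
public class NaiveVoronoi {
    static int nearestNode(double x, double y, List<double[]> nodes) {
        int best = -1;
        double minDist = Double.MAX_VALUE;
        for (int i = 0; i < nodes.size(); i++) {
            double dx = nodes.get(i)[0] - x;
            double dy = nodes.get(i)[1] - y;
            double d = dx * dx + dy * dy; // squared distance suffices for comparison
            if (d < minDist) {
                minDist = d;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<double[]> nodes = List.of(
            new double[]{10, 10}, new double[]{90, 90}, new double[]{50, 0});
        System.out.println(nearestNode(12, 8, nodes));  // 0
        System.out.println(nearestNode(80, 95, nodes)); // 1
        System.out.println(nearestNode(55, 5, nodes));  // 2
    }
}
```

Drawing one line per sample toward its winning node is what produces the characteristic cell boundaries without ever computing the voronoi edges explicitly.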

From here I explored varying the number of nodes, the number of subdivisions (lines), and drawing rectangles instead of lines, then moved into 3D: voronoi in 3D rectangular space, with cubes, lines, rectangles, etc.

Here is a collection of screenshots of the experiments and iteration as I went along:

[images: process-plotter, process-plotter2]

Just after these screenshots, I explored plotting only the ‘ends’ of the voronoi lines, where the nodes sit on the outside of a sphere, then varying the sphere's radius with a noise function, and finally connecting the points with a single curve interpolated between them. The result was the thin, wire-sculpture structures that I ultimately decided to plot.
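For reference, the "single curve interpolated between the points" step is what Processing's curveVertex() does, a Catmull-Rom spline. Below is a one-axis sketch of the standard Catmull-Rom formula (this is the generic textbook form, not code from the plotter sketch itself):

```java
// Catmull-Rom interpolation between p1 and p2, shaped by neighbors p0 and p3.
// This is the spline behind Processing's curveVertex(), evaluated per
// coordinate axis; t runs from 0 (at p1) to 1 (at p2).
public class CatmullRom {
    static double interpolate(double p0, double p1, double p2, double p3, double t) {
        double t2 = t * t;
        double t3 = t2 * t;
        return 0.5 * ((2 * p1)
                + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }

    public static void main(String[] args) {
        // The curve passes exactly through the two middle control points:
        System.out.println(interpolate(0, 10, 20, 30, 0.0)); // 10.0
        System.out.println(interpolate(0, 10, 20, 30, 1.0)); // 20.0
        // ...and moves smoothly between them:
        System.out.println(interpolate(0, 10, 20, 30, 0.5)); // 15.0
    }
}
```

Because the spline passes through its control points, feeding it the noisy sphere-surface endpoints keeps the drawing anchored to the voronoi structure while the curve wavers between them.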

I liked the results because I thought they struck an interesting balance between computational and hand-drawn, where there’s a sense of geometry and space, but the lines are wavering and ambiguous, looking as if they could have been hand drawn:


Code here:

import peasy.*;
import peasy.test.*;
import processing.pdf.*;

PeasyCam cam;
boolean savePDF;

//Array of Node objects, where paths are drawn to.
//Array of Node objects, where paths are drawn to.
ArrayList<Node> nodes = new ArrayList<Node>();
//Array of Path objects, which holds paths to be drawn.
ArrayList<Path> paths = new ArrayList<Path>();

int numNodes = 100;
int subWide = 60;
int subTall = 60;

float minDist;
float nearestX,nearestY,nearestZ;

float margin = 30;
float jitter = 10;
float rad = 100;
float ns = 0.2;
void setup(){
  //Visual Things
  //camera things
  cam = new PeasyCam(this, 100);
  //Generate Nodes
  for (int i = 0; i < numNodes; i++){
    //radius varies with noise; random angles on the sphere (angle arguments assumed)
    Node n = new Node(map(noise(i*ns),0,1,0,rad), random(TWO_PI), random(TWO_PI));
    nodes.add(n);
  }//end nodes loop
  println("Nodes: "+nodes.size());
  //Generate Paths
  for (int col = 0; col < subWide; col++){
    for (int row = 0; row < subTall; row++){
      minDist = 99999999;
      float cRad = rad;
      float cTheta = map(row,0,subTall,0,PI*2);
      float cPhi = map(col,0,subWide,0,PI*2);
      float cx = cRad*sin(cTheta)*cos(cPhi);
      float cy = cRad*sin(cTheta)*sin(cPhi);
      float cz = cRad*cos(cTheta);
      for (int i = 0; i < nodes.size(); i++){
        //If the distance between current XY and Node XY is new minimum
        float distance = dist(cx,cy,cz,nodes.get(i).pos.x,nodes.get(i).pos.y,nodes.get(i).pos.z);
        if (distance < minDist){
          //Set new minimum and record x,y,z
          minDist = distance;
          nearestX = nodes.get(i).pos.x;
          nearestY = nodes.get(i).pos.y;
          nearestZ = nodes.get(i).pos.z;
        }//end if distance check
      }//end distance loop
      //Create new path from cx,cy,cz to the nearest node
      Path p = new Path(cx,cy,cz,nearestX,nearestY,nearestZ);
      paths.add(p);
    }//end row
  }//end col
  println("Paths: "+paths.size());
} // end setup
void draw(){
  background(255); //clear each frame
  if (savePDF == true){
    beginRaw(PDF, "export.pdf");
  }
  //Iterate over path array drawing lines
  for (int p = 0; p < paths.size(); p++){
    Path current = (Path) paths.get(p);
    line(current.start.x, current.start.y, current.start.z,
         current.end.x, current.end.y, current.end.z);
  }//end line loop
  if (savePDF == true){
    endRaw();
    savePDF = false;
  }
}//end draw

void keyPressed() {
  if (key == 'p') {
    savePDF = true;
  }
}//end keypressed
class Node {
  PVector pos;
  Node (float r, float theta, float phi){
    //spherical to cartesian conversion
    pos = new PVector(r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta));
  }
}//end node class
class Path {
  PVector start, end;
  float distance;
  Path (float sx, float sy, float sz, float ex, float ey, float ez){
    start = new PVector(sx, sy, sz);
    end = new PVector(ex, ey, ez);
    distance = dist(sx, sy, sz, ex, ey, ez);
  }
}//end path class


This week, I’ve chosen to write about an interactive art piece created as a collaboration between Janet Echelman and Aaron Koblin called Unnumbered Sparks. Created as an installation for TED’s 30th anniversary, it allows the audience to interact with an enormous suspended fiber sculpture in real-time by painting on its surface with light using a chrome web app. It’s a networked experience that allows multiple users to interact at once, seeing their  brushstrokes interact with other audience members.

I’m personally a huge fan of public art, and I particularly enjoy the dynamic and ethereal nature of Echelman’s fiber work. All of her non-interactive pieces are beautiful, but I think this collaboration adds a new exciting layer to the project. Giving people a sense of power as their tiny touch-screen gestures are translated into enormous strokes of light is exciting and unusual and allows for kinds of collaborative dance and interaction to occur between strangers as they play and mingle with each other’s patterns.

That being said, I think the interaction method was perhaps a bit too simple, and afforded button-mashy swiping a bit too easily, which often made the way people interacted with it chaotic and unrefined. Perhaps introducing more subtle interactions, or somehow throttling the effect, would have created a more elegant output in the hands of the audience.

The project should also be applauded for its huge logistical complexity; the projection mapping and mounting of the sculpture alone were an amazing feat, not to mention that the interaction ran entirely through the Chrome browser.


Feedback on my clock was mostly positive, but as I noted in my write-up, the performance of my particle system was lacking, which forced me to make the clock small and often a bit laggy, and therefore a little hard to read or observe over a longer timescale. I would have liked to make the method of reading minutes more clear, but it didn’t seem to bother most people. Tega’s link about optimization seemed helpful and I’ll certainly keep that in mind going forward.


Updated with ~real~ gifs:

[gifs: krawleb_1, krawleb_2]

So originally I had some grand ideas about importing OBJ files, doing some nifty glitch-aesthetic distortion of 3D models, and being all trendy like that. But after some technical hullabaloo with anchor points, textures, and Processing not being able to calculate things fast enough, I decided to drop that and set some hard and fast limitations so I could get something out quickly without over-complicating things.

My self-imposed limitations were:


2 seconds per loop @ 60fps = 120 frames.

Only black and white

Only use one type of shape (rectangle, circle, etc.)

Only a single for() loop. No object arrays, no particle systems, etc.

From there, my only direction was a general gist of “there’s going to be a lot of squares and they will spin”

So I just played with iterations for a while, letting ideas emerge from happy accidents and a lot of trial and error. I made a million iterations, some more rainbow and wacky than others, but found two that I particularly liked.

Overall, I think my approach was a bit of a cop-out, and I was disappointed in myself for not trying to learn a new algorithm or technique that I wasn’t familiar with. While the results I came up with are visually compelling to a degree, they don’t have the mind-bending wow factor of that bees&bombs quality stuff.

No pen and paper sketches this time, this was really a ‘code as sketch’ kind of experience. In retrospect, shoulda taken some screenshots along the way as there were many iterations.


Code here:


import peasy.*;
import peasy.test.*;

PeasyCam cam;

float count = 4000;
void setup() {
  //Settings Things
  size(400, 400, P3D);
  float cameraZ = ((height/2.0) / tan(PI*60.0/360.0));
  perspective(PI/5, width/height, cameraZ/10.0, cameraZ*10.0);
  //camera things
  cam = new PeasyCam(this, 100);
}//end setup

void draw() {
  //Refresh stuff
  background(0);
  for (int i = 0; i < count; i++){
    //looping triangle waves (period 120 frames), phase-offset and
    //frequency-modulated by i; these drive each square's color/position
    //float h = abs((((frameCount+i+00)%120)-60)*2);
    float h = abs((((frameCount+(i*sin(abs((((frameCount+i+00)%120)-60)*2)/100.0)+1))%120)-60)*2);
    float s = abs((((frameCount+(i*sin(abs((((frameCount+i+40)%120)-60)*2)/200.0)+1))%120)-60)*2);
    float b = abs((((frameCount+(i*sin(abs((((frameCount+i+80)%120)-60)*2)/300.0)+1))%120)-60)*2);
  }//end square loop
  //if (frameCount < 121){
  //}//end saveframe stuff
}//end draw

Commented out code is some of what I used for the different iterations.
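For what it's worth, the nested abs()/modulo expressions above are all variations on one building block: a triangle wave that loops every 120 frames, which is what makes the 2-second loop seamless. Isolated in plain Java:

```java
// The 120-frame looping triangle wave underlying the h/s/b expressions:
// abs(((t % 120) - 60) * 2) ping-pongs between 0 and 120 and repeats
// exactly every 120 frames, so a 60fps capture loops every 2 seconds.
public class TriangleWave {
    static float triangleWave(int t) {
        return Math.abs(((t % 120) - 60) * 2);
    }

    public static void main(String[] args) {
        System.out.println(triangleWave(0));   // 120.0 (start of loop)
        System.out.println(triangleWave(60));  // 0.0   (midpoint)
        System.out.println(triangleWave(120)); // 120.0 (seamless wrap)
    }
}
```

Feeding frameCount plus a per-square phase offset into this wave is what gives each of the 4000 squares its own position in the same shared loop.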



For this looking outwards, I was interested in generative art that predated or didn’t use digital technology, inspired by how the reading positioned generative art as the revealing of existing principles or processes. Looking for this kind of work, I stumbled on Tim Knowles’ project Tree Drawings, in which he attaches markers to trees and allows the breeze moving the branches to create sparse, elegant, and natural patterns. I appreciate the simplicity of the project: both the process that created it and the materials are straightforward, but the output is complex and varies with each unique tree and day. Knowles doesn’t create an algorithm, but captures an existing phenomenon in a way that reveals the tree, the wind, and their interaction. However, that is not to say he plays no role in creation; where to mount the pens, how many, which tree to pick, what size paper, how much wind, etc. were all parameters that I’m sure were carefully considered to perfect the outcome. Interestingly, while the capture technique is so simple, the pieces exhibit high effective complexity, to the point where different trees have identifiable styles: far from random, but certainly not deterministic either.

More examples of the work:



1a: The kind of effective complexity that I’m most interested in comes from emergent systems, for example the flocking patterns of birds, which I believe sits squarely in the middle of Galanter’s spectrum. While the forms that emerge are certainly recognizable, they are unpredictably dynamic, evolving in dramatic ways that are far from random. They are clearly not fractals or L-systems, but express a clarity or sense of intention that masks the complexity of the whole. Additionally, the rules that guide each agent (in this case, bird) are remarkably simple relative to the pattern that emerges.


1b: The Problem of Postmodernity

This section I found extremely compelling and inspiring to read. As someone who in many ways subscribes to the poststructuralist idea of the death of the author (as discussed in ‘the problem of authorship’) I think generative art walks a delicate balance between authorial intent and an act of revealing of an existing system. I think the role of the generative artist / designer is far more editorial than expressive. The act of creating generative art is similar to found / street photography in the sense that the artist is by no means the sole artificer of the content, but frames and then presents that which already exists. Just as we do with photography, we still credit the artist as an aesthetician, appreciating their eye—not their hand—in the work.

I particularly love the passage:

Generative art can establish beauty as something that is not man’s arbitrary creation, but rather an expression of universal forces. Second, artists on that basis can demonstrate, by compelling example, reasons to maintain faith in our ability to understand our world. They can remind us that the universe itself is a generative system, and generative art can restore our sense of place and participation in that universe.

In this way, generative art is both deeply artificial (that is to say, made by human hands) but elegantly natural, eroding the philosophical barrier between ‘man’ and ‘nature’ in a way that makes the dichotomy seem contrived. While much of contemporary generative art is strikingly technological, it is simultaneously eerily primordial, which is what I believe makes it so compelling.


Fair warning, this thing is an inefficient and slow mess, especially so if it’s late in the hour.

p5js Source

Initially for this assignment, I was exploring the idea of using natural cues to depict time, like the sunrise, and had an idea for a sort of panorama of livestreams from around the world that would rotate as the day passes, so the result is a perpetual sunrise. After a while of fiddling with ‘sunrise cam’ APIs, I decided that was too finicky and wanted something self-contained.

I still liked the idea of natural indicators or systems, but centered my new direction around growth, decay, and lifetimes. I learned about DLA systems from Bourke and Shiffman, and wanted to create a clock that would grow and evolve over the course of the day, but would produce a different and unpredictable output. Inherently, this type of time-telling is significantly less exact, and gives more of a “half-past two” kind of time-telling vibe.


The initial idea was pretty simple: create a new ‘drunken walk’ particle every second, and then let these pseudo-randomly accumulate over the course of the day, forming an increasingly large and complex tree structure.
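That growth process is essentially diffusion-limited aggregation. A toy grid-based version in plain Java might look like the following; the real clock is p5.js and works in continuous space, and the grid size, particle count, and spawn rule here are arbitrary demo choices:

```java
import java.util.Random;

// Toy diffusion-limited aggregation on a grid: a seed sits at the center,
// and each new particle random-walks ("drunken walk") until it lands next
// to an already-stuck cell, where it freezes and extends the structure.
public class DlaSketch {
    static boolean[][] grow(int size, int particles, long seed) {
        boolean[][] stuck = new boolean[size][size];
        stuck[size / 2][size / 2] = true; // seed cell
        Random rng = new Random(seed);
        for (int p = 0; p < particles; p++) {
            int x = rng.nextInt(size), y = rng.nextInt(size);
            while (!touchesStuck(stuck, x, y, size)) {
                // one random step per tick, clamped to the grid
                x = Math.min(size - 1, Math.max(0, x + rng.nextInt(3) - 1));
                y = Math.min(size - 1, Math.max(0, y + rng.nextInt(3) - 1));
            }
            stuck[x][y] = true;
        }
        return stuck;
    }

    // true if (x, y) is on or adjacent to a stuck cell
    static boolean touchesStuck(boolean[][] stuck, int x, int y, int size) {
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && ny >= 0 && nx < size && ny < size && stuck[nx][ny])
                    return true;
            }
        return false;
    }

    public static void main(String[] args) {
        boolean[][] result = grow(41, 50, 42L);
        int count = 0;
        for (boolean[] row : result) for (boolean c : row) if (c) count++;
        System.out.println(count); // seed plus up to 50 stuck particles
    }
}
```

The expensive part is exactly what the post describes: each walker must repeatedly check for neighbors against the whole structure, which is why naive collision detection bogs down as the particle count climbs.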

However, as often happens, a few things changed along the way. Firstly, my dreams of having one plant grow over the course of a day were crushed by the slowness of JavaScript handling collision detection between any more than about 2000 particles, and at the rate I introduce new particles, there are 3600 generated an hour. So, I decided to have a new ‘tree’ grow every hour, and needed a new way to indicate the accumulation of hours.

As a debugging tool, I added a sphere that would rotate my plant to indicate where the new particles were coming from, and as a sort of happy accident I found an interesting interaction between the particles (rendered the same color as the background to avoid cluttering the clock) and the trail left by the rotating orb that spawned the particles.

This effect, and the program with a background can be seen below:

[images: clock-background-hidden, clock-background-revealed]

The particles act as very fast moving random erasers, which when done at a small scale, produces a nice smokey / liquid inky looking effect.

I decided to implement this as part of the clock because it was the kind of visual ‘decay’ that complemented the growth of the tree, but on a much shorter timeline.

The quantity of orbs then became the indicator of the hour, which I think is the most important value to be explicitly countable, followed by minutes, which are a sort of visual approximation, followed by seconds, which are only perceived through the effect they have on the other two components.

This clock takes quite a while to ‘ramp up’ and has a good deal of inaccuracy when it’s started not at or near the top of the hour. More specifically, I compensate for the current ‘minute’ by tossing in a bunch of particles at startup (60 per minute), but the nature of the random walk means that the tree will never entirely catch up to the time it’s supposed to show.

However, this does mean that over the course of the hour, it produces a lot of interesting and unique textures and changes appearance dramatically over the course of the day.

Here’s around 11:03pm, where there are not enough particles to clearly articulate the number of orb emitters:


Any here’s a more dramatic 2:15pm look:


Apologies for the novella, that’s all folks!



For this week, I’ve chosen to write about one of the first computational design projects I ever heard about, and one that certainly changed the way I understood programming forever. While enrolled in 15-104, we were working in processing, which felt enormously intuitive for me. I had encountered programming in an introductory class in high school, but it had always been so deeply rooted in a perspective of math and execution of function that it never really grew on me and I found it difficult. Processing flipped the programming metaphor on its head, establishing a visual feedback system that I immediately understood. Naturally I was curious about who had created this amazing tool and quickly stumbled upon Casey Reas portfolio. I was enchanted by the intricate and pseudo-natural patterning in his work, but couldn’t unpack it visually. Then I found his “Process Compendium” which describes the algorithms behind (much of) his work in plain english, a logic-based framework for creating interactions infinitely more complex than each component. This compendium also explains likely the name behind ‘Processing’ as the method of translating a ruleset or process into a coded algorithm which creates an output. This collection of projects and the mindset it implied is what really showed me how powerful programming is as a creative medium, and how it allows artists and designs to work in ways so far beyond the capabilities of their owns hands, in an orchestration of thoughts and rules to make beautiful systems. Since then, I’ve followed this theme both in programming based work, as well as learning about natural generative or emergent systems as a lens to observe, learn from, and emulate nature.


Kyle McDonald is an artist who works with code. He’s based in Brooklyn, but has shown personal and collaborative work at exhibits and festivals around the world, and was formerly an artist in residence at the STUDIO for Creative Inquiry. I chose to write about McDonald because I admire him as both an artist and a professional, and how that carries over into how he presents and embodies his work. While his projects exhibit similarity in their themes (human interactions, artificial intelligence, computer vision, etc.), there is an immense fluidity and refreshing inconsistency in the work he produces. The work is not tied to one medium, platform, or coding language, which gives each project a sense of technological agnosticism, putting emphasis on the idea being explored. That being said, he also deals with immensely technically complex work, but never overemphasizes the technical aspect of each project. Much of McDonald’s work is presented rather casually, as short video snippets or in a humorous tone that always seems approachable, not arcane or pretentious. Through his work, you can see that he’s passionately curious, and his projects are explorations to develop an understanding of something new. Additionally, he’s a humble and inviting speaker whose energy is visible when he speaks. One project that I’m particularly fond of is Sharing Faces, an interactive video installation connecting two galleries in Korea and Japan. It’s a project that’s incredibly intuitive to interact with, uses a complex technology in a very subtle and downplayed way, and ultimately creates something human and touching that playfully addresses lingering social tension between the two countries.



This piece introduced a fascinating dichotomy for me to explore my own creative practice, but imposed a rather strict binary on types of artistic endeavors. Coming from experience in screen-based design, where it’s almost a given that your work will be outdated both aesthetically and technically within 3 years, I’ve naturally carried over that sense of currency to the more experimental work I do, anticipating that it will not ‘stand the test of time’ or be remembered as ‘last word art’. I rarely if ever question the longevity of my digital work because in many ways, so much of that is out of my control. The technologies that are essential to consume the work could be abandoned in favor of new ones, or be patched with new features rendering my work unviewable. Additionally, in my work I’m naturally much more inclined towards diverse explorations of disciplines and ideas, and have trouble gaining mastery in any one area, which I believe to be a prerequisite of ‘last word art’. While those two things push me towards the ‘first word’ side of the spectrum, I certainly still care about aesthetics and craft, and should put more thought into how I archive the things I make. I shy away from the sentiment that ‘first word art’ has to be concerned with novelty, or at least be aware of its newness, and think the best art straddles this spectrum, bringing in new elements, techniques, or approaches, but demonstrating appropriate grace and consideration in its form.