My AR sculpture is a sandbox world-building app.
Thank you to Clair Sun for helping me a lot with my documentation!


Screen capture:

My sculpture allows people to take a world they have built with them anywhere. If I could change something about my app, I would add the ability for people to upload their own models (maybe of their childhood homes or other places of importance) and build a unique environment with them. My object (the tracked drawing) is admittedly useless on its own; it is just a piece of paper that uniquely identifies a built environment. When used with the AR app, however, it can take on sentimental value. My goal was for people to be able to carry their created world with them easily, anywhere, almost like a good luck charm.


Link to Google Drive:

"Recipes for the Mad..." is a recipe book that requires one to bring out their inner madness.

My initial inspiration for this project came from the bouba/kiki effect, in which people generally tend to link sharper sounds with more jagged shapes and softer sounds with more curved shapes (more information here: Bouba/kiki effect). I wanted to sort ingredients into "softness" categories; for example, jelly would be soft and peppercorn would be sharp. I ended up shifting the idea slightly: I kept a similar mapping but based it on an ingredient's sweetness level, so Dream Whip would be sweet and ghost pepper would be spicy.

With the grammar, I started off with very tame and mild sentences like "chop the pepper and stir it in a bowl," but I ended up getting carried away. Sentences like "Lick the pepper and blend into the shape of a komodo dragon" or "Listen to the jelly and challenge it to battle until it becomes evil" became common, and I really enjoyed seeing what other funny lines it produced. I created separate arrays of ingredients at each spiciness level and counted the net spiciness of the overall product. I tried using that total to pick matching titles and color palettes, but I ran out of time to get it working properly. I also wish I had made the generated patterns inside the pot more complex, perhaps using the sandpiles tutorial from Dan Shiffman. At the end of the instructions, I wish there had been more of a linking sentence, like "now put all the ingredients together and boil in a pot."
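The net-spiciness tally described above can be sketched as a simple weighted sum. This is an illustrative reconstruction in plain Java, not the original Processing code; the ingredient names, weights, and palette labels are made up for the example.

```java
import java.util.*;

public class SpicinessTally {
    // Hypothetical weights: negative = sweet, positive = spicy.
    static final Map<String, Integer> WEIGHTS = new HashMap<>();
    static {
        WEIGHTS.put("dream whip", -2);   // very sweet
        WEIGHTS.put("jelly", -1);        // mildly sweet
        WEIGHTS.put("flour", 0);         // neutral
        WEIGHTS.put("peppercorn", 1);    // spicy
        WEIGHTS.put("ghost pepper", 2);  // very spicy
    }

    // Sum the weights of the chosen ingredients; the sign of the total
    // would pick a sweet or spicy title and color palette.
    static int netSpiciness(List<String> ingredients) {
        int total = 0;
        for (String ing : ingredients) {
            total += WEIGHTS.getOrDefault(ing, 0);
        }
        return total;
    }

    static String paletteFor(int spiciness) {
        if (spiciness < 0) return "pastel / sweet";
        if (spiciness > 0) return "red-orange / spicy";
        return "neutral";
    }

    public static void main(String[] args) {
        List<String> recipe = Arrays.asList("ghost pepper", "jelly", "peppercorn");
        int s = netSpiciness(recipe);
        System.out.println(s + " -> " + paletteFor(s)); // 2 -> red-orange / spicy
    }
}
```

Because each recipe draws four to eight random ingredients, a single very spicy ingredient can still be outweighed by several sweet ones, which is what makes the aggregate score a better palette signal than any single ingredient.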

Overall, however, I think this was one of the most fun projects I've worked on, and I ended up fixing the color palettes and titles based on sweetness/spiciness after the book was due. Here are some pictures of recipes from my revised code:

Sweet example recipe:

Spicy example recipe:

import processing.pdf.*;
import rita.*;
PFont font;
int count = 0;
RiGrammar rg1;
RiGrammar rg2;
RiGrammar rg3;
float speed;
String[] numerics = {
  "1 and a half ",
  "2 ",
  "2 and a half ",
  "3 ",
  "3 and a half ",
  "4 ",
  "4 and a half ",
  "5 ",
  "5 and a half ",
  "6 ", 
  "6 and a half ",
  "50 ",
  "100 ",
  "1000 ",
  "500 ",
  "5000 ",
  "10 and a half ",
  "50 and a half ",
  "A quarter ",
  "A third "
};
String[] measurement = {
  "tablespoons of",
  "teaspoons of",
  "grams of",
  "liters of",
  "kilograms of",
  "fluid ounces of",
  "cups of",
  "pints of",
  "quarts of",
  "gallons of"
};
void setup() {
  size(432, 648, PDF, "chaine.pdf"); // 6 x 9 inches; size() must come first in setup()
  font = createFont("AlexandriaFLF.ttf", 12);
  textFont(font);
}
// NOTE: the five ingredient arrays used below (ultra_soft_ingredients,
// soft_ingredients, neutral_ingredients, sharp_ingredients,
// ultra_sharp_ingredients) were truncated from this excerpt.
void draw() {
  PGraphicsPDF pdf = (PGraphicsPDF) g;
  PGraphics maskImage;
  StringList all_ingredients = new StringList();
  int num_ingredients = int(random(4, 9));
  String final_string = "";
  String title = "";
  for (int count2 = 0; count2 < num_ingredients; count2++) {
    rg1 = new RiGrammar();
    rg1 = rg1.loadFrom("recipes.json", this);
    rg2 = new RiGrammar();
    rg2 = rg2.loadFrom("recipes2.json", this);
    rg3 = new RiGrammar();
    rg3 = rg3.loadFrom("recipetitle.json", this);
    String var1 = rg1.expand();
    String var11 = rg2.expand();
    String var111 = rg3.expand();
    String ingred = " ";
    int rand_ing = int(random(0, 5));
    if (rand_ing == 0) {
      ingred = ultra_soft_ingredients[int(random(0, ultra_soft_ingredients.length))];
    } else if (rand_ing == 1) {
      ingred = soft_ingredients[int(random(0, soft_ingredients.length))];
    } else if (rand_ing == 2) {
      ingred = neutral_ingredients[int(random(0, neutral_ingredients.length))];
    } else if (rand_ing == 3) {
      ingred = sharp_ingredients[int(random(0, sharp_ingredients.length))];
    } else {
      ingred = ultra_sharp_ingredients[int(random(0, ultra_sharp_ingredients.length))];
    }
    int rand_ing2 = int(random(0, numerics.length));
    int rand_ing3 = int(random(0, measurement.length));
    String list_ing;
    list_ing = numerics[rand_ing2] + measurement[rand_ing3] + ingred;
    RiString var2;
    RiString var22;
    RiString var3;
    RiString var33;
    var2 = new RiString(var1);
    var22 = new RiString(var11);
    var3 = new RiString(ingred);
    var33 = new RiString(var111);
    var2 = var2.concat(var3);
    var2 = var2.concat(var22);
    final_string = final_string + " " + var2.text();
    title = var33.text();
    all_ingredients.append(list_ing); // keep the ingredient line for the printed list
  }
  int store1 = 8;
  int store2 = 8;
  PGraphics[] pgs = new PGraphics[store1];
  PGraphics[] pgs2 = new PGraphics[store2];
  if (count <= 16) {
    // build layered translucent blobs for the contents of the pot
    for (int i = 0; i < store1; i++) {
      pgs[i] = createGraphics(width, height);
      pgs2[i] = createGraphics(width, height);
      pgs[i].fill(int(random(0, 255)), int(random(0, 255)), int(random(0, 255)), 30);
      pgs[i].ellipse(int(random(141, 291)), int(random(249, 399)), int(random(150, 300)), int(random(150, 300)));
      // the original if/else here had identical branches, so one call suffices
      pgs2[i].fill(int(random(0, 255)), int(random(0, 255)), int(random(0, 255)), 20);
      pgs2[i].ellipse(int(random(141, 291)), int(random(249, 399)), int(random(20, 50)), int(random(20, 50)));
    }
    maskImage = createGraphics(width, height);
    maskImage.ellipse(width/2, height/2, 150, 150);
    ellipse(width/2, height/2, 160, 160);
    ellipse(width/2, height/2, 170, 170);
    text(title, 60, 65);
    text("Ingredients:", 60, 100);
    for (int i = 0; i < all_ingredients.size(); i++) {
      text(all_ingredients.get(i), 70, 110 + (i * 13), 400, 600);
    }
    final_string = final_string.substring(1);
    text(final_string, 60, 470, 312, 628);
  }
  // page is 432 x 648
  if (count == 0) {
    text("Recipes for the Mad...", 250, 291);
    text("by chaine", 250, 321);
  }
  if (frameCount == 16) {
    exit(); // stop after 16 pages, finishing the PDF (reconstructed; the bodies were missing)
  } else {
    pdf.nextPage(); // start a fresh page for the next recipe (reconstructed)
  }
  count += 1;
}


I love Allison Parrish's comparison of literature with space exploration: even in literature there are places that remain mostly unexplored because they are considered "taboo," such as books that only repeat a single word or are written in a made-up generated language. This point stuck with me because it made me wonder which other fields, beyond space and literature, hold this same exciting opportunity. It makes me imagine what it would be like to explore vastly different fields with automatic systems, programs, or robots, in ways people had never thought of or never thought were worthy of much exploration.



When we were first brainstorming ideas, we initially thought of creating a robot hand. We pursued it in a couple of different ways and printed/sculpted several different hand blueprints. Making the hands look intricate and elegant was very difficult, as keeping track of and controlling all of the parts of a hand (the knuckles, joints, etc.) is not a trivial task. Eventually, we scrapped that idea and created a divided environment in the form of a partitioned box: the upper half represented land and its creatures, while the bottom half represented the sea. The stopping and restarting of the motors creates an illusion that makes each creature jitter and then jolt to a stop.
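The stop-and-restart motor timing can be modeled as a simple duty cycle. This is an illustrative sketch in plain Java (rather than the Arduino C++ we actually ran); the class name and durations are made up for the example.

```java
public class JitterMotor {
    final long runMs;   // how long the motor spins
    final long pauseMs; // how long it rests, producing the "jolt to a stop"

    JitterMotor(long runMs, long pauseMs) {
        this.runMs = runMs;
        this.pauseMs = pauseMs;
    }

    // Given elapsed milliseconds, report whether the motor should be powered.
    // The schedule repeats with period runMs + pauseMs.
    boolean isOn(long elapsedMs) {
        return elapsedMs % (runMs + pauseMs) < runMs;
    }

    public static void main(String[] args) {
        JitterMotor m = new JitterMotor(400, 250); // spin 400 ms, rest 250 ms
        for (long t = 0; t <= 1200; t += 200) {
            System.out.println(t + "ms: " + (m.isOn(t) ? "spin" : "stop"));
        }
    }
}
```

On an Arduino the same schedule would be driven from `millis()` inside `loop()`, toggling the motor pin; the abrupt transition from "spin" to "stop" is what gives each creature its jittering, jolting motion.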

The whole process was very time consuming: coming up with ideas was easy enough, but executing them proficiently was much more difficult. Nicolas organized the overall structure of the project and did most of the coding, like creating the environment, and I mostly handled the art. We had originally planned to incorporate other features (such as ridges surrounding our motor so the wires would jump up and down and move our animals), but the logistics of the wire and the weight of the clay made that task too difficult. Overall, creating a project with the Arduino was, for us, new, frustrating, enjoyable, and rewarding.

Initial sketch:

Creating a blobfish:

Baking the clay with a makeshift hairdryer oven:

Accumulating creatures:


Machine Drawings by Ken Rinaldo is a series of drawings created from 3D models that were compressed until nearly 2D and drawn by a rapid-prototyping printer robot; the drawings are then hand painted and embellished. What interested me most about this piece is the relationship between chaos and order. While the printer robot does its best to copy things perfectly, unknown variables still cause machine flaws that are sometimes beautiful, although unplanned. This clash of media makes for subjects that look extremely abstract yet vaguely familiar. A person's outline is distinct and unmistakable, but when other features come into play, such as someone's club foot, the drawing exaggerates that feature in an interesting way. Part of me would like to see how the project would have looked fully in 3D, but overall I'm glad the work was compressed into 2D from its 3D origins; that was the only way to create such specific and abstract drawings.




Mouth open:

My first idea (scrapped):

Idea that I pursued:

Starting off this project, I was debating between two ideas: one where I would play around with rotations, and another where I would manipulate the eyes depending on the facial expression (especially the mouth). Since I wanted to express emotions creatively, I pursued the second. At first, I wanted the face to express sadness by crying whenever the mouth opened and the eyebrows furrowed, but I had trouble making the tears look fluid, so it ended up looking like a scary person shooting out spherical beams. That mistake gave me new ideas: I thought it would be cool for the person to transform into an evil Medusa with red pupils when you made a scary face. I also ended up removing the mouth entirely, because the focus was on the eyes. Aesthetically, I wish I had played around with the eyes' outlines more, since they are quite distracting given that none of the other shapes have an outline. I also wish I could have diversified the eyes' shapes to make them look more evil and (possibly?) scary. Functionally, I wish the eyes had shot out beams rather than circles, or had better conveyed the user being frozen by her stone-turning eye beams. In terms of the relationship between the facial motions and their treatment, the motions came first, and I only adapted to the possibilities of those motions.

// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC
// 2012 Dan Wilcox
// for the IACD Spring 2012 class at the CMU School of Art
// adapted from from Greg Borenstein's 2011 example
import oscP5.*;
OscP5 oscP5;
//I used Dan Shiffman's box2d adaptation:
import shiffman.box2d.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.*;
import org.jbox2d.dynamics.joints.*;
Box2DProcessing box2d;
ArrayList<Boundary> boundaries;
ArrayList<ParticleSystem> systems;
ArrayList<Box> boxes;
// num faces found
int found;
// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();
// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;
PImage outerEye;
float aVelocity = .05;
boolean mouseVelocity = false;
float angle = 0;
float amplitudeX = 200;
float amplitudeY = 200;
float theta = 0;
PVector location;
float centerX;
float centerY;
void setup() {
  size(640, 480);
  box2d = new Box2DProcessing(this);
  box2d.setGravity(0, -20);
  systems = new ArrayList<ParticleSystem>();
  boundaries = new ArrayList<Boundary>();
  boxes = new ArrayList<Box>();
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
  outerEye = loadImage("circlebig.png");
  //boundaries.add(new Boundary(0,490,1280,10,0));
}
void draw() {
  // (the loop bodies below were truncated in the original; the calls shown are
  // the usual run/display pattern from Shiffman's box2d examples)
  for (ParticleSystem system : systems) {
    system.run();
  }
  for (Boundary wall : boundaries) {
    wall.display();
  }
  float varVelocity = calcVelocity(aVelocity);
  PVector angularVelocity = new PVector(angle, varVelocity);
  PVector amplitude = new PVector(amplitudeX, amplitudeY);
  PVector location = calculateCircle(angularVelocity, amplitude);
  //PVector centerCircle = calculateCenter(centerX, centerY);
  if (found > 0) {
    drawOscillatingX(location); // draw the eyes only when FaceOSC reports a face
  }
  for (Box b : boxes) {
    b.display();
  }
}
void semiTransparent() {
  // fade the background based on how open the mouth is
  float backColor = map(mouthHeight, 1, 5, 255, 0);
  fill(backColor, backColor, backColor, 40);
  rect(0, 0, width, height);
}
//basics of eye blink, iris movement from:
float calcVelocity(float aVelocity) {
  float velocity = aVelocity;
  if (mouseVelocity) {
    // optionally let the mouse position drive the oscillation speed
    velocity = map(mouseX, 0, width, -1, 1);
  }
  return velocity;
}
PVector calculateCircle(PVector angularVelocity, PVector amplitude) {
  float x = amplitude.x * cos(theta);
  float y = amplitude.y * sin(theta);
  location = new PVector(x, y);
  theta += angularVelocity.y;
  return location;
}
PVector calculateCenter(float centerX, float centerY) {
  PVector centerCircle = new PVector(centerX, centerY);
  return centerCircle;
}
void drawOscillatingX (PVector location) {
    float mouthScalar = map(mouthWidth, 10, 18, 0, 1.5); // make a scalar for location.x as a function of mouth
    float newPosX = map (posePosition.x, 0, 640, 0, width);
    float newPosY = map(posePosition.y, 0, 480, 0, height);  
    translate(width - newPosX, newPosY-100);
    float irisColR = map (mouthHeight, 1, 5, 102, 204);
    float irisColG = map (mouthHeight, 1, 5, 204, 51);
    float irisColB = map (mouthHeight, 1, 5, 255, 0);
    float leftEyeMove = map(location.x, - amplitudeX, amplitudeX, -25, 33);
    translate (leftEyeMove, 0);
    //Left iris
    fill(irisColR, irisColG, irisColB);
    float eyeMult = map (mouthHeight, 1, 5, 1, 2);
    float irisSizeL = map (eyeLeft, 2, 3.5, 0, 50);
    ellipse(-100, 0, irisSizeL * eyeMult, irisSizeL * eyeMult);
    float eyeOutlineCol = map (mouthHeight, 1, 5, 0, 255);
    float rightEyeMove = map(location.x, - amplitudeX, amplitudeX, -33, 25);
    translate(rightEyeMove, 0);
    //right EYE
    //Right Iris
    fill(irisColR, irisColG, irisColB);
    float irisSizeR = map (eyeRight, 2, 3.5, 0, 50);
    ellipse(100, 0, irisSizeR * eyeMult, irisSizeR * eyeMult);
    //Right Pupil
    //get eye information and set scalar
    float blinkAmountRight = map (eyeRight, 2.5, 3.8, 0, 125);
    float blinkAmountLeft = map (eyeLeft, 2.5, 3.8, 0, 125);
    float eyeMultiplier = map (mouthHeight, 1, 5, 1, 3);
    // right eye size, blink and movement
    ellipse (100, 0, amplitudeX *.6, blinkAmountRight * eyeMultiplier); //scalar added to eyeHeight
    if (eyeRight < 2.7) {
      fill(255, 230, 204);
      ellipse(100, 0, amplitudeX * .6, blinkAmountRight * 1.6 * (4 * eyeMultiplier / 5)); // closed-lid overlay
    }
    //left eye size, blink, and movement
    ellipse(-100, 0, amplitudeX * .6, blinkAmountLeft * eyeMultiplier);
    if (eyeLeft < 2.7) {
      fill(255, 230, 204);
      ellipse(-100, 0, amplitudeX * .6, blinkAmountLeft * 1.6 * (4 * eyeMultiplier / 5)); // closed-lid overlay
    }
    if (mouthHeight > 3.3) {
      // when the mouth opens wide, spawn "beam" boxes at each eye
      // (the original was cut off here; adding the boxes to the list is assumed)
      Box p = new Box((width - posePosition.x - 100), (posePosition.y - 50));
      Box q = new Box((width - posePosition.x + 100), (posePosition.y - 50));
      boxes.add(p);
      boxes.add(q);
    }
  }
public void found(int i) {
  println("found: " + i);
  found = i;
}
public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}
public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y);
  posePosition.set(x, y, 0);
}
public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}
public void mouthWidthReceived(float w) {
  println("mouth width: " + w);
  mouthWidth = w;
}
public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}
public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}
public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}
public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}
public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}
public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}
public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}
// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}
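Nearly every gesture-to-visual mapping in the sketch above goes through Processing's map() function, which is plain linear interpolation. Here is a self-contained Java equivalent, using the sketch's actual mouthHeight-to-iris-red mapping (range 1..5 mapped to 102..204) as the example:

```java
public class MapDemo {
    // Re-maps value from the range [start1, stop1] into [start2, stop2],
    // matching Processing's map(); note that it does NOT clamp, so values
    // outside the input range extrapolate outside the output range.
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        // mouthHeight runs roughly 1 (closed) to 5 (wide open);
        // the sketch maps it to an iris red channel of 102..204.
        System.out.println(map(1, 1, 5, 102, 204)); // 102.0
        System.out.println(map(3, 1, 5, 102, 204)); // 153.0
        System.out.println(map(5, 1, 5, 102, 204)); // 204.0
    }
}
```

The lack of clamping is why raw FaceOSC values slightly outside the expected gesture range can push colors or sizes past their intended limits; wrapping the result in constrain() is the usual fix when that matters.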


Nova Jiang's "Figurative Drawing Device" (link here) was exhibited at the New Wight Gallery in Los Angeles. The device requires two people, a designated tracer and a person to be outlined, and reproduces the outline with clearly evident imperfections. I was drawn to this piece because of its personal and irregular nature: no two outlines would ever be the same, and each also depends on the tracer. The device seems to be made up of a series of metal bars that translate the tracer's larger outline down to something that fits on a sketchbook page. Although I love this piece overall, I wonder how the traces would look if the outline were completely black, creating a stronger contrast with the white paper background. I respect the social and psychological elements of the piece, the relationships it creates, and the fact that a single outline is not something that can be done quickly or perfectly. The poser must stay relatively still in poses that may be hard to maintain, while the outliner must focus on drawing as well as he or she can. Upon completion, the drawing serves as an interpretation of the participants' combined efforts, which is something I find exciting.


This app supports however many people want to join, as it is an interactive drawing canvas. Simply click on the screen to shoot out paint balls the same color as you, press the left/right arrow keys to grow smaller/bigger, press q for "party mode" (anyone can toggle it on or off), and any other key to respawn with a different size and color.


The agario canvas is a drawing board whose brush changes according to the player's own size and color.
Originally, I wanted to make an endless platformer of some sort with randomly generated holes: when a player crashed into the walls, they would explode and carve out the platformer for other people to get further. I first tried building this on top of the agario template's built-in centering functionality. There were a lot of issues with this, however, so I scrapped that idea and created something entirely different while still using the agario template. I liked the idea of a player moving around freely inside his or her own painting. The trickiest part of this assignment was getting players to shoot out paint and keep those marks positioned relative to a canvas larger than the visible window. Although I implemented the core functions, I wish I had added more: players turning into different shapes to shoot something other than a circle, players controlling their own paintballs (swaying them back and forth), and so on. In terms of design, my canvas is based on equal roles, with many people painting alongside many other people. It is a shared space where people can paint with their own "bodies."
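The "marks relative to a bigger canvas" bookkeeping boils down to a camera transform: paint marks are stored in fixed world coordinates, and since the camera follows the player, each mark is drawn at (world - player) + screen center. This is an illustrative Java sketch of that idea; the class and method names are mine, not from the agario template.

```java
public class WorldCanvas {
    final float screenW, screenH;

    WorldCanvas(float screenW, float screenH) {
        this.screenW = screenW;
        this.screenH = screenH;
    }

    // Convert a world-space point to screen space for a camera
    // centered on the player's world position.
    float[] worldToScreen(float worldX, float worldY, float playerX, float playerY) {
        float sx = worldX - playerX + screenW / 2;
        float sy = worldY - playerY + screenH / 2;
        return new float[] { sx, sy };
    }

    public static void main(String[] args) {
        WorldCanvas c = new WorldCanvas(800, 600);
        // A mark left at world (1000, 1000), viewed by a player standing at (900, 950),
        // appears right and below the screen center (400, 300):
        float[] p = c.worldToScreen(1000, 1000, 900, 950);
        System.out.println(p[0] + ", " + p[1]); // 500.0, 350.0
    }
}
```

Storing marks in world space is what makes the painting persistent and shared: every player applies the same transform with their own position, so each sees the same marks from their own viewpoint.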

Previous ideas: