“3. The Critical Engineer deconstructs and incites suspicion of rich user experiences.”

I found this tenet of the Critical Engineering Manifesto curious, since rich user experiences are typically thought of as purely positive. On further thought, a very rich user interface can disguise underlying problems. A basic example would be purchasing some sort of commodity, say headphones. People might want to buy edgy, well-designed headphones even if they're overpriced and the sound quality is worse than that of alternatives. This happens with the way almost anything is packaged. When a strong user interface and convenience intersect, better alternatives and problematic engineering can be swept under the rug. A basic (perhaps opinionated) example is the popularity of Venmo. Linking your banking info to this mobile app rarely raises red flags because of its convenience, despite the fact that paying someone back in cash isn't much of a hassle. This surely extends to issues whose consequences go beyond the individual.


The Beginner’s Guide was a PC game I played on my classmate drewch’s Steam account (funnily enough, we both decided to create Unity games for our final projects).

The game uses narration and a series of works by another (likely fictional) game designer to construct a sense of this unknown artist’s character. The game is powerful not because of intensely beautiful visuals, story (in the traditional sense), or fun gameplay. It’s really about the way the viewer projects their own memories onto the works this unknown game designer seemingly created about his own problems.

Seeing games used as a genuine art medium made me want to explore games as part of my own practice. I’m also very interested in the use of light within artworks, and the simulation of light within the Unity game space is something I wanted to play with. This led me to make something in Unity for the final project.


ChainFORM from the MIT Media Lab presents some interesting insight into the present and future of technology. The bit where they use the chain to detect and correct a person’s posture was, to me, an unexpected application. There’s such a broad range of uses that it’s fascinating to think about where it could wind up in the future. I can see it being used in children’s toys, clothing for performance art, robotic arms, future styluses… the list goes on.


Ever since I played The Beginner’s Guide, a PC game on Steam, I’ve seen games as a more accessible medium for making art. Though I’ve played plenty of games I would consider meaningful, they were very high-production, in-depth games. The Beginner’s Guide includes a series of short games that each convey something even without a true storyline. Unity, a free game engine, seemed like the perfect entry point to experiment with making a basic “game.” My main goal was to make something in Unity and get past the initial learning curve, adding Unity to the list of tools I’m familiar with.

Since it would be difficult to create complicated visuals in this short time, light and sound became the main components of the project. My concept came about while I was listening to a song that felt really nostalgic. That nostalgia made me think about the close relationship between emotions, memory, and sound, and I wanted to create a space where these sound memories could live. Each orb of sound in the space is related to a specific memory I have. None of the sounds were made for this project; some were found online while others were ripped from existing videos I have.

Code can be found here
Explore the project here


For my final project I’d like to make something in Unity. Though I’d really like to make something meaningful, I’m more concerned with actually jumping into Unity for the first time and making something. I’d like to have almost everything coded.


I wanted to decorate two bodies through this mocap project. Both figures would be constructed from cubes. Figure A would be the light source yet mostly dark, since its inner light would cast shadows on the visible faces of its cubes, and figure B would be lit only by figure A’s light. I liked the irony of the light source being dark and the figure without its own light being bright.





import java.util.List;


class PBvh {
  BvhParser parser;

  PBvh(String[] data) {
    parser = new BvhParser();
    parser.parse(data);
  }

  void update(int ms) {
    parser.moveMsTo(ms); // 30-sec loop
  }

  void draw() {
    // Previous method of drawing, provided by Rhizomatiks/Perfume
    for (BvhBone b : parser.getBones()) {
      pushMatrix();
      translate(b.absPos.x, b.absPos.y, b.absPos.z);
      ellipse(0, 0, 2, 2);
      popMatrix();
      if (!b.hasChildren()) {
        pushMatrix();
        translate(b.absEndPos.x, b.absEndPos.y, b.absEndPos.z);
        ellipse(0, 0, 10, 10);
        popMatrix();
      }
    }
  }

  // Alternate method of drawing, added by Golan
  void drawBones(int light) {

    List<BvhBone> theBvhBones = parser.getBones();
    int nBones = theBvhBones.size(); // How many bones are there?

    // Draw a line connecting two specific bones (indices 1 and 16)
    BvhBone cBone = theBvhBones.get(1);
    PVector boneCoordc = cBone.absPos;
    float x2 = boneCoordc.x; // Get the (x,y,z) values
    float y2 = boneCoordc.y; // of its start point
    float z2 = boneCoordc.z;
    BvhBone bBone = theBvhBones.get(16);
    PVector boneCoordb = bBone.absPos;
    float x3 = boneCoordb.x; // Get the (x,y,z) values
    float y3 = boneCoordb.y; // of its start point
    float z3 = boneCoordb.z;
    line(x2, y2, z2, x3, y3, z3);

    BvhBone dBone = theBvhBones.get(0); // Get the root bone
    PVector boneCoordx = dBone.absPos;  // Get its start point
    float xl = boneCoordx.x; // Get the (x,y,z) values
    float yl = boneCoordx.y; // of its start point
    float zl = boneCoordx.z;

    if (light == 1) {
      // Place a point light slightly above the root bone
      pointLight(255, 255, 225, xl, yl - 60, zl);
    }

    for (int i = 0; i < nBones; i++) { // Loop over all the bones
      BvhBone aBone = theBvhBones.get(i); // Get the i'th bone

      PVector boneCoord0 = aBone.absPos; // Get its start point
      float x0 = boneCoord0.x; // Get the (x,y,z) values
      float y0 = boneCoord0.y; // of its start point
      float z0 = boneCoord0.z;
      String boneName = aBone.getName();

      if (aBone.hasChildren()) {

        // If this bone has children,
        // draw a chain of jittered cubes from this bone to each of its children
        List<BvhBone> childBvhBones = aBone.getChildren();
        int nChildren = childBvhBones.size();
        for (int j = 0; j < nChildren; j++) {
          BvhBone aChildBone = childBvhBones.get(j);
          PVector boneCoord1 = aChildBone.absPos;

          float x1 = boneCoord1.x;
          float y1 = boneCoord1.y;
          float z1 = boneCoord1.z;

          int cubeNum = 10;
          float deltaX = (x1 - x0) / cubeNum;
          float deltaY = (y1 - y0) / cubeNum;
          float deltaZ = (z1 - z0) / cubeNum;

          for (int c = 0; c < cubeNum; c++) {
            pushMatrix();
            translate(x0 + deltaX*c + random(-5, 5), y0 + deltaY*c + random(-10, 10), z0 + deltaZ*c + random(-5, 5));
            box(random(2, 5));
            popMatrix();
          }
        }
      } else {
        // Otherwise, if this bone has no children (it's a terminus)
        // then draw it differently.
        PVector boneCoord1 = aBone.absEndPos; // Get its end point
        float x1 = boneCoord1.x;
        float y1 = boneCoord1.y;
        float z1 = boneCoord1.z;

        boneName = aBone.getName();

        if (boneName.equals("Head")) {
          pushMatrix();
          if (light == 1) {
            //stroke(255, 50);
            translate(x1 + random(-5, 5), y1 + random(-5, 5), z1 + random(-5, 5));
          } else {
            translate(x1 + random(-5, 5), y1 + random(-5, 5), z1 + random(-5, 5));
          }
          box(random(2, 5)); // drawing call reconstructed; the original code was truncated here
          popMatrix();
        }
      }
    }
  }
}




After struggling to create even a scatterplot in D3, I’m mesmerized by pretty much any data visualization. But even without that sidenote, Rachel Binx’s representation of viral Facebook stories is really beautiful. The color choices are well thought out (and it’s nice that the different colors actually reflect the data, in this case gender).



I used the bike data to visualize the number of racks against the number of rides each station had. I wanted to see if there was a correlation between the number of racks and the popularity of the station. It turns out there isn’t: among the 19-rack stations alone, ride counts ranged from over 4,000 to under 500. The number of racks seems rather arbitrary, which calls into question how much thought went into the installation of the bike racks. Ideally I would have liked an interactive component to this data, where you’d be able to hover over each dot and see the station name it represents.
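That “no correlation” eyeball test can also be checked numerically with a Pearson correlation coefficient. A minimal sketch in plain Java (using made-up stand-in numbers, not the actual station data):

```java
public class RackCorrelation {
    // Pearson correlation: +1 or -1 means a perfect linear relation, ~0 means none.
    public static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;   // n * covariance
        double vx = sxx - sx * sx / n;    // n * variance of x
        double vy = syy - sy * sy / n;    // n * variance of y
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Hypothetical stand-in values, NOT the real rack/ride data:
        double[] racks = {19, 19, 15, 12, 19, 16};
        double[] rides = {4200, 480, 900, 2100, 1500, 300};
        System.out.printf("r = %.3f%n", pearson(racks, rides));
    }
}
```

A value of r near zero would back up the visual impression that rack count doesn’t predict a station’s popularity.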




I can hardly believe these are actual quotes from the New York Times. They’re all gold as tweets. I’m not sure whether they’re literal quotes or Markov-generated text, but it seems like they’re quotes. I’m having a hard time thinking of any context this would fit in. The image of the New York Times and these tweets are so at odds that they’re fascinating to read through.



I wanted to make some fun random milkshake ingredients with a hint of nonsensical evil. At first I was going to just have normal ingredients mixed with things like “1 cup from the blood of your enemies.” After finding milkshake recipes that actually had instructions, I decided to have real recipes with unnecessarily emotional directions. After running the code, I found that some of the directions read as more sweet than evil (since many of the hate letters were love-hate letters). The contrast of either love or hate with the cooking instructions was an entertaining idea to me.

I generated my book using Processing code. The milkshake names are randomly generated from a list of negative adjectives. The ingredients are random milkshake ingredients, and the directions are a Markov chain from the RiTa library combining hate letters from the internet with milkshake recipes. The book definitely would’ve benefitted from some sort of images, perhaps just images from #milkshake on Instagram. There was also an extreme lack of coherence. Perhaps if I’d had more recipes and hate letters I would’ve been able to tweak the Markov chain to make more grammatically correct sentences (that still had the strange mix of recipe and hate letter). The ingredients also could’ve been more entertaining with a couple of strange ingredients added. I’m having a hard time assessing the end result of my book, since nothing seems very funny after reading several iterations of the output.
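RiTa’s RiMarkov did the heavy lifting here, but the idea is simple: record which words follow each n-word prefix across all the corpora, then walk those transitions. As a rough illustration (plain Java, not the actual RiTa implementation), a minimal order-2 word-level chain might look like:

```java
import java.util.*;

public class TinyMarkov {
    // Maps each 2-word prefix to every word observed after it, across all loaded corpora.
    private final Map<String, List<String>> transitions = new HashMap<>();
    private final Random rng;

    public TinyMarkov(long seed) { rng = new Random(seed); }

    // Feed a corpus in; every consecutive word pair becomes a state.
    public void load(String text) {
        String[] w = text.trim().split("\\s+");
        for (int i = 0; i + 2 < w.length; i++) {
            String key = w[i] + " " + w[i + 1];
            transitions.computeIfAbsent(key, k -> new ArrayList<>()).add(w[i + 2]);
        }
    }

    // Walk the chain from a 2-word seed for up to n steps.
    public String generate(String w1, String w2, int n) {
        StringBuilder out = new StringBuilder(w1 + " " + w2);
        for (int i = 0; i < n; i++) {
            List<String> next = transitions.get(w1 + " " + w2);
            if (next == null) break; // dead end: no observed continuation
            String w3 = next.get(rng.nextInt(next.size()));
            out.append(' ').append(w3);
            w1 = w2;
            w2 = w3;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        TinyMarkov m = new TinyMarkov(42);
        // Two tiny stand-in corpora: a recipe line and a hate-letter line.
        m.load("blend the ice cream until smooth");
        m.load("i despise the ice in your veins");
        // From the shared prefix "the ice", the chain can jump between corpora.
        System.out.println(m.generate("the", "ice", 4));
    }
}
```

The blending effect comes from prefixes that appear in both corpora: at those states the chain can continue into either text, which is exactly why mixing recipes and hate letters produced sentences that drift between the two.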



Cover PDF:


Contents PDF:


Cover Code:

import processing.pdf.*;
PImage img;
ArrayList points = new ArrayList();
int index = 0;
float[] milkshake = {104.0, 364.0, 98.0, 367.0, 90.0, 372.0, 82.0, 377.0, 78.0, 381.0, 74.0, 386.0, 74.0, 393.0, 74.0, 398.0, 78.0, 405.0, 88.0, 410.0, 98.0, 415.0, 104.0, 416.0, 113.0, 417.0, 122.0, 418.0, 131.0, 418.0, 141.0, 418.0, 148.0, 418.0, 156.0, 417.0, 163.0, 416.0, 174.0, 414.0, 185.0, 408.0, 191.0, 404.0, 197.0, 399.0, 199.0, 394.0, 198.0, 389.0, 196.0, 383.0, 182.0, 374.0, 168.0, 371.0, 165.0, 364.0, 168.0, 352.0, 202.0, 167.0, 204.0, 162.0, 203.0, 134.0, 202.0, 125.0, 76.0, 125.0, 201.0, 123.0, 198.0, 117.0, 196.0, 109.0, 188.0, 101.0, 183.0, 102.0, 177.0, 95.0, 175.0, 88.0, 176.0, 94.0, 180.0, 61.0, 180.0, 57.0, 177.0, 52.0, 147.0, 12.0, 144.0, 10.0, 139.0, 12.0, 139.0, 18.0, 171.0, 60.0, 168.0, 61.0, 168.0, 65.0, 168.0, 81.0, 165.0, 82.0, 160.0, 81.0, 160.0, 85.0, 156.0, 83.0, 144.0, 82.0, 136.0, 81.0, 136.0, 82.0, 140.0, 81.0, 139.0, 71.0, 136.0, 64.0, 130.0, 62.0, 123.0, 63.0, 127.0, 62.0, 129.0, 54.0, 126.0, 56.0, 123.0, 60.0, 123.0, 55.0, 118.0, 50.0, 117.0, 57.0, 107.0, 43.0, 107.0, 47.0, 116.0, 55.0, 113.0, 57.0, 108.0, 53.0, 106.0, 53.0, 108.0, 61.0, 101.0, 55.0, 102.0, 59.0, 98.0, 65.0, 107.0, 66.0, 104.0, 66.0, 102.0, 72.0, 100.0, 79.0, 105.0, 87.0, 96.0, 91.0, 86.0, 102.0, 85.0, 105.0, 84.0, 111.0, 81.0, 112.0, 76.0, 112.0, 75.0, 118.0, 74.0, 124.0, 69.0, 125.0, 70.0, 163.0, 103.0, 352.0, 108.0, 355.0, 105.0, 364.0};

PShape s;
PFont font;

void setup() {
  beginRecord(PDF, "everything.pdf");
  size(1000, 500);
  font = createFont("RalewayThin.ttf", 30);
  textFont(font);

  // Build the milkshake outline from the digitized (x,y) point list
  s = createShape();
  s.beginShape();
  for (int i = 0; i < milkshake.length; i += 2) {
    s.vertex(milkshake[i] + 510, milkshake[i+1] + 20);
  }
  s.endShape(CLOSE);

  shape(s, 0, 0);
  text("Emotional Shakes", 720, 300);
  endRecord();
}

void draw() {
}

void mousePressed() {
}

Contents Code:

import processing.pdf.*;
import rita.*;
RiLexicon lex = new RiLexicon();
PFont font;
int count = 0;

RiMarkov markov;
String line = " ";
String[] files = { "../data/cAH.txt", "../data/recipes.txt", "../data/hateLetters.txt", 
  "../data/moreMilkshakes.txt", "../data/bakingBook.txt"};
String[] ingList = {"1 large or 2 small bananas", 
  "vanilla ice cream", 
  "2 1/2 cups frozen blueberries", 
  "1 1/4 cups apple juice", 
  "1 cup vanilla frozen yogurt", 
  "1/4 cup skim milk", 
  "3/4 teaspoon ground cinnamon", 
  "4 scoops vanilla ice cream or frozen yogurt", 
  "3/4 cup cold milk", 
  "1/4 cup chocolate syrup", 
  "3 maraschino cherries, stems removed", 
  "whipped topping & additional cherry", 
  "1/4 cup milk", 
  "2 scoops vanilla ice cream", 
  "1 tablespoon chocolate syrup", 
  "2 tablespoons peanut butter", 
  "3 cups vanilla ice cream", 
  "1 cup chocolate milk", 
  "3 heaping tablespoons chocolate powder", 
  "2 medium sized chocolates", 
  "3 scoops vanilla ice cream", 
  "2 tablespoons chocolate chips", 
  "1/2 banana", 
  "1 tablespoon instant coffee", 
  "2 tablespoon chocolate syrup", 
  "1 cup milk", 
  "1 teaspoon vanilla", 
  "1 cup milk", 
  "1 pkg. (4-serving size) Jello, any flavor", 
  "1 pint vanilla ice cream, softened", 
  "2 cups (480ml) vanilla ice cream", 
  "1 cup (240ml) whole milk", 
  "1/4 cup (60ml) half & half", 
  "2 1/2 tablespoons (35g) sugar", 
  "1/8 teaspoon vanilla extract", 
  "4 scoops vanilla ice cream", 
  "1 cup peaches, peeled, pitted, cut up into chunks", 
  "1 cup cold orange juice", 
  "2 strawberries (garnish)", 
  "1/4 cup milk", 
  "1 scoop lime sherbet", 
  "1 scoop vanilla ice cream", 
  "handful of ice cubes", 
  "1 cup milk", 
  "2 teaspoons vanilla syrup", 
  "1 teaspoon malted milk powder", 
  "2 scoops vanilla ice cream", 
  "1 cup chocolate or vanilla ice cream", 
  "1/2 cup whole milk", 
  "3 - 4 tablespoons chocolate syrup", 
  "1 cup milk", 
  "2 scoops of your favorite ice cream", 
  "2 tablespoons instantmalted milk powder", 
  "8 ounces chai concentrate", 
  "14 ounces vanilla ice cream", 
  "vanilla ice cream", 
  "about 6-8 Oreo cookies", 
  "4 oz. canned pumpkin, chilled", 
  "1-1/2 cups cold low fat milk", 
  "8 teaspoons sugar", 
  "1/8 teaspoon cinnamon", 
  "1 teaspoon vanilla", 
  "1/3 cup frozen grape juice concentrate, thawed", 
  "1 banana, peeled and sliced", 
  "1 cup milk", 
  "1/2 cup ice cubes", 
  "1 cup milk", 
  "2 scoops vanilla ice cream", 
  "4 large fresh strawberries (or 1/2 cup frozen strawberries)", 
  "1 large banana, chopped", 
  "5 cups chocolate or vanilla ice cream", 
  "2 1/2 cup milk", 
  "1 1/4 cup canned unsweetened coconut milk", 
  "1 large pineapple, peeled, cored, chopped", 
  "5 ripe bananas, peeled", 
  "3 medium papayas, peeled, seeded, chopped", 
  "32 strawberries, hulled", 
  "1/2 cup milk", 
  "1/2 cup whipped cream", 
  "2 tablespoons cherry juice", 
  "1/2 cup ice", 
  "2 - 3 scoops vanilla ice cream or frozen yogurt", 
  "1/3 cup cold milk", 
  "3 tablespoons chocolate malt syrup", 
  "whipped cream topping"};

int x = 160, y = 240;
int ingIndex0 = int(random(ingList.length));
int ingIndex1 = int(random(ingList.length));
int ingIndex2 = int(random(ingList.length));
int ingIndex3 = int(random(ingList.length));
String hateAdj = "";
String[] hateWords;
int offset = 40;

void setup() {
  font = createFont("CrimsonText-Roman.ttf", 16);
  textFont(font);
  hateWords = loadStrings("hateWords.txt");
  hateAdj = hateWords[int(random(hateWords.length))];
  size(500, 500, PDF, "arialyBook.pdf");

  // create a markov model with n=2 from the files
  markov = new RiMarkov(2);
  markov.loadFrom(files, this);
}

void draw() {
  PGraphicsPDF pdf = (PGraphicsPDF) g;  // Get the renderer
  text(hateAdj + " Shake", 50, 30 + offset);
  text("Ingredients", 50, 80 + offset);
  text(ingList[ingIndex0], 60, 105 + offset);
  text(ingList[ingIndex1], 60, 130 + offset);
  text(ingList[ingIndex2], 60, 155 + offset);
  text(ingList[ingIndex3], 60, 180 + offset);
  text("Directions", 50, 215 + offset);
  text(line, x + 10, y + 175 + offset, 400, 400);

  // Pick a new title adjective and new ingredients for the next page
  hateAdj = hateWords[int(random(hateWords.length))];
  ingIndex0 = int(random(ingList.length));
  ingIndex1 = int(random(ingList.length));
  ingIndex2 = int(random(ingList.length));
  ingIndex3 = int(random(ingList.length));
  if (!markov.ready()) return;

  x = y = 50;
  String[] lines = markov.generateSentences(3);
  line = RiTa.join(lines, " ");
  if (count == 0) {
    text("the intersections of milkshakes and hate letters", 250, 250);
  }
  if (frameCount == 42) {
    exit();  // stop after 42 pages (reconstructed; the original body was lost)
  } else {
    pdf.nextPage();  // Tell it to go to the next page
  }
  count += 1;
}

void mouseClicked() {
}



When thinking about a concept for the face tracker, I was more interested in how the tracker would control something other than a face. Rather than use the movements of the face to control something, I wanted to control something with the lack of movement.


import processing.sound.*;
SoundFile file;
Amplitude amp;
int dim;
int stillEnough = 0;
float vol = 0;
float thinness = 0;
float moonX = 525;

Particle[] particleList = new Particle[50];
int count = 0;
float globalRMS = map(.15, 0, .3, 0, 20);

// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC
// 2012 Dan Wilcox
// for the IACD Spring 2012 class at the CMU School of Art
// adapted from Greg Borenstein's 2011 example
import oscP5.*;
OscP5 oscP5;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

void setup() {
  size(640, 480);
  dim = width/2;
  for (int o = 0; o < particleList.length; o++) {
    particleList[o] = new Particle();
  }
  // Load a soundfile from the /data folder of the sketch
  file = new SoundFile(this, "portTown.mp3");
  amp = new Amplitude(this);
  amp.input(file); // analyze the song's amplitude (reconstructed; lost in the original formatting)
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
}

class Particle {
  PVector position;
  PVector velocity;
  float offset = random(-7, 1);

  Particle() {
    position = new PVector(width/2, height/2);
    velocity = new PVector(1 * random(-1, 1), -1 * random(-1, 1));
  }

  void update() {
    // Add the current speed to the position, wrapping horizontally
    // and bouncing vertically.
    position.add(velocity);
    if (position.x > width) {
      position.x = 0;
    }
    if (position.x < 0) {
      position.x = width;
    }
    if ((position.y > height) || (position.y < 0)) {
      velocity.y = velocity.y * -1;
    }
  }

  void display() {
    // Display a pair of circles sized by the song's current loudness
    //stroke(200);
    fill(255, 255, 224, thinness);
    ellipse(position.x, position.y, globalRMS + offset, globalRMS + offset);
    ellipse(position.x, position.y, (globalRMS + offset) * .5, (globalRMS + offset) * .5);
  }
}

void drawGradient(float x, float y, float r, float g, float b) {
  for (float ra = x * .75; ra > 0; ra -= 3) {
    fill(r, g, b);
    ellipse(x, y, width * 1.5, ra);
    r = (r - 2) % 360;
    b = (b - .5) % 360;
    g = (g - 2) % 360;
  }
}

int still() {
  // The face counts as "still" while it stays within 50px of center
  if (posePosition.x < width/2 + 50 && posePosition.x > width/2 - 50) {
    return 1;
  } else {
    return 0;
  }
}

void draw() {

  ellipse(moonX, 75, 70, 70);
  ellipse(moonX + 15, 75, 50, 50);

  fill(0, 100, 0, 200);
  rect(0, height/1.4, width, height);
  fill(0, 191, 255);
  drawGradient(width/2, height * .99, 0, 191, 255);

  //ellipse(width/2, height * .92, width * 1.5, height/3);

  if (still() == 1) {
    if (thinness < 150) {
      thinness += 10;
    }
    if (vol == 0) {
      file.loop(); // begin playback the first time the viewer is still (reconstructed; the original body was lost)
    }
    if (vol < 1) {
      vol += .001;
      file.amp(vol);
    }
    float rms = amp.analyze();
    float mapRMS = map(rms, 0, .3, 10, 25);
    globalRMS = mapRMS;
  } else {
    if (vol > 0) {
      vol = max(0, vol - .01);
      file.amp(vol);
    }
  }

  if (posePosition.x > width/2 + 50) {
    for (int m = 0; m < particleList.length; m++) {
      if (particleList[m] != null) {
        particleList[m].position.x -= 2;
      }
    }
    if (thinness > 0) {
      thinness -= 10;
    }
    moonX -= 1;
  }

  if (posePosition.x < width/2 - 50) {
    for (int n = 0; n < particleList.length; n++) {
      if (particleList[n] != null) {
        particleList[n].position.x += 2;
      }
    }
    if (thinness > 0) {
      thinness -= 10;
    }
    moonX += 1;
  }

  for (int o = 0; o < particleList.length; o++) {
    if (particleList[o] != null) {
      particleList[o].update();
      particleList[o].display();
    }
  }

  if (found > 0) {
    //translate(posePosition.x, posePosition.y);
    fill(255, 255, 255, 20);
    ellipse(posePosition.x, posePosition.y, 16, 16);
  }
}

public void found(int i) {
  //println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  //println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  //println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  //println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    //println("UNPLUGGED: " + m);
  }
}


I found Jeremy Bailey to be the most memorable personality at Weird Reality. I heard his presentation, experienced his AR Pregnancy Simulator at the VR salon, and talked to him a little in person. It was definitely fun to be able to talk to him and then hear his commentary in the Pregnancy Simulator. The piece itself is pretty relaxing with its stream of commentary, calm background music (birds chirping, if I remember correctly), and its setting in a field. But getting to see other people wear his VR headset and rub their imaginary bellies at the VR salon was probably my favorite part of the piece. People’s movements almost seemed choreographed: looking around, then at the hands, then rubbing their belly. Being able to manipulate people’s movements in an open environment is both very entertaining and a strange concept.


arialy – Plot





I really enjoy the aesthetics of the handful of plotter art pieces I’ve seen. I wanted to combine the utility of a plotter with a representational image. I liked the idea of making a tunnel from a series of incrementing curves. It was an interesting challenge to make this very mechanical process appear more organic. The plotter actually helped with this, since the pen didn’t apply the ink consistently and the machine started to have tiny wavers in the line. I plotted it at both 5x6in and 2.5x3in, and the very small scale actually made it a much more intimate image. I hand-colored the person, and if I were to plot it again I would hand-draw the lines of the person as well. The contrast between the plotted and hand-drawn parts of the piece is something I’d like to explore.


I’m not very familiar with varying sound with code, which maybe makes me even more intrigued by this project. I have no idea how complex implementing their algorithm was, but it’s very engaging to see dance, and movement in general, change sound. There seems to be a little lag between their movement and the sound it creates, but nonetheless two methods of expression, movement and sound, collide here. I’d definitely like to see more projects like it.


The initial velocities of the different dots definitely should’ve been randomized so they weren’t all moving in the same direction. Lauren’s point that the piece lacks a strong focal point is accurate. It might be slightly nicer to look at had I chosen a nicer color (maybe just a clean white), and giving each hour a slightly different color would also help. Making the piece more intuitively represent the hours left in the day is important, though I really like how it breaks down the hours used up so far.


I really love how playful this piece is. The idea of mirroring a person’s silhouette isn’t the most original, but the use of stuffed animal penguins is incredibly fun. It’s a unique use of the material, and the contrast of the black and white works very well. The algorithm itself isn’t very complex as far as I can tell: the penguins turn to represent the person’s silhouette. Without anyone there, consecutively larger circles in the grid of penguins are set to turn. I really do enjoy this piece even if it’s not super complex; it’s simple and entertaining.




I focused more on the process of creating this piece than on the end result. I wanted to see what would happen when I made the end points of lines trail around the opposing edges of the canvas border. I made one line do so and used mouseClicked to find its coordinates later on. It was satisfying to see what pattern it would make in the end. I could have pushed the pattern further, perhaps by creating another square in the middle, and by finding more sense in the number of lines I used.


1A. I enjoy using Arduinos. It’s pretty complex when you analyze all of its components, but it’s also very orderly. If there isn’t electricity running through a pin, that pin is turned off. There isn’t too much disorder when it comes to running electricity through wires.

1B. The Problem of Authenticity – I see art as a form of self-expression. I’m not sure what to make of art that I make and only afterwards find a meaning for. If I accidentally create something I like, I’m unsure whether it’s an expression of myself. When it comes down to it, I’m the one who decides whether it’s art, so I would still feel ownership of it even if it’s accidentally generated.


This piece by TeamLab was the catalyst that got me interested in new media arts. The animation is projected onto the viewers in the gallery, giving the whole room a sense of movement. The music combined with the motion of the work creates an emotional and almost overwhelming experience. TeamLab is a Japanese collective of engineers, animators, designers, artists, programmers, etc. I’m not sure how many people work on a single piece or how long it may have taken them. I imagine this animation required custom software, but I’m not very well versed in animation software. Their work is clearly rooted in and influenced by prior Japanese art. TeamLab makes a lot of public art and work geared towards children, so their pieces are both very engaging and relatable to a general audience.

arialy – clock


While putting pen to paper, I had a pretty difficult time figuring out specifically what I wanted to make. I had general themes I liked, such as having time break things apart, but it took me a while to actually get into p5. My final product generally has a similar vibe to what I was going for; I do enjoy the way the circles slowly fall apart. The larger circles represent hours left in the day and are composed of particles each representing one minute. A small particle breaks away from the larger hour circle as every minute passes. Every time the mouse is pressed, the hours of the day that have already passed re-explode. I think this project could be cleaned up a bit by tweaking the colors and creating more randomized velocities, as well as perhaps changing the minute circles to seconds instead.



I try to appreciate art of every form, but my preference leans towards last-word art. I think it’s necessary for mediums to evolve and break new boundaries, but I often feel I have a harder time relating to these works. I do, however, appreciate the expanding breadth of technology. I might just be more comfortable with works of art at new boundaries of technical ability. 360 videos and VR headsets may have a certain initial novelty, but as long as there’s substance to the work, I don’t think novelty diminishes its artistic value. If it further excites its viewer, even better. When people saw Michelangelo’s statues, some may have even thought they were real. The fact that this was new for its audience doesn’t take away from the endless appreciation we have for his work. Similarly for new technologies: though one may feel more excited by the mere medium of expression, there must be something substantial at the core of the work.


Gene Kogan is a New York-based artist studying machine learning through sound and visual artworks. He has dived into neural networks, the algorithms that allow them to act as artificial intelligence, and their ability to create images. Through machine learning, these networks can analyze images, handwriting, and specific objects, and search for recurring themes and styles. The resulting images often take on abstract, painterly forms naturally, but he also explores purposefully applying these algorithms to transform images and videos as a medium for creative expression. The ability of computers to almost autonomously create visuals that resemble painterly styles is quite stunning (whether abstract or combining The Starry Night and the Mona Lisa). Kogan breaks down machine learning, such as the way neural networks operate, into simpler and easier-to-understand lectures for a mass audience. In addition to the content he publishes online, he has taught classes at NYU, Bennington College, and SchoolOfMa. He has been part of international open source projects and writes code for visual and sound performances.