Arousal vs. Time

Arousal vs. Time from Miles Peyton on Vimeo.

Arousal vs. Time: a seismometer for arousal, as measured by facial expressions.

Overview

One way to infer inner emotional states without access to a person’s thoughts is to observe their facial expressions. As the name suggests, Arousal vs. Time is a visualization of excitement levels over time. The more you deviate from your resting expression, the more excited you are presumed to be. An interesting context for this tool is in everyday social interactions. Watching the seismometer while talking to a friend can generate insights into the nature of that relationship. It might reveal which person tends to lead the conversation, or who is the more introverted of the two. Watching a conversation unfold in this visual manner is both soothing and unsettling.

Inspiration

Arousal vs. Time is the latest iteration in a series of studies. After receiving useful feedback on my last foray into face tracking, I decided to rework the piece to include sound, two styrofoam heads, and text for clarity. Daito Manabe’s and Kyle McDonald’s face-related projects – “Face Instrument”, “Happy Things” – informed the sensibility of this work.

“Face Instrument” – Daito Manabe

 

“Happy Things” – Kyle McDonald

Implementation 

A casual conversation between myself and a friend was recorded on video and in XML files. I wrote the two software components of this artwork – the seismometer and the playback mechanism – in openFrameworks 0.8. I used the following three addons:

  1. ofxXmlSettings – for recording and playing back face data
  2. ofxMtlMapping2D – for projection mapping
  3. ofxFaceTracker – for tracking facial expressions

 

The set

The projection mapping on the styrofoam heads was carried out on two laptops with two pico projectors. I stored facial data in XML files, and recorded video and audio with an HD video camera and an audio recorder.

The audio file was manipulated in Ableton Live to obscure the content of the conversation. I used chroma keying in Adobe Premiere to remove the background of the video, such that the graphs would seem to emerge from behind the heads, and not from some unseen bounding box. Finally, the materials – a video file, two XML files, and an audio file – were brought together in a second “player” application, also built in openFrameworks.
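
For playback, each XML file essentially has to answer one question: what was the recorded arousal value at this moment of the video? A minimal sketch of that lookup, written in Python for brevity (the actual player is an openFrameworks app using ofxXmlSettings, and the XML layout shown here is an assumption, not the real file format):

import xml.etree.ElementTree as ET

def load_arousal_track(path):
    # Expects a hypothetical file like:
    # <session>
    #   <frame t="0.033" arousal="0.12"/>
    #   ...
    # </session>
    root = ET.parse(path).getroot()
    return [(float(f.get("t")), float(f.get("arousal"))) for f in root.findall("frame")]

def sample_at(track, t):
    # Return the most recent recorded value at video time t (in seconds),
    # so the graph stays locked to the video and audio during playback.
    value = 0.0
    for time_s, arousal in track:
        if time_s > t:
            break
        value = arousal
    return value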

Reflection

Regarding a conceptual impetus for this project, I keep thinking back to a point Professor Ali Momeni made when I showed an earlier version of this project during critique. He questioned not my craft, but my language: the fact that I used the word “disingenuous” to describe my project. I still don’t have a satisfying response to this, just more speculation.

Am I trying to critique self-quantification by proposing an alienating use of face tracking? Or am I making a sincere attempt to learn something about social interaction through technology? The ambivalence I feel toward the idea of self-quantification leads me to believe that it is worthwhile territory for me to continue to explore.

Face Seismograph

Soliciting participants on Facebook – my original scheme for the final project
I planned to print screenshots out and frame them like so

As is often the case in art, my project to capture the things that make us smile turned out to have been implemented a year before by Brooklyn artist Kyle McDonald. The embarrassing part of this is that I – unknowingly – used Kyle’s library to make my project.

In any case, this initial attempt/failure emboldened me to try something more nuanced with faces. I wanted to consider a continuum of expressions as opposed to a binary smile-on smile-off.

Face Seismograph 


Face Seismograph is a tool for recording and graphing states of excitement over time. It was written in openFrameworks using Kyle McDonald’s ofxFaceTracker addon.

Excited?
So excited
Excited!

The seismograph measures excitement by tracking the degree to which one smiles or moves their eyebrows from a resting state.
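
As an illustration of that metric, here is a minimal sketch in Python, assuming the tracker hands us a few gesture values (mouth height, eyebrow heights) each frame. The piece itself uses ofxFaceTracker inside openFrameworks; the gesture names, weights, and smoothing constant below are assumptions, not the values used in the work.

class Seismograph:
    def __init__(self, smoothing=0.9):
        self.rest = None        # gesture values captured at a resting expression
        self.level = 0.0        # smoothed excitement estimate
        self.smoothing = smoothing

    def calibrate(self, gestures):
        # Store the current gesture values as the resting baseline.
        self.rest = dict(gestures)

    def update(self, gestures):
        # Excitement = summed absolute deviation from the resting baseline,
        # low-pass filtered so the graph moves like a seismometer needle.
        if self.rest is None:
            self.calibrate(gestures)
        deviation = sum(abs(gestures[k] - self.rest[k]) for k in self.rest)
        self.level = self.smoothing * self.level + (1 - self.smoothing) * deviation
        return self.level

# Example: feed it mouth and eyebrow values each frame.
graph = Seismograph()
graph.calibrate({"mouth_height": 1.0, "eyebrow_left": 7.5, "eyebrow_right": 7.5})
print(graph.update({"mouth_height": 2.4, "eyebrow_left": 8.1, "eyebrow_right": 8.0}))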

One limitation of this approach is that in practice, internal states of excitement or arousal may not have corresponding facial expressions.

So excited
Genuinely excited
Doesn't get it
Depressed

I staged a casual conversation between myself and a friend. While we chatted about life, two instances of Face Seismograph approximated and recorded the intensity of our excitement. Viewing the history of our facial expressions, I began to notice surprising rhythms of expression.


To present this conversation, I play each recording on a separate iMac. The two recordings are synchronized via OSC. A viewer can scrub through the video on both computers simultaneously.
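
The synchronization amounts to one machine broadcasting a normalized timeline position and the other seeking to it. A sketch of the sending side, in Python with the python-osc package (the installation itself runs two openFrameworks players; the /scrub address, IP, and port here are assumptions):

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.20", 9000)  # hypothetical address of the second iMac

def on_scrub(position):
    # position is a normalized timeline location (0.0 to 1.0). Seek the local
    # player to it, then mirror it to the other machine so both videos, and
    # both excitement graphs, stay in step.
    client.send_message("/scrub", float(position))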

In a future iteration of this project, I’d like to highlight the comparison of excitement signatures with greater clarity. Also, I need to label my axes.

 

Miles: Looking Outwards & Project Ideas

inForm – Tangible Media Group at MIT Media Lab

inFORM – Interacting With a Dynamic Shape Display from Tangible Media Group on Vimeo.

inForm is a Dynamic Shape Display (a shape-shifting surface) by the Tangible Media Group at the MIT Media Lab. It is a step towards the group’s vision of “Radical Atoms”, or materials that change their physical form to reflect an underlying digital model. The documentation video is fairly comprehensive. It includes demos in which the display manipulates physical objects, visualizes data, and responds to events like phone calls. inForm reminds me of the pin-art impression toys I used to play with as a child.

I think this project has enormous potential to make the abstract tangible. One could use it to visualize trigonometric functions, or to represent data collected in an experiment. It also has architectural connotations. If one installs inForm in the floor of a room, the room itself can dynamically shapeshift.

“Face Visualizer”, “Face Instrument” – Daito Manabe

From Daito’s description of the project:

‘I got inspired “we can make fake smile with sending electric stimulation signals from computer to face, but NO ONE can make real smile without humans emotion”. This is words from Mr. Teruoka who is my collaborator to make devices.’

The notion of a “fake smile” is the impetus for my “Say Cheese” proposal below.

It’s interesting to conceive of the face as a means of visualizing emotional data. Daito’s project focuses on the performative aspect of the face, and the uncanny reality that a computer can manipulate a face with surgical precision.

I’m enthralled by the idea that facial expressions can be quantified and deployed on a face. It raises possibilities for cyborg theatre, performance art and retail technology.

Here are two project ideas that I thought of:

Consider a video game in which an alien jellyfish attaches itself to your character’s face, then proceeds to take control of your real-life face.

Empathy Mirror: First, FaceOSC detects the facial expression of a person standing across from you. That data is sent to a microcontroller attached to a series of electrodes, which contort your face into the other person’s expression.
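
A rough sketch of how that pipeline might be wired, in Python: listen for FaceOSC’s mouth-height messages and forward an intensity value to the microcontroller over serial. FaceOSC’s default port is 8338; the serial port, the scaling factor, and the one-byte protocol are assumptions.

import serial
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

board = serial.Serial('/dev/tty.usbmodem1411', 9600)  # hypothetical Arduino port

def forward_mouth(address, height):
    # Map FaceOSC's mouth-height value to a 0-255 intensity for the
    # (hypothetical) electrode driver on the microcontroller side.
    intensity = max(0, min(255, int(height * 50)))
    board.write(bytes([intensity]))

dispatcher = Dispatcher()
dispatcher.map("/gesture/mouth/height", forward_mouth)
BlockingOSCUDPServer(("127.0.0.1", 8338), dispatcher).serve_forever()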

MOSS, The Dynamic Robot Constructor – Modular Robotics

The MOSS Kickstarter has, at the time of this writing, raised $252,042 – more than $100,000 over the original goal. It still has 20 days to go.

MOSS is the next iteration of Modular Robotics’ previous product, Cubelets. MOSS is a construction kit for building robots from magnetically connectable cubes and other components. One can combine and program them to make an infinite number of tiny robots – no coding required. Additionally, cubes transmit power and data between each other, so there is no need to individually program or charge them.

It’s clear from the success of the Kickstarter campaign that MOSS has the potential to make robotics more accessible than ever before. However, I’m concerned about the viability of a system in which coding isn’t an option. To what extent does this approach preclude complex designs/behaviors?

In any case, I’m excited to see how MOSS develops.

Project idea 1: Say Cheese

For far too long I have suppressed a burning hatred for cultural situations which require smiling. But I have had to “grin” and bear it: the ability to smile on command is a vital skill in America.

Recent immigrants and tourists might not be familiar with American smiling conventions. They might be shocked to find, for instance, that their neutral expression is interpreted as a sign of distress.

I propose a product called “Say Cheese” with these groups in mind. Say Cheese is a device composed of a lavalier microphone, two electrodes, a wireless receiver (Pololu Wixel), and an Arduino microcontroller.

Using Google’s speech recognition (speech-to-text) API, it can pick up on certain key phrases:

• Say cheese

• You okay?

• What’s wrong?

• Are you depressed?

• You should really smile more

• Smile!

Each of these phrases prompts the device to send a current through the two electrodes, which are attached to the user’s face.
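
A hedged sketch of the trigger logic: transcribe microphone audio with a speech-to-text service (here the SpeechRecognition package’s Google recognizer) and pulse the electrodes over serial whenever a key phrase appears. The serial port and the single-byte 'S' command are assumptions.

import serial
import speech_recognition as sr

PHRASES = ["say cheese", "you okay", "what's wrong",
           "are you depressed", "you should really smile more", "smile"]

board = serial.Serial('/dev/tty.usbmodem1411', 9600)  # hypothetical Arduino port
recognizer = sr.Recognizer()

with sr.Microphone() as mic:
    while True:
        audio = recognizer.listen(mic, phrase_time_limit=5)
        try:
            heard = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # nothing intelligible was said
        if any(phrase in heard for phrase in PHRASES):
            board.write(b'S')  # the Arduino interprets 'S' as "send the smile current"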

French neurologist Duchenne de Boulogne (1806 – 1875) experimented with electrically induced facial expressions

Regarding style, I figure that electrodes can’t be much worse than earbuds. Plus, wearing a Say Cheese indicates an earnest desire to assimilate to Our Way.


Variation 1: Clerk Control, a means of enforcing customer relations standards in the retail sector. The phrase “thank you, have a nice day” could trigger a wide, toothy grin.

Variation 2: Empathy Mirror. As described in my looking outwards, the Empathy Mirror matches your expression to that of another person (detected with FaceOSC).

Project idea 2: Turn on, tune in

Alarm Gates

Every time I pass the alarm gates to exit Hunt library, I hear a shrill, high pitched squeal – but only when I’m listening to music on earbuds. I was curious about this phenomenon, so I asked about it on a sound design forum. Here is what someone had to say:

“Basically your earbuds’ cables act as an antenna and pick up the RF signal sent out by the gates to check for tags passing (which when present cause a specific signal to be picked up by the receiver in the gates which in turn triggers the alarm).

To “exploit” this all you need is a radio transmitter and a receiver ;)”

– André Engelhardt, Sound Design on Stack Exchange

I’m fascinated by the prospect of invading someone’s private musical space, even with a modest squeal. I need to research more to find out exactly how this could be implemented, but ideally the radio transmitter would be small and portable. Walking around with it would be like emitting an aural scent to anyone in range (anyone wearing headphones that is).


Backup idea: Breath Graph

Airflow sensor from Cooking Hacks

Controlled breathing is a crucial skill in many activities – meditation and singing, to name two. While it is possible to observe breathing in the moment, it is difficult to notice gradual trends. The Breath Graph is a simple device that produces a history of breathing over the course of an activity – a breath graph.

It uses an airflow sensor to measure airflow rate from the nostrils, and a thermal printer to print a breath graph – a line graph with airflow rate on the y axis and time on the x axis – when the session ends.
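
A small sketch of the logging and plotting, assuming the airflow sensor is read by an Arduino and streamed over serial as one integer per line. The real device would hand the graph to a thermal printer; here the session is simply rendered as a crude text plot.

import serial

def record_session(port='/dev/tty.usbmodem1411', seconds=60, rate_hz=10):
    # Collect one session of airflow samples; the port, duration, and sample
    # rate are assumptions.
    sensor = serial.Serial(port, 9600, timeout=1.0 / rate_hz)
    samples = []
    for _ in range(seconds * rate_hz):
        line = sensor.readline().strip()
        if line:
            samples.append(int(line))
    return samples

def text_graph(samples, width=40):
    # Airflow on the y axis (bar length), time on the x axis (one row per
    # sample), the same layout the thermal printer would reproduce.
    top = max(samples, default=1) or 1
    return "\n".join("#" * int(width * s / top) for s in samples)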


 

Sitting Above [Adam & Miles]

Sitting Above from adambd on Vimeo.


Sitting Above is a dynamic sign that displays an estimate of the number of people currently flying overhead. We were interested in bringing attention to the fact that people are always above us. Commercial air travel, once a remarkable feat, has become a necessary and even “inconvenient” reality. In keeping with our low expectations for air travel, Sitting Above uses the alienating visual language of street signs.

We experimented with two modes of representation: kinetic and numerical. We had considered visualizing the biomass of people above using automatically blown bubbles – but were discouraged by the scarcity of helium gas (not to mention the questionable ethics of using helium).

When Wolfram is unable to provide flight operations data for an airline, we resort to random numbers (between 150 and 200).

The sign uses a Wixel to communicate wirelessly with a laptop. A Python program queries WolframAlpha for a list of planes above, then asks Wolfram for the average number of people on a given airline. The sum total of these figures is sent to Sitting Above, which shows the value on a seven-segment display.

“It’s only powers of primes I think.” – man outside Newell Simon Hall
“Paul! There’s some kind of device.” – Police officer

Python program

import wolframalpha
import time
import random
import serial

ser = serial.Serial('/dev/cu.usbmodemfa131', 9600)

def testQuery():
    client = wolframalpha.Client('X55U4H-PQ459QE3U4')
    res = client.query('planes overhead')
    output = next(res.results).text
    lines = output.splitlines()
    planes = []
    for line in lines:
        if(line[0] != '(' and line[0] != '|' and line[0] != ' '
           and 'flight' in line):
            endName = line.index('flight')
            planeString = str(line[:endName]) + 'flight operations data'
            if(planeString not in planes): 
                planes += [planeString]
    print 'Number of planes: %s' % len(planes)
    totalPeople = 0
    for plane in planes:
        #print 'trying %s' % (plane)
        planeResults = client.query(plane)
        planeInfo = next(planeResults.results).text
        planeData = planeInfo.splitlines()
        perFlightLine = [l for l in planeData if('average per flight' in l)]
        if(perFlightLine != []):
            start = len('average per flight | ')
            end = perFlightLine[0].find('people')
            perFlight = perFlightLine[0][start:end]
            totalPeople += int(eval(perFlight))
            print 'Per flight for %s: %s' %  (plane,perFlight)
        else:
            randPeople = random.randint(150,200)
            #print randPeople
            totalPeople += randPeople
            print 'Rand people: %s' % randPeople 
    digit = 1
    print totalPeople
    # Send the total to the Arduino one digit at a time, least significant digit first
    for i in xrange(4):
        ser.write(str((totalPeople/digit) % 10))
        digit *= 10

while True:
    testQuery()
    updateTime = random.randint(30,120)
    time.sleep(updateTime)

Arduino sketch

#include <Wire.h>
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"
int number = 0;
int digit = 1;  

Adafruit_7segment matrix = Adafruit_7segment();

void setup() {
  matrix.begin(0x70);
  Serial.begin(9600);
}

void loop() {
  if(Serial.available()) {
    // The Python script sends the total one digit at a time, least significant
    // digit first; rebuild the 4-digit number as the digits arrive.
    if(digit == 1) number = 0;
    number += (int(Serial.read()) - int('0'))*digit;
    digit *= 10;
    if(digit == 10000) digit = 1;
    Serial.println(number);
  }
  matrix.writeDigitNum(4, number % 10);
  matrix.writeDigitNum(3, (number/10) % 10);
  matrix.writeDigitNum(1, (number/100) % 10);
  matrix.writeDigitNum(0, (number/1000) % 10);
  matrix.writeDisplay();

}

Looking Outwards – Arduino Shields


Sparkfun Touch Shield

This touch shield from Sparkfun adds nine capacitive touch pads to an Arduino. Touch pads could be handy in a music application, like an Arduino powered beatpad. What about an Arduino powered phone, or an Arduino controlled interactive museum display? Additionally, the pads could correspond to settings on a robot or some other device.

I wonder why one would use a shield as opposed to nine individual capacitive touch sensors. The Arduino documentation has a simple answer: shields “are easy to mount, and cheap to produce.” Also, if you build a custom shield for a project, you can recycle that functionality later in another project.

 Adafruit Motor Shield

The improved Adafruit Motor Shield accommodates 2 stepper motors or 4 bi-directional DC motors, plus two 5V hobby servos. It features a stackable design, allowing for up to 32 motor shields stacked atop each other, so one could conceivably control 64 stepper motors or 128 DC motors in a single project. There is also a small prototyping area on the board for wires or other components.

I had to look up the difference between a motor and a servo. A DC motor simply spins when powered, though its speed can be controlled via PWM (pulse width modulation). A servo, on the other hand, moves to a position specified by a control signal, so it has a three-wire connection: power, ground, and control.

I wonder how difficult it would be to make an automatic centipede from stacked motor shields and DC motors.

Protoshield

The Protoshield is a prototyping shield. A mini breadboard attaches directly to it for increased working space, shown below:

Protoshield with breadboard

With a protoshield I can work on a bus, plane or volcano. Besides, it’s pretty cumbersome to lug around a prototyping plate.

When the prototyping phase is over, solder directly to the board. The stacking functionality keeps all components snugly secured to the Arduino. This protoshield works with the UNO, but there is a larger edition called the Mega Protoshield. It has even more prototyping space – though it’s only compatible with the Arduino Mega.

Shiny New Toys

Programmable Coin Acceptor

This coin acceptor/validator module works with any coin. It determines if a coin is valid by looking at its diameter, thickness and dropping speed. One might conceive of an arcade-style gallery – an “artcade” – in which a viewer purchases a single viewing of an artwork with one or several quarters. Unlike the app store, the artcade brings art enthusiasts together in a communal space. Unlike traditional art galleries, the artcade enables “everyone else” to engage with and support contemporary art in a tangible way. We might also envision an art tollbooth, where a passerby is charged a small fee to enter an installation.

 Conductive Knit Jersey

Conductive fabric raises the possibility of textile interfaces. According to the description on Adafruit, the knit is actually a single strand of fiber. So if there is a tear in the thread – does the whole square unravel? Ignoring this for a moment, the fabric has many interesting uses in media art. Using a LilyPad Arduino to receive inputs and execute instructions, it becomes possible to invent interactive, “intelligent” clothing. For instance, a tap on the breast pocket of a shirt could trigger a program which conveys the current number of unread emails in one’s inbox. Tap, and wait for the ensuing jolts: “Ow! Ow! Ow! Three unread emails.”

Ultrasonic Range Finder

 

The MaxBotix LV-EZ1 is an ultrasonic range finder. It emits a 42 kHz wave and records the time it takes for the wave to return to the module. Based on the speed of sound in air at sea level, the module calculates its distance from some object. I have one question: would an array of these work as a depth camera? This might be feasible if each device emitted a unique frequency, so that the modules working in parallel wouldn’t confuse each other. But I wonder – how might such a depth camera compare to a depth camera that uses IR?
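
The arithmetic behind that calculation is simple time-of-flight: the pulse travels to the object and back, so the distance is half the round trip at the speed of sound (roughly 343 m/s in air at sea level and room temperature). A two-line illustration:

SPEED_OF_SOUND = 343.0  # m/s in air, approximately, at sea level and room temperature

def distance_m(round_trip_seconds):
    # The pulse covers the distance twice (out and back), hence the divide by two.
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

print(distance_m(0.01))  # a 10 ms echo puts the object about 1.7 m away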

 

Keyfleas

http://vimeo.com/77109691

The Keyfleas live on a two-dimensional flatland. They travel as a flock, over key mountains and through aluminum valleys. They avoid touching letterforms, since they suspect that the symbols are of some evil origin. On occasion, a hostile tentacle invades the flatland and disturbs its inhabitants.

Although I had several ideas for contexts in which an augmented projection could exist, most of them amounted to arbitrary particles careening across a surface. No poetry, no narrative. So instead of an architectural surface as originally planned, I projected onto an Apple keyboard. My reasons for this are both practical and conceptual. The keys are clean and white, and the pico projector can attach via a Manfrotto Magic Arm to a nearby table. This addresses the constraints of a low-powered projector, as well as issues relating to variable lighting and surface conditions. My solution for key calibration was as follows: key-shaped boundaries are placed in the Box2D world using the mouse, and then the key is pressed in order to map that body to its corresponding key. This calibration process can be seen at the end of the video.

But these are only technical considerations; more important was choosing a context in which a narrative – albeit a simple one – could emerge. The suggestion that there are parasitic entities living in our devices is an interesting and unsettling one. An obvious inspiration for this project was Chris Sugrue’s “Delicate Boundaries”, where light bugs crawl out of the screen and onto the viewer’s hand.

A project which explores the delicate boundary between screen space and physical space


An improved Keyfleas might develop creatures with more character than mere filled ellipses (see Delicate Boundaries above). Or the shapes might pulsate and respond to keystrokes in a more intelligent manner than they currently do.


The Wind Walker

The Wind Walker likes to use his head to feel for subtle variations in wind currents. He moves if he feels uncomfortable or bored.

Play with the Wind Walker on OpenProcessing (requires Java)

Behavior is more important than visual realism in creating the illusion of life. We observe this in Karl Sims’ “Evolved Virtual Creatures”, a simulation in which evolved box creatures interact with their environment in surprising and often humorous ways.

With this principle in mind, I sought to invent a charming creature with lifelike mannerisms. I drew inspiration from two sources: the Dutch artist Theo Jansen, and the Kikkerland line of Wind Up Toys.

A wind-powered Strandbeest (Dutch: strand = beach) roams a beach
A charming toy that teaches children about the perils of the adult world

I wrote a custom spring system based on Hooke’s Law to control his limbs, and the sketch runs using the P3D renderer. It is interactive in a limited sense – if one clicks, the Wind Walker turns to face the mouse.
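
The spring system boils down to Hooke’s law, F = -kx, integrated each frame. Here is a stripped-down sketch of one joint’s update, shown in Python rather than the Processing used in the actual sketch, with illustrative constants:

def spring_step(pos, vel, rest, k=0.1, damping=0.9, dt=1.0):
    # Hooke's law: the restoring force is proportional to the displacement
    # from the rest position. Simple Euler integration with damping keeps
    # the limb from oscillating forever.
    force = -k * (pos - rest)
    vel = (vel + force * dt) * damping
    pos = pos + vel * dt
    return pos, vel

# Example: a limb coordinate relaxing toward its rest position of 50 pixels.
pos, vel = 80.0, 0.0
for _ in range(10):
    pos, vel = spring_step(pos, vel, rest=50.0)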

Preliminary sketches with greasy fingers

Looking Outwards 3

One Hundred and Eight – Nils Völker

One Hundred and Eight by Nils Völker is a 2.4 by 1.8 meter wall-mounted grid of garbage bags. Columns of bags deflate in response to the silhouette of a viewer as detected by a camera – although the grid is able to operate autonomously should it be left alone. Unlike many of the other Arduino-based projects I researched, the technical “gee-whiz” aspect of this work is secondary to a study of material and a reversal of expectations. Völker breathes just enough life into the bags to give them interest, but not so much as to overpower the sensitivity of the forms. In doing so, he transforms the plastic bag from a symbol of waste to an object of awe. I’m impressed by Völker’s disciplined use of interaction itself (at a bare minimum) as an element in a highly formalist work. A principal struggle for artists working with computation lies in tucking away the engineering of an artwork, but this is something that Völker does to great success. Also, I don’t think Jim Campbell’s Formula for Computer Art applies here, since the signal and response are so elegantly unified.

Noisy Jelly – Raphaël Pluvinage

Raphaël Pluvinage describes Noisy Jelly as a “game where the player has to cook and shape his own musical material, based on coloured jelly.” Here, as in One Hundred and Eight, inanimate objects are personified through a simple interaction. But unlike One Hundred and Eight, the jellies are unresponsive until touched; that is, they don’t do anything on their own. Pluvinage uses an Arduino to detect a hand touching a jelly, I presume by passing a small current through it. He uses Max/MSP for the sound – relying on oscillators whose frequency corresponds in some way to the touches. While I normally find pure tones with no harmonics excruciating to listen to, they work well with the jelly. As I see it, the jelly and crude synthesized sounds refer to failed experiments like the Segway, lending the work a jarring retro-future aesthetic with a hint of irony. I especially enjoy how Pluvinage gives the various jelly shapes unique sonic personalities.

SENSELESS DRAWING BOT #2 – So KANNO + Takahiro YAMAGUCHI

Much like mudlevel’s robo-rainbow, SENSELESS DRAWING BOT #2 by So KANNO and Takahiro YAMAGUCHI makes graffiti so that we don’t have to. It raises interesting questions surrounding the notion of authorship, as well as the problem of responsibility when robots can perform illegal tasks on our behalf. Concretely, the bot consists of high pressure washers equipped with spray cans, mounted on a motorized platform with wheels. An Arduino manages the servos used to release the paint – it is unclear from the documentation whether the robot’s movements are being controlled remotely or internally. Unlike the two other Arduino projects I cited, in which art-objects surprised us by being interactive, SENSELESS DRAWING BOT is poetic because gun-wielding robots aren’t normally thought of as art-machines. The work destabilizes our expectations, and exploits our cynicism to convey the message that robots can be machines of creation in addition to machines of destruction.

 

 

 

Segregation and Other Intolerant Algorithms [Lasercut Screen]

http://cmuems.com/2013/a/wp-content/uploads/sites/2/2013/10/output1.pdf

Drawing loosely from the Nervous System presentation, I began thinking about processes I could exploit to churn out varied, yet unified designs. While searching for information about Laplacian growth, I found this pithy sketch by echoechonoisenoise on OpenProcessing, which employs a grid of automata to generate a segregation pattern.

My cells are similarly situated in a grid, wherein three main processes occur. First, a matrix of cells is seeded by a scaled noise field, which is in turn refined and restricted using the modulus operator and a threshold. This design is problematic out of the tube, since the laser cutter wants lines and not filled blobs.

Filled blobs, before the outlines are isolated

So the second step is to use a neighbor-counting technique similar to echoechonoisenoise’s to isolate the border of the blob shapes. (If a cell has three out of eight possible neighbors, I can assume with some confidence that it is a bordering cell.) Third, to convert a set of disparate points to vector lines, I plot lines from each cell to the nearest available living cell.

Disclaimer: I try to produce smooth-ish lines in a relatively straight-forward fashion, but I admit that there are instances of weirdo trickery in my code:

import processing.pdf.*;

float cells[][];
float noiseScale = 100.0;
float scaleFactor = 1;
int dist = 3;
//density of pattern
int bandWidth = 1200;
//noise seed
int seed = 9;
int[] rule = {
  0, 0, 0, 1, 0, 0, 0, 0, 0
};
int searchRad = 12;
int cellCount = 0;

void setup() {
  size(900, 900); 
  cells = new float[width][height];
  generateWorld();
  noStroke();
  smooth();
  beginRecord(PDF, "output.pdf");
}

void generateWorld() {
  noiseSeed(seed);
  //Using a combination of modulus and noise to generate a pattern
  for (int x = 0; x < cells.length; x++) {
    for (int y = 0; y < cells[x].length; y++) {
      float noise = noise(x/noiseScale, y/noiseScale);
      if (x % int(bandWidth*noise) > int(bandWidth*noise)/2) {
        cells[x][y] = 0;
      }
      else if (y % int(bandWidth*noise) > int(bandWidth*noise)/2) {
        cells[x][y] = 0;
      }
      else {
        cells[x][y] = 1;
      }
    }
  }
}

void draw() {
  background(255);
  drawCells();
  //Draw the world on the first frame with points, connect the points on the second frame
  if (frameCount == 1) updateCells();
  else {
    for (int x = 0; x < cells.length; x++) {
      for (int y = 0; y < cells[x].length; y++) {
        if (cells[x][y] > 0) {
          stroke(0);
          strokeWeight(1);
          //Arbitrary 
          for (int i = 0; i < 20; i++) {
            PVector closestPt = findClosest(new PVector(x, y));
            line(x * scaleFactor, y * scaleFactor, closestPt.x*scaleFactor, closestPt.y*scaleFactor);
          }
        }
      }
    }
    endRecord();
    println("okay!");
    noLoop();
  }
}

//Finds closest neighbor that doesn't already have a line drawn to it
PVector findClosest(PVector pos) {
  PVector closest = new PVector(0, 0);
  float least = -1;
  for (int _y = -searchRad; _y <= searchRad; _y++) {
    for (int _x = -searchRad; _x <= searchRad; _x++) {
      int x = int(_x + pos.x), y = int(_y + pos.y);
      float distance = abs(dist(x, y, pos.x, pos.y));
      if (x < 900 && x > 0 && y < 900 && y > 0) {
        if (distance != 0.0 && (cells[x][y] == 1) && ((distance < least) || (least == -1))  
          && cells[x][y] != 2) {
          least = distance;
          closest = new PVector(x, y);
        }
      }
    }
  }
  cells[int(closest.x)][int(closest.y)] = 2;
  if (closest.x == 0 && closest.y == 0) return pos;
  else return closest;
}

//If the sum of the cell's neighbors complies with the rule, i.e. has exactly 3 neighbors,
//it is left on; otherwise it is turned off. This effectively removes everything but the
//outlines of the blob patterns.
void updateCells() {
  for (int x = 0; x < cells.length; x++) {
    for (int y = 0; y < cells[x].length; y++) {
      cells[x][y] = rule[sumNeighbors(x, y)];
      if (cells[x][y] == 1) cellCount ++;
    }
  }
}
int sumNeighbors(int startx, int starty) {
  int sum = 0;
  for (int y = -1; y <= 1; y++) {
    for (int x = -1; x <= 1; x++) {
      int ix = startx + x, iy = starty + y;
      if (ix < width && ix >= 0 && iy >= 0 && iy < height) {
        if (cells[ix][iy] == 1) {
          if (x != 0 || y != 0) sum++;
        }
      }
    }
  }
  return sum;
}

void drawCells() {
  loadPixels();
  for (int x = 0; x < cells.length; x++) {
    for (int y = 0; y < cells[x].length; y++) {
      int index = (int(y*scaleFactor) * width) + int(x*scaleFactor);
      if (cells[x][y]==1) {
        pixels[index] = color(255);
      }
    }
  }
  updatePixels();
}

void mousePressed() {
  saveFrame(str(random(100)) + ".jpg");
}