A recent obsession of mine – having read “The Coming Technological Singularity” by Vernor Vinge in my writing course – is the notion of humans playing machine roles. What would a person look like as a vacuum cleaner? A toaster? A clock? Petunia the Clockman has a biological affinity with time, as represented by hours, minutes, and seconds. That is, the boils on his forehead correspond to the current hour, and the movement of the flower on his head to the number of seconds in the current minute. He blinks the current number of minutes in binary every six seconds (one second per bit). Golan pointed out that the blinking aspect of the clock has nothing to do with the boil aspect – a criticism I agree with. So an improved version of the clock might rely on just one display mechanism instead of three working in tandem. Or it might dispose of the 12-hour clock altogether. That said, I’m pleased with the way in which Petunia’s eyes follow the flower on his head.
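For concreteness, the blinking scheme can be restated in plain Java (a standalone sketch of the idea, not the sketch’s actual code): minutes 0–59 fit in six bits, and each second of a six-second window selects one bit of the zero-padded string.

```java
public class BinaryMinutes {
    // Pad the current minute (0-59) to six bits, most significant bit first.
    static String minuteBits(int minute) {
        String bits = Integer.toBinaryString(minute);
        while (bits.length() < 6) {
            bits = "0" + bits;
        }
        return bits;
    }

    // One bit is shown per second, so the full minute reads out over six
    // seconds: blink on a '1' bit, keep the eyes open on a '0' bit.
    static boolean shouldBlink(int minute, int second) {
        return minuteBits(minute).charAt(second % 6) == '1';
    }
}
```

So at 37 minutes past the hour, Petunia would blink the pattern 100101 over each six-second window.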

Man man;
PImage ball;

void setup() {
  size(400, 400);
  man = new Man();
  ball = loadImage("ball.png");
}

void draw() {
  // ...
}

void mousePressed() {
  // ...
}
class Antennae {
  PVector pos;
  PImage antennae;
  PImage petunia;
  float rot, initRot;
  int scaleDown = 2;
  int num;

  Antennae(PVector _pos, float _rot, int _num) {
    pos = _pos;
    initRot = _rot;
    num = _num;
    antennae = loadImage("antennae.png");
    petunia = loadImage("petunia.png");
  }

  void update() {
    float theta = map(millis() % 60000, 0, 59999, 0, TWO_PI);
    rot = map(sin(theta), -1, 1, -.19, .19);
  }

  void draw() {
    translate(pos.x, pos.y - (antennae.height/(scaleDown*2)));
    for (int i = 0; i < num; i++) {
      image(antennae, 0, 0, 1.3*(antennae.width/scaleDown), antennae.height/(scaleDown*2)); 
      translate(0, .8 * (antennae.height/(scaleDown*2)));
      if (i == num-1) {
        // ...
      }
    }
  }
}
class Man {

  ArrayList a = new ArrayList();
  PImage closeEyes, eyes, headCloseEyes, headOpenEyes, openEyes;
  PImage[] pimps = new PImage[12];
  PVector lEyePos = new PVector(373, 367), rEyePos = new PVector(436, 367);
  int bobAmp = 3;
  int blinkFrame = 0;
  int numAntennae = 1;
  int eyesX;

  Man() {
    // ...
  }

  void update() {
    // ...
  }

  void blink() {
    String binMinutes = Integer.toBinaryString(minute());
    int blinkBracket = second()/10;
    int numToAdd = 6 - binMinutes.length();
    for (int i = 0; i < numToAdd; i++) {
      binMinutes = "0" + binMinutes;
    }

    if (binMinutes.charAt(second() % 6) == '1' && abs(frameCount - blinkFrame) >= 30) {
      blinkFrame = frameCount;
    }
  }

  void drawMe() {
    translate(0, height*.2);
    // ...
  }

  void loadImages() {
    for (int i = 0; i < 12; i++) {
      pimps[i] = loadImage((i + 1) + ".png");
    }

    closeEyes = loadImage("close-eyes.png");
    eyes = loadImage("eyes.png");
    headCloseEyes = loadImage("head-close-eyes.png");
    headOpenEyes = loadImage("head-open-eyes.png");
    openEyes = loadImage("open-eyes.png");
  }

  void generateAntennae() {
    for (int i = 0; i < numAntennae; i++) {
      a.add(new Antennae(new PVector(206, 153), random(2, 3), 10));
    }
  }

  void drawHead() {    
    float bobHead = map(frameCount % 200, 0, 199, 0, TWO_PI);
    image(headOpenEyes, 0, 0, width, height);
    float eyeCreep = map(cos(atan2(eyesX+width/2, height/2)), 0, 1, 7, -5);
    translate(eyeCreep, 0);
    image(eyes, 0, 0, width, height);
    image(openEyes, 0, 0, width, height);
    if (frameCount == blinkFrame || 
      (mousePressed && mouseX > width * .35 && mouseX < width * .682)) {
      image(headCloseEyes, 0, 0, width, height);
      image(closeEyes, 0, 0, width, height);
    }
    int hour = ((hour()+1) > 12) ? hour() - 12 : hour();
    for (int i = 0; i < hour; i++) {
      image(pimps[i], 0, 0, width, height);
    }
  }

  void drawAntennae() {
    for (int i = 0; i < numAntennae; i++) {
      Antennae _a = (Antennae) a.get(i);
      eyesX = int(map(_a.rot, 0, .19, 0, width));
      // ...
    }
  }
}


As Kyle McDonald so aptly puts it, the face is “one of the most salient objects in our day-to-day life.” It is arguably the most central means by which we communicate non-verbally with others, and undoubtedly our most powerful tool for emotional expression. Given its primacy in these areas, I was interested in the history of the face in both art and science. After some research I stumbled upon the then-science, now-pseudoscience of “Physiognomy”, which aims to judge one’s character and intellectual aptitude from facial structure alone.

Physiognomy still going strong in North Korea, here applied to judge Kim Jong-Un’s leadership potential

While the field may not have contributed much to Western science, it did yield some beautiful illustrations.

One of Charles Le Brun’s (1619 – 1690) anamorphic physiognomies


Owl + Man in a fashionable orange jumpsuit

Tapping into the idea of a human-animal hybrid, I composited a half-owl, half-man chimera in Photoshop. One can spy on this chimera through a keyhole. Although the face itself does not respond to the user, the scene does. By moving one’s head back and forth, one changes the perspective from which the creature is seen, creating the illusion of depth. This effect is called parallax: the layers of images move at distinct speeds according to their depth. I applied a fisheye filter to the assets that lie behind the keyhole, so as to give the impression of looking through a piece of curved glass.
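The parallax itself reduces to a simple rule: each layer pans by the head’s displacement scaled down by a per-layer depth factor. A minimal sketch in plain Java (the depth values below are illustrative, not the ones in my scene):

```java
public class Parallax {
    // Near layers (small depth) shift more than far layers (large depth);
    // the differing speeds are what produce the illusion of depth.
    static double layerOffset(double headX, double depth) {
        return headX / depth;
    }
}
```

A keyhole layer at depth 1 would track the head directly, while an owl-man layer at depth 4 would shift a quarter as far.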

Ideally, the face would respond to the user as well. However, for the purpose of this project, the parallax achieves some level of immersion on its own. I also think the voyeuristic quality of this piece reveals rich territory for future exploration/exploitation/exposition. 

I used FaceOSC’s poseOrientation property to govern the rotation of the head (a texture on a cylinder), the posePosition property to pan the images, and the poseScale property to control the zoom level of the scene.
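Each of those mappings is essentially a linear remap of a FaceOSC value onto a scene parameter. Here is a hedged plain-Java version of the zoom case (the input range is an assumption, since usable poseScale values depend on the camera and sitting distance):

```java
public class PoseMapping {
    // Linear remap, equivalent to Processing's map() function.
    static double map(double v, double inLo, double inHi, double outLo, double outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    // poseScale grows as the face nears the camera; remap an assumed working
    // range of roughly 3-7 onto a zoom factor for the scene.
    static double zoomFor(double poseScale) {
        return map(poseScale, 3.0, 7.0, 1.0, 1.6);
    }
}
```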

Looking Outwards 2

BLADE RUNNER revisited >3.6 gigapixels – François Vautier


In his installation for WORLD EXPO Shanghai 2010, François Vautier composites all 167,819 frames of Ridley Scott’s Blade Runner into a colossal 3.6-gigapixel image. Since it is impossible to show in detail on a single display, Vautier employs a virtual camera – represented by a glowing cube – to pan across the image. This creates a zoetrope effect that echoes early projectors, as Vautier explains in the description of the video. Insofar as there is no visual indication of a continuous image, the artwork’s premise diverges from its visual impact. There is tension between the proposed concept – a continuous image – and the way in which the eye interprets the succession of what appear to be separate frames. We have to take it on faith that there really is a single image, since this fact is unknowable from the visual evidence alone. The conceptual layer of the artwork alienates and deconstructs its visceral draw.

Mickey Mouse Club – Matthew Plummer-Fernandez

“Blurring images has become both a widely recognised cultural aesthetic and also used to obscure the identity of persons photographed or filmed. After recent clashes with 3D printers over IP concerns I’ve chosen to disguise my latest derivative of Mickey Mouse and to explore this smoothed 3D aesthetic that is counter to the popularity and push for highly detailed 3D printing.” – Matthew Plummer-Fernandez

Plummer-Fernandez appropriates and deforms Mickey Mouse in “Mickey Mouse Club”, a wry response to recent concerns about the impact of 3D printers on intellectual property. As explained in a Creative Applications post, Plummer-Fernandez’s process is straightforward; he smoothes the Mickey Mouse mesh in Processing (akin to blurring a 2D image) before 3D printing the output. Although the final mouse is unrecognizable as Mickey, one can conceive of an intermediary stage in the blurring at which time the iconic mouse is recognizable, yet somehow perverted. This raises an interesting question: at what point in the smoothing of the mesh does it cease to be Disney’s property?  And by extension, how does a corporation like Disney react to the widespread remixing of objects? I anticipate artists will continue to critique institutions and mass culture in coming years, leveraging generative and/or digital fabrication techniques in innovative ways.

InfObjects – Johannes Tsopanides

Tsopanides’ “InfObjects” are data visualizations in physical form. The design of each cup, bowl or plate depends on the energy cost and price of the food to which it corresponds. In addition to mapping one signal to another (I recall Campbell’s prescient Formula for Computer Art), there is a functional and symbolic dimension to Tsopanides’ objects; their usefulness as dinnerware is determined by the CO₂ quotient of the dish in question, which in turn determines the number of holes in the object. So the plate generated by butter (517 g of CO₂) is less usable than the tomato plate (315 g of CO₂), which calls into question the relative merits of butter. The important point is that through generative design, Tsopanides makes tangible and even humorous the grave threat of greenhouse gases. He furthers an environmental agenda by challenging our assumptions about the function of everyday objects. In doing so, InfObjects exemplifies art as activism: art-making in the service of a social good.


Nipple Congress


Dwelling on the unique constraints and opportunities presented by the .gif format, I had the idea to generate a pattern, and then to use that pattern as a seed for a particle system. As it turned out, this pattern-as-seed plan was pretty convoluted: my pattern held its own aesthetically, and there was no need to abstract it further. At that point, the difficulty was animating the pattern in a seamless fashion. Golan helped me formulate a solution, which entailed gradually shifting initial velocities according to trigonometric relationships.
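The trick, as I understood it, can be restated in plain Java (the loop length and radii here are placeholders): make each particle’s seeded velocity a sinusoid of a global frame phase, so the velocity field returns to its starting state exactly once per loop and the .gif can cycle without a seam.

```java
public class SeamlessLoop {
    static final int LOOP_FRAMES = 10000;  // placeholder loop length

    // Phase advances through a full circle exactly once per loop.
    static double phase(int frame) {
        return 2 * Math.PI * (frame % LOOP_FRAMES) / LOOP_FRAMES;
    }

    // A particle re-seeded at 'frame' gets a velocity that is periodic in
    // the loop, so frame 0 and frame LOOP_FRAMES look identical.
    static double seedVelX(double radius, double angle, int frame) {
        return radius * Math.cos(angle + phase(frame));
    }
}
```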

I think the .gif succeeds in its playful treatment of stability and instability: the nipples are effectively stationary, whereas the arm-tentacles twitch and undulate erratically. Generally speaking, the experience of the .gif changes dramatically from region to region. Maybe this lack of focus is a drawback, and if so, I could improve the piece by unifying it visually. It might also benefit from a clarified kinetic gestalt, or put in other words: buttery, logical motion. (Or its charm could lie in the jerky crudeness.)

(As a side note, the edicts of my Nipple Congress have strong ties to those of Jared Tarbell’s Substrate, since a nearly identical branching rule governs the spatial aspect of the forms generated onscreen.)


GridParticle[] gP = new GridParticle[40];
int seed = 10;
PVector texture;

void setup() {
  size(600, 400); 
  initGrid();
}

void initGrid() {
  for (int i = 0; i < gP.length; i++) {
    gP[i] = new GridParticle(i);
  }
}

void draw() {  
  for (int n = 0; n < 1400; n++) {
    for (int i = 0; i < gP.length; i++) {
      // ...
    }
  }
}

class GridParticle {
  PVector pos; 
  PVector vel = new PVector(0, 1);
  float strokeW;
  int changedThresh;
  int stroke = 235;
  int blinkOffset;

  GridParticle(int _blinkOffset) {
    blinkOffset = _blinkOffset;
    strokeW = 5;
    changedThresh = int(random(0, 10));
    pos = new PVector(random(width*.2, width*.8), random(height*.2, height*.8));
    float randomRadiusX = random(-1, 1);
    float randomRadiusY = random(-.5, .5);
    float randomAngle = random(HALF_PI);
    float frameBasedAngle = map(frameCount % 10000, 0, 9999, 0, TWO_PI);
    vel.x = randomRadiusX * cos(randomAngle + frameBasedAngle);
    vel.y = randomRadiusY * sin(randomAngle + frameBasedAngle);
  }

  void update() {
    float yNoise = map(noise(vel.y/100.0), 0, 1, -.002, .002);
    vel.y += yNoise;

    if (pixels[floor(width*int((pos.y + 5) % height) + int(pos.x))] != color(0, 4, 118) || 
        pixels[floor(width*int((pos.y) % height) + (int(pos.x + 5) % width))] != color(0, 4, 118)) {
      strokeW *= .985;
      vel.x *= -1; 
    }

    pos.x = abs(pos.x + vel.x) % width;
    pos.y = abs(pos.y + vel.y) % height;
  }

  void drawMe() {
    float vary = noise(pos.x/10.0, pos.y/10.0)*4;
    strokeWeight(strokeW * vary);

    float blink = map((frameCount + blinkOffset) % 10, 0, 9, 0, 1);

    stroke((stroke + 20)*blink, 0, 0);
    point(pos.x, pos.y);
    point(pos.x-(strokeW * vary), pos.y);
    point(pos.x+(strokeW * vary), pos.y);
  }
}

Miles Peyton – Schotter

int border = 50;
float jitter = .03;
float side;
int numPerRow = 12;
int space;

void setup() {
  size(400, 800);  
  side = ((width-(border*2))/float(numPerRow));  
}

void draw() {
  for (int y = border; y < height-border; y += side) {
    for (int x = border; x < width-border; x += side) {
      float center = random(side);
      translate(x-width/2 + center, y-height/2 + center);
      rotate(random(-jitter, jitter));
      // ...
    }
  }
}

Like Strangers to an OPEN Sign

The neon OPEN sign and its variants designate temporary shelters for the public. In my IFTTT recipe, this universal symbol is co-opted for the purpose of remedying loneliness.

OPEN Sign is a hassle-free way to invite strangers into your home. One need only tweet a message containing the hashtag #lonely, and the sign will emit an attractive, familiar glow. This avoids the inconvenience of getting out of bed to switch on a sign, as well as the depressing rituals of other loneliness antidotes (like poking people on Facebook). The OPEN Sign is a direct, unambiguous invitation. Most importantly, it provides the satisfaction of a real-life encounter should it succeed in instigating one.

(It may be placed in front of a door, window, garage door, or on the participant’s body.)






If a new article is posted on the New York Times containing the keyword ‘explosion’, then send a blink event.

An explosion somewhere in the world is the catalyst for a reaction that goes something like this: explosion > reporters and witnesses > publication > IFTTT > blink(1). In that sense, the blink(1) event is a continuation of a reaction set off by a real world explosion.

• • •

The bulk of the craft involved in arts-engineering is making technologies talk to each other. This fact is highlighted in Jim Campbell’s “Formula for Computer Art”, a diagram resembling a slot machine that reveals (painfully so, for me) the unoriginality of merely mapping an input to an output via some unseen algorithm. Campbell’s astute diagram is a challenge to move beyond this formulaic approach to art-making. But as Golan pointed out in class, Campbell neglects to mention a viewer or participant (aside from inputs like “spoken words” or “number of people”). Perhaps this exclusion is telling, and interactive elements make for more engaging art.

I don’t think Campbell’s diagram precludes mapping of some input to an output, so long as the mapping is meaningful. In “Art and the API” Jer Thorp makes the case for connections via APIs. APIs are glue for connecting people, systems and events through the medium of software. He illustrates a number of interesting use cases for APIs, like using drone strike data provided by The Bureau for Investigative Journalism to populate a Twitter feed. IFTTT (“If This, Then That”) simplifies the process of orchestrating technologies by leveraging the APIs of popular services.

What IFTTT gains in accessibility, it sacrifices in flexibility; connections can only be made between the included APIs. Still, for prototyping smaller-scale projects that aim to join two of the included services, I could see IFTTT being useful.


Instructional Drawing in Reverse


Setup (do this once)

Distribute nonintersecting lines of similar length and variable orientation across a plane.

Loop (repeat this indefinitely)

Allow lines to branch out of existing lines, provided the new lines:

– are drawn perpendicular to the lines from which they stem

– do not intersect other lines

– do not exceed the length of the lines from which they stem
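The loop above can be sketched in code. This is my own rough Java restatement of the rules (not the original piece, and the step size and intersection test are implementation choices): a new line grows perpendicular to a randomly chosen point on its parent, stops at the first intersection with another line, and never exceeds the parent’s length.

```java
import java.util.List;
import java.util.Random;

public class Branching {
    static class Seg {
        final double x1, y1, x2, y2;
        Seg(double x1, double y1, double x2, double y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
        double length() { return Math.hypot(x2 - x1, y2 - y1); }
    }

    // Signed area of triangle (o, a, b): its sign gives b's side of line o->a.
    static double cross(double ox, double oy, double ax, double ay, double bx, double by) {
        return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox);
    }

    // Standard proper-intersection test via orientation signs.
    static boolean intersects(Seg a, Seg b) {
        double d1 = cross(b.x1, b.y1, b.x2, b.y2, a.x1, a.y1);
        double d2 = cross(b.x1, b.y1, b.x2, b.y2, a.x2, a.y2);
        double d3 = cross(a.x1, a.y1, a.x2, a.y2, b.x1, b.y1);
        double d4 = cross(a.x1, a.y1, a.x2, a.y2, b.x2, b.y2);
        return d1 * d2 < 0 && d3 * d4 < 0;
    }

    // Branch from a random point on the parent, at 90 degrees, growing in
    // unit steps until blocked by another line or as long as the parent.
    static Seg branch(Seg parent, List<Seg> all, Random rnd) {
        double t = rnd.nextDouble();
        double px = parent.x1 + t * (parent.x2 - parent.x1);
        double py = parent.y1 + t * (parent.y2 - parent.y1);
        double len = parent.length();
        double nx = -(parent.y2 - parent.y1) / len;  // unit normal of parent
        double ny =  (parent.x2 - parent.x1) / len;
        double grown = 0, step = 1.0;
        while (grown + step <= len) {
            Seg candidate = new Seg(px, py, px + nx * (grown + step), py + ny * (grown + step));
            boolean blocked = false;
            for (Seg s : all) {
                if (s != parent && intersects(candidate, s)) { blocked = true; break; }
            }
            if (blocked) break;
            grown += step;
        }
        return grown == 0 ? null : new Seg(px, py, px + nx * grown, py + ny * grown);
    }
}
```

With no obstacles, a branch grows to its parent’s full length; drop another line across its path and it stops short, which is the Substrate-like behavior mentioned above.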