Set 08 (Due 10/29)

The readings, assignments, and projects below constitute the Week 08 Deliverables and are due by 11:59pm EDT on Thursday, October 29.

Readings and Viewings

  • Please look over this entire Deliverables page. There are lots of things to look at and learn from.
  • We recommend you read Chapter 7 in GSWp5js, “Media” (pages 122-129), especially in relation to loading images.
  • Below is a video of Dan Shiffman discussing how an Object can own a reference to an image in p5.js. We recommend you watch it.

Looking Outwards 08: on Looking Outwards

Our Looking Outwards topic for this week is: the Looking Outwards assignments by your peers!

This week, we hope you’ll find inspiration in the things your friends have discovered. Find one or two peers in the course whom you know. Browse through their Looking Outwards assignments until you find something unfamiliar that sounds interesting. Read your peer’s reviews, then check out the project they cite. In a blog post of about 100-200 words,

  • What are your thoughts about the cited project? In what ways do you agree or disagree with your peer’s assessment? Respond to their report with some thoughts of your own. What can you productively add to their discussion of the project?
  • Link (if possible) to the original work, and to your peer’s Looking Outwards post. (Be sure to provide the creator’s name, title of the work, and year of creation.)
  • Embed an image, sound, and/or a YouTube/Vimeo video of the project (if available).
  • Label your blog post with the Category, LookingOutwards-08.
  • Label your blog post with the Category referring to your section (e.g. GolanSection, RogerSectionA, RogerSectionB, RogerSectionD, or RogerSectionE).

Assignment 08-A: Animation Walk Cycle

In this Assignment, you are asked to write code to animate a walking character with sprite images. (You are provided, below, with the necessary sprite images and some template code to complete.) The final goal of this Assignment is to create an interactive scene, in which the character walks over to the places where the user clicks. Here is an animated GIF demonstrating what your solution will resemble:


Here is the source image for the walk cycle of the animated character we’ll be using:


The individual frames of this animation can be found in this album, and also in this zip file. There are 8 frames, which are provided to you as .PNG images with transparent backgrounds. (We recommend that you don’t use the local copies of the images unless you plan to be working offline and you understand how to test p5.js sketches from a local server.)

Below is the starter “template” code you will use to create your project. Please carefully note the places where you are expected to add code.



  • Please use our lightweight sketch template. Copy-paste the code above into that template, in order to get started.
  • Run the starter program. You should see the program above running.
  • There are 4 fragments of code for you to write, which are described in more detail below:
    1. Load the images into an array;
    2. display the character with a cycling frame number;
    3. move the character across the canvas;
    4. flip the character left/right appropriately.
  • Load the images. Currently the program loads just a single frame of the walk cycle, purely for demonstration purposes. Inside the preload() function, around line 24, you are advised to PUT CODE HERE TO LOAD THE IMAGES INTO THE frames ARRAY, USING THE FILENAMES STORED IN THE filenames ARRAY. In other words, this is where you should write code that loads all 8 of the walk cycle images. Specifically, you should write a for loop that fills up the frames array with images obtained by loading the (respective) URLs given in the array of filenames strings.
  • Display the current frame. In the draw() function, around line 40, you are advised to PUT CODE HERE TO DISPLAY THE CHARACTER, CYCLING THROUGH THE FRAMES. You’ll be successively displaying each of the images in the frames array, incrementing the array-index of the image you display on each refresh of the draw() function. (Don’t forget to restart the index at zero when your index reaches the length of the frames array.) One possibility is to apply the modulus operator to the frameCount system property.
  • Move the character across the canvas. The target (“goal”) location is set in the mousePressed() function, whenever the user clicks the canvas. In line 34, you are advised to PUT CODE HERE TO MOVE THE CHARACTER TOWARDS THE TARGET. You’ll need to devise some way of moving the character a small portion of the way from its current location towards the target location, essentially reassigning the character’s position on every frame. There’s no single correct way to achieve this, and we’re curious what you come up with. One solution is to use lerp(). Another method uses some simple trigonometric proportions based on the numbers dx, dy, and distanceFromCharacterToTarget.
  • Flip the direction the character is facing, if you’re able, when the character is moving towards the left. You are advised to do this in Line 41 (FLIP THE IMAGE IF THE CHARACTER'S HEADING LEFT),  just before the character is rendered to the screen. We recommend you achieve this by applying a scale() of (-1,1) to the character’s image inside a push()/pop() structure. This step may be tricky, so save it for last.
  • Don’t forget to comment your code, and please give attention to code style.
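
The fragments above boil down to a few small computations. Below is a sketch of the frame-cycling and movement logic as plain helper functions, so you can check the arithmetic before wiring it into the p5.js template. (The function names here are illustrative, not part of the template.)

```javascript
// Illustrative helpers for Assignment 08-A.
// These names (cycleIndex, stepToward) are ours, not the template's.

// Fragment 2: cycle through frames. An ever-growing counter (such as
// p5.js's frameCount) is mapped onto a valid array index with the
// modulus operator, so the index wraps back to 0 instead of running
// past the end of the frames array.
function cycleIndex(counter, nFrames) {
  return counter % nFrames;
}

// Fragment 3: move a small fraction of the remaining distance toward
// the target on every frame. This is exactly what p5.js's
// lerp(current, target, amount) computes.
function stepToward(current, target, amount) {
  return current + (target - current) * amount;
}
```

In the sketch itself, Fragment 1 is a loop in preload() along the lines of `for (var i = 0; i < filenames.length; i++) { frames[i] = loadImage(filenames[i]); }`; Fragment 2 becomes something like `image(frames[frameCount % frames.length], x, y);`; and Fragment 3 becomes `characterX = lerp(characterX, targetX, 0.1);` (and likewise for y). For Fragment 4, compare the target’s x position to the character’s x position to decide whether to wrap the image call in push(), scale(-1, 1), pop().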

You may find the following reference materials helpful for this Assignment:

Then, as per usual for all Assignments uploaded to Autolab:

  • Put the following information into comments at the top of your code: Your name; Your class section or time; Your email address; and Assignment-08-A
  • Name your project UserID-08-A. For example, if your Andrew ID is placebo, then your project name should be placebo-08-A.
  • Zip and upload your code to Autolab, using the provided instructions. Zip your entire sketch folder. (Since you’re using our lightweight sketch template, there will be no libraries subdirectory to include.)

Assignment 08-B: Eye Tracking

In this Assignment, you will write code to locate the brightest pixel in an image.


To help you appreciate why this might be interesting, please watch the following two short videos. Both of the projects documented in these videos were created by Evan Roth and members of his artist collectives, the Graffiti Research Lab (GRL) and the Free Art and Technology (FAT) Lab, respectively. Both projects depend heavily on being able to find the brightest pixel in an image. 

The first video describes L.A.S.E.R. Tag (2007), an application in which the bright spot of a laser pointer is tracked by a camera system. The path of this laser pointer is used to control building-scale interactive projections, in order to create building-scale “virtual graffiti”, viewable for miles:


In the second video, Roth and his collaborators created the Eyewriter (2009), an inexpensive, open-source eye-tracker. Roth created the Eyewriter as a custom tool for Tempt One (Tony Quan), a legendary LA graffiti artist who became totally paralyzed due to ALS (amyotrophic lateral sclerosis, or Lou Gehrig’s Disease). At the time when the Eyewriter project was released, eye-trackers cost $20,000+, and used entirely proprietary software. The Eyewriter, by contrast, could be built for less than $100 in parts, and was the first low-cost eye-tracker with free, open-source software. It was an eye-tracker made by artists, for an artist, to help him regain a life of creativity he had lost. With it, Tempt was able to draw again for the first time in seven years.


Why is this example relevant? Well, as with the L.A.S.E.R. Tag project, eye-trackers also work by finding the brightest point in an image. In order to estimate where you are looking, eye-trackers compare the location of the center of the pupil with the location of the brightest point or points — generally a glint or reflection on the eye of one or two nearby infrared LEDs. This video makes this clear:


The Task.

In this assignment, you will use a provided template to write code that computationally locates the brightest pixel in an image of an eye. This task is a precursor to eye-tracking, as well as many related interactions.

In fact, there are three images for you to work with, and your code must work correctly with all of them. They are here:

All you have to do is ensure that your code sets the values of the two variables, brightestPixelX and brightestPixelY, correctly. The template will take care of the rest (including drawing the transparent yellow cross-hairs). When you’re done, your project should look something like the following GIF. (The program advances to the next eye image each time you click):


The code template is provided below. Detailed instructions can be found below the code.

// This Assignment presents a simple image analysis scenario
// in which you must search for the brightest pixel in an image. 
// In this scenario, you will search for the "1st Purkinje Image",
// a corneal reflection which forms the basis for eye-tracking. 
// This is commonly known as the specular "highlight" in your eye. 
// This sketch loads 3 infrared photographs of eyes from 
// Whenever the user clicks the mouse, a different image is selected. 
// You must compute and indicate the location of the brightest pixel
// in that image, corresponding to the Purkinje reflection. 
// Here are the three images that your app will load. Note: 
// It's the exception, not the rule, that cross-domain fetching works. 
var eye0url = "";
var eye1url = "";
var eye2url = "";

function draw() {
    // Render the current image to the canvas.
    image(currentImage, 0, 0);
    // Initialize important variables. 
    var currentImageW = currentImage.width; 
    var currentImageH = currentImage.height;
    var brightestPixelX = width/2; // provisional - fix me!
    var brightestPixelY = height/2; // provisional - fix me!
    // Search for the brightest pixel in currentImage. 
    // Store its (x,y) location in brightestPixelX and brightestPixelY. 
    // Note: You might find the brightness() function to be helpful. 
    // ...
    // Draw a crosshair to indicate the brightest pixel.
    stroke (255,255,0, 128);
    line (brightestPixelX,0,brightestPixelX,height); 
    line (0,brightestPixelY,width,brightestPixelY); 
}

// As an alternative to the above, you may fetch copies of these images from 
// the following locations, and place them in the same directory as sketch.js. 
// (NOTE: Unlike the URLs above, these will NOT work for direct embedding!!)
// After doing so, you can use the following for local development: 
var eye0url = "eye0.png";
var eye1url = "eye1.png";
var eye2url = "eye2.png";

var eyeImage0;
var eyeImage1;
var eyeImage2;
var currentImage;
var currentImageIndex;

function preload() {
    // Preload our images. 
    eyeImage0 = loadImage(eye0url);
    eyeImage1 = loadImage(eye1url);
    eyeImage2 = loadImage(eye2url);
    currentImageIndex = 0;
}

function setup() {
    // Create the canvas; make the pixels of the first image readable. 
    createCanvas(192, 143);
    currentImage = eyeImage0;
}

function mousePressed(){
    // Select the next image; make its pixels readable.
    currentImageIndex = (currentImageIndex+1)%3; 
    if (currentImageIndex === 0){
      currentImage = eyeImage0;
    } else if (currentImageIndex === 1){
      currentImage = eyeImage1;
    } else if (currentImageIndex === 2){
      currentImage = eyeImage2;
    }
}


  • Please use our lightweight sketch template. Copy-paste the code above into that template, in order to get started.
  • Run the program and click in the canvas; note how the current image switches. The variable currentImage contains one of the three eye images. You can fetch the color at any of its pixels (x,y) by using the currentImage.get(x,y) function. See the p5.js Pointillism example to understand this.
  • There are several ways to determine the brightness of a color. One way is to use the average of its red, green, and blue components, obtained with the red(), green(), and blue() functions. Another is to use the brightness() function.
  • Your task is to write code in the draw() function beginning at line 35, such that the variables brightestPixelX and brightestPixelY contain the correct values. You’ll need to loop over every pixel in order to find the brightest. You’ve solved problems like this before….
  • Don’t forget to comment your code, and please give attention to code style.
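
The search described above is just a pair of nested loops that track a running maximum. Here is a minimal sketch of that pattern; to keep it self-contained, it takes an arbitrary brightness-lookup function rather than a p5.Image. In your actual sketch, the lookup at (x, y) would be `brightness(currentImage.get(x, y))`, and the loop bounds would be currentImageW and currentImageH.

```javascript
// Sketch of the brightest-pixel search for Assignment 08-B.
// brightnessAt(x, y) stands in for brightness(currentImage.get(x, y)).
function findBrightestPixel(w, h, brightnessAt) {
  var brightestX = 0;
  var brightestY = 0;
  var maxBrightness = -1; // any real brightness will beat this
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var b = brightnessAt(x, y);
      if (b > maxBrightness) { // new running maximum: remember where
        maxBrightness = b;
        brightestX = x;
        brightestY = y;
      }
    }
  }
  return { x: brightestX, y: brightestY };
}
```

In draw(), the equivalent loop would simply assign to brightestPixelX and brightestPixelY instead of returning an object.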

Then, as per usual for all Assignments uploaded to Autolab:

  • Put the following information into comments at the top of your code: Your name; Your class section or time; Your email address; and Assignment-08-B
  • Name your project UserID-08-b. For example, if your Andrew ID is placebo, then your project name should be placebo-08-b.
  • Zip and upload your code to Autolab, using the provided instructions. Zip your entire sketch folder. (Since you’re using our lightweight sketch template, there will be no libraries subdirectory to include.)

Project 08: Computational Portrait (Custom Pixel)

In this creative Project, to be uploaded to WordPress, you will create a computational portrait, using some kind of original surface treatment (such as a “custom pixel”) of a hidden underlying photograph.



In Peripheral Vision: Bell Labs, the S-C 4020, and the Origins of Computer Art, Zabet Patterson writes:

In 1959, the electronics manufacturer Stromberg-Carlson produced the S-C 4020, a device that allowed mainframe computers to present and preserve images. In the mainframe era, the output of text and image was quite literally peripheral; the S-C 4020 — a strange and elaborate apparatus, with a cathode ray screen, a tape deck, a buffer unit, a film camera, and a photo-paper camera — produced most of the computer graphics of the late 1950s and early 1960s. At Bell Laboratories in Murray Hill, New Jersey, the S-C 4020 became a crucial part of ongoing encounters among art, science, and technology. 

The above image, created with the S-C 4020, is one of the first and most famous computer artworks ever made: “Studies in Perception”, by Ken Knowlton (1931-) and Leon Harmon (1922-1982), developed as an experiment at Bell Laboratories in 1966. According to MedienKunstNet,

The reclining nude represented the first experiment to scan a photograph into a computer and reconstitute it with a gray scale, using 12 discrete levels of gray, produced by mathematical and electronic symbols. The scanning process established a certain level of gray in a certain area of the photo and replaced it with one of the symbols. This process was used to try to establish the minimum amount of information the human eye needed to resolve an image. 

The curators of the V&A Museum write:

Only by stepping back from the image (which was 12 feet wide), did the symbols merge to form the figure of a reclining nude. Although the image was hastily removed after their colleague returned, and even more hastily dismissed by the institution’s PR department, it was leaked into the public realm, first by appearing at a press conference in the loft of Robert Rauschenberg, and later emblazoned across the New York Times. What had started life as a work-place prank became an overnight sensation.

Around the same time, American painter Chuck Close (1940-) — an associate of Rauschenberg — began working to systematically deconstruct photography through painting. After achieving “photorealistic” effects through paint alone (as in his famous 1966-1967 self-portrait), Close moved on to create his own, highly personal “picture-elements”. In one series of portraits, Close’s images are broken down into regions composed of numerous small circles in complementary colors. (Note how no square actually contains the color of the subject’s skin! Instead, a skin tone is approximated by a pair of blue and orange circles.)


Close has been at this for decades and has devised many provocative and truly astonishing picture elements, the most ‘personal’ of which are undoubtedly the thumbprints he used to construct this 8-foot tall portrait of Fanny, his wife’s grandmother (click to enlarge).


Close is a man who keenly understands procedural art; his portrait of his friend Roy Lichtenstein (below) is as algorithmic as anything you’ll see in this CS course. Check out this astounding time-lapse, in which a group of workers accrete more than 40 different colors of paper pulp through specially-cut stencils:

Whereas Chuck Close’s colored square-circle self-portrait suggests a translation strategy of direct pixel-for-paint substitution, his “Fanny” portrait suggests an altogether different strategy, in which values are rendered through the accumulation (or massing) of marks in different densities. In a word: more thumbprints = darker. We will see both of these visual strategies in the works of computational new-media artists, below.

Primary among these is New York-based artist Danny Rozin, who for nearly two decades has been exploring “physicalized” custom pixels. His 1999 “Wooden Mirror” is widely considered a classic work of new media art; it consists of 840 pieces of wood which tilt toward or away from an overhead light. This is the “substitution” strategy; Rozin directly maps each (camera) pixel’s brightness to the orientation angle of a servo motor. Brightness = angle.


His “Peg Mirror” (2007) likewise rotates 650 cylindrical elements towards or away from the light:

Rozin’s “Weave Mirror” creates an image by rotating interlocking elements, whose exteriors are colored with a gradient from light to dark.

Ryan Alexander allows fungus-like filaments to grow according to forces from an underlying image. It turns out that those filaments are actually strings of words.



In the work below by Robert Hodgin, 25,000 dots recreate a portrait by Rembrandt Peale. According to Hodgin, “Each dot pushes away its neighbors. The strength at which the dot repulses and the radius of the dot are both governed by a source greyscale photo.”


We see the massing or density-based strategy once again in the portraits below, each of which consists entirely of just a single line! These are examples of so-called “TSP art”, in which an algorithm that solves the well-known Traveling Salesman Problem is applied to a set of points whose density is based on the darkness of the image. On the left is a self-portrait by Mark Bruce made with StippleGen; on the right is a portrait of Paul Erdős by Robert Bosch. (Click to enlarge)



In this interactive dance performance by Camille Utterback, the space between the performers is filled with colors derived from their bodies and movements.


The use of dynamism to resynthesize photographic imagery is another important visual strategy. In this self-portrait from 2007, Robert Hodgin uses “a 3D flowfield to control [moving] particles that pick up their color based on webcam input.” In essence: moving particles pick up color, and shmear it, as they move around on top of an underlying photo. 


Erik Natzke also uses moving particles to create his compositions:


The Task: A Computational Portrait / Custom Pixel

Ok, enough already. In this Project, you will create a computational portrait, rendering data from a (hidden) photograph with a “custom pixel” or other original algorithmic graphical treatment. 


  • The portrait must depict a real living person whom you know personally. It may not be Barack Obama, Marilyn Monroe, Elvis, Bob Marley, Jesus Christ, or any other celebrity or religious figure. You must secure permission from your subject, and use a photo you’ve taken yourself. Self-portraits are permissible.
  • You wanted to make a portrait of the course faculty (Roger, Golan, TAs)?  Ha, ha, nobody has thought of that before. No.
  • Students whose religions prohibit the representation of sentient beings are excused from the first constraint, and may create a representation of an animal instead, or make some other proposal by arrangement with the professors.
  • You may not display the original photograph in unaltered form at any time. You may only display the portrait generated from it.
  • You must use an actual photograph as a starting point for a computational interpretation; you may not synthesize a portrait “from scratch” (as you did in your Variable Faces project).
  • Get some practice using a local server to develop and test sketches that load external media such as images.
  • Be sure to test your work when you upload it to the WordPress blog. You may need to change the URL of the source image for it display correctly. We recommend storing your image on, which simplifies development and deployment.
  • Dimensions may be no larger than 800×800 pixels, though smaller canvases are fine.

Sample Code:

We hope the above examples are stimulating, and as usual, we hesitate to provide sample code. Still, it’s helpful to know how to set up an image for algorithmic interpretation. We highly recommend you look at Dan Shiffman’s great p5.js Pointillism Example. Here also is Golan’s simple self-portrait, composed of random dots and mouse-drawn lines:


Here are the Project-08 Requirements: 

  • Create a computational portrait program in p5.js, as described above.
  • When you’re done, embed your p5.js sketch in a blog post on this site, using the (usual) instructions here. Make sure that your p5.js code is visible and attractively formatted in the post. Include some comments in your code.
  • In your blog post, write a sentence or two reflecting on your process and product. In discussing your process, it would be awesome to see any of your paper sketches from your notebook; these could be as simple as photos captured with your phone.
  • Include a couple screen-shots of your finished portrait.
  • Label your project’s blog post with the Category Project-08-Portrait.
  • Label your project’s blog post with the Category referring to your section (e.g. GolanSection, RogerSectionA, RogerSectionB, RogerSectionD, or RogerSectionE).