Set 10 (Due 11/12)

This set of Week 10 Deliverables has 3 parts, and is due by 11:59pm EST on Thursday, November 12th, 2015.

Looking Outwards 10: A Focus on Women Practitioners

Although there are many innovative women producing exceptional work in the fields of computational design and new-media arts, they remain statistically under-represented in many festivals, media exhibitions, conferences, museums, and panels. In this week’s Looking Outwards assignment, we aim to deepen our familiarity with their work, as a step towards building a more equitable economy of attention. You are asked to identify an interesting interactive artwork, visualization, tactical media project, or other computational design, that was created by someone who happens not to be a dude.

To help you get started, we have prepared a partial list of accomplished women working in these fields. You are welcome to consult this list — and you are also welcome to depart from it if there is someone we’ve accidentally overlooked. Please try to select a project that involved the creation of custom software.

Once you have identified a particular project or work which you find intriguing or inspirational, then, in a blog post of about 100-200 words,

  • Please discuss the project. What do you admire about it, and why do you admire these aspects of it?
  • Provide a short biography of the creator. What did she study? Where does she work? What kind of work does she do, broadly speaking?
  • Link (if possible) to the work. To the best of your abilities, be sure to provide the creator’s name, title of the work, and year of creation.
  • Embed an image and/or a YouTube/Vimeo video of the project (if available).
  • Label your blog post with the Category, LookingOutwards-10.
  • Label your blog post with the Category referring to your section (e.g. GolanSection, RogerSectionA, RogerSectionB, RogerSectionD, or RogerSectionE).

Assignment 10-A: Text Rain

In this Assignment, you are asked to create a “cover version” (re-implementation) of the interactive video work, Text Rain. The purpose of this assignment is to strengthen your skills in pixel-processing and image analysis, while introducing classic principles of interactive video art. For this Assignment, you will need to work at a computer with a webcam. You will be making a real, interactive video system that works in the browser!


Background: About Text Rain

Let’s begin with a brief discussion of a key precursor: Myron Krueger’s Video Place. If you’ve ever wondered who made the first software system for camera-based play, wonder no longer; Video Place is not only the first interactive artwork to use a camera, it’s also one of the first interactive artworks, at all. Krueger (born 1942) is a pioneering American computer artist who developed some of the earliest computer-based interactive systems; he is also considered to be among the first generation of virtual reality and augmented reality researchers. Below is some 1988 documentation of Myron Krueger’s landmark interactive artwork, which was developed continuously between ~1972 and 1989, and which premiered publicly in 1974. The Video Place project comprised at least two dozen profoundly inventive scenes which comprehensively explored the design space of full-body camera-based interactions with virtual graphics — including telepresence applications, drawing programs, and interactions with animated artificial creatures. Many of these scenes allowed for multiple simultaneous interactants, connected telematically over significant distances. Video Place has influenced several generations of new media artworks — including Camille Utterback’s Text Rain:

Below is Text Rain (1999) by Camille Utterback and Romy Achituv — also widely regarded as a classic work of interactive media art. (You can experience Text Rain for yourself at the Pittsburgh Children’s Museum.) In watching this video, pay careful attention to how Camille describes her work’s core computational mechanism, from 0:48 to 0:55:

Did you hear Camille when she said, “The falling text will land on anything darker than a certain threshold, and fall whenever that obstacle is removed”? That’s what you’re going to be implementing in this Assignment!

Getting There: Testing Your Webcam Setup

Let’s make sure that p5.js is working properly with your computer’s webcam. NOTE: We will be using a new template for this Assignment, specially designed for capturing live video from webcams. You can get this capture template here (and see it running live below):

So: below you should see a live feed from your webcam, which uses this template:
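For reference, the core of such a capture template typically looks like the following minimal sketch. (This is my own paraphrase of the p5.js createCapture() pattern, not the template’s exact code; the variable name myCaptureDevice is illustrative.)

```javascript
// Minimal p5.js webcam-capture sketch (a paraphrase of the capture
// template's pattern; the variable name is illustrative).
var myCaptureDevice;

function setup() {
  createCanvas(640, 480);
  myCaptureDevice = createCapture(VIDEO); // request webcam access
  myCaptureDevice.size(640, 480);         // match the canvas dimensions
  myCaptureDevice.hide();                 // hide the default DOM video element
}

function draw() {
  myCaptureDevice.loadPixels();           // make pixels available for get()
  image(myCaptureDevice, 0, 0);           // draw the current video frame
}
```

When you load a sketch like this, the browser should prompt you for camera permission, as discussed below.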


If you don’t see any live camera, please be sure to troubleshoot the following 3 things:

1. Device Permissions? It’s necessary to give your browser permission to access your webcam; otherwise, websites could violate your privacy. You’ll see a popup like one of the following; be sure to click “Allow”:


Note that if you have clicked “Block” at some point in the past, while visiting your test page, then Chrome will remember this setting. You may need to manage the camera settings for your browser; see the screenshot image in the section below for some clues for fixing this.

2. Correct Device? It’s possible that your browser may be defaulting to the incorrect video capture device. This is definitely a problem if you’re using the Macs in the CMU Computing Services cluster in CFA-318. Those machines have a second capture device called CamCamX (an app used for screencasting); because this occurs alphabetically before your FaceTime HD Camera, your browser may be connecting to CamCamX instead of your webcam. In the upper right corner of your Chrome browser, click on the little camera icon, and then select your web camera from the pulldown menu:


3. Is the Camera already in use? Devices like cameras generally don’t allow themselves to be accessed by more than one piece of software at a time. If you don’t see any video coming from your webcam, check to see if there might be another process running in the background that is currently accessing the camera. One pro-tip is that Google Chrome will indicate which tabs are accessing the camera, with a small red dot. Be sure to close any other tabs that might be trying to access the camera:
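If you’re unsure which cameras your browser can actually see, the standard MediaDevices API will list them. Here’s a small helper you could paste into the developer console (the function name is my own):

```javascript
// List the labels of all video-input devices the browser can see,
// using the standard MediaDevices API. Returns a Promise of labels.
function listVideoInputs() {
  return navigator.mediaDevices.enumerateDevices().then(function (devices) {
    return devices
      .filter(function (d) { return d.kind === 'videoinput'; })
      .map(function (d) { return d.label || '(label hidden until permission granted)'; });
  });
}
```

Calling listVideoInputs().then(console.log) in the console will print, for example, both “FaceTime HD Camera” and “CamCamX” on the cluster machines described above.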


If everything is working correctly, you should be able to see your video in a working p5.js environment. For the Text Rain assignment in particular, we strongly recommend that you set yourself up against a light-colored wall, and that you wear dark colors, as follows:


Below is an animated-GIF recording of my version of TextRain, based on the provided code template, and written in p5.js:


If you’re still unable to get your webcam working: this is certainly a bummer, but you can still complete the Assignment. Instead of using the webcam, use this static image instead (or a similar one that you make yourself). Your code will be almost exactly the same if you use the static image; you’ll just have less fun.


  • Test your camera/browser relationship in p5.js, for example, by visiting the p5.js reference page for camera capture devices.
  • Create a sketch derived from the provided capture template. If you have problems testing your sketch in the browser, you can try switching to the p5.js IDE, or check some of the other suggestions above. Your canvas should be 640×480, which is a very common camera-capture dimension.
  • Set yourself up in dark clothing against a light-colored wall, or swap in the test image provided above (or something similar) if you don’t have a good background to work against.
  • Do it. Here’s a step-by-step guide to get started coding Text Rain:
    • Our strategy will be to design the behavior of a single letter-particle first, and then (eventually) extend it to a whole bunch of them. Begin writing your code with a single particle whose location is represented by a pair of global variables, px and py. Place that particle at the top of the canvas somewhere (for example, the location (320,0)), and — for the time being — display it with a small ellipse.
    • On each frame of the draw() call, fetch the color in the video at that particle’s location using the capture.get() command, and store that color in a local variable called theColorAtPxPy. Furthermore, use the brightness() command to compute the brightness of that color, and store this in a variable called theBrightnessOfTheColorAtPxPy. (Alternatively, you could compute the brightness of that color by averaging its red(), green(), and blue() components.)
    • On each frame, check to see if theBrightnessOfTheColorAtPxPy is greater than some threshold (we recommend storing this in a global variable called brightnessThreshold). If it is, then move the particle downwards by adding a small positive amount to py.
    • On the other hand, if theBrightnessOfTheColorAtPxPy is less than a different global variable, darknessThreshold, then the behavior is slightly more complex. (FYI: This is the situation in which the particle has somehow worked its way down into a dark region. For example, perhaps the visitor, wearing a black shirt, abruptly moved their body upwards. If this happens, then the particle can become “trapped” in the interior of the dark region; it needs to climb back “up” to sit on the user’s shoulder.) If this is the case, then — within a single frame of draw()— use a while() loop to move the particle upwards until it is either no longer in a dark region, or it hits the top of the canvas.
    • If a particle reaches the bottom of the screen, it should reappear at the top of the canvas.
    • For some recommended starting values, try using 50 for the brightnessThreshold, and try setting the darknessThreshold to 45 (in other words, just a few gray levels darker than the brightnessThreshold). You could also try setting them to the same value, initially. (The exact thresholds to use will depend on your camera, lighting, clothing, and wall color.) A downward velocity of 1 pixel per frame is fine.
  • If you achieve the above, then now it’s time to generalize the single particle into an array of particles… and make them look like letters.
    • Find or write a short poem (or line of text) “about bodies”. It should have at least 12 characters, preferably more.
    • Below you’ll find a partially-written prototype for a class called TextRainLetter, which stores a horizontal position px, a vertical position py, and a single letter. We advise you to complete the code in this class. For example, its render() method should draw its letter at (px, py). You might find functions like textSize() and textAlign() to be helpful.
    • Create a globally-scoped array of TextRainLetters, and in your setup() function, populate this array with objects created from the characters in your poem. We think you’ll find the string.length and string.charAt() functions very useful for obtaining and assigning the letters. You’ll also probably want to use the p5.js map() function to distribute the positions of the letters across the canvas.
    • For full credit, move all of the decision-making “intelligence” for animating the particles into the update() method of the TextRainLetter class, so that the letters individually know how to move themselves in relationship to the underlying video pixels.
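Independent of p5.js, the per-frame decision logic described in the steps above can be sketched as a pure function. (The names stepParticle, brightnessAt, and opts are my own; in a sketch, brightnessAt would wrap capture.get() and brightness().)

```javascript
// One animation step for a single letter-particle, following the rules above.
// brightnessAt(x, y) should return the brightness (0-255) of the video pixel
// at that location. Returns the particle's new y position.
function stepParticle(px, py, brightnessAt, opts) {
  var b = brightnessAt(px, py);
  if (b > opts.brightnessThreshold) {
    py += 1; // bright pixel underfoot: keep falling
  } else if (b < opts.darknessThreshold) {
    // Trapped inside a dark region (e.g. the visitor moved upwards):
    // climb until we leave the dark area or reach the top of the canvas.
    while (py > 0 && brightnessAt(px, py) < opts.darknessThreshold) {
      py -= 1;
    }
  }
  // Between the two thresholds, the particle simply rests where it is.
  return py;
}
```

When py exceeds the canvas height, reset it to 0 so the letter reappears at the top, as described above.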
// A class to contain a single letter in the TextRain poem. 
// Basically, this is a particle that associates a position and a character.
function TextRainLetter (inputL, inputX, inputY) {
    this.letter = inputL;
    this.px = inputX;
    this.py = inputY;

    this.update = function() {
        // Update the position of a TextRainLetter. 
        // 1. Fetch the color of the pixel at the (px,py) location of the TextRainLetter.
        // 2. Compute its brightness.
        // 3. If the TextRainLetter is in a bright area, move downwards.
        //    Else, if it's in a dark area, move up until we're in a light area.
    }

    this.reset = function() {
        // Reset py to its initial position at the top of the screen. 
        // Also useful for testing. 
    }

    this.render = function() {
        // Render the letter. 
    }
}
Then, as per usual for all Assignments uploaded to Autolab:

  • Put the following information into comments at the top of your code: your name; your class section or time; your email address; and Assignment-10-A.
  • Name your project UserID-10-A. For example, if your Andrew ID is placebo, then your project name should be placebo-10-A.
  • Zip and upload your code to Autolab, using the provided instructions. Zip your entire sketch folder.

Project 10: An Interactive Creature

Create an interactive creature that responds to the cursor. 

Here are the Project-10 Requirements: 

  • Create a p5.js sketch featuring an interactive creature that responds to the user’s cursor.
  • Your creature must respond to the position and/or click state of the user’s cursor, though how it responds is entirely up to you. Optionally, your creature may also respond to key presses. Give your creature a goal (e.g. searching for food, climbing the screen, avoiding the cursor, etc.) and have it react when it obtains its goal.
  • Your creature’s body should be designed (which is to say, not just an ellipse or rectangle primitive); it should be expressive (which is to say, it should reveal something about the creature’s internal state); and it should be responsive (which is to say, it should be affected by things external to the creature). In support of this, you are encouraged to give your creature body parts like fins, eyes, spots, tendrils, limbs, chromatophores, etcetera, which can be expressively animated. Here’s a fun list of animal anatomy body parts to help inspire you.
  • You are encouraged, but not required, to use springy physics, particle systems, and/or similar simulated logics to construct your creature. For example, your creature’s body might consist of a cluster of particles, or a curved blob connecting a series of points, or be scaffolded by a simulated mesh or truss of connected springs. Some helpful sample code for springs can be seen here.
  • Consider the seven defining properties of living things: movement, respiration, sensitivity, growth, reproduction, excretion, and nutrition. Which of these have you implemented?
  • Your canvas should be no larger than 800 pixels in any dimension, please.
  • When you’re done, embed your p5.js sketch in a blog post on this site, using the (usual) instructions here. Make sure that your p5.js code is visible and attractively formatted in the post. Include some comments in your code.
  • In your blog post, write a sentence or two reflecting on your process and product. In discussing your process, it would be awesome if you included any of your paper sketches from your notebook; these could be as simple as photos captured with your phone.
  • Please include one or two screen-shots of your finished composition.
  • Label your project’s blog post with the Category Project-10-Creature.
  • Label your project’s blog post with the Category referring to your section (e.g. GolanSection, RogerSectionA, RogerSectionB, RogerSectionD, or RogerSectionE).
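If you choose the springy-physics route, the core of a damped spring is surprisingly small. Here is a minimal sketch of one integration step pulling a body part toward a target, such as the cursor. (The names and constants are illustrative, not taken from the linked sample code.)

```javascript
// One Euler-integration step of a damped spring: pulls a point p
// (with position x,y and velocity vx,vy) toward a target point.
// stiffness ~0.1 and damping ~0.9 are reasonable starting constants.
function springStep(p, target, stiffness, damping) {
  var ax = (target.x - p.x) * stiffness; // spring force toward the target
  var ay = (target.y - p.y) * stiffness;
  p.vx = (p.vx + ax) * damping;          // accumulate, then damp velocity
  p.vy = (p.vy + ay) * damping;
  p.x += p.vx;
  p.y += p.vy;
  return p;
}
```

Called once per draw() frame with the mouse position as the target, this makes a point chase the cursor with a lively, overshooting wobble; chaining several such points makes a tail or tendril.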

Below are some examples of interactive artificial lifeforms you may find inspirational.

Here’s a quick sketch for showing how you can make an eye look in a given direction, such as towards the cursor. All of the magic is in line 22, where you see the atan2() function!
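That sketch isn’t reproduced here, but the atan2() trick it relies on can be shown in isolation: atan2() gives the angle from the eye’s center to the cursor, and the pupil is placed a fixed distance along that angle. (The function name and the pupil-offset radius below are illustrative; in p5.js you could use atan2() directly instead of Math.atan2.)

```javascript
// Compute where to draw a pupil so the eye "looks at" a target point.
// The pupil sits pupilOffset pixels from the eye's center, along the
// angle from the eye to the target.
function pupilPosition(eyeX, eyeY, targetX, targetY, pupilOffset) {
  var angle = Math.atan2(targetY - eyeY, targetX - eyeX);
  return {
    x: eyeX + pupilOffset * Math.cos(angle),
    y: eyeY + pupilOffset * Math.sin(angle)
  };
}
```

In draw(), you would call this with (mouseX, mouseY) as the target and draw a small ellipse at the returned position, inside a larger ellipse for the eyeball.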