p5.geolocation looks like a really cool and useful library for many projects; given how reliant we are on location data these days, I can imagine a handful of projects where I would want a library like this to extend what I can do.


I really liked the soft body example shown on p5.js because it offers a lot of different ways to transition from one thing to another in a smooth, aesthetic pattern that I feel like I may want to use.


The square-connect app looks like a very useful thing to have as the company Square becomes more and more popular for sellers to have in their back pocket. It also has a very simple application to use, and I think more people can take advantage of it.


Can you beat Mario Bros 1-1 with ONLY your face? 

(Turns out yes, but with many trials, slow game play, and very little chance to become a top ranked gamer.)

I downloaded FaceOSC, investigated how its different components worked, and read over the examples and their code to see what I could do with FaceOSC and how the program is run.

I really love the "Mario Bros 1-1 but with a twist" trend, so I thought it would be funny to make a "Can you beat Mario Bros 1-1 with ONLY your face?" game of my own.


As I tested my own game, the exact threshold for the "center" of the game was rather unclear, because I was estimating where the face would be (assuming it was relatively centered). Therefore, I later added a "centering" section at the beginning, where the program waits a couple of seconds for the player to calibrate themselves before setting the thresholds for the left and right sides.

I also later switched the left and right controls because the camera mirrors the user: the left side of the image is the user's left eye, which is confusing during play because movements appear to be the opposite of the intended result.
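The calibration and direction logic above can be sketched in plain Java. The 75-pixel margin matches the thresholds in my Processing sketch below; the class and method names here are just illustrative.

```java
// Sketch of the centering + threshold logic (names are illustrative).
public class FaceCenterer {
    double leftThreshold, rightThreshold;

    // Called once after the countdown, with the face's resting x position.
    void calibrate(double centerX) {
        leftThreshold  = centerX - 75; // same 75-pixel margin as the sketch
        rightThreshold = centerX + 75;
    }

    // Direction Mario should move: -1 = left, +1 = right, 0 = stay.
    // The mapping is swapped relative to what you might expect because the
    // camera mirrors the player.
    int direction(double faceX) {
        if (faceX < leftThreshold)  return -1; // Mario left
        if (faceX > rightThreshold) return +1; // Mario right
        return 0;
    }
}
```

Keeping the thresholds relative to a calibrated center (rather than hard-coding screen coordinates) is what lets the game work wherever the player happens to sit.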

FaceOSC and Processing need to be installed before usage:

// Sabrina Zhai
// 9.23.19
// This program is intended to be used with SuperMarioBros, 
// allowing players to play 1-1 with a twist.
// This code is adapted from FaceOSCReceiver.
// A template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC
// 2012 Dan Wilcox
// for the IACD Spring 2012 class at the CMU School of Art
// adapted from from Greg Borenstein's 2011 example

import oscP5.*;
OscP5 oscP5;

import java.awt.*;
import java.awt.event.*;
import java.awt.event.KeyEvent;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

Robot robot;

float openThreshold = 4;
float leftThreshold; // for the left SIDE and not the left EYE
float rightThreshold;

float previousMouthHeight;
float previousPosition;

int begin; 
int duration = 3;
int time = 3;
boolean faceSet = false;

void setup() {
  size(640, 480);

  begin = millis();  

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");

  //Sets up the Robot to type into the computer
  try {
    robot = new Robot();
  } 
  catch (AWTException e) {
    e.printStackTrace();
  }
}
void draw() {
  background(255); // clear the frame, as in the original FaceOSCReceiver example
  stroke(0);
  fill(0);

  if (time > 0) { 
    time = duration - (millis() - begin)/1000;
    text("Setting current face position as center in..." + time, 10, 20);
  } else if (!faceSet) {  
    text("Setting current face position as center in...Face set!", 10, 20);

    // Set the face's threshold positions
    leftThreshold = posePosition.x - 75; 
    rightThreshold = posePosition.x + 75; 
    faceSet = true;
  }

  // Helps user see where the threshold to move their head is
  line(leftThreshold, 0, leftThreshold, height);
  line(rightThreshold, 0, rightThreshold, height);

  // Actions after a face is found
  if (found > 0) { 

    // Draw the face
    translate(posePosition.x, posePosition.y);
    noFill();
    ellipse(-20, eyeLeft * -9, 20, 7);
    ellipse(20, eyeRight * -9, 20, 7);
    ellipse(0, 20, mouthWidth * 3, mouthHeight * 3);
    ellipse(-5, nostrils * -1, 7, 3);
    ellipse(5, nostrils * -1, 7, 3);
    rect(-20, eyebrowLeft * -5, 25, 5);
    rect(20, eyebrowRight * -5, 25, 5);

    // Makes Mario jump
    if (mouthHeight > openThreshold) { // Mouth open (continuously)
      if (previousMouthHeight < openThreshold) { // Only on the frame the mouth first opens
        robot.keyPress(KeyEvent.VK_UP);   // jump key -- depends on your emulator's bindings
        robot.keyRelease(KeyEvent.VK_UP);
      }
    }
    previousMouthHeight = mouthHeight;

    // Moves Mario to the left (user moves to the right)
    if (posePosition.x < leftThreshold && previousPosition < leftThreshold) {
      robot.keyPress(KeyEvent.VK_LEFT);   // hold left while past the threshold
    } else {
      robot.keyRelease(KeyEvent.VK_LEFT);
    }

    // Moves Mario to the right (user moves to the left)
    if (posePosition.x > rightThreshold && previousPosition > rightThreshold) {
      robot.keyPress(KeyEvent.VK_RIGHT);  // hold right while past the threshold
    } else {
      robot.keyRelease(KeyEvent.VK_RIGHT);
    }
    previousPosition = posePosition.x;
  }
}

public void found(int i) {
  //println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  //println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  //println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  //println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  //println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  //println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  //println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  //println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  //println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  //println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    //println("UNPLUGGED: " + m);
  }
}


Menstrual Clock. /  Code

The circle represents the actual period cycle.

The red arc of the circle represents the time actually spent bleeding. In this case, the program takes the average 28-day cycle with 6 days of bleeding.
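The arc math behind this is simple; here is a minimal sketch of it in plain Java (my own illustration of the idea, not the clock's actual code):

```java
// Arc math for the cycle clock: the full circle is a 28-day cycle,
// and the red arc covers the 6 bleeding days.
public class CycleArc {
    static final int CYCLE_DAYS = 28;
    static final int BLEED_DAYS = 6;

    // Angle (radians) swept by the red "bleeding" arc.
    static double bleedSweep() {
        return 2 * Math.PI * BLEED_DAYS / CYCLE_DAYS;
    }

    // Angle of the hand for a given day of the cycle (0 = cycle start).
    static double handAngle(int dayOfCycle) {
        return 2 * Math.PI * dayOfCycle / CYCLE_DAYS;
    }

    // Is the hand inside the red arc on this day?
    static boolean isBleeding(int dayOfCycle) {
        return dayOfCycle < BLEED_DAYS;
    }
}
```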

I think an interesting/practical place for my clock to live is on the landing page of a period tracker app. It can sit in the background as the user checks through their calendar and expected periods. Information-wise, it may not be as accurate as, say, a table chart, but I think the visual representation can be a nice touch to the app.

In the future, I would like the day the period starts and the lengths of the bleeding and period cycle to depend on the actual user, taking data from previous cycles.


I originally intended this clock to be used over the course of days, with the fluid simulating the actual flow of a period (the color getting darker in the last couple of days, starting off light, peaking on day two, then gradually decreasing). The amount of fluid being added (density, rather than area) was intended to be close to a bell curve, and I tried using an easing function to simulate this, but my results didn't show a drastic change in the density of the fluid.
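One way to get that bell-curve shape is a Gaussian centered on the peak day. This is a sketch of what I was going for, not the code from my clock; the peak day and spread values are assumptions.

```java
// Bell-curve density for the added fluid: peaks on day two of a
// six-day bleed and tapers off on either side. (Illustrative values.)
public class FlowDensity {
    static double density(double day) {
        double peakDay = 2.0; // heaviest flow on day two
        double spread  = 1.5; // how quickly the curve falls off
        double z = (day - peakDay) / spread;
        return Math.exp(-0.5 * z * z); // 1.0 at the peak, falling smoothly
    }
}
```

Scaling the per-frame density by this value would make the simulated flow start light, peak, then trail off, matching the description above.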

As I got my fluid simulation to work, I realized that visually, it works best at a one-second time step rather than a day, hour, or even minute one:


(You can see there is hardly any difference between day vs. hour vs. minute.)

Therefore, I decided to base the fluid simulation on the seconds passing rather than any other time factor (since the density of the added fluid, which was also supposed to be set at a day scale, wasn't working as well as I hoped anyway).

Changing between days:

As you can see, the line increases, but because the bleeding period is over, the arc is white.




The interaction of this piece consists of people playing the literal strings they see crisscrossing through the room; the piano then plays the note corresponding to each string.

I think it's so interesting to represent a string instrument with actual string; the spatiality of this piece really draws me towards it, and the feedback (music) I get makes me want to continue playing it.

The concept is centered on this question: what if we could express architecture through music? Architecture and music, to me, engage very different senses, but through this piece I'm able to have multi-sensory responses as I walk through the installation.


I enjoy the process by which the artists created this: they utilized 3D modeling tools, coded the layout and music, and wired everything with Arduino, but in the final piece all of that recedes into the background; the tech doesn't feel like it is at the forefront of the installation at all. The piece focuses on the strings and the music that accompanies them, and I feel like it's an immersive way for people to interact with a song.



I created the wave/oscillating motion with the easing function sineOut(x). I achieved the oscillating motion I intended, but I fell short in trying to fully loop (without hiccups) the add-ons/effects I had paired with this animation:

I had hoped to achieve a motion blur effect, which, combined with the arcs changing dimensions (closing and opening like Pac-Man), gives the trail left behind a certain effect.

I was inspired after looking through various beesandbombs gifs:

The movement of each individual piece seems to be controlled by a sinusoidal wave. I also tried playing with the overlap of the colors of each piece (where some parts of the line look darker because of the varying opacity).
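The sineOut easing I used, and the per-piece phase offset I read into those gifs, are both one-liners. The offset formula here is my guess at the technique, not beesandbombs' actual code.

```java
// sineOut easing plus phase-offset sinusoidal motion (illustrative).
public class WaveMotion {
    // sineOut easing: fast start, gentle stop. t in [0, 1].
    static double sineOut(double t) {
        return Math.sin(t * Math.PI / 2);
    }

    // Offset of piece i at time t: every piece follows the same wave,
    // shifted in phase by its index, which creates the rippling look.
    static double offset(int i, double t, int numPieces, double amplitude) {
        double phase = 2 * Math.PI * i / numPieces;
        return amplitude * Math.sin(2 * Math.PI * t + phase);
    }
}
```

Because the phase offsets span exactly one full period across the pieces, the animation loops cleanly when t wraps from 1 back to 0.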



I think my interests sit between first word and last word art. I'm intrigued by the possibilities that new media art can offer -- first word art -- but the ways I want to use new media have been experimented with before, and I enjoy being able to use what others have discovered. My interests center around technologies that already exist in our culture and seeing what more we can do with them. Technologies can shape our culture the same way that culture can shape technologies; the perspective we have towards a technology affects the possibilities and usages we see it offering.

Being able to create or invent a way of thinking that can leave a mark on our culture/society is certainly appealing, but I feel as though, especially in today's world, creating something entirely novel is difficult and near impossible. I think we are mostly able to take what has previously been created and see it in new lights, give it new perspectives. It's this sort of reasoning that makes me feel like I am in the middle of the spectrum between first and last word art: I want to be able to use what has already been discovered and experimented with before, but give it a twist and make it my own.



  1. The artwork is square with a border from the edge of the frame to the cluster of lines.
  2. The lines are short and are all the same size.
  3. There are many lines in clusters that are spread about half of a line's length apart.
  4. There are "holes" within the clusters of lines. These holes seem to be at most the area of 5 lines' length; generally they are smaller.
  5. The way that the lines seem to be clustered is mostly vertical or up to a 45 degree angle. Some are more (almost vertical), but not as many.
  6. The lines overlap slightly, but for the most part, each seems to be given its own space (side to side). There is slight overlap row to row (this is rather consistent).
  7. Sometimes, observation 5 is flipped, so that most lines are horizontal rather than vertical.
  8.  Around the hole, there is less overlap between the lines and more lines are simply "floating" (not touching anything else).
  9. The gaps come in odd shapes, ranging from more rectangular to tiny circles. Other gaps are just larger spaces in between each overlap of the lines. The gaps in their entirety account for only a small percentage of the canvas.
  10. Some of the lines are repetitively at the same angle (side to side).


At first, the result I made looked like the image below, but I felt as though my scatter of lines wasn't as long/random (?) as Molnar's. I increased the scale factor of my lines, and when drawing them, I also added a factor to increase the scale as they're drawn. Making the lines look the way I wanted was particularly hard, especially since I struggled most with finding a way to rotate the lines (since rotate() in p5.js turns the entire canvas, and not just the lines I'm looking at).
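One workaround for the rotate() problem is to skip canvas transforms entirely and compute the line's endpoints with trigonometry. This is a plain-Java sketch of that idea (names are illustrative, not my p5.js code):

```java
// Rotate a single line without rotating the canvas: derive both
// endpoints from the line's center, length, and angle.
public class RotatedLine {
    // Returns {x1, y1, x2, y2} for a line of the given length,
    // centered at (cx, cy) and rotated by `angle` radians.
    static double[] endpoints(double cx, double cy, double len, double angle) {
        double dx = Math.cos(angle) * len / 2;
        double dy = Math.sin(angle) * len / 2;
        return new double[] { cx - dx, cy - dy, cx + dx, cy + dy };
    }
}
```

In p5.js terms, you would feed the four returned values straight into line(), leaving every other line on the canvas untouched.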

For the actual interruption, I used Perlin noise to achieve the effect you see above. Overall, it was really interesting to figure out, piece by piece, how to achieve the look of this project.




Zero One is a code-based generative video programmed by Raven Kwok and sound by Mike Gao. It was programmed and generated with Processing with minor edits in Premiere during composition. It consists of multiple interlinked generative systems, each of which has its customized features, but collectively share the core concept of an evolving elementary cellular automaton.

I really admire that within any still shot, even within repetitive patterns, each design/part looks different or unique from each other. It gives the entire video a more organic/natural feel to it, instead of being super cookie-cutter.


The colors, shapes, and motion graphics are, I feel, the areas the artist has taken control of, using their artistic sensibilities to adjust the project towards what they find appealing and attractive.

The order in this project is the similarity of shapes: the repeated use of circles and lines throughout the entire video, and similar actions within a scene (e.g. the slant of all the images, or everything moving downwards). The disorder in the video is the specifically different form each shape takes (size, color, each pixel's movement). Each circle is sized a little differently, and the placement of, say, a line through a circle varies each time. This is the disorder of the artwork, but because it is placed in an orderly, balanced fashion, it achieves effective complexity.
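The "evolving elementary cellular automaton" at the core of the piece is a well-defined system: each cell's next state depends only on itself and its two neighbors, looked up in an 8-bit rule. Here is the textbook definition in plain Java (my own illustration, not Raven Kwok's code):

```java
// One generation of an elementary cellular automaton under a given
// Wolfram rule number (0-255). Cells wrap around at the edges.
public class ECA {
    static int[] step(int[] cells, int rule) {
        int n = cells.length;
        int[] next = new int[n];
        for (int i = 0; i < n; i++) {
            int left  = cells[(i - 1 + n) % n];
            int right = cells[(i + 1) % n];
            // Pack the 3-cell neighborhood into a value 0-7, then read
            // the corresponding bit of the rule number.
            int pattern = (left << 2) | (cells[i] << 1) | right;
            next[i] = (rule >> pattern) & 1;
        }
        return next;
    }
}
```

Even this tiny deterministic rule, iterated from a single live cell, produces the mix of repetition and surprise that gives the video its order-within-disorder quality.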

Zero One by Raven Kwok | Video


I feel as though (many) natural things display signs of disorder or randomness, while (many) human-designed/built things display signs of order, as we have dictated precisely where and how we want to place something. In this way, I feel as though flower fields or gardens exhibit effective complexity: because they are planted by humans, they have specified growth locations and are trimmed to the gardener's desire. Many can be placed in rows or in specific grid structures, which is total order. However, I also feel like much naturally growing foliage has a mind of its own, and after the initial setup (planting the seed), so long as they do not cross a certain boundary, the plants are allowed to grow however they want.

Because most human-planted flowers/trees/etc. do have designated areas, I think this selection sits a little closer to total order. Due to the way that plants grow, though, I think there is an inherent degree of disorder (if left unmanaged by humans). This also depends a little on the type of garden:

(Image: a garden)

Somewhat in between -- the flowers/trees aren't really allowed to pass into the concrete path (though they do, a little), but they also grow with a degree of disorder (leaning towards the path, not being completely straight/perpendicular to the ground).

(Image: a flower field)

Much closer to total order; very organized with extremely even paths; in the distance, the colors are very distinguishable from each other.

(Image: a flower field)

Although they are all the same breed, the colors of the flowers show a degree of randomness and disorder.


The Problem of Dynamics: Must generative art change over time while being exhibited to an audience?

The dynamism of a generative art piece is very subjective; I don't think that adding time as a factor necessarily increases it. To be considered generative, art doesn't need to change over time -- some of it may be missed by the viewer, and while that is valid for art, I don't think it is a requirement of generative art -- but I think art that changes over time can certainly be considered generative.

Since generative art is "autonomous," I think the way the art is produced should be able to produce different art each time it is run (so it is not repeating/looping), but because the art must be created, recorded, and started/stopped at some point, I do not agree that the art must change over time while it is being exhibited. It is only at the point of creation that I think it should be able to change (not necessarily that it is changing, as this is up to the discretion of the creator, but simply that it has the ability to change).