szh-Body

Can you beat Mario Bros 1-1 with ONLY your face? 

(Turns out yes, but with many trials, slow gameplay, and very little chance of becoming a top-ranked gamer.)

I downloaded FaceOSC, investigated how its different components worked, and read over the examples and their code to see what I could do with FaceOSC and how the program is run.

I really love the "Mario Bros 1-1 but with a twist" trend, so I thought it would be funny to make a "Can you beat Mario Bros 1-1 with ONLY your face?" game.


As I tested my own game, the exact threshold for the "center" of the game was rather unclear, because I had to estimate where the face would be (assuming it was relatively centered). I therefore added a "centering" section at the beginning, where the program waits a few seconds for the player to calibrate themselves before setting the thresholds for the left and right sides.

I also later swapped left and right, because the camera mirrors the user: the tracker's "left" is the user's left eye, which is confusing during play because the on-screen movement looks opposite to the intended result.

FaceOSC and Processing need to be installed before usage:

//
// Sabrina Zhai
// 9.23.19
//
// This program is intended to be used with SuperMarioBros, 
// allowing players to play 1-1 with a twist.
//
// This code is adapted from FaceOSCReceiver.
//
// A template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
// 
//

import oscP5.*;
OscP5 oscP5;

import java.awt.*;
import java.awt.event.*;
import java.awt.event.KeyEvent.*;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

Robot robot;

float openThreshold = 4;
float leftThreshold; // for the left SIDE and not the left EYE
float rightThreshold;

float previousMouthHeight;
float previousPosition;

int begin; 
int duration = 3;
int time = 3;
boolean faceSet = false;

void setup() {
  size(640, 480);
  frameRate(30);

  begin = millis();  

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");

  //Sets up the Robot to type into the computer
  try {
    robot = new Robot();
    robot.setAutoDelay(0);
  } 
  catch (AWTException e) {
    e.printStackTrace();
  }
}

void draw() {
  background(255);
  stroke(0);
  textSize(18);

  if (time > 0) { 
    time = duration - (millis() - begin)/1000;
    text("Setting current face position as center in..." + time, 10, 20);
  } else if (!faceSet) {  
    text("Setting current face position as center in...Face set!", 10, 20);

    //Set the face's threshold positions
    leftThreshold = posePosition.x - 75; 
    rightThreshold = posePosition.x + 75; 
    faceSet = true;
  }

  //Helps user see where the threshold to move their head is
  line(leftThreshold, 0, leftThreshold, height);
  line(rightThreshold, 0, rightThreshold, height);

   // Actions after a face is found
  if (found > 0) { 

    // Draw the face
    translate(posePosition.x, posePosition.y);
    scale(poseScale/2);
    noFill();
    ellipse(-20, eyeLeft * -9, 20, 7);
    ellipse(20, eyeRight * -9, 20, 7);
    ellipse(0, 20, mouthWidth* 3, mouthHeight * 3);
    ellipse(-5, nostrils * -1, 7, 3);
    ellipse(5, nostrils * -1, 7, 3);
    rectMode(CENTER);
    fill(0);
    rect(-20, eyebrowLeft * -5, 25, 5);
    rect(20, eyebrowRight * -5, 25, 5);

    // Makes Mario jump
    if (mouthHeight > openThreshold) { // Mouth open (continuously)
      robot.keyPress(java.awt.event.KeyEvent.VK_UP);
      if (previousMouthHeight < openThreshold) { // Mouth just opened this frame; release so it registers as a single tap
        robot.keyRelease(java.awt.event.KeyEvent.VK_UP);
      }
    }
    previousMouthHeight = mouthHeight;

    // Moves Mario to the left (user moves to the right)
    if (posePosition.x < leftThreshold && previousPosition < leftThreshold) {
      robot.keyPress(java.awt.event.KeyEvent.VK_LEFT);
    } else {
      robot.keyRelease(java.awt.event.KeyEvent.VK_LEFT);
    }

    // Moves Mario to the right (user moves to the left)
    if (posePosition.x > rightThreshold && previousPosition > rightThreshold) {
      robot.keyPress(java.awt.event.KeyEvent.VK_RIGHT);
    } else {
      robot.keyRelease(java.awt.event.KeyEvent.VK_RIGHT);
    }
    previousPosition = posePosition.x;
  }
}


// OSC CALLBACK FUNCTIONS
public void found(int i) {
  //println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  //println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  //println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  //println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  //println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  //println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  //println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  //println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  //println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  //println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    //println("UNPLUGGED: " + m);
  }
}

zapra – Body

view code


Initial sketches for hair changing, marionettes, creepy face masks.

My original idea was to make a whole set of masks, but I decided to scale it back and focus on just one. I was partially inspired by Picasso portraits, and by a local American artist, Richard Merkin, whose art hangs in my parents' dining room. I had a lot of fun playing around with the face tracker and exploring how I could use motion and facial expressions as visual triggers for different events. I did run into some problems with determining the ratios for the face when registering blinks or smiles, since the values change just by tilting the head or moving closer to the camera. From testing the mask with other people, I've realized that the ratios are mostly tuned to my own proportions. If I could spend more time with this, I'd find a more consistent way of registering these expressions and also explore rotation further.
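
One way to make those ratios less sensitive to distance is to normalize the raw gesture values by FaceOSC's pose scale, which grows as the face approaches the camera. A minimal Processing sketch of that idea, assuming the eyeLeft, eyeRight, mouthWidth, mouthHeight, and poseScale globals from the FaceOSCReceiver template above; the two threshold constants are hypothetical and would need tuning per face:

// Hypothetical thresholds; tune these against your own face.
float BLINK_RATIO = 0.28;
float SMILE_RATIO = 1.6;

// Eye openness divided by poseScale stays roughly constant as the
// user moves toward or away from the camera.
boolean isBlinking() {
  float eyeOpenness = (eyeLeft + eyeRight) / 2.0 / poseScale;
  return eyeOpenness < BLINK_RATIO;
}

// The mouth's width-to-height ratio is already scale-invariant,
// so it needs no extra normalization.
boolean isSmiling() {
  return mouthWidth / mouthHeight > SMILE_RATIO;
}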


Another mask idea I explored while playing around with the face tracker.


Additional sketches and some skateboarding cats.

iSob-Body

Video (version with sound coming soon):

GIF:

Sketch (warning: if you've given p5 webcam permissions, it will start automatically).

I enjoyed many hours of suffering with my original idea, which was to make the body melt by moving its pixels around. Unfortunately, per-pixel setting and getting slowed my code to a near-unusable speed. The body segmentation model, bodyPix with part segmentation, was also not fast or accurate enough for what I wanted. And finally, making a two-dimensional area 'melt' convincingly would require a lot of complicated math and physics - like making my blobs from deliverable three bounce correctly, but even more complicated. It would probably require something like a physics engine.
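
For reference, writing into the pixels[] array directly is usually much faster than calling get()/set() per pixel, since each get()/set() call repeats its own bounds checks and color packing. A rough sketch of a column-shift "melt" along those lines in Processing (the function name and maxShift parameter are made up):

// Shift each column of the image downward by a random offset,
// writing directly into pixels[] instead of calling get()/set().
void meltDown(PImage img, int maxShift) {
  img.loadPixels();
  for (int x = 0; x < img.width; x++) {
    int shift = int(random(maxShift));
    // copy bottom-up so rows aren't overwritten before they're read
    for (int y = img.height - 1; y >= shift; y--) {
      img.pixels[y * img.width + x] = img.pixels[(y - shift) * img.width + x];
    }
  }
  img.updatePixels();
}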

Instead, I decided to learn how to use 3D primitives in p5. There aren't many of them, but a lot can be accomplished with toruses and ellipsoids. I learnt how to position them using translate() and rotate(), use different textures, and set and position the lights. I also delved a bit into sound by creating an oscillator whose amplitude and frequency change as you open your mouth or move side to side, respectively. I even learnt about the limitations of the camera in p5, and had to do various weird tricks just to make the background rainbow (I could also have displayed the webcam video, but it would have been distorted unless I used the orthographic camera).
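
The sketch itself is in p5.js, but the same mouth-to-amplitude and position-to-frequency mapping looks roughly like this with Processing's processing.sound library, assuming mouthHeight and posePosition are filled in by FaceOSC callbacks like the ones in the listing above; the 0-10 mouth range and 220-880 Hz band are arbitrary guesses:

import processing.sound.*;

SinOsc osc;

void setup() {
  size(640, 480);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  // louder as the mouth opens, higher-pitched as the head moves right
  float amp  = constrain(map(mouthHeight, 0, 10, 0, 1), 0, 1);
  float freq = map(posePosition.x, 0, width, 220, 880);
  osc.amp(amp);
  osc.freq(freq);
}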

The visual aesthetic of this piece is nothing of great interest, but ideally it will make the viewer feel like a singing egg trapped in a rainbow dreamscape. Listen to the slightly pitched-down anime girl soundtrack, and augment it with your own special song. Kick back and relax.

Physical sketches:

sovid – Body

Sketch!

I really can't explain the design inspiration for this one. I mainly wanted to play around with 3D in P5, and having just watched 'A Fish Called Wanda' and modeled a cowboy hat for fun, I decided to combine them all.

The program tracks your face rotation and monitors your blinks, and the model follows that data. Working with multiple colors proved difficult, especially because I couldn't use the textures I had on the model, since the UV maps didn't work in P5. Using the BRFv4 face tracker, I used the rotation value to move the model and figured out the points for the eyes. Each eye had six points: two on the top, two on the bottom, and one on each corner.

I was able to track blinks by finding the eye aspect ratio (EAR) between the points on each eye, using research by Soukupová and Čech.
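
Their eye aspect ratio is the sum of the two vertical point-pair distances over twice the horizontal one; it drops toward zero when the eye closes and is largely independent of face size. A minimal Processing version, assuming the six landmarks arrive as PVectors in the paper's order (corners at p[0] and p[3], top pair p[1] and p[2], bottom pair p[4] and p[5]):

// EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|), zero-indexed here.
float eyeAspectRatio(PVector[] p) {
  float vertical = PVector.dist(p[1], p[5]) + PVector.dist(p[2], p[4]);
  float horizontal = PVector.dist(p[0], p[3]);
  return vertical / (2 * horizontal);
}

boolean isBlink(PVector[] eyePoints) {
  // ~0.2 is the ballpark threshold reported in the paper
  return eyeAspectRatio(eyePoints) < 0.2;
}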

vikz-Body

Eyes Up Here! 

https://editor.p5js.org/vikz/sketches/3ZMNW-_Bq 

Eyes Up Here helps EVERYONE -- instead of just faking an obvious glance at someone's boobs while they are talking, why not just make it easier for us all and have the boobs shift right to the eyes? In fact, it would be a shame to stop right there -- Eyes Up Here allows ALL the genitals to move, right to the face -- talk about multitasking! Eyes up here, and boobs, and vaginas, and penises, and balls!

Click with your mouse to generate any of the vagina and boob pairings, and open your mouth to generate any of the balls and penis pairings. I had the most fun collecting sketches with my friends, and mostly struggled with implementing the images in a way that would scale accordingly and generate randomly.
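
A sketch of one way to do that random pick-and-scale in Processing, assuming an array of pairing images plus FaceOSC's posePosition and poseScale; every name here (pairings, drawPairing, the divisor of 4) is hypothetical:

PImage[] pairings;  // filled in setup() with loadImage() calls
int current = 0;

void mousePressed() {
  current = int(random(pairings.length)); // new random pairing per click
}

void drawPairing() {
  imageMode(CENTER);
  // scale the overlay with the tracked face so it follows the user
  float w = pairings[current].width  * poseScale / 4;
  float h = pairings[current].height * poseScale / 4;
  image(pairings[current], posePosition.x, posePosition.y, w, h);
}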

I had initially explored a breadth of ideas, including creating games in which one would have to fit certain facial parts within certain boundaries, and exploring the negative space between the lines created when connecting different points on the face. However, I became particularly inspired by Jeremy Bailey's works. I also decided that many of those initial ideas would have been poorly executed due to the natural gunkiness of the templates given, and I didn't have any strong vision that would make those ideas worthy of coming to life, aside from their inherent intrigue (i.e. negative spaces, etc.).

I am quite happy with the final outcome, as it was intended to be a more straightforward piece, and help me better understand the facial detecting mechanism. If I had more time, I would love to implement a system where users can draw their own doodles and have those become imposed on the face.

Initial Sketches:


meh-Body

Link to p5js: https://editor.p5js.org/meij_who/sketches/yhAr76fDF

The inspiration comes from one of the earliest sci-fi films, A Trip To The Moon (1902). The idea started when I accidentally found out how to create a scratchy film effect, so I decided to do something that feels like a stop-motion film. I initially used the distance from the upper lip to the lower lip to detect the mouth opening, but it becomes less accurate as I move farther from the screen. For detecting eye blinking, I therefore compare the current eye height with the eye height from the previous frame, which gives a more accurate blinking result. I also tried to figure out the math for keeping the concentric circles aligned with the rotation of the head, but the face sometimes goes outside the circles. Given more time, I would scale the size of the facial features according to my distance from the camera, and add rotation to the face to make it more realistic. It would also be nice to create a projectile motion for the spaceship landing on the right eye of the moon :)
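
The frame-difference blink check might look something like this sketch in Processing, assuming an eyeHeight value arrives from the face tracker each frame; the 0.4 drop fraction is a hypothetical tuning value:

float prevEyeHeight;

void checkBlink(float eyeHeight) {
  // a sudden frame-to-frame drop in eye height reads as a blink,
  // which holds up better at a distance than an absolute threshold
  if (prevEyeHeight - eyeHeight > prevEyeHeight * 0.4) {
    // blink detected: advance the stop-motion frame, etc.
  }
  prevEyeHeight = eyeHeight;
}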

(A fun little music video I made for The Distance To The Moon by L CON)