Lumar – FaceOSC

KITSUNE MASK – will be filmed better when it’s day time again from Marisa Lu on Vimeo.

I wanted to play with things that happen in your periphery, drawing on the Japanese legend of the shapeshifting fox spirit. When you turn away, the fox mask appears; you can never be sure of what you saw, or you only ever see it in your periphery, which plays on the mysterious, mischievous (and sometimes malevolent) nature of the fox spirit. (The floating fairy lights are 'kitsunebi'.) The experience speaks to the transient, duplicitous nature of appearances, but it also has a childish side, like monsters under the bed that are never there when you look.

BEGINNING OF PROCESS

Some exploration:

“Eyebrows! Where’d my eyebrows go?”

Baseball cap took over as my hairline?

How might I use what’s in the real and physical environment around the user to influence my program? How do I take the interaction beyond screen and face to include physical objects in the surroundings?

(Screenshots from early FaceOSC experiments.)

“Will it follow my nose? How far will it go?”

It tries to keep the nose's center point close to the midpoint between the eyes – I don't think it recognizes the turned face as the same face, but rather as a new face with somewhat squished features.

Turning roughly three-quarters of the way, give or take, will cause FaceOSC to stop detecting a face. At first it comes off as a shortcoming of the program – but how might I use that not as a fault but as part of the experience? Perhaps some sort of play on peripheral vision?
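One way to lean into that dropout rather than fight it (a sketch of my own, not from the final code): count how many consecutive frames the face has been missing, and only treat it as a real "turned away" after a short grace period, so a momentary tracking glitch doesn't trigger the effect. The class name and threshold here are invented for illustration.

```java
// Hypothetical helper: distinguishes a real "turned away" from a brief
// tracking glitch by requiring the face to be missing for several frames.
public class TurnAwayDetector {
    private final int graceFrames;   // frames of no-face before we commit
    private int missingFor = 0;      // consecutive frames with found == 0

    public TurnAwayDetector(int graceFrames) {
        this.graceFrames = graceFrames;
    }

    // Call once per draw() with FaceOSC's `found` value.
    public boolean update(int found) {
        if (found > 0) {
            missingFor = 0;          // face is back; reset the counter
        } else {
            missingFor++;
        }
        return missingFor >= graceFrames;  // true = really turned away
    }
}
```

At 30 fps, a grace period of around 5 frames is roughly 170 ms – long enough to ride out a one-frame glitch, short enough to still feel instant.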

I really think reacting to where you are (and aren't) looking is something the VR salon lacked. I was surprised that no one had explored that aspect of the VR experience yet. The environments stayed the same, and nobody played up the fact that the viewer has a limited field of vision while exploring a virtual space.

What if it was a naughty pet? One that acted differently when you weren't 'looking'? I took some old code and whipped it into a giraffe – it would be interesting if the person's face were the leaf.

Or if there was a narrative element to it – if your face was Red Riding Hood, and 'grandma' kept coming increasingly closer… (or perhaps a functional program that can produce something?)


Grandma would have your generalized proportions from the FaceOSC data, as well as your skin tone, so she'd fit better into the storyline of… well… of her being your grandma! (Getting color values from the live capture is easy to do with a get() call.)
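A sketch of that idea, assuming only Processing-style packed ARGB pixels as returned by get(): a single pixel is noisy, so averaging a handful of samples taken near the cheek landmarks gives a steadier skin tone. The helper class and the idea of where to sample are my own additions.

```java
// Hypothetical helper: averages packed ARGB pixel values (the format
// Processing's get() returns) into one representative color. Averaging
// several samples near the cheeks is steadier than trusting one pixel.
public class SkinTone {
    public static int average(int[] argbSamples) {
        long r = 0, g = 0, b = 0;
        for (int c : argbSamples) {
            r += (c >> 16) & 0xFF;   // red channel
            g += (c >> 8)  & 0xFF;   // green channel
            b +=  c        & 0xFF;   // blue channel
        }
        int n = argbSamples.length;
        // repack the averages with full alpha
        return 0xFF000000 | (int)(r / n) << 16 | (int)(g / n) << 8 | (int)(b / n);
    }
}
```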


But as soon as you turned your face away, you'd see the wolf in your periphery.

….bezier is still such a pain. I'd rather stay with something that works intuitively with the medium of code (that uses the advantages coding offers) than something as arduous as illustration. (Or go ahead and write a program that generates the bezier inputs for me?)


Could I fake/simulate Leap Motion-style detection – make my program's interaction feel like a pseudo Leap Motion web program, based on… face motion?
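A minimal sketch of what that could mean (my own invention, not implemented in this project): treat the nose position as a cursor, with exponential smoothing so the pointer glides instead of jittering with every tracking frame. The class name and the alpha value are hypothetical.

```java
// Hypothetical "face as cursor" smoother: eases a cursor toward each new
// raw nose sample so jittery tracking reads as smooth pointer motion.
public class NoseCursor {
    private float x, y;
    private final float alpha;     // 0..1, higher = snappier response
    private boolean primed = false;

    public NoseCursor(float alpha) {
        this.alpha = alpha;
    }

    // Feed in the raw nose position each frame; get the smoothed cursor.
    public float[] update(float noseX, float noseY) {
        if (!primed) {
            x = noseX;             // first sample: jump straight there
            y = noseY;
            primed = true;
        } else {
            x += alpha * (noseX - x);  // ease toward the new sample
            y += alpha * (noseY - y);
        }
        return new float[]{x, y};
    }
}
```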

 

What about the mouth? How accurate is the tracking there? Could I do lip reading?


It would be a very, very rudimentary lip reading – one that isn't accurate. But it still has potential – the very fact that it's not particularly accurate could have some comedic application.
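The most rudimentary version might look like this (a sketch under my own assumptions): compare the mouth's opening to its width – the same two measurements the final sketch derives from rawArray – and bucket the result into a few coarse shapes. The thresholds and labels are invented for illustration, not tuned values.

```java
// Hypothetical mouth-shape classifier: buckets the mouth by its
// opening-to-width ratio. Thresholds are illustrative guesses.
public class MouthShape {
    public static String classify(float mouthWidth, float mouthOpen) {
        float ratio = mouthOpen / mouthWidth;
        if (ratio < 0.1f)  return "closed";   // lips together: m / b / p
        if (ratio < 0.45f) return "mid";      // relaxed opening: e / eh
        return "open";                        // tall opening: o / ah
    }
}
```

Even three buckets would be enough for a comedic "lip reader" that confidently mislabels most of what you say.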

 

Some more experimentation:

(Gif: the floating light balls working.)

…..I was having too much fun. I really need to narrow down, think critically about one idea, and develop it thoroughly. Restraint during the ideating process is such a struggle for me – it has really impeded my ability to produce polished deliverables under personal time constraints.

(Kitsune, in legend, have the power to conjure floating lights.)


WHY WON’T MY FINAL GIF UPLOAD HERE?!!?!?!?!??!

IT’S ON GITHUB:

HERE:

FINAL

Different features:

  • blowing the kitsunebi lights away
  • producing the lights
  • wiping them away
  • traditional face-painting marks
  • unveiling the shapeshifting form of the fox in your peripheral vision

The piece explores the duality of form through the shapeshifting legend of the Japanese fox spirit. The play on peripheral vision is key because it brings a specific interaction – a sense of surprise and uncertainty – to the user, who can never get a clear, head-on view of what appears when they turn away.
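The peripheral reveal in the code below boils down to one ratio: the on-screen width of the left half of the face versus the right half. As the head turns, the far side compresses and the ratio spikes. A minimal restatement of that check (the landmark meanings are my reading of the sketch's rawArray usage, simplified to a single nose x-coordinate; 3.7 is the threshold the final sketch uses):

```java
// Restates the mask trigger from the sketch: the face counts as "turned"
// when the left half of the face (jaw edge to nose) is far wider on
// screen than the right half. 3.7 matches the sketch's threshold.
public class MaskTrigger {
    public static boolean showMask(float leftJawX, float noseX, float rightJawX) {
        float lside = noseX - leftJawX;     // on-screen width of left half
        float rside = rightJawX - noseX;    // on-screen width of right half
        return lside / rside > 3.7f;        // strongly turned: reveal the fox
    }
}
```

Because the ratio only exceeds the threshold near the edge of FaceOSC's tracking range, the mask appears exactly when you can no longer look at it straight on.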

https://github.com/MohahaMarisa/Interactivity-computation/blob/master/Processing/bouncingboxface/blowing.gif

https://github.com/MohahaMarisa/Interactivity-computation/blob/master/Processing/bouncingboxface/sneezing.gif

 

 
// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
import oscP5.*;
OscP5 oscP5;

import processing.video.*;
Capture cam;

// num faces found
int found;
float[] rawArray;
PImage lmark;
PImage rmark;
PImage mask;
ArrayList<BouncingBox> particles = new ArrayList<BouncingBox>();
boolean lightupdate = true;
boolean creating = false; // persists across frames: one light per mouth-opening

void setup() {
  lmark = loadImage("kitsunemarkings.png");
  rmark = loadImage("kitsunemarkings2.png");
  mask = loadImage("kistuneMASK.png");
  size(640, 480);
  frameRate(30);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");

  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    cam = new Capture(this, 640, 480, cameras[0]);
    cam.start();
  }
}

void draw() {
  background(255);
  lightupdate = true;
  noStroke();
  if (cam.available()) {
    cam.read();
  }
  set(0, 0, cam);

  if (found > 0) {
    // corners of the two face-paint marks, interpolated between landmarks
    float startvx = 0.1*(rawArray[62]-rawArray[0]) + rawArray[0];
    float startvx2 = 0.1*(rawArray[90]-rawArray[32]) + rawArray[32];
    float startvy = (rawArray[73]+rawArray[1])/2;
    float startvy2 = (rawArray[91]+rawArray[33])/2;
    float endvx = startvx + 0.8*(rawArray[62]-startvx);
    float endvx2 = startvx2 + 0.8*(rawArray[70]-startvx2);
    float endvy = (rawArray[63]+rawArray[97])/2;
    float endvy2 = (rawArray[71]+rawArray[109])/2;

    pushStyle();
    imageMode(CORNERS);
    blendMode(SUBTRACT);
    image(lmark, startvx, startvy, endvx, endvy);
    image(rmark, startvx2, startvy2, endvx2, endvy2);
    popStyle();

    float lipheight = rawArray[123] - rawArray[103];
    float mouthOpen = rawArray[129] - rawArray[123];
    float originy = (rawArray[129]+rawArray[123])/2;
    float originx = rawArray[128];
    int sizing = 2*int(mouthOpen);

    if (mouthOpen > 0.2*lipheight) {
      if (!creating) {
        // mouth just opened: spawn one new light at the mouth
        creating = true;
        particles.add(new BouncingBox(originx, originy, sizing));
      }
      if ((rawArray[108]-rawArray[96]) < 1.25*(rawArray[70]-rawArray[62])) {
        // mouth is narrow relative to the nose: a "blowing" shape,
        // so push every light away from the face
        for (int i = 0; i < particles.size(); i++) {
          BouncingBox light = particles.get(i);
          int newvel = int(light.xx - rawArray[100]);
          if (newvel < 0) {
            light.xVel = -int(map(newvel, -width, 0, 1, 10));
          } else {
            light.xVel = int(map(newvel, 0, width, 10, 1));
          }
          light.move();
          lightupdate = false;
        }
      } else if (mouthOpen > 0.5*lipheight) {
        // mouth opened wide: keep growing the newest light
        particles.get(particles.size()-1).size = sizing;
      }
    } else {
      creating = false; // mouth closed again: allow the next light
    }

    for (int i = 0; i < particles.size(); i++) {
      BouncingBox light = particles.get(i);
      light.draw();
      if (lightupdate) {
        light.update();
      }
    }

    // how far the head is turned: left half of the face vs. right half
    float lside = rawArray[72]-rawArray[0];
    float rside = rawArray[32]-rawArray[90];
    float turnproportion = lside/rside;
    float masksize = 2.5*(rawArray[17]-rawArray[1]);
    if (turnproportion > 3.7) {
      int y = int(rawArray[1] - masksize/1.8);
      image(mask, rawArray[0], y, 0.75*masksize, masksize);
    }
  } else {
    particles.clear(); // no face: the lights vanish
  }
}

class BouncingBox {
    int xx;
    int yy;
    int xVel = int(random(-5, 5)); 
    int yVel = int(random(-5, 5)); 
    float size; 
    float initialsize;
    int darknessThreshold = 60;
    float noisex = random(0,100);
    BouncingBox(float originx, float originy, int sizing){
      xx = int(originx);
      yy = int(originy);
      initialsize = sizing;
      size = initialsize;
    }
    void move() {
        // Do not change this. 
        xx += xVel; 
        yy += yVel; 
    }

    void draw() {
        // Do not change this.
        pushStyle();
        blendMode(ADD);
        // layered translucent ellipses build a soft glowing orb
        for (int i = 0; i < 50; i++) {
            float opacity = map(i, 0, 50, 20, -10);
            fill(255, 250, 240, opacity);
            float realsize = map(i, 0, 50, 0, 1.5*size);
            ellipse(xx, yy, realsize, realsize);
        }
        popStyle();
    }

    void update() {
        noisex += random(0, 0.1);
        move();
        // sample the camera under the light; dark pixels act like walls
        int theColorAt = cam.get(xx, yy);
        float theBrightnessOfTheColor = brightness(theColorAt);
        if (xx + size / 2 >= width ||
            xx - size / 2 <= 0 || theBrightnessOfTheColor < darknessThreshold) {
            xVel = -xVel;
        }
        if (yy + size / 2 >= height ||
            yy - size / 2 <= 0 || theBrightnessOfTheColor < darknessThreshold) {
            yVel = -yVel;
        }
        // flicker: size wanders with Perlin noise
        size = initialsize*0.3 + initialsize*noise(noisex);
    }
}
/////////////////////////////////// OSC CALLBACK FUNCTIONS//////////////////////////////////

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void rawData(float[] raw) {
  println("raw data saved to rawArray");
  rawArray = raw;
  
}
