Drewch – FaceOSC

github: https://github.com/AndyChanglee/60-212/tree/readme-edits/headBanging

I’ll be honest, faces aren’t my thing and I tend to obscure them in my art, so coming up with feasible ideas for this project took some time. I had a revelatory moment when I started banging my head on the wall out of frustration: I’ll make a head-banging simulator!

Sunder the glass pane with the repeated bashing of your face, and keep doing it until you get your frustrations out. It works fine if I’m a sphere in a vacuum.
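A minimal sketch of one way that detection could work (not the actual project code): treat a sudden frame-to-frame jump in FaceOSC’s /pose/position as one bang. The threshold is a placeholder.

import oscP5.*;

OscP5 oscP5;
float faceX = -1, faceY = -1;
float prevX = -1, prevY = -1;
int bangs = 0;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338); // FaceOSC's default port
}

void draw() {
  background(255);
  if (prevX != -1) {
    // a large jump in head position between frames counts as a bang
    // (a real version would debounce repeated triggers)
    float speed = dist(prevX, prevY, faceX, faceY);
    if (speed > 40) bangs++;
  }
  prevX = faceX;
  prevY = faceY;
  fill(0);
  text("bangs: " + bangs, 20, 20);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/position")) {
    faceX = m.get(0).floatValue();
    faceY = m.get(1).floatValue();
  }
}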


Drewch – LookingOutwards05

Nitzu (Nitzan Bartov) said something very revelatory (at least to me) during the speed presentations, something along the lines of: “In the future, everybody is going to be playing games. If you aren’t playing them right now, it’s because the games you want to play haven’t been made yet. What kind of game do you want to play?”

For a while now, I’ve settled for the idea that not all people play, want to play, or appreciate games, and that’s ok. But now that I think about it, I only thought that way because games at the time were mainly formalist games. You really only ever had three flavors on the market: competitive, story-based, and arcade games. Games are reaching more people now than ever before, not only because of technology and accessibility but also because new kinds of games are being made. “What kind of game do you want to play?”

Nitzu wanted to play a soap opera VR game, so she made The Artificial and the Intelligent. It’s hilarious, but also thought-provoking, and also a soap opera. I wish I could find videos of another one of her games, Horizon, but what The Artificial and the Intelligent proved to me was that games are for everyone; they just don’t know it yet.

The Artificial and the Intelligent

aliot-faceosc

chomp

import oscP5.*;
import netP5.*;
  
PImage hand;
PImage foot;
PImage pizza;
PImage burger;
PImage cupcake;
PImage donut;
PImage bg;
float timer;
ArrayList<IntList> foods = new ArrayList<IntList>(); // typed so foods.get(i) returns an IntList
PImage[] indexedFoods = new PImage[6];
OscP5 oscP5;
NetAddress serverAddress;
NetAddress clientAddress;
int listeningPort;
int sendingPort;
float mouthHeight=-1;
float position_x=-1;
float position_y=-1;
boolean mouthOpen = false;
int score = 0; 
PFont f;
String feedback = "LEVEL 1";
int feedback_timer = 150;
xy feedback_pos = new xy(300,200);
int fall_speed = 4;
int duration = 250;
int level = 1;

class xy
{
  float x,y;
  xy(float _x, float _y)
  {
    x=_x;
    y=_y;
  }
} 

void setup(){
  size(800,500);
  smooth();
  bg = loadImage("factory_background.png");
  image(bg, 0, 0);
  
  timer = millis();
  
  hand = loadImage("hand.png");
  pizza = loadImage("pizza_graphic.png");
  burger = loadImage("burger_graphic.png");
  cupcake = loadImage("cupcake_graphic.png");
  donut = loadImage("donuts_graphic.png");
  foot = loadImage("foot.png");
  indexedFoods[0] = hand;
  indexedFoods[1] = foot;
  indexedFoods[2] = cupcake;
  indexedFoods[3] = pizza;
  indexedFoods[4] = burger;
  indexedFoods[5] = donut;
  generateFood();
  
  listeningPort = 8338;
  sendingPort = 12345;
  
  oscP5 = new OscP5(this, listeningPort);
  serverAddress = new NetAddress("127.0.0.1", sendingPort);

  f = createFont("Arial",16,true);
}

void generateFood(){
  int x = int(random(100, 700));
  int y = -100;
  int z = int(random(2,5));
  int t = 0;
  // random(0,6) so that index 5 (the donut) can actually appear;
  // int(random(0,5)) never returns 5
  int foodIndex = int(random(0,6));
  // every food is stored the same way: [imageIndex, x, y, size, time]
  IntList food = new IntList();
  food.append(foodIndex);
  food.append(x);
  food.append(y);
  food.append(z);
  food.append(t);
  foods.add(food);
}

void draw(){
  image(bg, 0, 0);
  if ( millis() - timer > 1000 ) {
    generateFood();
    timer = millis();
  }
  for (int i = 0; i < foods.size(); i++) {
    IntList food = foods.get(i);
    int index = food.get(0);
    int x = food.get(1);
    int y = food.get(2);
    int z = food.get(3);
    int t = food.get(4);
    PImage foodToDraw = indexedFoods[index];
    if (y < 150 && z == 2){
      food.set(2, y+ fall_speed);
    } else if (y < 200 && z == 3){
      food.set(2, y+ fall_speed);
    } else if (y < 275 && z == 4){
      food.set(2, y+ fall_speed);
    } else if (y < 350 && z == 5){
      food.set(2, y + fall_speed);
    }
    food.set(4, t+1); // add to time
    image(foodToDraw, x, y, 50*z, 50*z);
    if (t > duration){
      foods.remove(i);
      i--; // step back so the removal doesn't skip the next food
    }
  }
  noFill();
  strokeWeight(5);
  stroke(200,50,50);
  if (mouthHeight != -1 && position_x != -1 && position_y != -1){
    ellipse(position_x, position_y, 100, 10*mouthHeight);
  }
  xy mouth_position = mouthClosed();
  if (mouth_position.x != -1 && mouth_position.y != -1){
    chomp(mouth_position.x, mouth_position.y);
  }
  if (mouthOpen == false && mouthHeight >= 2){
    mouthOpen = true;
  }
  textFont(f,30);
  fill(255); 
  text("Level " + level + "   score: " + score, 500, 50);
  if (feedback != ""){
    text(feedback, feedback_pos.x, feedback_pos.y);
    feedback_timer-=1;
    if (feedback_timer <= 0){
      feedback = "";
    }
  }
  if (score >= level*100){
    level = level+1;
    duration -= 10;
    fall_speed += 4;
  }
}
 
void oscEvent(OscMessage receivedMessage)
{
  if (receivedMessage.checkAddrPattern("/gesture/mouth/height")==true){
    mouthHeight = receivedMessage.get(0).floatValue();
  } else if (receivedMessage.checkAddrPattern("/pose/position")==true){
    position_x = 800 - map(receivedMessage.get(0).floatValue(), 80.0, 530.0, 0.0, 800.0);
    position_y = receivedMessage.get(1).floatValue();
  }
}

xy mouthClosed(){
  xy mouthPos = new xy(-1,-1);
  if (mouthOpen == true && mouthHeight <= 2){
    mouthOpen = false;
    mouthPos.x = position_x;
    mouthPos.y = position_y;
  }
  return mouthPos;
}

void chomp(float mouth_x, float mouth_y){
  String[] food_feedback = {"Get in ma belly!", "Yum!", "Good Job!", "+10", "+10", "+10", "+10", "+10", "+10", "+10"};
  String[] hand_feedback = {"Ew!", "A hand?!", "wtf!", "A hand?!", "-10", "-10", "-10", "-10", "-10", "-10"};
  String[] foot_feedback = {"Ew!", "A foot?!", "A foot!", "Oh my lord!", "-10", "-10", "-10", "-10", "-10", "-10"};
  int feedback_index = int(random(0,9));
  for (int i = 0; i < foods.size(); i++) {
    IntList food = foods.get(i);
    int index = food.get(0);
    int x = food.get(1);
    int y = food.get(2);
    int z = food.get(3);
    int wid = z*25;
    if ((abs((x+wid)-mouth_x) < 50) && (abs((y+wid)-mouth_y) < 50)){
      foods.remove(i);
      i--; // step back so the removal doesn't skip the next food
      if (index >= 2){
        feedback = food_feedback[feedback_index];
        feedback_timer = 40;
        feedback_pos.x = map(x+wid, 0.0, 800.0, 20.0, 600.0);
        feedback_pos.y = map(y+wid, 0.0, 800.0, 20.0, 600.0);
        score += 10;
      } else if (index == 0){ //hand
        feedback = hand_feedback[feedback_index];
        feedback_timer = 40;
        feedback_pos.x = map(x+wid, 0.0, 800.0, 20.0, 600.0);
        feedback_pos.y = map(y+wid, 0.0, 800.0, 20.0, 600.0);
        score -= 10;
      } else if (index == 1){ //foot
        feedback = foot_feedback[feedback_index];
        feedback_timer = 40;
        feedback_pos.x = map(x+wid, 0.0, 800.0, 20.0, 600.0);
        feedback_pos.y = map(y+wid, 0.0, 800.0, 20.0, 600.0);
        score -= 10;
      }
    }
  }
}

For this project, I wanted to create a game that could be played easily enough with the face so that the user had an intuitive response. I settled on making a game about capturing and eating food because the inclination for a user to open their mouth when they see a burger/pizza/etc is automatic. It’s also hilarious. I used the mouth height parameter as well as the face position parameter to give the user control over the food they chomp on. I tried to make the game funny to play as well as to watch, and I think I definitely achieved the latter. If I could spend more time on this game I would revise the graphics. I drew them myself in an attempt to make the game feel kinda cheesy and morbid at the same time. I would also add more objects, introduce some better effects, add sound, and change the ellipse to a pair of lips on the screen.


arialy-lookingoutwards05

I found Jeremy Bailey to be the most memorable personality at Weird Reality. I heard his presentation, experienced his AR Pregnancy Simulator at the VR salon, and talked to him a little in person. It was definitely fun to be able to talk to him and then hear his commentary in the Pregnancy Simulator. The piece itself is pretty relaxing, with a stream of commentary, calm background music (birds chirping, if I remember correctly), and its setting in the field. But getting to see other people wear his VR headset and rub their imaginary belly at the VR salon was probably my favorite part of the piece. People’s movements almost seemed choreographed: looking around, then at the hands, then rubbing their belly. Being able to manipulate people’s movements in an open environment is both very entertaining and a strange concept.

 

Kelc – LookingOutwards05

I had the pleasure of speaking with this wonderful woman so I thought I would do my LookingOutwards05 on

Salome Asega


She is an artist and researcher from Brooklyn, NY, part of a duo called Candyfloss, and has worked on many projects within the realm of interactive video games, virtual reality simulations, and digital exploration.

At the VR Salon she facilitated a 3D drawing experience through the use of an Oculus headset and two game controllers. Users were able to bring their otherwise 2D creations to life, changing the brush and color in real time and creating marks in what looked like real space. What struck me about her piece in comparison to the others was the heavy attention paid to the quality of the graphics: the environment itself was convincing on its own, and the drawing technology was mesmerizing. One issue I saw detracting from the experience was the cord, but otherwise the entire setup was pretty flawless.

https://www.instagram.com/p/BLUK44GAIkm/?taken-by=suhlomay

Iyapo Repository

http://iyaporepository.tumblr.com/

One collaborative project that really stuck with me is the Iyapo Repository: a library / collection of physical and digital artifacts “created to reaffirm the future of peoples of African descent.” The pieces bring to life artifacts dealing with past, present, and future cultural endeavors of the African-American and African diasporic community. The name “Iyapo” comes from renowned sci-fi novelist Octavia Butler’s Lilith’s Brood, and each piece addresses concepts of Afrofuturism from strikingly different yet related perspectives. The library tackles topics that range from the lack of diversity in science fiction and futurist media to the crisis of documenting and eternalizing African-American culture and experiences.

Asega also participated in an event honoring Kara Walker’s A Subtlety in an attempt to amplify Walker’s message of heavy cultural significance as a collective experience. She was (is?) part of a non-profit dedicated to connecting current digital artists just entering the New Media arts scene. She does a really incredible job of blending new media art and technology with her ideological / cultural identity.

Guodu-LookingOutwards05


Lessons Learned from Prototyping VR Apps + Weird Reality Conference 

Stefan Welker (GoogleVR / Daydream Lab)

VR is something I’m not too knowledgeable about (yet), and something I’m still skeptical of. The Weird Reality conference was my first exposure to and experience with this technology. I’m mostly concerned about the potential motion sickness one can get from staying in a VR environment and the consequences of becoming disconnected from the physical world. But this conference has changed my perspective: I now view VR as a medium to further understand our natural world, collaborate in interdisciplinary teams, and help people experience or see something they normally cannot.

I was really intrigued by Stefan’s talk because of the parallels I saw between the way Google’s Daydream Lab approaches designing for VR and the design process that I’ve been learning and applying in school. In design, we learn to feel comfortable with failure in order to improve; to iterate and test quickly to find the most appropriate solution to a problem. Stefan described their motto as Explore everything. Fail fast. Learn fast. It almost feels like they are in a rush to learn everything in order to have VR become a more widely accepted and helpful tool. In the past year they’ve built two new app prototypes each week, and the successes and failures show in the few examples (out of many) that Stefan shared with us. Stefan even joked that their teams thought it wasn’t sustainable at first.

Lots of realizations, criteria, challenges, and discussions arose from their experiments, like:

  • users will test the limits of VR
  • without constraints in a multi-player setting, users may invade the privacy or personal space of other users
  • users can troll by not participating or responding in a timely manner
  • ice breakers are also important in a social VR setting because without an initiation of some sort, there is still social awkwardness
  • cloning and throwing objects is a lot of fun (I experienced the throwing aspect in the Institute for New Feeling’s Ditherer, in which it was possible to throw avocados on the ground)
  • adding play and whimsy into VR because you can and it’s fun

 

Even after listing some of these observations, I realize that with the seemingly limitless explorations that VR provides, understanding natural human behavior and psychology is integral in creating an environment and situation that encourages positive behavior from users.

Ultimately, (as cliche as this sounds), Stefan’s talk and the Weird Reality Conference opened up a new world for me in terms of the new possibilities and responsibilities that come with designing for VR, or AR.

As Vi Hart says, VR is powerful; designers and developers have the ability to create anything in their imagination, and users will have newfound capabilities to experience the sublime and fly, or maybe flap.

kander – LookingOutwards05

I was drawn to Martha Hipley’s work after viewing her project “Ur Cardboard Pet” in the VR Salon, which I found to be a tongue-in-cheek role-reversal of men’s attitudes towards women (I think her description said that it commented on the male gaze, but I don’t remember exactly).

Anyway, for this assignment I looked at “Wobble Wonder,” an immersive VR Segway experience that Hipley collaborated on with three other artists and engineers. The user stands on a platform and can tilt their body forward and backward to move (like a Segway). There are fans mounted to the user’s head, so if the user is moving “fast enough,” the fans will simulate air resistance. The project employs an Oculus headset through which the user can experience the world, which was largely modeled by Hipley. The world has a similar expressive feeling and color scheme to Hipley’s other work; she often uses paint in combination with code (for example, the images in “Ur Cardboard Pet” were hand painted).

I like this project because it has an appreciation for what VR can actually do. The project is about VR, rather than simply using VR as a medium to display something that could have been displayed on a flat screen. “Wobble Wonder” is a project that allows VR to shape the conceptualization of the project. Furthermore, with its use of fans and movement, it goes beyond simply constraining the user’s world to the visual.

An onscreen rendering of what the user of “Wobble Wonder” would experience.

I couldn’t embed a video, but this page has a video about the project.

 

 

Kander – LookingOutwards04

I looked at design I/O’s (Theo Watson and Emily Gobeille) interactive puppet installation called “Puppet Parade.” This project utilizes openFrameworks and Kinect to track a user’s hand movement and arm movement to control a projected bird.

I love the whimsy with which the birds and their environment are created in this project. The colors and shapes are lovely to look at, and, considering that this project is meant for children, I think they hit the nail on the stylistic head. However, the flip side to this artwork interacting with kids’ gesticulations is that the movement of the birds can often be quite jerky and uncomfortable (watch the video below, and you’ll see that nearly every kid is jumping up and down and waving their arms like Tigger after 3 bottles of 5-hr Energy). If they could have somehow found a way to smooth out that jerky movement, that would have improved the project. Additionally, I would love to see more interaction between the two birds being possible.
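A standard way to damp that kind of jitter (not necessarily what design I/O did) is an exponential moving average: each frame, the displayed position moves only a fraction of the way toward the raw tracked position.

// smoothed coordinates persist across frames
float smoothX, smoothY;

void updateSmoothing(float rawX, float rawY) {
  // 0.15 = heavy smoothing, 1.0 = no smoothing; tune to taste
  float a = 0.15;
  smoothX = lerp(smoothX, rawX, a);
  smoothY = lerp(smoothY, rawY, a);
}

The trade-off is latency: the heavier the smoothing, the more the bird lags behind the kid driving it.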

I think kids are an easy target for interactive art of this nature. Not that I’m saying there’s anything wrong with that — I applaud design I/O for recognizing that they have the perfect audience. But I also think that interactive art has great potential to make a statement, as it incorporates the users into the artwork, and I’m not really seeing that in this piece.

Project Page (I love the last image on this page!)

 

I also found a video of Theo and Emily describing a prototype of their project. It has a bit more explication on how it works:

 

 

 

 

Lumar – FaceOSC

KITSUNE MASK – will be filmed better when it’s day time again from Marisa Lu on Vimeo.

I wanted to play with things that happen in your periphery by utilizing the Japanese legend of the shapeshifting fox spirit. When you turn away, the fox mask appears; the interaction of never being able to be sure of what you saw, or seeing it only in your periphery, plays on the mysterious, mischievous (and sometimes malevolent) nature of the fox spirit (the floating fairy lights are ‘kitsunebi’). The experience speaks to the transient, duplicitous nature of appearances, but also has a childish side, like monsters under the bed that are never there when you look.

BEGINNING OF PROCESS

Some exploration:

“Eyebrows! Where’d my eyebrows go?”

Baseball cap took over as my hairline?

How might I use what’s in the real and physical environment around the user to influence my program? How do I take the interaction beyond screen and face to include physical objects in the surroundings?


“Will it follow my nose? How far will it go?”

It tries to keep the nose center point close to the center between the eyes – I don’t think it recognizes that it’s the same face turned, but rather a new face with somewhat squished features.

Give or take, turning a little more than about three-quarters away will cause FaceOSC to stop detecting a face. It comes off at first as an issue or shortcoming of the program – but how might I use that feature not as a fault but as part of the experience…perhaps some sort of play on peripheral vision?

I really think things happening based on your action/peripheral vision is something the VR salon lacked. I was surprised that no one had explored that aspect of the VR experience yet. The environment stayed the same, and they really didn’t play up the fact that the viewer has a limited field of vision while exploring a virtual space.

What if it was a naughty pet? One that acted differently when you weren’t ‘looking’? I took some old code and whipped it into a giraffe – would be interesting if the person’s face was the leaf?

Or if there was a narrative element to it – if your face was red riding hood, and ‘grandma’ kept coming increasingly closer… (or perhaps a functional program that can produce something?)


Grandma would have your generalized proportions from the FaceOSC data as well as your skin tone so it’d fit better into the story line of …well…of her as your grandma! (getting color values from live capture is easy to do with a get command)
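A minimal sketch of that sampling, assuming a running video Capture named cam and a cheek coordinate taken from the FaceOSC raw points; averaging a small patch is more stable than reading a single pixel.

color sampleSkinTone(Capture cam, int cx, int cy) {
  float r = 0, g = 0, b = 0;
  int n = 0;
  // average a 9x9 patch of camera pixels around the cheek point
  for (int dx = -4; dx <= 4; dx++) {
    for (int dy = -4; dy <= 4; dy++) {
      color c = cam.get(cx + dx, cy + dy);
      r += red(c);
      g += green(c);
      b += blue(c);
      n++;
    }
  }
  return color(r/n, g/n, b/n);
}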


but as soon as you turned your face away, you’d see the wolf in your periphery.

….bezier is still such a pain. I think I want to stay with something that works more intuitively with the medium of code (utilizes the advantages coding offers) better than something as arduous as illustration. (Or go ahead and write a program that generates the bezier inputs for me?)


Could I fake/simulate LEAP motion detection – make my program’s interaction feel like a pseudo leap motion based program on the web…based on….face motion?

 

What about the mouth? How accurate is it for that? Could I do lip reading?


It would be a very, very rudimentary lip reading – one that isn’t accurate. But it still has potential – the very fact that it’s not particularly accurate can have some comedic application.
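Even a crude version could just bucket the mouth geometry into a handful of shapes using FaceOSC’s /gesture/mouth/width and /gesture/mouth/height messages and guess at vowels from there. A hypothetical sketch of the classification step, with made-up thresholds:

// mouthWidth and mouthHeight would come from FaceOSC's
// /gesture/mouth/width and /gesture/mouth/height messages
String guessMouthShape(float mouthWidth, float mouthHeight) {
  if (mouthHeight < 1.5) return "closed"; // m, b, p
  float ratio = mouthWidth / mouthHeight;
  if (ratio > 4.0) return "wide";         // e, i
  if (ratio < 1.5) return "round";        // o, u
  return "open";                          // a
}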

 

Some more experimentation:


…..I was having too much fun. I really need to narrow down and think critically on something and develop it thoroughly. Having restraint during the ideating process is just such a struggle for me – it has really impeded my ability to produce polished deliverables under personal time constraints.

(kitsune in legend have the power to make floating lights)


WHY WON’T MY FINAL GIF UPLOAD HERE?!!?!?!?!??!

IT’S ON GITHUB:

HERE:

FINAL

Different features:

  • blowing kitsunebi lights away
  • producing the lights
  • wiping them away
  • traditional face painting marks
  • unveiling of the shapeshifting form factor of the fox in your peripheral vision

The piece explores the duality of form through the shapeshifting legend of the Japanese fox spirit. The play on peripheral vision is key because it brings a specific interaction, a sense of surprise and uncertainty, to the user: one can never get a clear, head-on view of what appears when they turn away.

https://github.com/MohahaMarisa/Interactivity-computation/blob/master/Processing/bouncingboxface/blowing.gif

https://github.com/MohahaMarisa/Interactivity-computation/blob/master/Processing/bouncingboxface/sneezing.gif

 

 
// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
import oscP5.*;
OscP5 oscP5;

import processing.video.*;
Capture cam;

// num faces found
int found;
float[] rawArray;
PImage lmark;
PImage rmark;
PImage mask;
ArrayList<BouncingBox> particles = new ArrayList<BouncingBox>(); // typed so particles.get(i) returns a BouncingBox
boolean lightupdate = true;
void setup() {
  lmark = loadImage("kitsunemarkings.png");
  rmark = loadImage("kitsunemarkings2.png");
  mask =loadImage("kistuneMASK.png");
  size(640, 480);
  frameRate(30);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");
  
  String[] cameras = Capture.list();
  
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    cam = new Capture(this, 640, 480, cameras[0]);
    cam.start();     
  }     
}

void draw() {  
  background(255);
  lightupdate= true;
  //stroke(0);
  noStroke();
  if (cam.available() == true) {cam.read();}
  set(0, 0, cam);
    float startvx;
    float startvx2;
    float startvy;
    float startvy2;
    float endvx;
    float endvx2;
    float endvy;
    float endvy2;
  if(found > 0) {
    startvx = 0.1*(rawArray[62]-rawArray[0])+rawArray[0];
    startvx2 = 0.1*(rawArray[90]-rawArray[32])+rawArray[32];
    startvy = (rawArray[73]+rawArray[1])/2;
    startvy2 = (rawArray[91]+rawArray[33])/2;
    endvx = startvx+0.8*(rawArray[62]-startvx);
    endvx2 = startvx2+0.8*(rawArray[70]-startvx2);
    endvy = (rawArray[63]+rawArray[97])/2;
    endvy2 = (rawArray[71]+rawArray[109])/2;
    pushStyle();
    imageMode(CORNERS);
    blendMode(SUBTRACT);
    image(lmark,startvx, startvy,endvx, endvy);
    image(rmark,startvx2, startvy2,endvx2, endvy2);
    popStyle();
    println("it's drawing the mark");
    float lipheight =rawArray[123] - rawArray[103]; 
    float mouthOpen = rawArray[129]-rawArray[123];
    float originy = (rawArray[129]+rawArray[123])/2;
    float originx = rawArray[128];
    int sizing = 2*int(mouthOpen);
    boolean creating = false;
    if(mouthOpen > 0.2*lipheight && !creating){
      println("start creating");
      BouncingBox anotherLight;
      creating = true;
      anotherLight = new BouncingBox(originx, originy, sizing);
      particles.add(anotherLight);
      if((rawArray[108]-rawArray[96]) < 1.25*(rawArray[70]-rawArray[62])){ // mouth-to-nose proportion, i.e. pursed lips / blowing
        for (int i = 0; i < particles.size(); i++) {
          int newvel = int(particles.get(i).xx - rawArray[100]);
          if (newvel < 0) {
            int vel = int(map(newvel, -width, 0, 1, 10));
            particles.get(i).xVel = -1*vel;
          } else {
            int vel = int(map(newvel, 0, width, 10, 1));
            particles.get(i).xVel = vel;
          }
          particles.get(i).move();
          lightupdate = false;
        }
      } else if (mouthOpen > 0.5*lipheight && creating){
        particles.get(particles.size()-1).size = sizing;
      }
    }
    if(creating && mouthOpen <0.2*lipheight){
      creating = false;
    }
    for (int i = 0; i < particles.size(); i++) {
      BouncingBox light = particles.get(i);
      light.draw();
      if (lightupdate) { light.update(); }
    }
    // compare the two sides of the face to estimate how far the head is turned
    float lside = rawArray[72]-rawArray[0];
    float rside = rawArray[32]-rawArray[90];
    float turnproportion = lside/rside;
    float masksize = 2.5*(rawArray[17]-rawArray[1]);
    if (turnproportion > 3.7) { // turned far enough: the mask appears in your periphery
      int y = int(rawArray[1]-masksize/1.8);
      image(mask, rawArray[0], y, 0.75*masksize, masksize);
    }
  }
  else{
    // clear all lights at once; removing inside a forward loop skips every other element
    particles.clear();
  }
}

class BouncingBox {
    int xx;
    int yy;
    int xVel = int(random(-5, 5)); 
    int yVel = int(random(-5, 5)); 
    float size; 
    float initialsize;
    int darknessThreshold = 60;
    float noisex = random(0,100);
    BouncingBox(float originx, float originy, int sizing){
      xx = int(originx);
      yy = int(originy);
      initialsize = sizing;
      size = initialsize;
    }
    void move() {
        // Do not change this. 
        xx += xVel; 
        yy += yVel; 
    }

    void draw() {
        pushStyle();
        blendMode(ADD);
        // layers of increasingly large, increasingly transparent ellipses make a soft glow
        for (int i = 0; i < 50; i++) {
            float opacity = map(i, 0, 50, 20, -10);
            fill(255, 250, 240, opacity);
            float realsize = map(i, 0, 50, 0, 1.5*size);
            ellipse(xx, yy, realsize, realsize);
        }
        popStyle();
    }

    void update() {
        noisex += random(0, 0.1);
        move();
        // sample the camera pixel under the light and bounce off dark regions
        int theColorAt = cam.get(xx, yy);
        float theBrightnessOfTheColor = brightness(theColorAt);
        if (xx + size / 2 >= width ||
            xx - size / 2 <= 0 || theBrightnessOfTheColor < darknessThreshold) {
            xVel = -xVel;
        }
        if (yy + size / 2 >= height ||
            yy - size / 2 <= 0 || theBrightnessOfTheColor < darknessThreshold) {
            yVel = -yVel;
        }
        size = initialsize*0.3 + initialsize*noise(noisex);
    }
}
/////////////////////////////////// OSC CALLBACK FUNCTIONS//////////////////////////////////

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void rawData(float[] raw) {
  println("raw data saved to rawArray");
  rawArray = raw;
  
}

Antar-Plot

Hairy Beans


Some initial doodles and process by hand

This week was all about learning about bezier curves, creating classes properly in Processing, understanding push/pop matrices, and figuring out how on earth you make hairy beans drawn by a plotter look like they were drawn by hand. While I needed a fairly heavy amount of help, I’m very pleased with the outcome of these beans. The greatest challenge was by far trying to achieve an organic bean shape. A realization I had was that the easiest thing for a human to make was the hardest for the computer, and vice versa. It’s very hard for a human to freehand a perfect circle, but it takes no time at all to make an asymmetric blob with random short lines in all directions. However, in code it takes one line to draw a perfect circle, and much more math and logic to create the organic blobs and the appropriate amount of hairs.
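A minimal sketch of that blob-plus-hairs logic (not the actual project code): sample Perlin noise around a circle for the outline radii, so the shape stays organic but closes on itself, then push short hair lines outward along the same angles.

void drawHairyBean(float cx, float cy, float baseR) {
  float seed = random(1000);
  int n = 60;
  float[] xs = new float[n];
  float[] ys = new float[n];
  float[] rs = new float[n];
  // noise sampled on a circle keeps neighboring radii related
  // and makes the outline meet itself seamlessly
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n;
    rs[i] = baseR * (0.7 + 0.6 * noise(seed + cos(a) + 1, seed + sin(a) + 1));
    xs[i] = cx + rs[i] * cos(a);
    ys[i] = cy + rs[i] * sin(a);
  }
  noFill();
  beginShape();
  for (int i = 0; i < n; i++) curveVertex(xs[i], ys[i]);
  endShape(CLOSE);
  // short hairs pushed outward along the same angles
  for (int i = 0; i < n; i++) {
    if (random(1) < 0.3) {
      float a = TWO_PI * i / n;
      float hr = rs[i] + random(3, 12);
      line(xs[i], ys[i], cx + hr * cos(a), cy + hr * sin(a));
    }
  }
}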

The biggest goal for this week was to create a generative art piece where it would be nearly impossible to tell that a machine had drawn it. Throughout this semester I want to make my digital art feel as close as possible to my personal illustration style, and these hairy beans were a great step in that direction. The organic blobs and small hairs came out just how I wanted them to, and I’m quite pleased with the final plot. Using the acrylic paint markers on the translucent vellum paper gave the piece a great dimension. I also enjoyed playing with line weight, colour, and materials, and how these could really transform the plotted piece from the generated PDF.

 


PDF of what the plotter traces

 


Placing the vellum on top of different surfaces


Plotting in the process

Code for work

 


Antar—LookingOutwards03

Memo Akten

After briefly discussing Memo Akten’s work in class, I was immediately hooked on Forms, the work he created in collaboration with Quayola. I was astonished by the elegant representation of the movement of athletes performing their art. I believe I’m also particularly fond of this work having been a semi-pro athlete myself. There had always been a conflict for me, seeing as most artists aren’t usually athletic, and most athletes I’ve met aren’t often artistic. Additionally, rugby in particular is not seen as an artistic sport the way that gymnastics, figure skating, or equestrian could be. However, I’ve always thought there was something beautiful about athletes whose sport requires intense power. As a viewer, one often only sees aggression and brute force when watching a rugby game. That force and power comes from hours of delicate and well-refined practice, making sure every muscle is pulled and pushed exactly correctly, making sure every movement is agile and precise. I think that Akten’s work in Forms illustrates this perfectly.

“…it explores techniques of extrapolation to sculpt abstract forms, visualizing unseen relationships – power, balance, grace and conflict..”

After watching Forms many times, I grew more curious about the process, and I was delighted to find that Akten had published a video on his process work. Each athlete clip divides the screen into five sections: the top left is the original video of the athlete, the top right shows the completed generative piece, and the other three are the individual measurement components that make up the final piece. From what I can determine, the bottom middle section shows points on the body and follows the athlete’s movements closely; the bottom right is similar, but it leaves long trails of where each point has been; and the bottom left shows the forces coming off the body, as if the athlete were covered in paint or water and droplets were flung by the movement. To create the final piece, Akten takes these three interpretations of movement and visualizes each differently, in a way that is aesthetically appropriate for the sport.

Forms

Akten is clearly passionate about representing the body, kinetic motion, and human skill and art in unique visual experiences. While this is clear in his interpretation of athletes in Forms, he has also done many works with choreographers and dancers. One of his earlier works with dancers was Reincarnation, created back in 2009 using computer vision and openFrameworks. In this mesmerizing work, a dancer’s movement is reimagined as flame and smoke. Only when the dancer slows down or pauses can you see the human form; otherwise, the dancer is completely transformed into fire.

Reincarnation

Zarard-Plot

You can find the code here:

https://github.com/ZariaHoward/EMS2/blob/master/FinalBLMPlot/FinalBLMPlot.pde

You can find the data here:

https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database

My original inspiration for this piece was the shooting of Alfred Olango last Wednesday, which you can read about here: https://www.theguardian.com/us-news/2016/sep/30/san-diego-police-shooting-video-released-alfred-olango . Honestly, I really just wanted to do something to show respect to the lives that are being destroyed publicly approximately every few days. This piece is a representation of the 198 lives that have been taken this year as of 9/28/2016.

One aspect of the publicity of the Black Lives Matter movement is that when stories are reported around the shootings, they usually come either from a statistics, facts-and-figures perspective or from a perspective that essentially serves to dehumanize the victim and make the victim’s death seem justified. No sufficient homage is paid to the lives these people once lived, and no one recognizes who they could’ve been beyond their current Twitter hashtag.

So in this project I created a narrative of a life that could’ve been given to these black men and women, then repeated it line by line for every victim, and cut it off short where the police cut off their lives. The narrative is as follows:

Met Best Friend. Performed On Stage. Had First Kiss. First Day Of High School. Joined Basketball Camp. First Romantic Relationship. First Trophy. First Paycheck. Prom. Finished Freshman Year of College. Pledged To a Fraternity. Voted For The First Time. Celebrates 20th Birthday. First Internship. First Legal Drink. Graduation. Paid First Rent Check. Cooked First Real Meal. First Car. Got Off Of Parents Health Insurance Plan. Spent First Christmas Away From Home. Married the Love of Their Life. Bought First Home. Beginning of Hair Loss. Stopped Wearing Converse Sneakers. First Family Vacation. Watched The Birth Of First Child. Starts Volunteering In The Community. Had Second Child. Invests In The Stock Market. Buys Life Insurance. Awarded Huge Promotion. Got An Office With A View. Moved Into New Home. Started Their Own Company. Went to High School Reunion. Parents Passed Away. Tries To Get Fit Again. Joined A Church. Celebrated Golden Anniversary. Both Kids Move Out Of The House. Hosted Thanksgiving At Home. Creates A Will. Bought Another Pet. Babysat Grandchildren. Retired. First Grandchild. Openly Uses AARP Discounts.
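A minimal sketch of that cutoff logic, with hypothetical names, assuming each victim’s age comes from the Guardian data and that a full life is scaled to span the whole string:

String narrative = "Met Best Friend. Performed On Stage. Had First Kiss. ..."; // the full narrative above
int fullLife = 80; // assumed lifespan that maps to the complete narrative

// one line per victim, truncated in proportion to the age at which they died
String lifeLine(int ageAtDeath) {
  int cutoff = int(map(ageAtDeath, 0, fullLife, 0, narrative.length()));
  cutoff = constrain(cutoff, 0, narrative.length());
  return narrative.substring(0, cutoff);
}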

 

Below are the results of Processing sketches that are visual representations of this project. In these first two sketches I played around a lot with how much of the narrative should be visible by overlaying the stories. I also played around with removing the narrative entirely and just letting the numbers be the narrative, but that goes back to reducing these lives to numbers and their deaths. One reason I considered keeping these aesthetics is that they both have a ghostly effect that really screams death to me, and I wasn’t sure if I should place emphasis on the idea of death.

The next two sketches are my experimentations with color. I wasn’t sure whether I wanted to do some type of symbolism through color. I considered red because it symbolizes blood and also the silver-blue of gunmetal or alternatively the police. Ultimately it didn’t add anything in my opinion however when I print the piece I still want it to be printed in silver ink.


This is an up-close snapshot of the composition. The clutter and slight illegibility is intentional. The cutoff of the narrative is intentional. The spacing and gapping is also intentional. The size at which you’re viewing it on your screen (if you are viewing it at 100%) is essentially how small I wanted the text to be, to force the viewer closer to the piece.

This is what the composition looks like as a whole. It’s approximately 24 inches by 72 inches. Due to the scale, I had to use a unique plotting method (which I honestly haven’t gotten the hang of yet), and I haven’t been able to get the large-scale plotting machine to work. I also can’t reduce the size, because then you wouldn’t be able to see the narrative. And unfortunately, because 198 black people were killed by the cops this year, I have to print the entire 198 lines. Overall, my biggest criticism of myself is that I couldn’t get the large plot machine working. This would have been so much more powerful with the handwritten visuals that would have been conveyed by silver pen on paper.



Darca-ResponseToClockFeedback

Thanks for all the kind words about the cat being cute;)

I realize my design is still essentially a traditional clock, which really is not ideal. When I look back to the time when I decided to use this idea, I was a little caught up by the expectation of the “cuteness” of it, and I just didn’t go further. My lesson here is to try not to get caught up in the ideas that I have, but to challenge myself even more, get out of my comfort zone, and always remind myself to take risks.

Then comes the mechanism of the clock. There are really so many aspects that I need to reflect on. Basically, it does not always show the time correctly, and the arms and legs are unattached. The former is mainly because whenever the arms and legs coincide, the cat has to mirror itself; and when I let myself import a complex shape to avoid literally drawing the curves out in P5, I planted a seed that limits how much I can modify and make use of the point data to be more weird and perky. Or maybe I could… Hmm…

Another problem with my design is that I did not find a good solution for integrating the concept of the cat moving its arms and legs with the movement of the long hand and the short hand. While it is unrealistic for the cat to stretch 180 degrees when sleeping, I ultimately could not make that its own feature by turning it into something fun instead of awkward.

As someone who is horrible at time management, the first thing on my list is to try to plan the work as soon as I can, to have fewer regrets in the final result. At the same time, I realize I DO need a lot more coding practice than I expected.

Darca-LookingOutwards04

The project that I want to write about this week is part of the Future Forward event in New York City: Drift, a thermal-responsive chandelier that interacts with the lighting system in the gallery space. The reason this project stuck with me is that, while we are trying to achieve interaction with complicated electronic and digital tools, Doris Sung chose a totally different approach from her experience as an architect, experimenting with materials and making Drift itself free from electronics and digital controls, calling the installation “something natural and seemingly unlikely.” How it works: when the light beams change their path across the structure, they change the heat distribution in that region, and the heat-sensitive metal pieces of the chandelier change their curvature and tension accordingly, changing the overall appearance of the whole structure. The way the metal pieces move one by one and slowly try to return to balance is really mesmerizing, and I imagine it would be calming to watch even for hours. In fact, it demonstrates the potential for smart buildings to move with the trajectory of the sun in the future.



Aside from the movement of the pieces of the structure, the material itself creates a very soothing dynamic in the space: the shimmering reflections of the metal pieces changing when people walk by, or the tilted and slightly swinging metal line where the light comes through when everyone stands still. In the very short video about the making of this installation, Doris Sung also discusses the idea of balance and pivoting: she said that by using balance and pivoting, there is a position the structure naturally wants to be in. That reminds me of the waterwheel, a traditional water-transportation system that turns according to the accumulation of water in each slot, an elegant integration of nature and human activity.


anson-Plotter

This is my plotter drawing, “Cocktail Olive Generator.”


I’m pleased with this drawing, though it is by no means complex. It’s very simple code, created with a double for loop and some random parameters to generate a little variety. Honestly, the reason for the simplicity is twofold: one, my time this week was extremely short due to my schedule, so I created something simple that I knew I could execute in a reasonable amount of time (both to write the code and to draw with the plotter). And two, I’m still at the early stages of creating generative drawings, so I’m sticking within my knowledge base on this particular assignment. When I wrote the code, I originally intended to create something that looked like bubbles. However, what appeared had the immediate and distinct appearance of cocktail olives. Such simple shapes took on a little bit of humor and playfulness, which was pleasantly surprising. It was a lot of fun to draw these with the plotter, and I would like to try to create a more complex drawing with this machine again. In addition to being fun to watch, it was very satisfying to have something that had been entirely digital take on physical form, using nice paper, a good pen, and a tangible output. I think the pedagogic value of this assignment is multi-layered, but this connection between the digital and physical is paramount. Witnessing a machine make marks with a pen has an uncanny effect of seeming both animate and mechanistic at the same time. It also displayed for us the actual process the computer uses to draw the lines we generate in Processing, breaking them down into separate actions instead of having them immediately appear on screen as they do in the browser (play window). This helped us dissect our code, seeing how we might logically change it to make the plotter process faster or more streamlined. One odd thing was that the plotter actually drew each circle twice before lifting up and drawing the next circle. There must be a reason in my code, but I wasn’t able to determine it.

I think it would be fun to create a drawing that utilize the plotter and hand drawing, using the machine as a collaborator to create an artwork.

Below is the code:

(screenshot of the code)
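Since the code appears here only as a screenshot, below is a hypothetical reconstruction of the kind of double for loop described above; every number is invented.

size(800, 600);
noFill();
for (int row = 0; row < 6; row++) {
  for (int col = 0; col < 8; col++) {
    // jittered grid positions and sizes give a little variety
    float x = 80 + col * 90 + random(-15, 15);
    float y = 80 + row * 90 + random(-15, 15);
    float d = random(35, 55);
    ellipse(x, y, d, d);            // the olive
    ellipse(x + d/4, y, d/3, d/3);  // the pimento, offset inside
  }
}

As for the double-drawn circles: one common cause when exporting for a plotter is a shape that carries both a fill path and a stroke path, which gets traced twice, but without the original code that is only a guess.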

Antar-LookingOutwards04

 

Béatrice Lartigue

Béatrice Lartigue is a designer and artist who works in the area of interactivity and the relationship between space and time. Her interest in this area stemmed from her childhood love of comic books, where she first began to see how each panel was a visual representation of space and time.

She is also a member of the Lab 212 collective, a group of friends who graduated from the School of Visual Art les Gobelins in Paris, where they studied Interactive Design. The interdisciplinary art collective works on pushing the boundaries of what can be defined as a visualization in our daily lives.

I am attracted to Lartigue’s elegant interactions and sophisticated visualizations, especially in her work related to light and sound. Additionally, I believe her style of dark surroundings filled with crisp blue light is very similar to the aesthetic that I have been working towards for some time.

Lartigue is also passionate about the realtime visualization of sound and music. In her work Portée/ she worked with her colleagues from Lab 212 to create a minimalist music interaction. When the audience plucks a string, it plays the corresponding note on the connected piano. This work reminds me of previous projects I’ve discovered this semester, particularly along the theme of the necessity of collaboration. Much like 21 Balançoires, while one could simply swing, or in the instance of Portée/, pluck, alone and create a beautiful note, the true magic occurs when many come together to participate. In order to create the art you need others around you. Whether they are strangers or friends is irrelevant, because in that moment you are all simply a note, coming together to create a melody.

I also adore her work as a VR art director on Notes on Blindness: Into Darkness. In this interpretation of the audio-diary cassettes of John Hull, the user can only see what the user can hear. Nothing is visible until sound touches it, which is exactly how Hull describes the world around him. His description of rain is breathtaking: he explains that only when it rains can he truly see an environment, rather than pieces here and there. He wishes that it could rain indoors, so that he could see his home the way he can see trees, pavement, and gutters. In Lartigue’s work the user truly feels deep empathy for Hull and his world that is entirely dependent on sound. After watching the original film Notes on Blindness, I felt that the film fell short, and that the VR expression is much more elegant in its method of visualization and storytelling.

While reading All the Light We Cannot See by Anthony Doerr (read: my favourite book), I was completely invested in his description of how a little blind girl “saw” the world around her in the 1940s. I feel that Doerr and Lartigue both do an exceptional job of describing the world of someone who once had vision but has now lost it. In Doerr’s work the girl’s story is told in parallel with that of a little boy (with normal vision). The two children’s paths cross very briefly, but they have a significant impact on each other. I think that the VR experience that Lartigue created would be a fascinating way to tell Doerr’s story; in particular I would be interested to see how the little boy’s world would differ from the girl’s, and what would happen when they meet.

 

Guodu-LookingOutwards04

Adrien M /  Claire B

I stumbled upon some of the recent works of Adrien M / Claire B, a French company headed by artists Adrien Mondot and Claire Bardainne. They create a range of digital art for performances and exhibitions, combining the virtual and physical worlds. Their motto is “placing the human body at the heart of technological and artistic challenges and adapting today’s technological tools to create a timeless poetry through a visual language based on playing and enjoyment, which breeds imagination.”

I particularly enjoyed this performance, Coincidence (2011), where a juggler dances, juggles both a metal and digital sphere, and interacts with a background of living type. Adrien and Claire have been developing eMotion, a tool they implement in their projects to create objects (particles, text, drawing strokes, quartz compositions) that move and interact live with a performer.

Typography is always around us, from the nutrition facts on Nutella spread to street-crossing signs to Facebook. I thought the projection of large type surrounding, and even attacking, the performer was so poetic; it is no longer that humans control and have influence over type (type designers, readers, writers), but that type equally influences us, in good and bad ways (clarity, legibility, information; helpful, demanding). What’s even more impressive is the ability of the type to seem alive and aware of the performers: both are having a conversation with each other. I think it’s so much more natural and right that projections for performances are generated in real time instead of pre-recorded; it brings us into a more convincing new world, just like a pit orchestra that responds to the actors and singers of a musical. Humans will always make mistakes, and algorithms are new lending hands.

More projects by Adrien M and Claire B:

Antar-Clock-Feedback

The clock project was an excellent opportunity for me both to explore Perlin noise and how to use it, and to begin translating my personal style into my digital work. There were some aesthetics that were not where I wanted them to be, but I was aware that they needed fixing. For instance, the jumping around of the branches every second: I realized too late in the game that my structure didn’t permit that flexibility, because of having to refresh the background.

I read the article that Tega Brain suggested and I thought it was fascinating. I’ve always been loosely interested in cryptography but never really took the time to read more about it (watching Morten Tyldum’s film adaptation of Andrew Hodges’ book, The Imitation Game, is about the most learning I’ve done on the topic). In design we are taught that everything should have a meaning. Almost nothing should be placed, coloured, or used arbitrarily, and if it is, there should be a strong defense for it. However, I’ve never enforced this mentality on my illustrations, since their sole purpose has always been personal amusement. Reading the article Tega suggested, “How to Make Anything Signify Anything,” made me rethink my illustrations. To almost anyone, my illustrations could continue to look simply like illustrations, but if I were to apply Bacon’s cipher system to them, it would force me to think more critically about placement and pattern. I enjoy mixing type and illustration together anyway, so this could really add a new depth to my work. Thank you Tega Brain for opening my eyes to different methods of applying meaning to my work!
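For reference, Bacon’s cipher (in its modern 26-letter variant) encodes each letter as five binary choices, which is exactly why it hides so well in drawings: any visual feature with two variants, like hair length, leaf direction, or line weight, can carry the bits. A small sketch of the encoding step:

// Bacon's cipher: each letter becomes five A/B symbols,
// which can then be drawn as any two visual variants
String baconEncode(String msg) {
  String out = "";
  for (char c : msg.toUpperCase().toCharArray()) {
    if (c < 'A' || c > 'Z') continue; // skip spaces and punctuation
    int n = c - 'A';
    for (int bit = 4; bit >= 0; bit--) {
      out += ((n >> bit) & 1) == 0 ? "A" : "B";
    }
  }
  return out;
}

For example, baconEncode("HI") returns "AABBBABAAA": ten marks, each of which could be drawn as one of two kinds of stroke in an illustration.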

It was very encouraging to read Lauren McCarthy’s feedback! I completely agree that I could work on more technically simple but aesthetically sophisticated work. I think taking a more “slow but steady” approach to the technical work will be more beneficial for me in the long run. I think it will also help me think more creatively; my work can push my creative skills while I learn interesting coding techniques that I can handle.

All of my peers said generally the same things:

  • The current aesthetic is good but it would be great if I could realize my illustration aesthetic
  • The animation is too jarring and the piece would be significantly stronger if there were smoother transitions.

I completely agree with my peers, and I think I would like to eventually get my skills up to the point where I can achieve these goals. In fact, I think my skills have already improved quite a bit since I coded the clock, and I could probably get the animation to be smoother, but I would have to continue working to get the aesthetic style there. I think it would be interesting for my capstone project to include Tega’s reference to Bacon’s cipher system, my illustration aesthetic, and sophisticated animation.

hizlik-LookingOutwards04

When I had seen this a few years ago (my friend had sent me a link) I couldn’t stop laughing. Why? Because in all the times I played Minecraft, I’d forgotten just how ridiculous the idea of hitting (or punching) a block to gather resources was. To do anything in the survival mode, the first thing you need is wood, which is achieved by punching trees. Need dirt to build temporary walls? Punch dirt (or hit it with anything). Seeing this in “real life” just made those silly actions become a reality, and for me this was an emphasis of that feeling. The artist, Ben Purdy, made three of these videos (I originally only saw one), but it’s a shame he hasn’t done anything more with it. This has great potential to be a public or education-based interactive artwork or exhibition. I can see this being used with very young age groups, such as elementary kids (perhaps so young they haven’t even played Minecraft), as a sort of digital/interactive-art building block or learning method.

Unfortunately I don’t find this particularly inspiring for my own work; however, it is an enjoyable piece of interactive work that has obviously required some special thinking and meticulous work (such as managing to perfectly project onto the sides of the box using one(?) projector… I believe in the third video he uses multiple projectors for even-sided projection).

Jaqaur – LookingOutwards04

https://www.tiltbrush.com/

Okay, I know this isn’t technically an art project, but it’s still a form of interactivity that I find interesting, so it counts, right? Tilt Brush combines my unboundedly increasing obsession with Google and my long-term love for virtual reality. Basically, it’s an environment in which you can “paint” in three-dimensional space using their special Tilt Brush tool. You can choose color, stroke width, and all that good stuff, and then… just draw. In air. Technically, you need a VR headset to be able to see what you’re drawing, but that’s a small price to pay for the ability to instantly create 3D objects around you (either for fun or to plan out a future project). There have been 3D modeling tools out there for a while now, but Tilt Brush is different. It’s not as good if you want to run physics simulations on your creations, but it’s so much better for abstract brainstorming of ideas. You can create all sorts of shapes fairly quickly, and then actually walk around them and see how they would look in the real world. This is an idea I’ve dreamed about since I was a kid, and something I think could be really useful to all kinds of artists in the future.

This video illustrates how Tilt Brush works, and while it looks a little simplistic now, the possibilities are endless. I bet that, in the not-so-distant future, it could be possible for users to smooth out the surfaces of the shapes they draw, because right now, that’s the main thing that bothers me about the drawings shown in the video below: they’re rough, and look a bit like they were put together with colorful strips of paper mache. Anyway, this whole project really excites me, and I’m looking forward to seeing where it goes from here.

cambu-LookingOutwards04

My Looking Outwards for this week scratches at the confluence of two ideas I’ve been thinking about for the past few days:

  1. Interaction as Challenge
  2. Blurring the Physical and Digital (sparked by James & Josh’s talk on Wednesday)

With regard to the first idea, it’s very common within the School of Design to talk about what makes something hard/bad, not human-friendly, etc. This isn’t surprising, because almost always the goal of ‘design’ is to get out of the way and reduce the friction between the human and the built world/designed artifact/etc. But during How People Work (51-271) on September 28th, the idea of making things ‘difficult on purpose’ came up within the context of learning and video game design. To the second point, after listening to James Tichenor and Joshua Walton speak on the need to create ‘richer blurs’ between digital and physical spaces, I’ve been on the lookout for good examples of this in the status quo.

When I first saw Mylène Dreyer’s interactive drawing game on Creative Applications, I felt like it was really hard to understand and would probably confuse users. But I also tried to think about how that could benefit her within the context of ‘Interaction as Challenge.’ It also reminded me of some discussions [1, 2] within the UX community a while back about how Snapchat’s bad user experience is actually to its benefit. Also: double points for cute music and simple graphics — it really makes the game pop!

kadoin-plot

Plot #1 / Plot #2

Better scanned images of the plotter drawings coming soon.

I started this project by just Google Image searching “generative plotter art” and the stuff that showed up was honestly some of the coolest looking drawings. One of my favorite series of prints from that short search was Schwarm by Andreas Nicolas Fischer.

Now those drawings are massive and have hundreds of thousands of little lines in them, so I knew I wouldn’t be able to make something with that complexity, but I thought it was awesome and I wanted to make something cool. So like we did with Vera Molnar’s Interruptions, I decided to make my own version of something similar to this.

The cloudy waviness reminded me of some awesome Perlin noise images, which Dan Shiffman has a great p5 tutorial video on how to make. Because I was following and playing with variations of his tutorial, I started my project in p5 instead of Processing.

Perlin noise particle drawing! Wow!

Instead of making my own physics and dropping particles, I just took the angle assigned to each location by the Perlin noise and changed the direction of a curve based on that angle and on where the control point of the curve was. I also added some color based on the general location of where the lines were drawn, to get a sense of how I would plot it.
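The kernel of that approach, sketched here in Processing rather than her p5.js and with guessed parameters: sample the noise at the current point, turn it into an angle, step along that angle, and draw as you go.

void drawFlowLine(float x, float y) {
  float noiseScale = 0.005;
  for (int step = 0; step < 100; step++) {
    // the noise field assigns an angle to every location on the canvas
    float angle = noise(x * noiseScale, y * noiseScale) * TWO_PI * 2;
    float nx = x + cos(angle) * 4;
    float ny = y + sin(angle) * 4;
    line(x, y, nx, ny);
    x = nx;
    y = ny;
    // stop at the canvas edge so no line travels off-paper on the plotter
    if (x < 0 || x > width || y < 0 || y > height) break;
  }
}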

Here’s the p5.js code, click to change color combinations for the flow field:

sketch

Screenshot samples 1 and 2, plus the B&W version I sent to the plotter, since the plotter can’t see color. I just changed the colored pens when I felt like it, so the colors didn’t clump as nicely as in the generated images.

I was foolish and didn’t sort the lines before they were drawn, so travel time (where the pen was up and not drawing) cost me a lot of time. After a while I decided the plots were filled enough and cut it short, because other people needed the plotter too. There were also lines drawn beyond the edge of the canvas that I didn’t know about, but the plotter did, so it drew way over the dimensions I gave it.


I’d upload a video of it plotting too, but I didn’t have anything better than my phone camera and the video quality is just too crappy.

I think the end results came out pretty nice regardless of all the issues I had while printing. The gel pens give the layers of lines a nice raised texture when you run your fingers across it. I’d like to tweak the code so it doesn’t run off screen and will print in a reasonable amount of time so it can actually finish.


 

There were some fun accidents while generating the image that would be cool to plot if I had more time. Here’s an example:

capture

 

 


Aliot-LookingOutwards04

The Manual Input Workstation (2004-2006: Golan Levin and Zachary Lieberman)

Yes, I realize that writing about Golan’s work is maybe not the most productive thing in this class but this piece was super cool!

Basically, this is a system which uses two kinds of projection (digital and analog) and computer vision to recognize hands. The shadows produced by hands in the projection are identified by the computer vision software, and shapes are created using both the negative space and the actual form of the hand. The user can interact and play with forms made from light. The light takes on a material property: you can see it bounce, and you can control its movements.

I loved this project because it’s so tangible, in a borderline-sculptural way. Many digital interactions are abstracted beyond the point of intuition, or too simple to be entertaining for very long. This seems like it would be endlessly amusing because there are so many shapes that hands are capable of making, and the animations are so physical. The only thing that seemed a little off was how bouncy the shapes were; it could have been intentional or just a limitation of the technology.

Link to Project Page

 

arialy – Plot


I really enjoy the aesthetics of the handful of plotter artworks I’ve seen. I wanted to combine the utility of using a plotter with still making a representational image. I liked the idea of making a tunnel out of a series of incrementing curves. It was an interesting challenge to try to make this very mechanical process appear more organic. The plotter actually helped with this, since the pen didn’t apply the ink consistently. The machine also started to have tiny wavers in the line. I plotted it both at 5x6in and 2.5x3in, and the very small scale actually made it a much more intimate image. I hand colored the person, and I think if I were to plot it again I would hand draw the lines of the person as well. The contrast between the plotted and hand-drawn parts of the piece is something I’d like to explore.
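A guess at the underlying loop (not the actual code): a series of ellipses that shrink toward a vanishing point, with a little random jitter so the increments don’t feel machined.

size(500, 600);
noFill();
float cx = width/2;
float cy = height/2;
for (int i = 0; i < 40; i++) {
  // each curve is a slightly smaller, slightly offset version of the last
  float t = i / 40.0;
  float w = lerp(width * 0.9, 20, t) + random(-4, 4);
  float h = lerp(height * 0.9, 20, t) + random(-4, 4);
  ellipse(cx + random(-3, 3), cy + random(-3, 3), w, h);
}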


takos-plot


Final:


Sketch:


Process:

My original idea was to make a human face generator (à la my sketch) with randomized face shapes, eyes, noses, mouths, and eyebrows. I started off by making the face shape, which worked in theory, but I had a lot of trouble with my bezier curves and ended up switching to curveVertex(), which I thought looked too symmetrical and inorganic, and also didn’t work all the time. From there I decided to exaggerate the glitches and increase the randomness of the vertices. Then I added the facial features, based somewhat on my drawing style. I usually draw eyes far apart, so I added that into the code by setting the variable averageEyeDist to a high number, and I made it so that not all of the faces have pupils. I also coded two different nose types: just the nostrils, or an upside-down triangle. The mouth is generally a diagonal line, which is one of the ways I draw mouths when I’m drawing non-realistic things. Even though it didn’t end up the way I had originally planned, I thought the result was entertaining. It was also interesting to watch the plotter draw the faces, because it drew in the order I usually draw, which was subconsciously the way that I coded it (or almost: I draw the mouth before the nose, not the other way around).
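A stripped-down sketch of the kind of generator described above, with made-up numbers (the actual code is linked below):

float averageEyeDist = 70; // deliberately high: the eyes get drawn far apart

void drawFace(float cx, float cy) {
  // jittery outline: curveVertex through randomized radii
  noFill();
  beginShape();
  for (float a = 0; a < TWO_PI; a += PI/8) {
    float r = 80 + random(-20, 20);
    curveVertex(cx + r * cos(a), cy + r * sin(a));
  }
  endShape(CLOSE);
  // eyes far apart, with pupils only some of the time
  float d = averageEyeDist + random(-10, 10);
  ellipse(cx - d/2, cy - 15, 14, 14);
  ellipse(cx + d/2, cy - 15, 14, 14);
  if (random(1) < 0.7) {
    ellipse(cx - d/2, cy - 15, 4, 4);
    ellipse(cx + d/2, cy - 15, 4, 4);
  }
  // two nose types: bare nostrils or an upside-down triangle
  if (random(1) < 0.5) {
    point(cx - 5, cy + 10);
    point(cx + 5, cy + 10);
  } else {
    triangle(cx - 8, cy + 2, cx + 8, cy + 2, cx, cy + 16);
  }
  // the mouth as a diagonal line
  line(cx - 15, cy + 30, cx + 15, cy + 38);
}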

Code:

https://github.com/tatyanade/PLOT/blob/master/sads.pde