kadoin-proposal

For my final project, I'm thinking about doing something with either augmented projection or VR. In general, I just want to make something immersive and interactive that goes beyond interfacing with a computer screen. I don't have a more specific idea yet for what that will be.

New, more specific things!

I'm going to go with interactive projections. And how do people usually interact with a strong directional light? Shadow puppets! When someone makes a shadow puppet within the bounds of the projection and then removes their hands, the shadow will remain, become animated depending on the animal, and fly or hop or otherwise move off the screen, leaving the space ready for more shadow puppets to come to life.

I'm not entirely sure how to do this yet, but I'm watching a lot of Dan Shiffman videos, so hopefully I'll be able to realize it at least in part.
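
If it helps to make that concrete, here is one possible starting point, just a rough sketch and not a working plan: a webcam watching the projection surface, with the darkest pixels treated as the shadow. The threshold value, the camera setup, and everything else here are assumptions.

// Rough sketch only: treat the darkest pixels of a webcam frame as the "shadow" mask.
// Assumes the processing.video library and a camera pointed at the projection surface.
import processing.video.*;

Capture cam;
PImage shadowMask;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  shadowMask = createImage(width, height, ARGB);
}

void draw() {
  if (cam.available()) cam.read();
  background(255);

  cam.loadPixels();
  shadowMask.loadPixels();
  int n = min(cam.pixels.length, shadowMask.pixels.length);
  for (int i = 0; i < n; i++) {
    // Pixels darker than the (guessed) threshold count as shadow; everything else stays transparent.
    float b = brightness(cam.pixels[i]);
    shadowMask.pixels[i] = (b < 60) ? color(0) : color(0, 0);
  }
  shadowMask.updatePixels();

  image(shadowMask, 0, 0); // next step: freeze this mask when the hands leave, then animate it
}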

kadoin-manifestoReading

4. The Critical Engineer looks beyond the “awe of implementation” to determine methods of influence and their specific effects.

In today's world, so much time, energy, and money are spent on shiny new gadgets for people to buy before the newer, shinier one comes out. But what good are most of those gadgets doing for society? I'm not saying everything created has to be utilitarian, but being aware of what a new technology can do, and making a purposeful effort to curb its negative social side effects, is an important step in the right direction. Creating things that foster meaningful interactions should be the goal, more so than making new, annoying MacBooks without USB ports because you know people will buy them anyway.

kadoin-lookingOutwards08


Adam Ben-Dror seems to have a PIXAR theme going with his physical computing projects. The Abovemarine reminds me of the fish in the bags at the end of Finding Nemo, and you can't not think of Luxo the lamp when watching the Pinokio video. Despite being very PIXAR, they show a fair amount of creativity and originality, especially in the documentation.

They both use pretty simple motion tracking to make the objects move around, but those movements give them a great deal of personality. I would very much like to go for a walk down the street with Jose once he is no longer bound by wires.

I was also told Adam was an exchange student here from New Zealand, so I feel like I've got to keep upping my game, because these concepts and executions are so simple but so fun.

kadoin-lookingoutwards08

I chose to look at Rachel Binx because she had worked for NASA, and that seemed pretty cool; that was a while ago, though, and she's moved on to different things since then. Those things are still pretty cool. The work that reeled me in was her visualization of viral Facebook posts. The visualizations aren't very readable without an explanation, but watching them is still mesmerizing. The structures have a lot of energy as they explode every which way, and they take on a very organic form. I also think the data she chose to base the project on was funny: three of George Takei's Facebook posts. I see people sharing his posts all the time, so I totally believe the explosive virality shown in the time-lapse video, but at the same time, I've always wondered why George Takei is so active on social media. I get that he's big into social activism and all that, but he posts a lot of memes for an old man. I've read that he has other people posting for him sometimes, but I still find the whole thing odd. Unfortunately, all the links to the original posts are broken now.

All in all, these visualizations don’t really resolve any confusion about George Takei’s social media activity, but they’re beautifully done and fascinating to watch.

LLAP Mr. Sulu

 

kadoin-visualization

Data vis can be cool, and isolating data isn't so bad, but d3 is a punk and I'd need a bit more practice with it before I could make anything nice. I tried to see if any of the bikes went to all the stations, and sadly the answer is no. The most worldly of the bikes has only been to 36 stations, while some have stayed in one place the entire time.
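
For reference, the isolation step itself boils down to a per-bike set of station IDs. Here is a minimal sketch of that step in Processing, with hypothetical column names (bike_id, start_station, end_station) standing in for whatever the real trip CSV actually calls them:

// Sketch: count how many distinct stations each bike has visited.
// Column names and file name are assumptions; swap in whatever the real export uses.
import java.util.HashMap;
import java.util.HashSet;

void setup() {
  Table trips = loadTable("trips.csv", "header");
  HashMap<String, HashSet<String>> visited = new HashMap<String, HashSet<String>>();

  for (TableRow row : trips.rows()) {
    String bike = row.getString("bike_id");
    if (!visited.containsKey(bike)) visited.put(bike, new HashSet<String>());
    visited.get(bike).add(row.getString("start_station"));
    visited.get(bike).add(row.getString("end_station"));
  }

  // Find the most-traveled bike.
  int most = 0;
  for (String bike : visited.keySet()) {
    most = max(most, visited.get(bike).size());
  }
  println("most stations visited by one bike: " + most);
}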

This graph is pretty meh, but it gets the point across, I think. A continuation of this project might be a graph showing the distribution of the number of stations visited per bike.

Overall, data vis ain’t my favorite.

capture

link to the full graph here

kadoin-lookingOutwards06

Nora Reed makes a lot of pretty funny twitterbots that sit at the intersection of trolling and satire. Here are a few samples from different bots she’s made:

capture

capture1 capture2

 

But the bot that takes the cake is Carol, a bot that tweets ridiculous conservative-Christian-sounding phrases in an attempt to troll stuck-up internet atheists.

And people actually took this bot for a human being, got angry with it, and started twitter fights. All the bot could do was tweet back generic responses, which made people even angrier because their points weren’t getting through to her.

As funny as it is, with groups like the Westboro Baptist Church and other extreme organizations spewing stuff like this unironically, Carol is pretty convincing. And as outrageous as the things she says are, the people who fight with her also come across as pretty pathetic. Fights with strangers over the internet can be funny to watch, but no one wants to be the guy who's actually a part of one.


kadoin-Book

Poetry for Robots by Robots

cover

The pages on the left of this book were made by combining some Shakespeare with computer user manuals in a Markov chain generator: a series of rhyming couplets that don't always make much sense, but every once in a while say just the right thing to make that special robot swoon. The pages on the right, when flipped through quickly, show a handsome bot falling in love.

Robot sees the world through rose tinted receivers and transmits the waveforms of the words “I love you.”

PDF of book here:

botbook PDF

botBook Code
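
The Markov chain step itself is small enough to sketch. This isn't the actual botBook code (that's linked above), just a bare-bones, word-level version in Processing to show the idea: record which words follow each word in the mashed-up source text, then wander those links to babble out new lines. The corpus file name here is a stand-in.

// Minimal word-level Markov chain, for illustration only.
// "corpus.txt" stands in for the mashed-up Shakespeare + user-manual text.
import java.util.HashMap;
import java.util.ArrayList;

HashMap<String, ArrayList<String>> nextWords = new HashMap<String, ArrayList<String>>();

void setup() {
  String[] words = splitTokens(join(loadStrings("corpus.txt"), " "));

  // For every word, remember each word that followed it somewhere in the corpus.
  for (int i = 0; i < words.length - 1; i++) {
    if (!nextWords.containsKey(words[i])) {
      nextWords.put(words[i], new ArrayList<String>());
    }
    nextWords.get(words[i]).add(words[i + 1]);
  }

  // Walk the chain: start anywhere and keep picking a random recorded follower.
  String current = words[int(random(words.length))];
  String line = current;
  for (int i = 0; i < 12; i++) {
    ArrayList<String> options = nextWords.get(current);
    if (options == null) break;
    current = options.get(int(random(options.size())));
    line += " " + current;
  }
  println(line);
}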

kadoin-lookingoutwards05

I've watched all the Dan Shiffman videos on Perlin noise, so it was really cool to hear Ken Perlin speak at Weird Reality. I didn't really know much about his work outside of the noise function, so seeing more of the cool things he's been doing was awesome. Something I hadn't really considered until I saw the Google Daydream talk the day before was VR with multiple people who could interact with each other, and his demo of tracking phone-based VR headsets with ping pong balls was really cool. The Poop VR game was the only demo at the VR salon that incorporated interaction with other people, but it was limited in action and movement because you're confined to a toilet and just a Google Cardboard.

I think the examples Ken Perlin showed, where multiple people wandered around a room and were able to interact with each other and with other elements in virtual space, did a good job of exploring new kinds of interactions with more advanced headsets. I also think it's interesting that he said no one ran into each other while they had the headsets on; I didn't expect that, with all the perspective distortions used to make the virtual space seem bigger, it would still be accurate enough to mirror real space. One thing it made me think of, which no one really brought up during the conference, was getting the same sort of interactions with people who aren't in the same room. A VR space that allows people across the globe to enter and interact would be something pretty amazing, with a lot of potential.

kadoin-FaceOSC

robogif

Behind all that metal and code might there really be… a soul?

What’s happening?

  • Eyes light up as eyebrows are raised
  • Mouth shows the amplitude of the song over time
  • Music gets louder as the mouth is opened, almost like you/the robot is singing it

Melodramatic musings aside, I had a lot of fun with this project. It sort of just started with playing around with the sample code. I didn't have a strong concept of what I wanted to do, and it was my first time playing around with 3D, so I didn't really want to get into fancy shapes. But I could make a box move around in a space, and what do boxes look like? Robots. And what kind of space could this robot be in? Outer space. Nothing I was going to make would look super realistic, so the cartooniness of all the simple shapes worked well.

I really liked the new Star Trek movie that came out this summer; it was very reminiscent of the original series. I worked at a movie theater, so I got to keep the poster, but I digress. In the movie, a Beastie Boys song blows up a fleet of robot ships, which was very fun to watch.

So I figured Intergalactic by the Beastie Boys would be a nice touch while also giving the mouth something to do.

Add a bit of interaction, map a few things, and bada bing bada boom, I've got a fun space robot. It's not the most conceptual thing, maybe not even the most original, but it was really fun to make and I really like how it turned out.

Since I can upload video for FaceOSC to analyze, I was thinking about making a music-video type of thing by putting different Star Trek clips through the program, but that would involve a decent amount of editing to make it look nice, which I didn't have time for. It's still something I think I'll do eventually, though. Maybe I'll experiment more with light sources while I'm at it.

 

// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230

//also Golan's 'face controlled box' sample code towards the bottom of the assignment page was pretty helpful
//http://cmuems.com/2016/60212/deliverables/deliverables-05/


import processing.sound.*;
SoundFile file;
Amplitude amp;
import oscP5.*;
OscP5 oscP5;

Star[] stars = new Star[100];


int found; // global variable, indicates if a face is found

float poseScale;
PVector poseOrientation = new PVector(); // stores an (x,y,z)
PVector posePosition = new PVector();

float eyebrowLeft;
float eyebrowRight;
float mouthHeight;

int isPlaying = 0;
ArrayList<Float> amps = new ArrayList<Float>(); // rolling buffer of recent amplitude readings
float curAmp = 0;
//----------------------------------
void setup() {
  size(640, 480, P3D);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");

  file = new SoundFile(this, "intergalactic1.wav");
  amp = new Amplitude(this);



  for (int i =0; i <stars.length; i++) {
    stars[i] = new Star();
  }

  for (int j = 0; j<28; j++) {
    amps.add(0, 0.0f);
  }
}

//----------------------------------
void draw() {

  int now = millis(); // current time in ms, used to pulse the antenna beacon
  background (0);
  noStroke();
  lights();




  for (int i =0; i <stars.length; i++) {
    stars[i].update();
    stars[i].show();
  }



  if (found != 0) {
    if (isPlaying == 0) {
      file.loop();
      amp.input(file);
      isPlaying = 1;
    }
    curAmp = amp.analyze();
    amps.add(0, curAmp);
    amps.remove(amps.size()-1);

    float eyeBrightL = map(eyebrowLeft, 7.5, 9, 0, 1);
    float eyeBrightR = map(eyebrowRight, 7.5, 9, 0, 1);

    float beacon = map(now % 566, 50, 565, 0, 0.85);

    float mouthAmp = map(mouthHeight, 2, 11, .1, 1);
    file.amp(mouthAmp);


    pushMatrix(); 

    translate(posePosition.x, posePosition.y, posePosition.z);
    rotateY (0 - poseOrientation.y); 
    rotateX (0 - poseOrientation.x); 
    rotateZ ( poseOrientation.z); 


    scale(poseScale, poseScale, poseScale);


    //eyeLights
    lightFalloff(0.01, 0.0005, 0.00075);

    //antenna light
    lightFalloff(1, 0.001, 0.0001);
    pointLight(255-255*beacon, 0, 0, 0, -45, 0);

    fill(200, 200, 250);
    box(40, 40, 40);//head

    translate(0, -20, 0);


    fill(150, 150, 200);
    box(10, 5, 10); //top peg
    fill(150, 150, 200);
    box(3, 25, 3);//antenna
    translate(0, -15, 0);

    fill(255-255*beacon, 50-50*beacon, 50-50*beacon);
    sphere(4); //beep boop
    translate(0, 15, 0);

    fill(150, 150, 200);
    translate(0, 20, 0);
    translate(-20, 0, 0);
    box(10, 20, 20);//left peg 1
    translate(20, 0, 0);

    translate(20, 0, 0);
    box(10, 20, 20);//right peg 1
    translate(-20, 0, 0);

    fill(255, 255, 255);
    translate(-8, -8, 18);
    pointLight(255*eyeBrightL, 240*eyeBrightL, 0, -8, 0, 30);
    sphere(6);//left eye

    translate(8, 8, -18);

    translate(8, -8, 18);
    pointLight(255*eyeBrightR, 240*eyeBrightR, 0, 8, 0, 30);
    sphere(6);//right eye
    translate(-8, 8, -18);

    noLights();
    lights();

    translate(0, 8, 20);
    fill(150, 150, 200);
    box(30, 10, 5);//mouth
    fill(0);
    box(28, 8, 5.01);
    pushMatrix();
    for (int i =  -14; i<14; i++) {
      float h = amps.get(i+14)*10;//*mouthAmp;
      translate(i+0.5, 0, 2.52);
      fill(0, 0, 255);
      box(1, h, .1);
      translate(-i-0.5, 0, -2.52);
    }
    popMatrix();
    translate(0, -8, -20);

    popMatrix();
  }
}


//----------------------------------
// Event handlers for receiving FaceOSC data
public void found (int i) { 
  found = i;
}

public void poseScale(float s) {
  poseScale = s;
}

public void poseOrientation(float x, float y, float z) {
  poseOrientation.set(x, y, z);
}

public void posePosition(float x, float y) {
  posePosition.set(x, y, 0);
}

public void eyebrowLeftReceived(float f) {
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  eyebrowRight = f;
}

public void mouthHeightReceived(float h) {
  mouthHeight = h;
}


//Dan Shiffman had a cool video about how to make a warpspeed-like star field
//I thought it'd be a pretty sweet setting for my little bot
//https://www.youtube.com/watch?v=17WoOqgXsRM

class Star {
  float x;
  float y;
  float z;

  Star() {
    x = random(-width/2, width/2);
    y = random(-height/2, height/2);
    z = random(0, width);
  }

  void update() {
    z -= 15;
    if (z<1) {
      z=width;
      x = random(-width/2, width/2);
      y = random(-height/2, height/2);
    }
  }

  void show() {
    fill(255);
    float sx = map(x/z, 0, 1, 0, width);
    float sy = map(y/z, 0, 1, 0, height);
    pushMatrix();
    translate(width/2, height/2, -50);
    
    float r = map(z, 0, width, 16, 0);
    ellipse(sx, sy, r, r);
    popMatrix();
  }
}

kadoin-plot

plotter2
Plot #1
plotter1
Plot #2

Better scanned images of the plotter drawings coming soon.

I started this project by just Google Image searching “generative plotter art”, and the stuff that showed up was honestly some of the coolest-looking drawings I'd seen. One of my favorite series of prints from that short search was Schwarm by Andreas Nicolas Fischer.

Now, those drawings are massive and have hundreds of thousands of little lines in them, so I knew I wouldn't be able to match that complexity, but I thought they were awesome and I wanted to make something in that spirit. So, like we did with Vera Molnar's Interruptions, I decided to make my own version of something similar.

The cloudy waviness reminded me of some awesome Perlin noise images, and Dan Shiffman has a great p5.js tutorial video on how to make them. Because I was following and playing with variations of his tutorial, I started my project in p5 instead of Processing.

Perlin noise particle drawing! Wow!

Instead of making my own physics and dropping particles, I just took the angle assigned to each cell of the field according to the Perlin noise and changed the direction of a curve based on that angle and on where the curve's control point sat. I also added some color based on the general location of where the lines were drawn, in order to get a sense of how I would plot it.
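
The core of the flow-field idea is tiny. My actual version was in p5.js (linked below), but the same gist as a bare-bones Processing sketch looks something like this: sample noise() on a grid, turn each sample into an angle, and nudge a stroke along whatever angle it's currently sitting over. The grid size, noise scale, and step counts here are made up.

// Minimal Perlin-noise flow field: short strokes follow the angle stored in each grid cell.
int cell = 20;          // size of each grid cell in pixels
float[][] angles;

void setup() {
  size(600, 600);
  stroke(0, 40);
  background(255);

  angles = new float[width / cell][height / cell];
  for (int i = 0; i < angles.length; i++) {
    for (int j = 0; j < angles[0].length; j++) {
      // Map smooth noise to a full rotation; the 0.1 scale controls how "cloudy" the field is.
      angles[i][j] = noise(i * 0.1, j * 0.1) * TWO_PI;
    }
  }
}

void draw() {
  // Each frame, drop a stroke at a random spot and let it wander along the field for a while.
  float x = random(width);
  float y = random(height);
  for (int steps = 0; steps < 200; steps++) {
    int i = constrain(int(x / cell), 0, angles.length - 1);
    int j = constrain(int(y / cell), 0, angles[0].length - 1);
    float a = angles[i][j];
    float nx = x + cos(a) * 2;
    float ny = y + sin(a) * 2;
    line(x, y, nx, ny);
    x = nx;
    y = ny;
    if (x < 0 || x > width || y < 0 || y > height) break; // keep it on the canvas (and inside the plotter's margins)
  }
}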

Here’s the p5.js code, click to change color combinations for the flow field:

sketch

capture3
Screenshot Sample 1
capture4
Screenshot Sample 2
capture2
B&W version I sent to the plotter, since the plotter can't see color. I just changed the colored pens when I felt like it, so they didn't clump as nicely as in the colored generated images.

I was foolish and didn't sort the lines before they were drawn, so travel time, with the pen up and not drawing, cost me a lot. After a while I decided the drawings were filled in enough and cut the job short, because other people needed the plotter too. There were also lines drawn beyond the edge of the canvas that I didn't know about, but the plotter did, so it drew way past the dimensions I gave it.
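
Sorting the lines is the thing I'd fix first next time. Here's a rough sketch of the greedy ordering I have in mind, with a hypothetical Line class standing in for the real geometry: keep picking whichever unplotted line has an endpoint closest to wherever the pen just stopped.

// Greedy nearest-neighbor ordering to cut down pen-up travel time.
// The Line class is hypothetical; real plotter code would build these from the generated curves.
class Line {
  PVector a, b;
  Line(PVector a, PVector b) { this.a = a; this.b = b; }
  void flip() { PVector t = a; a = b; b = t; }
}

ArrayList<Line> sortForPlotter(ArrayList<Line> lines) {
  ArrayList<Line> ordered = new ArrayList<Line>();
  PVector pen = new PVector(0, 0); // pen starts at the origin

  while (!lines.isEmpty()) {
    int best = 0;
    boolean flip = false;
    float bestDist = Float.MAX_VALUE;

    // Find the line whose nearer endpoint is closest to the pen's current position.
    for (int i = 0; i < lines.size(); i++) {
      float dA = PVector.dist(pen, lines.get(i).a);
      float dB = PVector.dist(pen, lines.get(i).b);
      if (dA < bestDist) { bestDist = dA; best = i; flip = false; }
      if (dB < bestDist) { bestDist = dB; best = i; flip = true; }
    }

    Line next = lines.remove(best);
    if (flip) next.flip(); // enter the line from its closer end
    ordered.add(next);
    pen = next.b; // the pen ends up at the far end of the line it just drew
  }
  return ordered;
}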

img_20160929_222948117 img_20160929_232627113 img_20160930_244514554_hdr

I’d upload a video of it plotting too, but I didn’t have anything better than my phone camera and the video quality is just too crappy.

I think the end results came out pretty nice regardless of all the issues I had while plotting. The gel pens give the layers of lines a nice raised texture when you run your fingers across them. I'd like to tweak the code so it doesn't run off screen and plots in a reasonable amount of time, so a print can actually finish.

plotter3 plotter4 plotter5

 

There were some fun accidents while generating the image that would be cool to plot if I had more time. Here's an example:

capture


kadoin-lookingOutwards04

Nova Jiang's Ideogenetic Machine is an interactive piece that allows participants to become characters in a generative comic book based on current events. I think it's pretty cool because I'm interested in storytelling and narrative art. With the audience as the characters, they get to choose how they would react to certain scenarios, and participants can also add their own dialogue to the comic afterwards. It would be cool if, in addition to the interactivity, the dialogue were generative. I guess that would make it a little less interactive, but I think acting out a generative story that responds to the participant it's photographing would be cool.

kadoin-clock-feedback

I got a lot of positive feedback on the aesthetic and concept of my clock. I'd like to go into video game design or animation someday, so those comments were pretty encouraging. My code is a bit of a mess, though, with its giant lists of coordinates and repetitive functions. It was a bit embarrassing how much I hard-coded instead of writing something that would have done it all for me, but laziness got the better of me.

kadoin-lookingoutwards03

Melanie Hoff is a Brooklyn-based artist who does a lot of generative art. In her project “15,000 Volts” she sticks metal screws into a piece of wood and runs 15,000 volts across them. As the current travels through the wood, it burns its path while seeking the route of least resistance to complete the circuit. Besides being a mesmerizing process to watch, the end result, tiny lightning bolts burned into the wood, is also beautiful.

I like this piece because watching things burn is just very satisfying. I love looking at the embers of a dying campfire, and I love using the laser cutter to burn precise markings into things. This piece is like a combination of those two things, with its slow burning but delicate line work.

Apart from the start point and end point of each burn line, the resulting burn marks that make up the piece are almost impossible to predict, putting this piece of generative art in the more disorderly category.

kadoin-reading03

1a. Crochet was first used to replace flimsy paper classroom models of hyperbolic planes in 1997. By following a fairly straightforward pattern, anyone could crochet themselves an awesomely weird, wiggly, yet orderly (in a non-Euclidean sense) shape. What two sisters from Australia found was that they could imitate the look of coral by varying certain elements of the hyperbolic algorithm, creating an endless variety of organic-looking yarny shapes. These crocheted coral reef sculptures still lean toward order, but the wide variation of shapes produced by small tweaks to the pattern gives the end results a slightly more disorderly feel. Crochet Coral Reef website

1b. Making meaningful art is hard, especially in art school, where you're expected to juggle four or five projects at once and all of them are expected to be equally profound and meaningful. Meaningful work touches people in many more ways than a pretty picture does. It's not so hard to paint a nice, meaningless bowl of fruit, just as it's not so hard to write a generative program that produces infinite images of fruit arranged differently in a bowl. The problem of meaning isn't just a problem for generative art, but for art in general. There are ways to make any type of art meaningful; it just takes a lot of thought, creativity, and ingenuity from the artist.

kadoin-AnimatedLoop

kadoin-loop

Based on the inspirations below, I wanted to create an underwater, seaweedy scene, because wiggling seaweed seemed like a fun motion to imitate. Without the random bubbles it would loop continuously, since the seaweed is just based on sinusoids, but I felt like it needed some other kind of movement, so I put them in. The seaweed isn't drawn very efficiently, so it runs a bit slow.

Light Processes

Carl Burton
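
The seaweed motion itself is basically nested sin() calls. Here's a stripped-down sketch of that idea, not the actual loop code, with made-up numbers: each strand is a column of points whose sideways sway is a sinusoid of both depth and time, so the motion repeats cleanly once the phase wraps around.

// Stripped-down wiggling-seaweed loop: each strand sways on a sinusoid of depth and time.
int strands = 12;

void setup() {
  size(500, 400);
  stroke(40, 160, 90);
  strokeWeight(4);
  noFill();
}

void draw() {
  background(10, 40, 70);
  // One full loop every 120 frames, since the phase below wraps with TWO_PI.
  float t = TWO_PI * (frameCount % 120) / 120.0;

  for (int s = 0; s < strands; s++) {
    float baseX = map(s, 0, strands - 1, 40, width - 40);
    beginShape();
    for (float y = height; y > height - 150; y -= 5) {
      // Sway grows toward the tip of the strand and drifts with time.
      float sway = sin(t + s + (height - y) * 0.05) * (height - y) * 0.15;
      curveVertex(baseX + sway, y);
    }
    endShape();
  }
}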

kadoin-Interruptions

kadoininterruptions

  1. The canvas is square
  2. The lines are all the same length
  3. The lines seem to be evenly spaced and rotated around their midpoints
  4. The lines' angles are random but, for the most part, generally vertical or horizontal
  5. There are areas where there are gaps in the regular pattern of rotated lines. They seem organized in a semi-random way, but I can't pinpoint the method of organization.

I used randomGaussian a lot for this assignment, both to rotate the lines and to organize my gaps. I'm a fan of vertical lines as opposed to horizontal ones, so I made the lines mostly semi-vertical. I also figured that if this were printed, like the original, all I would have to do is rotate the paper to make them read as horizontal.

The gaps were the tricky part to imitate. I ended up removing randomly sized sections of lines from an area, using randomGaussian, before the lines were drawn. It has generally the same effect, but it's not quite 100% there, and I can't quite put my finger on why.
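
For the curious, the skeleton of the approach looks roughly like this. It's a simplified sketch rather than my exact submission (these gaps are circular, mine were carved out of rows of lines), but the randomGaussian rotation and the idea of removing lines before drawing are the same:

// Simplified take on the approach: a grid of mostly-vertical lines with Gaussian-sized holes.
void setup() {
  size(600, 600);
  background(255);
  stroke(0);

  int spacing = 10;
  int len = 12;

  // Pick a few gap centers; each gap swallows a Gaussian-sized neighborhood of lines.
  int gapCount = 8;
  PVector[] gaps = new PVector[gapCount];
  float[] gapSize = new float[gapCount];
  for (int g = 0; g < gapCount; g++) {
    gaps[g] = new PVector(random(width), random(height));
    gapSize[g] = abs(randomGaussian()) * 40 + 20;
  }

  for (int x = spacing; x < width - spacing; x += spacing) {
    for (int y = spacing; y < height - spacing; y += spacing) {
      boolean skip = false;
      for (int g = 0; g < gapCount; g++) {
        if (dist(x, y, gaps[g].x, gaps[g].y) < gapSize[g]) skip = true;
      }
      if (skip) continue;

      // Mostly vertical, with a Gaussian wobble around that orientation.
      float angle = HALF_PI + randomGaussian() * 0.3;
      float dx = cos(angle) * len / 2;
      float dy = sin(angle) * len / 2;
      line(x - dx, y - dy, x + dx, y + dy);
    }
  }
}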

kadoin-lookingoutwards02

When I was 12, I was on the way back to Chicago from visiting my grandparents in Louisville, and I had only the Toy Story 2 DVD and a portable DVD player to entertain myself. After I watched the movie and sat around bored for a while, I decided to watch it again, but this time with the director's commentary.

In it, they talked a lot about how far 3D computer animation had come since the first Toy Story, and how they could now process and render so much more than just a few years earlier. I hadn't really thought much about the difference technological advances had made in 3D animation until then. All of a sudden I was noticing the difference in quality between every 3D animation I watched, and I decided going down a career path in the animation industry would be fun.

PIXAR has pushed characters from once-stiff movement to near-photorealistic accuracy, and it is always looking to push the limits of computer animation further. But in addition to its A+ team of software engineers, PIXAR has amazing artists and storytellers. The graphics aren't the only thing that makes their movies enjoyable, and their commitment to the highest industry standards is admirable.

kadoin-clock

kadoin-clock-final

I wanted to make a clock based on the movement of planets, so I started with the key in the upper right corner. It works similarly to a clock face: the sun acts as the center and the planet acts as the hour hand, and then the planet acts as the center and the moon acts as the minute hand.

Relative to the planet, though, I wanted one revolution of the planet around the sun to act as a year. The stars in the sky slowly rotate around the point at the center of the screen (when the compass points north) over the course of the 12-hour year, to show different parts of the year. One revolution of the moon around the planet equates to one month, with the appropriate phases of the moon. I also have the sun rise and set every minute.

I wanted the viewer to be able to see both the sunrise and the sunset as they happened, so I tried to make a 180-degree panoramic view. The view scrolls sideways when the mouse is near the left or right edge of the window, and a compass lets the user know which direction they're facing.

I didn't initially intend for this clock to almost mirror our 12-months-per-year, ~30-days-per-month calendar, but that's just how it ended up. Throughout the process I became painfully aware of the annoyance of leap days, and of how the units of a calendar year never line up exactly.
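
Under the hood it's all time-to-angle mapping. Here's a simplified sketch of just that mapping, not the actual clock code: the planet sweeps one orbit of the sun per 12 hours like an hour hand, and the moon sweeps one orbit of the planet per hour like a minute hand.

// Simplified orbital mapping: hour hand = planet around sun, minute hand = moon around planet.
void setup() {
  size(600, 600);
}

void draw() {
  background(10);
  translate(width / 2, height / 2);

  // Planet completes one orbit per 12 hours, moon one orbit per hour.
  float planetAngle = TWO_PI * (hour() % 12 + minute() / 60.0) / 12.0;
  float moonAngle   = TWO_PI * (minute() + second() / 60.0) / 60.0;

  noStroke();
  fill(255, 220, 80);
  ellipse(0, 0, 50, 50); // sun at the center

  // Planet orbits the sun like an hour hand.
  float px = cos(planetAngle) * 180;
  float py = sin(planetAngle) * 180;
  fill(90, 140, 255);
  ellipse(px, py, 24, 24);

  // Moon orbits the planet like a minute hand.
  float mx = px + cos(moonAngle) * 45;
  float my = py + sin(moonAngle) * 45;
  fill(200);
  ellipse(mx, my, 10, 10);
}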

img_3078

img_3079

img_3080

kadoin-FirstWordLastWord

What's inspiring often has the quality of pushing the limits of the norms, but mainly it's about the work's ability to engage people emotionally, beyond the novelty. I'm glad CMU has art-based programming classes, because code is generally pretty far from the first artistic medium people think of, yet the amazingly creative and beautiful things people can do with it are almost overwhelming. The forefront of technology and new media is also moving so fast that it's hard to keep up. I'm very excited about all the “first word art” being made with new technologies, but I also think it's important not to get too swept up in the excitement: what matters most, when it comes down to it, isn't doing something beyond what's been done before, but doing something with meaning that you put your heart and soul into. Those are the things I would be able to be proud of, and hopefully they would leave lasting impressions on others as well.

kadoin-Looking Outwards-01

 

This short film, Interim Camp, by the new media studio Field, explores an entirely computer-generated landscape made in Processing. The terrain is shifty and constantly vibrating, almost as if the rocky planet were some sort of giant living organism.

I was first drawn to this piece because some of the stills generated by the program looked fascinating, and with the shifting, surreal colors the planet looked totally alien. It reminded me of MirrorMoon, a low-poly puzzle video game I played through mostly for the visuals. Since this is just a video, I was left wishing it were more interactive and let me explore the world on my own, because it gives off such an exploratory impression.

In some of the studio's more recent work, they have added a great deal of interactivity. Most similar to Interim Camp, though, is an interactive piece called City of Drones, where the user can navigate through an infinite cityscape.