Tigop-mocap

My mocap project! The GIF shows you more; I'm still learning about directional lights and cameras, so the body made of spheres with shifting x coordinates is poorly lit and hard to see.

So I used the data from Perfume, only I used one of the bodies rather than all three. One body keeps stretching horizontally because its spheres' x values spread out over time; the one in the middle has spheres that spin around really fast; and the one in the front, which reminds me of a slinky, kind of dips into the ground, ends up upside down, then comes back up from the ground.

I would call this piece “Day at the Gym” and ironically, all of the spheres have a skin of my cartoon self (I don’t GO to the gym!!!! This is what would happen if I did!!!!)

Anyways, that’s all folks!

Zarard-Manifesto

The tenet I chose is the one where the critical engineer doesn't just marvel at a new technology because it is new and combines cool elements. The tenet says that the critical engineer looks beyond how their work is implemented to see what impact it will actually have; more than that, the critical engineer digs into the specifics of that impact. An example of this is the advent of the social media platform. When Facebook, Twitter, Snapchat, etc. became popular, they were initially conceived as ways to share light-hearted photos, jokes, and stories. However, the engineers didn't really look into the depth of what it means to be social. Being social sometimes means being envious, which is why people who spend unhealthy amounts of time on social media are prone to depression. Being social means competing to get the most friends, asking people for money, arguing, and ignoring. But because the focus was originally on the technology and the capability, it wasn't until years later that the apps were refined to account for things like fraud, hate speech, and suicide posts.

Nngdon-Last Project

The PDF version is available here: FaunaOfSloogiaII.pdf

For my last project I decided to expand on my generative book, which is about imaginary creatures on an imaginary island. The last version had only generated illustrations of the creatures, so I felt I could strengthen the concept of "fauna of an island" by giving each creature a short description, some maps indicating their habitats, and some rendered pictures of the animals against a natural background (trees, rivers, mountains, etc.).

The Map

I first generated a height map using 2D Perlin noise. This results in an even spread of land and water across the canvas, so to make the terrain more island-ish, I used a vignette mask to darken (subtract value from) the corners before rendering the noise.
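
Here is a minimal Python sketch of that idea (my illustration, not the author's code), assuming the `noise` package for Perlin values; the grid size, noise scale, and vignette strength are made-up parameters:

# Height map from 2D Perlin noise with a vignette mask, assuming `pip install noise`.
import math
from noise import pnoise2

W, H = 200, 200      # canvas size (arbitrary)
SCALE = 0.02         # noise frequency (arbitrary)

def height_map():
    hmap = [[0.0] * W for _ in range(H)]
    cx, cy = W / 2.0, H / 2.0
    max_d = math.hypot(cx, cy)
    for y in range(H):
        for x in range(W):
            h = pnoise2(x * SCALE, y * SCALE)       # raw Perlin height, roughly -1..1
            d = math.hypot(x - cx, y - cy) / max_d  # 0 at the center, 1 at the corners
            hmap[y][x] = h - 0.8 * d                # vignette: subtract value toward the corners
    return hmap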

After this, an edge-finding algorithm was used to draw the isolines.
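
The post doesn't say which edge-finding algorithm was used; marching squares is a common choice for isolines, and scikit-image ships an implementation, so a hedged sketch (continuing from the height_map() sketch above) might look like:

# Isolines via marching squares (scikit-image); not necessarily the author's algorithm.
import numpy as np
from skimage import measure

hmap = np.array(height_map())
isolines = []
for level in np.linspace(-0.5, 0.5, 6):           # isoline heights (arbitrary)
    for contour in measure.find_contours(hmap, level):
        isolines.append(contour)                  # (N, 2) arrays of row/col points, drawn as polylines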

Labeling

The next task is to label the map: to find where the mountains and seas are and name them accordingly.

I wrote my own "blob detection" algorithm inspired by flood fill. First, given a point, the program tries to draw the largest possible circle such that all pixels in that circle fall within a certain range of heights. Then, around the circumference of that circle, the program tries to generate more such circles. This is done recursively until no circle larger than a certain small radius can be drawn. The union of all the circles is returned.
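
A sketch of that circle-growing idea in Python (my reconstruction from the description, not the actual code; MIN_R and the eight circumference samples are made-up choices):

import math

MIN_R = 3  # stop recursing below this radius (arbitrary)

def circle_in_range(hmap, x, y, r, lo, hi):
    # True if every pixel inside the circle is on the map with height in [lo, hi]
    for yy in range(y - r, y + r + 1):
        for xx in range(x - r, x + r + 1):
            if (xx - x) ** 2 + (yy - y) ** 2 <= r * r:
                if not (0 <= yy < len(hmap) and 0 <= xx < len(hmap[0])
                        and lo <= hmap[yy][xx] <= hi):
                    return False
    return True

def largest_circle(hmap, x, y, lo, hi):
    # grow the radius until some pixel falls outside the allowed height range
    r = 0
    while circle_in_range(hmap, x, y, r + 1, lo, hi):
        r += 1
    return r

def blob(hmap, x, y, lo, hi, circles=None):
    # note: deep recursion is possible on large maps; an explicit stack would be safer
    if circles is None:
        circles = []
    r = largest_circle(hmap, x, y, lo, hi)
    if r < MIN_R:
        return circles
    circles.append((x, y, r))
    # try more circles around the circumference, skipping already-covered centers
    for i in range(8):
        a = i * math.pi / 4
        nx, ny = int(x + r * math.cos(a)), int(y + r * math.sin(a))
        if not any((nx - cx) ** 2 + (ny - cy) ** 2 < cr ** 2
                   for cx, cy, cr in circles):
            blob(hmap, nx, ny, lo, hi, circles)
    return circles  # the union of all the circles is the blob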

Using Mitchell's best-candidate algorithm, I picked random points evenly spread across the map and applied my blob detection to each. Blobs that are very close to each other or overlap heavily are merged.
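
Mitchell's best-candidate algorithm itself is simple: each new point is the random candidate farthest from all points chosen so far. A minimal sketch (k = 10 candidates per point is an arbitrary choice):

import random

def best_candidate_points(n, w, h, k=10):
    points = [(random.uniform(0, w), random.uniform(0, h))]
    for _ in range(n - 1):
        candidates = [(random.uniform(0, w), random.uniform(0, h)) for _ in range(k)]
        # keep the candidate whose nearest existing point is farthest away
        best = max(candidates, key=lambda c: min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2
                                                 for p in points))
        points.append(best)
    return points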

Then, for each blob that indicates a water area, the program checks how surrounded by land it is and decides whether it is a lake, a strait, a gulf, or a sea. For the land areas, the program decides the terrain according to its height and whether it is connected to another piece of land.
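
One plausible way to measure "how surrounded by land it is" (my guess at the approach; SEA_LEVEL and the classification thresholds are invented) is to sample a ring just outside the blob and count land pixels:

import math

SEA_LEVEL = 0.0  # assumed land/water threshold

def land_fraction_around(hmap, blob_circles, samples=64):
    # sample points on a ring just outside the blob's bounding circle
    cx = sum(c[0] for c in blob_circles) / float(len(blob_circles))
    cy = sum(c[1] for c in blob_circles) / float(len(blob_circles))
    r = max(math.hypot(c[0] - cx, c[1] - cy) + c[2] for c in blob_circles) + 2
    land = 0
    for i in range(samples):
        a = 2 * math.pi * i / samples
        x, y = int(cx + r * math.cos(a)), int(cy + r * math.sin(a))
        if 0 <= y < len(hmap) and 0 <= x < len(hmap[0]) and hmap[y][x] > SEA_LEVEL:
            land += 1
    # e.g. nearly all land -> lake; about half -> gulf or strait; mostly water -> open sea
    return land / float(samples)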

A Markov chain is used to generate the names for the places. The text is rotated and scaled according to the general shape of the area.

Finally, the program exports a JSON file, saving the seed and the names, areas and locations of the places, to be used in the next step.

The Description

The description cost me the most time in this project. I spent a long time thinking about ways of generating high-quality generative text.

I noticed that people usually take one of three major approaches to generative text:

  1. Markov chain / machine learning. The results have good variety, and it's easy to implement, since the computer does most of the work for you. However, the programmer has the least control over what the program generates, and nonsensical sentences often occur.
  2. Word substitution. The human writer writes the whole paragraph, and some words in it are substituted with words chosen randomly from a bank. This method is good for generating only one or two pieces of output, and gets very repetitive after a few iterations. A very boring algorithm.
  3. A set of pre-defined grammar + word substitution.

The third direction seems best able to combine order and randomness. However, as I explored deeper, I discovered that it's like teaching the computer English from scratch, and a massive amount of work is probably involved in making it generate something meaningful, instead of something like:

Nosier clarinet tweezes beacuse 77 carnauba sportily misteaches.

However, I was in fact able to invent a versatile regex-like syntax that makes defining a large set of grammar rules rather easy. I believe it's going to be a very promising algorithm, and I'm probably going to work on it later. As for this project, I looked into the other two algorithms.

Grab data, tokenize and scramble

Finally, after some thought, I decided to combine the first and the second methods.

First I wrote a program to steal all the articles from the internet. The program pretends to be an innocent web browser and searches sites such as Wikipedia using a long list of keywords. It retrieves the source code of the pages and parses it to get a clean, plain-text version of each article.
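
A minimal version of that article-grabbing step (an illustration, not the actual scraper), assuming the requests and beautifulsoup4 packages:

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0"}  # pretend to be an innocent web browser

def plain_text(url):
    html = requests.get(url, headers=HEADERS).text
    soup = BeautifulSoup(html, "html.parser")
    # keep only paragraph text, dropping markup, scripts, and navigation
    return "\n".join(p.get_text() for p in soup.find_all("p"))

print(plain_text("https://en.wikipedia.org/wiki/Gray_wolf")[:500])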

Then I collected a database of animal names, place names, color names, etc., and searched within the articles to substitute the keywords with special tokens (such as "$q$" for the name of the query animal, "$a$" for names of other animals, "$p$" for places, "$c$" for colors, etc.).

I developed various techniques, such as score-based word-similarity comparison, to avoid missing any keywords. For example, an article about the grey wolf may say "gray wolf", "grey wolves", "the wolf", or "wolves", all referring to the same thing.
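
The standard library's difflib can do this kind of score-based comparison; a sketch (the 0.6 threshold is arbitrary, and this is not the author's exact technique):

from difflib import SequenceMatcher

def similarity(a, b):
    # 0..1 score of how alike two strings are
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

query = "grey wolf"
for variant in ["gray wolf", "grey wolves", "the wolf", "wolves"]:
    if similarity(query, variant) > 0.6:
        print(variant, "-> $q$")  # close enough: substitute the token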

After this, a scrambling algorithm such as a Markov chain is used. Notice that since the keywords are tokenized before scrambling, the generator can slide easily from one phrase to another across different articles. This gives the results interesting variety.

LSTM and charRNN

Golan pointed me to the neural networks LSTM and charRNN as alternatives to the Markov chain. It was very interesting to explore them and watch the computer learn to speak English. However, they still tended to generate gibberish after training overnight. There seems to be an asymptote to the loss function: the computer becomes better and better, but then it reaches a bottleneck, starts to confuse itself, and slips back.

Another phenomenon I observed is that the computer seems to fall in love with a certain word, and just keeps saying it whenever possible. At the worst outbursts of this symptom the computer falls into a madness like:

Calf where be will calf will calf that calf will calf different calf calf calf the and calf a calf only calf a other calf calf calf calf…

And oftentimes it does not know when to end its sentences, and keeps running on.

The problem with neural networks is that they're like a magic black box. When it works it's magical, but when it doesn't, you don't know what to fix. As I'm not too familiar with the details of neural networks and was entirely using other people's libraries, I had no idea how to improve the algorithm.

Generation

I wrote my own very portable version of a Markov chain in 20 lines of Python code, and it seems to work better than the neural networks (?).

My favorite lines are:

The $q$ can take a grave dislike towards their tail, which are the primary source of prey.

A female $q$ gives birth to one another through touch, movement and sound.

The infant $q$ remains with its mother until it was strong enough to overpower it and kill it.

And paradoxical ones such as:

…the tail which is twice as often as long as two million individuals.

Finally, the tokens are substituted with relevant information about the animal described. This information is stored in JSON files when the illustrations and maps are generated.

The names of all 50 animals and places are stored in a pool, so descriptions of different animals can refer to each other. For example, the description of animal A might say its predator is animal B. After flipping a few pages, the reader will find a detailed account of animal B, and so on.

Eyes Improvement

Golan told me that my creatures' eyes looked dead and needed to be fixed. I added some highlights so they look more lively now (hopefully).

Code

The complete code will be available on Github once I finalize the project. Currently I’m working on rendering the animals against a natural background.

But here's my 20-line Markov chain in Python.

import random

class Markov20():
	def __init__(self,corp,delim=" ",punc=[".",",",";","?","!",'"']):
		self.corp = corp
		self.punc = punc
		self.delim = delim
		# pad punctuation with the delimiter so it tokenizes as separate "words"
		for p in self.punc: self.corp = self.corp.replace(p,delim+p)
		self.corp = self.corp.split(delim)
	def predict(self,wl):
		# pick a random word that follows the context word-list wl anywhere in the corpus
		return random.choice([self.corp[i+len(wl)] for i in range(0,len(self.corp)-len(wl)) if self.corp[i:i+len(wl)] == wl ])
	def sentence(self,w,d,l=0):
		# grow a sentence from seed word w, using the last d words as context;
		# stop after l words, or (if l == 0) at the first period
		res = w + self.delim
		i = 0
		while (l != 0 and i < l) or (l==0 and w != self.punc[0]):
			w = self.predict(res.split(self.delim)[-1-d:-1])
			res += w + self.delim
			i+=1
		# glue punctuation back onto the preceding word
		for p in self.punc: res = res.replace(self.delim+p,p)
		return res
	def randsentstart(self):
		# first word of a randomly chosen sentence in the corpus
		return random.choice(self.delim.join(self.corp).split(self.punc[0]+self.delim)).split(self.delim)[0]


if __name__ == "__main__":
	f1 = open("nietzsche.txt") # s3.amazonaws.com/text-datasets/nietzsche.txt
	corp = (f1.read()).replace("  ","").replace("\n"," ").replace("\r\n"," ").replace("\r"," ").replace("=","")
	m20 = Markov20(corp)
	for i in range(0,3):
		print(m20.sentence(m20.randsentstart(),2)) # print() also works in Python 2


Zarard- LastProject

Over the semester I've been working with the Carnegie Museum of Art to analyze artwork by the photographer Teenie Harris. Teenie Harris was an amazing photojournalist who captured the most comprehensive visual record of Black American life from the 1930s to the 1970s. Because I am working on this project for the next 1.5 years, I wanted my last project to lay the foundation for future explorations.

So my project was essentially to create a collection of scripts to aid me in visually annotating the Teenie Harris archive, and to create a system for storing that information.

Things I did over the 3 weeks:

  • Got code working with Microsoft Azure to get face, emotion, and tag data for the Teenie Harris archive, which involved debugging their starter code and working with tech support to figure out why my API keys didn't work.
  • Figured out Jupyter.
  • Installed and set up a MongoDB database to hold data from the Teenie Harris archive.
  • Learned the PyMongo driver for interacting with MongoDB through Python.
  • Learned multithreading so that the code could run 12 times as fast (hours instead of weeks); a sketch of this pattern follows the list.
  • Integrated the data and descriptions from the Carnegie Museum of Art into the database.
  • Integrated the data and descriptions from dlib into the database.
  • Got familiar with the OpenCV library and the Pillow library for annotation and photo manipulation.
  • Created images that combined CMOA, dlib, Azure, and OpenCV data and inserted them into the database.
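
Below is a minimal sketch of that multithreaded annotate-and-store pattern, with hypothetical database/collection names and the Azure call stubbed out (the real one would go through the Face API):

from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

collection = MongoClient()["teenie_harris"]["photos"]  # hypothetical names

def call_azure_face_api(photo_id):
    # stand-in for the real Azure Face/Emotion request (network code omitted)
    return []

def annotate(photo_id):
    faces = call_azure_face_api(photo_id)
    collection.update_one({"_id": photo_id}, {"$set": {"azure_faces": faces}})

ids = [doc["_id"] for doc in collection.find({}, {"_id": 1})]
with ThreadPoolExecutor(max_workers=12) as pool:  # roughly the "12 times as fast"
    list(pool.map(annotate, ids))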

All of this work sets me up to do meaningful composition analysis on the data. View the results below:

Tigop- Final

This is a small world that I am in the process of making! I'm really excited about it because it's based on a fictional world that I've created outside of class. It makes me really happy to see it manifest itself in P3D! I've been working on a manifesto for this other world outside of class as well, and it's a manifesto that is supposed to be relevant to the world we are living in today (as is the fictional world). So far, this Processing program has been my way of exploring how I could let viewers interact with the manifesto and be pulled into the great slime bubble.

Throughout the game, viewers have to use their mouse as well as their face to explore. By moving your face and listening to prompts given through text, you can progress through the world and uncover more and more.

Tigop Object

Here I made a small device with the Makey Makey that functions as a module to teach users about consent. PERFECT for the Haven courses that incoming freshmen must take as a requirement: now, in addition to being sent the Last Lecture book, students will also be sent a consent module package.

I also thought about how this little module might be used to keep people who want to diet away from junk food. They can place a chocolate bar or cookie inside a box and put it under these guys.

The only downside is that you need to wear something on your wrist to close the circuit. I would have preferred a way to make this happen without the thing that goes on the wrist, but that's okay. Maybe one day.

Takos – Final Project


Process:
My original idea for this project was to make modular, jointed people with customizable features (height, width, etc.). I have made ball-jointed dolls out of clay before, so this was a natural step for me. I started by trying to geometrize the organic shapes of the human body into something that I could feasibly code in OpenSCAD. I was originally going to make the joints different sizes depending on which part of the body they belonged to, but decided not to, because having all the joints be the same size allowed for more customizability and turned what was supposed to be just a movable person into a sort of toolkit to create with and alter.

I started by prototyping a ball-and-socket joint. I went through a few different 3D-printing tests until I found a design that really worked for me: the socket covers over 50% of the ball, but has slits cut out of it to allow it to expand when being fitted. It also has a sphere cut out of it, which allows joints to be rotated to almost 90 degrees. This allows the use of a double joint (used in the elbows, shoulders, knees, and hips) to achieve more realistic movability.
Other unique joints are the upper-to-lower torso joint and the neck-to-head joint. The torso joint's socket needed to be able to intersect with the upper torso in order to achieve a full range of motion, so I subtracted a sphere around the ball of the upper torso; this is shown below. The neck joint needed to be able to expand while also fitting fully inside the head. To allow this, I subtracted a sphere from outside the socket, which lets the socket expand when the ball is inserted.

Github Repository:
https://github.com/tatyanade/Modular-Person

Keali-Last Project

Staying loyal to my usual aesthetic and everlasting motivation of making a beautiful, virtual environment... I'd like to thank Golan for dealing with the generally constant theme, but I hope some of my other projects were different and experimental enough as well (book, data viz, mocap, etc.). For starters, it was unfortunate that I didn't have the confidence to get started on and learn OpenFrameworks within the timeframe of this project; it is definitely something I want to pick up and utilize in the future, especially since this project really hammered into me what the limits of Processing are. For an interactive environment, Processing will undeniably start running very slowly because of all the assets involved; this limited the interactivity and aesthetics I could include, and I had to prioritize what to keep without sacrificing a sense of completion in the overall product.

As such, I aimed for something calm (my own bias) and serene: an environment that would provide subtle, endearing movement and potential for interaction. I focused on setting up the entire environment, much like a stage setting, rather than going for quantity of assets; I wanted a well-rendered and atmospherically polished environment rather than a more lackluster one with more features. Basically the whole setup was contingent on object-oriented programming, something I had barely used throughout the semester for previous projects, but it was crucial here, as every characteristic of the setting is its own object class: I worked with waves, landscapes, fireflies/particles, rain, stars, and branches and leaves (aggregated into trees), carefully rendering each aspect in the right order so that everything stacks together into the final environment. I was initially inspired to include more air-related features by some beautiful OpenProcessing samples that I wanted to customize, but the noise used to render the smoke and clouds slowed my entire program down to the point that movement was hardly visible, so I had to abandon that. Weighing the assets I had in mind against how sluggishly my program ran as the code grew, I made the decisions needed to output a workable and seamless product that still stayed aesthetic. I am incredibly happy with the resulting soft and subtle features, with the highlight movements from the stars, waves, and fireflies, and with the interactions: light particles follow the user's cursor, and clicking trees applies force vectors that make branches sway and leaves scatter and fall. It is definitely not a game, or an interactive interface with a goal per se, but the whole point was to imagine a user mindlessly enjoying the program, as if it were a virtual art piece.

In contrast to how I had to present it on the final Friday of class, I really think the program benefits from being experienced alone, in a dark room, with headphones. That is how I believe the piece should be seen and felt; funnily enough, a dark room and headphones is exactly the environment I mostly coded it in, as I worked on it during many nights. With this setup I feel the user becomes more fully immersed in the cathartic and serene nature of Nocturne.

This project is definitely something I want to continue and further develop in my free time, or if given the chance in the future. That would probably mean developing and enhancing it in OpenFrameworks, because as of now I think I'm already hitting the maximum of what Processing will run and handle smoothly. I intend to add interactions with new objects such as plant growth and animals, and perhaps some more precipitation and particle options.

GitHub repository:

/*
REFERENCES: 

https://www.openprocessing.org/sketch/179401
https://www.openprocessing.org/sketch/90192
https://processing.org/examples/sinewave.html
https://processing.org/examples/multipleparticlesystems.html
https://processing.org/examples/simpleparticlesystem.html
*/


import processing.sound.*;

int xspacing = 16;   // How far apart should each horizontal location be spaced
int w;              // Width of entire wave

float theta = 0.0;  // Start angle at 0
//float amplitude = 75.0;  // Height of wave --> moved as parameter
float period = 500.0;  // How many pixels before the wave repeats
float dx;  // Value for incrementing X, a function of period and xspacing
float[] yvalues;  // Using an array to store height values for the wave

SoundFile file;

int Y_AXIS = 1;
color c1, c2;

color cloudFill, fade, far, near, mist;

int rainNum = 80;
Rain[] drops = new Rain[rainNum];

ArrayList<Tree> trees = new ArrayList<Tree>(); // typed list so the for-each loops compile in Java mode

void setup() {
  size(1500, 700);
  smooth();
  
  file = new SoundFile(this, "khiitest.wav");
  //file = new SoundFile(this, "khii.mp3"); // testing audio loop?? 
  file.loop();
  
  c1 = color(17, 24, 51);
  c2 = color(24, 55, 112);

  // some setups aborted
  fade = color(64, 85, 128);

  w = width+16;
  dx = (TWO_PI / period) * xspacing;
  yvalues = new float[w/xspacing];

  //for (int i = 0; i < particleCount; i++) {
  //  sparks[i] = new Particle(176, 203, 235);

  for (int i = 0; i < smallStarList.length; i++) {
    smallStarList[i] = new smallStar();
  }
  
  for (int i = 0; i < bigStarList.length; i++) {
    bigStarList[i] = new bigStar();
  }
  
  for (int i = 0; i < fireflyList.length; i++) {
    fireflyList[i] = new firefly();
  }
  
  trees.add(new Tree(600,0));
  trees.add(new Tree(-500,0));
  trees.add(new Tree(300,0));
  trees.add(new Tree(50,0));
  trees.add(new Tree(400,0));
  for (int i = 0; i < rainNum; i++) {
    drops[i] = new Rain();
  }
  
  ps = new ParticleSystem(new PVector(400,600)); // buffer default loc
}

smallStar[] smallStarList = new smallStar[110];
bigStar[] bigStarList = new bigStar[50];
firefly[] fireflyList = new firefly[70];
float gMove = map(.15,0,.3,0,30);
ParticleSystem ps;


void draw() {
  background(0);
  setGradient(0, 0, width, height, c1, c2, Y_AXIS);

  makeFade(fade);
  //clouds(cloudFill); //cloud reference from https://www.openprocessing.org/sketch/179401

  for (int i = 0; i < smallStarList.length; i++) {
    smallStarList[i].display();
  }
  
  for (int i = 0; i < bigStarList.length; i++) {
    bigStarList[i].display();
  }

  drawMountains();
  
  ps.addParticle();
  ps.run();
  for (Tree tree : trees) {
    tree.display(); 
  }
  
  anotherNoiseWave();

  calcWave(30.0);
  renderWave();
  
  for (int i = 0; i < fireflyList.length; i++) {
    fireflyList[i].update();
    fireflyList[i].display();
  }
  
  ps.setOrigin(new PVector(mouseX,mouseY)); 
  
  //if (raining) {  for temp rain no-respawn fix 
    for (int i = 0; i < rainNum; i++) {
      drops[i].update();
    }
  //}
}

void makeFade(color fade) {
  for (int i = 0; i < height/3; i++) {
    float a = map(i,0,height/3,360,0);
    strokeWeight(1);
    stroke(fade,a);
    line(0,i,width,i);
  }
}

class ParticleSystem {
  ArrayList<Particle> particles;
  PVector origin;
  ParticleSystem(PVector location) {
    origin = location.copy();
    particles = new ArrayList<Particle>();
  }
  
  void addParticle() {
    particles.add(new Particle(origin));
  }
  
  void setOrigin(PVector origin) {
    this.origin = origin; 
  }
  
  void run() { 
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);
      p.run();
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }
}

class Particle {
  PVector location;
  PVector velocity;
  PVector acceleration;
  float lifespan;

  Particle(PVector l) {
    acceleration = new PVector(0,0.05);
    velocity = new PVector(random(-1,1),random(-2,0));
    location = l.copy();
    lifespan = 255.0;
  }

  void run() {
    update();
    display();
  }

  // update location 
  void update() {
    velocity.add(acceleration);
    location.add(velocity);
    lifespan -= 10.0;
  }

  // display particles
  void display() {
    noStroke();
    //fill(216,226,237,lifespan-15);
    //ellipse(location.x,location.y,3,3);
    fill(237,240,255,lifespan);
    //ellipse(location.x,location.y,5,5);
    float w = random(3,9);
    ellipse(location.x,location.y,w,w);
  }
  
  // "irrelevant" particle
  boolean isDead() {
    if (lifespan < 0.0) {
      return true;
    } else {
      return false;
    }
  }
}

class Tree {
  ArrayList<Branch> branches = new ArrayList<Branch>();
  ArrayList<Leaf> leaves = new ArrayList<Leaf>();
  int maxLevel = 8;
  Tree(float x, float y) {
    float rootLength = random(80.0, 150.0);
    branches.add(new Branch(this,x+width/2, y+height, x+width/2, y+height-rootLength, 0, null));
    subDivide(branches.get(0));
  }
  
  void display() {
    for (int i = 0; i < branches.size(); i++) {
      Branch branch = branches.get(i);
      branch.move();
      branch.display();
    }
    
    for (int i = leaves.size()-1; i > -1; i--) {
      Leaf leaf = leaves.get(i);
      leaf.move();
      leaf.display();
      leaf.destroyIfOutBounds();
    } 
  }

  void mousePress(PVector source) {
    float branchDistThreshold = 300*300;
    
    for (Branch branch : branches) {
      float distance = distSquared(mouseX, mouseY, branch.end.x, branch.end.y);
      if (distance > branchDistThreshold) {
        continue;
      }
      
      PVector explosion = new PVector(branch.end.x, branch.end.y);
      explosion.sub(source);
      explosion.normalize();
      float mult = map(distance, 0, branchDistThreshold, 10.0, 1.0); 
      explosion.mult(mult);
      branch.applyForce(explosion);
    }
    
    float leafDistThreshold = 50*50;
    
    for (Leaf leaf : leaves) {
      float distance = distSquared(mouseX, mouseY, leaf.pos.x, leaf.pos.y);
      if (distance > leafDistThreshold) {
        continue;
      }
      
      PVector explosion = new PVector(leaf.pos.x, leaf.pos.y);
      explosion.sub(source);
      explosion.normalize();
      float mult = map(distance, 0, leafDistThreshold, 2.0, 0.1);
      mult *= random(0.8, 1.2); // variation
      explosion.mult(mult);
      leaf.applyForce(explosion);
      
      leaf.dynamic = true;
    }
  }

 void subDivide(Branch branch) {
  ArrayList<Branch> newBranches = new ArrayList<Branch>();
  
  int newBranchCount = (int)random(1, 4);
  
  float minLength = 0.7;
  float maxLength = 0.85;
  
  switch(newBranchCount) {
    case 2:
      newBranches.add(branch.newBranch(random(-45.0, -10.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(10.0, 45.0), random(minLength, maxLength)));
      break;
    case 3:
      newBranches.add(branch.newBranch(random(-45.0, -15.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(-10.0, 10.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(15.0, 45.0), random(minLength, maxLength)));
      break;
    default:
      newBranches.add(branch.newBranch(random(-45.0, 45.0), random(minLength, maxLength)));
      break;
  }
  
  for (Branch newBranch : newBranches) {
    this.branches.add(newBranch);

    if (newBranch.level < this.maxLevel) {
      subDivide(newBranch);
    } else {
      // generate random leaves position on last branch
      float offset = 5.0;
      for (int i = 0; i < 5; i++) {
        this.leaves.add(new Leaf(this,newBranch.end.x+random(-offset, offset), 
        newBranch.end.y+random(-offset, offset), newBranch));
      }
    }
  }
}
}

class Leaf {
  PVector pos;
  PVector velocity = new PVector(0,0);
  PVector acc = new PVector(0,0);
  float dia;
  float a;
  float r;
  float g;
  PVector offset;
  boolean dynamic = false;
  Branch parent;
  Tree tree;
  Leaf(Tree tree, float x, float y, Branch parent) {
    this.pos = new PVector(x,y);
    this.dia = random(2,11);
    this.a = random(50,150);
    this.parent = parent;
    this.offset = new PVector(parent.restPos.x-this.pos.x, parent.restPos.y-this.pos.y);
     this.tree = tree;
    if (tree.leaves.size() % 5 == 0) {
      this.r = 232;
      this.g = 250;
    } else {
      this.r = 227;
      this.g = random(230,255);
    }
  }
  
  void display() {
    pushMatrix();
    noStroke();
    fill(this.r, g, 250, this.a);
    ellipse(this.pos.x,this.pos.y,this.dia,this.dia);
    popMatrix();
  }
  
  void bounds() {
    if (!this.dynamic) { return; }
  }
  
  void applyForce(PVector force) {
    this.acc.add(force);
  }
  
  void move() {
    if (this.dynamic) {
      // Sim leaf
      
      PVector gravity = new PVector(0, 0.025);
      this.applyForce(gravity);
      
      this.velocity.add(this.acc);
      this.pos.add(this.velocity);
      this.acc.mult(0);
      
      this.bounds();
    } else {
      // follow branch
      this.pos.x = this.parent.end.x+this.offset.x;
      this.pos.y = this.parent.end.y+this.offset.y;
    }
  } 
  
  void destroyIfOutBounds() {
    if (this.dynamic) {
      if (this.pos.x < 0 || this.pos.x > width || this.pos.y < 0 || this.pos.y > height) {
        tree.leaves.remove(this);
      }
    }
  }
}


class Branch {
  PVector start;
  PVector end;
  PVector vel = new PVector(0, 0);
  PVector acc = new PVector(0, 0);
  int level;
  Branch parent = null;
  PVector restPos;
  float restLength;
  Tree tree;

  Branch(Tree tree, float x1, float y1, float x2, float y2, int level, Branch parent) {
    this.start = new PVector(x1, y1);
    this.end = new PVector(x2, y2);
    this.level = level;
    this.restLength = dist(x1, y1, x2, y2);
    this.restPos = new PVector(x2, y2);
    this.parent = parent;
    this.tree = tree;
  }

  void display() {
    pushMatrix();
    stroke(159, 200, 195+this.level*5);
    strokeWeight(tree.maxLevel-this.level+1);
    
    if (this.parent != null) {
      line(this.parent.end.x, this.parent.end.y, this.end.x, this.end.y);
    } else {
      line(this.start.x, this.start.y, this.end.x, this.end.y);
    }
    popMatrix();
  }

  Branch newBranch(float angle, float mult) {
    // calculate new branch's direction and length
    PVector direction = new PVector(this.end.x, this.end.y);
    direction.sub(this.start);
    float branchLength = direction.mag();

    float worldAngle = degrees(atan2(direction.x, direction.y))+angle;
    direction.x = sin(radians(worldAngle));
    direction.y = cos(radians(worldAngle));
    direction.normalize();
    direction.mult(branchLength*mult);
    
    PVector newEnd = new PVector(this.end.x, this.end.y);
    newEnd.add(direction);

    return new Branch(tree, this.end.x, this.end.y, newEnd.x, newEnd.y, this.level+1, this);
  }
  
  // branch bouncing 
  void applyForce(PVector force) {
    PVector forceCopy = force.get();
    
    // smaller branches will be more bouncy
    float divValue = map(this.level, 0, tree.maxLevel, 8.0, 2.0);
    forceCopy.div(divValue);
    
    this.acc.add(forceCopy);
  }
  
  void sim() {
    PVector airDrag = new PVector(this.vel.x, this.vel.y);
    float dragMagnitude = airDrag.mag();
    airDrag.normalize();
    airDrag.mult(-1);
    airDrag.mult(0.025*dragMagnitude*dragMagnitude); // java mode
    this.applyForce(airDrag);
    
    PVector spring = new PVector(this.end.x, this.end.y);
    spring.sub(this.restPos);
    float stretchedLength = dist(this.restPos.x, this.restPos.y, this.end.x, this.end.y);
    spring.normalize();
    float elasticMult = map(this.level, 0, tree.maxLevel, 0.05, 0.1); // java mode
    spring.mult(-elasticMult*stretchedLength);
    this.applyForce(spring);
  }
  
  void move() {
    this.sim();
    
    this.vel.mult(0.95);
    
    // kill velocity below this threshold to reduce jittering
    if (this.vel.mag() < 0.05) {
      this.vel.mult(0);
    }
    
    this.vel.add(this.acc);
    this.end.add(this.vel);
    this.acc.mult(0);    
  }
}

float distSquared(float x1, float y1, float x2, float y2) {
  return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
}
  
class smallStar {
  color c;
  float x;
  float y;
  float a;
  float h;
  float w;
  float centerX;
  float centerY;
  float ang;
  
  smallStar() {
    x = random(0,width);
    y = random(0,height/2);
    w = random(3,6);
    a = random(100,200);
    color[] colors = {color(232,248,255,a),color(235,234,175,a),color(242,242,208,a),
                         color(250,250,240,a),color(255,255,255,a)};
    int index = int(random(colors.length));
    c = colors[index];
    h = w;
    centerX = x + w/2;
    centerY = y + h/2;
    ang = random(0,PI)/random(1,4);
  }
  
  void display() {
    pushMatrix();
    ang = (this.ang + .01) % (2*PI);
    fill(this.c);
    noStroke();
    translate(centerX,centerY);
    rotate(ang);
    rect(-w/2,-h/2,w,h);
    popMatrix();
    //println("x" + this.x + "y" + this.y);
  }
}

class bigStar {
  float x;
  float y;
  float r1;
  float a;
  float flicker;
  float r2;
  color c;
  float ang;
  float angDir;
  
  bigStar() {
    x = random(0, width);
    y = random(0, height/2);
    r1 = random(2,5);
    a = random(40,180);
    flicker = random(400,800); 
    r2 = r1 * 2;
    color[] colors = {color(232,248,255,a),color(201,239,255,a),color(242,242,208,a),
                         color(250,250,240,a),color(255,255,255,a)};
    int index = int(random(colors.length));
    c = colors[index];
    float[] angles = {radians(millis()/170),radians(millis()/150),radians(millis()/-150),
                      radians(millis()/-170)};
    int index2 = int(random(angles.length));
    ang = angles[index2];
    angDir = (random(1)*0.1) - .05;
  }
  
  void display() {
    pushMatrix();
    //colorMode(RGB,255,255,255);
    //float newA = map(shine,-1,1,0,255);
    float newR = c >> 16 & 0xFF; //use bit shifts for faster processing
    float newG = c >> 8 & 0xFF;
    float newB = c & 0xFF;
    //float newAA = (a + newA) % 255;
    //float newC = color(newR, newG, newB, newAA); 
    float shine = sin(millis()/flicker);
    float a = this.a + map(shine,-1,1,40,100);
    //if (a < 0) { a = -a; };
    fill(newR,newG,newB,a);
    //fill(newC);
    noStroke();
    translate(x,y);
    ang = (this.ang + angDir) % (2*PI);
    rotate(ang);
    makeBigStar(0,0,r1,r2,5);
    popMatrix();
    //println("shine " + shine + "newAA " + newAA);
  }
}
    

void setGradient(int x, int y, float w, float h, color c1, color c2, int axis) {
  noFill();
  for (int i = y; i <= y+h; i++) {
    float inter = map(i, y, y+h, 0, 1);
    color c = lerpColor(c1, c2, inter);
    stroke(c);
    line(x, i, x+w, i);
  }
}

boolean raining = false;
//boolean rainToggle = false;

void keyPressed() {
  if (key == 'r') {
    if (raining == false) {
      raining = true;
      //rainNum = 80;
      //rainToggle = true;
    } else {
      raining = false;
    }
  }
}

void mousePressed() {
  PVector source = new PVector(mouseX, mouseY);
  for (Tree tree : trees) {
     tree.mousePress(source); 
  }
}

class firefly {
  PVector position;
  PVector velocity;
  float move;
  //float flicker;
  float a;
  
  firefly() {
    position = new PVector(random(0,width),random(400,650));
    velocity = new PVector(1*random(-1,1),-1*(random(-1,1)));
    move = random(-7,1);
    //flicker = sin(millis()/400.0);
    a = random(0,100); //map(flicker,-1,1,40,100);
  }
  
  void update() {
    position.add(velocity);
    if (position.x > width) {
      position.x = 0;
    }
    if (position.y > height || position.y < 360) {
      velocity.y = velocity.y * -1;
    }
  }
  
  void display() {
    pushMatrix();
    float flicker = sin(millis()/400.0);
    float a = (this.a + map(flicker,-1,1,40,100)) % 255;
    fill(255,255,240,a);
    ellipse(position.x,position.y,gMove+move, gMove+move);
    ellipse(position.x,position.y,(gMove+move)*0.5,(gMove+move)*0.5);
    popMatrix();
  }
}  

float yoff = 0.0;
float yoff2 = 0.0;

float time = 0;

void anotherNoiseWave() {
  float x = 0;
  while (x < width) {
    //stroke(255,255,255,5);
    stroke(0,65,117,120);
    //stroke(11, 114, 158, 12);
    line(x, 520 + 90 * noise(x/100, time), x, height);
    x++;
  }
  time = time + 0.02;
}

void calcWave(float amplitude) {
  // Increment theta (try different values for 'angular velocity' here
  theta += 0.02;

  // For every x value, calculate a y value with sine function
  float x = theta;
  for (int i = 0; i < yvalues.length; i++) {
    yvalues[i] = sin(x)*amplitude;
    x+=dx;
  }
}

void renderWave() {
  noStroke();
  colorMode(RGB);
  float ellipsePulse = sin(millis()/600.0);
  float ellipseColor = map(ellipsePulse, -1, 1, 150, 245);
  fill((int)ellipseColor, 220, 250, ellipseColor-60);
  // A simple way to draw the wave with an ellipse at each location
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.3*xspacing, height/1.2+yvalues[x], 6, 6);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.7*xspacing, height/1.3+yvalues[x], 5, 5);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.4*xspacing, height/1.15+yvalues[x], 7, 7);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.5*xspacing, height/1.27+yvalues[x], 6, 6);
  }
}

class Rain {
  float x = random(0, width);
  float y = random(-1000, 0);
  float size = random(3, 7);
  float speed = random(20, 40);
  void update() {
    y += speed;
    fill(255, 255, 255, 180);
    //fill(185, 197, 209, random(20, 100));
    ellipse(x, y-5, size-3, size*2-3);
    fill(185, 197, 209, random(20, 100));
    //fill(255, 255, 255, 180);
    ellipse(x, y, size, size*2);

    if (y > height) {
      if (raining) {
        x = random(0, width);
        y = random(-10, 0);
      } 
      if (!raining) { // temp fix for stopping rain: let current rainfall not respawn at top
        //drops = new Rain[0];
        y = height;
        //speed = 0;
      }
    }
  }
}

void makeBigStar(float x, float y, float radius1, float radius2, int npoints) {
  float angle = TWO_PI / npoints;
  float halfAngle = angle/2.0;
  beginShape();
  for (float a = 0; a < TWO_PI; a += angle) {
    float sx = x + cos(a) * radius2;
    float sy = y + sin(a) * radius2;
    vertex(sx, sy);
    sx = x + cos(a+halfAngle) * radius1;
    sy = y + sin(a+halfAngle) * radius1;
    vertex(sx, sy);
  }
  endShape(CLOSE);
}

void drawMountains() {
  strokeWeight(15);
  strokeJoin(ROUND);
  for (int i = 0; i <= 10; i++ ) {
    float y = i*30;
    fill(map(i, 0, 5, 200, 35), map(i, 0, 5, 250, 100), map(i, 0, 5, 255, 140));
    stroke(map(i, 0, 5, 200, 35), map(i, 0, 5, 250, 110), map(i, 0, 5, 255, 150));
    beginShape();
    vertex(0, 400+y);
    for (int q = 0; q <= width; q+=10) {
      float y2 = 400+y-abs(sin(radians(q)+i))*cos(radians(i+q/2))*map(i, 0, 5, 100, 20);
      vertex(q, y2);
    }
    vertex(width, height);
    vertex(0, height);
    endShape(CLOSE);
  }
}

Catlu – Last Project

Please click on the images to play the GIFs (for some reason they won't play otherwise):

For this project, I wanted to do more coding with Python in Maya: to practice the Maya-specific commands and language, like the basic polygon commands and how to find information on objects and control them. At first I wanted to construct a generative city in Maya, but I later decided not to, because with the knowledge of Maya Python I could realistically gain in the time I had, I didn't think I'd be satisfied with how complex a city I could make. After that, I decided it would be good to explore another useful feature: making objects move in relation to another object. Originally I wanted to make a projectile object that scattered particles in a field, and to get there I started with more basic movements. I had a really hard time this project getting things to work. Whereas last time I used code to mass-produce objects at different angles, this time it was about moving objects, and mass-generating objects was definitely a lot easier. Even though the things I was trying to do weren't supposed to be that hard, they turned out harder and more time-consuming than I thought, and figuring out Maya's kinks without a good guide was also challenging. In the end, I could only get basic animation code to sort of work: I generated the lanterns in the scene in their formation using code, and made them move in relation to the mask using code. I think I learned more about coding in Maya and am more comfortable in it, but I definitely need to practice tons more.
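
To give a flavor of the commands involved, here is a small hypothetical maya.cmds sketch (not the code linked below) that generates objects in a ring and keyframes them to drift toward a target; it has to run inside Maya:

import math
import maya.cmds as cmds

mask = cmds.polySphere(name="mask")[0]  # stand-in for the mask object

lanterns = []
for i in range(12):
    a = i * 2 * math.pi / 12
    lantern = cmds.polyCube(name="lantern%d" % i)[0]
    cmds.move(5 * math.cos(a), 0, 5 * math.sin(a), lantern)  # place on a ring
    lanterns.append(lantern)

# keyframe each lantern to drift halfway toward the mask over 120 frames
mx, my, mz = cmds.xform(mask, query=True, translation=True, worldSpace=True)
for lantern in lanterns:
    x, y, z = cmds.xform(lantern, query=True, translation=True, worldSpace=True)
    cmds.setKeyframe(lantern, attribute="translate", time=1)    # key the start position
    cmds.move((x + mx) / 2, (y + my) / 2, (z + mz) / 2, lantern)
    cmds.setKeyframe(lantern, attribute="translate", time=120)  # key the end position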

Here are the links to the code on Github. Once again WP-Syntax has failed me.

These are not the final versions of the code I used for the animation and creation. Unfortunately my Maya program crashed before I saved the final code, but the code below consists of the not-so-final versions of what I used.

Maya lantern move code

Maya lantern generate in pattern code

Xastol – Last Project

INITIAL IDEAS

For the last project I created a "scene generator". My initial idea was to develop a program that would generate random scripts and movie ideas from a database of well-known films. However, after doing more research on LSTMs and recurrent neural networks, I found that it would take too much time to train the network.

FINAL IDEA

After conversing with Professor Golan, I began to pursue a similar idea. Using a database of various photos and captions, I introduce two chat bots to a random photo. One bot says the caption associated with the provided photo, setting the scene for the two bots to converse. After the "scene" ends, the entire script is saved into a text file, formatted like a film script.

For coding purposes, I decided to use Python. Although it's not very good for visualizing things, Python has a lot to offer in terms of collecting and presenting data to AI. For the AI, I found the Cleverbot module to be the most responsive. Additionally, the program worked particularly well when the bots shared the same database of responses (even though they shared the same database, the bots were initialized differently so as not to give exactly the same responses every time).

ADDITIONAL COMMENTS

I actually really enjoyed the process of this project. Although I felt lost about the direction of my project, I really enjoyed the outcome and look forward to developing it more, giving more humanistic qualities to the two "actors" (i.e. text-sentiment analysis, vocal inflections, etc.).

DEMOS

Favorite scenes.

Another example.

In program conversation/picture change.

A short conversation about genders (saved script).

Github: https://github.com/xapostol/60-212/tree/master/scriptingScenes

CODE

# Xavier Apostol
# 60-212 (Last Project)
# SCRIPTING SCENES
    # NOTE: runs in python 2.7

import os
import re
import time
import msvcrt
import random
import pyttsx
import pygame
from textwrap import fill
from cleverbot import Cleverbot

###########################
### INITIALIZING THINGS ###
###########################
# initializing chat bots
bot1Name = "ROBOTIC VOICE 1"
cb1 = Cleverbot()
bot2Name = "ROBOTIC VOICE 2"
cb2 = Cleverbot()

# getting started with voice recognition
engBots = pyttsx.init()
voices = engBots.getProperty('voices')

# misc
sleepTime = 1

# conversation lists
bot1Conversation = []
bot2Conversation = []

# max length for text
maxTextLen = 60


#####################
### TRUNCATE TEXT ###
#####################
# formats txt appropriately (text wrapping)
def formatTxt(text):
    lstSpace = []

    text = fill(text, maxTextLen)
    for char in range(0, len(text)):
        if text[char] == "\n":
            lstSpace.append(char)
    return lstSpace


###############################
### GET CAPTIONS AND IMAGES ###
###############################
# change to location of "photo_database" folder
picsFldr = "C:/Users/Xavier/Desktop/60-212/Class Work/FINAL PROJECT/scriptingScenes/photo_database" 
filenameLst = []

# collect photo names
for f in os.listdir(picsFldr):
    fileName, fileExt = os.path.splitext(f)
    filenameLst.append(fileName)

# collect captions
fo = open("SBU_captions_F2K.txt", 'r')
captionsList = fo.read().split('\n')
fo.close()

# all image titles are numbers
def grabCaption(imgTitle):
    indx = int(imgTitle)
    return (captionsList[indx])


####################
### PYGAME STUFF ###
####################
# initiating pygame
pygame.init()

# start window/set values
running = True
windSz = winW, winH = 1280, 720
#windSz = winW, winH = 1920, 1080
window = pygame.display.set_mode(windSz, pygame.RESIZABLE)
pygame.display.set_caption("Robotic Voices Script")

imgSz = imgW, imgH = 450, 400
#imgSz = imgW, imgH = 600, 550

backGClr = (0, 0, 0)
window.fill(backGClr)

# optimize frame rate
clock = pygame.time.Clock()
framesPSec = 30
clock.tick(framesPSec)  # change FPS

# font implementation
fontSz = imgW / 10
font  = pygame.font.SysFont("Arial", fontSz)
fontClr = (255, 255, 255)

# bot X and Y
displayTextX = winW/2
displayTextY = winH/2 + fontSz*3 + 10


####################################
### BASIS CODE FOR CONVERSATIONS ###
####################################
# loads and displays picture of interest
def displayPicture(pictureName):
    imgLoad = pygame.image.load(picsFldr + "/" + pictureName + ".jpg").convert()
    imgLoad = pygame.transform.scale(imgLoad, (imgW,imgH))
    window.blit(imgLoad,(displayTextX-imgW/2, displayTextY-(imgH + fontSz/1.5)))

# displays text for each bot on screen
def displayConvo(botName, botVoice, botText, pictureName):
    # initializing variables
    botTextLH1 = ""  # last half of botText (if too big)
    botTextLH2 = ""  # last half of botText (if bigger than twice the maxLen)
    indxChng1 = 0
    indxChng2 = 0

    # for testing
    #print(pictureName)
    #print(botName + " - " + botText)

    # set voice and what to say
    engBots.setProperty('voice', voices[botVoice].id)  # select this bot's voice
    engBots.say(botText)

    # start writing text
    if len(botText) > maxTextLen*2:
        # formats to three lines
        indxChng1 = formatTxt(botText)[0]
        indxChng2 = formatTxt(botText)[1]
        botTextLH2 = botText[indxChng2+1:]
        botTextLH1 = botText[indxChng1+1:indxChng2]
        botText = botText[:indxChng1]

    elif len(botText) > maxTextLen:
        # formats to two lines
        indxChng1 = formatTxt(botText)[0]
        botTextLH1 = botText[indxChng1+1:]
        botText = botText[:indxChng1]

    # sets up vocalization of text
    vocTxt = font.render(botText, False, fontClr)
    vocTxtLH1 = font.render(botTextLH1, False, fontClr)
    vocTxtLH2 = font.render(botTextLH2, False, fontClr)

    # displays text
    window.blit(vocTxt,    (displayTextX - vocTxt.get_rect().width/2,
                            displayTextY))
    window.blit(vocTxtLH1, (displayTextX - vocTxtLH1.get_rect().width/2,
                            displayTextY + fontSz))
    window.blit(vocTxtLH2, (displayTextX - vocTxtLH2.get_rect().width/2,
                            displayTextY + fontSz*2))

    displayPicture(pictureName)  # display subject
    pygame.display.update()      # update display
    engBots.runAndWait()         # vocalize text
    time.sleep(sleepTime)        # wait time
    window.fill(backGClr)        # reset canvas (set to black to erase prev msg)


#####################
### RUNNING SCENE ###
#####################
# runs entire scene (program)
def runScene():
    # setting counter and magic numbers
    count = 1
    maxRuns = 200  # free to change

    ####################
    ### CONVERSATION ###
    ####################
    # bot 1 starts conversation
    time.sleep(10)
    ranPicName = random.choice(filenameLst)

    bot1Response = grabCaption(ranPicName)
    displayConvo(bot1Name, 0, bot1Response, ranPicName)
    bot1Conversation.append(bot1Response)

    while (count <= maxRuns):
        # chances of implementing item
        ranInt = random.randint(5, 10)
        result = count % 4

        """
        # testing purposes
        print("Random Int: " + str(ranInt))
        print("Result: " + str(result))
        print("\n")
        """

        # check if randomly apply item from "Table of Responses"
        if (result == 0):
            # collects random picture and caption
            ranPicName = random.choice(filenameLst)
            bot2Response = grabCaption(ranPicName)
        # check if it's time to say goodbye.
        elif (count == maxRuns):
            bot2Response = "Bye."
        # else keep responding
        else:
            bot2Response = cb2.ask(bot1Response)


        # bot 2 responds
        displayConvo(bot2Name, 1, bot2Response, ranPicName)
        bot2Conversation.append(bot2Response)

        # bot 1 responds
        bot1Response = cb1.ask(bot2Response)
        displayConvo(bot1Name, 0, bot1Response, ranPicName)
        bot1Conversation.append(bot1Response)

        count += 1

        # press anything to stop program (break out of loop)
        if msvcrt.kbhit():
            break

    pygame.quit()


#########################
### WRITING TEXT FILE ###
#########################
# writes conversation to a .txt file (script)
def saveConversationToScript():
    file = open("robotic_voices_script.txt", "w")

    file.write("SCENE 1")
    file.write("\n")
    file.write("INT. DARKNESS")
    file.write("\n")
    file.write("\n")

    file.write("There is nothing but darkness.")
    file.write("\n")
    file.write("Suddenly, two robot voices emit into conversation.")
    file.write("\n")
    file.write("The first, ROBOTIC VOICE 1, speaks.")
    file.write("\n")
    file.write("=============================================")
    file.write("\n")
    file.write("\n")

    for i in range(0, len(bot1Conversation)):
        file.write(bot1Name)
        file.write("\n")
        file.write(bot1Conversation[i])
        file.write("\n")
        file.write("\n")

        file.write(bot2Name)
        file.write("\n")
        if i == len(bot1Conversation) - 1:
            file.write("*SILENCE*")
        else:
            file.write(bot2Conversation[i])
        file.write("\n")
        file.write("\n")

    file.write("=============================================")
    file.write("\n")
    file.write("The voices stop.")
    file.write("\n")
    file.write("There is nothing but darkness.")
    file.write("\n")
    file.write("\n")
    file.write("END SCENE")

    file.close()


#########################
### RUNNING FUNCTIONS ###
#########################
runScene()
saveConversationToScript()

Darca-Reading03

Question 1A.

Manfred Schroeder: Prime Spectrum, 1968

In terms of effective complexity, I think this piece combines order (uniform, same-sized, black-and-white pixel-like blocks arranged in sixteen squares) with the randomness of how the blocks' colors are arranged. The accumulations of white blocks tend to run diagonally, with concentrations in certain areas, and the fact that the whole image is symmetric is also a representation of order.

Question 1B.

The problem of authorship is really interesting to me. For digital generative art, when the computer became the means of execution, it also became a bigger part of the process of making art. The line starts to blur when the artist and the computer work as a team, since the result would not be the same without the input and response from either side. In my opinion the computer is more than just a tool when creating generative art; the computer and the artist should be partners, each entitled to their own part of the creation.

Guodu-Final Process


p5* Calligraphy

An interactive experience where you can practice writing and calligraphy with different types of randomly selected font templates and brushes.


Enter your practice word below

Esc – Resets Canvas | Shift – Change Brush Style | Up or Down to Change Brush Thickness

Process 

Sketches

Next Time

There are a lot of interaction issues, like non-intuitive controls for the brush characteristics and not knowing which brush you are on. Also, I think it would be beneficial in teaching calligraphy to show which direction one's stroke should go.

Overall I had a lot of fun creating this, especially the limitless brush styles. When thinking about a concept for this project, I looked to my hobbies and interests, which always came back to drawing and typography. I found the idea of using a tool (p5*) to make another tool, and hopefully sharing it with others, to be empowering.

Of the many programming artists I was exposed to in this course, Zach Lieberman left a deep impression on me with one of his Eyeo talks (here). He talked about his interests in:

  • Intersection of Drawing and Code
  • What does drawing on a computer feel like?
  • How do we describe drawings on the computer?
  • What is the sketchbook of today’s age?
  • Beginner (turn off background and you have a paintbrush) –> Advanced drawing in code (recording data)

Ultimately this exploration of bridging digital and physical in addition to drawing makes me wonder how drawing in these different mediums affects and influences a person. Would someone get better at calligraphy by hand if they practiced on this template and used a tablet? And if someone is already good at calligraphy, how well do they transfer to a digital program?

Drewch – LastProject

(Spoilers!)

For my final project, I decided to use Unity 3D. I only had a little more than a week to create a game in a programming environment I had never used before, so I had to figure out how to create a game (that I wasn't ashamed of) using what Unity provides readily: a physics engine.

I wanted to put into practice some of the things I learned from playing the games I talked about in LookingOutwards09, particularly meaning in mechanics and thought-provoking surprise. Admittedly, the game is hard for people who are inexperienced at navigating in first person, and sometimes it's unbeatable because the pseudo-randomly placed cubes fly off when they spawn inside each other. Aside from those issues, I think this project was a success, considering my time constraints. I learned a lot about lighting, camera effects, player control, physics, materials, and more, in just the span of a week. I'm excited to keep working with Unity.

Instructions are in the ReadMe, if you want to give it a shot. It looks much nicer in real time.

download link: http://www.mediafire.com/file/svxdtvgokxew51y/FinalProject.7z

Drewch – LookingOutwards09

I've talked about Bound and Journey already in another LookingOutwards, and although the final project I created is inspired by them in spades, I'll take this moment to talk about a few other games that really influenced me and my decisions.

First is Antichamber. I think the above video was purposely designed to confuse you, which is fitting, because the game was designed to confuse you anyway. There is a mastery of the psychological meaning of the game's mechanics, and with it, an incredible revelation that most players reach at some point: the greatest obstacle you'll ever face in this game is yourself.

Next up are The Stanley Parable and The Beginner's Guide.

The big difference between these two games and the previous ones I mentioned is that dialogue (monologue?) is the biggest driver of player progression. The games/levels are designed to catch the player off guard, and the narrators try to make players think over what they're seeing and doing.

My goal for this project (but hopefully better accomplished in future projects) is to tap into the power of surprise and introspection, things that all of the aforementioned games do masterfully.

Darca-LookingOutwards07

Fleshmap by Fernanda Viégas and Martin Wattenberg

Project homepage: http://www.fleshmap.com/index.html

Fleshmap, in the creators' words, is an inquiry into human desire, its collective shape and individual expressions: a series of artistic studies exploring the relationship between the body and its visual and verbal representation. It contains three parts: Touch, Listen, and Look.

I appreciate it when a data visualization tells me something beyond what I think I know, or what I cannot understand as well just by living my own life. Playing with the results feels like an open, impromptu discussion with friendly strangers about an intimate topic: full of little surprises, eccentric but intriguing. It explores the secret of human desire by breaking down the basic senses, examining both collective patterns and individual uniqueness, and abstracting data from real human emotions to put together images that represent the answer lying within.

It's fascinating to see the difference between the giver and the receiver in the interaction of touch, for both men and women; human desire can look different from different ends. It's not symmetric, which opens up the question of not just where people touch but what leads to that result, something more complicated than the physical interaction itself. The origin of it all is that the heart wants what it wants.

kander – last project

My original idea was to use a Raspberry Pi to run a Processing sketch that printed generative horoscopes on the Adafruit Mini Thermal Printer. However, after being unable to connect the RPi to the internet, I tried simply making an Arduino sketch that would print something more simplistic, so I could just stick the printer in a little box and have a cute little object that spits out something random (I was going to go with morbid variations on lyrics from popular Christmas carols).

This is what comes with the printer. The kit I got also had a power supply and adapter. It was very easy to set up with Processing (the red-and-black cable is power; the green, yellow, and black cables are dataOut, dataIn, and ground). All I had to do was install the thermal printer library and connect everything according to the instructions at https://learn.adafruit.com/mini-thermal-receipt-printer/microcontroller

so cute! so fragile!
However, through human error, the printer got plugged into a 12V power supply instead of the 5V one it required, and got fried. Since there’s no undo command for ruining your hardware, I ended up simply creating an interface in Processing that displays the horoscope when you click on the star sign. Not as cool, funny, or well suited to the material (I really liked how the low quality of receipt paper matched these dumb little horoscopes).

I used RiTa’s markov class to generate the text for the horoscopes, trained on the text of a lot of horoscopes that I found online. I also had my first experience dealing with Unicode characters in Processing, so that was nice.
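The markov setup is only a few lines. A minimal sketch, assuming the RiTa v1 Processing library and a hypothetical horoscopes.txt corpus in the data folder:

import rita.*;

void setup() {
  // n-gram length 3: small enough to stay weird, big enough to stay grammatical
  RiMarkov markov = new RiMarkov(3);
  markov.loadText(join(loadStrings("horoscopes.txt"), " "));
  // print a two-sentence generative "horoscope"
  println(join(markov.generateSentences(2), " "));
}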

Project Github

I also think that I ought to include some of the materials and processes from my attempts with the Raspberry Pi and the Arduino, since learning through failure was such an integral part of this project!

Pi: I had never used one of these before, and I was pretty amazed at its capabilities. I learned how to download the latest OS image and install it on the Pi. I ran into issues when trying to connect to wifi, which is necessary for setting up printing. I got a lot of practice with the Linux terminal though (lol)

Arduino: I had used these before, but it had been a while. Golan and I wrote a sketch called kander1 (accessible in the GitHub branch) that printed random characters. It actually worked for a little bit! I also got some practice breadboarding, which was fun.

Random side note: I also made this drawing program in Processing that I kind of like. It’s good for drawing human heads. It was part of my experimentation when I was considering including a drawing corresponding to each star sign rather than the Unicode symbol. I wanted to be able to manipulate the drawing in other programs, so I saved the points that form the ends of the curves into a JSON file, which can be loaded and parsed by another program to redo the actual drawing (instead of loading a PNG, for example).
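The round trip is straightforward with Processing’s built-in JSON functions. A minimal sketch; endpoints and head.json are illustrative names, not taken from my repo:

ArrayList<PVector> endpoints = new ArrayList<PVector>();

void setup() {
  endpoints.add(new PVector(10, 20));
  endpoints.add(new PVector(30, 40));

  // save: one JSON object per curve endpoint
  JSONArray pts = new JSONArray();
  for (int i = 0; i < endpoints.size(); i++) {
    JSONObject o = new JSONObject();
    o.setFloat("x", endpoints.get(i).x);
    o.setFloat("y", endpoints.get(i).y);
    pts.setJSONObject(i, o);
  }
  saveJSONArray(pts, "data/head.json");

  // load: another sketch can rebuild the curves from the same file
  JSONArray loaded = loadJSONArray("data/head.json");
  for (int i = 0; i < loaded.size(); i++) {
    JSONObject o = loaded.getJSONObject(i);
    println(o.getFloat("x"), o.getFloat("y"));
  }
}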

drawing program on github

aliot-lookingoutwards07

1033 Objects by Ingrid Burrington

This project is a “visualization” of items that have been transferred to police departments around America from the Pentagon. The project plays off of the 1033 Program, an initiative to transfer excess military equipment from federal agencies to civilian law enforcement agencies. At the time of the project, about 200,000 objects had been transferred. In an attempt to convey the quantity of objects and the ambiguity of the program, the website displays 1033 random objects. The viewer gets a good sense of the sort of items transferred, and while it would take an inordinate amount of time to read all the information, the items are arranged in a grid, so the scrolling motion down the page drives home the notion that there are VERY VERY many of them.

aliot-lastproject

I had originally planned on finishing an app for the HoloLens, but amidst a hectic final week of classes and numerous software updates to Unity and Visual Studio, I decided to go with a simpler project. I made a Unity app which tracks mouth movement and screams for the user when the user opens their mouth.

Feeling frustrated with my projects and exams, I said to myself, “I just want to scream.” That is often not a reasonable reaction in class or in any public setting, but screaming can be very cathartic. This app, used with headphones, allows a user to experience the stress-relieving sensation of screaming without disturbing those in his or her vicinity.
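The trigger logic is simple: a mouth-openness value from a face tracker, a threshold, and a one-shot sound. The app itself is built in Unity; purely as an illustration, here is the same idea sketched in Processing with FaceOSC, oscP5, and Minim (the /gesture/mouth/height address, scream.wav, and the 4.0 threshold are assumptions to tune):

import oscP5.*;
import ddf.minim.*;

OscP5 osc;
Minim minim;
AudioPlayer scream;
float mouthHeight = 0;
boolean screaming = false;

void setup() {
  size(200, 200);
  osc = new OscP5(this, 8338); // FaceOSC's default output port
  minim = new Minim(this);
  scream = minim.loadFile("scream.wav"); // hypothetical sound file in data/
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = m.get(0).floatValue();
  }
}

void draw() {
  boolean open = mouthHeight > 4.0; // tune per face and camera distance
  if (open && !screaming) { // fire once per mouth-open event
    scream.rewind();
    scream.play();
  }
  screaming = open;
  background(open ? color(255, 0, 0) : color(0));
}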

I was heavily inspired by some of these public works (mostly the vicarious scream button):
http://thecreatorsproject.vice.com/blog/clever-signs-public-art-spaces

Anson-Manifesto

It is very difficult to choose just one tenet of the Critical Engineering Manifesto. Many of the tenets are interrelated, and they feed directly into much of the reading I’ve been doing recently on human-machine entanglement.

Therefore, I pick three tenets, which I believe to be highly interrelated:

1. The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.

2. The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.

9. The Critical Engineer notes that written code expands into social and psychological realms, regulating behaviour between people and the machines they interact with. By understanding this, the Critical Engineer seeks to reconstruct user-constraints and social action through means of digital excavation.

These concepts all hinge on the power and value hierarchy wielded by those who create the “black box” around new technological developments. We have seen, with vivid and brutal clarity, what happens when we depend on a technology and allow it to “regulate our behavior,” remaining within its tightly controlled constructs and not questioning the legitimacy of this dependency. The “echo chamber” around social media, the propagation of fake news, the threats to cybersecurity: all of these relate to our “techno-political literacy.” When we depend on a technology, we render ourselves vulnerable to its exploitation. As the perpetual “forward march of progress” in technology continues, we are challenged to understand new developments as they affect our liberty, communication, and access to information. The more these technological developments remain underneath the “black box” veil, the more we must apply the tenets of the Critical Engineer: to “expose its inner workings” and “reconstruct user-constraints and social action through means of digital excavation.”

Lucy Suchman articulates the importance of “Critical Technical Practice, in which attention to the rhetorics and technologies through which a field constructs its research objects becomes an integral part of its research practice” (Suchman, Human-Machine Reconfigurations: Plans and Situated Actions, 2nd Edition, 2007). As Critical Engineers and Practitioners, we must be self-aware and critical of our own rhetoric surrounding the technology we develop and work with, thus continually unmasking the “black box” and refusing to become the robots of our own design.

anson-lookingoutwards07

The project I’ve chosen is The Architecture of Radio, by Richard Vijgen. I’m very interested in projects which take the site as an important element in the experience. I’m less interested in visualizations or experiences that take place within a closed loop of user-to-device, and more engaged with hybrid works which connect directly to the physical environment / actual time / specific location of the user.

This project is a site-specific iPad app that visualizes what is normally invisible to the naked eye: the network of networks that surrounds us all the time, i.e., cell towers, wifi routers, and satellites for navigation, communication, and observation. The project was created using Three.js and the Ionic Framework (for apps), and uses GPS together with OpenCellID data to find cell towers within reach.

I believe that this project taps into an important gap in many people’s knowledge – understanding just how intertwined we are, how surveilled we are, and how we depend on this invisible network of information pathways to inhabit our contemporary always-on, always-connected, and always-observed society. As data privacy and cybersecurity become increasingly problematized, educating the public about the “invisible” structures around them, and how to navigate them safely, will become (has already become) paramount.

This sentiment is alluded to in Business Insider’s review, which the artist quotes in their documentation (and therefore presumably believes to be important): “Both beautiful and slightly disturbing.”

The physical act of holding up an iPad in public space is a bit ridiculous, though, so I think the project could consider alternate, more seamless ways of engaging with the information. To this end I would suggest augmented or mixed reality, specifically the Hololens, though to posit that the Hololens is a “seamless” or non-intrusive experience is a fallacy. If Magic Leap releases what it claims to be developing, this experience would fit right in.

Keali-LookingOutwards09

Theo, Emily, and Nick’s works, and in particular Connected Worlds, have captivated me since the day I came across them; seriously, I was such a fanboy of Connected Worlds. By chance I came across their Eyeo talk to write about for my past LookingOutwards and I could not have been more grateful for that discovery.

It is December 2 and they just finished their presentation and I’m so giddy inside and I also lowkey want a picture with them but it’s okay.

Anyhow, my last project will be an attempt to make a similar virtual environment; personally I’ve never been as into VR or 3D, so I want the output to run purely on the computer, reminiscent of a videogame/computer-game environment. The user manipulation and interaction persists, though, so ideally it would be some inferior Connected Worlds… I admire the aesthetics and graphics of Design I/O’s works, and would like to achieve my own style and effects with a similarly sleek, endearing, and effective design. I also aim to execute a calm, serene environment: an innocent nature-scape.

Lumar Final process

See my context reflection here. It’ll give some background information on why I chose to explore this area.

[images: algorithmicdancetypeish 1-2]

[images: algorithmicdancetypeish 3-4]

I messed up and used popStyle() instead of popMatrix() when translating the ellipses on the z axis, so the translations were never undone between points.
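For reference, the fix is a matched pushMatrix()/popMatrix() pair around each translation; pushStyle()/popStyle() only save drawing state like stroke and fill, not transforms. A minimal sketch of the corrected pattern (drawPointAt is an illustrative name):

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  drawPointAt(width/2, height/2, 100);
}

// Bracket the z translation so it doesn't leak into the next point drawn.
void drawPointAt(float x, float y, float z) {
  pushMatrix();
  translate(0, 0, z);
  ellipse(x, y, 4, 4);
  popMatrix();
}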

[image: algorithmicdancetypeish 8]

I had calculated the hue of each point according to its relative position within the array of points, which is constantly being added to. This let the drawn form, no matter how many points it had, always span the full range of the rainbow. Unfortunately, presumably because every point’s hue had to be rescaled as the array grew, it slowed down so, so quickly.

So I just changed the mapping to the index mod 10,000. That said, sometimes it takes fewer than 10,000 points for the two rotating circles to complete the radial curve shape.
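The before and after, sketched; the “before” line is a reconstruction of the slow version, and the full code further below actually uses a modulus of 11,000 rather than 10,000:

// Before: hue depends on the current array size, so every point's color
// shifts (and has to be recomputed) as points keep getting added.
// float hue = map(i, 0, starArray.size(), 0, 100);

// After: hue is fixed at creation time from the index mod 10,000,
// so the rainbow repeats at a constant cost per point.
float hueFor(int index) {
  return map(index % 10000, 0, 9999, 0, 100);
}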

[image: algorithmicdancetypeish 12]

[screenshots: 2016-12-02]

[screenshots: 2016-12-04]

Mapping the zero-crossings of the derivative to the borders of the letter form; essentially, taking the z of each such point and mapping it to the x of the letters:

Ex: “B”

[image: AlgorithmicDanceTypeish 18]
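Restated compactly, this is what findIfMaxLetter() in the full code below does when it detects a turning point: collect the x values of letter-outline points at the matching y, and use one of them as the turning point’s target z (candidateZ is an illustrative name, not from the sketch itself):

// Gather candidate z values for a turning point at screen-space y:
// the x coordinates of letter-outline points whose offset y matches.
FloatList candidateZ(float y, RPoint[] letterPts) {
  FloatList zs = new FloatList();
  for (RPoint lp : letterPts) {
    if (y > lp.y - 1 + height/2 && y < lp.y + height/2) zs.append(lp.x);
  }
  zs.append(0); // fallback if the scanline misses the outline
  return zs;
}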

Some additional inspiration from Aman and Char: acetate printouts of the point cloud in many, many layers, mapped to the color pattern. Additional inspiration from Thomas Medicus’s Special Character 🙂, displayed below:

I changed some of the formulas to let the points map and move smoothly, and to reconnect so the form stays continuous. Slight patterns of color were adjusted in.

The issue is just that, since everything is mapped continuously and evenly throughout, the middle bar of the A is missing.

The letter “A”:

The letter ‘C’ above worked a lot better.

I really wanted the letter form to be more informed by its cycloid method of creation. Simply having a cloud of points doesn’t justify including the cycloid creation within the process.

This is why I wasn’t satisfied with this earlier iteration below:

For the future, what I want to do:

  • make the line of the circle disappear when you look at the letter form straight on (Golan’s suggestion; see the sketch after this list)
  • stereoscopic view in three.js for Google Cardboard
  • remapping in real-time to different words
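A sketch of the first bullet, assuming PeasyCam’s getRotations(); the exact fade mapping is a guess, not working code from this project:

// Fade the guide circle's stroke with camera yaw: invisible when the
// letter form is viewed straight on (yaw near zero), opaque side-on.
float circleAlpha(PeasyCam cam) {
  float yaw = cam.getRotations()[1]; // rotation about the y axis
  return map(abs(sin(yaw)), 0, 1, 0, 100);
}

The full Processing sketch as it stands: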
 
import geomerative.*;
import peasy.*;
//import cardboard.*;

// Declare the objects we are going to use, so that they are accessible from setup() and from draw()
RFont f;
RShape grp;
RPoint[] points;

PeasyCam cam;
DMachine DM;

float h2;
float w2;
float d2;
float CAPHEIGHT = 300;

// determines the main artboard size (radius)
float ArtboardRadius = 500;

// Animation for starting circles
float Radius1st = floor(random(ArtboardRadius * 0.2, ArtboardRadius * 0.5));
float Radius2nd = floor(random(ArtboardRadius * 0.2, ArtboardRadius * 0.5));
float speedModif1st = floor(random(3) + 1);
float speedModif2nd = floor(random(3) + 1);

// arm lengths
float armlength = (ArtboardRadius * 1.05) + floor(random(-75, 75));

// beginning locations of the drawing-arm circles and their speed
float n1, n2;
float nShift = radians(floor(random(45, 135)));
float nSpeed = 0.0005;

// a new layer for the drawing machine
PGraphics fDM;
ArrayList<Starpoint> starArray = new ArrayList<Starpoint>();
IntList inventoryOfEndpointi = new IntList();
int index = 0;
float letterXmax = 0;
float pointsArrayIndexOfMax = 0;

void setup() {
  size(1280, 720, P3D);
  //fullScreen(PCardboard.STEREO);
  inventoryOfEndpointi.append(0);

  // VERY IMPORTANT: always initialize the geomerative library in setup()
  RG.init(this);
  // Load the font file we want to use (it must be in the sketch's data folder), at size 400, left-aligned
  grp = RG.getText("F", "Comfortaa_Bold.ttf", 400, LEFT);

  d2 = dist(0, 0, w2, h2);
  colorMode(HSB, 100);
  cam = new PeasyCam(this, 100);
  cam.lookAt(650, 300, 0);
  cam.setMinimumDistance(50);
  cam.setMaximumDistance(1000);
  cam.setDistance(800);
  cam.setYawRotationMode();
  ortho();
  smooth();
  background(0);
  strokeCap(SQUARE); // was CORNER, which isn't a valid cap mode; SQUARE assumed

  n1 = radians(180);
  n2 = n1 + nShift;

  DM = new DMachine();
  fDM = createGraphics(width, height, P3D);

  starArray.add(new Starpoint(width/2, height/2, 0));
  RG.setPolygonizer(RG.UNIFORMLENGTH);
  RG.setPolygonizerLength(1);
  points = grp.getPoints();
  for (int i = 0; i < points.length; i++) { // map the letter data to the scale of the cycloid drawing
    points[i].y = points[i].y + 120;
  }
}

void keyPressed() {
  // dump the accumulated points as a pasteable array
  println("float[][] starArray={");
  for (int i = 0; i < starArray.size(); i++) {
    println("{" + starArray.get(i).x + "," + starArray.get(i).y + "," + starArray.get(i).z + "},");
  }
  println("};");
}
float noiseInput = 0;

void draw() {
  background(0);

  // draw the artboard (big circle)
  noFill();
  // TODO: make the stroke opacity disappear when the view turns to the side
  stroke(100);
  strokeWeight(1);
  ellipse(width/2, height/2, ArtboardRadius*2, ArtboardRadius*2);

  // draw the initial points (begin points) and compute the pen position
  DM.draw1stBeginPoint(n1, Radius1st, speedModif1st);
  DM.draw2ndBeginPoint(n2, Radius2nd, speedModif2nd);
  DM.CalculateEndPoint(armlength);

  float distances = dist(DM.tX, DM.tY, width/2, height/2); // points further out should take z's from letter points further away
  float zz = 0; // initial z value of all points

  stroke(100);
  line(DM.tX, DM.tY, 0, DM.tX, DM.tY, zz);
  int sizeOfArray = starArray.size() % 11000;
  starArray.add(new Starpoint(DM.tX, DM.tY, sizeOfArray));

  for (int i = 1; i < starArray.size()-1; i++) {
    Starpoint point = starArray.get(i);
    point.render();
    if (!(point.hasBeenChecked)) {
      // check whether this is a point where direction changes drastically, if it hasn't been checked already
      point.findIfMaxLetter(starArray.get(i-1), starArray.get(i+1), i, noiseInput);
      noiseInput += .0001;
    } else if (point.hasBeenChecked && (inventoryOfEndpointi.get(inventoryOfEndpointi.size()-1) > i)) {
      if (!(point.mapped)) {
        // find the pair of endpoints this point sits between and interpolate its target z
        for (int j = 1; j < inventoryOfEndpointi.size(); j++) {
          int indexOfTarget = inventoryOfEndpointi.get(j);
          int indexOflastEndpoint = inventoryOfEndpointi.get(j-1);
          if ((i < indexOfTarget) && (i > indexOflastEndpoint)) {
            point.mappingToEndpoints(i, starArray.get(indexOfTarget).endZ, indexOfTarget, indexOflastEndpoint, starArray.get(indexOflastEndpoint).endZ); // determines the endZ
          }
        }
      } else if (point.mapped) {
        point.moveTowardsEndZ();
      }
    }
  }
}
class Starpoint {
  boolean hasBeenChecked = false;
  boolean endCurve = false;
  boolean mapped = false;
  float x, y, z, endZ;
  float hue;

  Starpoint(float xx, float yy, int arraySize) {
    hue = map(arraySize, 0, 10999, 0, 100);
    x = xx;
    y = yy;
    z = 0;
  }

  void render() {
    pushMatrix();
    // color is mapped according to where it is in the array, to ensure a rainbow no matter how many points there are
    stroke(hue, 100, 100);
    translate(0, 0, z);
    point(x, y, 2);
    popMatrix();
  }

  void moveTowardsEndZ() {
    z = 0.99*z + 0.01*endZ; // ease toward the target z
  }

  void mappingToEndpoints(int currentPointIndex, float targetZ, int targetEndpointindex, int lastEndpointindex, float lastZ) {
    endZ = map(currentPointIndex, lastEndpointindex, targetEndpointindex, lastZ, targetZ);
    mapped = true;
  }

  void findIfMaxLetter(Starpoint p1, Starpoint p2, int index, float noisy) {
    if ((p1.x >= this.x && p2.x >= this.x) || (p1.x <= this.x && p2.x <= this.x)
      || (p1.y >= this.y && p2.y >= this.y) || (p1.y <= this.y && p2.y <= this.y)) {
      // clearly it's an edge (a turning point); go search for an appropriate z
      this.endCurve = true;
      this.mapped = true;
      addEndCurveIndexValueToGlobalArraylist(index);

      FloatList inventoryZ = new FloatList(); // stores the multiple possible z's for later
      for (int j = 0; j < points.length; j++) {
        float ltrY = points[j].y; // they have to match y values, generally
        if (this.y > ltrY - 1 + height/2 && this.y < ltrY + height/2) {
          inventoryZ.append(points[j].x); // if it matches y, add the letter point's x as a possible z value
        }
      }
      inventoryZ.append(0);

      // which of the possible z's will it take from the inventory?
      //int whichZ = floor(random(0, inventoryZ.size()-1));
      int whichZ = floor(map(noise(noisy), 0, 1, 0, inventoryZ.size()));
      this.endZ = inventoryZ.get(whichZ);
    } else {
      float radialDistance = sq(p1.x - width/2) + sq(p1.y - height/2);
      float radialDistance1 = sq(this.x - width/2) + sq(this.y - height/2);
      if (radialDistance >= radialDistance1) { // is it bigger farther away?
        // if it is, is the next one smaller? (this check was never finished; radialDistance2 is unused)
        float radialDistance2 = sq(p2.x - width/2) + sq(p2.y - height/2);
      }
    }
    this.hasBeenChecked = true;
  }
}

void addEndCurveIndexValueToGlobalArraylist(int indexValue) {
  inventoryOfEndpointi.append(indexValue);
}
//////////////////
class DMachine {
  float MAxx1, MAyy1;
  float MAxx2, MAyy2;
  float tX, tY;

  float anim;

  DMachine() {
    anim = 0;
  }

  // Note: the underscored parameters below are unused; these methods read
  // the globals n1/n2, Radius1st/Radius2nd, and the speed modifiers directly.
  void draw1stBeginPoint(float n1_, float Radius1st_, float speedModif1st_) {
    float MAx1, MAy1;
    MAx1 = width/2 + ArtboardRadius * cos(n1);
    MAy1 = height/2 + ArtboardRadius * -sin(n1);
    stroke(60);
    strokeWeight(1);
    fill(0);
    ellipse(MAx1, MAy1, Radius1st, Radius1st);

    // resets the angle
    n1 -= nSpeed;
    if (degrees(n1) < 0) {
      n1 = radians(360);
    }

    noStroke();
    fill(255);
    MAxx1 = MAx1 + cos(anim * speedModif1st) * Radius1st/2;
    MAyy1 = MAy1 + sin(anim * speedModif1st) * Radius1st/2;
    anim += 0.025;
    fill(60);
    ellipse(MAxx1, MAyy1, 5, 5);
  }

  void draw2ndBeginPoint(float n2_, float Radius2nd_, float speedModif2nd) {
    float MAx2 = width/2 + ArtboardRadius * cos(n2);
    float MAy2 = height/2 + ArtboardRadius * -sin(n2);
    stroke(60);
    strokeWeight(1);
    fill(0);
    ellipse(MAx2, MAy2, Radius2nd, Radius2nd);

    // resets the angle
    n2 -= nSpeed;
    if (degrees(n2) < 0) {
      n2 = radians(360);
    }

    noStroke();
    fill(255);
    MAxx2 = MAx2 + sin(anim * speedModif2nd) * Radius2nd/2;
    MAyy2 = MAy2 + cos(anim * speedModif2nd) * Radius2nd/2;
    anim += 0.025;
    fill(60);
    ellipse(MAxx2, MAyy2, 5, 5);
  }

  void CalculateEndPoint(float armlength_) {
    // "crazy" math stuff here
    // only look if you dare!

    stroke(60);
    fill(60);

    // distance between the two main points
    float a = dist(MAxx1, MAyy1, MAxx2, MAyy2);
    line(MAxx1, MAyy1, MAxx2, MAyy2);

    // the mid-point
    float a2X = lerp(MAxx1, MAxx2, 0.5);
    float a2Y = lerp(MAyy1, MAyy2, 0.5);
    ellipse(a2X, a2Y, 5, 5);

    // the armlength "compensator", aka the triangle-height calculator
    float fD1 = abs(sq(armlength) - sq(a/2));
    float fD2 = sqrt(fD1);

    // "compensation" angle
    float alpha = asin(abs(MAyy1 - MAyy2) / a);

    if (MAyy1 - MAyy2 < 0 && MAxx1 - MAxx2 < 0) {
      // works in between 180-270
      // a is \ angle
      tX = a2X + fD2 * cos(-PI/2 + alpha);
      tY = a2Y + fD2 * sin(-PI/2 + alpha);
    } else if (MAyy1 - MAyy2 < 0 && MAxx1 - MAxx2 > 0) {
      // works in between 90-180
      // a is / angle
      tX = a2X + fD2 * cos(PI/2 - alpha);
      tY = a2Y + fD2 * sin(PI/2 - alpha);
    } else if (MAyy1 - MAyy2 > 0 && MAxx1 - MAxx2 > 0) {
      // works in between 0-90
      // a is \ angle
      tX = a2X + fD2 * cos(PI/2 + alpha);
      tY = a2Y + fD2 * sin(PI/2 + alpha);
    } else if (MAyy1 - MAyy2 > 0 && MAxx1 - MAxx2 < 0) {
      // works in between 270-360
      // a is / angle
      tX = a2X + fD2 * cos(-PI/2 - alpha);
      tY = a2Y + fD2 * sin(-PI/2 - alpha);
    }

    // final lines
    line(MAxx1, MAyy1, tX, tY);
    line(MAxx2, MAyy2, tX, tY);
  }
}

aliot-object

I wanted to make unnecessary tweaks to an everyday object that would record and post something banal about its use. I came up with many, many ideas (a motion detector for a book so that a tweet would be produced for every page turned, a motion detector for eyeglasses that would post an Instagram picture every time you blinked, a pressure detector for a shoe that would tweet every 100 steps). The concept behind this piece would be to (perhaps pedantically) point out the banality and self-centered nature of posts on our social media, as well as the hilarious frivolity of the internet of things taken to an extreme. At this point such statements are almost cliché, I realize, but I wanted to do it anyway.

I finally settled on creating a coaster that would make a facebook post every time a user picked up their drink. The post would document the place and time as well as an optimistic health-related message about staying hydrated.


aliot-lookingoutwards08

A machine that judges how sick your ollies are. This is actually a really simple piece. The artist used a gyroscope as well as an accelerometer to judge the angle and displacement of a skateboard in the x, y, and z directions. This work was so compelling to me because it was simple and snarky. Although there are definitely things I would change about it (instead of making a carnival game of it, I would have installed it in a skate park where people could use the board; the skaters would then be judged loudly and harshly by the machine in the presence of their peers), ultimately I find the idea of machine judgment of subjective things really fascinating. At some point, technically speaking, a machine is better equipped than any fallible human to judge certain things… but is this always true? What qualifies as an ollie? As a “good” ollie? Is it some quantifiable motion, or is it pizazz and attitude? Can machines judge your “cool factor”? Can they make you self-conscious?

http://digg.com/video/ollie-machine

aliot-final-proposal

I would like to continue the project I was working on over the summer, here in the Studio for Creative Inquiry. I’m planning on getting the HoloLens app (which receives Kinect data via OSC) to coordinate the HoloLens 3D space with the Kinect 3D space, so the app can be used to attach virtual objects to humans in the real space.
