Category: LastProject

Antar-LastProject

Generative Posters 

All course code is here and here.

For my final project I wanted to come out feeling extremely confident in typography and colour generation that could be used in a practical context. A significant portion of the generative work I've seen creates interesting art while playing with type, but I've rarely seen it used to create actual communication pieces. When I was first introduced to generative art and design, my peers and I were apprehensive; it felt as though we were being told that code could replace our jobs as designers. However, through this final project I finally understood that creating work with generative typography is no different from using tools that are already familiar to designers, such as the Adobe suite.
My favourite project of the semester was the book project, because I was given new tools to explore a different way of thinking about type manipulation. Being able to easily create image and pattern through type is a powerful skill for a communication designer, and I now feel more prepared to create pieces with greater visual complexity and depth. As reflected in my personal work, I am very fond of repetition and intricate pattern details. Through the book project, and this final project, I feel that I can now effortlessly create patterns and lengthy repetitions that would normally take hours by hand. I also feel comfortable writing multiple programs to create richer work, which lets me play to the strengths of each language.

(Above) Some of Yuan Guo’s poster work that inspired me.

When I began this project I looked to the work of two other designers for inspiration for where to start and what goals to set for myself. The first was Yuan Guo, who is very talented when it comes to colour and form relationships. His poster work was a great source of inspiration, and I began to wonder which elements from his work I could recreate with code. The first element I was interested in was his effective use of fluorescent colours and neon gradients. That became the first goal of my project: to create generative neon gradients that would translate well from screen to print. The second goal I set for myself was to create my own style of glitching. I've seen this design trend overused and misused very often, but I still think it can be a tasteful way to create texture if used effectively. In Guo's glitch art calendar, I think he has overused the effect, as many designers and artists do, to the point where some pieces are illegible. In terms of creating my own glitching, I first attempted to use pixel manipulation, but then discovered that I would not be able to export my work as a PDF, which is vector based. I ended up creating fake "vector pixels" (single points) instead. I selected portions of the gradients and replaced rows of "pixels" with the colour of a pixel in close proximity. I think this created a subtle texture rather than the jarring distortion that is commonly seen in glitch art.
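The row-replacement idea is easy to sketch. Here is my own reconstruction of the approach in Processing (the spacing, colours, and glitch probability are invented, not taken from the project):

// A sketch of the "vector pixel" glitch: draw the gradient as a grid of
// single points, then recolour occasional rows with the colour of a row
// nearby. (A reconstruction of the approach, not the project's code.)
int step = 4; // spacing of the fake "vector pixels"

void setup() {
  size(600, 800);
  noLoop();
}

color gradientColor(float y) {
  return lerpColor(color(255, 0, 180), color(0, 255, 200), y / height);
}

void draw() {
  background(255);
  strokeWeight(step);
  for (int y = 0; y < height; y += step) {
    boolean glitched = random(1) < 0.08;                  // glitch roughly 8% of rows
    float sourceY = max(0, y - random(step, 12 * step));  // nearby row to copy from
    for (int x = 0; x < width; x += step) {
      stroke(glitched ? gradientColor(sourceY) : gradientColor(y));
      point(x, y); // one "vector pixel"
    }
  }
}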

In addition to Guo, I looked to my professor Kyuha (Q) Shim for typography inspiration. Q's site Code and Type was a good source of examples of coded type that plays with form. However, Q's work is node-based, which I am unfamiliar with. Using Processing, I first attempted to write type on complex wave paths, then tried to manipulate the type the same way I did with the glitching, by manipulating the pixels. Later Golan introduced me to Geomerative, which made typography manipulation effortless. I was then able to create subtle generative titles that mimicked the style of the glitch art.

(Above) Some of Q’s coded typography play that inspired me.
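For anyone curious, the core Geomerative moves are compact. A minimal sketch under my own assumptions (the font file, sampling density, and jitter values are placeholders), separate from the project code:

// Minimal Geomerative sketch: convert a string to outline points, then
// jitter some of them for a glitch-style title. (Font file and jitter
// amounts are placeholders, not from the original project.)
import geomerative.*;

RFont font;

void setup() {
  size(800, 300);
  RG.init(this);
  font = new RFont("FreeSans.ttf", 120, CENTER); // placeholder font file
  RG.setPolygonizer(RG.UNIFORMLENGTH);
  RG.setPolygonizerLength(3); // density of the sampled outline points
  noLoop();
}

void draw() {
  background(255);
  RPoint[] pts = font.toGroup("GLITCH").toPolygonGroup().getPoints();
  stroke(0);
  strokeWeight(2);
  translate(width/2, height/2 + 40);
  for (RPoint p : pts) {
    // occasionally shift a point sideways, echoing the glitch texture
    float dx = (random(1) < 0.05) ? random(-10, 10) : 0;
    point(p.x + dx, p.y);
  }
}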

After creating the backgrounds, which contained the gradients, the glitch texture, and the title, I used Illustrator to convert the PDFs to TIFFs at the appropriate size for my InDesign file. I was then able to use basil.js to create the growing 60-212 title, as well as the "matrix" type treatment in the background. The "matrix" pattern takes the text from the course description paragraph, splits the page into a grid of text boxes just large enough for one character, places one character from the paragraph into each text box, and then arbitrarily shifts each baseline within a given range.
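The same grid-and-baseline logic, sketched here in plain Processing rather than basil.js (the cell size, shift range, and stand-in paragraph text are mine):

// The "matrix" treatment: one character per grid cell, each with a
// random baseline shift. (The original used basil.js inside InDesign;
// the paragraph text here is a stand-in.)
String paragraph = "Interactivity and computation for creative practice."; // stand-in text
int cell = 14; // a text box just large enough for one character

void setup() {
  size(600, 800);
  textSize(cell - 2);
  textAlign(CENTER, CENTER);
  fill(0);
  noLoop();
}

void draw() {
  background(255);
  int i = 0;
  for (int y = cell; y < height; y += cell) {
    for (int x = cell/2; x < width; x += cell) {
      char c = paragraph.charAt(i % paragraph.length());
      // arbitrary baseline shift within a given range
      float shift = random(-cell * 0.4, cell * 0.4);
      text(c, x, y + shift);
      i++;
    }
  }
}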

(Above) The wall behind my desk in my studio, showing the three generative posters with other work, including a page from the generative book and a print from Brandon Ngai. Printing on an Epson P800 inkjet proved effective, as the colours turned out pleasantly neon without being too harsh.

arialy-LastProject

Ever since I played The Beginner's Guide, a PC game on Steam, I have seen games as a more accessible medium for making art. Though I've played plenty of games I would consider meaningful, they were very high-production, in-depth games. The Beginner's Guide includes a series of short games that each convey something even without a true storyline. Unity, a free game engine, seemed like the perfect entry point to experiment with making a basic "game." My main goal was to make something in Unity and get past the entry learning curve, adding Unity to the list of tools I'm familiar with.

Since I thought it would be difficult to build complicated visuals in this short time, light and sound became the main components of the project. My concept came about while I was listening to a song that felt really nostalgic. That nostalgia made me think about the close relationship between emotions, memory, and sound. I wanted to create a space where these sound memories could live. Each orb of sound in the space is related to a specific memory I have. None of the sounds were made for this project; some are found online while others are ripped from existing videos I have.

Code can be found here
Explore the project here

hizlik- LastProject

Together Again

Click here to play. Click here to see the GitHub repo.

Do you ever wish you could turn back time? Fix what you've broken, see those you've lost? Together Again is a simple game with a unique mechanic: the goal is to reunite the square and the circle, to make them one and the same. Simply use your mouse to tap or bounce the falling square towards the circle. The faster you hit the square, the harder the hit and the faster the square moves. As the levels progress, gravity becomes stronger, and it gets harder and harder to be together again. In this game, though, your wish has come true: you can travel back in time to fix your mistakes, plan out your actions through trial and error, and hopefully succeed. Your past attempts show as faded versions in the background, and the more you fail, the more crowded and distracting the screen becomes, making it harder to succeed. There is no game over. There is just time.

This game is related to my previous project, hizlik-Book (Year One), in that it is about the unexpected and unbelievable event of a breakup that occurred while the book was being printed. However, like the book, the correlation of my projects to my personal life is left ambiguous and often unnoticed. Specifically, the square represents a male figure, the circle a female figure, and green is her favorite color.

The game was created in p5.js, with the code provided below.


var c; // canvas
var cwidth = window.innerWidth;
var cheight = window.innerHeight;
var nervous; var biko; // fonts

var gravity = 0.3;
var mouseBuffer = -3;
var bounce = -0.6;
var p2mouse = [];
var boxSize = 50;

var gameState = "menu";
var vizState = "static";
var transitionVal = 0;
var level = 1;
var boxState = "forward";
var offscreen = false;
var offscreenCounter = 0;
var keyWasDown = false;
var gameCounter = 0;

var currBox = null;
var currCirc = null;
var ghosts = [];

function setup() {
	c = createCanvas(cwidth, cheight);
	background(255);
	frameRate(30);
	noCursor();
	nervous = loadFont("Nervous.ttf");
	biko = loadFont("Biko_Regular.otf");
}

window.onresize = function() { 
	cwidth = window.innerWidth;
	cheight = window.innerHeight;
	c.size(cwidth, cheight);
}

function draw() {
	background(255);

	// splash menu
	if(gameState == "menu") {
		noStroke();
		fill(121,151,73)
		textFont(nervous);
		textSize(min(cwidth,cheight)*.1);
		textAlign(CENTER,CENTER);
		text("Together Again", cwidth/2, cheight/2);

		fill(218,225,213);
		textFont(biko);
		textSize(min(cwidth,cheight)*.03);
		text("hold SPACE to go turn back time", cwidth/2, cheight-cheight/5);

		if(keyIsDown(32)) { vizState = "transition"; }
		if(keyIsDown(68)) { gravity = 1; }

		if(vizState == "transition") {
			transitionVal += 10;
			fill(255,255,255,transitionVal);
			rect(0,0,cwidth,cheight);
			if(transitionVal>255) {
				gameState = "game";
			}
		}
	}

	// actual game
	if(gameState == "game") {
		if(currBox == null) currBox = new Box(null);
		if(currCirc == null) currCirc = new Circle();

		if(vizState == "transition") {
			currBox.draw();
			currCirc.draw();
			transitionVal -= 10;
			fill(255,255,255,transitionVal);
			rect(0,0,cwidth,cheight);
			if(transitionVal < 0 && !keyIsDown(32)) {
				vizState = "static";
			}
		}
		else {
			// check if space is being pressed
			if(keyIsDown(32)) {
				boxState = "rewind"
				if(!keyWasDown) { 
					ghosts.push(currBox);
					keyWasDown = true;
				}
			}
			else if(keyWasDown) {
				keyWasDown = false;
				boxState = "forward";
				var prevCount = 1;
				var prev = null;
				while(prev == null) {
					// console.log(ghosts.length-prevCount);
					prev = ghosts[ghosts.length-prevCount].getCurrPos();
					prevCount++;
					if(prevCount > ghosts.length) {
						for(var i=0; i < ghosts.length; i++) {
							// … (this loop body, the code that restores the box
							// from its previous run, and the "forward"-state
							// update logic were garbled in publishing; see the
							// GitHub repo linked above for the full source)
						}
					}
				}
			}

			if(boxState == "forward") {
				if(transitionVal > 255) { // (condition reconstructed)
					console.log("new level");
					level++;
					gravity = constrain(gravity+.2, 0, 3);
					currBox = new Box([random(cwidth-boxSize), random(cheight/2), 0, 0]);
					// currBox = new Box(null);
					currCirc = new Circle();
					ghosts = [];
					vizState = "transition";
					boxState = "forward"
					transitionVal = 255;
				}
				else if(transitionVal-150 > 150) {
					currBox.draw();
					currCirc.draw();
					fill(255,255,255,transitionVal-150);
					rect(0,0,cwidth,cheight);
				}
				else {
					currBox.draw();
					currCirc.draw();
				}
			}
			else if(boxState == "rewind") {
				gameCounter--;
				if(gameCounter>0) {
					for(var i=0; i < ghosts.length; i++) {
						// … (the rest of the rewind loop, the remainder of draw(),
						// and the beginning of the Box object, including its init()
						// and the top of update(), were garbled in publishing; the
						// listing resumes below inside Box.update()'s vertical
						// mouse-collision handling)
					}

		// ========== UPDATE VERTICAL ========== //
		if (collision != null && (collision[2]=="top" || collision[2]=="bottom")) {
			var vm = (mouseY - pmouseY); // velocity of the mouse in y direction
			if(collision[2] == "top") {
				if(vm >= 0) { this.vy += constrain(vm, -40, 40); }
				else { this.vy *= bounce; }
				this.pos[1] = mouseY+mouseBuffer;
			}

			// ========== UPDATE HORIZONTAL ========== //
			var hpos = map(mouseX, this.pos[0], this.pos[0]+boxSize, -1, 1);
			this.vx += 10 * bounce * hpos;
		}

		// update horizontal bounce
		if (collision != null && (collision[2]=="left" || collision[2]=="right")) {
			var vm = (mouseX - pmouseX); // velocity of the mouse in x direction
			this.vx += constrain(vm, -40, 40);

			if(collision[2] == "left") {
				if(this.vx > 0) { this.pos[0] = mouseX; }
				else { this.vx *= bounce; }
				this.pos[0] = mouseX;
			}
			if(collision[2] == "right") {
				if(this.vx < 0) { this.pos[0] = mouseX-boxSize; }
				else { this.vx *= bounce; }
				this.pos[0] = mouseX-boxSize;
			}
		}

		// update position
		if(this.vx > 20*gravity || this.vx < -20*gravity) { this.vx *= 0.85; }
		if(this.vy > 35*gravity || this.vy < -35*gravity) { this.vy *= 0.85; }
		this.vy = constrain(this.vy + gravity, -30, 50);
		this.pos[0] += this.vx;
		this.pos[1] += this.vy;
		
		//debug
		if(this.pos[1]-boxSize>cheight || this.pos[0]+boxSize<0 || this.pos[0]>cwidth) {
			offscreen = true;
		}
		this.history.push([this.pos[0], this.pos[1], this.vx, this.vy]);
		this.currIndex++;
	}

	this.draw = function() {
		noStroke();
		fill(57,67,7);
		if(!this.active)
			fill(239,240,235);
		if(this.currIndex >= 0 && this.currIndex < this.history.length) {
			if(boxState == "reunited") { 
				this.corner = constrain(this.corner+1, 0, 18); 
				var r = map(this.corner, 0, 18, 57, 121);
				var g = map(this.corner, 0, 18, 67, 151);
				var b = map(this.corner, 0, 18, 7, 73);
				fill(r,g,b);
			}
			rect(this.history[this.currIndex][0], this.history[this.currIndex][1], boxSize, boxSize, this.corner);
		}
	}

	this.getMouseCollisionPoint = function() {
		var top = new Line(this.pos[0],this.pos[1],this.pos[0]+boxSize,this.pos[1])
		var left = new Line(this.pos[0],this.pos[1],this.pos[0],this.pos[1]+boxSize)
		var bottom = new Line(this.pos[0],this.pos[1]+boxSize,this.pos[0]+boxSize,this.pos[1]+boxSize)
		var right = new Line(this.pos[0]+boxSize,this.pos[1],this.pos[0]+boxSize,this.pos[1]+boxSize)
		var mouse = new Line(mouseX, mouseY+mouseBuffer, pmouseX, pmouseY+mouseBuffer);
		var coords = null;
		if(pmouseX <= mouseX) {
			var result = getMouseCollision(mouse, left);
			if(result != null) {
				result.push("left");
				return result;
			}
		}
		if(pmouseX >= mouseX) {
			var result = getMouseCollision(mouse, right);
			if(result != null) {
				result.push("right");
				return result;
			}
		}
		if(pmouseY <= mouseY) {
			var result = getMouseCollision(mouse, top);
			if(result != null) {
				result.push("top");
				return result;
			}
		}
		if(pmouseY >= mouseY){
			var result = getMouseCollision(mouse, bottom);
			if(result != null) {
				result.push("bottom");
				return result;
			}
		}
		if(this.vx < 0 && 
			mouseX >= this.pos[0] &&
			pmouseX < this.pos[0]-this.vx &&
			mouseY+mouseBuffer >= this.pos[1] && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "left"];
		}
		if(this.vx > 0 && 
			mouseX <= this.pos[0]+boxSize &&
			pmouseX > this.pos[0]+boxSize-this.vx &&
			mouseY+mouseBuffer >= this.pos[1] && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "right"];
		}
		if(this.vy < 0 && 
			mouseY+mouseBuffer >= this.pos[1] &&
			pmouseY+mouseBuffer < this.pos[1]+this.vy &&
			mouseX >= this.pos[0] && 
			mouseX <= this.pos[0]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "top"];
		}
		if(this.vy > 0 && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize &&
			pmouseY+mouseBuffer >= this.pos[1]+boxSize-this.vy &&
			mouseX >= this.pos[0] && 
			mouseX <= this.pos[0]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "bottom"];
		}
		return null;
	}

	this.rewind = function() {
		this.currIndex--;
		this.active = false;
	}

	this.init(startVals);
}

function Circle() {
	this.pos = [];
	this.ring = 6;
	this.corner = 25; //18 mid-point

	this.init = function() {
		this.pos = [random(cwidth), random(cheight)];
	}

	this.pulse = function() {
		this.ring-= .2;
		if(this.ring<0.5 && boxState != "reunited") {
			this.ring = 6;
		}
		else if(this.ring<0 && boxState == "reunited") {
			this.ring = 0;
		}
	}

	this.draw = function() {
		this.pulse();
		fill(176, 196, 134);
		strokeWeight(this.ring);
		stroke(239,240,235);
		// ellipse(this.pos[0], this.pos[1], boxSize, boxSize);
		if(boxState == "reunited") { 
			noStroke();
			this.corner = constrain(this.corner-1, 18, 25); 
			var r = map(this.corner, 18, 25, 121, 176);
			var g = map(this.corner, 18, 25, 151, 196);
			var b = map(this.corner, 18, 25, 73, 235);
			fill(r,g,b);
		}
		rect(this.pos[0]-boxSize/2, this.pos[1]-boxSize/2, boxSize, boxSize, this.corner)
	}

	this.init();
}

function lostMessage() {
	noStroke();
	fill(218,225,213);
	textFont(nervous);
	textSize(min(cwidth,cheight)*.08);
	text("lost your way", cwidth/2, cheight/2);
	textFont(biko);
	var sec = "seconds";
	textSize(min(cwidth,cheight)*.05);
	if(round(offscreenCounter/30)==1) sec = "second";
	text("for "+round(offscreenCounter/30)+" "+sec, cwidth/2, cheight/2+cheight/10);
	textSize(min(cwidth,cheight)*.03);
	text("hold SPACE to go turn back time", cwidth/2, cheight-cheight/5);
}

function getMouseCollision(a, b) {
	var coord = null;
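	// standard segment-to-segment intersection: ua and ub are the fractional
	// positions of the crossing along a and b; it counts as a hit only when
	// both lie strictly inside (0, 1)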
	var de = ((b.y2-b.y1)*(a.x2-a.x1))-((b.x2-b.x1)*(a.y2-a.y1));
	var ua = (((b.x2-b.x1)*(a.y1-b.y1))-((b.y2-b.y1)*(a.x1-b.x1))) / de;
	var ub = (((a.x2-a.x1)*(a.y1-b.y1))-((a.y2-a.y1)*(a.x1-b.x1))) / de;
	if((ua > 0) && (ua < 1) && (ub > 0) && (ub < 1)) {
		var x = a.x1 + (ua * (a.x2-a.x1));
		var y = a.y1 + (ua * (a.y2-a.y1));
		coord = [x, y];
	}
	return coord;
}

function Line(x1, y1, x2, y2) {
	this.x1 = x1;
	this.y1 = y1;
	this.x2 = x2;
	this.y2 = y2;
}

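// circle-vs-square overlap test: distX/distY are the center-to-center
// distances; the final check handles the square's rounded corner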
function areReunited(box, circle) {
	var distX = Math.abs(circle.pos[0] - box.pos[0] - boxSize / 2);
    var distY = Math.abs(circle.pos[1] - box.pos[1] - boxSize / 2);

    if (distX > (boxSize / 2 + boxSize/2)) {
        return false;
    }
    if (distY > (boxSize / 2 + boxSize/2)) {
        return false;
    }

    if (distX <= (boxSize / 2)) {
        return true;
    }
    if (distY <= (boxSize / 2)) {
        return true;
    }

    var dx = distX - boxSize / 2;
    var dy = distY - boxSize / 2;
    return (dx * dx + dy * dy <= (boxSize/2) * (boxSize/2)); // distance to corner vs. radius, squared
}

cambu-last

My last project for this class shifted many times as I realized the limits of my capabilities and, quite honestly, left me with a trail of newly minted skills instead of a clearly defined project. This blog post is the tale of that trail of wandering…

I started off with the intention of continuing my explorations in tangible computing with Guodu, but we eventually scrapped this plan. Instead, I decided I would try to learn the skills necessary to add 'location awareness' to a project I'd worked on in another class. To do this, I would need to learn the following at a bare minimum:

  • Soldering and making basic circuits [learning resource]
  • some level of Arduino programming
  • RFID tag hardware and software [learning resource], incl. reading from and writing to RFID tags
  • wireless communication between computers and Arduino
  • How to control physical devices (fans, lights, etc.) with an Arduino [learning resource]

Before I had the Adafruit RFID Shield, I decided to explore another RFID reader: the Phidget 1023 RFID tag reader (borrowed from IDeATe). After extensive work, though, I found I could only control it via a USB host device. I spent a night exploring a Raspberry Pi approach wherein I would script control of the Phidget reader via Processing on the Pi. I learned how to flash a Pi with a Processing image, but driver issues with the Phidget ultimately doomed this approach.

I then moved back to an Arduino approach, which required learning physical computing basics: how to solder, how to communicate with the Arduino board over serial in the terminal ('screen tty'), and how to understand baud rates, PWM, digital vs. analogue in/out, and more. The true highlight of my Arduino adventure was triggering a physical lamp via a digital RFID trigger:

All that said, at one point I realized that extending my previous project the way I intended was impossible in the time given. At that point, I completely shifted gears… This new direction was based on a few inspirations:

  1. Golan’s BlinkyTapes
  2. shiftr.io's Physical Computing Trailer
  3. Noodl’s External Hardware and MQTT Guide

My next goal was to control physical hardware through some kind of digital control. To achieve this, I used BlinkyTape's Processing library to render MQTT messages sent through shiftr.io from Noodl's slider modules. See the video below:


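For reference, the Processing side of such a pipeline can be quite small. A minimal sketch, assuming the Processing MQTT library that shiftr.io documents (the broker credentials, topic name, and 0 to 100 slider range are placeholders):

// Receive slider values over MQTT and map them to an on-screen colour;
// the same handler could just as well drive a BlinkyTape. Credentials,
// topic, and slider range are placeholders.
import mqtt.*;

MQTTClient client;
float sliderValue = 0;

void setup() {
  size(400, 400);
  client = new MQTTClient(this);
  client.connect("mqtt://try:try@broker.shiftr.io", "processing-demo");
}

void draw() {
  background(0);
  fill(map(sliderValue, 0, 100, 0, 255), 100, 200);
  ellipse(width/2, height/2, 200, 200);
}

void clientConnected() {
  client.subscribe("/slider"); // the topic the Noodl slider publishes to
}

void messageReceived(String topic, byte[] payload) {
  sliderValue = float(new String(payload)); // payload is the slider value as text
}

void connectionLost() {
  println("connection lost");
}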
Conclusion

In the end, despite not pulling together a singular cohesive project, I learned a great deal about Arduino, hardware programming, soldering, and other tools for communication between hardware and software systems.

Jaqaur – Last Project

Motion Tracer

For my last project (no more projects–it’s so sad to think about), I decided to combine aspects from two previous ones: the motion capture project and the plotter project. For my plotter project, I had used a paintbrush with the Axidraw instead of a pen, and I really liked the result, but the biggest criticism I got was that the content itself (binary trees) was not very compelling. So, for this project, I chose to paint more interesting material: motion over time.
I came up with the idea to trace the paths of various body parts pretty early, but it wasn’t until I recorded BVH data and wrote some sample code that I could determine how many and which body parts to trace. Originally, I had thought that tracing the hands, feet, elbows, knees, and mid-back would make for a good, somewhat “legible” image, but as Golan and literally everyone else I talked to told me: less is more. So, I ultimately decided to trace only the hands and the feet. This makes the images a bit harder to decipher (as far as figuring out what the movement was), but they look better, and I guess that’s the point.
One more change I made from my old project was the addition of multiple colors. Golan advised me against this, but I elected to completely ignore him, and I really like how the multi-colored images turned out. I mixed different watercolors (my first time using watercolors since middle school art class) in a tray, and put those coordinates into my code. I added instructions between each line of color for the Axidraw to go dip the brush in water, wipe it off on a paper towel, and dip itself in a new color. I think that the different colored lines make the images a little easier to understand, and give them a bit more depth.

I tried to record a wide variety of motion capture data for this project (thanks to several more talented volunteers) including ballet, other dance, gymnastics, parkour, martial arts, and me tripping over things. Unfortunately, I had some technical difficulties the first night of MoCap recording, so most of that data ended up unusable (extremely low frame rate). The next night, I got much better data, but I discovered later that Breckle really is not good with upside down (or even somewhat contorted) people. This made a lot of my parkour/martial arts data come out a bit weird, and I had to select only the best ones to print. If I were to do this project again, I would like to record Motion Capture data in Hunt Library perhaps, or just with a slightly better system than the one I used for this project. I think I would get somewhat nicer pictures that way.

One more aspect of my code that I want to point out is a little portion that maps the data to an appropriate size for the paper. It runs at the beginning and finds the maximum and minimum x and y values reached by any body part. Then it scales the data to be as large as possible (without distorting its original proportions) while still fitting inside the paper's margins. This means that a really tall motion will be scaled down to the right height, with its width shrunk accordingly, and a really wide motion will be scaled by its width, with its height shrunk accordingly. I think this was an important feature.
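That fitting step boils down to one uniform scale factor, taken from whichever axis would overflow first. Here's a sketch of the idea (the function and variable names are mine, not from Jaqaur's code):

// Uniform fit-to-margins scaling, as described above: scale by the
// binding axis so proportions are preserved, then center on the paper.
float[] fitToPaper(float minX, float maxX, float minY, float maxY,
                   float paperW, float paperH, float margin) {
  float dataW = maxX - minX;
  float dataH = maxY - minY;
  float availW = paperW - 2 * margin;
  float availH = paperH - 2 * margin;
  // the smaller ratio is the binding constraint
  float s = min(availW / dataW, availH / dataH);
  // offsets that center the scaled drawing inside the margins
  float offX = margin + (availW - dataW * s) / 2 - minX * s;
  float offY = margin + (availH - dataH * s) / 2 - minY * s;
  return new float[] { s, offX, offY };
}

// usage: a data point (x, y) plots at (x * s + offX, y * s + offY)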

Here are some of the images generated by my code:

Above are three pictures of the same motion capture data: a pirouette. It was the first motion I painted, and it took me a few tries to get the paper’s size coordinates right, and to mix the paint dark enough.


That’s an image generated by a series of martial arts movements, mostly punches. Note the dark spot where some paint dripped on the paper; I think little “mistakes” like that give these works character, as if they weren’t painted by a robot.


This one was generated by a somersault. I think when he went upside down the data got a bit messed up, but I like the end result nonetheless.


Here is a REALLY messed up image that was supposed to be a front walkover. You can see her hands and feet on the right side, but I think when she went upside down, Breckle didn’t know what to do, and put her body parts all over the place. I don’t really consider this one part of my final series, and since I knew the data was messy, I wasn’t going to paint it, but I had paint/paper left over so I figured, why not? It’s interesting anyway.


I really like these. The bottom two are paintings of the same data, just with different paint, but all four depict the same dance move: a "Pas de Chat." I got three separate BVH recordings of the dancer doing the same move, and painted all of them. I think it's really interesting to note the similarities between them, especially the top two.

All in all, I am super happy with how this project turned out. I would have liked a little more variety in (usable) motion capture data, because I love trying to trace where every limb goes during a movement (you can see some of this in my documentation video above). I also think a more advanced way of capturing motion data would have been helpful, but what can you do?

Thanks for a great semester, Golan.

Here is a link to my code on Github: https://github.com/JacquiwithaQ/Interactivity-and-Computation/tree/master/Motion_Tracer

Anson+Kadoin-LastProject

For our last project, Kate and I both wanted to work with projections. We chose to augment a common, frequently used but often overlooked object: the water cooler. We first discussed how we wanted to create a "flooded" water effect, as if digital water were flowing from the tap. We attempted to work with the Box2D physics engine, but ended up creating our own particle system in Processing to generate the waterfall. We added some floating kids into the waterfall to create an unexpected sense of playfulness.

Here’s our video: 

To create the projection mapping, we used the Keystone library for Processing to correct the perspective from the projector throw. In the final documentation, we used Millumin to add further control over the warping of the projections, fitting the waterfall precisely to the water cooler tap and floor level. This allowed us to use bezier curves and segmenting to enhance our projection mapping accuracy.
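For reference, the basic Keystone corner-pin setup looks like this minimal sketch (the surface size and the stand-in drawing are arbitrary):

// Keystone's corner-pin workflow: render into an offscreen buffer, then
// draw that buffer through a surface whose corners can be dragged to
// match the projection target. (Sizes here are arbitrary.)
import deadpixel.keystone.*;

Keystone ks;
CornerPinSurface surface;
PGraphics offscreen;

void setup() {
  size(800, 600, P3D); // Keystone requires a P3D sketch
  ks = new Keystone(this);
  surface = ks.createCornerPinSurface(400, 300, 20);
  offscreen = createGraphics(400, 300, P3D);
}

void draw() {
  offscreen.beginDraw();
  offscreen.background(0);
  offscreen.fill(100, 160, 255);
  offscreen.ellipse(offscreen.width/2, offscreen.height/2, 100, 100); // stand-in content
  offscreen.endDraw();

  background(0);
  surface.render(offscreen);
}

void keyPressed() {
  if (key == 'c') ks.toggleCalibration(); // drag the corner pins into place
  if (key == 's') ks.save();              // persist the calibration
  if (key == 'l') ks.load();
}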

Here’s some code:

Water[] drops = new Water[500];
Mist[] bubbles = new Mist[500];
Ball[] balls = new Ball[200];

int numBalls = 200;
float spring = 0.05;
float gravity = 0.2;
float friction = -.1;

int numFrames = 81;  // The number of frames in the animation
int currentFrame = 0;
PImage[] images = new PImage[numFrames];
//ArrayList mistClouds;

float[] p1 = {237, 0};
float[] p2 = {320, 0};
float[] p3 = {320, 0};
float[] p4 = {320, 0};
float[] p5 = {320, 0};
float[] p6 = {320, 0};
float[] p7 = {320, 0};
float[] p8 = {320, 0};
float[] p9 = {337, 0};

int mouseR = 25;



void setup() {
  size(640, 640);

  //frameRate(30);
  //animation1 = new Animation("Witch Flying_2_", 81);
  //animation2 = new Animation("PT_Teddy_", 60);

  //for (int j = 0; j < numFrames; j++) {
  //  String imageName = "Witch Flying_2_" + nf(j, 5) + ".png";
  //  images[j] = loadImage(imageName);
  //}

  // … (the initialization loops for the drops, bubbles, and balls were
  // garbled when this post was published)
}

void draw() {
  background(0, 0, 0);
  //frameRate(30);

  //currentFrame = (currentFrame+1) % numFrames;  // Use % to cycle through frames
  /*int offset = 0;
   for (int x = -100; x < width; x += images[0].width) { 
   image(images[(currentFrame+offset) % numFrames], x, -20);
   offset+=2;
   image(images[(currentFrame+offset) % numFrames], x, height/2);
   offset+=2;
   }*/


  //------------------------------------------------------------//
  //                    draw pool 
  //------------------------------------------------------------//

  //fill(150, 180, 255);


  //pushMatrix();

  //beginShape();

  //translate(0, height/2);

  //curveVertex(p1[0], p1[1]);
  //curveVertex(p1[0], p1[1]);
  //curveVertex(p2[0], p2[1]);
  //curveVertex(p3[0], p3[1]);
  //curveVertex(p4[0], p4[1]);
  //curveVertex(p5[0], p5[1]);
  //curveVertex(p6[0], p6[1]);
  //curveVertex(p7[0], p7[1]);
  //curveVertex(p8[0], p8[1]);
  //curveVertex(p9[0], p9[1]);
  //curveVertex(p9[0], p9[1]);

  ////ellipse(p1[0], p1[1], 10, 10);
  ////ellipse(p2[0], p2[1], 10, 10);
  ////ellipse(p3[0], p3[1], 10, 10);
  ////ellipse(p4[0], p4[1], 10, 10);
  ////ellipse(p5[0], p5[1], 10, 10);
  ////ellipse(p6[0], p6[1], 10, 10);
  ////ellipse(p7[0], p7[1], 10, 10);
  ////ellipse(p8[0], p8[1], 10, 10);
  ////ellipse(p9[0], p9[1], 10, 10);

  //endShape(CLOSE);

  //popMatrix();

  // … (a long block of commented-out pool-shrinking experiments was
  // garbled when this post was published and is omitted here)

  for (int drop = 0; drop < drops.length; drop++) {
    // … (the update calls for the drops, bubbles, and balls were
    // garbled when this post was published)
  }
}

Krawleb-LastProject

For my last project, I took the opportunity to learn Unity through prototyping a simple 3d local-multiplayer game.

In the game, each player controls a 'Shepherd'. The objective is to eliminate all of the enemy's 'vassals', a herd of small soldiers. The only control the shepherd has over the vassals is to toggle between having them follow their shepherd, or having them seek out and attack the nearest enemy vassal.

This makes the gameplay about tactics: positioning, choosing the right time to attack, and using the environment to your advantage. If your units attack as a group, or at a choke point, they will eliminate the enemy with ease.

Because I had never worked with Unity before, the vast majority of this two-week project was spent familiarizing myself with Unity, C#, and much of the built-in functionality that Unity provides. This included:

• Nav Meshes & Nav Mesh Agents to control the flocking/pathfinding behavior of the AI vassals.

• Delegates, Events, and Event Subscription to allow GameObjects to relay their position / health / etc. to other GameObjects

• Instantiation, Parent/Child relationships, and how to safely destroy GameObjects to allow for modular setup/reset of the level

• Camera Viewports and how to set up splitscreen for multiplayer

• Layers, Tagging, and physics interactions / collisions

The scripts I wrote for unit control, health, and combat interactions, among other things, are on GitHub here

As I began wrapping up the programming for the basic gameplay interactions (about one night before the due date), I decided to quickly create some low-fidelity 3D assets to make the battle environments more interesting.

A bridge to connect islands and act as a choke point

Some islands, to create a more divided playspace that forced players to choose when to cross between them.

Some palm trees, to add to the atmosphere and provide an additional small obstacle.

Here’s an earlier iteration of the map, with a previous bridge design that was replaced because its geometry interacted problematically with the vassals.

Additionally, I originally tried to work with a top-down camera, but felt that I couldn’t find a balance of showing the entire map while giving enough attention to the seeking behavior of the vassals.

I ran into several roadblocks along the way, but learned even more than I imagined I would in the given time. Unfortunately, much to my dismay as someone from a primarily visual background, I was left with very little time to focus on learning lighting, materials, and camera effects. The result was a very awkwardly colored, poorly lit prototype.

However, I loved working with Unity. The separation of functionality into separate scripts and objects, which let me compartmentalize code, was refreshing. Working with prefabs to build up components of my program as objects also felt intuitive. I will certainly work with Unity again, and mastering lighting and materials will be my next goal.

 

Nngdon-Last Project

The PDF version is available here: FaunaOfSloogiaII.pdf

For my last project I decided to expand on my generative book, which is about imaginary creatures on an imaginary island. The last version had only generated illustrations of the creatures, so I felt I could supplement the concept of the "fauna of an island" by giving each creature a short description, some maps indicating their habitats, and some rendered pictures of the animals against a natural background (trees, rivers, mountains, etc.).

The Map

I first generated a height map using 2D Perlin noise. This results in an even spread of land and water across the canvas. To make the terrain more island-ish, I used a vignette mask to darken (subtract value from) the corners before rendering the noise.

After this, an edge-finding algorithm was used to draw the isolines.
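Both steps fit in a small sketch; this is a reconstruction under my own parameter choices (the noise scale, band count, and sea level are guesses, not the project's values):

// Height map from 2D Perlin noise, with a vignette mask that subtracts
// height toward the corners so the land clusters into an island. The
// heights are then quantized into bands; a band change between
// neighbouring pixels marks an isoline.
float noiseScale = 0.008;
int bands = 12;

float elevation(int x, int y) {
  float h = noise(x * noiseScale, y * noiseScale);
  float d = dist(x, y, width/2, height/2) / (width/2.0);
  return h - 0.5 * d * d; // vignette: sink the corners into the sea
}

void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float h = elevation(x, y);
      int band = floor(constrain(h, 0, 0.999) * bands);
      boolean edge = band != floor(constrain(elevation(x+1, y), 0, 0.999) * bands)
                  || band != floor(constrain(elevation(x, y+1), 0, 0.999) * bands);
      if (edge)         pixels[y * width + x] = color(40);            // isoline
      else if (h > .45) pixels[y * width + x] = color(225, 215, 190); // land
      else              pixels[y * width + x] = color(180, 205, 220); // water
    }
  }
  updatePixels();
}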

Labeling

The next task is to label the map: to find where the mountains and seas are and name them accordingly.

I wrote my own "blob detection" algorithm, inspired by flood fill. First, given a point, the program tries to draw the largest possible circle such that all pixels in the circle fall within a certain range of heights. Then, around the circumference of that circle, the program tries to generate more such circles. This is done recursively, until no more circles larger than a certain small radius can be drawn. The union of all the circles is returned.
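A rough Processing sketch of that recursion (the thresholds, step sizes, and heightAt() placeholder are illustrative, not the project's values):

// Circle-based blob detection: grow the largest circle whose samples
// all stay within a height range, then recurse around its circumference
// until only tiny circles remain. The union of circles is the blob.
ArrayList<PVector> blob = new ArrayList<PVector>(); // (x, y, radius)
float minR = 3, maxR = 60;
float lo = 0.5, hi = 1.0; // the height range that defines this blob

float heightAt(float x, float y) {
  return noise(x * 0.01, y * 0.01); // placeholder for the real height map
}

boolean circleInRange(float cx, float cy, float r) {
  // sample the disc on a polar grid; reject if any sample leaves the range
  for (float a = 0; a < TWO_PI; a += 0.3) {
    for (float rr = 0; rr <= r; rr += 2) {
      float h = heightAt(cx + rr * cos(a), cy + rr * sin(a));
      if (h < lo || h > hi) return false;
    }
  }
  return true;
}

void grow(float cx, float cy) {
  if (cx < 0 || cy < 0 || cx > width || cy > height) return; // stay on canvas
  float r = minR;
  while (r < maxR && circleInRange(cx, cy, r + 1)) r++;
  if (r <= minR) return; // nothing larger than the minimum radius fits
  for (PVector c : blob) {
    if (dist(cx, cy, c.x, c.y) < c.z * 0.5) return; // already covered
  }
  blob.add(new PVector(cx, cy, r));
  for (float a = 0; a < TWO_PI; a += PI/4) {
    grow(cx + r * cos(a), cy + r * sin(a)); // recurse around the rim
  }
}

void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  background(255);
  for (int i = 0; i < 20; i++) grow(random(width), random(height));
  noFill();
  stroke(0);
  for (PVector c : blob) ellipse(c.x, c.y, c.z * 2, c.z * 2);
}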

Using Mitchell's best-candidate algorithm, I picked random points evenly spread across the map and applied my blob detection to each. Blobs that are very close to each other or that overlap significantly are merged.

Then, for each blob that indicates a water area, the program checks how surrounded by land it is, and decides whether it is a lake, a strait, a gulf, or a sea. For the land areas, the program decides the terrain according to its height and whether it is connected to another piece of land.

A Markov chain is used to generate the names for the places. The text is rotated and scaled according to the general shape of the area.

Finally, the program exports a JSON file, saving the seed and the names, areas and locations of the places, to be used in the next step.

 

The Description

The description cost me the most time in this project. I spent a long time thinking about ways of generating high-quality generative text.

I noticed that people usually make generative text in one of three major ways:

  1. Markov chain / machine learning methods. The results have good variety, and the approach is easy to implement, as the computer does most of the work for you. However, the programmer has the least control over what the program generates, and nonsensical sentences often occur.
  2. Word substitution. The human writer writes the whole paragraph, and some words in it are substituted with words chosen randomly from a bank. This method is good for generating only one or two pieces of output, and gets very repetitive after a few iterations. A very boring algorithm.
  3. A set of pre-defined grammar rules + word substitution.

The third direction seemed able to combine order and randomness well. However, as I explored deeper, I discovered that it's like teaching the computer English from scratch, and a massive amount of work is probably involved in making it generate something meaningful, instead of something like:

Nosier clarinet tweezes beacuse 77 carnauba sportily misteaches.

However, I was in fact able to invent a versatile regex-like syntax that makes defining a large set of grammar rules rather easy. I believe it's going to be a very promising algorithm, and I'm probably going to work on it later. As for this project, I looked into the other two algorithms.

Grab data, tokenize and scramble

Finally, after some thought, I decided to combine the first and second methods.

First I wrote a program to steal all the articles from the internet. The program pretends to be an innocent web browser and searches sites such as Wikipedia using a long list of keywords. It retrieves the source code of the pages and parses it to get a clean, plain-text version of the articles.

Then I collected a database of animal names, place names, color names, etc., and searched within the articles to substitute the keywords with special tokens (such as "$q$" for the name of the query animal, "$a$" for the names of other animals, "$p$" for places, "$c$" for colors, etc.).

I developed various techniques, such as score-based word-similarity comparison, to avoid missing any keywords. For example, an article about the grey wolf may mention "gray wolf", "grey wolves", "the wolf", and "wolves", all referring to the same thing.

After this, a scrambling algorithm such as a Markov chain is used. Notice that since the keywords are tokenized before scrambling, the generator can slide easily from one phrase to another across different articles. This gives the result interesting variety.

LSTM and char-RNN

Golan pointed me to the LSTM and char-RNN neural networks as alternatives to Markov chains. It was very interesting to explore them and watch the computer learn to speak English. However, they still tended to generate gibberish after training overnight. There seems to be an asymptote to the loss function: the computer gets better and better, but then it reaches a bottleneck, starts to confuse itself, and slips back.

Another phenomenon I observed is that the computer seems to fall in love with a certain word, and just keeps saying it whenever possible. At the worst outburst of this symptom, the computer falls into a madness like:

Calf where be will calf will calf that calf will calf different calf calf calf the and calf a calf only calf a other calf calf calf calf…

And oftentimes it does not know when to end its sentences, and keeps running on.

The problem with neural networks is that they're like a magic black box. When they work it's magical, but when they don't, you don't know where to fix them. As I'm not too familiar with the details of neural networks and was entirely using other people's libraries, I had no idea how to improve the algorithm.

Generation

I wrote my own very portable version of a Markov chain in 20 lines of Python code, and it seems to work better than the neural networks. (?)

My favorite lines are:

The $q$ can take a grave dislike towards their tail, which are the primary source of prey.

A female $q$ gives birth to one another through touch, movement and sound.

The infant $q$ remains with its mother until it was strong enough to overpower it and kill it.

And paradoxical ones such as:

…the tail which is twice as often as long as two million individuals.

Finally, the tokens are substituted with relevant information about the animal described. This information is stored in JSON files when the illustrations and maps are generated.

The names of all 50 animals and places are stored in a pool, so descriptions of different animals can refer to each other. For example, the description of animal A may say that its predator is animal B. After flipping a few pages, the reader will find a detailed account of animal B, and so on.

Eyes Improvement

Golan told me that my creatures' eyes looked dead and needed to be fixed. I added some highlights so they look more lively now (hopefully).

Code

The complete code will be available on GitHub once I finalize the project. Currently I'm working on rendering the animals against a natural background.

But here's my 20-line Markov chain in Python.

 

import random
class Markov20():
	def __init__(self,corp,delim=" ",punc=[".",",",";","?","!",'"']):
		self.corp = corp
		self.punc = punc
		self.delim = delim
		for p in self.punc: self.corp = self.corp.replace(p,delim+p)
		self.corp = self.corp.split(delim)
	def predict(self,wl):
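		# gather every token that follows an occurrence of the context
		# window wl in the corpus, and pick one at random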
		return random.choice([self.corp[i+len(wl)] for i in range(0,len(self.corp)-len(wl)) if self.corp[i:i+len(wl)] == wl ])
	def sentence(self,w,d,l=0):
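		# grow a sentence from seed w using a context of d tokens; stop after
		# l tokens, or at the first sentence-ending punctuation when l == 0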
		res = w + self.delim
		i = 0
		while (l != 0 and i < l) or (l==0 and w != self.punc[0]):
			w = self.predict(res.split(self.delim)[-1-d:-1])
			res += w + self.delim
			i+=1
		for p in self.punc: res = res.replace(self.delim+p,p)
		return res
	def randsentstart(self):
		return random.choice(self.delim.join(self.corp).split(self.punc[0]+self.delim)).split(self.delim)[0]


if __name__ == "__main__":
	f1 = open("nietzsche.txt") #s3.amazonaws.com/text-datasets/nietzsche.txt	
	corp = (f1.read()).replace("  ","").replace("\n"," ").replace("\r\n"," ").replace("\r"," ").replace("=","")
	m20 = Markov20(corp)
	for i in range(0,3):
		print(m20.sentence(m20.randsentstart(),2))


Zarard- LastProject

Over the semester I've been working with the Carnegie Museum of Art to analyze artwork from the photographer Teenie Harris. Teenie Harris was an amazing photojournalist who captured the most comprehensive visual record of Black American life from the 1930s to the 1970s. Because I am working on this project for the next 1.5 years, I wanted my last project to lay the foundation for future explorations.

So my project was essentially to create a collection of scripts to aid me in visually annotating the Teenie Harris archive and create a system of storing that information.

Things I did over the 3 weeks:

• Got code working with Microsoft Azure to get face and emotion data, as well as tags, for the Teenie Harris archive; this involved debugging their starter code and working with tech support to figure out why my API keys didn't work.

• Figured out Jupyter.

• Installed and set up a MongoDB database to hold data from the Teenie Harris archive.

• Learned the PyMongo driver for interacting with MongoDB through Python.

• Learned multithreading so that the code could run 12 times as fast (hours instead of weeks).

• Integrated the data and descriptions from the Carnegie Museum of Art into the database.

• Integrated the data and descriptions from dlib into the database.

• Got familiar with the OpenCV and Pillow libraries for annotating and manipulating photos.

• Created images that combined CMOA, dlib, Azure, and OpenCV data and inserted them into the database.

All of this work sets me up to do meaningful composition analysis on the data. View the results below:

Tigop- Final

This is a small world that I am in the process of making! I'm really excited about it because it's based on a fictional world I've created outside of class. It makes me really happy to see it manifest itself in P3D! I've also been working on a manifesto for this other world outside of class, one that is meant to be relevant to the world we are living in today (as is the fictional world). So far, this Processing program has been my way of exploring how I could let viewers interact with the manifesto and be pulled into the great slime bubble.

Throughout the game, viewers have to use their mouse as well as their face to explore. By moving your face and listening to prompts given through text, you progress through the world and uncover more and more.

Takos – Final Project


Process:
My original idea for this project was to make modular, jointed people with customizable features (height, width, etc.). I have made ball-jointed dolls out of clay before, so this was a natural step for me. I started off by trying to geometrize the organic shapes of the human body into something that I could feasibly code in OpenSCAD. I was originally going to make the joints different sizes depending on which part of the body they belonged to, but decided not to, because having the joints all be the same size allowed for more customizability and turned what was supposed to be just a movable person into a sort of toolkit to create with and alter.

I started off by prototyping a ball-and-socket joint. I went through a few different 3D printing tests until I found a design that really worked for me: the end socket covers over 50% of the ball, but has slits cut out of it to allow it to expand when being fitted. It also has a sphere cut out of it, which allows joints to be rotated to almost 90 degrees. This allows the use of a double joint (used in the elbows, shoulders, knees, and hips) to achieve more realistic movability.
Other unique joints are the upper-to-lower torso joint and the neck-to-head joint. The torso joint's socket needed to intersect with the upper torso in order to achieve a full range of motion, so I subtracted a sphere around the ball of the upper torso. This is shown below. The neck joint needed to be able to expand while also fitting fully inside the head. To allow this, I subtracted a sphere from outside the socket, which lets the socket expand when the ball is inserted.

Github Repository:
https://github.com/tatyanade/Modular-Person


Keali-Last Project


Staying loyal to my usual aesthetic and everlasting motivation of making a beautiful virtual environment… I'd like to thank Golan for dealing with the generally constant theme, but I hope some of my other projects were different and experimental enough as well (book, data viz, mocap, etc.). For starters, it was unfortunate that I didn't have the confidence to get started and learn OpenFrameworks within the timeframe of this project; it is definitely something I want to pick up and use in the future, especially since this project really hammered into me what the limits of Processing are. For an interactive environment, Processing will undeniably start running very slowly because of all the assets involved; this limited what interactivity and aesthetics I could include, and I had to prioritize what to keep without sacrificing a sense of completion in the overall product.

As such, I aimed for something calm (my own bias) and serene: an environment that would provide subtle, endearing movement and interaction potential. I focused on setting up the entire environment, much like a stage set, rather than going for a large quantity of assets; I wanted a well-rendered, atmospherically polished environment rather than a more lackluster one with more features. The whole setup was contingent on object-oriented programming, something I had barely used in previous projects this semester, but it was crucial here, as every characteristic of the setting is its own object class. I worked with waves, landscapes, fireflies/particles, rain, stars, and branches and leaves (aggregated into trees), carefully rendering each aspect in the right order so that everything stacks together into the final environment. I was initially inspired to include more air-related features by some beautiful OpenProcessing samples that I wanted to customize, but the noise used to render the smoke and clouds slowed my entire program down to the point that movement was hardly visible, so I had to abandon that. Weighing the assets I had in mind against how increasingly sluggish my program was running, I made the decisions needed to produce a workable, seamless product that still stayed aesthetic. I am incredibly happy with the resulting soft and subtle features, with the highlight movements from the stars, waves, and fireflies, and with the interactions: light particles that follow the user's cursor, and vectors that make branches sway and leaves scatter and fall when a tree is clicked. It is definitely not a game, or an interactive interface with a goal per se; the whole point is for a user to mindlessly enjoy the program, as if it were a virtual art piece.

In contrast to how I had to present it on the final Friday of class, I really think the program benefits from being experienced alone, in a dark room, with headphones. That is how I believe the piece should be seen and felt, and funnily enough, a dark room and headphones is exactly the environment I coded it in over many nights. With this setup, I feel the user becomes much more fully immersed in the cathartic and serene nature of Nocturne.

This project is definitely something I want to continue developing in my free time, or if given the chance in the future. That would probably also mean porting it to OpenFrameworks, because as of now I think I'm already hitting the limit of what Processing will run and handle smoothly. I intend to add interactions with new objects such as plant growth and animals, and perhaps some more precipitation and particle options.

GitHub repository//

/*
REFERENCES: 

https://www.openprocessing.org/sketch/179401
https://www.openprocessing.org/sketch/90192
https://processing.org/examples/sinewave.html
https://processing.org/examples/multipleparticlesystems.html
https://processing.org/examples/simpleparticlesystem.html
*/


import processing.sound.*;

int xspacing = 16;   // How far apart should each horizontal location be spaced
int w;              // Width of entire wave

float theta = 0.0;  // Start angle at 0
//float amplitude = 75.0;  // Height of wave --> moved as parameter
float period = 500.0;  // How many pixels before the wave repeats
float dx;  // Value for incrementing X, a function of period and xspacing
float[] yvalues;  // Using an array to store height values for the wave

SoundFile file;

int Y_AXIS = 1;
color c1, c2;

color cloudFill, fade, far, near, mist;

int rainNum = 80;
Rain[] drops = new Rain[rainNum];

ArrayList<Tree> trees = new ArrayList<Tree>();

void setup() {
  size(1500, 700);
  smooth();
  
  file = new SoundFile(this, "khiitest.wav");
  //file = new SoundFile(this, "khii.mp3"); // testing audio loop?? 
  file.loop();
  
  c1 = color(17, 24, 51);
  c2 = color(24, 55, 112);

  // some setups aborted
  fade = color(64, 85, 128);

  w = width+16;
  dx = (TWO_PI / period) * xspacing;
  yvalues = new float[w/xspacing];

  //for (int i = 0; i < particleCount; i++) {
  //  sparks[i] = new Particle(176, 203, 235);

  for (int i = 0; i < smallStarList.length; i++) {
    smallStarList[i] = new smallStar();
  }
  
  for (int i = 0; i < bigStarList.length; i++) {
    bigStarList[i] = new bigStar();
  }
  
  for (int i = 0; i < fireflyList.length; i++) {
    fireflyList[i] = new firefly();
  }
  
  trees.add(new Tree(600,0));
  trees.add(new Tree(-500,0));
  trees.add(new Tree(300,0));
  trees.add(new Tree(50,0));
  trees.add(new Tree(400,0));
  for (int i = 0; i < rainNum; i++) {
    drops[i] = new Rain();
  }
  
  ps = new ParticleSystem(new PVector(400,600)); // buffer default loc
}

smallStar[] smallStarList = new smallStar[110];
bigStar[] bigStarList = new bigStar[50];
firefly[] fireflyList = new firefly[70];
float gMove = map(.15,0,.3,0,30);
ParticleSystem ps;


void draw() {
  background(0);
  setGradient(0, 0, width, height, c1, c2, Y_AXIS);

  makeFade(fade);
  //clouds(cloudFill); //cloud reference from https://www.openprocessing.org/sketch/179401

  for (int i = 0; i < smallStarList.length; i++) {
    smallStarList[i].display();
  }
  
  for (int i = 0; i < bigStarList.length; i++) {
    bigStarList[i].display();
  }

  drawMountains();
  
  ps.addParticle();
  ps.run();
  for (Tree tree : trees) {
    tree.display(); 
  }
  
  anotherNoiseWave();

  calcWave(30.0);
  renderWave();
  
  for (int i = 0; i < fireflyList.length; i++) {
    fireflyList[i].update();
    fireflyList[i].display();
  }
  
  ps.setOrigin(new PVector(mouseX,mouseY)); 
  
  //if (raining) {  for temp rain no-respawn fix 
    for (int i = 0; i < rainNum; i++) {
      drops[i].update();
    }
  //}
}

void makeFade(color fade) {
  for (int i = 0; i < height/3; i++) {
    float a = map(i,0,height/3,360,0);
    strokeWeight(1);
    stroke(fade,a);
    line(0,i,width,i);
  }
}

class ParticleSystem {
  ArrayList<Particle> particles;
  PVector origin;
  ParticleSystem(PVector location) {
    origin = location.copy();
    particles = new ArrayList<Particle>();
  }
  
  void addParticle() {
    particles.add(new Particle(origin));
  }
  
  void setOrigin(PVector origin) {
    this.origin = origin; 
  }
  
  void run() { 
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);
      p.run();
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }
}

class Particle {
  PVector location;
  PVector velocity;
  PVector acceleration;
  float lifespan;

  Particle(PVector l) {
    acceleration = new PVector(0,0.05);
    velocity = new PVector(random(-1,1),random(-2,0));
    location = l.copy();
    lifespan = 255.0;
  }

  void run() {
    update();
    display();
  }

  // update location 
  void update() {
    velocity.add(acceleration);
    location.add(velocity);
    lifespan -= 10.0;
  }

  // display particles
  void display() {
    noStroke();
    //fill(216,226,237,lifespan-15);
    //ellipse(location.x,location.y,3,3);
    fill(237,240,255,lifespan);
    //ellipse(location.x,location.y,5,5);
    float w = random(3,9);
    ellipse(location.x,location.y,w,w);
  }
  
  // "irrelevant" particle
  boolean isDead() {
    if (lifespan < 0.0) {
      return true;
    } else {
      return false;
    }
  }
}

class Tree {
  ArrayList<Branch> branches = new ArrayList<Branch>();
  ArrayList<Leaf> leaves = new ArrayList<Leaf>();
  int maxLevel = 8;
  Tree(float x, float y) {
    float rootLength = random(80.0, 150.0);
    branches.add(new Branch(this,x+width/2, y+height, x+width/2, y+height-rootLength, 0, null));
    subDivide(branches.get(0));
  }
  
  void display() {
    for (int i = 0; i < branches.size(); i++) {
      Branch branch = branches.get(i);
      branch.move();
      branch.display();
    }
    
    for (int i = leaves.size()-1; i > -1; i--) {
      Leaf leaf = leaves.get(i);
      leaf.move();
      leaf.display();
      leaf.destroyIfOutBounds();
    } 
  }

  void mousePress(PVector source) {
    float branchDistThreshold = 300*300;
    
    for (Branch branch : branches) {
      float distance = distSquared(mouseX, mouseY, branch.end.x, branch.end.y);
      if (distance > branchDistThreshold) {
        continue;
      }
      
      PVector explosion = new PVector(branch.end.x, branch.end.y);
      explosion.sub(source);
      explosion.normalize();
      float mult = map(distance, 0, branchDistThreshold, 10.0, 1.0); 
      explosion.mult(mult);
      branch.applyForce(explosion);
    }
    
    float leafDistThreshold = 50*50;
    
    for (Leaf leaf : leaves) {
      float distance = distSquared(mouseX, mouseY, leaf.pos.x, leaf.pos.y);
      if (distance > leafDistThreshold) {
        continue;
      }
      
      PVector explosion = new PVector(leaf.pos.x, leaf.pos.y);
      explosion.sub(source);
      explosion.normalize();
      float mult = map(distance, 0, leafDistThreshold, 2.0, 0.1);
      mult *= random(0.8, 1.2); // variation
      explosion.mult(mult);
      leaf.applyForce(explosion);
      
      leaf.dynamic = true;
    }
  }

 void subDivide(Branch branch) {
  ArrayList<Branch> newBranches = new ArrayList<Branch>();
  
  int newBranchCount = (int)random(1, 4);
  
  float minLength = 0.7;
  float maxLength = 0.85;
  
  switch(newBranchCount) {
    case 2:
      newBranches.add(branch.newBranch(random(-45.0, -10.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(10.0, 45.0), random(minLength, maxLength)));
      break;
    case 3:
      newBranches.add(branch.newBranch(random(-45.0, -15.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(-10.0, 10.0), random(minLength, maxLength)));
      newBranches.add(branch.newBranch(random(15.0, 45.0), random(minLength, maxLength)));
      break;
    default:
      newBranches.add(branch.newBranch(random(-45.0, 45.0), random(minLength, maxLength)));
      break;
  }
  
  for (Branch newBranch : newBranches) {
    this.branches.add(newBranch);

    if (newBranch.level < this.maxLevel) {
      subDivide(newBranch);
    } else {
      // generate random leaves position on last branch
      float offset = 5.0;
      for (int i = 0; i < 5; i++) {
        this.leaves.add(new Leaf(this,newBranch.end.x+random(-offset, offset), 
        newBranch.end.y+random(-offset, offset), newBranch));
      }
    }
  }
}
}

class Leaf {
  PVector pos;
  PVector velocity = new PVector(0,0);
  PVector acc = new PVector(0,0);
  float dia;
  float a;
  float r;
  float g;
  PVector offset;
  boolean dynamic = false;
  Branch parent;
  Tree tree;
  Leaf(Tree tree, float x, float y, Branch parent) {
    this.pos = new PVector(x,y);
    this.dia = random(2,11);
    this.a = random(50,150);
    this.parent = parent;
    this.offset = new PVector(parent.restPos.x-this.pos.x, parent.restPos.y-this.pos.y);
     this.tree = tree;
    if (tree.leaves.size() % 5 == 0) {
      this.r = 232;
      this.g = 250;
    } else {
      this.r = 227;
      this.g = random(230,255);
    }
  }
  
  void display() {
    pushMatrix();
    noStroke();
    fill(this.r, g, 250, this.a);
    ellipse(this.pos.x,this.pos.y,this.dia,this.dia);
    popMatrix();
  }
  
  void bounds() {
    if (!this.dynamic) { return; }
  }
  
  void applyForce(PVector force) {
    this.acc.add(force);
  }
  
  void move() {
    if (this.dynamic) {
      // Sim leaf
      
      PVector gravity = new PVector(0, 0.025);
      this.applyForce(gravity);
      
      this.velocity.add(this.acc);
      this.pos.add(this.velocity);
      this.acc.mult(0);
      
      this.bounds();
    } else {
      // follow branch
      this.pos.x = this.parent.end.x+this.offset.x;
      this.pos.y = this.parent.end.y+this.offset.y;
    }
  } 
  
  void destroyIfOutBounds() {
    if (this.dynamic) {
      if (this.pos.x < 0 || this.pos.x > width || this.pos.y < 0 || this.pos.y > height) {
        tree.leaves.remove(this);
      }
    }
  }
}


class Branch {
  PVector start;
  PVector end;
  PVector vel = new PVector(0, 0);
  PVector acc = new PVector(0, 0);
  int level;
  Branch parent = null;
  PVector restPos;
  float restLength;
  Tree tree;

  Branch(Tree tree, float x1, float y1, float x2, float y2, int level, Branch parent) {
    this.start = new PVector(x1, y1);
    this.end = new PVector(x2, y2);
    this.level = level;
    this.restLength = dist(x1, y1, x2, y2);
    this.restPos = new PVector(x2, y2);
    this.parent = parent;
    this.tree = tree;
  }

  void display() {
    pushMatrix();
    stroke(159, 200, 195+this.level*5);
    strokeWeight(tree.maxLevel-this.level+1);
    
    if (this.parent != null) {
      line(this.parent.end.x, this.parent.end.y, this.end.x, this.end.y);
    } else {
      line(this.start.x, this.start.y, this.end.x, this.end.y);
    }
    popMatrix();
  }

  Branch newBranch(float angle, float mult) {
    // calculate new branch's direction and length
    PVector direction = new PVector(this.end.x, this.end.y);
    direction.sub(this.start);
    float branchLength = direction.mag();

    float worldAngle = degrees(atan2(direction.x, direction.y))+angle;
    direction.x = sin(radians(worldAngle));
    direction.y = cos(radians(worldAngle));
    direction.normalize();
    direction.mult(branchLength*mult);
    
    PVector newEnd = new PVector(this.end.x, this.end.y);
    newEnd.add(direction);

    return new Branch(tree, this.end.x, this.end.y, newEnd.x, newEnd.y, this.level+1, this);
  }
  
  // branch bouncing 
  void applyForce(PVector force) {
    PVector forceCopy = force.get();
    
    // smaller branches will be more bouncy
    float divValue = map(this.level, 0, tree.maxLevel, 8.0, 2.0);
    forceCopy.div(divValue);
    
    this.acc.add(forceCopy);
  }
  
  void sim() {
    PVector airDrag = new PVector(this.vel.x, this.vel.y);
    float dragMagnitude = airDrag.mag();
    airDrag.normalize();
    airDrag.mult(-1);
    airDrag.mult(0.025*dragMagnitude*dragMagnitude); // java mode
    this.applyForce(airDrag);
    
    PVector spring = new PVector(this.end.x, this.end.y);
    spring.sub(this.restPos);
    float stretchedLength = dist(this.restPos.x, this.restPos.y, this.end.x, this.end.y);
    spring.normalize();
    float elasticMult = map(this.level, 0, tree.maxLevel, 0.05, 0.1); // java mode
    spring.mult(-elasticMult*stretchedLength);
    this.applyForce(spring);
  }
  
  void move() {
    this.sim();
    
    this.vel.mult(0.95);
    
    // kill velocity below this threshold to reduce jittering
    if (this.vel.mag() < 0.05) {
      this.vel.mult(0);
    }
    
    this.vel.add(this.acc);
    this.end.add(this.vel);
    this.acc.mult(0);    
  }
}

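// compare squared distances to avoid a sqrt() per check; the thresholds in
// mousePress() above are squared (300*300, 50*50) to match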
float distSquared(float x1, float y1, float x2, float y2) {
  return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
}
  
class smallStar {
  color c;
  float x;
  float y;
  float a;
  float h;
  float w;
  float centerX;
  float centerY;
  float ang;
  
  smallStar() {
    x = random(0,width);
    y = random(0,height/2);
    w = random(3,6);
    a = random(100,200);
    color[] colors = {color(232,248,255,a),color(235,234,175,a),color(242,242,208,a),
                         color(250,250,240,a),color(255,255,255,a)};
    int index = int(random(colors.length));
    c = colors[index];
    h = w;
    centerX = x + w/2;
    centerY = y + h/2;
    ang = random(0,PI)/random(1,4);
  }
  
  void display() {
    pushMatrix();
    ang = (this.ang + .01) % (2*PI);
    fill(this.c);
    noStroke();
    translate(centerX,centerY);
    rotate(ang);
    rect(-w/2,-h/2,w,h);
    popMatrix();
  }
}

class bigStar {
  float x;
  float y;
  float r1;
  float a;
  float flicker;
  float r2;
  color c;
  float ang;
  float angDir;
  
  bigStar() {
    x = random(0, width);
    y = random(0, height/2);
    r1 = random(2,5);
    a = random(40,180);
    flicker = random(400,800); 
    r2 = r1 * 2;
    color[] colors = {color(232,248,255,a),color(201,239,255,a),color(242,242,208,a),
                         color(250,250,240,a),color(255,255,255,a)};
    int index = int(random(colors.length));
    c = colors[index];
    float[] angles = {radians(millis()/170),radians(millis()/150),radians(millis()/-150),
                      radians(millis()/-170)};
    int index2 = int(random(angles.length));
    ang = angles[index2];
    angDir = (random(1)*0.1) - .05;
  }
  
  void display() {
    pushMatrix();
    // unpack the star's base color with bit shifts (faster than red()/green()/blue())
    float newR = c >> 16 & 0xFF;
    float newG = c >> 8 & 0xFF;
    float newB = c & 0xFF;
    // slow sinusoidal flicker, unique to each star via its flicker period
    float shine = sin(millis()/flicker);
    float a = this.a + map(shine, -1, 1, 40, 100);
    fill(newR, newG, newB, a);
    noStroke();
    translate(x, y);
    ang = (this.ang + angDir) % (2*PI);
    rotate(ang);
    makeBigStar(0, 0, r1, r2, 5);
    popMatrix();
  }
}
    

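// draws a vertical linear gradient from c1 to c2, one scanline at a time;
// note the axis parameter is currently unused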
void setGradient(int x, int y, float w, float h, color c1, color c2, int axis) {
  noFill();
  for (int i = y; i <= y+h; i++) {
    float inter = map(i, y, y+h, 0, 1);
    color c = lerpColor(c1, c2, inter);
    stroke(c);
    line(x, i, x+w, i);
  }
}

boolean raining = false;

void keyPressed() {
  if (key == 'r') {
    raining = !raining; // toggle rain
  }
}

void mousePressed() {
  PVector source = new PVector(mouseX, mouseY);
  for (Tree tree : trees) {
     tree.mousePress(source); 
  }
}

class firefly {
  PVector position;
  PVector velocity;
  float move;
  float a;
  
  firefly() {
    position = new PVector(random(0, width), random(400, 650));
    velocity = new PVector(random(-1, 1), -random(-1, 1));
    move = random(-7, 1);
    a = random(0, 100);
  }
  
  void update() {
    position.add(velocity);
    if (position.x > width) {
      position.x = 0;
    }
    if (position.y > height || position.y < 360) {
      velocity.y = velocity.y * -1;
    }
  }
  
  void display() {
    pushMatrix();
    float flicker = sin(millis()/400.0);
    float a = (this.a + map(flicker,-1,1,40,100)) % 255;
    fill(255,255,240,a);
    ellipse(position.x,position.y,gMove+move, gMove+move);
    ellipse(position.x,position.y,(gMove+move)*0.5,(gMove+move)*0.5);
    popMatrix();
  }
}  

float yoff = 0.0;
float yoff2 = 0.0;

float time = 0;

void anotherNoiseWave() {
  float x = 0;
  while (x < width) {
    //stroke(255,255,255,5);
    stroke(0,65,117,120);
    //stroke(11, 114, 158, 12);
    line(x, 520 + 90 * noise(x/100, time), x, height);
    x++;
  }
  time = time + 0.02;
}

void calcWave(float amplitude) {
  // Increment theta (try different values for 'angular velocity' here
  theta += 0.02;

  // For every x value, calculate a y value with sine function
  float x = theta;
  for (int i = 0; i < yvalues.length; i++) {
    yvalues[i] = sin(x)*amplitude;
    x+=dx;
  }
}

void renderWave() {
  noStroke();
  colorMode(RGB);
  float ellipsePulse = sin(millis()/600.0);
  float ellipseColor = map(ellipsePulse, -1, 1, 150, 245);
  fill((int)ellipseColor, 220, 250, ellipseColor-60);
  // A simple way to draw the wave with an ellipse at each location
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.3*xspacing, height/1.2+yvalues[x], 6, 6);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.7*xspacing, height/1.3+yvalues[x], 5, 5);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.4*xspacing, height/1.15+yvalues[x], 7, 7);
  }
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*1.5*xspacing, height/1.27+yvalues[x], 6, 6);
  }
}

class Rain {
  float x = random(0, width);
  float y = random(-1000, 0);
  float size = random(3, 7);
  float speed = random(20, 40);

  void update() {
    y += speed;
    // two overlapping ellipses: a bright core and a translucent blue-grey halo
    fill(255, 255, 255, 180);
    ellipse(x, y-5, size-3, size*2-3);
    fill(185, 197, 209, random(20, 100));
    ellipse(x, y, size, size*2);

    if (y > height) {
      if (raining) {
        x = random(0, width);
        y = random(-10, 0);
      } else {
        // temp fix for stopping rain: let current drops settle instead of respawning at the top
        y = height;
      }
    }
  }
}

void makeBigStar(float x, float y, float radius1, float radius2, int npoints) {
  float angle = TWO_PI / npoints;
  float halfAngle = angle/2.0;
  beginShape();
  for (float a = 0; a < TWO_PI; a += angle) {
    float sx = x + cos(a) * radius2;
    float sy = y + sin(a) * radius2;
    vertex(sx, sy);
    sx = x + cos(a+halfAngle) * radius1;
    sy = y + sin(a+halfAngle) * radius1;
    vertex(sx, sy);
  }
  endShape(CLOSE);
}

void drawMountains() {
  strokeWeight(15);
  strokeJoin(ROUND);
  for (int i = 0; i <= 10; i++ ) {
    float y = i*30;
    fill(map(i, 0, 5, 200, 35), map(i, 0, 5, 250, 100), map(i, 0, 5, 255, 140));
    stroke(map(i, 0, 5, 200, 35), map(i, 0, 5, 250, 110), map(i, 0, 5, 255, 150));
    beginShape();
    vertex(0, 400+y);
    for (int q = 0; q <= width; q+=10) {
      float y2 = 400+y-abs(sin(radians(q)+i))*cos(radians(i+q/2))*map(i, 0, 5, 100, 20);
      vertex(q, y2);
    }
    vertex(width, height);
    vertex(0, height);
    endShape(CLOSE);
  }
}

Catlu – Last Project

Please click on the images to play the GIFs (for some reason they won’t play otherwise):

For this project, I wanted to do more coding with Python in Maya: to practice the Maya-specific commands and language, like the basic polygon commands, and to learn how to find information on objects and control them. At first I wanted to construct a generative city with Maya, but I later decided against it, because I realized that with the knowledge of Maya Python I could gain in the time I had, I wouldn’t be satisfied with how complex a city I could make. After that, I decided it would be good to explore another useful feature: making objects move in relation to another object. Originally I wanted to make a projectile object that scattered particles in a field, so to get there I started with more basic movements. I had a really hard time getting things to work this project. Whereas last time I used code to mass-produce objects at different angles, this time it was moving objects, and mass-generating objects was definitely a lot easier. Even though the things I was trying to do weren’t supposed to be that hard, they turned out to be harder and more time-consuming than I expected, and figuring out Maya’s kinks without a good guide was also challenging. In the end, I could only get basic animation code to sort of work: I generated the lanterns in the scene in their formation using code, and made them move in relation to the mask using code. I think I learned more about coding in Maya and am more comfortable in it, but I definitely need to practice tons more.

 

Here are the links to the code on GitHub. Once again, WP-Syntax has failed me.

These are not the final versions of the code I used for the animation and creation. Unfortunately, Maya crashed before I saved the final code, so the files below are the not-quite-final versions of the ones I used.

Maya lantern move code

Maya lantern generate in pattern code

Xastol – Last Project

INITIAL IDEAS

For the last project I created a “scene generator”. My initial idea was to develop a program that would generate random scripts and movie ideas given a database of well-known films. However, after doing more research on LSTMs and recurrent neural networks, I found that it would take too much time to train the network.

FINAL IDEA

After conversing with Professor Golan, I began to pursue a similar idea. Utilizing a database of various photos and captions, I introduced two chat bots to a random photo. One bot would say the caption associated with the provided photo, setting the scene for the two bots to converse. After the “scene” ended, the entire script would be saved into a text file, similar to the format one would find for a film.

For coding purposes, I decided to use Python. Although it is not very good for visualizing things, Python has a lot to offer in terms of collecting data and presenting it to an AI. Regarding the AI, I found the cleverbot module to be the most responsive. Additionally, the program worked particularly well when the bots shared the same database of responses (even though they shared the same database, the bots were initialized differently so that they would not give the exact same responses every time).

ADDITIONAL COMMENTS

I actually really enjoyed the process for this project. Although I felt lost about the direction of my project at times, I really enjoyed the outcome and look forward to developing it further to give more humanistic qualities to the two “actors” (e.g. text sentiment analysis, vocal inflections, etc.).

 

DEMOS

Favorite scenes.

Another example.

In-program conversation/picture change.

A short conversation about genders (saved script).

Github: https://github.com/xapostol/60-212/tree/master/scriptingScenes

CODE

# Xavier Apostol
# 60-212 (Last Project)
# SCRIPTING SCENES
    # NOTE: runs in python 2.7

import os
import re
import time
import msvcrt
import random
import pyttsx
import pygame
from textwrap import fill
from cleverbot import Cleverbot

###########################
### INITIALIZING THINGS ###
###########################
# initializing chat bots
bot1Name = "ROBOTIC VOICE 1"
cb1 = Cleverbot()
bot2Name = "ROBOTIC VOICE 2"
cb2 = Cleverbot()

# set up text-to-speech (pyttsx is speech synthesis, not voice recognition)
engBots = pyttsx.init()
voices = engBots.getProperty('voices')

# misc
sleepTime = 1

# conversation lists
bot1Conversation = []
bot2Conversation = []

# max length for text
maxTextLen = 60


#####################
### TRUNCATE TEXT ###
#####################
# formats txt appropriately (text wrapping)
def formatTxt(text):
    lstSpace = []

    text = fill(text, maxTextLen)
    for char in range(0, len(text)):
        if text[char] == "\n":
            lstSpace.append(char)
    return lstSpace


###############################
### GET CAPTIONS AND IMAGES ###
###############################
# change to location of "photo_database" folder
picsFldr = "C:/Users/Xavier/Desktop/60-212/Class Work/FINAL PROJECT/scriptingScenes/photo_database" 
filenameLst = []

# collect photo names
for f in os.listdir(picsFldr):
    fileName, fileExt = os.path.splitext(f)
    filenameLst.append(fileName)

# collect captions
fo = open("SBU_captions_F2K.txt", 'r')
captionsList = fo.read().split('\n')
fo.close()

# all image titles are numbers
def grabCaption(imgTitle):
    indx = int(imgTitle)
    return (captionsList[indx])


####################
### PYGAME STUFF ###
####################
# initiating pygame
pygame.init()

# start window/set values
running = True
windSz = winW, winH = 1280, 720
#windSz = winW, winH = 1920, 1080
window = pygame.display.set_mode(windSz, pygame.RESIZABLE)
pygame.display.set_caption("Robotic Voices Script")

imgSz = imgW, imgH = 450, 400
#imgSz = imgW, imgH = 600, 550

backGClr = (0, 0, 0)
window.fill(backGClr)

# optimize frame rate
clock = pygame.time.Clock()
framesPSec = 30
clock.tick(framesPSec)  # change FPS

# font implementation
fontSz = imgW / 10
font  = pygame.font.SysFont("Arial", fontSz)
fontClr = (255, 255, 255)

# bot X and Y
displayTextX = winW/2
displayTextY = winH/2 + fontSz*3 + 10


####################################
### BASIS CODE FOR CONVERSATIONS ###
####################################
# loads and displays picture of interest
def displayPicture(pictureName):
    imgLoad = pygame.image.load(picsFldr + "/" + pictureName + ".jpg").convert()
    imgLoad = pygame.transform.scale(imgLoad, (imgW,imgH))
    window.blit(imgLoad,(displayTextX-imgW/2, displayTextY-(imgH + fontSz/1.5)))

# displays text for each bot on screen
def displayConvo(botName, botVoice, botText, pictureName):
    # initializing variables
    botTextLH1 = ""  # last half of botText (if too big)
    botTextLH2 = ""  # last half of botText (if bigger than twice the maxLen)
    indxChng1 = 0
    indxChng2 = 0

    # for testing
    #print(pictureName)
    #print(botName + " - " + botText)

    # set voice and what to say
    engBots.setProperty('voice', voices[botVoice].id)  # select this bot's system voice
    engBots.say(botText)

    # start writing text
    if len(botText) > maxTextLen*2:
        # formats to three lines
        indxChng1 = formatTxt(botText)[0]
        indxChng2 = formatTxt(botText)[1]
        botTextLH2 = botText[indxChng2+1:]
        botTextLH1 = botText[indxChng1+1:indxChng2]
        botText = botText[:indxChng1]

    elif len(botText) > maxTextLen:
        # formats to two lines
        indxChng1 = formatTxt(botText)[0]
        botTextLH1 = botText[indxChng1+1:]
        botText = botText[:indxChng1]

    # sets up vocalization of text
    vocTxt = font.render(botText, False, fontClr)
    vocTxtLH1 = font.render(botTextLH1, False, fontClr)
    vocTxtLH2 = font.render(botTextLH2, False, fontClr)

    # displays text
    window.blit(vocTxt,    (displayTextX - vocTxt.get_rect().width/2,
                            displayTextY))
    window.blit(vocTxtLH1, (displayTextX - vocTxtLH1.get_rect().width/2,
                            displayTextY + fontSz))
    window.blit(vocTxtLH2, (displayTextX - vocTxtLH2.get_rect().width/2,
                            displayTextY + fontSz*2))

    displayPicture(pictureName)  # display subject
    pygame.display.update()      # update display
    engBots.runAndWait()         # vocalize text
    time.sleep(sleepTime)        # wait time
    window.fill(backGClr)        # reset canvas (set to black to erase prev msg)


#####################
### RUNNING SCENE ###
#####################
# runs entire scene (program)
def runScene():
    # setting counter and magic numbers
    count = 1
    maxRuns = 200  # free to change

    ####################
    ### CONVERSATION ###
    ####################
    # bot 1 starts conversation
    time.sleep(10)
    ranPicName = random.choice(filenameLst)

    bot1Response = grabCaption(ranPicName)
    displayConvo(bot1Name, 0, bot1Response, ranPicName)
    bot1Conversation.append(bot1Response)

    while (count <= maxRuns):
        # decide whether to inject a new item this turn
        ranInt = random.randint(5, 10)  # only used by the testing prints below
        result = count % 4

        """
        # testing purposes
        print("Random Int: " + str(ranInt))
        print("Result: " + str(result))
        print("\n")
        """

        # every fourth exchange, pull a new photo and caption into the scene
        if (result == 0):
            # collects random picture and caption
            ranPicName = random.choice(filenameLst)
            bot2Response = grabCaption(ranPicName)
        # check if it's time to say goodbye.
        elif (count == maxRuns):
            bot2Response = "Bye."
        # else keep responding
        else:
            bot2Response = cb2.ask(bot1Response)


        # bot 2 responds
        displayConvo(bot2Name, 1, bot2Response, ranPicName)
        bot2Conversation.append(bot2Response)

        # bot 1 responds
        bot1Response = cb1.ask(bot2Response)
        displayConvo(bot1Name, 0, bot1Response, ranPicName)
        bot1Conversation.append(bot1Response)

        count += 1

        # press anything to stop program (break out of loop)
        if msvcrt.kbhit():
            break

    pygame.quit()


#########################
### WRITING TEXT FILE ###
#########################
# writes conversation to a .txt file (script)
def saveConversationToScript():
    file = open("robotic_voices_script.txt", "w")

    file.write("SCENE 1")
    file.write("\n")
    file.write("INT. DARKNESS")
    file.write("\n")
    file.write("\n")

    file.write("There is nothing but darkness.")
    file.write("\n")
    file.write("Suddenly, two robot voices emit into conversation.")
    file.write("\n")
    file.write("The first, ROBOTIC VOICE 1, speaks.")
    file.write("\n")
    file.write("=============================================")
    file.write("\n")
    file.write("\n")

    for i in range(0, len(bot1Conversation)):
        file.write(bot1Name)
        file.write("\n")
        file.write(bot1Conversation[i])
        file.write("\n")
        file.write("\n")

        file.write(bot2Name)
        file.write("\n")
        if i == len(bot1Conversation) - 1:
            file.write("*SILENCE*")
        else:
            file.write(bot2Conversation[i])
        file.write("\n")
        file.write("\n")

    file.write("=============================================")
    file.write("\n")
    file.write("The voices stop.")
    file.write("\n")
    file.write("There is nothing but darkness.")
    file.write("\n")
    file.write("\n")
    file.write("END SCENE")

    file.close()


#########################
### RUNNING FUNCTIONS ###
#########################
runScene()
saveConversationToScript()

Guodu-Final Process


p5* Calligraphy

An interactive experience where you can practice writing and calligraphy with different types of randomly selected font templates and brushes


Enter your practice word below

Esc – Resets Canvas | Shift – Change Brush Style | Up or Down to Change Brush Thickness

sketch

Process

Sketches

Next Time

There are a lot of interaction issues, like non-intuitive controls for the brushes’ characteristics and not knowing which brush you are on. I also think it would be beneficial, for teaching calligraphy, to show which direction each stroke should go.
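As a sketch of one possible fix, here is how the key controls could be wired up with an always-visible brush indicator, written in Java-mode Processing (the actual project is in p5*, and all of the names here are hypothetical):

int brushIndex = 0;     // hypothetical: which brush style is active
int numBrushes = 5;
float brushSize = 8;

void setup() {
  size(600, 400);
  background(255);
}

void draw() {
  if (mousePressed) {
    stroke(0);
    strokeWeight(brushSize);
    line(pmouseX, pmouseY, mouseX, mouseY);  // stand-in for the real brushes
  }
  // on-screen indicator, so you always know which brush you are on
  fill(255);
  noStroke();
  rect(0, 0, 160, 20);
  fill(0);
  text("brush " + brushIndex + "  size " + nf(brushSize, 0, 1), 5, 15);
}

void keyPressed() {
  if (key == ESC) {
    key = 0;           // swallow ESC so Processing doesn't quit the sketch
    background(255);   // reset the canvas instead
  } else if (keyCode == SHIFT) {
    brushIndex = (brushIndex + 1) % numBrushes;  // cycle brush styles
  } else if (keyCode == UP) {
    brushSize += 1;
  } else if (keyCode == DOWN) {
    brushSize = max(1, brushSize - 1);
  }
}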

Overall I had a lot of fun creating this, especially the limitless brush styles. When thinking about a concept for this project, I looked to my hobbies and interests, which always came back to drawing and typography. I found the idea of using a tool (p5*) to make another tool, and hopefully sharing it with others, to be empowering.

Of the many programming artists I was exposed to in this course, Zach Lieberman left a deep impression on me with one of his Eyeo talks (here). He talked about his interests in

  • Intersection of Drawing and Code
  • What does drawing on a computer feel like?
  • How do we describe drawings on the computer?
  • What is the sketchbook of today’s age?
  • Beginner (turn off background and you have a paintbrush) –> Advanced drawing in code (recording data)

Ultimately, this exploration of bridging the digital and the physical through drawing makes me wonder how drawing in these different mediums affects and influences a person. Would someone get better at calligraphy by hand if they practiced on this template and used a tablet? And if someone is already good at calligraphy, how well does that skill transfer to a digital program?

Drewch – LastProject

(Spoilers!)

 

For my final project, I decided to use Unity 3D. I only had a little more than a week to create a game in a programming environment I had never used before, so I had to figure out how to make a game (that I wasn’t ashamed of) using what Unity provides readily: a physics engine.

I wanted to put into practice some of the things I learned from playing the games I talked about in LookingOutwards09, particularly meaning in mechanics and thought-provoking surprise. Admittedly, the game is hard for people who are inexperienced at navigating in first-person, and it is sometimes unbeatable because the pseudo-randomly placed cubes fly off when they spawn inside each other. Aside from those issues, I think this project was a success, considering my time constraints. I learned a lot about lighting, camera effects, player control, physics, materials, and more, in just the span of a week. I’m excited to keep working with Unity.

Instructions are in the ReadMe, if you want to give it a shot. It looks much nicer in real time.

download link: http://www.mediafire.com/file/svxdtvgokxew51y/FinalProject.7z

kander – last project

My original idea was to use a Raspberry Pi to run a Processing sketch that printed generative horoscopes to the Adafruit Mini Thermal Printer. However, after being unable to connect the Pi to the internet, I tried simply writing an Arduino sketch that would print something simpler, so I could stick the printer in a little box and have a cute little object that spits out something random (I was going to go with morbid variations on lyrics from popular Christmas carols).

This is what comes with the printer. The kit I got also had a power supply and adapter. It was very easy to set up with Processing (the red and black cable is power; the green, yellow, and black one is dataOut, dataIn, and ground). All I had to do was install the thermal printer library and connect everything according to the instructions at https://learn.adafruit.com/mini-thermal-receipt-printer/microcontroller.

 

so cute! so fragile!

However, through human error, the printer got plugged into a 12V power supply instead of the 5V one it required, and got fried. Since there’s no undo command for ruining your hardware, I ended up simply creating an interface in Processing that displays a horoscope when you click on its star sign. Not as cool, funny, or well suited to the material (I really liked how the low quality of receipt paper matched these dumb little horoscopes).

I used rita.js’s markov class to generate the text for the horoscopes, trained on the text of a lot of horoscopes that I found online. I also had my first experience of dealing with Unicode characters in Processing, so that was nice.
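For reference, here is a minimal sketch of that markov step as it might look in Java-mode Processing with the RiTa library (an assumption — the post names rita.js; the corpus filename and n-gram order here are placeholders):

import rita.*;

RiMarkov markov;

void setup() {
  // n-gram order 3; constructor signature per RiTa v1 for Processing
  markov = new RiMarkov(this, 3);
  // "horoscopes.txt" stands in for the scraped horoscope corpus
  String corpus = join(loadStrings("horoscopes.txt"), " ");
  markov.loadText(corpus);
  // generate a two-sentence horoscope
  String[] sentences = markov.generateSentences(2);
  println(join(sentences, " "));
}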

Project Github

I also think that I ought to include some of the materials and processes from my attempts with the Raspberry Pi and the Arduino, since learning through failure was such an integral part of this project!

Pi: I had never used one of these before, and I was pretty amazed at its capabilities. I learned about downloading the latest image and installing it on the Pi. I ran into issues when trying to connect to wifi, which is necessary for setting up printing. I got a lot of practice with the Linux terminal, though (lol)

Arduino: I had used these before, but it had been a while. Golan and I wrote a sketch called kander1 (accessible in the GitHub branch) that printed random characters. It actually worked for a little bit! I also got some practice breadboarding, which was fun.

Random side note: I also made this drawing program in Processing that I kind of like. It’s good for drawing human heads. It was part of my experimentation when I was considering including a drawing corresponding to each star sign rather than the Unicode symbol. I wanted to be able to manipulate the drawing in other programs, so I saved the points that form the ends of the curves into a JSON file, which can be loaded and parsed in another program to redo the actual drawing (instead of loading a PNG, for example).
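A minimal sketch of that JSON round trip, using Processing's built-in JSON API (the field and file names are my own):

// save: pack each curve endpoint into a JSONObject
JSONArray pts = new JSONArray();
for (int i = 0; i < xs.length; i++) {  // xs/ys: hypothetical arrays of curve-end coordinates
  JSONObject p = new JSONObject();
  p.setFloat("x", xs[i]);
  p.setFloat("y", ys[i]);
  pts.setJSONObject(i, p);
}
saveJSONArray(pts, "data/headPoints.json");

// load (in another sketch) and rebuild the curve from the same points
JSONArray loaded = loadJSONArray("headPoints.json");
noFill();
beginShape();
for (int i = 0; i < loaded.size(); i++) {
  JSONObject p = loaded.getJSONObject(i);
  curveVertex(p.getFloat("x"), p.getFloat("y"));
}
endShape();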

drawing program on github

aliot-lastproject

I had originally planned on finishing an app for the HoloLens, but amidst a hectic final week of classes and numerous software updates to Unity and Visual Studio, I decided to go with a simpler project. I made a Unity app that tracks mouth movement and screams for the user when they open their mouth.

Feeling frustrated with my projects and exams, I said to myself, “I just want to scream.” This is often not a reasonable reaction in class or in any other public setting, but screaming can be very cathartic. This app, used with headphones, allows a user to experience the stress-relieving sensation of screaming without disturbing those in their vicinity.

I was heavily inspired by some of these public works (mostly the vicarious scream button):
http://thecreatorsproject.vice.com/blog/clever-signs-public-art-spaces

Lumar Final process

See my context reflection here. It’ll give some background information on why I chose to explore this area.


I messed up and used popStyle instead of popMatrix when translating the ellipses on the z axis.
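For reference, the two stacks save different state, which is why the wrong pop let the z-translations accumulate:

pushStyle();          // saves fill/stroke state only -- NOT transformations
pushMatrix();         // saves the current coordinate system
translate(0, 0, z);   // move the ellipse out along the z axis
ellipse(x, y, 4, 4);
popMatrix();          // undoes the translate; using popStyle() here instead
popStyle();           // left every later ellipse pushed further out in z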


I had calculated the hue of each point according to its relative position within the array of points that is constantly being added to. This enabled the drawn form, no matter how many points, to always have the full range of the rainbow. Unfortunately, it slowed down very quickly.

So I just changed the mapping to a mod of roughly 10,000 points. That said, sometimes it takes fewer than 10,000 points for the two rotating circles to complete the radial curve shape.
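In the sketch below, that amounts to one line per point (the code actually wraps at 11,000 rather than 10,000):

int wrapped = starArray.size() % 11000;      // position within the current cycle
float hue = map(wrapped, 0, 10999, 0, 100);  // full rainbow in HSB(100) mode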


Mapping the points where the derivative crosses zero to the borders of the letter form – essentially taking the z of the point and mapping it to the x of the letters:

Ex: “B”


Some additional inspiration from Aman and Char – acetate printouts of the point cloud in many, many layers, mapped to the color pattern – and additional inspiration from Thomas Medicus’s work.

Special Character 🙂 displayed below:

Changed some of the formulas to allow the points to map and move smoothly and reconnect to stay continuous. Slight color patterns were also adjusted in.

The issue is just that, since everything is mapped continuously and evenly throughout, the middle bar of the A is missing.

The letter “A”:

The letter ‘C’ above worked a lot better.

I really wanted to strive to get the letter form to be more informed by its cycloid method of creation. Simply having a cloud of points doesn’t justify having the cycloid creation within the process.

This is why I wasn’t satisfied with this earlier iteration below:

For the future, what I want to do:

  • make the line of the circle disappear when you look at the letter form straight on (Golan’s suggestion)
  • stereoscopic view in three.js for Google Cardboard
  • remapping in real-time to different words
 
import geomerative.*;
//import cardboard.*;

// Declare the objects we are going to use, so that they are accessible from setup() and from draw()
RFont f;
RShape grp;
RPoint[] points;

import peasy.*;
PeasyCam cam;

DMachine DM;
float h2;
float w2;
float d2;
float CAPHEIGHT = 300;
// determines the main artboard size (radius)
float ArtboardRadius = 500;

// Animation for starting circles
float Radius1st = floor(random(ArtboardRadius * 0.2, ArtboardRadius * 0.5));
float Radius2nd = floor(random(ArtboardRadius * 0.2, ArtboardRadius * 0.5));
float speedModif1st = floor(random(3)+1);
float speedModif2nd = floor(random(3)+1);

// arm lengths
float armlength = (ArtboardRadius * 1.05) + floor(random(-75, 75));

// beginning location of the drawing arm circles and its speed
float n1, n2;
float nShift = radians(floor(random(45, 135)));
float nSpeed = 0.0005;

// a new layer for the drawing machine
PGraphics fDM;
ArrayList<Starpoint> starArray = new ArrayList<Starpoint>();
IntList inventoryOfEndpointi = new IntList();
int index = 0;
float letterXmax=0;
float pointsArrayIndexOfMax=0;
void setup() {
  size(1280, 720, P3D);
  //fullScreen(PCardboard.STEREO);
  inventoryOfEndpointi.append(0);
  // VERY IMPORTANT: always initialize the Geomerative library in setup
  RG.init(this);
  // Load the font file we want to use (it must be in the sketch's data folder), at size 400, aligned LEFT
  grp = RG.getText("F", "Comfortaa_Bold.ttf", 400, LEFT);

  d2 = dist(0, 0, w2, h2);
  colorMode(HSB, 100);
  cam = new PeasyCam(this, 100);
  cam.lookAt(650, 300, 0);
  cam.setMinimumDistance(50);
  cam.setMaximumDistance(1000);
  cam.setDistance(800);
  cam.setYawRotationMode();
  ortho();
  smooth();
  background(0);
  strokeCap(CORNER);

  n1 = radians(180);
  n2 = n1 + nShift;

  DM = new DMachine();
  fDM = createGraphics(width, height, P3D);

  starArray.add(new Starpoint(width/2, height/2, 0));
  RG.setPolygonizer(RG.UNIFORMLENGTH);
  RG.setPolygonizerLength(1);
  points = grp.getPoints();
  // shift the letter data down to fit the coordinate space of the cycloid drawing
  for (int i = 0; i < points.length; i++) {
    points[i].y = points[i].y + 120;
  }
}
void keyPressed() {
  // dump the accumulated points as a copy-pasteable array
  println("float[][] starArray={");
  for (int i = 0; i < starArray.size(); i++) {
    println("{" + starArray.get(i).x + "," + starArray.get(i).y + "," + starArray.get(i).z + "},");
  }
  println("};");
}
float noiseInput = 0;

void draw() {
  background(0);
  // draw artboard (big circle)
  noFill();
  // TODO: make the stroke opacity disappear when you turn it to the side
  stroke(100);
  strokeWeight(1);
  ellipse(width/2, height/2, ArtboardRadius*2, ArtboardRadius*2);

  // draw initial points (begin points)
  DM.draw1stBeginPoint(n1, Radius1st, speedModif1st);
  DM.draw2ndBeginPoint(n2, Radius2nd, speedModif2nd);
  DM.CalculateEndPoint(armlength);

  // points further out should take z's from letter points that are further away
  float distances = dist(DM.tX, DM.tY, width/2, height/2);
  float zz = 0; // initial z value of all points

  stroke(100);
  line(DM.tX, DM.tY, 0, DM.tX, DM.tY, zz);
  int sizeOfArray = starArray.size() % 11000;
  starArray.add(new Starpoint(DM.tX, DM.tY, sizeOfArray));

  for (int i = 1; i < starArray.size()-1; i++) {
    Starpoint point = starArray.get(i);
    point.render();
    if (!(point.hasBeenChecked)) {
      // check whether this is a point where direction changes drastically, if it hasn't been checked already
      point.findIfMaxLetter(starArray.get(i-1), starArray.get(i+1), i, noiseInput);
      noiseInput += .0001;
    } else if (point.hasBeenChecked && (inventoryOfEndpointi.get(inventoryOfEndpointi.size()-1) > i)) {
      if (!(point.mapped)) {
        for (int j = 1; j < inventoryOfEndpointi.size(); j++) {
          int indexOfTarget = inventoryOfEndpointi.get(j);
          int indexOflastEndpoint = inventoryOfEndpointi.get(j-1);
          if ((i < indexOfTarget) && (i > indexOflastEndpoint)) {
            // interpolate this point's endZ between the surrounding endpoints
            point.mappingToEndpoints(i, starArray.get(indexOfTarget).endZ, indexOfTarget, indexOflastEndpoint, starArray.get(indexOflastEndpoint).endZ);
          }
        }
      } else if (point.mapped) {
        point.moveTowardsEndZ();
      }
    }
  }
}
class Starpoint {
  boolean hasBeenChecked = false;
  boolean endCurve = false;
  boolean mapped = false;
  float x, y, z, endZ;
  float hue;

  Starpoint(float xx, float yy, int arraySize) {
    hue = map(arraySize, 0, 10999, 0, 100);
    x = xx;
    y = yy;
    z = 0;
  }

  void render() {
    pushMatrix();
    // color is mapped by array position to ensure a full rainbow no matter how many points there are
    stroke(hue, 100, 100);
    translate(0, 0, z);
    point(x, y, 2);
    popMatrix();
  }

  void moveTowardsEndZ() {
    z = 0.99*z + 0.01*endZ; // ease toward the target depth
  }

  void mappingToEndpoints(int currentPointIndex, float targetZ, int targetEndpointindex, int lastEndpointindex, float lastZ) {
    endZ = map(currentPointIndex, lastEndpointindex, targetEndpointindex, lastZ, targetZ);
    mapped = true;
  }

  void findIfMaxLetter(Starpoint p1, Starpoint p2, int index, float noisy) {
    if ((p1.x >= this.x && p2.x >= this.x) || (p1.x <= this.x && p2.x <= this.x) ||
        (p1.y >= this.y && p2.y >= this.y) || (p1.y <= this.y && p2.y <= this.y)) {
      this.endCurve = true;
      this.mapped = true;
      addEndCurveIndexValueToGlobalArraylist(index);
      // clearly it's an edge; go search for an appropriate z
      FloatList inventoryZ = new FloatList(); // stores the multiple possible z's for later
      for (int j = 0; j < points.length; j++) {
        float ltrY = points[j].y; // they have to match y values, generally
        if (this.y > ltrY-1+height/2 && this.y < ltrY+height/2) {
          // if it matches in y, add the x value of the letter point as a possible z value
          inventoryZ.append(points[j].x);
        }
      }
      inventoryZ.append(0);

      // which of the possible z's will it take from the inventory?
      //int whichZ = floor(random(0,inventoryZ.size()-1));
      int whichZ = floor(map(noise(noisy), 0, 1, 0, inventoryZ.size()));
      this.endZ = inventoryZ.get(whichZ);
    } else {
      float radialDistance = sq(p1.x-width/2) + sq(p1.y-height/2);
      float radialDistance1 = sq(this.x-width/2) + sq(this.y-height/2);
      if (radialDistance >= radialDistance1) { // is it bigger farther away?
        // if it is, is the next one smaller?
        float radialDistance2 = sq(p2.x-width/2) + sq(p2.y-height/2);
      }
    }
    this.hasBeenChecked = true;
  }
}

void addEndCurveIndexValueToGlobalArraylist(int indexValue) {
  inventoryOfEndpointi.append(indexValue);
}
//////////////////
class DMachine {
  float MAxx1, MAyy1;
  float MAxx2, MAyy2;
  float tX, tY;

  float anim;

  DMachine() {
    anim = 0;
  }

  void draw1stBeginPoint(float n1_, float Radius1st_, float speedModif1st_) {
    float MAx1, MAy1;
    MAx1 = width/2 + ArtboardRadius * cos(n1);
    MAy1 = height/2 + ArtboardRadius * -sin(n1);
    stroke(60);
    strokeWeight(1);
    fill(0);
    ellipse(MAx1, MAy1, Radius1st, Radius1st);

    // resets the angle
    n1 -= nSpeed;
    if (degrees(n1) < 0) {
      n1 = radians(360);
    }

    noStroke();
    fill(255);
    MAxx1 = MAx1 + cos(anim * speedModif1st) * Radius1st/2;
    MAyy1 = MAy1 + sin(anim * speedModif1st) * Radius1st/2;
    anim += 0.025;
    fill(60);
    ellipse(MAxx1, MAyy1, 5, 5);
  }

  void draw2ndBeginPoint(float n2_, float Radius2nd_, float speedModif2nd) {
    float MAx2 = width/2 + ArtboardRadius * cos(n2);
    float MAy2 = height/2 + ArtboardRadius * -sin(n2);
    stroke(60);
    strokeWeight(1);
    fill(0);
    ellipse(MAx2, MAy2, Radius2nd, Radius2nd);

    // resets the angle
    n2 -= nSpeed;
    if (degrees(n2) < 0) {
      n2 = radians(360);
    }

    noStroke();
    fill(255);
    MAxx2 = MAx2 + sin(anim * speedModif2nd) * Radius2nd/2;
    MAyy2 = MAy2 + cos(anim * speedModif2nd) * Radius2nd/2;
    anim += 0.025;
    fill(60);
    ellipse(MAxx2, MAyy2, 5, 5);
  }

  void CalculateEndPoint(float armlength_) {
    // "crazy" math stuff here
    // only look if you dare!

    stroke(60);
    fill(60);

    // distance between the two main points
    float a = dist(MAxx1, MAyy1, MAxx2, MAyy2);
    line(MAxx1, MAyy1, MAxx2, MAyy2);

    // the mid-point
    float a2X = lerp(MAxx1, MAxx2, 0.5);
    float a2Y = lerp(MAyy1, MAyy2, 0.5);
    ellipse(a2X, a2Y, 5, 5);

    // the armlength "compensator", aka the triangle height calculator
    float fD1 = abs(sq(armlength) - sq(a/2));
    float fD2 = sqrt(fD1);

    // "compensation" angle
    float alpha = asin(abs(MAyy1 - MAyy2) / a);

    if (MAyy1 - MAyy2 < 0 && MAxx1 - MAxx2 < 0) {
      // works in between "180-270"
      // a is \ angle
      tX = a2X + fD2 * cos(-PI/2+alpha);
      tY = a2Y + fD2 * sin(-PI/2+alpha);
    } else if (MAyy1 - MAyy2 < 0 && MAxx1 - MAxx2 > 0) {
      // works in between 90-180
      // a is / angle
      tX = a2X + fD2 * cos(PI/2-alpha);
      tY = a2Y + fD2 * sin(PI/2-alpha);
    } else if (MAyy1 - MAyy2 > 0 && MAxx1 - MAxx2 > 0) {
      // works in between 0-90
      // a is \ angle
      tX = a2X + fD2 * cos(PI/2+alpha);
      tY = a2Y + fD2 * sin(PI/2+alpha);
    } else if (MAyy1 - MAyy2 > 0 && MAxx1 - MAxx2 < 0) {
      // works in between 270-360
      // a is / angle
      tX = a2X + fD2 * cos(-PI/2-alpha);
      tY = a2Y + fD2 * sin(-PI/2-alpha);
    }

    // final lines
    line(MAxx1, MAyy1, tX, tY);
    line(MAxx2, MAyy2, tX, tY);
  }
}