takos-lookingoutwards4

A JOURNEY, LONDON
Paper, Wood, Electronics
2008

A Journey, London is a box that opens up to reveal a paper model of London, and then lights up the map in different ways in an effort to evoke memories in the viewer. I thought this was interesting because no specific memories are being targeted; the artwork simply plays on the fact that the viewer is bound to have memories associated with different locations in London, whether they live there or are visiting.

takos-lookingoutwards3

Generative Art
Quayola – Pleasant Places

Quayola’s Pleasant Places is a series of six images formed from videos of trees. The videos are run through filters that analyze the movement in each frame and treat it as brush strokes, similar to a kind of controlled motion blur; the leaves and branches do the majority of the painting in this process because they are the most mobile parts of the tree. I thought the results were particularly compelling in that they do not look generative, but resemble a form of impressionism. I also thought it was interesting that parts of the original video show up in some of the stills, yet do not look out of place.
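That movement-into-brushstrokes idea can be approximated with frame differencing. The sketch below is a minimal Processing illustration of the general approach, not Quayola’s actual pipeline: it compares consecutive video frames and paints short strokes where the image changed. The filename, thresholds, and the random stroke direction are all my own assumptions; Quayola presumably derives stroke direction from optical flow rather than at random.

import processing.video.*;

Movie video;
PImage prev;

void setup() {
  size(1280, 720);
  video = new Movie(this, "trees.mp4"); // assumed filename in the data folder
  video.loop();
  background(255);
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (video.width == 0) return; // wait until the first frame arrives
  if (prev != null) {
    prev.loadPixels();
    video.loadPixels();
    // sample random points; where the frame changed most, paint a stroke
    for (int i = 0; i < 400; i++) {
      int x = (int) random(video.width);
      int y = (int) random(video.height);
      int idx = y * video.width + x;
      color c = video.pixels[idx];
      float motion = abs(brightness(c) - brightness(prev.pixels[idx]));
      if (motion > 10) {
        stroke(c, 40);
        strokeWeight(map(motion, 10, 255, 1, 6));
        float a = random(TWO_PI); // direction is random here, not flow-derived
        float len = map(motion, 10, 255, 4, 30);
        line(x, y, x + cos(a) * len, y + sin(a) * len);
      }
    }
  }
  prev = video.get(); // keep a copy of this frame for the next comparison
}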

Antar-LastProject

Generative Posters 

All course code is here and here.

For my final project I wanted to come out feeling extremely confident in typography and colour generation that could be used in a practical context. A significant portion of the generative work I’ve seen creates interesting art while playing with type, but rarely have I seen it used to create actual communication pieces. When my peers and I were first introduced to generative art and design, we were a bit nervous and apprehensive; it felt as if some were saying that code could replace our jobs as designers. Through this final project, however, I finally understood that creating work with generative typography is no different from using the tools that are already so familiar to designers, such as the Adobe suite.
My favourite project of the semester was the book project, because it gave me new tools for thinking about type manipulation. Being able to easily create image and pattern through type is a powerful skill for a communication designer, and I now feel more prepared to create pieces with greater visual complexity and depth. As reflected in my personal work, I am very fond of repetition and intricate pattern details. Through the book project and this final project, I feel that I can now effortlessly create patterns and lengthy repetitions that would normally take hours by hand. I also feel comfortable writing multiple programs to create richer work, which lets me play to the strengths of each language.

(Above) Some of Yuan Guo’s poster work that inspired me.

When I began this project I looked to the work of two other designers for inspiration on where to start and what goals to set for myself. The first was Yuan Guo, who is very talented when it comes to colour and form relationships. His poster work was a great source of inspiration, and I began to wonder which elements of it I could recreate with code. The first element I was interested in was his effective use of fluorescent colours and neon gradients. That became the first goal of my project: to create generative neon gradients that would translate well from screen to print. The second goal I set for myself was to create my own style of glitching. I’ve seen this design trend overused and misused very often, but I still think it can be a tasteful way to create texture if used effectively. In Guo’s glitch art calendar, for instance, I think he has overused the effect, as many designers and artists do, to the point where some pieces are illegible. In creating my own glitching I first attempted pixel manipulation, but then discovered that I would not be able to export my work as a PDF, which is vector based. I ended up creating fake “vector pixels” (single points) instead. I selected portions of the gradients and replaced rows of “pixels” with the colour of a pixel in close proximity. I think this created a subtle texture rather than the jarring distortion that is commonly seen in glitch art.
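As a rough illustration of that “vector pixel” approach (my own minimal reconstruction, not the project’s actual code), the Processing sketch below draws a gradient as a grid of small vector shapes and occasionally swaps in the colour of a nearby row. The palette, cell size, and displacement range are all assumptions; recording with the processing.pdf library keeps each cell a vector object in the exported PDF.

import processing.pdf.*;

int cell = 4; // size of each fake "vector pixel"

void setup() {
  size(600, 800);
  noLoop();
}

void draw() {
  beginRecord(PDF, "poster.pdf"); // every rect below stays vector in the PDF
  noStroke();
  color a = color(255, 60, 180); // placeholder neon magenta
  color b = color(60, 255, 200); // placeholder neon mint
  for (int y = 0; y < height; y += cell) {
    // glitch: occasionally sample the gradient colour from a row nearby
    int srcY = y;
    if (random(1) < 0.08) {
      srcY = constrain(y + (int) random(-40, 40), 0, height - 1);
    }
    for (int x = 0; x < width; x += cell) {
      fill(lerpColor(a, b, srcY / (float) height));
      rect(x, y, cell, cell);
    }
  }
  endRecord();
}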

In addition to Guo, I looked to my professor Kyuha (Q) Shim for typography inspiration. Q’s site Code and Type was a good source of examples of coded type that plays with form. However, Q’s work is node-based, which I am unfamiliar with. Using Processing, I first attempted to write type on complex wave paths, then tried to manipulate the type the same way I did with the glitching, by manipulating the pixels. Later Golan introduced me to Geomerative, which made typography manipulation effortless. I was then able to create subtle generative titles that mimicked the style of the glitch art.

(Above) Some of Q’s coded typography play that inspired me.
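Geomerative’s contribution is that it turns a font outline into plain points you can perturb. Here is a minimal sketch of the kind of glitched title this enables; the font file, sampling length, and jitter amounts are my assumptions, not the actual poster code.

import geomerative.*;

void setup() {
  size(900, 360);
  RG.init(this);
  RFont font = new RFont("Biko_Regular.otf", 150, RFont.LEFT); // any .ttf/.otf in data/
  RG.setPolygonizer(RG.UNIFORMLENGTH);
  RG.setPolygonizerLength(4); // sample each letter outline every 4 px
  background(255);
  stroke(0);
  RPoint[] pts = font.toGroup("60-212").getPoints();
  for (RPoint p : pts) {
    // jitter a small fraction of the outline points sideways,
    // echoing the row-displacement glitch in the gradients
    float dx = (random(1) < 0.1) ? random(-10, 10) : 0;
    point(p.x + dx + 60, p.y + 220);
  }
  noLoop();
}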

After creating the backgrounds, which contained the gradients, the glitch texture, and the title, I used Illustrator to convert the PDFs to TIFFs at the appropriate size for my InDesign file. I was then able to use basil.js to create the growing 60-212 title, as well as the “matrix” type treatment in the background. The “matrix” pattern takes the text of the course description paragraph and splits the page into a grid of text boxes, each large enough for only one character. It then places one character from the paragraph into each text box and randomly shifts each baseline within a given range.

(Above) The wall behind my desk in my studio, showing the three generative posters with other work, including a page from the generative book and a print from Brandon Ngai. An Epson P800 inkjet printer proved effective for printing, as the colours turned out pleasantly neon without being too harsh.
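For reference, the “matrix” treatment described above is only a nested loop. Below is a Processing analogue of the idea (the real version ran as basil.js inside InDesign, with true baseline shifts rather than this sketch’s vertical offsets, and the sample text here is a placeholder):

String course = "An introductory course in computational art and design ..."; // placeholder text
int cell = 24;

void setup() {
  size(600, 800);
  background(255);
  fill(0);
  textSize(14);
  textAlign(CENTER, CENTER);
  noLoop();
  int i = 0;
  for (int y = cell; y < height; y += cell) {
    for (int x = cell / 2; x < width; x += cell) {
      char c = course.charAt(i % course.length());
      i++;
      // a random vertical offset stands in for InDesign's baseline shift
      text(c, x, y + random(-6, 6));
    }
  }
}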

arialy-manifesto

“3. The Critical Engineer deconstructs and incites suspicion of rich user experiences.”

I found this tenet of the Critical Engineering Manifesto curious, since rich user experiences are typically thought of as purely positive. On further thought, a very rich user interface can disguise underlying problems. A basic example would be purchasing some commodity, say headphones: people might want to buy edgy, well-designed headphones even if they’re overpriced and the sound quality isn’t as good as the alternatives’. This happens with the way almost anything is packaged. When strong user interface and convenience intersect, better alternative solutions and problematic engineering can be swept under the rug. Another basic (perhaps opinionated) example is the popularity of Venmo. Linking your banking info to this mobile app rarely raises red flags because of its convenience, despite the fact that it’s not even that much of a hassle to pay someone back in cash. This surely extends to issues whose consequences go beyond the individual.

arialy-LookingOutwards09

The Beginner’s Guide was a PC game I played on my classmate drewch’s Steam account (funnily enough, we both decided to create Unity games for our final projects).

The game uses narration and a series of another (likely fictional) game designer’s works to construct a sense of this unknown artist’s character. The game is really powerful not because of intensely beautiful visuals, story (in the traditional sense), or fun gameplay. It’s really about the way the viewer projects their own memories onto the works this unknown game designer seemingly created about his own problems.

Seeing games used as a true art medium made me want to explore games as part of my own practice. I’m also very interested in the use of light within artworks, and the simulation of lights within the Unity game space was something I wanted to play with. This led me to make something in Unity for the final project.

arialy-lookingOutwards08

ChainFORM from the MIT Media Lab presents some interesting insight into the present and future of technology. The segment where they use the chain to detect and correct a person’s posture was, to me, an unexpected application. There’s such a broad range of uses that it’s fascinating to think where it could wind up in the future: I can see it being used in children’s toys, clothing in performance art, robotic arms, future styluses… the list goes on.

Kelc – LookingOutwards08

I personally don’t have a very broad or thorough concept of what physical computing entails, so I decided to look at three projects that are very different but relate to different sides of physical computing.

The first piece I really liked was similar to Design IO’s Connected Worlds. Curious Displays by Julia Tsao simulates what may eventually become a real physical project, using a connected display on two screens in a living-room setting and some sort of sensor that detects the placement of objects around the room.

 

Kelc – LookingOutwards09

For my final Looking Outwards I decided to look at Angélica Dass; although she is not a tech artist she inspired much of my work for my game and my last project for this class.

Angélica Dass is a Brazilian photographer based in Madrid, Spain. She is best known for her exhibit Humanae, which explores the true variation between skin tones and challenges what exactly makes us the race we are classified as; as she asked in her TED Talk, “does it have to do with our origin, nationality or bank account?” She speaks about growing up in Brazil, which, like many Latin American countries (and countries in general), culturally contains many implicit and explicit biases against people of darker complexions. Dass recalls being treated like a nanny or a prostitute many times because of her complexion, a story eerily similar to one I heard while interviewing a few Brazilian women for my project. This issue has improved tenfold since the times they describe; however, it has not been eradicated anywhere in the world.

 

Dass’ other pieces include Vecinas, a collaboration with the Mallan Council of Spain that aimed to reshape the perception of migrants and refugees of Mall through archived pictures and Dass’ photography, and De Pies A Cabeza, a series of photographs of people’s faces and their shoes. Her work challenges existing notions in a subtle, objective, but very powerful way. Conceptually she is “goals,” so to say; in my game and in a lot of my work I aim to achieve the same sense of subtlety she does, with the same amount of intellectual impact.

arialy-LastProject

Ever since I played The Beginner’s Guide, a PC game on Steam, I have seen games as a more accessible medium for making art. Though I’ve played plenty of games I would consider meaningful, they were very high-production, in-depth games. The Beginner’s Guide includes a series of short games that each convey something even without a true storyline. Unity, a free game engine, seemed like the perfect entry point to experiment with making a basic “game.” My main goal was to make something in Unity and get past the initial learning curve, adding Unity to the list of tools I’m familiar with.

Since I thought it would be difficult to build complicated visuals in this short a time, light and sound became the main components of the project. My concept came about while I was listening to a song that felt really nostalgic. That nostalgia made me think about the close relationship between emotions, memory, and sound, and I wanted to create a space where these sound memories could live. Each orb of sound in the space is related to a specific memory I have. None of the sounds were made for this project; some were found online, while others were ripped from existing videos I have.

Code can be found here
Explore the project here

arialy-proposal

For my final project I’d like to make something in Unity. Though I’d really like to make something meaningful, I’m more concerned about actually jumping into Unity for the first time and making something. I’d like to have almost everything coded.


ngdon-LookingOutwards09

Part of my last project involved generative text, so I looked into ways neural networks can be used for text generation. Golan pointed me to a couple of resources, and one of them seems particularly interesting and effective: http://karpathy.github.io/2015/05/21/rnn-effectiveness/

The author applied recurrent neural networks to Wikipedia articles, Shakespeare, code, and even research papers, and the results are very realistic.

The model learned how to start and end quotation marks and brackets, which I think is one of the more difficult problems in text generation.

Generative paper.

Generative code.

I’m particularly fascinated by the way neural networks magically learn. Unfortunately, I never got them to work that well in my final project.

I was also planning to generate plants alongside my imaginary animals, so I researched L-systems, a type of grammar that can be used to describe the shape of a plant.

I read this paper: http://algorithmicbotany.org/papers/abop/abop-ch1.pdf, in which the usage and variations of L-systems are extensively explained.

I’m very interested in this system because it condenses a complicated graphic, such as that of a tree, into a plain line of text. Simply changing the order of a few symbols can result in a vast variety of different tree shapes.

For example, the above image can be simply described by the rule

(X → F−[[X]+X]+F[+FX]−X), (F → FF)
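Expanding and drawing such a rule takes surprisingly little code. Below is a minimal Processing turtle sketch for this exact pair of rules; the branching angle, step length, and generation count are my own choices, so the output will only roughly resemble the image above.

String axiom = "X";
int generations = 5;
float angle = radians(25);
float step = 4;

// apply the productions X -> F-[[X]+X]+F[+FX]-X and F -> FF once
String expand(String s) {
  StringBuilder out = new StringBuilder();
  for (char c : s.toCharArray()) {
    if (c == 'X')      out.append("F-[[X]+X]+F[+FX]-X");
    else if (c == 'F') out.append("FF");
    else               out.append(c);
  }
  return out.toString();
}

void setup() {
  size(600, 600);
  background(255);
  stroke(40, 110, 40, 150);
  String s = axiom;
  for (int i = 0; i < generations; i++) s = expand(s);
  // turtle interpretation: F draws forward, +/- turn, [ ] push/pop state
  translate(width / 2, height);
  for (char c : s.toCharArray()) {
    if (c == 'F') { line(0, 0, 0, -step); translate(0, -step); }
    else if (c == '+') rotate(angle);
    else if (c == '-') rotate(-angle);
    else if (c == '[') pushMatrix();
    else if (c == ']') popMatrix();
    // 'X' is a placeholder symbol and draws nothing
  }
  noLoop();
}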

As I’m also doing text generation, I thought about the interesting possibility of applying text generation methods to the rules themselves. That way the shapes of the trees would not be limited by the number of rules I can find, and exotic, alien-looking trees could easily be churned out.

Below are a couple of outputs I was able to get by applying a Markov chain to existing L-system rules using Python.

More often, however, the program generates something that resembles a messy ball of string, so I’m still working on it.
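A toy version of that idea, sketched in Processing rather than the original Python, shows where the messy balls of string come from: a character-level Markov chain happily emits unbalanced brackets unless it is constrained. The sample rules and the bigram order here are my own assumptions.

import java.util.HashMap;
import java.util.ArrayList;

// learn character bigrams from known L-system rules, then emit a new rule
String[] rules = {
  "F-[[X]+X]+F[+FX]-X",
  "F[+F]F[-F]F",
  "FF-[-F+F+F]+[+F-F-F]"
};

String generate(int len) {
  HashMap<Character, ArrayList<Character>> model = new HashMap<Character, ArrayList<Character>>();
  for (String r : rules) {
    for (int i = 0; i < r.length() - 1; i++) {
      char a = r.charAt(i);
      if (!model.containsKey(a)) model.put(a, new ArrayList<Character>());
      model.get(a).add(r.charAt(i + 1));
    }
  }
  StringBuilder out = new StringBuilder("F");
  char cur = 'F';
  for (int i = 0; i < len; i++) {
    ArrayList<Character> nexts = model.get(cur);
    if (nexts == null) break;
    cur = nexts.get((int) random(nexts.size()));
    out.append(cur);
  }
  return out.toString();
}

void setup() {
  // brackets often come out unbalanced, which is exactly
  // the "messy ball of string" problem described above
  println(generate(30));
}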

hizlik- LastProject

Together Again

Click here to play. Click here to see the GitHub repo.

Do you ever wish you could turn back time? Fix what you’ve broken, see those you’ve lost? Together Again is a simple game with a unique mechanic: the goal is to reunite the square and the circle, to make them one and the same. Simply use your mouse to tap or bounce the falling square towards the circle. The faster you hit the square, the harder the hit and the faster the square’s motion. As the levels progress, gravity becomes stronger, and it gets harder and harder to be together again. Here, your wish has come true: you can travel back in time to fix your mistakes, plan out your actions through trial and error, and hopefully succeed. Your past attempts show as faded versions in the background, and the more you fail, the more crowded and distracting the screen becomes, and the harder it gets to succeed. There is no game over. There is just time.

This game is related to my previous project, hizlik-Book (Year One), in that it is about the unexpected and unbelievable event of a breakup that occurred while the book was being printed. However, like the book, the correlation of my projects to my personal life is left ambiguous and often unnoticed. Specifically, the square represents a male figure, the circle a female figure, and green is her favorite color.

The game was created in p5.js, with the code provided below.


var c; // canvas
var cwidth = window.innerWidth;
var cheight = window.innerHeight;
var nervous; var biko; // fonts

var gravity = 0.3;
var mouseBuffer = -3;
var bounce = -0.6;
var p2mouse = [];
var boxSize = 50;

var gameState = "menu";
var vizState = "static";
var transitionVal = 0;
var level = 1;
var boxState = "forward";
var offscreen = false;
var offscreenCounter = 0;
var keyWasDown = false;
var gameCounter = 0;

var currBox = null;
var currCirc = null;
var ghosts = [];

function setup() {
	c = createCanvas(cwidth, cheight);
	background(255);
	frameRate(30);
	noCursor();
	nervous = loadFont("Nervous.ttf");
	biko = loadFont("Biko_Regular.otf");
}

window.onresize = function() { 
	cwidth = window.innerWidth;
	cheight = window.innerHeight;
	c.size(cwidth, cheight);
}

function draw() {
	background(255);

	// splash menu
	if(gameState == "menu") {
		noStroke();
		fill(121,151,73)
		textFont(nervous);
		textSize(min(cwidth,cheight)*.1);
		textAlign(CENTER,CENTER);
		text("Together Again", cwidth/2, cheight/2);

		fill(218,225,213);
		textFont(biko);
		textSize(min(cwidth,cheight)*.03);
		text("hold SPACE to go turn back time", cwidth/2, cheight-cheight/5);

		if(keyIsDown(32)) { vizState = "transition"; }
		if(keyIsDown(68)) { gravity = 1; }

		if(vizState == "transition") {
			transitionVal += 10;
			fill(255,255,255,transitionVal);
			rect(0,0,cwidth,cheight);
			if(transitionVal>255) {
				gameState = "game";
			}
		}
	}

	// actual game
	if(gameState == "game") {
		if(currBox == null) currBox = new Box(null);
		if(currCirc == null) currCirc = new Circle();

		if(vizState == "transition") {
			currBox.draw();
			currCirc.draw();
			transitionVal -= 10;
			fill(255,255,255,transitionVal);
			rect(0,0,cwidth,cheight);
			if(transitionVal < 0 && !keyIsDown(32)) {
				vizState = "static";
			}
		}
		else {
			// check if space is being pressed
			if(keyIsDown(32)) {
				boxState = "rewind"
				if(!keyWasDown) { 
					ghosts.push(currBox);
					keyWasDown = true;
				}
			}
			else if(keyWasDown) {
				keyWasDown = false;
				boxState = "forward";
				var prevCount = 1;
				var prev = null;
				while(prev == null) {
					// console.log(ghosts.length-prevCount);
					prev = ghosts[ghosts.length-prevCount].getCurrPos();
					prevCount++;
					if(prevCount > ghosts.length) {
					for(var i=0; i<ghosts.length; i++) {
						// … (a chunk of code was lost when the blog export ate "<"
						// characters; see the GitHub repo for the full source)
					}
				}
				// …
				if(transitionVal-150 > 255) {
					console.log("new level");
					level++;
					gravity = constrain(gravity+.2, 0, 3);
					currBox = new Box([random(cwidth-boxSize), random(cheight/2), 0, 0]);
					// currBox = new Box(null);
					currCirc = new Circle();
					ghosts = [];
					vizState = "transition";
					boxState = "forward"
					transitionVal = 255;
				}
				else if(transitionVal-150 > 150) {
					currBox.draw();
					currCirc.draw();
					fill(255,255,255,transitionVal-150);
					rect(0,0,cwidth,cheight);
				}
				else {
					currBox.draw();
					currCirc.draw();
				}
			}
			else if(boxState == "rewind") {
				gameCounter--;
				if(gameCounter>0) {
				for(var i=0; i<ghosts.length; i++) {
					// … (a large section was lost in the blog export here, including
					// the end of draw() and the start of the Box constructor; see GitHub)
				}

	// inside function Box(startVals), updating the vertical bounce:
			var vm = (mouseY - pmouseY); // velocity of the mouse in y direction
			if(vm >= 0) { this.vy += constrain(vm, -40, 40); }
				else { this.vy *= bounce; }
				this.pos[1] = mouseY+mouseBuffer;
			}

			// ========== UPDATE HORIZONTAL ========== //
			var hpos = map(mouseX, this.pos[0], this.pos[0]+boxSize, -1, 1);
			this.vx += 10 * bounce * hpos;
		}

		// update horizontal bounce
		if (collision != null && (collision[2]=="left" || collision[2]=="right")) {
			var vm = (mouseX - pmouseX); // velocity of the mouse in x direction
			this.vx += constrain(vm, -40, 40);

			if(collision[2] == "left") {
				if(this.vx > 0) { this.pos[0] = mouseX; }
				else { this.vx *= bounce; }
				this.pos[0] = mouseX;
			}
			if(collision[2] == "right") {
				if(this.vx < 0) { this.pos[0] = mouseX-boxSize; }
				else { this.vx *= bounce; }
				this.pos[0] = mouseX-boxSize;
			}
		}

		// update position
		if(this.vx > 20*gravity || this.vx < -20*gravity) { this.vx *= 0.85; }
		if(this.vy > 35*gravity || this.vy < -35*gravity) { this.vy *= 0.85; }
		this.vy = constrain(this.vy + gravity, -30, 50);
		this.pos[0] += this.vx;
		this.pos[1] += this.vy;
		
		//debug
		if(this.pos[1]-boxSize>cheight || this.pos[0]+boxSize<0 || this.pos[0]>cwidth) {
			offscreen = true;
		}
		this.history.push([this.pos[0], this.pos[1], this.vx, this.vy]);
		this.currIndex++;
	}

	this.draw = function() {
		noStroke();
		fill(57,67,7);
		if(!this.active)
			fill(239,240,235);
		if(this.currIndex >= 0 && this.currIndex < this.history.length) {
			if(boxState == "reunited") { 
				this.corner = constrain(this.corner+1, 0, 18); 
				var r = map(this.corner, 0, 18, 57, 121);
				var g = map(this.corner, 0, 18, 67, 151);
				var b = map(this.corner, 0, 18, 7, 73);
				fill(r,g,b);
			}
			rect(this.history[this.currIndex][0], this.history[this.currIndex][1], boxSize, boxSize, this.corner);
		}
	}

	this.getMouseCollisionPoint = function() {
		var top = new Line(this.pos[0],this.pos[1],this.pos[0]+boxSize,this.pos[1])
		var left = new Line(this.pos[0],this.pos[1],this.pos[0],this.pos[1]+boxSize)
		var bottom = new Line(this.pos[0],this.pos[1]+boxSize,this.pos[0]+boxSize,this.pos[1]+boxSize)
		var right = new Line(this.pos[0]+boxSize,this.pos[1],this.pos[0]+boxSize,this.pos[1]+boxSize)
		var mouse = new Line(mouseX, mouseY+mouseBuffer, pmouseX, pmouseY+mouseBuffer);
		var coords = null;
		if(pmouseX <= mouseX) {
			var result = getMouseCollision(mouse, left);
			if(result != null) {
				result.push("left");
				return result;
			}
		}
		if(pmouseX >= mouseX) {
			var result = getMouseCollision(mouse, right);
			if(result != null) {
				result.push("right");
				return result;
			}
		}
		if(pmouseY <= mouseY) {
			var result = getMouseCollision(mouse, top);
			if(result != null) {
				result.push("top");
				return result;
			}
		}
		if(pmouseY >= mouseY){
			var result = getMouseCollision(mouse, bottom);
			if(result != null) {
				result.push("bottom");
				return result;
			}
		}
		if(this.vx < 0 && 
			mouseX >= this.pos[0] &&
			pmouseX < this.pos[0]-this.vx &&
			mouseY+mouseBuffer >= this.pos[1] && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "left"];
		}
		if(this.vx > 0 && 
			mouseX <= this.pos[0]+boxSize &&
			pmouseX > this.pos[0]+boxSize-this.vx &&
			mouseY+mouseBuffer >= this.pos[1] && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "right"];
		}
		if(this.vy < 0 && 
			mouseY+mouseBuffer >= this.pos[1] &&
			pmouseY+mouseBuffer < this.pos[1]+this.vy &&
			mouseX >= this.pos[0] && 
			mouseX <= this.pos[0]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "top"];
		}
		if(this.vy > 0 && 
			mouseY+mouseBuffer <= this.pos[1]+boxSize &&
			pmouseY+mouseBuffer >= this.pos[1]+boxSize-this.vy &&
			mouseX >= this.pos[0] && 
			mouseX <= this.pos[0]+boxSize) {
			return [mouseX, mouseY+mouseBuffer, "bottom"];
		}
		return null;
	}

	this.rewind = function() {
		this.currIndex--;
		this.active = false;
	}

	this.init(startVals);
}

function Circle() {
	this.pos = [];
	this.ring = 6;
	this.corner = 25; //18 mid-point

	this.init = function() {
		this.pos = [random(cwidth), random(cheight)];
	}

	this.pulse = function() {
		this.ring-= .2;
		if(this.ring<0.5 && boxState != "reunited") {
			this.ring = 6;
		}
		else if(this.ring<0 && boxState == "reunited") {
			this.ring = 0;
		}
	}

	this.draw = function() {
		this.pulse();
		fill(176, 196, 134);
		strokeWeight(this.ring);
		stroke(239,240,235);
		// ellipse(this.pos[0], this.pos[1], boxSize, boxSize);
		if(boxState == "reunited") { 
			noStroke();
			this.corner = constrain(this.corner-1, 18, 25); 
			var r = map(this.corner, 18, 25, 121, 176);
			var g = map(this.corner, 18, 25, 151, 196);
			var b = map(this.corner, 18, 25, 73, 235);
			fill(r,g,b);
		}
		rect(this.pos[0]-boxSize/2, this.pos[1]-boxSize/2, boxSize, boxSize, this.corner)
	}

	this.init();
}

function lostMessage() {
	noStroke();
	fill(218,225,213);
	textFont(nervous);
	textSize(min(cwidth,cheight)*.08);
	text("lost your way", cwidth/2, cheight/2);
	textFont(biko);
	var sec = "seconds";
	textSize(min(cwidth,cheight)*.05);
	if(round(offscreenCounter/30)==1) sec = "second";
	text("for "+round(offscreenCounter/30)+" "+sec, cwidth/2, cheight/2+cheight/10);
	textSize(min(cwidth,cheight)*.03);
	text("hold SPACE to go turn back time", cwidth/2, cheight-cheight/5);
}

function getMouseCollision(a, b) {
	var coord = null;
	var de = ((b.y2-b.y1)*(a.x2-a.x1))-((b.x2-b.x1)*(a.y2-a.y1));
	var ua = (((b.x2-b.x1)*(a.y1-b.y1))-((b.y2-b.y1)*(a.x1-b.x1))) / de;
	var ub = (((a.x2-a.x1)*(a.y1-b.y1))-((a.y2-a.y1)*(a.x1-b.x1))) / de;
	if((ua > 0) && (ua < 1) && (ub > 0) && (ub < 1)) {
		var x = a.x1 + (ua * (a.x2-a.x1));
		var y = a.y1 + (ua * (a.y2-a.y1));
		coord = [x, y];
	}
	return coord;
}

function Line(x1, y1, x2, y2) {
	this.x1 = x1;
	this.y1 = y1;
	this.x2 = x2;
	this.y2 = y2;
}

function areReunited(box, circle) {
	// circle-vs-square overlap test; both shapes are boxSize wide,
	// so the circle's radius and the square's half-width are boxSize/2
	var distX = Math.abs(circle.pos[0] - box.pos[0] - boxSize / 2);
	var distY = Math.abs(circle.pos[1] - box.pos[1] - boxSize / 2);

	if (distX > boxSize) { return false; } // boxSize/2 + boxSize/2
	if (distY > boxSize) { return false; }

	if (distX <= boxSize / 2) { return true; }
	if (distY <= boxSize / 2) { return true; }

	// corner case: compare the corner distance against the circle's radius
	var dx = distX - boxSize / 2;
	var dy = distY - boxSize / 2;
	return (dx * dx + dy * dy <= (boxSize / 2) * (boxSize / 2));
}

cambu-last

My last project for this class shifted many times as I realized the limits of my capabilities, and, quite honestly, it left me with a trail of newly minted skills instead of a clearly defined project. This blog post is the tale of that wandering trail…

I started off with the intention of continuing my explorations in tangible computing with Guodu, but we eventually scrapped this plan. Instead, I decided I would try to learn the skills necessary to add ‘location awareness’ to a project I’d worked on in another class. To do this, I would need to learn the following at a bare minimum:

  • Soldering and making basic circuits [learning resource]
  • some level of Arduino programming
  • RFID tag hardware and software [learning resource], incl. reading and writing to and from RFID tags
  • wireless communication between computers and Arduino
  • How to control physical devices (fans, lights, etc.) with an Arduino [learning resource]

Before I had the Adafruit RFID shield, I decided to explore another RFID reader: the Phidget 1023 RFID tag reader (borrowed from IDeATe). After extensive work, however, I found I could only control it via a USB host device. I spent a night exploring a Raspberry Pi approach, wherein I would script control of the Phidget reader via Processing on the Pi. I learned how to flash a Pi with a Processing image, but driver issues with the Phidget ultimately doomed this approach.

I then moved back to an Arduino approach, which required learning physical computing basics: how to solder, how to communicate with the Arduino board via serial in the terminal (‘screen tty’), baud rates, PWM, digital vs. analog in/out, and more. The true highlight of my Arduino adventure was triggering a physical lamp via a digital RFID trigger:

All that said, at one point, I realized the original goal of extending my previous project the way I intended was impossible with the time given. At that point, I completely shifted gears… This new direction was based on a few inspirations:

  1. Golan’s BlinkyTapes
  2. Shftr.io‘s Physical Computing Trailer
  3. Noodl’s External Hardware and MQTT Guide

My next goal was to control physical hardware through some type of digital control. To achieve this, I used the BlinkyTape Processing library to render MQTT messages sent through Shftr.io from Noodl’s slider modules. See the video below:
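For a sense of scale, driving the tape itself takes only a few lines of Processing. The sketch below is a rough stand-in for what I built: it skips MQTT entirely and talks to the strip over raw serial, assuming the stock BlinkyTape protocol (three RGB bytes per LED, each capped at 254, with a 255 byte latching the frame). The port index and LED count are assumptions.

import processing.serial.*;

Serial tape;
int numLeds = 60; // a standard BlinkyTape strip has 60 LEDs

void setup() {
  size(400, 100);
  // the port name is machine-specific; pick the right entry from Serial.list()
  tape = new Serial(this, Serial.list()[0], 115200);
}

void draw() {
  // map mouseX to a hue and push one solid-color frame to the strip
  colorMode(HSB, 255);
  color c = color(map(mouseX, 0, width, 0, 255), 200, 200);
  background(c);
  int r = (c >> 16) & 0xFF;
  int g = (c >> 8) & 0xFF;
  int b = c & 0xFF;
  for (int i = 0; i < numLeds; i++) {
    tape.write(min(r, 254)); // 255 is reserved as the end-of-frame marker
    tape.write(min(g, 254));
    tape.write(min(b, 254));
  }
  tape.write(255); // latch the frame
}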


Conclusion

In the end, despite not pulling together a single cohesive project, I learned a great deal about Arduino, hardware programming, soldering, and other tools for communication between hardware and software systems.

Jaqaur – Last Project

Motion Tracer

For my last project (no more projects–it’s so sad to think about), I decided to combine aspects of two previous ones: the motion capture project and the plotter project. For my plotter project, I had used a paintbrush with the AxiDraw instead of a pen, and I really liked the result, but the biggest criticism I got was that the content itself (binary trees) was not very compelling. So, for this project, I chose to paint more interesting material: motion over time.
I came up with the idea to trace the paths of various body parts pretty early, but it wasn’t until I recorded BVH data and wrote some sample code that I could determine how many (and which) body parts to trace. Originally, I had thought that tracing the hands, feet, elbows, knees, and mid-back would make for a good, somewhat “legible” image, but as Golan and literally everyone else I talked to told me: less is more. So, I ultimately decided to trace only the hands and the feet. This makes the images a bit harder to decipher (as far as figuring out what the movement was), but they look better, and I guess that’s the point.
One more change from my old project was the addition of multiple colors. Golan advised me against this, but I elected to completely ignore him, and I really like how the multi-colored images turned out. I mixed different watercolors (my first time using watercolors since middle-school art class) in a tray, and put their coordinates into my code. I added instructions between each line of color for the AxiDraw to dip the brush in water, wipe it off on a paper towel, and dip itself in a new color. I think the different colored lines make the images a little easier to understand, and give them a bit more depth.

I tried to record a wide variety of motion capture data for this project (thanks to several talented volunteers), including ballet, other dance, gymnastics, parkour, martial arts, and me tripping over things. Unfortunately, I had some technical difficulties the first night of MoCap recording, so most of that data ended up unusable (extremely low frame rate). The next night, I got much better data, but I discovered later that Brekel really is not good with upside-down (or even somewhat contorted) people. This made a lot of my parkour/martial arts data come out a bit weird, and I had to select only the best recordings to print. If I were to do this project again, I would like to record motion capture data in Hunt Library perhaps, or just with a slightly better system than the one I used for this project. I think I would get somewhat nicer pictures that way.

One more aspect of my code that I want to point out is a little portion that maps the data to an appropriate size for the paper. It runs at the beginning, and finds the maximum and minimum x and y values reached by any body part. Then, it scales the data to be as large as possible (without changing its original proportions) while still fitting inside the paper’s margins. This means that a really tall motion will be scaled down to the right height and have its width shrunk accordingly, while a really wide motion will be scaled by its width and have its height shrunk accordingly. I think this was an important feature.
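That fit-to-page step amounts to computing a bounding box and taking the tighter of the two scale factors. A small Processing sketch of the idea follows; it is my own reconstruction, and the function and variable names are hypothetical.

// points: every (x, y) position traced by all tracked body parts
float[] fitToPage(ArrayList<PVector> points, float pageW, float pageH, float margin) {
  float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
  float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
  for (PVector p : points) {
    minX = min(minX, p.x);  maxX = max(maxX, p.x);
    minY = min(minY, p.y);  maxY = max(maxY, p.y);
  }
  // one uniform scale factor keeps the motion's original proportions:
  // whichever constraint (width or height) is tighter wins
  float s = min((pageW - 2 * margin) / (maxX - minX),
                (pageH - 2 * margin) / (maxY - minY));
  // offsets center the scaled drawing inside the margins
  float offX = margin - minX * s + (pageW - 2 * margin - (maxX - minX) * s) / 2;
  float offY = margin - minY * s + (pageH - 2 * margin - (maxY - minY) * s) / 2;
  return new float[] { s, offX, offY }; // apply as: x' = x*s + offX, y' = y*s + offY
}

void setup() {
  ArrayList<PVector> pts = new ArrayList<PVector>();
  pts.add(new PVector(120, 40));
  pts.add(new PVector(300, 500));
  pts.add(new PVector(80, 220));
  float[] t = fitToPage(pts, 432, 559, 36); // e.g. a 6x7.75in page at 72 dpi, 0.5in margin
  println("scale " + t[0] + ", offset " + t[1] + ", " + t[2]);
}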

Here are some of the images generated by my code:

Above are three pictures of the same motion capture data: a pirouette. It was the first motion I painted, and it took me a few tries to get the paper’s size coordinates right and to mix the paint dark enough.


That’s an image generated by a series of martial arts movements, mostly punches. Note the dark spot where some paint dripped on the paper; I think little “mistakes” like that give these works character, as if they weren’t painted by a robot.


This one was generated by a somersault. I think when he went upside down the data got a bit messed up, but I like the end result nonetheless.


Here is a REALLY messed-up image that was supposed to be a front walkover. You can see her hands and feet on the right side, but I think when she went upside down, Brekel didn’t know what to do and put her body parts all over the place. I don’t really consider this one part of my final series, and since I knew the data was messy, I wasn’t going to paint it, but I had paint and paper left over, so I figured: why not? It’s interesting anyway.


I really like these. The bottom two are actually paintings of the same data, just with different paint, and all four are paintings of the same dance move: a pas de chat. I got three separate BVH recordings of the dancer doing the same move, and painted all of them. I think it’s really interesting to note the similarities between them, especially the top two.

All in all, I am super happy with how this project turned out. I would have liked a little more variety in (usable) motion capture data, because I love trying to trace where every limb goes during a movement (you can see some of this in my documentation video above). I also think a more advanced way of capturing motion data would have been helpful, but what can you do?

Thanks for a great semester, Golan.

Here is a link to my code on Github: https://github.com/JacquiwithaQ/Interactivity-and-Computation/tree/master/Motion_Tracer

Anson+Kadoin-LastProject

For our last project, Kate and I both wanted to work with projections. We chose to augment a common, often overlooked but frequently used object: the water cooler. We first discussed how we wanted to create a “flooded” water effect, as if digital water were flowing from the tap. We attempted to work with the Box2D physics engine, but ended up creating our own particle system in Processing to generate the waterfall. We added some floating kids to the waterfall to create an unexpected sense of playfulness.

Here’s our video: 

To create the projection mapping, we used the Keystone library for Processing to correct the perspective of the projector throw. In the final documentation, we used Millumin for further control over the warping of the projections, fitting the waterfall precisely to the water cooler’s tap and to floor level. This let us use bezier curves and segmenting to improve our projection-mapping accuracy.
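The core Keystone setup is compact: you draw into an offscreen buffer and render it through a corner-pin surface that you drag into place. Here is a minimal sketch of that pattern; the surface size, resolution, and key bindings follow the library’s standard example rather than our exact code.

import deadpixel.keystone.*;

Keystone ks;
CornerPinSurface surface;
PGraphics offscreen;

void setup() {
  size(800, 600, P3D);
  ks = new Keystone(this);
  surface = ks.createCornerPinSurface(400, 300, 20);
  offscreen = createGraphics(400, 300, P3D);
}

void draw() {
  offscreen.beginDraw();
  offscreen.background(0);
  // … draw the waterfall particle system into `offscreen` here …
  offscreen.endDraw();
  background(0);
  surface.render(offscreen); // warped to match the physical surface
}

void keyPressed() {
  if (key == 'c') ks.toggleCalibration(); // drag the corners onto the cooler
  if (key == 's') ks.save();              // persist the calibration to XML
  if (key == 'l') ks.load();
}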

Here’s some code:

Water[] drops = new Water[500];
Mist[] bubbles = new Mist[500];
Ball[] balls = new Ball[200];

int numBalls = 200;
float spring = 0.05;
float gravity = 0.2;
float friction = -.1;

int numFrames = 81;  // The number of frames in the animation
int currentFrame = 0;
PImage[] images = new PImage[numFrames];
//ArrayList mistClouds;

float[] p1 = {237, 0};
float[] p2 = {320, 0};
float[] p3 = {320, 0};
float[] p4 = {320, 0};
float[] p5 = {320, 0};
float[] p6 = {320, 0};
float[] p7 = {320, 0};
float[] p8 = {320, 0};
float[] p9 = {337, 0};

int mouseR = 25;



void setup() {
  size(640, 640);

  //frameRate(30);
  //animation1 = new Animation("Witch Flying_2_", 81);
  //animation2 = new Animation("PT_Teddy_", 60);

  //for (int j = 0; j < numFrames; j++) {
  //  String imageName = "Witch Flying_2_" + nf(j, 5) + ".png";
  //  images[j] = loadImage(imageName);
  //}

  for (int i = 0; i < drops.length; i++) {
    // … (initialization of the drops, bubbles, and balls arrays was lost
    // in the blog export, which stripped "<" characters)
  }
}

void draw() {
  background(0, 0, 0);
  //frameRate(30);

  //currentFrame = (currentFrame+1) % numFrames;  // Use % to cycle through frames
  /*int offset = 0;
   for (int x = -100; x < width; x += images[0].width) { 
   image(images[(currentFrame+offset) % numFrames], x, -20);
   offset+=2;
   image(images[(currentFrame+offset) % numFrames], x, height/2);
   offset+=2;
   }*/


  //------------------------------------------------------------//
  //                    draw pool 
  //------------------------------------------------------------//

  //fill(150, 180, 255);


  //pushMatrix();

  //beginShape();

  //translate(0, height/2);

  //curveVertex(p1[0], p1[1]);
  //curveVertex(p1[0], p1[1]);
  //curveVertex(p2[0], p2[1]);
  //curveVertex(p3[0], p3[1]);
  //curveVertex(p4[0], p4[1]);
  //curveVertex(p5[0], p5[1]);
  //curveVertex(p6[0], p6[1]);
  //curveVertex(p7[0], p7[1]);
  //curveVertex(p8[0], p8[1]);
  //curveVertex(p9[0], p9[1]);
  //curveVertex(p9[0], p9[1]);

  ////ellipse(p1[0], p1[1], 10, 10);
  ////ellipse(p2[0], p2[1], 10, 10);
  ////ellipse(p3[0], p3[1], 10, 10);
  ////ellipse(p4[0], p4[1], 10, 10);
  ////ellipse(p5[0], p5[1], 10, 10);
  ////ellipse(p6[0], p6[1], 10, 10);
  ////ellipse(p7[0], p7[1], 10, 10);
  ////ellipse(p8[0], p8[1], 10, 10);
  ////ellipse(p9[0], p9[1], 10, 10);

  //endShape(CLOSE);

  //popMatrix();

  // (Several commented-out loops lived here that shrank the pool points
  //  p2–p9 upward/inward as drops landed; they were garbled by the blog
  //  export, which stripped "<" characters, so they are elided.)

  for (int drop = 0; drop < drops.length; drop++) {
    // … (the rest of draw(), which updates and renders the water drops,
    // was cut off in the blog export)
  }
}

Zarard-LookingOutwards02

Inside Social Soul
Inside Social Soul (Perspective 2)

Social Soul is a social media installation created in collaboration by Lauren McCarthy and Kyle McDonald. It brings a user’s Twitter stream and profile to life in 360 degrees of monitors, mirrors, and sound.

More impressively, they created an algorithm to match users with other attendees and speakers at the conference where it was installed, and displayed their match’s stream. The user was then invited to connect with their social soul mate.

It was built with seven different programming languages, and the visual and audio arrangements were computationally generated live.

What inspires me about this piece is that it is personal: every user can step in and feel that this is a piece for them. And when you leave, the experience isn’t over, because it has connected you to a real live person, so participating in Social Soul outlives the installation itself. I really enjoy the idea of connecting people and exposing the differences and similarities among social circles, which is why this piece fascinates me.

Krawleb-LastProject

For my last project, I took the opportunity to learn Unity by prototyping a simple 3D local-multiplayer game.

In the game, each player controls a ‘Shepherd’. The objective is to eliminate all of the enemy’s ‘vassals’, a herd of small soldiers. The only control the shepherd has over its vassals is to toggle between having them follow their shepherd and having them seek out and attack the nearest enemy vassal.

This makes the gameplay tactics about positioning, choosing the right time to attack, and using the environment to your advantage. If your units attack as a group, or at a choke point, they will eliminate the enemy with ease.

Because I had never worked with Unity before, the vast majority of this two-week project was spent familiarizing myself with Unity, C#, and much of the built-in functionality that Unity provides. This included:

• Nav Meshes & Nav Mesh Agents to control the flocking/pathfinding behavior of the AI vassals.

• Delegates, Events, and Event Subscription to allow GameObjects to relay their position / health / etc. to other GameObjects

• Instantiation, Parent/Child relationships, and how to safely destroy GameObjects to allow for modular setup/reset of the level

• Camera Viewports and how to set up splitscreen for multiplayer

• Layers, Tagging, and physics interactions / collisions

The scripts I wrote for unit control, health, and combat interactions, among other things, are on GitHub here.

As I began wrapping up the programming for the basic gameplay interactions (about 1 night before the due date), I decided to quickly create some low-fidelity 3D assets to create more interesting environments to battle in. 

A bridge to connect islands and act as a choke point

Some islands, to create a more divided playspace that forced players to choose when to cross between them.

Some palm trees, to add to the atmosphere and provide an additional small obstacle.

Here’s an earlier iteration of the map, with a previous bridge design that was replaced because its geometry interacted problematically with the vassals.

Additionally, I originally tried to work with a top-down camera, but felt that I couldn’t find a balance between showing the entire map and giving enough attention to the seeking behavior of the vassals.

I ran into several roadblocks along the way, but learned even more than I imagined I would in the given time. Unfortunately, much to my dismay as someone from a primarily visual background, I was left with very little time to focus on learning lighting, materials, and camera effects. The result was a very awkwardly colored, poorly lit prototype.

However, I loved working with Unity. The separation of functionality into separate scripts and objects, which let me compartmentalize code, was refreshing. Additionally, working with prefabs to build up components of my program as objects felt intuitive. I will certainly work with Unity again, and mastering lighting and materials will be my next goal.

 

Zarard-lookingoutwards09

For this project I decided to look at something related to my Teenie Harris Research.

The Loop Suite
Kids with Santa

Jason Salavon focuses his artistic practice on visual averages and overlays. Because he chooses datasets with high similarity, layering all the images together pulls out key insights about the situations depicted in the photographs. In the top image, you get a sense of the shapes that come through in Chicago’s inner Loop: very tall, long buildings. In the Santa photo, you notice that many of the children sit on the same leg and are very small, probably all less than six years old. I might end up trying this in my visual annotations of the Teenie Harris archive.
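The underlying technique is just a per-pixel average over an aligned set of images. Below is a minimal Processing sketch of the idea, not Salavon’s actual code; the filenames and image count are placeholders.

int count = 50; // assumed: aligned, same-subject images named img_0.jpg … img_49.jpg

void setup() {
  size(800, 600);
  float[] r = new float[width * height];
  float[] g = new float[width * height];
  float[] b = new float[width * height];
  for (int i = 0; i < count; i++) {
    PImage img = loadImage("img_" + i + ".jpg");
    img.resize(width, height); // force a common size so pixels line up
    img.loadPixels();
    for (int p = 0; p < width * height; p++) {
      color c = img.pixels[p];
      r[p] += red(c);
      g[p] += green(c);
      b[p] += blue(c);
    }
  }
  // the average emerges: shared structure stays crisp, differences blur away
  loadPixels();
  for (int p = 0; p < width * height; p++) {
    pixels[p] = color(r[p] / count, g[p] / count, b[p] / count);
  }
  updatePixels();
}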

Zarard-lookingoutwards07

A visual study from the artist on coherence, plausibility, and shape.

I decided to look at Moritz Stefaner because he created one of my favorite data visualizations, Selfie City. While surfing his website I found the OECD Better Life Index visualization.

Countries represented by their wellness components.

THIS IS SUCH A GOOD DATA VIZ. The link to it is here: http://www.oecdbetterlifeindex.org/#/11111111111

It uses a flower. Flowers are pretty and aesthetically pleasing, but also a simple enough shape to remain legible. Additionally, each petal represents a different factor in a country’s wellness. Not to mention, it is interactive and customizable. People are much more engaged when they can see how the data relates to them personally; I think that might be one key insight that sticks with me through my data viz pursuits.
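Encoding one record as a flower is a compact radial glyph. Here is a toy Processing version of the idea, my own sketch with made-up values rather than OECD data; the eleven entries mirror the index’s eleven wellness topics.

float[] factors = { 0.8, 0.55, 0.9, 0.4, 0.7, 0.6, 0.75, 0.5, 0.65, 0.85, 0.45 };

void setup() {
  size(400, 400);
  noLoop();
}

void draw() {
  background(255);
  translate(width / 2, height / 2);
  stroke(90, 140, 60);
  for (int i = 0; i < factors.length; i++) {
    float a = TWO_PI * i / factors.length - HALF_PI;
    float len = 30 + 120 * factors[i]; // longer petal = better score
    fill(200, 120 + 100 * factors[i], 180, 180);
    pushMatrix();
    rotate(a);
    ellipse(len / 2, 0, len, 18); // one petal, drawn along its own axis
    popMatrix();
  }
  noStroke();
  fill(240, 210, 90);
  ellipse(0, 0, 30, 30); // flower center
}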

Zarard-lookingoutwards06

The bot I like is the ASCII Art bot. I’m always into computationally generated art, and even though the account doesn’t say how it is made, since a piece is posted every 30 minutes I’m assuming it is computationally generated. One thing this bot does extremely well is create character and emotion. These aren’t just stick figures; they are overwhelmingly personified by the simplest of characters.

Although I really enjoy this bot, I’d like it if it played more with font, boldness, italicization, or formatting. I think it could stand to use the full range of text-editing options.

tigop-visualization

this was just awful

Table allRidesTable;
int stationRideCounts[]; 
String stations[] = {
"Centre Ave & PPG Paints Arena",
"North Shore Trail & Fort Duquesne Bridge",
"Ross St & Sixth Ave (Steel Plaza T Station)",
"37th St & Butler St",
"Bigelow Blvd & Fifth Ave",
"Frew St & Schenley Dr",
"Forbes Ave & Market Square",
"Forbes Ave & Grant St",
"Stevenson St & Forbes Ave",
"12th St & Penn Ave",
"Ridge Ave & Brighton Rd (CCAC)",
"17th St & Penn Ave",
"Taylor St & Liberty Ave",
"Liberty Ave & Baum Blvd",
"Shady Ave & Ellsworth Ave",
"Penn Ave & Putnam St (Bakery Square)",
"Alder St & S Highland Ave",
"Maryland Ave & Ellsworth Ave",
"Ivy St & Walnut St",
"Fifth Ave & S Dithridge St",
"Schenley Dr at Schenley Plaza (Carnegie Library Main)",
"Boulevard of the Allies & Parkview Ave",
"Atwood St & Bates",
"Fifth Ave & S Bouquet St",
"Zulema St & Coltart Ave",
"S 27th St & Sidney St. (Southside Works)",
"S 25th St & E Carson St",
"S 22nd St & E Carson St",
"S 12th St & E Carson St",
"21st St & Penn Ave",
"42nd St & Butler St",
"S Negley Ave & Baum Blvd",
"Liberty Ave & Stanwix St",
"S 18th St & Sidney St",
"Third Ave & Wood St",
"First Ave & Smithfield St (Art Institute)",
"First Ave & B St (T Station)",
"10th St & Penn Ave (David L. Lawrence Convention Center)",
"Fort Duquesne Blvd & 7th",
"Isabella St & Federal St (PNC Park)",
"42nd & Penn Ave.",
"Liberty Ave & S Millvale Ave (West Penn Hospital)",
"Penn Ave & N Fairmount St",
"Ellsworth Ave & N Neville St",
"Coltart Ave & Forbes Ave",
"Walnut St & College St",
"Penn Ave & S Whitfield St",
"Federal St & E North Ave",
"S Euclid Ave & Centre Ave",
"Centre Ave & Kirkpatrick St"
};
void setup() {

  int nStations = stations.length;
  stationRideCounts = new int[nStations];
  for (int s = 0; s < nStations; s++) {
    // … (the rest of the sketch was lost in the blog export, which
    // stripped "<" characters)
  }
}
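Judging from the surviving fragment, the sketch was tallying rides per station from a bike-share CSV. A minimal Processing version of that idea might look like the following; the filename and column name are assumptions.

Table allRidesTable;

void setup() {
  allRidesTable = loadTable("healthy-ride-rentals.csv", "header"); // filename assumed
  int[] counts = new int[stations.length];
  for (TableRow row : allRidesTable.rows()) {
    String from = row.getString("From station name"); // column name assumed
    for (int s = 0; s < stations.length; s++) {
      if (stations[s].equals(from)) {
        counts[s]++; // one more ride originating at this station
      }
    }
  }
  for (int s = 0; s < stations.length; s++) {
    println(counts[s] + "\t" + stations[s]);
  }
}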


tigop- last project proposal

Originally, I described creating an interactive creature. I later shifted gears and tried to create an interactive world that a user can move through. It functions as a game, but I feel it is relevant to my practice because it incorporates the fictional world I have created and the manifesto that I wrote (which concerns censorship, class power struggles, and amending the current system when it appears wrong). My goal before making the project was for the world to reveal some sort of greater truth. I think this could also have been done in a data-driven way, but that just isn’t how I’ve been working recently. After a bit more research, I would like to reveal data about dependents looking to divorce or minors seeking emancipation, but this wasn’t the place for that, or I didn’t yet feel I knew how to incorporate it into the fictional world. I also originally planned on using the MaKey MaKey, but there were too many moving parts, so I decided to keep things a bit simpler.


tigop-lookingoutwards8

I found this to be a really interesting project by Lauren McCarthy: one in which individuals have physical followers as opposed to followers in the digital world. I find it interesting how it shifts the idea of a “follower” from something that creates a system of meritocracy to something that might mean you are being stalked, and how being followed is no longer a means of validation in the latter case. As charming an idea as this is, I have doubts about who would actually get the app and use it. It’s really funny, but I wonder if there’s a way to trick people into using it; I suppose I could also see people using it intentionally. When you are a follower, you are doing the stalking, but maybe to the follower it’s just “people watching.” It is interesting to see how this is perceived from both sides of the experience.

Follower – Attention, surveillance and physicality of social media

 

tigop- looking outwards07

I took a look at Anna Powell-Smith’s “Fix My Street” app, which sends data to your local council, prompting them to fix something in your area, like the pothole mentioned in this video. I see this as an interesting way to collect data and then actually direct it towards a central power that has the ability to create the change necessary to make your neighborhood a better place. I don’t think many people know about the app (I know I didn’t), but I wonder how significantly neighborhoods would change if it were well advertised. It would be interesting to see what other datasets could be collected, and how we could provide that data to a power that can create change that might be outside our own scope of ability.

Zarard-lookingoutwards08

and the wind was like the regret for what is no more by João Costa

What it is: “This work consists of a set of sixteen bottles – with air blowers attached to each one of them – and a wind vane. The vane is fixed on the outside of a window and detects the direction the wind is blowing. Inside of the room, the motor starts blowing air into the bottle that corresponds to that particular direction. This event generates a smooth sound, and each direction has its own pitch. The bottles are arranged in a circle, similar to the shape of the compass rose, depicting the eight principal winds and the eight half-winds.” – Costa

To be honest, I thought this was referencing some important historical monument, but I did some research and realized I was actually just thinking of a SpongeBob episode.

The episode “SpongeHenge.” Not the monument Stonehenge. Honest mistake.

I think what makes the project so effective is that it requires your full attention to really be aware of what’s going on. The artist is capturing wind direction with sound, which is something you probably wouldn’t notice if you weren’t fully present in the moment. Wind direction isn’t something people are generally attuned to, so for us it is something like the invisible.

The capturing of the invisible, which is what the artist claims to get across, isn’t quite there for me. The sound is obvious, but at the same time I don’t think it would be immediately clear to me that the sound is linked to wind direction (at least from the documentation). I think the winds would have to be more forceful and controlled than what that environment provides.

However, I think the project is technically sound.