Category: LookingOutwards

Looking Outwards

Eunoia by Lisa Park

Eunoia collects the artist’s brainwaves using an EEG sensor and then uses them to vibrate five bowls of water. Using Processing and MaxMSP, each bowl of water receives the frequencies of her brain activity (Alpha, Beta, Delta, Gamma, Theta) as well as eye movement. This visualizes data that is usually difficult for the average person to grasp.

This is the data she is using, shown as a graph, and it seems incomprehensible.

Elements by Britzpetermann

Put simply, Elements is an interactive walkway. One of four elements builds up wherever the subject puts their hands. Because of its simplicity, the experience of Elements relies heavily on its visuals, and the graphics aren’t as natural-looking as they could be. Looking at Elements, you are very aware that the images are computationally generated. It is very difficult to re-create something as magnificent and beautiful as natural fire and sand dunes without seeming lame compared to the real thing.

Equilibrium by Memo Akten

Like Elements, Equilibrium becomes disturbed when touched and settles to a calmer state when left alone, except with this project the visuals are stellar. It is very complex but soothing at the same time, all in very good taste. Interestingly, I read on Memo Akten’s website that the wild landscapes of Madagascar served as inspiration for this project, which adds a whole new frame. Equilibrium creates its own platform by becoming something incomparable to anything in nature, yet it still reflects natural behavior.

LO & Final Project

I’ve recently been interested in the text-based gaming revival started by Porpentine. Porpentine works in Twine, a platform where anyone is able to create their own text-based video game (some examples: http://aliendovecote.com/intfic.html). The games sit on the border between immersive story/plot and contemporary poetic elements.


After thinking about this, I was inspired to create a sort of facial simplification via prose: software that would recognize elements of your face (level of eyebrows, face structure, expression, facial direction, colour of the shirt below the face) and then give the participant a small snippet of prose based upon these elements.
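As a rough sketch of how this could work, the recognizer’s output could be reduced to a set of feature tags that index into a bank of prose fragments. Everything below (the tag names, the fragments, the `prose_for_face` helper) is hypothetical and not from any real face-tracking library:

```python
import random

# Hypothetical feature tags mapped to prose fragments.
# A real version would derive tags from face-tracking output.
SNIPPETS = {
    "eyebrows_raised": ["Your surprise arrives before you do."],
    "eyebrows_neutral": ["The face at rest keeps its own counsel."],
    "facing_left": ["You look away, toward something unfinished."],
    "facing_forward": ["You meet the lens like an old acquaintance."],
}

def prose_for_face(features):
    """Pick one fragment per recognized feature and join them into a snippet."""
    lines = []
    for feature in features:
        options = SNIPPETS.get(feature)
        if options:
            lines.append(random.choice(options))
    return " ".join(lines)

print(prose_for_face(["eyebrows_raised", "facing_left"]))
```

With more fragments per tag, each visit would produce a slightly different snippet for the same face.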

Looking Outwards: Final Project

Audience – rAndom International with Chris O’Shea

Audience consists of a set of robotic mirrors that orient themselves to face an individual who steps near them. I really enjoy how much character these machines have. Seeing them move with and without a target makes them seem easy to interact with, and audience members clearly share this sentiment. Their semi-random arrangement, coupled with their low height, helps place the viewer in an imaginative space.

I would say that the mirrors themselves seem too small and disparate to be engaging, and that their movement can be slightly uncanny at times.

Pulse Machine – Alicia Eggert & Alexander Reben

Pulse Machine is an artwork with a lifespan. It consists of a drum that kicks at 60 bpm and a counter that started with enough beats for the piece to ‘live’ for 78 years. Each beat subtracts one from the counter, and the drum stops when the counter hits 0.
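As a back-of-the-envelope check (assuming exactly one beat per second for 78 years of 365.25 days; the counter’s actual starting number may differ), the beat budget works out to:

```python
# At 60 bpm, one beat equals one second,
# so the counter starts at the number of seconds in 78 years.
beats = 78 * 365.25 * 24 * 60 * 60
print(int(beats))  # roughly 2.46 billion beats
```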


This is a strange insight into humanity, or at least what makes a life. It’s a simple comment on cause and effect in existence, playing with inevitability, and, to an extent, fate.

I think that the drum itself may be a slightly jarring element.

Collected Works – Zimoun

Zimoun works around a central theme of creating physical and audio spaces out of simple forms and machines on a huge scale. There is a simple power to these pieces, and they speak well in relation to each other as a series. I also find the element of directed randomness in these pieces almost meditative. Seeing the machines stop is also powerful.

I do have to say that I would like to see more variance in the work.

Skylines III: Point Cloud City – Patricio Gonzalez Vivo

Skylines is a series of 3D renderings of the landscapes of major cities. This video is a flythrough of one of these renderings. I think that this way of representing a city street is wonderful, and the scale and approach of the camera’s view makes the scene seem more imaginative, almost magical. It’s an excellent example of environment building.

I would really like to see something done with this technology rather than a simple demonstration of a city.

Looking Outwards Final Project

For my final project I’m not exactly sure what I want to do. I only know that I want to work with sound. Without getting my expectations too high, I’d like to attempt to create a work which reacts to the conversations and interactions in a room. The projects I have selected each react to information presented to them, some to sound and some to visuals.

Conversnitch – Conversnitch by Kyle McDonald is a device which listens to conversations around it, secretly uploads them to Mechanical Turk to be transcribed, and then tweets to the world what was supposed to be a private conversation. The integration of turking into this project is extremely interesting, in that it is very difficult for computers themselves to transcribe audio. Integrating a “silent human” element into the work is extremely powerful because it makes the process still seem automated even though a majority of the difficult work is done by humans.

Conversnitch from Kyle McDonald on Vimeo.

Descriptive Camera – Descriptive Camera by Matt Richardson is a device which snaps an image of an area and, rather than outputting that image, “develops” it into a description of the scene in words. This project also uses Mechanical Turk to transcribe information, but what’s most interesting about it is that it changes what we expect. When a photo is taken, we as digital individuals expect a lasting snapshot, and when we are returned a description we are both jarred and freed. Freed in the sense that we can now use this information as we wish.

Giver of Names – Giver of Names by David Rokeby literally gives names to objects placed in its view. What intrigues me about this piece is that the computer is actively attempting to describe what’s in front of it with a name. It immediately responds to information presented to it and then lets the participants in the room know its interpretation. I hope to achieve this in my final project.

The Giver of Names from David Rokeby on Vimeo.

Looking Outwards – Projection Art

Lit Tree by Kimchi and Chips

Kimchi and Chips maps the projectable surfaces of a tree in order to project onto the branches, stems, and leaves. In this way they are using the tree to create a 3D voxel display. I like this because it is an unconventional way to display a 3D image. It relates to my project in that I plan to work with projectors and 3D imagery.

Lighting of the Sails by URBANSCREEN

In this project the group URBANSCREEN projected onto the Sydney Opera House. I liked it because they used imagery which caused you to question the shape of the opera house. Particularly when the opera house appeared to be sails blowing in the wind. This is relevant to the work I’d like to do because I plan to work with projection.

Box by Bot & Dolly

Look Here.

BOX is interesting because it looks like magic. The camera movement is well planned and works perfectly with the movement of the screens and the projections. While watching, I forgot at times that I was seeing a screen and not a moving box. This is relevant to my work because I would like people to watch a projection I made and forget that they are looking at something formed from technology.


Looking Outwards: THE FINAL PROJECT OF ULTIMATE DESTINY

So, believe it or not, I had a really silly amount of trouble finding any large-scale installations or no-expenses-spared versions of what I’m aiming to do. There are, however, a lot of really ghetto built-in-mom’s-basement versions on YouTube, complete with a horrible pop soundtrack playing in the background like so much elevator muzak.

This first project is along the same lines of what I’m up to; it’s a set of LED lights that respond to the music in much the same way that an equalizer does:

Second we have someone working with a whole lot of LEDs at once; they’ve made a full equalizer as well as various patterns that seem to move in time with the music:

Lastly we’ve got this guy giving a full video tutorial of his project, including the code. What he’s done, essentially, is pretty much exactly what I want to do only on a much smaller scale: He’s got three large LEDs responding to each of the low, middle and high frequencies. As far as mine goes, I’m mostly just doing this with more lights and more colors, but we’ll see how advanced I can manage. Anyway, here’s his video:

In addition to these, there’s a lot of really nice little tutorials out there that either have elements of or a version of what I’m trying to do, so I’m pulling together a decent idea of the work I have cut out for me.
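The core move those projects share, splitting incoming audio into low, middle, and high bands and driving one LED per band, can be sketched in plain Python. The band edges and the naive DFT below are my own placeholder choices; a real build would more likely use an FFT library or a spectrum-analyzer chip like the MSGEQ7:

```python
import cmath
import math

def band_levels(samples, sample_rate,
                bands=((20, 250), (250, 2000), (2000, 8000))):
    """Naive DFT: sum spectral magnitude within each band.
    Band edges above Nyquist are clamped. Fine for a sketch,
    far too slow for real-time use."""
    n = len(samples)
    levels = []
    for lo, hi in bands:
        k_lo = max(1, lo * n // sample_rate)          # skip the DC bin
        k_hi = min(n // 2, hi * n // sample_rate)     # clamp to Nyquist
        energy = 0.0
        for k in range(k_lo, k_hi):
            x = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
            energy += abs(x)
        levels.append(energy)
    return levels

# A pure 440 Hz tone should mostly light the "mid" LED.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(256)]
low, mid, high = band_levels(tone, rate)
print(low, mid, high)
```

Each level could then be compared against a threshold (or scaled with PWM) to set the brightness of its LED.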

MAJ: Looking Outwards #7

Admiration: Puppet Parade

Puppet Parade, by Emily Gobeille and Theo Watson of Design I/O, is an interactive installation that uses arm motions to puppeteer giant projected creatures. This project uses openFrameworks 007, an infra-red camera, and two Kinects to track motions and translate them into visuals. Puppet Parade was featured at the 2011 Cinekid festival.

I enjoy the bright, gaudy-yet-simplistic visuals that characterize Puppet Parade. I’m particularly impressed with how simple hand-motions create such a complex environment, and would be interested to see the process of fine-tuning the projected outputs of these simple inputs.

For more info on Puppet Parade, click here. For more info on Design I/O, click here. For more on Cinekid, click here.

Surprise: The Treachery of Sanctuary

The Treachery of Sanctuary, conceived and directed by Chris Milk, is an interactive triptych inspired by the cave drawings on the walls of Lascaux. From left to right, the screens represent birth, death, and regeneration. Infra-red sensors and Kinect cameras are used to sense participants. The Treachery of Sanctuary made its debut at The Creators Project: San Francisco 2012.

I’m impressed by the visual effects The Treachery of Sanctuary utilizes. The fluidity of the birds’ wings is quite striking, and I can only imagine how harrowing it must feel to become the subject of such projections.

For more on The Treachery of Sanctuary, click here. For more on Chris Milk, click here. For more on The Creators Project, click here.

What Could Have Been: Bird on a Wire

Bird on a Wire was created by Ben Light, Christie Leece, Inessah Selditz, and Matt Richardson, for a Master’s course at NYU’s Interactive Telecommunications Program (ITP). The birds animate when a specific phone-number is called.

I like the playful interaction Bird on a Wire has the potential to inspire, but I do have a nitpick: the flight of the birds is not fluid. I think more variation in the birds’ flight paths and animations would give this installation the finishing touch it needs, although I understand such an addition is not critical to the success of the project itself.

For more on Bird on a Wire, click here. For more on Ben Light, click here.  For more on Christie Leece, click here. For more on Inessah Selditz, click here. For more on Matt Richardson, click here.

Food, Art and Social Media

The concept of food in social media is one most people are probably familiar with. Most of the time, when food appears on social media it’s in a Facebook post detailing all the delicious food you will be eating, or have eaten, as seen below.

However, when it comes to food, art, and social media, the bag is pretty mixed. Sure, there are plenty of Tumblrs detailing food, but as far as I have searched, I haven’t found any artist who does what I want to do. I was, however, inspired by the project Pentametron, which uses Twitter posts to make poetry. I just like the concept of a being that exists on the internet tweeting existential/funny things. My project, Subtweeting Subs, looks to tweet sassy things about other subs based off of real subtweets made by celebrities and individuals. Examples of such ‘subtweets’ include:

Kim Kardashian subtweeting Amber Rose

 

Meek Mill subtweeting Chris Brown

 

Adam Levine Subtweeting…Lady Gaga

 

Lady Gaga subtweeting Adam Levine subtweeting her

How it works.

I plan on creating a database of phrases to tweet and a database of types of subs, their weights and the restaurant chains that make them. I will then have the sub tweet a random set of phrases either to the company or just as a status update.
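A minimal sketch of that tweet-composition step, with placeholder databases standing in for the real ones (every phrase, sub type, weight, and chain name below is invented for illustration):

```python
import random

# Placeholder databases; the real ones would be far larger.
PHRASES = [
    "some of us actually have fresh bread",
    "imagine skimping on the provolone",
]
SUBS = {"meatball": 1.9, "italian": 1.2, "veggie": 0.8}  # sub type -> weight (lbs)
CHAINS = ["Subway", "Jimmy John's"]

def compose_subtweet(rng=random):
    """Pick a sub, a chain, and a phrase, and glue them into one tweet.
    The stored weights could later drive a weighted pick via rng.choices."""
    sub = rng.choice(sorted(SUBS))
    chain = rng.choice(CHAINS)
    phrase = rng.choice(PHRASES)
    return f"a {sub} sub at {chain}... {phrase}"

print(compose_subtweet(random.Random(0)))
```

The resulting string would then be posted either at the company or as a plain status update, as described above.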

 

My second idea is based on a project I did for my EcoArt class: a series of photographs called Sexting Fruit. The premise was again based on a social phenomenon, i.e., leaked sexts from celebrities. With that concept in mind, I created a series of fruits in compromising positions and then released the images in some text messages. I feel that programming a computer to send the sexts via Twitter would also fit the social media angle of the project. It would use the same weigh → tweet concept as my original idea, except fruit would be weighed instead of subs. I am leaning towards my original concept, with the potential of combining these ideas (i.e., tweeting just words for the subs and images for the fruit) as another possibility.

Looking Outwards: Sensors and Shields

Adafruit Color Sensors

I found this particular sensor extremely exciting because I am really interested in color theory. One of the projects I was thinking of is a simple matching game where an individual sees a color on a screen, out of reach, and tries to match that color exactly using an assortment of objects around them. I would also enjoy using this sensor in my own painting practice to discover interesting color combinations.


Flex Sensor 2.2

For this sensor, I thought it would be interesting in performance pieces. I would really enjoy being able to use the movements of my body to control sound and visuals. The sensors would either have to be built into clothes or somehow hidden under makeup, because I think it would be physically interesting to have someone’s face control these elements as well, and not just through a camera lens. I wonder how sensitive these sensors are.

https://www.sparkfun.com/products/10264


[S]ensored.


RGB Color Detecting Sensor

This sensor sends RGB values of the light it receives to its output wires. While the intensity of light on its own is a useful measurement to have, the type of light being received is even more powerful. While I am not exactly sure what I would want to do with this part, it would not be difficult to map the red, green, and blue values of light in a room to sound, other lights, or the power distributed through a room. This one stood out from the other sensors, though I’m not exactly sure why.
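One arbitrary way such a mapping could look, brightness to pitch and the strongest channel to loudness. The formula is my own illustration, not anything specific to this sensor:

```python
def rgb_to_tone(r, g, b):
    """Map 0-255 RGB readings to a (frequency Hz, volume 0-1) pair.
    Uses standard luma weights for perceived brightness; the pitch
    range A3-A5 is an arbitrary choice for this sketch."""
    brightness = (r * 0.30 + g * 0.59 + b * 0.11) / 255
    freq_hz = 220 + brightness * (880 - 220)  # brighter light -> higher pitch
    volume = max(r, g, b) / 255               # strongest channel -> loudness
    return round(freq_hz, 1), round(volume, 2)
```

Feeding the sensor’s three output values through a function like this each frame would turn the color of the room directly into a tone.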

1063-00

Electret Microphone Amplifier


This amplifier is a great way to translate both direct and ambient sound into data which I can process with my Arduino. I love sound, and being able to couple it with the portability and computational abilities of the Arduino would be fantastic. I have a few ideas for projects involving analyzing the sounds I make on a normal basis (whether with my voice or otherwise). In particular I would like to match audio levels to some kind of physical reaction on my body.