Before I took an interest in the world of new media and technological art, I had seen and been familiar with very few such works. When I arrived at CMU and became further immersed in new media art, the possibilities began to seem endless, and more and more achievements of past artists caught my eye. This sudden interest in interdisciplinary forms of practice soon led me to the VIA festival, where my first memorable hands-on experience with new media art occurred. Although VIA consists of hundreds of outstanding people coming together to make all of the events and works in the festival possible, one of the most memorable for me was Lauren Goshinski’s own ASMR NPC experience.
This piece immerses the audience in an all-consuming virtual reality experience in which their visual and auditory senses are stimulated through the phenomenon of ASMR (Autonomous Sensory Meridian Response). Although the software used for this piece was not entirely created by Lauren herself, the team that came together to make each part of the experience possible still demonstrates a magnificent new aspect of technological art, one that widens the horizon for art to come.
The Rain Room is an interactive installation put together by Random International, a team of artists headed by Florian Ortkrass and Hannes Koch who consider science a new kind of material. What I really admire about this work is that it incorporates so many advanced technologies, working together to create a scenario that is basically impossible to experience in nature. The creation of the installation was somewhat accidental: Random International initially intended to create something that prints images with droplets of water, but along the way the process grew too complicated and convoluted, so the team took a step back and created the Rain Room instead. The Rain Room is essentially an environment of falling water, yet it allows viewers to navigate the room without getting wet. This is made possible by multiple 3D tracking cameras that sense where the viewers are and shut off the water directly above them. There was no ready-made program telling the cameras exactly what to do, so the most difficult and most important part of this project was for the artists at Random International to develop their own software just for this installation.
Link to this project: http://random-international.com/work/rainroom/
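The custom software mentioned above is not public, but the core idea of tracking visitors and shutting off valves above them can be sketched loosely. Everything below, from the grid size to the dry-zone radius, is an invented assumption for illustration, not Random International's actual system:

```python
# Hypothetical sketch of a Rain Room-style control loop: the actual
# Random International software is not public, so the grid size, cell
# size, and dry-zone radius here are assumptions for illustration only.

RADIUS = 0.8  # meters of dry space kept around each visitor (assumed)

def valve_states(visitors, grid_w=10, grid_h=10, cell=0.5):
    """Return a grid of booleans: True = valve open (raining)."""
    grid = [[True] * grid_w for _ in range(grid_h)]
    for row in range(grid_h):
        for col in range(grid_w):
            # center of this valve's cell on the floor, in meters
            x, y = (col + 0.5) * cell, (row + 0.5) * cell
            for (vx, vy) in visitors:
                if (x - vx) ** 2 + (y - vy) ** 2 <= RADIUS ** 2:
                    grid[row][col] = False  # shut off rain over a visitor
                    break
    return grid

# One visitor standing near the corner of a 5 m x 5 m room:
states = valve_states([(1.0, 1.0)])
```

In the real installation this loop would run continuously on live camera data, so the dry patch follows each visitor as they walk.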
Associate Professor Sarah Bergbreiter and her research team at the University of Maryland set out to create a robot the size of a grain of rice, an ant robot, in other words. At this scale the team met many challenges, such as mobility, maneuverability, and finding an energy source. At the beginning of their research they used simple elastics, like rubber bands, to observe how a robot of this scale could possibly jump. They then moved on to using magnets to simulate the joint movement of a robot of this scale. To this robot they added sensors and “appendages” for movement. As of now they have created a 1×1 centimeter robot that can run and respond to light through its sensors. However, they still have a ways to go before reaching their goal of a robot on a 1×1 millimeter scale. The possibilities for robots at this scale are amazing, and this is what inspires me. If Sarah and her team were able to put a kind of “brain” into this robot and begin to upload programs of actions for it, there would be endless possibilities!
‘Feed’ by Mark Napier is a new form of digital art in which the work of artists is accessed through the virtual space of the web. Mark Napier is one of five artists currently exhibiting work on the site of 010101: ART IN TECHNOLOGICAL TIMES, commissioned by SFMOMA. These artists are exploring the world wide web as a new medium for producing art, and Napier does this by manipulating the order of a web page’s code. With a tool called ‘The Shredder’, the site the audience is on is reopened and redisplayed after being manipulated and randomly rearranged, creating chaotic images as a result. One year after creating ‘The Shredder’, Napier created a similar piece called ‘Riot’, which merges the different sites the audience is surfing. His work breaks the order, rules, and hierarchy that have been set on the web and creates chaos in an aesthetically pleasing way, and this is why I admire his work.
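The idea of "manipulating the order of a web page's code" can be illustrated with a toy sketch. This is not Napier's actual implementation; the splitting rule and the fixed random seed are assumptions made purely for demonstration:

```python
# Loose illustration of the idea behind 'The Shredder': take a page's
# raw HTML apart and re-emit its pieces in a scrambled order. This is
# NOT Napier's actual code; the splitting rule and seed are assumptions.
import random
import re

def shred(html, seed=1):
    # split the markup into alternating tags and text runs
    pieces = re.findall(r"<[^>]+>|[^<]+", html)
    rng = random.Random(seed)
    rng.shuffle(pieces)          # destroy the document's original order
    return "".join(pieces)       # re-emit the fragments as a new "page"

page = "<html><body><p>order</p><p>rules</p></body></html>"
chaos = shred(page)
```

The output keeps every character of the original page but discards its structure, which is roughly the aesthetic effect the post describes: hierarchy broken, raw material preserved.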
“Feed the Head” is a web game made by Vectorpark games. The game has no real objective other than to explore the possibilities of what the head can do. For example, you can remove the nose of the head with your mouse and a new, unique nose will grow back.
The project is inspiring because it separates itself from typical, hyper-stimulating online games. Feed the Head is artistic and requires a decent amount of creative thought to discover the full potential of what the head can do.
While I admire the subtlety of the game, I wish some of its components were easier to find. I have played this game for hours and have been unable to make the head do everything. However, the enigmatic nature of the game makes it unusually beautiful. It also makes the user want to revisit the game and go back for more.
The maker of this project, Patrick Smith, also designed a similar game called Windowsill. Smith graduated from Washington University with a degree in painting. He apparently made this game for his own personal entertainment.
The inForm, created by the Tangible Media Group at MIT, is an interactive pin screen that uses a Microsoft Kinect sensor. The technology brings digital elements into reality by detecting motion and imitating it through the pins, driven by motors and a laptop. This interaction between two or more users brings together people from different cities, or even continents. While in the past it would have been difficult to show a physical prototype to a client overseas, it is now possible to show a general outline of a product through the inForm. Personally, I have seen many friends break up over the difficulties of long-distance relationships, but with inForm, people in long-distance relationships could still interact without physically being with their significant other all the time.

Since this project is a working prototype rather than a final product, some missed details are forgivable. Still, to mention a few: the pins would have to be smaller to increase the accuracy of the shapes they portray. For interactions involving physical human parts, the pins could be made of a material reminiscent of human skin; in the video, when hands are rendered as pins, I felt as though I were interacting with a robot more than a human. What the project does successfully execute, as shown in the video, is instantaneous interaction; even a slight lag can really deter the quality of interaction one person has with another. The projected light is also a crucial detail that makes the interaction feel alive; without it, the entertainment, education, and business possibilities would be diminished, as it would be just a plain pin board. As for inForm's chain of influence, it draws first of all on the pin screen and on the emergence of motion-sensing gaming.
InForm, in the larger context of what the Tangible Media Group is trying to achieve, is just a stepping stone toward their greater vision: erasing the distinction between digital and physical media and interaction. It is interesting to see where human interaction design is heading, and hopefully I can learn more about HCI through this course.
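The core pipeline described above, a Kinect depth image driving pin heights, can be sketched in a few lines. The grid resolution, pin travel, and depth range below are made-up numbers, not the Tangible Media Group's specifications:

```python
# Rough sketch of the core inForm idea: turn a depth image into target
# heights for a grid of actuated pins. The real system (Tangible Media
# Group, MIT) uses a Kinect and custom motor control; the resolutions
# and ranges below are invented numbers for illustration.

PIN_GRID = 30          # assume a 30 x 30 pin display
PIN_TRAVEL_MM = 100    # assume 100 mm of vertical pin travel

def depth_to_pins(depth, near=500, far=1500):
    """Map a (rows x cols) depth image in mm to pin heights in mm.

    Closer objects (smaller depth) push their pins up higher.
    """
    rows, cols = len(depth), len(depth[0])
    step_r, step_c = rows // PIN_GRID, cols // PIN_GRID
    pins = []
    for r in range(PIN_GRID):
        row = []
        for c in range(PIN_GRID):
            d = depth[r * step_r][c * step_c]         # naive downsample
            t = min(max((far - d) / (far - near), 0.0), 1.0)
            row.append(t * PIN_TRAVEL_MM)             # near -> tall pin
        pins.append(row)
    return pins

# A flat scene 1 m away: every pin sits at half travel.
frame = [[1000] * 300 for _ in range(300)]
heights = depth_to_pins(frame)
```

Streaming frames like this over a network and replaying them on a remote pin display is essentially the long-distance interaction scenario the post imagines.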
Massage Me is a project from 2007 by Kobakant, a collective consisting of Mika Satomi and Hannah Perner-Wilson. They have been collaborating since 2006, experimenting with combining traditional craft processes with electronics and interactive media.
With this project, they noticed the amount of energy the hands exert while playing games with a controller and asked whether that excess energy could be put toward another task. The result is a vest with embedded sensors that acts as a game controller and is worn by another person. As the first person plays the video game using the vest worn by the second, the second person receives a massage from the pressure of the pressed buttons. This turns the solitary nature of game console controllers, which usually allow only one person to use them at a time, into an interaction that involves another person. The vest is made with conductive fabric and neoprene, a synthetic rubber fabric commonly used for wetsuits. A PlayStation controller was taken apart and its buttons were rearranged across the back of the vest.
What I admire about this project is its playful sense of humor in rethinking how we can redesign an object. Kobakant also has great in-process documentation of their work, so you can see the process, failures, and ideas that they came up with along the way. Many of their projects also include online instructions so that you can create your own version. This openness about how their projects are created is really inspiring to me, since it allows people from different backgrounds to access and understand their works.
Just a few days before returning to Pittsburgh for the school year, I experienced Nightscape: A Light and Sound Experience by Klip Collective at Longwood Gardens in Kennett Square, PA. I’ve been visiting Longwood my whole life, always enjoying the expansive gardens, the variety of firework and fountain shows, as well as their incredible Christmas decorations come winter. So, when I learned about this project and the marriage between the natural, familiar place and new technology, I immediately knew I had to go.
Klip Collective, a visual art group based in Philly, worked for about two years with Longwood to develop the project. For years, the team at Longwood has been pushing the combination of technology and the natural world, but this project appears to be the first meant to be immersive and for everyone.
I think this project does open a lot of doors for discussion of how we see technology’s place in the world. The media experience doesn’t take away from the beauty and wonder of the natural world that Longwood so simply encapsulates. The media allows us to see things we’ve maybe become desensitized to in refreshing ways – the beauty of trees, water, and flowers is reinvigorated.
I think my only critique would be that in many places, the media had a neutral state, and then would start into a more elaborate animation. This neutral state lasted about 5 minutes, meaning that some viewers kept walking through, missing the even more awe-inspiring moments. Additionally, this caused quite a few back-ups in the walking portions as people waited to experience the changes.
Overall, though, it was incredibly inspiring for me to see somewhere so familiar and loved become something new and exciting.
Speaking of interactive media and computational art, the very first thing that comes to mind would be none other than video games. Out of all genres, I find RPGs (Role Playing Games) the most fascinating, as there is more interaction with the media than in any other genre. One of the pioneers of this particular genre is the widely acclaimed Pokemon. The franchise has since expanded to various media beyond video games, such as trading cards, anime, movies, and toys. However, I would like to focus on the very first generation of the video game in this blog post. The game designer Satoshi Tajiri based the game's concept on his life experiences, namely his childhood hobby of collecting insects. The first generation, Pokemon Red and Green (Red and Blue in the US), was first released by Nintendo in 1996, so I had the privilege of experiencing the game personally. The game, unlike many others of its time, was interesting not only in that it was set in a completely different world, but also because the player gets to play as a character and interact with other characters in the game. This opened many possibilities for interactive gameplay and inspired many other game designers to put emphasis on interaction with the audience.
RepRap (Replicating Rapid-Prototyper) is a widely used open-source, low-cost desktop 3D printer. It's admirable that the creator of RepRap chose to make the printer's designs free and available to anyone who wants to create something. 3D printers already existed, but RepRap brings the technology to those who could not afford other printers. The RepRap toolchain relies on CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing) software. RepRap was created by Adrian Bowyer, a mechanical engineer and professor who wanted to build a more accessible 3D printer. (In fact, the Guardian has called RepRap "an invention that will bring down global capitalism.") The printer builds objects out of plastic; true to its name, it can even print many of its own plastic parts, so one printer can help build another. If I were to critique it, I would say that the products would definitely not be as stable because of the plastic. However, I realize it would be hard to keep the software and printer free (its mission statement) if they used a more durable material.
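The CAD/CAM pipeline mentioned above ends in G-code, the plain-text command language that RepRap firmware consumes. As a toy illustration, here is a sketch that turns one square layer outline into G-code moves; the feed rate and extrusion numbers are invented, and real slicers handle far more (infill, temperatures, retraction):

```python
# Toy illustration of the last step of the RepRap toolchain: turning a
# square layer outline into G-code moves. Feed rate and extrusion math
# are invented for illustration; real slicing software does much more.

def square_layer(size_mm, z_mm, feed=1200):
    """Emit G-code tracing one square perimeter at height z_mm."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F{feed}"]        # move to the layer height
    e = 0.0                                     # cumulative filament extruded
    for (x, y) in corners:
        e += 0.05 * size_mm                     # crude extrusion estimate
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.2f} F{feed}")
    return lines

# One 20 mm square perimeter at the first 0.2 mm layer:
gcode = square_layer(20, 0.2)
```

Each `G1` line is a linear move the printer's firmware executes directly, which is why the whole toolchain can stay open and inspectable from CAD model down to machine commands.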