dinkolas-lookingoutwards01

Non-Euclidean Virtual Reality is a project by Vi Hart, Andrea Hawksley, and Henry Segerman, with help from many others. It's a virtual reality world in which you can move around in the normal forwards-backwards, up-down, left-right directions, but rather than behaving like a Euclidean 3D space, it is a hyperbolic space. This allows for things like five ninety-degree angles fitting around a single point. One of my favorite applications of graphics is demonstrating mathematical concepts, and this is next-level immersion in a math space that gives an intuition which would otherwise take a whole lotta reading and thinking to achieve.

The project used various sources of software for shaders, curved spaces, web VR, etc., but most of the work is their original JavaScript code. It is certainly a common tradition in mathematics, particularly math that extends beyond the third dimension, to imagine oneself as a being in a different space (Flatland: A Romance of Many Dimensions by Edwin Abbott Abbott is a great example), and this project carries that tradition into new technologies. The core concept of this project could certainly go in many different directions, and I think the educational applications of placing people into foreign dimensions could be valuable for beginners and experts alike.

You can try it out here; move around with the arrow keys and WASD keys.

Here's a video:

A .gif:

And an image:

yuvian-lookingoutwards01

As a preface, I did not know of many interactive/computational artworks before this class and most of my exposure to this field has been from looking through OpenProcessing.

One project I remember distinctly is Jason Labbe's sketch entitled "Frozen Brush"

The user interacts with this sketch by moving their mouse across the screen, dragging the "frozen brush" along with it.


Jason Labbe is an artist with a strong background in visual effects, game development, and 3D animation. He has worked on films such as "Avatar", "Avengers: Age of Ultron", and other projects that require sophisticated special effects. Thus, one can assume that he is inspired by game-engine physics and other aspects of computer-generated simulation.

Before coming across this project, I had just begun experimenting with Processing and p5.js, playing around with simple blocks and shapes. It was only after seeing this project and those similar to it that I realized how dynamic, intelligent, artistic, and interactive computational artworks could be. I admire the colors within this sketch and the fluidity of the movement of the "ice" as your mouse interacts with the code. Although simple, I think this sketch is really beautiful because it is straightforward but also seems well thought out. I aspire to make computational artwork that seems a lot simpler than it is while maintaining a level of artistic complexity.

One thing I would have liked to see in this project is, surprisingly, more interactivity. It would have been interesting to take in the pressure of the user's mouse click and use that information to affect the movement of the brush. In addition, since it is titled "Frozen Brush", I would have enabled a function that allows the user to actually paint onto the screen. Another feature that would have made this project more interesting and interactive is the use of sound. For some reason, I naturally associate the movement of the brush with sound, and pairing the project's visual effects with audio mapped to the brush's position would be very interesting to interact with.
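The core interaction (mouse movement spawning short-lived particles, plus the pressure input suggested above) could be sketched like this. This is my own toy logic, not Jason Labbe's actual Processing code, and the particle counts and lifetimes are guesses:

```javascript
// Toy "Frozen Brush"-style interaction: each mouse move spawns
// short-lived particles near the cursor, and a hypothetical
// pressure value in [0, 1] scales how many are sprayed.
function makeBrush() {
  const particles = [];
  return {
    // Called on every mouse move.
    move(x, y, pressure = 0.5) {
      const count = Math.round(1 + pressure * 9); // 1..10 particles
      for (let i = 0; i < count; i++) {
        particles.push({
          x: x + (Math.random() - 0.5) * 20, // jitter around cursor
          y: y + (Math.random() - 0.5) * 20,
          life: 60, // frames until the "ice" fades away
        });
      }
    },
    // Called once per frame: age particles, drop the dead ones.
    update() {
      for (const p of particles) p.life -= 1;
      for (let i = particles.length - 1; i >= 0; i--) {
        if (particles[i].life <= 0) particles.splice(i, 1);
      }
    },
    count() { return particles.length; },
  };
}
```

In a real p5.js sketch, `move` would be driven by `mouseMoved()` and `update` by `draw()`, with each particle rendered before it fades.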

sapeck-lookingoutwards01

Kinetic Lights is a project from Christopher Bauder and his creative studio WHITEvoid. The DMX-controlled winch and LED platform allows for hanging structures, lights, and mirrors to move and change form. I find objects animated by light captivating. The structures seem like they are from science fiction films, but they aren't images on a screen--they are nearly surreal, breathing spheres, rods, and beams of light.

The ecosystem of winches, light fixtures, and controllers appears to be built from the ground up. Each winch can be connected to an assortment of light-up forms. WHITEvoid designed software to map out and program the system in 3D while still controlling the lights through standard DMX. Unfortunately, Kinetic Lights is a commercial product, so not much information on its development is published.

Bauder developed this project after he created Lichtgrenze, a massive outdoor installation commemorating the 25th anniversary of the fall of the Berlin Wall. The same light-up balloon system used in that project is available as a Kinetic Lights fixture. The floating nature of the glowing spheres may have inspired the Kinetic Lights project.

The system is primarily used in very large installation art, but there are more possibilities. It could be used in performances where light and movement usually complement performers (e.g. concerts, dance, plays). Kinetic Lights would also fit well in malls and large atriums, where kinetic hanging art is already found.

Kinetic Lights by Christopher Bauder as a part of WHITEvoid

MIRROR-MIRROR

DEEP WEB - kinetic audiovisual installation and performance

breep-lookingoutwards01

Bastion (by Supergiant Games), Published 20th July, 2011

Bastion remains to this day one of my favourite video games. It follows the exploration of 'The Kid', whom you play, through a post-apocalyptic world as he tries to piece together 'The Bastion', one of the last safe havens. For me it was an eye-opening experience in the re-working of narratives, the full extent of the power of a soundtrack, as well as the capability of a small indie developer of only about seven people to produce a well-defined, glossy, and fabulously fun project. (I believe it took them two years to finish.) This, combined with the fact that they were working in two different cities to piece it together, makes it one of the core works in my library of favourites. I also feel that Bastion was the first time I fully absorbed the quality of such an individualistic art style for a game, which is very apparently continued in their other games.

I am uncertain of the software they used, but as this was an indie developer's first project, I don't imagine they would have used custom software. The lead developer of the art style was inspired by Japanese isometric games, like the early Romance of the Three Kingdoms games. Supergiant Games has since produced two other similar third-person, narrative-driven games, and I imagine it will continue to do so.

https://www.supergiantgames.com/games/bastion/ - This is a link to Supergiant Games' Bastion page (it has videos and images of the work on there as well, including the trailer).  

(Credit: SealedSun YouTube channel (https://www.youtube.com/channel/UCykRGanLNdkrLBuFLh-Ed3w), Bastion Gameplay, YouTube video, posted August 16th, 2011, retrieved 5th September 2018)

harsh-lookingoutwards01

One of my first introductions to the practice of expressing culture through the medium of technology was through Deeplocal. Though I've been interested in their work for a little while now (and curious about how the idea factory over there really works), a project that struck a variety of chords with me is their "Old Navy Selfiebration Machine". The questions this project indirectly asks about the homogeneity of screens in contemporary culture, and the projections it makes about new mediums of interacting with technology, make it stand out for me. Beyond that, it does all of this while still maintaining a certain punchiness that's attention-grabbing in a rather ludicrous fashion.

As someone trained as an architect, I find any new physical medium of interfacing with computation interesting. That is what makes this radical questioning of what a "pixel" means for computation, executed in such an unconventional way, so deeply fascinating.

To briefly describe the project: it is a grid of inflatable balloons connected to Twitter, where people can post their selfies, which then get rasterized and displayed on the balloon grid. (The video below explains it in a little more depth.) Knowing Deeplocal's project cycle, it's most likely that this was done in a timeframe of between two weeks and a month, with a team of 5-10 engineers and designers working on it.
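The rasterization step described above is essentially downsampling a photo to a very coarse, binary display. A minimal sketch of that idea, assuming a grayscale image and a simple brightness threshold (Deeplocal's actual pipeline is not public):

```javascript
// Downsample a flat grayscale pixel array (values 0-255, row-major)
// into a coarse boolean grid, where `true` means "inflate the balloon".
function toBalloonGrid(pixels, width, height, gridW, gridH, threshold = 128) {
  const grid = [];
  const cellW = width / gridW;
  const cellH = height / gridH;
  for (let gy = 0; gy < gridH; gy++) {
    const row = [];
    for (let gx = 0; gx < gridW; gx++) {
      // Average the brightness of the source pixels covered by this cell.
      let sum = 0, n = 0;
      for (let y = Math.floor(gy * cellH); y < Math.floor((gy + 1) * cellH); y++) {
        for (let x = Math.floor(gx * cellW); x < Math.floor((gx + 1) * cellW); x++) {
          sum += pixels[y * width + x];
          n++;
        }
      }
      row.push(sum / n >= threshold); // bright cell -> inflated balloon
    }
    grid.push(row);
  }
  return grid;
}
```

Each cell of the resulting grid would then drive one pneumatic valve, which is why the "resolution" of the display is just the number of balloons.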

From my knowledge of how they work, I'd say Deeplocal probably developed custom hardware (pneumatic systems, circuit boards etc) for this installation, and used some form of either C or C++ to program this system (though I am unsure how they linked it to Twitter).

The potential something like this holds in my mind is enormous. I hate the homogeneity of the screen-based interface of technology, and I envision a world where technology operates around us in a very real and physical manner. This project is for me both a provocation and a beginning of that idea (again, this might just be because of my architecture background). I yearn to experiment and play in that place where the material world interfaces with technology.

 

lass-lookingoutwards01

Dance Tonite is a web-based VR experience in which the user travels through a series of rooms containing virtual dancers moving to the beat of Tonite by LCD Soundsystem. Each dance performance was created by a fan of the band using VR controllers. When viewed from a web or mobile device, which is how I viewed the project, the user can watch the dances from a bird's-eye perspective or from the perspective of individual dancers. The thing I admire most about this project is its simplicity: since the movements of the dancers were captured with two controllers and a headset, each dancer is portrayed in the app as a single cone with two stick arms. The project makes use of solid, bright colors to match the upbeat tone of the music. The movements of the dancers are very natural and imperfect (which is expected, since they are recorded movements of actual dancers), which I think really adds to the experience and gives it less of a robotic feeling, despite how clean the shapes and colors are.

The project was created by Jonathan Puckey and Moniker, in collaboration with the Data Arts team at Google. The site lists a ton of technologies that they used: WebVR, Gamepad, Three.js, Preact, and Firebase, to name a few. I really appreciate how the artists are exploring recent technologies to create a unique music video, and I think this project shows off some great capabilities we now have for creating easily accessible virtual reality experiences on the web.

Dance Tonite by Jonathan Puckey, Moniker, and the Google Data Arts Team

airsun-lookingoutwards01

I saw this powerful installation at the Mori Art Museum in Tokyo this summer. I have seen interactive artworks through screens before, but I was not able to participate in many of them. For this one, standing in the space and looking at what was happening around me, the power the installation delivered was magnificent and significant. "Power of Scale" is an installation by Seiichi Saito and Rhizomatiks. It can be viewed as an informative introduction to architecture as well as an interactive digital art installation that embraces the audience through a demonstration of architectural history and human interaction design. Indeed, the exhibition is about Arata Isozaki's Japan-ness in Architecture, a groundbreaking 2006 book. It deals with topics such as "coexistence with nature" and "hybrid architecture", explaining how Japanese architecture flourishes with movable screens instead of the traditional concept of walls. The work is accomplished through video and fiber-laser technology. It provides a human-scale environment where the audience can have a vivid, realistic experience of discovering and reflecting on human scale and its relationship with the immediate surroundings.

Introducing "Power of Scale"- Mori Art Museum

 

chaine-lookingoutwards01

AIBO is a series of robotic pets created by Sony and first introduced on May 11, 1999. Prestigious designers and engineers worked together, ultimately earning AIBO spots in places like the Museum of Modern Art and the Smithsonian Institution, as well as many design awards. (They were also inducted into the Carnegie Mellon University Robot Hall of Fame!)

Teams of AIBOs played during several RoboCup events (short for Robot Soccer World Cup), which are aimed at promoting robotics and AI research. These robots were unique in that they had the capacity to learn, responding to a plethora of different variables--especially on the soccer field. The robodogs are able to take in sensory data and compute an action. If they perform a less-than-ideal action, they are able to improve their subsequent actions through positive feedback, much like a real dog being rewarded for learning how to high-five. New models of AIBO came out regularly from the 1999 release until 2006, when AIBO was discontinued. Every AIBO was created with AIBOLife software, which enabled them to "see", walk, and recognize commands, while their sounds were programmed by music composers who fused together mechanical and organic noises.
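The reward-feedback loop described above can be illustrated with a deliberately simplified sketch: keep a score per action, prefer the best-scoring one, and nudge scores toward the feedback received. This is a toy stand-in of my own, not Sony's actual AIBOLife software:

```javascript
// Minimal reward-feedback learner: positive feedback reinforces an
// action, negative feedback discourages it.
function makeLearner(actions, learningRate = 0.5) {
  const scores = Object.fromEntries(actions.map((a) => [a, 0]));
  return {
    // Pick the action with the highest learned score.
    choose() {
      return actions.reduce((best, a) => (scores[a] > scores[best] ? a : best));
    },
    // Move the action's score a fraction of the way toward the reward.
    feedback(action, reward) {
      scores[action] += learningRate * (reward - scores[action]);
    },
    score(action) { return scores[action]; },
  };
}
```

After a few rounds of praise for "high five", such a learner starts choosing it over its other actions; withhold the reward and the preference fades, which mirrors the dog-training analogy in the paragraph above.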

The creators of AIBO most likely studied previous robots with artificial intelligence and were inspired to make something more complex and ever-evolving with respect to its surrounding environment. AIBO serves as one of the checkpoints in artificial intelligence, opening opportunities for more self-learning algorithms.

https://us.aibo.com/

https://en.wikipedia.org/wiki/AIBO#Hardware

 

casher-lookingoutwards01

Last summer, I had the chance to attend SIGGRAPH, the world's largest computer graphics conference. While I was there I saw some of the coolest art and tech projects I've ever seen -- one that stood out to me was a projection art project called INORI-PRAYER (https://vimeo.com/209356195). A small group of Japanese developers from WOW collaborated with the University of Tokyo and the dance duo AyaBambi to create this striking performance, which uses real-time face tracking and projection mapping at 1000 fps. The dancers move to the song, and dark, creepy images that reflect its sad themes are mapped onto their faces. I love how the images enhance the mood of the music, especially because the song doesn't have words. I believe the software and scripts of the project were mostly custom-made, which is impressive -- they developed both the projection technology and the face-mapping technology. Projection mapping is destined to become a very useful tool for both artists and scientists. It is helpful for AR, so it is a cool way to incorporate interactivity into pieces, especially when combined with a camera. It provides a new medium for artists, but it can also help with data visualization, and it is very popular in the advertising industry.

ocannoli-lookingoutwards01

Detroit: Become Human is an adventure game officially released May 25, 2018 by Quantic Dream that emphasizes the importance of choices and moral conflict. I admire the extremely complex web of decisions that forms the basis of this game, the stunning and tedious use of motion-capture graphics, and the ability to create a narrative so relevant and important to the atmosphere of today. The project took five and a half years to complete, with about 180 employees of Quantic Dream and 250 actors working on it, not including outsourced work. Although Quantic Dream used pre-existing software to create the visuals, they had to develop their own software to track and debug the insane amount of code in the numerous branches of the narrative, so in a sense it is a combination of the new and the old. Director David Cage and his team were definitely inspired by the legacy of choose-your-own-adventure games and narratives such as the Blade Runner series, Westworld, A.I., etc.; however, the game still adds its own twist on all its influences. It is important to note that Detroit: Become Human does have its flaws: many criticize the narrative as too cliche and the gameplay as somewhat lackluster and not as engaging as a game should be. Many of these criticisms, though, could be addressed by the new medium some claim the game is pushing towards: a future of more cinematic video games, or even one where the lines between film and audience input and interaction are blurred. I find that so exciting, especially when the narrative has such valuable societal implications and relevant lessons.

Detroit Become Human Launch Trailer:

Branches and Brief Gameplay:

Branching Narrative Example: