Social Soul is a social media installation created in collaboration by Lauren McCarthy and Kyle McDonald. It brings a user’s Twitter stream and profile to life in 360 degrees of monitors, mirrors, and sound.
More impressively, they created an algorithm to match users with other attendees and speakers at the conference where the piece was installed, and displayed the match’s stream. Users were then invited to connect with their social soul mate.
They built it using seven different programming languages, and the visual and audio arrangements were computationally generated live.
What inspires me about this piece is the fact that it is personal. Every user can step in and feel that this is a piece made for them. And the experience isn’t over when you leave, because the piece connects you to a real, live person, so participating in Social Soul outlives the installation itself. I personally really enjoy the idea of connecting people and exposing the differences and similarities among social circles, which is why this piece really fascinates me.
I’ve always been inspired by large-scale projects and artworks, especially in the realm of video games and 3D models. For example, I am fascinated by sprawling Minecraft creations and the super-detailed, vast landscapes and cityscapes created for games like Grand Theft Auto. In movies like Star Wars, detailed 3D renderings of cities that stretch as far as the eye can see (like Coruscant) have also inspired me (mostly resulting in my using them as computer wallpapers). In the past I’ve also dabbled in Google Earth model-ripping software that let you export real-world 3D cities into 3DS Max to work with.
The latest in the world of large-scale projects is perhaps the largest of them all: No Man’s Sky. It is a space exploration game in which you can visit planets, starships, and space stations by flying around in a ship, collecting minerals and materials to trade and craft with. What makes this game so unique is the complex mathematical algorithms and logic used to procedurally generate planets, creatures, plant life, atmospheres, and “properties” (gravity, toxicity, etc.). The team developed this technology in-house, with the help of mathematicians and a graphic designer. Although procedurally generated biology existed before, applying it to such a variety of 3D models in such a large-scale, public project had never been done.
Details about the game’s creation, the methods used and about the team are available in this interview on Kotaku.
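The seeded, on-demand flavor of generation described above can be sketched in a few lines. Below is a minimal, illustrative value-noise function in Python; this is my own toy example, not No Man’s Sky’s actual algorithm. Every coordinate hashes to a deterministic height, so the same seed always reproduces the same “terrain” without any of it ever being stored.

```python
import hashlib
import math

def lattice_value(x, y, seed):
    """Deterministic pseudo-random height in [0, 1) for an integer lattice point."""
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32

def smoothstep(t):
    """Ease curve so heights blend smoothly between lattice points."""
    return t * t * (3 - 2 * t)

def value_noise(x, y, seed=0):
    """Interpolated noise: the same (x, y, seed) always yields the same height."""
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = smoothstep(x - x0), smoothstep(y - y0)
    h00 = lattice_value(x0, y0, seed)
    h10 = lattice_value(x0 + 1, y0, seed)
    h01 = lattice_value(x0, y0 + 1, seed)
    h11 = lattice_value(x0 + 1, y0 + 1, seed)
    top = h00 + (h10 - h00) * tx
    bot = h01 + (h11 - h01) * tx
    return top + (bot - top) * ty

# Sample one strip of "terrain": any coordinate can be queried on demand,
# which is how a procedural world can be unbounded yet never stored.
heights = [value_noise(x * 0.4, 2.0, seed=42) for x in range(8)]
```

Layering several octaves of noise like this at different frequencies is one common way procedural generators arrive at natural-looking landscapes.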
The first time I learned about computational art was my senior year of high school, in my first graphic design class, where I sat next to a buddy of mine who was a “hacker”. At the time, I didn’t understand what that word meant or what programming really was, but he showed me some abstract “art” that “he told the computer to do.” At first I thought, why would you do this when you can just make it in Illustrator? But then he output something like 20 different compositions in a few seconds, and I thought, “whoa”.
One Field Trip in 2014
The second time that computational and interactive art left a deep impression on me, inspiring me to take 15-110 and attempt 15-112 sophomore year, was a design field trip my first year at CMU. We toured a few places, but the two that I still remember to this day are two NYC design agencies with an appreciation for coding (one of them literally has the word “code” in its name): Code and Theory and Breakfast. These two places made me realize that designers can program and programmers can design; these two fields are not mutually exclusive.
Code and Theory
Code and Theory showed us their website upon entering their studio; check it out below or, optimally, at their site:
As flashy as this visual candy is, for me, this was the first time I linked design and programming together as a way to visually communicate. Code and Theory could easily have decided not to include the supplementary animations and interactions that go with their descriptions, but they didn’t, and I’m glad.
Breakfast was commissioned by TNT to create this interactive billboard advertising a new crime-solving show, Perception. The protagonist can apparently see anagrams among large blocks of text, so Breakfast “revived a sign technology of yesteryear to create an anagram-finding experience on the streets of New York”. Zolty, Breakfast’s creative director and founder, explained that they had to write their own software to make the dots flip from black to white 15 times faster than the hardware was originally designed for, so that when anyone walked by, the interaction happened in real time.
In some ways this project reminds me of Text Rain by Romy Achituv and Camille Utterback (1999). Both are interactive installations that track body movement, allowing the audience to play with letters and words. Breakfast’s installation and execution are definitely more complex in medium, code, and technology, but the core concept, tracking people’s movements so they can interact with a screen (digital or physical), is simple and shared by both.
While I don’t imagine myself ever being the programmer on a team, I think Breakfast demonstrates what can happen when a small team with a diverse range of experiences and skill sets unifies. I really admire Breakfast’s ambition and their philosophy of improving “how connected devices can look and act in the real world”: “Technology doesn’t need to stand out and look like technology. It can blend in and hide the complexity behind great design.”
“The future is not a touch-screen on a wall”
Some more pictures below from this memorable visit:
Although I did not learn about this artist until just recently, I would like to explore the boundary between games and interactive computational art. Mario von Rickenbach is a designer, artist, and programmer who, for me, blurs that line. One way I would argue he does this is through his loose interpretation of what a game is: most of his big projects have no enemies or obstacles to overcome, and some do not even have real objectives. Many of his games, like Plug & Play and Mirage, are more exploratory. In Plug & Play, he took a very surreal short film and a very abstract concept and turned them into a computer game. As is evident from the Let’s Play below, the game is not clear in its objective or even in how to progress. The video above also explains some of the mechanisms used in Unity to create the physics behind the gameplay.
Several other examples of projects that blur this line were part of the Bit Bash Festival, a Chicago game festival celebrating independent and alternative games and designers. Many of the projects seen in the video below can in fact be called games, but they also exploit bugs in computer graphics to create a specific “vaporwave” look. Games like this, again, can be seen as surreal, with much more intention placed on the graphics than on the inventiveness of the gameplay itself.
For my Looking Outwards, I chose to examine the project “HYPER-REALITY” by Keiichi Matsuda. While it is not necessarily interactive in a literal way, the project presents a speculative first-person experience of future augmented reality that I find myself re-watching on a regular basis because it is so rich in depth. It feels interactive because of the amount of thought put into it, and the amount of thought it generates in the viewer. The quality of the special effects is extremely high, and arguably surpassed by the richness of the conceptual content. The video presents a dystopian future of wearable augmented reality and biofeedback/Internet-of-Things-style devices. As a big fan of cyberpunk novels, I was reminded of some of my favorite books, not only because of the content, but because of the artist’s attention to narrative detail in the story. Although the content presented is absurd, the linear path from contemporary technology to the technology in the video is very clear, and it is made clear without any explanation that breaks the mythology of this world. I think it was made by filming in a city and strategically overlaying digitally generated content on real places, paying close attention to perspective lines. I would imagine that this work was inspired by cyberpunk books and films.
I want to write about an interactive digital walk site called VOID. Its unconventional, unexpected, versatile ways of interaction and the subtle, immersive experience throughout the story make it a unique learning subject for me to revisit. And every time I do, I can still see something new: the hidden glowing letters on the bottom, the different spectrums the black ice reflects, the dazzling sound from the icy, wiry, sharp geometries surrounding the letters of Hi-Res…
VOID is a digital walk created by Robin Gardeur (IeSHKA) and the Hi-Res team in 2015. The project is very conceptual, exploring the possibilities of ideas and imagination through sound and visuals. The forms and shapes are impossible to describe, yet so hard to look away from. Each chapter has its own theme and dynamic, but together they become a coherent whole. Aside from the visuals, the sound is so well integrated, changing according to every input from the user, every shape and concept.
The preface reads: “All art is created from nothing into something. Our imaginations piece together concepts which we then transform as well as manipulating mass to fill space. We use VOID to house these pieces, and we see the emptiness that surrounds them as potential for new ideas.” It is fascinating that the creators chose to turn the idea of “idea” itself into an out-of-reality experience that is well thought out and beautifully put together.
The website was built using WebGL and Web Audio technologies: three.js, howler.js, GSAP, and Coffee Collider. Although they all look quite intimidating, I am determined to use my time in this course to learn more about them and, at the same time, open up the possibilities for my own ideas. It will be quite a journey.
I was very amused by Maryyann Landlord and Ralph Kim’s VR piece at the FREE GERMS 2016 Senior Exhibition last semester. In their work, the user got to play the role of an insidious-looking character on a bus or subway. As this character, you were able to pop the balloons of small, child-like creatures; by doing so, you made them cry, and then they died. You essentially killed them by ruining their happiness. I think there is something compelling about interactive pieces, and I really enjoyed the style of this piece in particular. I like the idea of bringing people into very imaginative worlds and having them take on different roles in those worlds. They somehow managed to pull together a piece with an incredibly dark concept, but it felt like it was coated in a layer of sugar. It was very satisfying.
My first exposure to computer-based generativity that interested me was learning about the computer-generated random environments in The Elder Scrolls: Arena and Daggerfall. I didn’t play them because I was too young, but I learned about them while playing a later game in the franchise. I’ve always been interested in the world-building aspects of video games, so I was fascinated to learn that the worlds in these two games had been created by computers, instead of by people working hard to craft every inch of an in-game world of 62,394 mi^2. Even though they are old games and lack polish, just their sheer size is something I find really amazing and interesting. This is probably one of the things that got me interested in how computers can make things, instead of just being used as tools for people to make things.
This piece by TeamLab was the catalyst that got me interested in new media arts. The animation is projected onto the viewers in the gallery, giving the whole room a sense of movement. The music combined with the motion of the work creates an emotional and almost overwhelming experience. TeamLab is a Japanese collective of engineers, animators, designers, artists, programmers, and more. I’m not sure how many people work on a single piece or how long it may have taken them. I imagine that this animation required custom software, but I’m not very well versed in animation software. Their work is clearly rooted in and influenced by prior Japanese art. TeamLab makes a lot of public art, often geared towards children, so their pieces are both very engaging and relatable to a general audience.
When I was 12, I was on the way back to Chicago from visiting my grandparents in Louisville, and had only the Toy Story 2 DVD and a portable DVD player to entertain myself. After I watched the movie and sat around bored for a while, I decided to watch it again, this time with the director’s commentary.
In it, they talked a lot about how far 3D computer animation had come since the first Toy Story, and how they were now able to process and render so much more than just a few years earlier. I hadn’t really thought all that much about the difference technological advances had made in 3D animation until then. All of a sudden I was noticing the difference in quality between every 3D animation I watched, and I decided going down a career path in the animation industry would be fun.
PIXAR has pushed once-stiff characters to nearly photorealistic accuracy and is always looking to keep pushing the limits of computer animation. But PIXAR, in addition to its A+ team of software engineers, has amazing artists and storytellers. It’s not just the graphics that make their movies enjoyable; their commitment to the highest industry standards is admirable.
I have been inspired by Karolina Sobecka’s project Sniff since I heard about it four years ago. This project was, as far as I’m concerned, extremely innovative for its time, and it is surprising to look at the documentation and realize it was made seven years ago. It is an example of a site-specific interactive projection in public space. The project uses computer vision, openFrameworks, and the real-time graphics game engine Unity 3D.
The project took place on a storefront window and sidewalk in New York City. Using an IR camera, people’s movements were monitored, and an animated dog “read” the gestural responses of passersby to inform its artificial-intelligence system, forming a relationship with the person interacting with it (one that could read as friendly, excited, aggressive, or standoffish). I feel this was innovative in many ways. The artists, Karolina Sobecka and James George, wrote custom software to create the project; more importantly, they explored in an effective way the emotional and psychological impacts that can take place within an interaction with a digital “presence” (in this case, a dog). Many questions are raised here. Who is affecting whom? Where are cause and effect? Can you have an embodied experience with a digital presence? Can you summon genuine emotion from one? These are questions that I ask in my own work and would like to explore in my research, creating interactive, digital experiences that explore our relationship to space, examine our connection to each other, and focus on embodiment. And I believe embodiment with a direct link to the spatial conditions around us (site-specificity) is the most powerful.
Sobecka’s work was inspired by the essay, “The Body We Care For,” by Vinciane Despret, which discusses a horse named Hans who was believed to have been able to learn mathematics. However, the horse was simply responding to physical and emotional signs from its handlers. She quotes Despret, “Who influences and who is influenced, in this story, are questions that can no longer receive a clear answer. Both, human and horse, are cause and effect of each other’s movements. Both induce and are induced, affect and are affected. Both embody each other’s mind.”
These questions, which I believe are becoming increasingly imperative, are at the root of my research.
Here’s a video of someone interacting with the work.
The thing that may have most inspired me to begin learning to code was the program Meander, which was used in Disney’s short film Paperman. I have always loved animation, but before I saw Paperman, there was always such a clear distinction for me between 2D and 3D. 2D had distinct stylistic advantages, while 3D could achieve gorgeous and realistic effects. Although the new wave of 3D animation was exciting, a bit of nostalgic longing for the old 2D movies always stayed with me. Hand-drawn lines have an appeal and stylish nature that I’ve never seen matched in a 3D film. Give or take certain nuances, most 3D films give off the same sort of feeling. With Paperman, it was different. I was amazed. Even if the style was classic Disney, the feeling the animation gave off, the atmosphere of the lighting and lines, was unique. I looked up the short later and found that the effect was achieved using a program called Meander, made in Disney’s R&D department. Meander made it possible to animate the short in 3D while hand-drawing 2D lines on top that would morph, contour, and follow the curves of the 3D spaces and characters. They had essentially given a 3D movie the appeal of 2D. It was because of Paperman that I realized the possibilities that computer science brought to the field of animation, and that by learning computer science, I could open up those possibilities for myself.
Initially, when given this assignment, I worried that I had no personally recognizable exposure to what I defined as interactive/computational art, projects, or installations. Perhaps I overthought and unnecessarily limited myself, but my aspiration to use technology and design together stems from experiences of sequential narratives such as animations and video games, so I ultimately decided to reflect on what I consider a compromise: an interactive work in a medium with which I am more familiar.
Such is the video game Flower by thatgamecompany, in which the player is the wind, guiding and collecting petals by interacting with the surrounding environment; the goals and journey vary with each level, but all involve flight and exploration to create an idyllic atmosphere. The game was created by a development team that included producers, directors, engineers, designers, illustrators, writers, and composers, among others, so that every last detail could be successfully integrated into the interface. Flower was actually the second project in a three-game deal with PlayStation, in which Sony offered to fund three games from the company, meaning it is specific to the PS3 and would likely not be available on other platforms in the future.
However, Flower challenges traditional gaming conventions by delivering simple gameplay with accessible controls (SIXAXIS motion sensors) in a medium meant to evoke positive emotions in the player; the team viewed their efforts as creating a work of art, removing elements and mechanics that did not provoke the desired response in players. The result is a narrative arc that progresses through visual aesthetics and emotional cues as the audience fades away from the external, stressful world. As a student whose goal is to have technology and art coexist effectively, this game points to future opportunities for advanced visual, audio, and interactive escapes that engage players and strum the chords of feeling that all consumers naturally have. This is important to me as a gamer and an artist: that there is feedback, a response, between the resulting products and the audience.
That Game Company’s website: http://thatgamecompany.com/games/journey/
This is Journey, a game developed by thatgamecompany in 2012. They had a crew of 19 people, used PhyreEngine, and drew inspiration from games like Braid. This is THE gaming experience of this decade, a story that I will never forget! Journey tells its narrative wordlessly, and its game mechanics aren’t particularly challenging. You won’t ever find yourself stuck on a puzzle, but Journey isn’t really a game about solving things. Journey is about finding your way up a mountain towards a light while being guided by another player who has already made the pilgrimage. This game tells a story that the player gives meaning to in their own way, touching the soul in a way that only games can.
Journey excites me not only because it is a beautiful game, but because it has made a major impact on the gaming community. It paved the way for more and more abstractionist/First Word games, and my new favorite game to come out this year, Bound, draws clear inspiration from Journey. I feel like the path towards becoming a better person starts with understanding yourself, and these sorts of games are incredibly helpful for that.
My huge interest in generative things, especially generative landscapes, started years ago when I first saw Minecraft on somebody else’s computer. What amazed me was not only that the landscapes were realistic and beautiful, but also that the maps were infinite, and every generated world was different from any other.
I spent hours traveling in these worlds, pondering how they were generated. The style of my own work also started changing: I used fewer and fewer pre-rendered images, until one day I decided to have everything in a project procedurally generated.
The project was initially developed in Java by a single person, Markus “Notch” Persson, and later by his team. This fact also convinced me of how much a single person with a computer can do.
It is said that Notch was inspired by other generative games such as “Dwarf Fortress”.
I believe this project, among many others, may point to a very interesting future. What will we be able to generate? Will we be able to tell if something is real or generated?
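The core trick behind such generated-yet-reproducible infinite worlds can be sketched in a few lines of Python. This is an illustrative toy of my own, not Minecraft’s actual generator: the map is produced lazily in chunks, and each chunk’s content is a pure function of the world seed and the chunk coordinate.

```python
import random

def generate_chunk(world_seed, chunk_x, width=8):
    """Deterministically generate one chunk of terrain heights.

    Each chunk gets its own RNG stream derived from (world_seed, chunk_x),
    so any chunk can be regenerated on demand without ever storing the map,
    which is what makes an effectively infinite world practical.
    """
    rng = random.Random(f"{world_seed}:{chunk_x}")
    return [rng.randint(60, 70) for _ in range(width)]

world_a = [generate_chunk("alpha", cx) for cx in range(3)]
world_b = [generate_chunk("alpha", cx) for cx in range(3)]
# world_a == world_b: the same seed always recreates the same world,
# while a different seed yields a different one.
```

In a real generator the per-chunk randomness would feed noise functions and biome rules rather than raw heights, but the seed-to-chunk determinism is the same idea.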
This project, called Displacements, is something I was introduced to a few years ago in Larry Shea’s Media Performance class here at CMU. I wasn’t initially enamored with it, admittedly, but as time has gone on, I find myself thinking about this piece quite often. I think it struck a chord with me in the following months and years because it was such a simple, yet so precisely executed, piece.
The installation is a completely blank, white room onto which a spinning projector projects color and movement. Before the room was painted white, actors had been filmed in it with a camera rotating at the same speed and in the same place as the projector, so that they appear in the projection. Naimark is often referred to as a pioneer of projection mapping, and indeed he is; Displacements was installed in 1984 at the San Francisco Museum of Modern Art. (Although there were projection-mapping projects, such as the faces in Disney’s theme park, as early as 1969.)
As an artist who is interested in memory and its connection to space and objects, this piece hits each of those chords while remaining a very open project. It’s a step into augmented reality.
I’ll be honest; I don’t really keep up with the computational art scene. When I first heard about this assignment, no particular project came to mind. Still, I love computational art when I see it; when I was a kid and my family visited a museum, I would always spend an unreasonably long time playing with the interactive wall projections, catching colorful raindrops in my hand or stretching out my arms to see how many digital birds I could get to land on them. While I love this sort of thing, and was really excited by the idea of this class, I can’t point to any specific project and say “That’s what inspired me.”
So, what am I going to write about? Only the latest, greatest, interactive augmented reality project that basically took over the world in less than a week. That’s right: Pokemon Go. Yeah, yeah, I know it’s not as purely artistic as many other computational art projects, but it’s an excellent example of emerging technologies coming together to form something that’s interactive, entertaining, and all-around pretty impressive.
For those of you who don’t know what Pokemon Go is, it’s a new(-ish) mobile game that allows users to collect and battle virtual animals called “Pokemon” in the real world. Players need to physically walk around to earn points, find Pokemon, and hatch eggs. It may seem pretty simplistic, but there’s a lot going on. The app uses GPS technology to find out where you are and how far you’ve walked (I tend to agree with the people who say that the phone’s pedometer would have been a better way to measure the latter). It uses data or WiFi to access game information, like what Pokemon and Pokestops are in your area. Finally, it uses your camera to display an augmented version of reality: one that includes little animals all over the place.
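As an illustration of the GPS side (my own sketch, not Niantic’s code): successive latitude/longitude fixes can be converted into distance walked with the haversine great-circle formula.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine formula)."""
    R = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Summing successive fixes approximates distance walked. Noisy fixes are
# one reason GPS can over- or under-count compared to a pedometer.
fixes = [(40.4433, -79.9436), (40.4436, -79.9436), (40.4436, -79.9430)]
total = sum(haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:]))
```

The coordinates above are arbitrary example fixes; a real app would also filter out jittery readings before summing.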
None of these technologies are particularly new, but never before has a game used all of them to this degree, on this scale. Through this app, players have access to an entire virtual world that Niantic (the company behind Pokemon Go) has created. Real-world locations are used as Poke-Gyms and Poke-stops, and you can watch the digital avatar you design for yourself walking through your town. It’s a massive project that has gotten nerds everywhere out walking, exercising, and socializing, and its success could mean more augmented reality games like this in the future.
It may not be a traditional art project, but I find Pokemon Go pretty inspiring. Sure, it could use some improvements (*cough* tracking *cough*), but what’s more important than the gameplay itself is the fact that augmented reality is making its way into our everyday life. Niantic has even said that they are working on making it work on smart glasses! Pokemon Go is the first step in what is hopefully a massive entertainment revolution. If people had the opportunity to view the world around them through well-implemented augmented reality that wasn’t hugely inconvenient, it would make gaming much more immersive, exciting, and (for what it’s worth) healthy. That’s the sort of thing I want to work on in the future, and the sort of world I want to see.
A computational project that got me interested in taking this class was SketchSynth by Billy Keyes. The project is basically a drawable controller. The user takes a piece of paper and draws various buttons, sliders, and toggle switches; the program recognizes these drawn controls and then makes them functional by tracking human interactions with them. I admire how the artist connects the physical and virtual worlds through the nature of the project, and also how the user has control over what type of controller they create. I also think it’s cool that this project actually sprang from a class held at CMU in 2012. It goes to show how we, although intermediate programmers, have the resources to develop such exciting programs.
The project was created and developed entirely by one student. In terms of software, Keyes primarily used the open-source toolkit openFrameworks, along with add-ons developed by other artists. This project strengthens the user’s influence and sets the stage for more complex works where users can change the outcome of the program through minor decisions.
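The recognition step can be imagined roughly like this. Below is a toy sketch of my own, not Keyes’s actual openFrameworks code: threshold the camera image, then group dark pixels into connected regions, each of which is a candidate drawn control.

```python
def find_drawn_controls(image, threshold=128):
    """Toy recognition step: find connected dark regions (candidate buttons,
    sliders, etc.) in a grayscale image given as a 2D list of pixel values."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold and not seen[y][x]:
                # flood-fill one dark region
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           image[ny][nx] < threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(pixels)
    return regions

# Two ink marks on a tiny 4x6 "scan" (0 = ink, 255 = paper)
scan = [
    [255,   0,   0, 255, 255, 255],
    [255,   0,   0, 255, 255, 255],
    [255, 255, 255, 255,   0, 255],
    [255, 255, 255, 255,   0, 255],
]
regions = find_drawn_controls(scan)  # finds 2 separate drawn shapes
```

A real system would additionally classify each region’s shape (circle vs. line vs. arrow) to decide whether it is a button, slider, or switch, and then watch for fingers over those regions.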
My first exposure to computational design and art came through Steven Wittens (acko.net) in my senior year of high school, while taking a Calculus and Vectors class. I had become irritated with the way my teacher was approaching the topic, never letting us explore the material or do projects, instead building the entire semester around rote tests and quizzes. Around the same time, I also became aware of Bret Victor (worrydream.com), whose projects inspire me immensely to this day.
One of my favourite pieces of Bret’s work is Drawing Dynamic Visualizations (video, additional notes), a concept for a hybrid direct-manipulation/programmatic information visualizer.
In his talk, Bret introduces the problems that spreadsheets only create pre-fab visualizations, that drawing programs like Illustrator can’t deal with dynamic data, and that the output of coded artifacts isn’t continuously “seeable”: you can’t see what you’re making until after you render, which creates a feedback loop where errors can creep in. To express this idea, he posits that programming amounts to “blindly manipulating symbols,” a feeling I relate to very strongly when I don’t know exactly what my code is doing and can’t recreate the entire structure in my mind’s eye.
As a solution to this problem, Bret presents a concept for a program that combines the idea of direct manipulation with the ability to process and handle dynamic data.
This prototype was created wholly by Bret, but it is not his first attempt at creating programmatic drawing tools or concepts; for prior art, see his works Substroke, Dynamic Drawing, and ‘Stop Drawing Dead Fish.’ In terms of the future, I see the possibility for tools like this to change how many people work with the computational display of information and ideas. Personally, I’ve never taken immense joy from the act of programming, but rather from the results it produces, and I believe tools like this could make that power far more accessible and enjoyable.
For this week, I’ve chosen to write about one of the first computational design projects I ever heard about, and one that certainly changed the way I understood programming forever. While enrolled in 15-104, we were working in Processing, which felt enormously intuitive to me. I had encountered programming in an introductory class in high school, but it had always been so deeply rooted in a perspective of math and execution of function that it never really grew on me, and I found it difficult. Processing flipped the programming metaphor on its head, establishing a visual feedback system that I immediately understood. Naturally I was curious about who had created this amazing tool, and I quickly stumbled upon Casey Reas’s portfolio. I was enchanted by the intricate and pseudo-natural patterning in his work, but couldn’t unpack it visually. Then I found his “Process Compendium,” which describes the algorithms behind (much of) his work in plain English: a logic-based framework for creating interactions infinitely more complex than each component. This compendium also likely explains the name “Processing”: the method of translating a ruleset, or process, into a coded algorithm that creates an output. This collection of projects and the mindset it implied is what really showed me how powerful programming is as a creative medium, and how it allows artists and designers to work in ways far beyond the capabilities of their own hands, in an orchestration of thoughts and rules to make beautiful systems. Since then, I’ve followed this theme both in programming-based work and in learning about natural generative or emergent systems as a lens to observe, learn from, and emulate nature.
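In that spirit, here is a small “process” of my own, written in plain Python rather than Processing, and far simpler than anything in Reas’s compendium: a few elements follow two rules (move in a straight line; turn randomly when you hit a wall), and their accumulated trails are the drawing. The rules and parameters are invented for illustration.

```python
import math
import random

def run_process(n_elements=5, steps=100, size=100, seed=7):
    """A tiny Reas-style 'process': simple elements obeying simple rules,
    whose accumulated trails form a drawing more complex than any one rule."""
    rng = random.Random(seed)
    elements = [{"x": rng.uniform(0, size), "y": rng.uniform(0, size),
                 "angle": rng.uniform(0, 360)} for _ in range(n_elements)]
    trails = [[] for _ in elements]
    for _ in range(steps):
        for e, trail in zip(elements, trails):
            # Rule 1: move one unit along the current heading
            e["x"] += math.cos(math.radians(e["angle"]))
            e["y"] += math.sin(math.radians(e["angle"]))
            # Rule 2: on hitting a wall, clamp to the edge and pick a new heading
            if not (0 <= e["x"] <= size and 0 <= e["y"] <= size):
                e["x"] = min(max(e["x"], 0), size)
                e["y"] = min(max(e["y"], 0), size)
                e["angle"] = rng.uniform(0, 360)
            trail.append((e["x"], e["y"]))
    return trails

trails = run_process()  # each trail is a list of (x, y) points to be drawn
```

In Processing, the same loop would live in `draw()` with lines rendered between successive trail points; the point here is only how a terse ruleset unfolds into rich structure.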