I personally don’t have a very broad or thorough concept of what physical computing entails, so I decided to look at three projects that are very different from one another but touch on different sides of the field.
The first piece I really liked was similar to Design IO’s Connected Worlds. Curious Displays by Julia Tsao simulates what may eventually become a real physical project through a connected display spanning two screens in a living-room setting, plus some sort of sensor that detects the placement of objects around the room.
For my final Looking Outwards I decided to look at Angélica Dass; although she is not a tech artist, she inspired much of my work for my game and my last project for this class.
Angélica Dass is a Brazilian photographer based in Madrid, Spain. She is best known for her exhibit Humanae, which explores the true variation between skin tones and challenges what exactly makes us the race we are classified as; as she asked in her TED Talk, “does it have to do with our origin, nationality or bank account?” She speaks on growing up in Brazil, which, like many Latin American countries (and countries in general), culturally contains many implicit and explicit biases against people of darker complexions. Dass recalls being treated like a nanny or a prostitute many times because of her complexion, a story eerily similar to one I heard while interviewing a few Brazilian women for my project. This issue has improved greatly since the times they describe, but it has not been eradicated anywhere in the world.
Dass’ other pieces include Vecinas, a collaboration with the Mallan Council of Spain that aimed to reshape the perception of migrants and refugees of Mall through archived pictures and Dass’ photography, and De Pies A Cabeza, a series of photographs of people’s faces and their shoes. Her work challenges existing notions in a subtle and objective but very powerful way. Conceptually she is “goals,” so to speak; in my game and in a lot of my work I aim to achieve the same sense of subtlety she does, with the same amount of intellectual impact.
For the final project, I plan to use a Skin Color Detection API to write a Python function that detects a face in the camera, takes a picture of the user, extracts two colors from the skin tone (using the API), and maps that as a texture onto a character model in Maya.
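Since I haven’t settled on a specific API yet, here is a minimal, standalone sketch of just the two-color extraction step, assuming face detection has already happened and `pixels` holds only skin-region RGB values. The function name and the split-by-luminance approach are my own illustration, not any particular API’s behavior:

```python
def two_skin_tones(pixels):
    """Split face-region pixels into a lighter and a darker representative tone.

    pixels: list of (r, g, b) tuples, assumed already cropped to skin.
    Returns two (r, g, b) averages: (light_tone, dark_tone).
    """
    def luminance(p):
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b  # standard luma weights

    mean_luma = sum(luminance(p) for p in pixels) / len(pixels)
    light = [p for p in pixels if luminance(p) >= mean_luma]
    dark = [p for p in pixels if luminance(p) < mean_luma]
    if not dark:          # uniform skin patch: reuse the single tone
        dark = light

    def avg(group):
        n = len(group)
        return tuple(round(sum(c[i] for c in group) / n) for i in range(3))

    return avg(light), avg(dark)
```

The two averages could then be written out as a small texture or color ramp for the Maya character model.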
For my mocap project I wanted to do a study of the nCloth feature in Maya used with motion, as well as get a basic grasp of the capabilities of scripting. In both aims I think I was quite successful. Each gif below is taken from a separate playblast (screencast), all of which can be downloaded here; they chronicle the process of getting the result above.
To start, I knew I wanted some fairly clean mocap data, since capturing it myself would come with its own set of challenges. Mixamo‘s animation library is pretty extensive, and setup with Maya takes practically no time (the auto-rig feature is simple, easy and, most importantly, free), so I set up a simple bellydancing animation and looked at the character’s skeleton. The first script (2nd picture on the left) was basically a test that iterated through the skeleton and parented an object at each joint’s x and y coordinates. If you don’t want any joints in the chain to have an object parented to them (such as the fingers, which were not very crucial in this particular animation), it’s easy enough to unparent them from the Mixamo skeleton and place them in a separate group.
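Inside Maya the actual script walks the skeleton with maya.cmds, but the finger-skipping logic can be sketched on its own in plain Python. The keyword list below is an assumption based on Mixamo’s usual joint naming (e.g. mixamorig:LeftHandIndex1), and the function name is mine:

```python
def drop_finger_joints(joint_names):
    """Filter finger joints out of a Mixamo-style skeleton joint list.

    Mixamo names hand joints like 'mixamorig:LeftHandIndex1'; any joint
    whose name contains a finger keyword is skipped so no object gets
    parented to it.
    """
    fingers = ("Thumb", "Index", "Middle", "Ring", "Pinky")
    return [j for j in joint_names if not any(f in j for f in fingers)]
```

In the Maya version, the surviving names would then be looped over to create and parent a sphere at each joint.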
My second script did essentially the same as the first, but with a polyPlane instead (pictured bottom left). These would become nCloth once the feature was applied.
The most time-intensive part of the project was experimenting with the nCloth feature, which I knew to be pretty finicky to work with; keeping cloth simulations from glitching and flying off in unexpected directions takes time. Tutorials are any Maya user’s best friend, so I found a quick but helpful one that uses a transform constraint to keep the cloth moving with the dancing form. My third script produced the gifs shown below; it essentially puts each step of the tutorial’s instructions into code form.
Finally, my last script loops the third script to create the final product shown below (minus the shading material). I ran the first one to create and parent spheres at every joint except the fingers, then ran the second one to create a plane at each joint as well. The last script iterates through each of those spheres and planes, assigns them a collider and nCloth respectively, and then applies a transform constraint to the pair so the cloth follows the parented spheres. If you want to run the script more than once, or on different objects, the iteration number must be updated accordingly, since when Maya creates nCloth it names it “polySurface” followed by the next free number in the outliner.
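That renaming caveat can be captured in a small helper. This standalone sketch (my own, not part of the original scripts) scans outliner names for Maya’s auto-generated polySurfaceN pattern and reports where the next run’s iteration should start:

```python
import re

def next_nCloth_index(outliner_names):
    """Return the number Maya will append to the next auto-named 'polySurface'.

    Maya picks the next free integer ('polySurface1', 'polySurface2', ...),
    so a rerun of the script must begin iterating from this value.
    """
    taken = [int(m.group(1)) for m in
             (re.fullmatch(r"polySurface(\d+)", n) for n in outliner_names)
             if m]
    return max(taken, default=0) + 1
```

Calling this at the top of the script would replace the manual step of updating the iteration number by hand.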
From this project, I learned that scripting isn’t that hard! Essentially all you are doing is translating into code every action you would otherwise do manually. Commands can easily be looked up, and even someone with limited knowledge of Python can pick them up quickly. There’s also a reference describing every command and its flags. You can even call the maya.mel.eval function, which directly executes a MEL command. Scripting made a project that would’ve been possible yet painstaking to do manually fairly quick and simple.
Words to Live By takes fairly well-known quotes from poets and writers and mashes them with lines from current-day rappers. Below are some excerpts:
Process & Code
Python Program – pulls lyrics for given artists from the LyricsNMusic API and saves them to .txt files using Unirest.
Processing Code – reads from .txt files produced above and sews together quotes + names.
Conceptually, my aim was to blend two different vernaculars into one text to create an odd and unexpected twist on phrases, sayings, and lines many people recognize. To start, I sifted through the various song-lyrics APIs available to me and ended up using the “LyricsNMusic” API from marketplace.mashape.com, which allows search inputs ranging from artist and song name to keywords and lyric excerpts. The biggest issue I ran into with this project was simply working with Java; I knew most of the manipulation I would be doing would be in rita.js, but attempting to download every installer, package, and library required to even begin using this API (and then over-Google every error message I received) became very taxing on both me and my laptop. So I opted for Python.
I first ran my Python program, which produces lyric .txt files for whatever artist name is put in, on an array of about 37 rappers I could come up with. Then, in Processing, I iterated through those documents as well as another “50 Famous Quotes from Poets” .txt file and sewed different lines together through a common word like “in”, “and”, or “woman”. The output wasn’t as seamless and consistent as I originally intended, but it worked surprisingly similarly to a more efficient maneuver I would’ve used in rita.js. All in all, sometimes the less-than-desirable method still gets you the result you are aiming for.
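The sewing step can be sketched as a small Python function. The name and the exact splicing rule are my own guess at the approach described: split both lines at the first pivot word they share, then join the quote’s head to the lyric’s tail:

```python
def sew(quote, lyric, pivots=("in", "and", "woman")):
    """Join the start of a quote to the end of a lyric at a shared word.

    Finds the first pivot word appearing in both lines and splices
    quote-before-pivot + pivot + lyric-after-pivot. Returns None when
    no pivot is shared. (Pivot list mirrors the write-up's examples.)
    """
    q_words, l_words = quote.split(), lyric.split()
    for pivot in pivots:
        if pivot in q_words and pivot in l_words:
            head = q_words[:q_words.index(pivot)]
            tail = l_words[l_words.index(pivot) + 1:]
            return " ".join(head + [pivot] + tail)
    return None
```

For example, sewing “hope is the thing with feathers in the soul” onto “I was born in the struggle” at the pivot “in” yields “hope is the thing with feathers in the struggle”.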
Here’s a video of the professor flipping through my book:
I had the pleasure of speaking with this wonderful woman, so I thought I would do my LookingOutwards05 on her. She is an artist and researcher from Brooklyn, NY, part of a duo called Candyfloss, and has worked on many projects within the realm of interactive video games, virtual reality simulations, and digital exploration.
At the VR Salon she facilitated a 3D drawing experience using an Oculus headset and two game controllers. Users were able to bring their otherwise 2D creations to life, changing the brush and color in real time and making marks in what looked like real space. What struck me about her piece, in comparison to the others, was the heavy attention paid to the quality of the graphics: the environment was convincing on its own, and the drawing technology was mesmerizing. One issue I saw detracting from the experience was the cord, but otherwise the entire setup was pretty flawless.
One collaborative project that really stuck with me is the Iyapo Repository, a library/collection of physical and digital artifacts “created to reaffirm the future of peoples of African descent.” The pieces bring to life artifacts dealing with past, present, and future cultural endeavors of the African-American and African diasporic community. The name “Iyapo” comes from a character in renowned sci-fi novelist Octavia Butler’s Lilith’s Brood, and each piece addresses concepts of Afrofuturism from strikingly different yet related perspectives. The library tackles topics ranging from the lack of diversity in science fiction and futurist media to the crisis of documenting and eternalizing African-American culture and experiences.
Asega also participated in an event honoring Kara Walker’s A Subtlety, in an attempt to amplify Walker’s message of heavy cultural significance as a collective experience. She was (is?) part of a non-profit dedicated to connecting digital artists just entering the New Media arts scene. She does a really incredible job of blending new media art and technology with her ideological and cultural identity.
Overall, I think sticking with the simpler concept of a clock would have been a lot easier to execute if I had sat with it a little longer. I bit off more than I could chew and ended up jumbling the concept in the process. In general, I think taking the process step by step, and not aiming high before I’ve set the foundations, would benefit me.
For my plotter, my goal was to make a Mandala generator. As a disclaimer, I am not Buddhist or Hindu; however, I’ve been intrigued by and reading about Mandalas for a couple of years now, and I am very interested in their form. Every Mandala has a sort of uniqueness about it, but they tend toward the same geometrical shapes, composition (symmetry), and patterns. I wanted to explore the idea of computationally generating a highly spiritual yet mass-producible symbol.
I started by choosing shapes and patterns I see very often in Mandalas. These same shapes tend to occur in Henna designs as well, some more complex than others. I settled on a few of the simpler ones to code: circles (of course), polygons, lotus petals (the teardrop curve), the bumpy mooncake-shaped flower (most prevalent in the third image pictured above), and the dollop-shaped curve (most prevalent in the middle image). Mandalas are also highly symmetrical, so unit circles came back from high-school trigonometry to haunt me. The math also cut into the time I had to express more of the complexities I initially wanted, such as complex geometry, shading, and patterns.
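The unit-circle work boils down to placing n copies of a motif at equal angles around a center; a minimal sketch of that placement step (function name mine) looks like this:

```python
import math

def ring_points(n, radius, cx=0.0, cy=0.0):
    """Place n rotationally symmetric points on a circle.

    Returns (x, y) coordinates spaced 2*pi/n apart around (cx, cy),
    the backbone of each concentric ring of motifs in a mandala.
    """
    step = 2 * math.pi / n
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step)) for i in range(n)]
```

A generator can then draw one petal, polygon, or dollop at each returned point (rotating it to face outward), and vary n and radius per ring for the layered look.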
Although the computer-generated images took some time, the laser cutter took only about 14 minutes each to produce these. My lesson lay more in the difficulty of producing random patterns with a goal aesthetic than in the time it takes to cut them in real life. I guess you could say I’ve achieved a greater appreciation for the art of designing Mandalas, as well as Henna; it’s not as easy as I thought to compute their visual motifs.
Toshio Iwai is known as the Peter Pan of digital culture. His interests range from video and film to animation (the zoetrope) to what he is now considered: an interactive and computer artist. Iwai has brought a variety of projects to fruition yet maintains a distinct style and theme in his work. Although his pieces fall on a broad spectrum of topics, many of them deal with audiovision and interactivity.
Two of Iwai’s works I wanted to touch on, Resonance of 4 (1994) and Piano – As Image Media (1995), fall into this category of interactive audiovisual art.
My gif is much different from what I originally intended. I really wanted to work in 3D and set out to use Maya to create gifs similar to those of Julian Glander. His work is very playful and stylized but also very simple. However, it is also very object- and character-oriented, aspects of animation that are hard to execute only through code. From there I began looking at the creation of fractals and recursive designs using Python in Maya. All in all, the learning curve kept me from creating a gif I would be proud of. Instead I decided to experiment with the 3D primitives of p5.js and tried creating a neutron-looking gif with those tools.
I wanted to explore artist Harvey Moon for my project simply because I was most interested in his drawing machines, namely the one controlled by an insect (linked below). After writing about the problem of authenticity addressed in Lanier’s article, I think this piece expands what I considered the possibilities of robot art and further challenges that authenticity. The tool is man-made but animal-controlled: what other living or nonliving elements can an artist use as a means to instruct a machine? How does the person, animal, or force controlling the robot influence the meaning, impact, and authenticity of the resulting drawing? This piece has a very good balance of randomization and instruction in terms of its effective complexity; the insect’s movements are unpredictable and random to us, yet the machine and the resulting drawings have a sort of pattern to them. What I additionally find great about this piece is that the artist has minimal involvement in the resulting process. Nothing innately personalized or stylized comes out of it other than the drawing machine itself, which has now been overruled by the creative prowess of the fly it follows. The whole project challenges and pushes the boundary of what can be considered artwork.
Question 1A. Effective complexity is, in this context, a measure of complexity in generative art. Galanter describes generative art as rejecting simple description and easy prediction, lying somewhere on a spectrum between highly ordered and highly disordered. The concept quantifies the balance of randomness versus instruction in a system. A good example of a system that lies somewhere in the middle of the spectrum is the stock market. Many factors influence it (overseas markets, general economic data), making it a controlled, instructed system; however, stock-market analysts work with prediction and random outside occurrences, so much of stock trading boils down to chance and randomness.
Question 1B. The biggest issue I take with generative art is its expansion and challenging of the definition of art as a whole, which, to be fair, is what most art should do. Art seemed for the longest time to be limited to human-made materials, objects, and ideas come to fruition. Yet generative art, as this article touches on, relies on a human-made or human-controlled robot or mechanism following a set of instructions in order to create the art itself; the article addresses this under the section The Problem of Authenticity. At the same time, I think many generative artworks, such as Harvey Moon’s drawing machines, force the audience to ponder topics and consider possibilities they may not have before, which are characteristics of a successful art piece.
Although I did not learn about this artist until just recently, I would like to explore the boundary between game and interactive computational art. Mario Von Rickenbach is a designer, artist, and programmer who seems to blur that line for me. One way I would argue he does this is through his loose interpretation of a game: most of his big projects have no enemies or obstacles to overcome, and some do not even have real objectives. Many of his games, like Plug & Play and Mirage, are more exploratory. In Plug & Play, he took a very surreal short film and a very abstract concept and turned them into a computer game. As evidenced by the Let’s Play below, the game is not clear about its objective or even about how to progress. The video above also explains the sorts of mechanisms used in Unity to create the physics behind some of the gameplay.
Several other examples of projects that blur this line were part of the Bit Bash Festival, a Chicago festival celebrating independent and alternative games and designers. Many of the projects seen in the video below can in fact be called games, but they also exploit bugs in computer graphics to create a specific “vaporwave” look. Games like this, again, can be seen as surreal, with much more intention placed on the graphics than on the inventiveness of the gameplay itself.
I ended up instead creating a clock of zodiac constellations. The current code is simple but somewhat tedious to implement. Some features currently left out (which I intend to add) include the rest of the constellations (coding two took a bit of time and careful measurement) and a transition from one zodiac to the next. Zodiac signs change monthly, but I sped the month variable up to seconds to try out ways of smoothly transforming from one sign to the next, to no avail. I still intend to implement both, so it is a work in progress to say the least.
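For the sign-to-sign transition I was after, one simple approach (a sketch of my own, not what the current code does) is to linearly interpolate each star’s position between two constellations as the sped-up “month” elapses:

```python
def lerp_constellation(points_a, points_b, t):
    """Blend one constellation's star positions toward the next.

    points_a, points_b: equal-length lists of (x, y) star coordinates;
    t in [0, 1] is how far through the (sped-up) month we are.
    Each star slides straight toward its counterpart in the next sign.
    """
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(points_a, points_b)]
```

This assumes each constellation is stored with the same number of points (padding the shorter list with duplicates would handle mismatches); the draw loop would just compute t from the clock and redraw the blended points each frame.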
Kyle McDonald is a programmer and artist from Brooklyn, NY, a frequent collaborator with fellow creative Lauren McCarthy, and a resident at the STUDIO for Creative Inquiry at Carnegie Mellon. His work most often challenges, subverts, and plays with new technologies and their existing conventions; much of it provides a fresh take on online communication and social media, surveillance, and virtual reality. Kyle also describes himself as a public, process-oriented artist whose work often explores glitches and reverses anything from personal identity to work habits.

What I personally most identify with in his work is that the pieces on his website and presented during his lecture are very consistent yet very broad in their aims. Some of his projects can be categorized as social experiments or commentary on the way technology influences our communication styles (i.e. Going Public, us+, Face Substitution Research), while many others are experimentation with new and developing technologies such as artificial intelligence and face/body tracking (DIY 3d scanning, Nandhopper, Shadowplay).

I also found interesting his anecdote about how he came to label himself an “artist”; his entire portfolio seems very technically based and experimental with different areas of new media, but he describes the interactive aspect of his projects as what places them in the category of art rather than of simply technical presentations. He does not speak much about what type of social commentary or discussion his pieces may spark, but rather objectively details how they work and how the idea came about. I generally admire Kyle not only for the versatility and wide variety of the projects he works on, but also for the humbleness and detachment with which he speaks about his work.
From my understanding, the writer of this article defines first-word art as novel, groundbreaking, and the beginning of a conversation. For example, Jackson Pollock could be considered a first-word artist for pioneering the abstract expressionist movement. Last-word art, on the other hand, provides a new perspective on a conversation already in existence; it challenges, explores, and builds off of conventions already set in place. Duchamp and participants in the Dada art movement could be considered last-word artists, as they challenged the idea of just what could be considered artwork and the current state of the art sphere.
I believe almost all novel technological inventions play off of old ideas; in a way they are both first- and last-word art. For example, it is often asked why Apple is such a renowned company. They took an invention already in existence (the desktop computer) and incorporated the idea of play and fun into it. They took a functional item, adjusted the design and the interface, made it more user-friendly, and created a more fluid system for use. Rather than attempting to come up with an entirely new tool (which they later did, essentially pioneering the smartphone and the tablet), they took an already existing item and challenged the rules and conventions that had been set in place for it.

Remaining on the topic of the Macintosh, MacBook, and iPhone: these technologies shaped culture socially, economically, and politically. They created a new common experience among an entire generation, an experience occasionally lost on older generations. They coined a new commodity whose functionality reels the buyer into continually purchasing the “newer model.” They created new forms of activism and news reception among the public. At the same time, the iPhone and its features very much cater to its users’ needs, so as cultural staples shift and society evolves, the technology keeps up with what is “new and upcoming.” I would consider myself most interested in technology that reworks and revisits rules and conventions already set in place: “last word art.” That in itself keeps it “in with the times” and keeps the conversation surrounding it relevant.