
Clacker – Looking Outwards 05

Laura Juo-Hsin Chen is a creative technologist and self-proclaimed “doodler” from Taipei, Taiwan. She draws on her background in traditional 3D animation, and on her desire to celebrate the mundane and the weirdness of human interaction, to make captivating virtual reality experiences. My favorite of her projects is “Poop VR,” a communal virtual reality pooping experience. The aesthetic quality of the experience is very high, combining the loveliness of hand-drawn animation with the weirdness and absurdity of virtual reality as a medium. The idea of the project is that while pooping, you can check into a shared virtual space and see avatars of other individuals who are also logged in, each pooping in their respective location. You can interact with other users by saying hello, walking around, and shooting poop at each other. The experience is entirely browser-based, which I find very interesting: it suggests that truly interactive, shared experiences are possible even with today’s consumer-grade virtual reality.

See more of her work here.

hizlik-lookingoutward05

VRDoodler by Cindy Bishop

VRDoodler is a comprehensive in-browser 3D drawing tool that lets you draw and explore your drawings in 3D, with or without virtual reality gear. It is an interesting, unique concept that would not have been possible before our generation’s technologies. Unfortunately, I could not stay at her Weird Reality lecture very long because of volunteering commitments, but from the short time I was present I took away a few things. One is that the tool can be frustrating, as it has a bit of a learning curve: you can often find yourself drawing at multiple depths without realizing it until you spin the camera. One critique is that it is a bit iffy on a phone, laggy and not fluid. It is best on a computer, with a drawing tablet.

You can visit the gallery to see some works created by users: http://vrdoodler.com/#gallery 

Aliot-LookingOutwards05

Rachel Rossin

http://rossin.co/

is a New York-based painter and programmer. Her exhibit “Lossy” combined oil-on-canvas painting (a traditional technique) with digital capture: Rossin would scan her surroundings and objects, upload the scans to Unity for editing, and then paint the resulting view, producing a fragmented, “lossy” representation of the original space. I love this work and her subject matter because I am exploring a very similar realm in my own art. I am interested in memory and its lossy, fragmented attributes, as well as in the relationship between the real, tangible world and the virtual. Her work spans the gap between the two in a very aesthetically pleasing way (even if it is very straightforward and simple).


kadoin-lookingoutwards05

I’ve watched all of Dan Shiffman’s videos on Perlin noise, so it was really cool to hear Ken Perlin speak at Weird Reality. I didn’t know much about his work outside of the noise function, so seeing more of the things he’s been doing was awesome. Something I hadn’t really considered until the Google Daydream talk the day before was VR with multiple people who could interact with each other, and his demo of tracking phone-based VR headsets with ping-pong balls was really cool. Poop VR was the only demo at the VR Salon that incorporated interaction with other people, but it was limited in action and movement because you’re confined to a toilet and just a Google Cardboard. Ken Perlin’s examples, in which multiple people wandered around a room and interacted with each other and with other elements in virtual space, did a good job of exploring new kinds of interactions with more advanced headsets. I also think it’s interesting that he said no one ran into each other while wearing the headsets; I wouldn’t have thought that, with all the perspective distortions used to make the virtual space seem bigger, it would still be accurate enough to mirror real space. One thing it made me think of, which no one really brought up during the conference, was getting the same sorts of interactions with people who aren’t in the same room. A VR space that allows people across the globe to enter and interact would be pretty amazing, with a lot of potential.

tigop-Looking Outwards 05

Being at Weird Reality really allowed me to see both the possibilities and the limitations of virtual reality. A piece I got to spend a lot of time with was Jeremy Bailey’s Preterna, in which the viewer becomes part of a commercial and experiences “a stage before pregnancy”: the stage in which an individual is contemplating pregnancy. In the background, you can hear Jeremy and his wife, Kristen, bickering over whether or not they want to have a child. Jeremy included 3D scans of his wife in the piece’s environment, and the viewer is able to invade one of these scans, taking over the body and manipulating severed arms that seem to float in the air (achieved with a motion sensor mounted on the Oculus Rift).

The way the viewer steps into the 3D scan raises the question of whether this individual can truly empathize with someone who is pregnant, given that they cannot experience all the other physical symptoms associated with pregnancy: shortness of breath, fatigue, morning sickness (though VR may very well make the nausea accessible), the frequent need to urinate, and so on. How do we make all these other sensory experiences accessible in an environment created through VR? Talking to Jeremy, it became clear that the viewer is not truly supposed to empathize with someone who is pregnant; they are in fact playing a parasitic role in the invasion of this pregnant woman’s body, participating in a more exploitative role within the commercial.

I spoke to a pregnant woman who tried on the Rift, and she said something that stuck with me: “When you step into someone’s shoes, you only take their shoes.” You are never going to fully experience what it is like to be a particular individual, because as much as you might know about them, you will never truly know what it felt like to come out of the circumstances they came out of and to feel the things they feel. This idea of exploiting the pregnant female body while lacking a connection to who this individual is offers a very cynical view of commercialism, and possibly even a critique of commercialism within the health sector. I had a lot of fun working that night.

cambu-LookingOutwards05

Of all the games I played at the VR Salon, none matched the craft and quality of SuperHyperCube. Of course, this isn’t a knock against the lesser-funded, more artisanal efforts; some of them were thoroughly interesting (and thought-provoking) experiences.

But I do think it’s important to recognize the difference between VR experiences that are fun for a five-minute demo and ones I could really imagine spending hours in. SuperHyperCube certainly falls into the latter camp. After playing for only a few moments, I was enamored with the slick graphics and slowly building complexity, yet it had enough similarities to existing game metaphors that it wasn’t overwhelming. Also, the game was simply really fun; never underestimate fun!

I really hope we continue to see VR experiences that focus on being both fun and well-crafted; not everything has to be an artist statement.

takos-lookingoutwards5


Robert Yang – Intimate, Infinite
For my Looking Outwards I looked into Intimate, Infinite by Robert Yang, a 3D first-person game based on the short story “The Garden of Forking Paths” by Jorge Luis Borges. I read the story after I played the game, and it made the game more interesting, because the narrative is abstract and I wasn’t able to fully understand the work the first time through. The game was interesting and aesthetically pleasing, but its content came across as confusing. The story is about a Chinese professor of English who is a spy for the Germans, and who is found out and murdered. I think the game expects the player to be more knowledgeable about the source story, but this was not made clear. Unless, that is, the game is intended to be abstract, which it may be: it lends itself well to abstraction through its lagged video feed, its different points of view, its changing narrator, and its discussion of different lives and different timelines.


Jaqaur – LookingOutwards05

https://www.metavision.com/

I am writing about the presentation at Weird Reality that had the greatest impact on me: Meta’s. Meta is a company that works in augmented reality, and they strive to create an interface that is natural, without arbitrary gestures. For example, one could select something by picking it up, rather than by pointing one’s face at it and awkwardly pinching the air (as some other AR systems require). What really blew my mind was how fast VR/AR technology is advancing in ways I didn’t know about. For example, Meta currently has a working (if somewhat rough and not totally safe) way to let users feel virtual items just by touching them with their bare hands. And they (“they” not necessarily meaning Meta, but rather the ambiguous VR/AR scientists who develop this stuff) have wearable technology that can read a person’s brain waves and determine from that alone what the person is looking at. Similarly, they (the same ambiguous “they”) can transmit an image (or at least a few colors) directly to a person’s brain, and it will be as if that person is seeing it with his or her own eyes. Like, what?! That’s crazy! And once it develops further, it could have huge applications in medicine and psychology, not just entertainment. The presenter said that by 2030, he thinks we will have transcended AR goggles to the point that most people just wear devices that put the AR into their brains. That would be a huge advancement over the AR goggles they have now, which are clunky and a bit awkward. All in all, Weird Reality was a great experience, but Meta’s presentation in particular reminded me just how FREAKING AWESOME this technology could be.

Check out a video of theirs (and this is from three years ago):

Krawleb-LookingOutwards05

The presentation that impressed me most (of those I attended, which were unfortunately fewer than I would have liked) was Stefan Welker’s talk on Google’s Daydream Labs. I appreciated the way they approach their timelines, prototyping from the ground up on a weekly basis and valuing diversity of techniques over refining one larger project. I think this approach to VR development (an incredibly small team on a quick turnaround) is more honest to a medium that is arguably still in its nascency. Unlike many projects I’ve seen, which appear to be little more than ‘immersive’ ports of screen-based interactions, their prototypes focused on testing interactions unique to room-scale VR, finding successes and failures in both techniques and social contexts. As someone who is as interested in the interaction methods and context of VR as in its content, Welker’s role sounds immensely exciting: working broadly to explore new types of interaction for what many (myself included) believe will evolve into an increasingly prominent medium.

He also mentioned that they frequently publish blog posts summarizing their findings here, working to build a community of best practices and patternized interactions, which is exactly the sort of early-stage interaction design that VR needs right now.

Antar-lookingoutwards05

The Avocado Experience


During the Weird Reality show, I had the great pleasure of working the untitled avocado virtual reality experience by Scott Andrew and the Institute for New Feeling. During my five-hour shift I got to explore the piece fairly deeply, and had the great opportunity of seeing how differently people interpreted and reacted to the work.

The experience began in a warehouse with a few bins of interesting objects around the perimeter. If the user picked up an object, a message would likely say that the object was out of stock. But if the user picked up an avocado, they were transported to a fantasy world. There, the user embarked on a truck journey in which their movement was much more limited, and if they let go of the avocado, they were brought back to the warehouse. Each time the user returned to the fantasy world, they were brought back to the beginning of the truck journey. Throughout the journey, the user could tap an “add to cart” button with their free hand. The user got no visual feedback about whether they had successfully added an avocado to the cart, but in the browser on the computer it was possible to see the number of avocados in the cart, and after the experience was over, the user could actually purchase the avocados on Amazon. On the truck journey, the user drove around a long curve through an avocado farm and could see Aztec pyramids on the horizon. Billboards throughout the farm carried statements about drought, or photos of models. At the end of the curve, the user could see the truck’s destination: a large house with its fourth wall missing, allowing the truck to drive right into the living room. A TV on the far wall played a sports game, flanked by a portrait of Vladimir Putin on one side and a portrait of a German shepherd on the other. Once at the house, the user could hear George W. Bush and his wife speaking, and after a moment there, the user was transported back to the warehouse.

For some users, the truck journey was long and a bit of a letdown. After being transported back to the warehouse, they would take off the Vive and ask, “That’s it?”, while others would laugh in delight. Some users never got to the house because they were having too much fun picking up, throwing, juggling, and playing with the avocados. Others rarely picked up an avocado at all, and instead enjoyed the challenge of trying to grab things that were out of reach or inaccessible.

At the end of the night (2am), I was able to fully explore the experience for myself, and I took the luxury of reading the rather long artist statement that hung above the avocado bin in the virtual warehouse. This is where I learned fascinating tidbits: “avocado” comes from the Nahuatl word “ahuacatl,” meaning “testicle,” which helped make the fruit sound more exotic than its original name, “alligator pear.” The statement followed the fruit’s rise to popularity through the late 20th century. A lot of people put a lot of time and money into advertising and branding the avocado; the fruit was supposed to be a symbol of the California dream, the fruit of the healthy and happy. On the truck journey, the user sees billboards that recount key moments in the fruit’s history, including a “Mrs. Ripe” promotional contest whose winner appeared on Baywatch, as well as the Tom Selleck scandal. The Bushes’ voices are present because of the pretzel-choking incident in 2002. In essence, this long, anticlimactic truck ride is a representation of the capitalistic efforts behind the rise of a fruit named after male genitals.

Catlu – LookingOutwards05

Jeremy Bailey – Preterna (AR Pregnancy)


One of the projects I found really interesting at the VR Salon during Weird Reality was Jeremy Bailey’s pregnancy simulator, Preterna. When I put the VR headset on, I was transported to a calm plain of grasses and wildflowers. As I looked down, I saw the body of a pregnant woman. I thought the premise and execution of this project were really smart: by placing the mesh of a pregnant woman at a certain position and having us stand at that same position, it really did feel natural to look down and see a body that could be ours. I appreciated that we could see funky versions of our arms and hands without having to hold a remote or controller; it made the feeling of holding my hands to my “pregnant belly” more real. Although I couldn’t actually feel the belly, I did get an odd sense of happiness and contentment, probably because of general associations between motherhood and happiness, and also because of the calm environment. Many people have wondered what it would be like to step into the body of someone of a different gender, and I think this is a great way for men to see at least a little of what it’s like to be pregnant. I thought it was very smart and thought-provoking.

Lumar-LookingOutwards05

A lot of my ideation and thinking for our FaceOSC project was affected by, and inspired by, what I saw (or didn’t see) at Weird Reality’s VR Salon. I’ll admit I was a tad disappointed in some of the works; it felt as if the medium wasn’t being used to its fullest extent. VR is a complete 3D immersive environment, but some works were such… passive viewing experiences that it was hard to say whether the content was augmented by its medium at all, or whether it would have been equally fine as a 2D or regular 3D movie. I wanted to be surprised within a modular environment; I wanted to turn around and see something new. If the piece was a building, I wanted to know what made the VR experience better than simply walking through the actual building.

That being said, all the works were still wonderful to see! I am very grateful to have gotten the chance to go!

I got Unity and tried some photogrammetry of my own this week!


But anyway! Mars Wong. Dang. He’s only a freshman! I still haven’t gotten over the fact that he started in the VR/game-design scene as a 9-year-old, and later landed a Fjord internship as a 9th grader. What on earth was I doing at that age? …Typing very slowly, that’s what.

http://m4r5w0n6.com/games#/interrogation/

This work I find especially interesting because, after fiddling around with photogrammetry and Unity myself, the piece doesn’t seem particularly arduous to make. So why do I think it’s worth mentioning? It’s a one-day project that recreates the environment and feel of an interrogation room, and with fairly simple techniques but really clever use of the available tools, the deliverable definitely achieves its objectives. I bring this up because there’s a difference between what can be done and what should be done: some of the pieces in the VR Salon felt incredibly computationally complex, but that complexity did not always translate proportionally into more developed interaction or artistic benefit.

I liken it to traditional artists who create hyper-photorealistic portraits: the extra effort put into the technical execution of the portrait doesn’t generate a net benefit to the piece as art. Really, why bother with photorealism when a camera is so much faster? In the same way, hyperrealism is using technology just because one “can,” without considering whether one should.

The pregnancy VR piece was my favorite. There’s an aspect of the unexpected and the unnerving in it that uses the VR medium well to achieve its effect. I wish I had gotten a chance to experience Mars’ archery game; the full-body-immersion pieces, in which the user could be an active participant within the environment, were always the best.

http://m4r5w0n6.com/games#/archery/

Xastol – LookingOutwards05

Among my favorite projects was Poop VR, created by Laura Juo-Hsin Chen. The project uses a Google Cardboard, a phone, and a seat (a toilet). Using the Google Cardboard, users enter, on their own phones, an online VR world she has created. The VR world is rather lighthearted, with encouraging “poops” and psychedelic patterns, and serves as “motivation” when the user finds themselves in a rather “congested” situation. Additionally, the work allows other individuals partaking in this daily task to connect with one another and, as a result, encourage each other. Personally, I’ve enjoyed the process of defecation a lot more with her project.

In terms of her approach, I appreciate Laura’s use of low-tech, open-source technologies to create charming work that attracts all audiences. The user doesn’t have to mentally prepare themselves to invest in her work, because her playful style handles that already.

Website: http://www.jhclaura.com/

Keali-LookingOutwards05

Created by Milica Zec and Winslow Turner Porter III, Giant is a virtual reality experience detailing the story of a family amidst an active war zone. Inspired by the true events of Zec’s family in war-torn Europe, it envisions two parents struggling to distract their daughter by inventing a fantastical tale: that the belligerence and commotion above ground are the mere antics of a giant. The audience is transported into the makeshift basement shelter in which the characters hide, becoming fully immersed in a dark and ominous atmosphere, complete with sound effects and physical motion, as if living vicariously through someone in that virtual reality.
As someone who has had minimal exposure to and personal experience with VR, donning Giant’s headgear and noise-cancelling headphones was an indescribable and very intimate experience. Giant was impressive from both an artistic and a technical viewpoint, boasting expert emotional storytelling and seamless technological execution with heavy attention to detail. This work is the first VR I’ve experienced to have a fully immersive, 360-degree view of its fictional realm; it was invigorating, yet also unnerving, that I could turn my head to view the full surroundings of a virtual room from within the piece: in this case, I could omnisciently scan the basement in which the family resided.
Giant was a subtle, powerful experience, and explored a concept similarly demonstrated by the film Life Is Beautiful: masking darker truths with lighthearted fantasies for the sake of the innocent. It’s an entirely bittersweet intention, especially when seen from a third-party point of view.


Drewch – LookingOutwards05

Nitzu (Nitzan Bartov) said something very revelatory (at least to me) during the speed presentations, something along the lines of: “In the future, everybody is going to be playing games. If you aren’t playing them right now, it’s because the games you want to play haven’t been made yet. What kind of game do you want to play?”

For a while now, I’ve settled for the idea that not all people play, want to play, or appreciate games, and that that’s okay. But now that I think about it, I only thought that way because games at the time were mainly formalist: you really only ever had three flavors on the market: competitive, story-based, and arcade games. Games are reaching more people now than ever before, not only because of technology and accessibility, but also because new kinds of games are being made. “What kind of game do you want to play?”

Nitzu wanted to play a soap-opera VR game, so she made The Artificial and the Intelligent. It’s hilarious, thought-provoking, and also, genuinely, a soap opera. I wish I could find videos of another of her games, Horizon, but what The Artificial and the Intelligent proved to me is that games are for everyone; they just don’t know it yet.

The Artificial and the Intelligent

arialy-lookingoutwards05

I found Jeremy Bailey to be the most memorable personality at Weird Reality. I heard his presentation, experienced his AR Pregnancy Simulator at the VR Salon, and talked to him a little in person. It was fun to talk with him and then hear his commentary within the Pregnancy Simulator. The piece itself is pretty relaxing, with a stream of commentary, calm background audio (birds chirping, if I remember correctly), and its setting in a field. But watching other people wear the VR headset and rub their imaginary bellies at the VR Salon was probably my favorite part of the piece. People’s movements almost seemed choreographed: looking around, then at their hands, then rubbing their bellies. Being able to manipulate people’s movements in an open environment is both very entertaining and a strange concept.


Kelc – LookingOutwards05

I had the pleasure of speaking with this wonderful woman, so I thought I would do my LookingOutwards05 on

Salome Asega


She is an artist and researcher from Brooklyn, NY, part of a duo called Candyfloss, and has worked on many projects within the realms of interactive video games, virtual reality simulation, and digital exploration.

At the VR Salon she facilitated a 3D drawing experience using an Oculus headset and two game controllers. Users were able to bring their otherwise 2D creations to life, changing the brush and color in real time and making marks in what looked like real space. What struck me about her piece, in comparison to the others, was the heavy attention paid to the quality of the graphics: the environment itself was convincing on its own, and the drawing technology was mesmerizing. One issue detracting from the experience was the cord, but otherwise the entire setup was pretty flawless.

https://www.instagram.com/p/BLUK44GAIkm/?taken-by=suhlomay

Iyapo Repository

http://iyaporepository.tumblr.com/

One collaborative project that really stuck with me is the Iyapo Repository, a library and collection of physical and digital artifacts “created to reaffirm the future of peoples of African descent.” The pieces bring to life artifacts dealing with the past, present, and future cultural endeavors of the African-American and African-diasporic community. The character “Iyapo” comes from renowned sci-fi novelist Octavia Butler’s Little Blood, and each piece addresses concepts of Afrofuturism from strikingly different yet related perspectives. The library tackles topics ranging from the lack of diversity in science-fiction and futurist media to the crisis of documenting and eternalizing African-American culture and experiences.

Asega also participated in an event honoring Kara Walker’s A Subtlety, in an attempt to amplify Walker’s message of heavy cultural significance as a collective experience. She was (is?) part of a non-profit dedicated to connecting digital artists just entering the new-media arts scene. She does a really incredible job of blending new-media art and technology with her ideological and cultural identity.

Guodu-LookingOutwards05


Lessons Learned from Prototyping VR Apps + Weird Reality Conference 

Stefan Welker (GoogleVR / Daydream Lab)

VR is something I’m not too knowledgeable about (yet), and something I’m still skeptical of. The Weird Reality conference was my first exposure to and experience with this technology. I’m mostly concerned about the potential motion sickness one can get from staying in a VR environment, and about the consequences of becoming disconnected from the physical world. But this conference changed my perspective: I now see VR as a medium to further understand our natural world, to collaborate in interdisciplinary teams, and to help people experience or see things they normally cannot.

I was really intrigued by Stefan’s talk because of the parallels I saw between the way Google’s Daydream Labs approaches designing for VR and the design process I’ve been learning and applying in school. In design, we learn to feel comfortable with failure in order to improve: to iterate and test quickly to find the most appropriate solution to a problem. Stefan described their motto as “Explore everything. Fail fast. Learn fast.” It almost feels like they are in a rush to learn everything in order to make VR a more widely accepted and helpful tool. In the past year they’ve built two new app prototypes each week, and the successes and failures show in the handful of examples, out of many, that Stefan shared with us. Stefan even joked that their teams thought the pace wasn’t sustainable at first.

Lots of realizations, criteria-setting, challenges, and discussions arose from their experiments. For example:

  • users will test the limits of VR
  • without constraints in a multi-player setting, users may invade the privacy or personal space of other users
  • users can troll by not participating, or by not responding in a timely manner
  • ice breakers are important in a social VR setting, because without an initiation of some sort there is still social awkwardness
  • cloning and throwing objects is a lot of fun (I experienced the throwing aspect in the Institute for New Feeling’s Ditherer, where it was possible to throw avocados on the ground)
  • adding play and whimsy to VR is worthwhile simply because you can, and it’s fun


Even after listing just these few observations, I realize that, with the seemingly limitless explorations VR provides, understanding natural human behavior and psychology is integral to creating environments and situations that encourage positive behavior from users.

Ultimately (as cliché as this sounds), Stefan’s talk and the Weird Reality conference opened up a new world for me in terms of the possibilities and responsibilities that come with designing for VR or AR.

As Vi Hart says, VR is powerful; designers and developers have the ability to create anything in their imaginations, and users will have newfound capabilities to experience the sublime and fly, or maybe flap.

kander – LookingOutwards05

I was drawn to Martha Hipley’s work after viewing her project “Ur Cardboard Pet” in the VR Salon, which I found to be a tongue-in-cheek role reversal of men’s attitudes toward women (I think her description said it commented on the male gaze, but I don’t remember exactly).

Anyway, for this assignment I looked at “Wobble Wonder,” an immersive VR Segway-like experience that Hipley collaborated on with three other artists and engineers. The user stands on a platform and tilts their body forward and backward to move (like on a Segway). Fans are mounted near the user’s head, so if the user is moving “fast enough,” the fans simulate air resistance. The project uses an Oculus headset through which the user experiences the world, which was largely modeled by Hipley. That world has an expressive feeling and color scheme similar to Hipley’s other work; she often uses paint in combination with code (for example, the images in “Ur Cardboard Pet” were hand-painted).

I like this project because it appreciates what VR can actually do. The project is about VR, rather than simply using VR as a medium to display something that could have been shown on a flat screen; “Wobble Wonder” lets VR shape the conceptualization of the project. Furthermore, with its fans and movement, it goes beyond constraining the user’s world to the visual.

An onscreen rendering of what the user of “Wobble Wonder” would experience.

I couldn’t embed a video, but this page has a video about the project.


Deliverables 05 (D. 10/14)

Deliverables 05 are due Friday, October 14 at the beginning of class.

There are four parts:

  1. Looking Outwards 05: Weird Reality
  2. Check out each other’s Plot projects
  3. Reading: Faces and Software
  4. A Face-Controlled Software

1. Looking Outwards 05: Weird Reality

For this Looking Outwards, you’re asked to write about a project created by one of the 60+ speakers at the Weird Reality symposium, which features many emerging and established artists who work in augmented and virtual reality. Feel free to ask for recommendations.

The exception is that you may not write about anyone who currently works or studies at CMU (e.g. Golan, Angela, Ali, Larry, Claire, Alicia, Charlotte, etc.).

  • Please title your post as nickname-lookingoutwards05
  • Please categorize your post with LookingOutwards05

2. Check out each other’s Plot projects

Really: it’s worth it. You did an amazing job with your Plot projects, and the documentation you all made is, overall, at a very high level. Before our next meeting, I ask that you examine all of the Plot projects and their documentation, which you can find at this link. This will take about 15-20 minutes.

No written response or blog post is requested, but it would be great if you could identify (in your own mind, and/or in your notebook) one or two projects that you found particularly noteworthy. 


3. Readings

Please read the following two fun, lightweight articles about faces and software. No written response or blog post is requested.


4. A Face-Controlled Software


Humans are equipped with an exquisite sensitivity to faces. We easily recognize faces, and can detect very subtle shifts in facial expressions, often being able to discern the slightest change in mood and sincerity in ways that (for now) remain impossible for computers. From faces we also are readily able to identify family resemblances, or “strangers” in crowds, and we are transfixed by the ways in which the lines on a face can reveal a person’s life history.

The face is the most intimate, yet most public, of all our features. A significant portion of our brain is dedicated to processing and interpreting the faces of those around us. The face is one of the few visual patterns which, it is believed, is innately understood by newborn infants. Kyle McDonald writes:

“One of the most salient objects in our day-to-day life is the human face. Faces are so important that the impairment of our face-processing ability is seen as a disorder, called prosopagnosia, while unconsciously seeing faces where there are none is an almost universal kind of pareidolia.”

In this assignment, you are asked to create an interesting piece of face-controlled software, and you are provided with FaceOSC, a high-quality real-time face tracker, to make it with. I anticipate that most of you will create an avatar or parametric facial puppet controlled by signals from FaceOSC: some kind of animated portrait or character. But you could also make things like a game that you play with your face; an information visualization of your face gestures; a “face-responsive abstraction”; or some other composition or software that responds to your facial expressions.

Broadly speaking, your challenge is to create an interesting system which responds to real-time input from a high-quality facial analysis tool. The pedagogic purpose (and learning outcome) of this assignment is threefold:

  • To increase your fluency in the craft and conceptual application of computation to interactive form, through practice;
  • To familiarize you with OSC, the most widely used protocol for inter-process data communications in the media arts;
  • To familiarize you with the logistics of installing an extension library in Processing, a basic skill that significantly expands your tool-kit.

If you do elect to make a puppet or avatar:

  • Consider whether your puppet face is 2D or 3D.
    • 2D graphics are fine. But, just so you know:
    • Unlike p5.js, Processing can produce fast, decent-quality 3D graphics. You can find tutorials and other helpful information about this, such as in this tutorial, or here, and in many of Dan Shiffman’s videos. There are also many interesting libraries for Processing that allow you to play with 3D, such as loading 3D models or lofting 3D surfaces.
  • Give special consideration to controlling the shape of your puppet’s face parts, such as the curves of its nose, chin, ears, and jowls.
  • Consider characteristics like skin color, stubble, hairstyle, blemishes, inter-pupillary distance, facial asymmetry, cephalic index, prognathism, etc.
  • Consider adding functionality to your puppet’s face so that it responds to microphone input as well. You can use the new Processing audio library for this.
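For that last suggestion, here is a minimal illustrative sketch of reading microphone loudness with the Processing Sound library (my own example; the mapping constants are arbitrary and would need tuning for your puppet):

import processing.sound.*;

AudioIn   mic;
Amplitude amp;

void setup() {
  size(640, 480);
  mic = new AudioIn(this, 0);  // channel 0: the default microphone
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);              // analyze the microphone's loudness
}

void draw() {
  background(255);
  float loudness = amp.analyze();  // roughly 0.0 (silence) to 1.0 (loud)
  // map loudness onto a mouth height, clamped to a sensible range
  float mouthH = constrain(map(loudness, 0, 0.5, 5, 120), 5, 120);
  fill(200, 60, 60);
  noStroke();
  ellipse(width/2, height/2, 150, mouthH);  // a mouth that opens as you speak
}

In a real puppet you would combine a value like this with the FaceOSC data described below, rather than drawing a standalone ellipse.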

FaceOSC Assignment Specifications

This is the part where I kvetch at you if you’re missing something. Please read carefully and do all the things. If you’re having a problem, ask for help.

  • Sketch first!
  • Develop a program that responds to data from the FaceOSC face tracker.
    • Template code for Processing has been provided to you. See the technical section below for more information, downloads and links.
    • You are permitted to use another programming environment that receives OSC (such as Unity3D, openFrameworks, Max/MSP, etc.) as an alternative to Processing, but classroom support is not provided for those tools.
    • FaceOSC provides the ability to track faces in a stored video, instead of a live webcam. You are permitted to develop a piece of software that responds to the faces in a specific video. If you elect to do this, include some explanation about why you selected the video you did, and the ways in which your software is specially tuned to it.
  • Create a blog post on this site to present your documentation and discussion of the project. Please title your blog post nickname-faceosc, and categorize your blog post with the WordPress category, FaceOSC.
  • Document your software by capturing 30-60 seconds of screen-grabbed video, in which you are controlling/puppeteering your design in real time. You may wish to practice a short monologue or other routine, and record a screen-captured video performance with your avatar.
  • Make your video documentation effective. It’s important to see how your project actually responds to a face. When documenting your project, one possibility I recommend (if you are comfortable doing so) is to use a “split-screen” format, as shown here. The purpose of this split-screen layout is to help the viewer understand the relationship of the user’s face actions to the software’s behavior:
    faceoscp5
    If you do not prefer to show your own face in the video documentation (perhaps you wish to maintain your anonymity, or for other perfectly acceptable reasons), that’s fine. You may ask a friend to perform in the video for you, or you may use a “readymade” video instead of a live webcam. Another acceptable alternative is to omit video of anyone’s face altogether, and instead include the raw points transmitted by FaceOSC as an overlay in a corner of your screen.
  • Upload your documentation video to YouTube or Vimeo, and embed it in your blog post. There are helpful instructions for embedding videos here. Please embed your GIF, too.
  • Include an animated GIF recording from your video documentation, in addition to the embedded video, as well. This GIF can be brief; just a second or two is fine. (GIFs will persist on this website, whereas I can’t depend on YouTube.)
  • Also in your blog post: upload your code by embedding your Processing/Java code using the WP-Syntax WordPress plugin. There are instructions for using WP-Syntax in the “Embedding syntax-colored code” section of this web page.
  • Write a paragraph about your inspirations, goal, and evaluations of your project. In your narrative, mention the specific properties of the face that you used to control your design.
  • Include a scan or photo of your sketches, to document your process.

Working with FaceOSC

FaceOSC is a real-time face tracker by Kyle McDonald and Jason Saragih. It tracks 66 landmarks on a person’s face, as well as some additional information, and transmits this data over OSC. Binaries for FaceOSC executables can be found here:

Processing templates for receiving OSC messages from FaceOSC:

  • Processing template for receiving FaceOSC (see the directory /processing/FaceOSCReceiver in the zip download).
  • There’s also some more Processing code below which you might find helpful.
  • Important. To receive OSC messages from FaceOSC, you will need to install the oscP5 library into Processing. This can be done with the “Add Library” tool, instructions for which can be found here.
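Incidentally, the templates route messages to named handler functions with oscP5’s plug() mechanism, but oscP5 also passes every incoming message to a generic oscEvent() callback, which can be handy for exploring the message stream. Below is a minimal illustrative sketch of this alternative (my own example, not part of the provided template), assuming FaceOSC is transmitting on its default port, 8338:

import oscP5.*;

OscP5 oscP5;
float jaw = 0;  // jaw openness, updated from /gesture/jaw

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 8338);  // FaceOSC transmits to port 8338 by default
}

void draw() {
  background(255);
  // draw a bar whose height follows the jaw-openness value
  float h = jaw * 20;  // arbitrary scale factor; tune to taste
  fill(0);
  rect(width/2 - 20, height - h, 40, h);
}

// oscP5 calls this for every incoming OSC message
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/jaw")) {
    jaw = m.get(0).floatValue();
  }
}

The plug() approach used in the templates is usually tidier; oscEvent() is mostly useful when you want to inspect or log everything FaceOSC sends.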

Note: Windows users, you will also need to install the following system components, in order for the FaceOSC application to work properly:

The information below is copied from the README that accompanies the Mac FaceOSC. You can also read about the FaceOSC message specification here.

--------------
Settings

The settings for FaceOSC are found in the settings.xml file located at: 
* Mac OSX: right click on FaceOSC.app, select "Show Package Contents", and navigate to Contents/Resources/data/ 
* Win & Linux: included in the FaceOSC folder

Further instructions are contained within the settings.xml file.

--------------
Playing Movies

FaceOSC can load a movie instead of using webcam input. 
Put the movie file in your home folder and set it in the <movie> tag with the full path to the movie, e.g.:

/Users/YourUserAccountName/yourMovie.mov

Change the <source> tag to 0 to use the movie as input. Also check the other movie settings (volume, speed).
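For example, the relevant fragment of settings.xml might look something like this (a hypothetical sketch based on the tag names above; check your own settings.xml, since the exact structure and surrounding tags may differ):

<movie>/Users/YourUserAccountName/yourMovie.mov</movie>
<source>0</source>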

-------------
Key Controls

* r - reset the face tracker
* m - toggle face mesh drawing
* g - toggle gui's visibility
* p - pause/unpause (only works with movie source) 
* up/down - increase/decrease movie playback speed (only works with movie source)

---------------
OSC Information

 * Pose
  * center position: /pose/position
  * scale: /pose/scale
  * orientation (which direction you're facing): /pose/orientation
 * Gestures
  * mouth width: /gesture/mouth/width
  * mouth height: /gesture/mouth/height
  * left eyebrow height: /gesture/eyebrow/left
  * right eyebrow height: /gesture/eyebrow/right
  * left eye openness: /gesture/eye/left
  * right eye openness: /gesture/eye/right
  * jaw openness: /gesture/jaw
  * nostril flare: /gesture/nostrils
 * Raw
  * raw points (66 xy-pairs): /raw
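Each of the gesture channels above carries just a single float, so it is easy to wire one directly to a drawing parameter. Here is a minimal illustrative sketch (my own example, patterned on the templates on this page, with arbitrary scale factors) that plugs the two /gesture/mouth messages into an ellipse that opens and closes with your mouth:

import oscP5.*;

OscP5 oscP5;
int   found;             // nonzero when a face is being tracked
float mouthWidth  = 10;  // placeholder neutral values; actual ranges vary by face
float mouthHeight = 2;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);  // FaceOSC's default port
  oscP5.plug(this, "found",       "/found");
  oscP5.plug(this, "mouthWidth",  "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeight", "/gesture/mouth/height");
}

void draw() {
  background(240);
  if (found != 0) {
    // scale the raw gesture values up to screen-sized dimensions
    noStroke();
    fill(200, 60, 60);
    ellipse(width/2, height/2, mouthWidth * 10, mouthHeight * 20);
  }
}

// handlers invoked by oscP5.plug() when matching messages arrive
public void found(int i)         { found = i; }
public void mouthWidth(float w)  { mouthWidth = w; }
public void mouthHeight(float h) { mouthHeight = h; }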

The “pose” data transmitted by FaceOSC represents 3D information about the head’s orientation. Here you can see it being used to control a 3D object.

In case you’re interested, here’s the Processing code for the above face-controlled box:

import oscP5.*;
OscP5 oscP5;
 
int     found;                            // nonzero when a face is being tracked
PVector poseOrientation = new PVector(); // stores the head's (x,y,z) rotation
 
//----------------------------------
void setup() {
  size(640, 480, P3D);  // P3D renderer (Processing 3); older versions used OPENGL
  oscP5 = new OscP5(this, 8338);  // listen on FaceOSC's default port
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
}
 
//----------------------------------
void draw() {
  background (180);
  strokeWeight (3); 
  noFill();
 
  if (found != 0) {
    pushMatrix(); 
    translate (width/2, height/2, 0);
    rotateY (0 - poseOrientation.y);  // mirror the head's yaw
    rotateX (0 - poseOrientation.x);  // mirror the head's pitch
    rotateZ (    poseOrientation.z);  // follow the head's roll
    box (200, 250, 200); 
    popMatrix();
  }
}
 
//----------------------------------
// Event handlers for receiving FaceOSC data
public void found (int i) { found = i; }
public void poseOrientation(float x, float y, float z) {
  poseOrientation.set(x, y, z);
}

Aaaand….


This template code below by Kaleb Crawford shows how you can obtain the raw FaceOSC points:

// Processing 3.0x template for receiving raw points from
// Kyle McDonald's FaceOSC v.1.1 
// https://github.com/kylemcdonald/ofxFaceTracker
//
// Adapted by Kaleb Crawford, 2016, after:
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
// adapted from Greg Borenstein's 2011 example
// https://gist.github.com/1603230

import oscP5.*;
OscP5 oscP5;
int found;          // nonzero when a face is being tracked
float[] rawArray;   // 132 floats: 66 (x,y) landmark pairs
int highlighted;    // index (always even) of the currently selected point

//--------------------------------------------
void setup() {
  size(640, 480);
  frameRate(30);
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");
}

//--------------------------------------------
void draw() {  
  background(255);
  noStroke();

  if (found != 0 && rawArray != null) {
    // draw each landmark; the selected one is drawn in red
    for (int val = 0; val < rawArray.length - 1; val += 2) {
      if (val == highlighted) { 
        fill(255, 0, 0);
      } else {
        fill(100);
      }
      ellipse(rawArray[val], rawArray[val+1], 8, 8); 
    }
    // draw the instructions once per frame (not once per point)
    fill(0);
    text("Use Left and Right arrow keys to cycle points", 20, 20);
    text("current index = [" + highlighted + "," 
            + int(highlighted + 1) + "]", 20, 40);
  }
}

//--------------------------------------------
public void found(int i) {
  println("found: " + i);
  found = i;
}
public void rawData(float[] raw) {
  rawArray = raw; // stash data in array
}

//--------------------------------------------
void keyPressed() {
  if (rawArray == null) return;  // no data received yet
  if (keyCode == RIGHT) {
    highlighted = (highlighted + 2) % rawArray.length;
  }
  if (keyCode == LEFT) {
    highlighted = highlighted - 2;
    if (highlighted < 0) {
      highlighted = rawArray.length - 2;  // wrap to the last (x,y) pair
    }
  }
}