sovid & lubar – Final

For our final we worked to expand on the AR Golan project, as there were certain interactions we wanted to explore but had not yet achieved in our first iteration. The first of these was turning the character to face the "viewer" when the character is looked at. The next was including multiple characters and having each of them perform the same turning, waving interaction. This is the piece we had the most issues with: while we are able to detect when each gameObject is being viewed, we can still only trigger the animation sequence on the first game object (and we do not yet understand why). So for today's version we have one kind penguin who rotates towards the camera and multiple other penguins floating around minding their own business.

We switched out the Golan model for a series of Coca-Cola-esque penguins, one of which is textured with the advertisement, to celebrate the upcoming capitalist Christmas.

sovid & lubar – arsculpture

A tiny AR Golan featuring some of our favorite Golan quotes to stand guard outside the doors of the Studio For Creative Inquiry and show off his dance skills.

 

Our initial intention was to have the figure turn and blink at the viewer when 'looked at'; however, we ran into the problem of having animated the mesh rather than the rig, so we decided instead to transition to a spinning-around state when the camera centers on the character. We initially wanted to work off of the placeAnchorsOnPlanes example, but found that surface detection wasn't working as we needed it to. Instead, we decided to place the mesh at the center of the scene, so wherever a user was in space when the app was launched, that was where the model appeared. Sophia modeled the figure, and we rigged and animated it using Mixamo. We also experimented with blend trees and morphing animation states together, and Lumi figured out how to switch between states smoothly without jumping. The gameObject for the model switches animation states when the camera centers on it (using raycasting). For future iterations, we would like to be able to place multiple animated meshes in a space using raycasting and to switch between multiple animation states.

Progress Gifs

Before Smoothing Animation Transitions

Transition tests

 

lubar-SituatedEye

The Situated Psychic Eye

I am fascinated by fortune tellers and the idea of a "psychic eye" (I don't buy the "psychic" part one bit), but the elements of incredibly detailed observation and building from the 'tellee's' cues are interesting enough on their own. Using computer vision to create an accurate bodily and verbal cue reader was ever so slightly out of scope for this piece, but I wanted to continue with the idea of a psychic gaze. So I created a tarot card reading setup, which incorporates the fun, slightly ridiculous, and somewhat mysterious air of telling the future.

My initial (and continued) intent is to train the computer to recognize all 76 cards; however, as I have yet to find a way to successfully load pre-stored images or save the images taken, I scaled down slightly to save my sanity, since every time the program is restarted the cards have to be re-scanned.

The setup in a physical space is, I think, critical to creating the air of mysticism, and I pulled a dirty trick in projecting onto a surface by zooming the unnecessary elements out of frame. (This would not be ideal for a final system setup.)

The program itself works beautifully at recognizing the different cards; now I just need to figure out how to save and reload the training so that I can implement the entire deck.
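A minimal sketch of the direction I'm looking at for this, assuming an ml5.js featureExtractor classifier running on the webcam (the labels, file paths, and callback shapes below follow the ml5 reference and are placeholders rather than my actual project code):

```js
// sketch.js — a minimal sketch of the save/load direction, assuming an
// ml5.js featureExtractor classifier over the webcam. Labels, file names,
// and callback shapes are placeholders based on the ml5 reference, not my
// actual project code.
let video;
let featureExtractor;
let classifier;
let resultLabel = '...';

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  featureExtractor = ml5.featureExtractor('MobileNet', () => console.log('MobileNet ready'));
  classifier = featureExtractor.classification(video);

  // Instead of re-scanning every card on each restart, load a model that
  // was saved once after training (classifier.save() writes model.json +
  // model.weights.bin, which can be re-uploaded as project assets).
  classifier.load('model/model.json', () => {
    console.log('trained card model loaded');
    classifyCard();
  });
}

function classifyCard() {
  classifier.classify((err, results) => {
    if (!err && results) resultLabel = results[0].label;
    classifyCard(); // keep classifying the live video feed
  });
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  text(resultLabel, 20, height - 20);
}

// One-time training pass (before saving):
//   classifier.addImage('the-fool');  // repeat for many shots of each card
//   classifier.train((loss) => { if (loss === null) classifier.save(); });
```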

Program Link Here

Process:

 

lubar – machinelearning

A. Pix2Pix

This is such a playful and fun tool to experiment with.

B. GAN Paint

Original:

Resulting:

This image-synthesizing system is really interesting to work with, particularly when the resulting image produces something completely unexpected (for example, drawing 'domes' in the image above resulted in bright blue streaks across it). This seems like it could be a really powerful tool for generative image making.

C. Art Breeder

What I like most about this program is the control that the user has over the image mixes and adjustments. The resulting images can be really beautiful.

D. Infinite Patterns

Image:

Source:

I've been tinkering around with this for a bit now and still don't entirely understand how the sliding menus change the image (in consistent, reproducible ways); however, the resulting images tend to be beautiful.

E. GPT-2

 

This was one of my favorite readymades in this list. Sometimes the completed text would go off in an entirely unrelated direction, and other times it would be a smooth continuation of what I was expecting. Either way, the results tended to be really funny.

F. Google AI

One of the experiments I played with was Quick, Draw!, in which a neural network tries to guess what you are drawing. It was really fun to hear the in-between guesses that occurred as the images were being drawn.

lubar – telematic

The past message sender, the keeper-upper, the interrupter

An app that plays with the idea of trying to keep up with an ongoing conversation and coming up with something to add only once the conversation has already moved on. When you add a new comment, the chat app takes the previously written message in the conversation and sends it in your name instead. This creates a new way to navigate a communication space: the user has no control over the direction of the conversation as they press send, the messages intercept and disrupt the smooth flow of send and reply, and everyone collaborates simultaneously while always remaining one step behind.
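A minimal sketch of that core mechanic, assuming a node.js + socket.io chat setup like the Glitch examples we worked from (the event names and variables here are illustrative, not the project's actual code):

```js
// server.js — a minimal sketch of the "one step behind" mechanic, assuming
// a node.js + socket.io chat setup like the Glitch examples. Event names
// and variables are illustrative, not the project's actual code.
const express = require('express');
const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

app.use(express.static('public'));

// The most recent message, held back until the next person presses send.
let lastMessage = null;

io.on('connection', (socket) => {
  socket.on('chat', (data) => {
    // data = { name: 'lubar', text: 'hello!' }
    if (lastMessage) {
      // Broadcast the *previous* message, but signed with the current
      // sender's name, so everyone is always one step behind.
      io.emit('chat', { name: data.name, text: lastMessage.text });
    }
    lastMessage = data; // hold the new comment back for the next sender
  });
});

http.listen(3000, () => console.log('listening on port 3000'));
```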

Link to webpage           Link to Glitch Program

- Process -

For the telematic piece I wanted to create a translating chat app which takes outgoing typed messages and translates them on screen into the languages of the other chatters (excluding your own language), losing the original written text (in translation, heh!). All messages coming in to you are translated into your chosen language, creating a possibility for dialogue and language untangling across boundaries.

I thought that using a Google Translate API to change the text would be a great opportunity to learn more about APIs as part of this project. I wish that I had not chosen to do so while also learning how to navigate Glitch and node.js, and while trying to untangle why examples of working translations immediately failed when remixed. I ran into so many obstacles and problems trying to implement the Google Translate API in Glitch, then found an alternate resource and got the language detection and translation working! This was a glorious yet short-lived victory, as I later discovered that the alternate API resource limited the number of translations it would allow, thus stopping the program from working entirely at 10:40pm on Tuesday (yay!):

I'm incredibly frustrated that I was unable to get this to work; however, I feel that I learned a lot from the process (not necessarily the things I set out to learn, but still useful). Link to this project. I will be continuing to work on this.
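Since I'll be coming back to this, here is a rough sketch of what the detect-and-translate step could look like with the official @google-cloud/translate node client; this illustrates the general approach only (it is not the alternate resource I tried, and the API key variable is an assumption):

```js
// translate-sketch.js — a rough sketch of per-user translation using the
// official @google-cloud/translate node client. This illustrates the
// approach only; the GOOGLE_API_KEY environment variable is an assumption.
const { Translate } = require('@google-cloud/translate').v2;
const translate = new Translate({ key: process.env.GOOGLE_API_KEY });

async function translateForUser(text, targetLang) {
  // Detect the language the message was written in...
  const [detection] = await translate.detect(text);
  // ...then translate it into the receiving user's chosen language.
  const [translated] = await translate.translate(text, targetLang);
  return { from: detection.language, text: translated };
}

// e.g. translateForUser('hola, ¿cómo estás?', 'en')
//      -> { from: 'es', text: 'hello, how are you?' }
```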

So, setting that aside and reusing some of the framework I had in place for the translation project, I switched gears in order to have a functioning program.

lubar – techniques

Arduino sensor data via WebJack: https://p5js.org/examples/interaction-arduino-sensor-data-via-webjack.html

Serial Port Connection: https://github.com/p5-serial/p5.serialport

I was not aware that it was possible to connect to and read data from an Arduino in p5.js.

Mappa: https://github.com/cvalenzuela/Mappa

I have not worked with maps before in a programming environment but have been interested in doing so; I now have a solid starting point.
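As a note to self, a minimal Mappa starting point adapted from the library's basic examples (the coordinates, zoom, and tile style below are placeholders):

```js
// sketch.js — a minimal Mappa starting point in p5.js, adapted from the
// library's basic examples. Coordinates, zoom, and tile style are placeholders.
const mappa = new Mappa('Leaflet');

const options = {
  lat: 40.4433,   // roughly Pittsburgh
  lng: -79.9436,
  zoom: 12,
  style: 'http://{s}.tile.osm.org/{z}/{x}/{y}.png'
};

let myMap;
let canvas;

function setup() {
  canvas = createCanvas(640, 640);
  myMap = mappa.tileMap(options); // create the interactive tile map
  myMap.overlay(canvas);          // draw the p5 canvas on top of it
}

function draw() {
  clear();
  // Convert a lat/lng pair to pixel coordinates and mark it on the map.
  const pos = myMap.latLngToPixel(options.lat, options.lng);
  fill(255, 0, 0);
  ellipse(pos.x, pos.y, 10, 10);
}
```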

Particle JS API: https://glitch.com/~particle-api

I'm interested in being able to create a simulation on a screen that can be translated into a physical computing environment.

lubar – CriticalInterface

4. The interface collects traces: traces and remains of all agents/agencies which converge in it.

"Keep sending the same portrait if someone asks for it. You will never look older and, at some point, nobody will recognize you in real life. (1+cR)"

"Block the GPS of your phone. If you need to find a place, ask someone. Things will happen. (1-cH)"

"Use a notebook to write down your bookmarks, your contacts, your searches. (1+cA)"

I find this one particularly interesting because, upon first reading it, I saw the idea of collecting traces as something poetic and beautiful: a record of existence on an interface, a gathering of data for the self. However, upon reading the propositions, I found that the traces lean towards surveillance, the idea of someone else watching you, gathering data about you. The propositions offer ways to hide from the interface and to leave as few traces as possible. They parallel acts of exchange with someone trusted and with someone entirely unknown, placing them on the same level of action, and they encourage deliberately donating or presenting all of one's personal data in unconventional ways, to point to the fact that it's already out there and not private.

lubar-LookingOutwards04

2. Chinese Whispers by Saurabh Datta

Chinese Whispers is a physical computing work which records the user's input audio and transmits it 'telephone'-game style between and across four "gossiping" heads. The message or story is distorted and changed at each repetition, resulting in a final telling very different from the original, one that is difficult to understand and "impossible to encrypt". I really like how this piece looks at the errors that can occur in the transmission of data in a very human way, and points with humor to the misinterpretation and mis-processing of data that can occur in between. The construction of the piece, and its use of human heads as figures, also humorously points to the misinterpretation and morphing of data through gossip and human retelling, and to how even technology, although we so often see it as a reliable means of reproducing information, is susceptible to this.

 

 

lubar-Body

This piece began as an exploration of marionettes and how funny yet creepy they are when they move. Through the process of creating figures and trying to decide what visual language I wanted to use, I came across some paper cut-out dolls and some amazing Good Housewife magazines from the '50s. The ads were so funny, and I thought that the slightly uncomfortable motion that resulted from the segmented marionette-image tactic I was interested in using could work to create (what I at least find to be) a really fun piece. The motion is jerky and the body parts disjointed. While this is in large part due to errors in the program, I found I really liked how the head pops off and how glitchy it is at times. I think it adds to my laughing at the ads. (Note: if you clap your hands together off to the sides, the images change.) In a lot of ways this is a mess and a half, but despite that (or maybe because of it?), I think this is my favorite piece thus far.
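A rough sketch of how that "hands together changes the image" check might work with ml5's PoseNet; the distance thresholds and the ad-switching variable are illustrative assumptions, not the project's actual values:

```js
// A rough p5.js + ml5 PoseNet sketch of the "hands together changes the
// image" check. Thresholds and the adIndex variable are illustrative
// assumptions, not the project's actual values.
let video;
let poseNet;
let pose = null;
let adIndex = 0;     // which ad image is currently showing
let clapped = false; // so one clap only advances the ad once

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) pose = poses[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;

  // Distance between the two wrists.
  const d = dist(pose.leftWrist.x, pose.leftWrist.y,
                 pose.rightWrist.x, pose.rightWrist.y);

  if (d < 40 && !clapped) { // hands brought together: treat as a "clap"
    adIndex++;              // switch to the next ad image
    clapped = true;
  } else if (d > 80) {
    clapped = false;        // hands apart again, re-arm for the next clap
  }
}
```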

Link to Program --> HERE

Process:

I was hoping to turn this into a game of sorts, having the funky dancing figure cover the ad screen with a trail of where the participant moved; however, tinting the image caused it to slow down too much to work. (See below: tint trial at ~300x speed.)

Process Images: