Work

gray-arsculpture

I started this project thinking about buses and the chaotic, fleeting feeling they represent. They're a big part of city life in my experience, and I wanted to respond to that chaos and rush by replacing the windows with a window into a forest or something similar, where the outside just slowly moves by in a straight line. I was inspired by AR portals like this one:

I did some sketches to see what a bus would look like with the windows switched, imagining somebody cutting off all the chaotic input and replacing it with the calming view out the window (sorry it's really hard to see, I drew lightly).

So I watched a lot of videos on how to do this, mainly by a great guy whose channel is called Pirates Just AR, which is a great name, by the way.

I had some trouble extending his tutorials to a nice nature landscape, though, and I never even got to the point of adding motion, which I'm sure would have been hard too.

I thought I could instead put a hole where the "Emergency Exit" on the roof of the bus is, and put a tree actually inside the bus, breaking out through the hole. I also didn't want to go out looking for a bus at that point in the night, so I decided an elevator was a similar idea. So I changed the image target to the roof of the elevator and added some calming music for when the hole and tree appear. I think I can do a lot more with this concept, especially by adding elevator dings and bird sounds; I think that'd be a good contrast. It would also be nice to have flying birds, and to suggest the motion of the elevator somehow.
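The reveal itself is simple to sketch in Unity. Here, OnTargetFound/OnTargetLost would be wired to whatever tracking-found/lost events the image-target plugin exposes, and all the object references are placeholder names, not the actual project's:

```csharp
using UnityEngine;

// Minimal sketch: reveal the hole/tree and start the calming music
// when the image target (the elevator roof) is found.
public class ElevatorPortal : MonoBehaviour
{
    public GameObject hole;        // placeholder: the hole cut into the roof
    public GameObject tree;        // placeholder: the tree breaking through
    public AudioSource calmMusic;  // placeholder: looping ambient track

    public void OnTargetFound()
    {
        hole.SetActive(true);
        tree.SetActive(true);
        if (!calmMusic.isPlaying) calmMusic.Play();
    }

    public void OnTargetLost()
    {
        hole.SetActive(false);
        tree.SetActive(false);
        calmMusic.Pause();
    }
}
```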

Here's my imagined use of this sculpture:

MoMar-arsculpture


This is an AR app that lets you place a wooden toy train set on flat vertical surfaces.

I wanted to fulfill a loooong-forgotten childhood aspiration. When I was younger, I thought that having a train set on a vertical surface would be cool. Alas, 5-year-old me did not have enough duct tape and glue to make it happen. I figured I could get back to it sometime down the line, when I had more experience in making toy train sets.

Unfortunately, in the present day, I don't have access to a wooden train set, so I had to make a virtual train set, to scale, complete with one of my favorite toy trains: The Polar Express.
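For anyone chasing the same aspiration, here's a minimal sketch of the vertical-placement step, assuming Unity's AR Foundation (the prefab and manager references are placeholders, not the app's actual code):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: place the train prefab where a tap hits a *vertical* plane.
// The scene's ARPlaneManager must have vertical plane detection enabled.
public class VerticalTrainPlacer : MonoBehaviour
{
    public ARRaycastManager raycastManager; // scene's ARRaycastManager
    public GameObject trainPrefab;          // placeholder: the train set model

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            var plane = hits[0].trackable as ARPlane;
            if (plane != null && plane.alignment == PlaneAlignment.Vertical)
            {
                // Align the train with the wall so it "rides" the surface.
                Instantiate(trainPrefab, hits[0].pose.position, hits[0].pose.rotation);
            }
        }
    }
}
```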

Special thanks to Professor Levin and Lau Hochi.


sovid & lubar – arsculpture

A tiny AR Golan featuring some of our favorite Golan quotes to stand guard outside the doors of the Studio For Creative Inquiry and show off his dance skills.


Our initial intention was to have the figure turn and blink at the viewer when 'looked at'; however, we ran into the conflict of animating the mesh rather than the rig, so we decided instead to transition to a spinning state when the camera centers on the character. We initially wanted to work off of the placeAnchorsOnPlanes example, but found that surface detection wasn't working as we needed it to. Instead, we placed the mesh at the center of the scene, so wherever a user was in space when the app launched, that was where the model appeared.

Sophia modeled the figure, and we rigged and animated it using Mixamo. We also experimented with blend trees and morphing animation states together, and Lumi figured out how to switch between states cleanly without jumping. The GameObject for the model switches animation states when the camera centers on it (using raycasting), as in the sketch below. For future iterations, we would like to place multiple animated meshes in a space using raycasting and switch between multiple animation states.
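A minimal sketch of that center-of-screen trigger, assuming the component sits on the model (which needs a collider) and that "Spin" is the name of the target animation state; CrossFade blends states smoothly, which is one way to avoid the jumping mentioned above:

```csharp
using UnityEngine;

// Sketch: when the ray through the screen's center hits the character,
// cross-fade the Animator into the spin state.
public class GazeSpinTrigger : MonoBehaviour
{
    public Camera arCamera;      // the AR session camera
    public Animator animator;    // Animator on the Golan model
    bool spinning;

    void Update()
    {
        Ray centerRay = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        if (!spinning && Physics.Raycast(centerRay, out RaycastHit hit)
            && hit.transform == transform)
        {
            animator.CrossFade("Spin", 0.25f); // assumed state name "Spin"
            spinning = true;
        }
    }
}
```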

Progress Gifs

Before Smoothing Animation Transitions

Transition tests


ilovit-arsculpture

The concept is to add Groucho glasses to the portraits in the Carnegie Museum of Art.

Portraits, especially older portraits of rich people, tend to be very posh and stuffy. Groucho glasses instantly turn them silly, especially if they raise their eyebrows at you.

I didn't get the AR working in time. I made some nice Groucho glasses, though:

Edit: AR kinda working. Enough to make the following documentation:

It still doesn't totally work.

Augmented Faces only works with the front-facing camera, so the video is achieved with camera trickery. I futzed for a long time with image targets, but I couldn't get the glasses to line up with the portraits consistently. I then found that the easiest way to get results was to detect faces in the portraits themselves, but face detection can only be used with the selfie camera: not a good state of things for an AR app that you want to point at things other than yourself.
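For what it's worth, the face-detection version can be sketched with AR Foundation's ARFaceManager (this uses the 4.x-era facesChanged event, and glassesPrefab is a placeholder):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: parent a Groucho-glasses prefab to each detected face.
// With ARCore, face tracking runs on the front camera only, which is
// exactly the limitation described above.
public class GrouchoAttacher : MonoBehaviour
{
    public ARFaceManager faceManager;
    public GameObject glassesPrefab; // placeholder: the Groucho glasses model

    void OnEnable()  => faceManager.facesChanged += OnFacesChanged;
    void OnDisable() => faceManager.facesChanged -= OnFacesChanged;

    void OnFacesChanged(ARFacesChangedEventArgs args)
    {
        foreach (ARFace face in args.added)
            Instantiate(glassesPrefab, face.transform); // follows the face pose
    }
}
```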

zapra – ARSculpture

Behold the Canal, a goopy, body-like tunnel that transcends time and space. It's shockingly convenient as a portal between campus and my apartment, but at what cost?

Documentation:
I created the canal in the Putty 3D app, which lets you sculpt goopy 3D models by simply drawing with your finger.

Over the shoulder:

Full view of the canal:

vingu – arsculpture

An AR Golden Experience Requiem Stand from JoJo's Bizarre Adventure. A collaboration with sansal.

In JoJo's Bizarre Adventure, Stands are a "visual manifestation of life energy" created by the Stand user. A Stand presents itself hovering near the Stand user and possesses special (typically fighting) abilities. For this project, we used pose estimation to make the Stand appear at the person's left shoulder. We used a template and package from Fritz.ai.

The code uses pose estimation to estimate the positions of the head, shoulders, and hands. If all of the parts are within the camera view, the Stand model moves toward the left shoulder.
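A minimal sketch of just that follow logic, independent of the pose backend; OnPoseEstimated is a hypothetical hook for wherever the Fritz.ai keypoints arrive each frame, not Fritz's actual API:

```csharp
using UnityEngine;

// Sketch: the Stand drifts toward the left shoulder, but only when
// every tracked body part is in view (null means "not detected").
public class StandFollower : MonoBehaviour
{
    public Transform standModel;   // placeholder: the Stand's root transform
    public float speed = 2f;       // smoothing speed, tune to taste

    public void OnPoseEstimated(Vector3? head, Vector3? leftShoulder,
                                Vector3? rightShoulder, Vector3? leftHand,
                                Vector3? rightHand)
    {
        bool allVisible = head.HasValue && leftShoulder.HasValue &&
                          rightShoulder.HasValue && leftHand.HasValue &&
                          rightHand.HasValue;
        if (!allVisible) return;

        standModel.position = Vector3.Lerp(standModel.position,
                                           leftShoulder.Value,
                                           speed * Time.deltaTime);
    }
}
```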

This is more of a work-in-progress; we ran into a lot of complications with the pose estimation and with rigging the model. Initially, we wanted the Stand to appear when the person strikes a certain pose (JoJo's Bizarre Adventure is famous for its iconic poses). However, the pose estimation was not very accurate and was very slow, which made it impossible to train any models. In addition, Fritz AI had issues with depth, so we could not control the size of the 3D model (it would be really close or really far away). We also planned to have the Stand model do animations, but ran into rigging issues.

Some Adjustments to be made:

  • rig and animate the model
  • add text effects (seen in the show)
  • add sound effects (seen in the show)
  • make the Stand fade in

Some work-in-progress photos

3D model made in Blender

Fritz AI is sometimes able to detect the head (white sphere), shoulders (cubes), and hands (colored spheres). The 3D model moves to the left shoulder.

We were not able to instantiate the 3D model at the shoulder point, so the model just appears at the origin.

Fritz AI only works about half of the time; here, the head, shoulders, and hands are way off.


meh-arsculpture

Untitled Duck AR by Meijie and Vicky is an augmented reality duck that appears on your tongue when you open your mouth, and is triggered to yell with you when you stick out your tongue.

https://www.youtube.com/watch?v=xt1FgOcHXko&feature=youtu.be

Process 

We started off by using up a bunch of our limited developer builds (heads up for future builds: there is a limit of 10 per week per free developer account, lol) while testing the numerous build templates we could use to implement AR over our mouths, most notably the image target, face feature tracker, and face feature detector.

We actually did get an image target to work for Meijie's open mouth; however, it was a very finicky system, because she would have to force her mouth into the exact same shape, under very similar lighting, for it to register. We plugged in an apple prefab, and thought it was quite humorous, as it almost looked like a pig stuffed with an apple.

From there, we initially wanted to explore having an animation of some sort take place inside the mouth. However, that proved difficult due to the lack of accuracy with small differences in depth, as well as the amount of lighting that would need to be taken into consideration. Because the image target also had issues detecting the mouth, we decided to migrate to the face mesh and facial feature detector.

We combined the face mesh and the feature detector to trigger a duck to appear on the tongue when the mouth is open.
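The trigger itself can be stated simply: measure the gap between the lips and toggle the duck. A minimal sketch, where upperLip and lowerLip are assumed hooks into whatever landmark transforms the face mesh exposes:

```csharp
using UnityEngine;

// Sketch: compare lip landmarks each frame; show the duck only while
// the mouth is open wider than a tunable threshold.
public class DuckOnTongue : MonoBehaviour
{
    public Transform upperLip;          // assumed: landmark from the face mesh
    public Transform lowerLip;          // assumed: landmark from the face mesh
    public GameObject duck;             // duck model, parented to the face mesh
    public float openThreshold = 0.02f; // lip separation in meters, tune to taste

    void Update()
    {
        float gap = Vector3.Distance(upperLip.position, lowerLip.position);
        duck.SetActive(gap > openThreshold);
    }
}
```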


vikz-arsculpture

Untitled Duck AR by Meijie and Vicky is an augmented reality duck that appears on your tongue when you open your mouth, and is triggered to yell with you when you stick out your tongue.


12.4.19 Updated Prototype

The duck appears (in grass, with more refined detail) the first time the mouth is opened, and then a raw duck (yum yum!) appears the second time the mouth is opened.

lsh-tli-arsculpture

Color Me Surprised is an experience designed by lsh & tli in which users describe the world around them by tapping to sample the color of a location.

This project explores the ways which we discover the world around us, and how information is stored digitally about our surroundings. Through tapping the screen, one begins to catalog or tag their surroundings, building up to eventually describe objects by color. Very early on, we knew we would want to use the massive color api to express the world around us, but decisions about representation and interaction where what drove this project forward. We originally attempted to use OpenCV to reduce the image and constantly describe everything the camera would see, but even on a laptop, the performance was awful, let alone on an iPhone. Another decision was whether or not to include the world that is being tagged. The decision rests on the idea of the interaction being an emergent experience versus a descriptive tool. Lastly, we had a series of struggles with Unity and Xcode, which are still being ironed out. I would say this project is successful in that it creates a novel experience.