final

Collaboration with Sanjay.

This is a revision of our AR project. The AR package we used (Fritz AI) was unreliable for pose estimation, so we scrapped AR. Within the time we had to revise it, we switched to the web browser instead (there was not enough time to try WebAR).

We got Mixamo to work with the 3D model by fixing/reducing its geometry/mesh. The model performs a punching animation. We rendered the animation out to images, then played the frames back in p5.js like a sprite.
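
A minimal sketch of that sprite-style playback; the frame count and file names here are hypothetical:

```js
// Play pre-rendered animation frames like a sprite in p5.js.
let frames = [];
const NUM_FRAMES = 24; // hypothetical number of rendered frames

function preload() {
  for (let i = 0; i < NUM_FRAMES; i++) {
    frames.push(loadImage('punch/frame' + nf(i, 4) + '.png')); // hypothetical paths
  }
}

function setup() {
  createCanvas(640, 480);
  frameRate(24); // advance one rendered frame per draw() call
}

function draw() {
  background(0);
  const f = frames[frameCount % frames.length]; // loop the punch
  image(f, width / 2 - f.width / 2, height / 2 - f.height / 2);
}
```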

The 3D model appears when the person strikes a pose, and a piano soundtrack plays. Both are references to the show JoJo's Bizarre Adventure.

(pose that was referenced)


(pose and the stand referenced)


(3D model image appears and starts punching when person poses)

(punching animation done in Mixamo)

(3D model done in Blender)
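
A minimal sketch of the pose-trigger logic in the browser version, assuming ml5.js's poseNet (the post doesn't name the pose-estimation library) and a hypothetical "both wrists above the nose" pose check:

```js
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // assumes ml5.js is loaded alongside p5.js
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => { poses = results; });
}

// hypothetical stand-in for the actual JoJo pose check
function isJojoPose(pose) {
  return pose.leftWrist.y < pose.nose.y && pose.rightWrist.y < pose.nose.y;
}

function draw() {
  image(video, 0, 0);
  if (poses.length > 0 && isJojoPose(poses[0].pose)) {
    // draw the current Stand sprite frame and start the piano track here
  }
}
```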

vingu – arsculpture

AR Golden Experience Requiem Stand from JoJo's Bizarre Adventure. Collaborated with sansal.

In JoJo's Bizarre Adventure, Stands are a "visual manifestation of life energy" created by the Stand user. A Stand hovers near its user and possesses special (typically fighting) abilities. For this project, we used pose estimation so that the Stand appears at the person's left shoulder. We used a template and package from Fritz AI.

The code uses pose estimation to locate the head, shoulders, and hands. If all of these parts are within the camera view, the Stand model moves toward the left shoulder.
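
The Fritz AI template code itself isn't shown in this post; purely as an illustration, the follow-the-shoulder logic amounts to something like this (hypothetical keypoint names, written in JavaScript for consistency with the other sketches):

```js
function updateStand(pose, stand) {
  // hypothetical keypoints reported by the pose estimator
  const parts = [pose.head, pose.leftShoulder, pose.rightShoulder,
                 pose.leftHand, pose.rightHand];
  // only move when every tracked part is inside the camera view
  if (parts.every((p) => p && p.inView)) {
    // ease toward the left shoulder instead of snapping to it
    stand.x += (pose.leftShoulder.x - stand.x) * 0.1;
    stand.y += (pose.leftShoulder.y - stand.y) * 0.1;
  }
}
```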

This is more of a work in progress; we ran into a lot of complications with the pose estimation and with rigging the model. Initially, we wanted the Stand to appear when the person strikes a certain pose (JoJo's Bizarre Adventure is famous for its iconic poses). However, the pose estimation was not very accurate and was very slow, which made it impossible to train any models. In addition, Fritz AI had issues with depth, so we could not control the apparent size of the 3D model (it would be either really close or far away). We also planned to have the Stand model play animations, but ran into rigging issues.

Some Adjustments to be made:

  • rig and animate the model
  • add text effects (seen in the show)
  • add sound effects (seen in the show)
  • make the Stand fade in

Some work-in-progress photos:

3D model made in Blender

Fritz AI is sometimes able to detect the head (white sphere), shoulders (cubes), and hands (colored spheres). The 3D model moves to the left shoulder.

We were not able to instantiate the 3D model at the shoulder point, so the model just appears at the origin.

Fritz AI works only about half of the time; here the head, shoulders, and hands are way off.


vingu – SituatedEye

I made a surveillance ramen bot that takes a picture when it sees someone take instant ramen out of the pantry, and tweets the image on Twitter. I thought it would be interesting to document my housemates' and my instant-ramen-eating habits, since our pantry is always stocked with it.

I worked backwards, starting with the twitterbot. I used the Twit API and node.js. (Most of the work was setting up the Twitter account and learning about the command prompt.) Then I added image capture to the Feature Extractor template. I struggled with connecting the two programs, since one runs in p5.js (client-side) and the other in node (on my local computer). I tried to call the twitterbot code from the feature-extractor code (trying out different modules and programs), but I couldn't get it to work. Instead, I made the twitterbot run continuously once I start it from the command prompt; it calls a function every 5 seconds to check whether there is a new image to post, as in the sketch below.
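
A minimal sketch of that polling loop using the Twit package; the image path and tweet text are hypothetical stand-ins, and the keys are redacted:

```js
// twitterbot.js (node.js) — posts the capture image whenever it changes
const Twit = require('twit');
const fs = require('fs');

const T = new Twit({
  consumer_key: '...',
  consumer_secret: '...',
  access_token: '...',
  access_token_secret: '...',
});

const IMAGE_PATH = 'capture.png'; // hypothetical path the capture program saves to
let lastPosted = 0;

setInterval(() => {
  fs.stat(IMAGE_PATH, (err, stats) => {
    if (err || stats.mtimeMs <= lastPosted) return; // no new image yet
    lastPosted = stats.mtimeMs;
    const b64 = fs.readFileSync(IMAGE_PATH, { encoding: 'base64' });
    // upload the image, then tweet with the returned media id
    T.post('media/upload', { media_data: b64 }, (err, data) => {
      if (err) return console.error(err);
      T.post('statuses/update',
        { status: 'ramen detected', media_ids: [data.media_id_string] },
        (err) => { if (err) console.error(err); });
    });
  });
}, 5000); // check every 5 seconds
```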

I made the Twitter account's header/profile look like a food blog/food channel account. I thought it would make a fun visual contrast with the tweets.

code (I didn't run it in the p5.js editor; I ran it locally from my computer)

Some after thoughts:

  • It would have been better to finish this earlier, so that there would be more realistic Twitter documentation of my housemates and me; none of my housemates were available/awake by the time I finished
  • Find a better camera location, so it looks less performative
  • I should have collected samples of holding food that wasn't instant ramen
  • This can only be run locally from my computer; maybe upload it to Heroku?


Scrolling through my tester tweets.

vingu – ML toe dipping

A Pix2Pix

I played around to see how it would work with non-cat drawings, and how it would detect circles and ellipses.

B GANPaint Studio

I found it interesting that it reworked parts of the existing image, such as turning the red bus into a red door (rather than pasting on a door).


C Art Breeder

I manipulated the genes so that the original could not be recognized. I also combined all the members of BTS for fun.

D Infinite Patterns


E GPT-2

vingu – telematic

Chaotic Garden Glitch

Enter Chaotic Garden 

This was inspired by Ken Goldberg & Joseph Santarromana's TeleGarden. I really liked the idea of maintaining a garden together, and the idea of community.

Users collaborate simultaneously, but plant their own seeds independently of each other (each user can only see their own plants). When they water their plants, they are watering everyone's plants (whoever is online). Users are anonymous, shown only as a cursor. This makes taking care of your garden somewhat chaotic: if someone else is watering their plants, water seems to appear out of nowhere and waters your plants as well. Each user's watering action affects all the other users' gardens.
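
A minimal sketch of the shared watering, assuming a socket.io relay (the post doesn't name the networking library); waterAt() is a stand-in for the actual watering animation:

```js
// server.js — relay each watering event to every other gardener
const io = require('socket.io')(3000);
io.on('connection', (socket) => {
  socket.on('water', (pos) => socket.broadcast.emit('water', pos));
});
```

```js
// sketch.js (p5.js client)
const socket = io('http://localhost:3000'); // hypothetical server URL

// someone else is watering: water appears "out of nowhere"
socket.on('water', (pos) => waterAt(pos.x, pos.y));

function mouseDragged() {
  waterAt(mouseX, mouseY);                        // water your own garden
  socket.emit('water', { x: mouseX, y: mouseY }); // and everyone else's
}

function waterAt(x, y) {
  fill(100, 150, 255);
  circle(x, y, 8); // stand-in water droplet
}
```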

Initially, I tried to make a virtual shared musical garden. The y position of each plant determines its musical note, like notes on a sheet of music. (In addition, plants would die if not watered within 10 minutes.) I was not able to implement the shared plants and music in time; the only thing shared is the watering action.

(first ideas of motion tracking and hand drawings)

vingu – LookingOutwards04

Video

Kaho Abe is a game designer and media artist who creates installation games.

Hotaru is a two-person interactive, task-based game in which the players act as "the last remaining lightning bugs in a fantastical world, in their last stand against an invisible enemy." One person generates lightning using hand gestures and transfers it to the second person, who "shoots" it from their gauntlet. I really like the costume elements of this piece; they add another level of immersion and detail (the way the lightning pack lights up, and the glowing spikes on the gauntlet). I appreciate that the artist went through many iterations of this piece and decided to keep the focus on the costumes and gestures rather than bigger theatrical elements.

vingu – Techniques

example

This gets audio input from the computer microphone. I think there are fun interactive uses for it.
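
For example, a minimal p5.sound sketch (the circle size is an arbitrary mapping):

```js
let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // computer microphone input
  mic.start();
}

function draw() {
  background(220);
  const vol = mic.getLevel(); // amplitude between 0.0 and 1.0
  circle(width / 2, height / 2, 50 + vol * 400); // grows with loudness
}

function mousePressed() {
  userStartAudio(); // browsers require a gesture before audio starts
}
```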

library

This is a 2D collision library, which will be helpful in making animations or games.
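
The post doesn't name the library, but assuming something like bmoren's p5.collide2D, a minimal check looks like:

```js
// requires p5.collide2D loaded alongside p5.js
let hit = false;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(hit ? 'salmon' : 220);
  circle(200, 200, 100);      // fixed circle
  circle(mouseX, mouseY, 50); // circle that follows the mouse
  // arguments are center x/y and diameter for each circle
  hit = collideCircleCircle(200, 200, 100, mouseX, mouseY, 50);
}
```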

glitch

hello-Magenta helps create music on the web. It seems simple and fun.
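
A minimal sketch along the lines of the hello-magenta walkthrough, assuming @magenta/music is loaded as mm; the four-note sequence is just an example:

```js
// a tiny hand-written NoteSequence (pitches are MIDI note numbers)
const TWINKLE = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 60, startTime: 0.5, endTime: 1.0 },
    { pitch: 67, startTime: 1.0, endTime: 1.5 },
    { pitch: 67, startTime: 1.5, endTime: 2.0 },
  ],
  totalTime: 2,
};

const player = new mm.Player();
player.start(TWINKLE); // synthesizes the NoteSequence in the browser
```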