meh-arsculpture

Untitled Duck AR, by Meijie and Vicky, is an augmented reality duck that appears on your tongue when you open your mouth, and is triggered to yell with you when you stick out your tongue.

https://www.youtube.com/watch?v=xt1FgOcHXko&feature=youtu.be

Process 

We started off by using up a bunch of our limited developer builds (heads up for future builds: there is a limit of 10 per week per free developer account, lol) while testing the different types of build templates we could use to implement AR over the mouth, most notably the image target, the face feature tracker, and the face feature detector.

We actually did get an image target to work on Meijie's open mouth; however, it was a very finicky system, because she had to force her mouth into the exact same shape, under very similar lighting, for it to register. We plugged in an apple prefab and found it quite humorous, as it looked almost like a pig stuffed with an apple.

With this, we initially wanted to explore having an animation of some sort take place inside the mouth. However, that proved difficult, due to the lack of accuracy with small differences in depth and the amount of lighting that would need to be taken into consideration. And because the image target had issues detecting the mouth, we decided to migrate to the face mesh and the facial feature detector.

We combined the face mesh and the feature detector to trigger a duck to appear on the tongue when the mouth is open. (A minimal sketch of the open-mouth trigger logic is below.)
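For anyone who wants to poke at the trigger logic by itself, here is a minimal sketch of the same idea in p5.js + ml5.js (not our actual build, which used the face build templates). The mouth counts as "open" when the gap between the inner-lip landmarks passes a threshold; the landmark indices (13/14, per the MediaPipe face mesh) and the 20-pixel threshold are assumptions.

```javascript
// Minimal p5.js + ml5.js sketch of the open-mouth trigger.
let video;
let mouthOpen = false;
let mouthPos;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  const facemesh = ml5.facemesh(video, () => console.log('model ready'));
  facemesh.on('predict', (results) => {
    if (results.length === 0) return;
    const mesh = results[0].scaledMesh;
    const upper = mesh[13]; // inner upper lip (MediaPipe index)
    const lower = mesh[14]; // inner lower lip
    // "Open" when the lip gap exceeds an assumed 20px threshold.
    mouthOpen = dist(upper[0], upper[1], lower[0], lower[1]) > 20;
    mouthPos = createVector((upper[0] + lower[0]) / 2,
                            (upper[1] + lower[1]) / 2);
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (mouthOpen && mouthPos) {
    fill(255, 220, 0); // stand-in for the duck: a yellow blob on the tongue
    ellipse(mouthPos.x, mouthPos.y, 50, 35);
  }
}
```

In the real project, the yellow blob would be replaced by the duck model, and the threshold would need tuning for face size and camera distance.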

 

isob-meh-zapra-justaline

Made in collaboration with Izzy and Eliza, this AR project sketch is about "never sleep alone".

To use the app, you first scan your bed, then pick your favorite partner model and position it on top of the bed (it could be a human character or a toy like a teddy bear). When you lie on the bed and hold up the phone toward the other side of the bed, you will see the character lying next to you.

meh-SituatedEye

Space Invaders controlled by hand posture - a collaboration with Sanjay

This is an exploration of training our own model for posture detection and applying it to a game. We were inspired by the rock-paper-scissors detection and wanted something that also detects gestures, but in a different game scenario. Our first step was to detect both the rotation of the hand and whether you pull the trigger. Using the regression template, we were able to create two axes that separately detect the two features. However, once we combined the detection with Space Invaders, we hit a very low frame rate because of the expensive calculation. Currently this is a very crude exploration, and we could be more creative with the application of the shooting gesture. A further development of this game could be optimizing the calculation time by training offline and saving and preloading the model, and the gesture could also be developed into other game scenarios. (A rough sketch of the two-axis idea follows the link below.)

Link to p5js: https://editor.p5js.org/svsalem/sketches/A8Ao1HcrT
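As a rough illustration of the two-axis setup (not our exact code), here is a sketch using ml5's featureExtractor regression, the same "regression template" mentioned above. Training two separate MobileNet regressors, one per axis, is my assumption about the wiring; the doubled feature extraction is exactly the kind of per-frame cost that hurt our frame rate, so predictions here are throttled to every tenth frame.

```javascript
// Two regression "axes": hand rotation -> ship x, trigger pull -> fire.
let video, rotationReg, triggerReg;
let shipX = 320;
let firing = false;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // One MobileNet feature extractor + regressor per axis (an assumption).
  rotationReg = ml5.featureExtractor('MobileNet').regression(video);
  triggerReg = ml5.featureExtractor('MobileNet').regression(video);
}

function keyPressed() {
  // Collect samples: 'a'/'d' mark the rotation extremes (0..1),
  // 'f'/'g' mark trigger pulled / released, 't' trains both heads.
  if (key === 'a') rotationReg.addImage(0);
  if (key === 'd') rotationReg.addImage(1);
  if (key === 'f') triggerReg.addImage(1);
  if (key === 'g') triggerReg.addImage(0);
  if (key === 't') {
    rotationReg.train(loss => { if (loss == null) console.log('rotation trained'); });
    triggerReg.train(loss => { if (loss == null) console.log('trigger trained'); });
  }
}

function draw() {
  image(video, 0, 0, width, height);
  // Throttle the expensive predictions to every 10th frame.
  if (frameCount % 10 === 0) {
    rotationReg.predict((err, r) => { if (r) shipX = r.value * width; });
    triggerReg.predict((err, r) => { if (r) firing = r.value > 0.5; });
  }
  // Stand-in for the Space Invaders ship.
  fill(firing ? 'red' : 'white');
  triangle(shipX - 15, height - 10, shipX + 15, height - 10, shipX, height - 40);
}
```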

 

meh – ML Toe Dipping

Edges 2 Cats

(cat paw!)

Using the cats model to draw a rat: the result actually looks more like a rat than a cat!

Facades

GANPaint Studio

 

This would be so cool if used in CGI or special effects to morph one place into another.

Art Breeders

General:

I am lowkey disgusted by teddy beetles, but I also kind of see little traces of cuteness. I wish I could teddify all the beetles in the world; the world would be so much better.

Portrait:

Infinite Patterns

GPT-2

Google AI Experiments

NSynth: Sound Maker by Yotam Mann

meh-telematic

Cursor Tag

Here is how it works:

This interactive game is tag (or maybe hide and seek?) played with cursors.

In this game you only see your own cursor, but you can 'sense' the other players: your pointer cursor changes direction based on the position of whoever is 'it' (if you are a normal player) or of the closest prey (if you are 'it'). The playground gets smaller every time someone gets tagged. (A minimal sketch of this pointing logic is below.)
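Here is a minimal sketch of that 'sensing' behavior. The networking is stripped out: in the real game the other players' positions would come from a server (e.g. over socket.io), so the positions below are faked to keep the sketch standalone.

```javascript
// Rotate your cursor to point at the nearest player (stand-in for
// "it" or the closest prey). Other players are faked locally.
let others = [];

function setup() {
  createCanvas(600, 600);
  noCursor(); // hide the real cursor; we draw our own arrow
  for (let i = 0; i < 3; i++) {
    others.push(createVector(random(width), random(height)));
  }
}

function draw() {
  background(240);
  // Find the closest other player.
  const target = others.reduce((a, b) =>
    dist(mouseX, mouseY, a.x, a.y) < dist(mouseX, mouseY, b.x, b.y) ? a : b);
  // Point the cursor arrow toward that target.
  const angle = atan2(target.y - mouseY, target.x - mouseX);
  push();
  translate(mouseX, mouseY);
  rotate(angle);
  fill(0);
  triangle(12, 0, -6, 6, -6, -6); // arrow-shaped cursor
  pop();
}
```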

This game is (literally) inspired by our classic childhood game of tag. (The fact that it is purely digital instead of physical is kind of a sad reminder of how digital our lives have become :/ ) I enjoy the nervous feeling of being in the game with other people: players are aware of the others' presence, but they are also alone on the canvas and don't know who the others are. (Isn't this also how life works?) Especially since this is a game that requires people to play together in the same room, this digital loneliness is emphasized even more.

If I had more time, I would definitely improve the tag system (because it is currently not working very well), and it would probably be more interesting if, every time a player got tagged, everyone got fatter (a bigger collision radius), so it becomes more challenging for the normal players to hide.

 

meh-CriticalInterface

Can we make the invisible visible? The more present interfaces are in our lives, the less we perceive them.

  • Change the typography of your browser to a gothic blackletter, or Dingbats (for a more advanced exercise).
  • How many times do you remember that you're shifting gears when driving? Say the gear number out loud every time you do it.
  • Tonight, write down on paper which interfaces you have used. Tomorrow, score how long you used each one. Do it every day this week.

If we bring the idea of "interface" into a larger context beyond digital platforms and see it as a touch-point where two systems interact, the same principle still applies. In design, we learn about the concepts of "ready to hand" and "present at hand". When an object/interface becomes invisible to us and stops presenting itself, it is "ready to hand": when a person looks through glasses, they are not aware of the glasses but focus on the things they see through them. When the glasses break, their existence suddenly becomes visible, and they become "present at hand". In design, "ready to hand" is usually considered desirable, since it reduces the effort users spend interpreting and learning the product. However, just as the propositions suggest, it is sometimes very important to pull ourselves out of this comfort zone and reflect on the interface we are using, to be aware of its possible consequences. A good example is the iPhone feature that tells you how much time you have spent on your phone today: with the interruption of the pop-up notification, you are suddenly aware of the interface and of your own actions.

meh-LookingOutwards04

Jacob Tonski, Balance from Within (2010-2013)

The beauty of the project is its simplicity and fragility. It is a very simple installation, just a sofa and a motor that balances the sofa against gravity, with no protection to stop other external forces from tipping it over. It is thus genuinely fragile in front of viewers, who might knock it into pieces. However, since the sofa is assembled from modular parts, it is also very easy to put back together when it falls apart, and the object suddenly becomes durable and resilient. It is poetic to see this juxtaposition of fragility and resilience; it almost feels like the piece is designed to be broken.

meh-techniques

  • Reach 2 - Line segments give a lot of flexibility for rigging and a more elegant motion effect;
  • WaveMaker - Useful whenever I want to create moving patterns;
  • p5.asciiart - A great way to stylize an image; I wonder if I can use it with OpenCV;
  • Tramontana - It would be interesting to develop a multi-platform game using this;
  • Magenta.js - Maybe create an instrument driven by interaction with a website?