This is my Looking Glass project and the first game that I made in Unity by myself. This project was an excuse to work on the Looking Glass and get to know Unity better.
I am proud to say that this accomplished that.
You control a moving cylinder.
You are supposed to move your head around to find niches in the environment.
The niches contain triggers that unblock areas blocked by striped cubes.
If I had more time I would have made a couple more levels and puzzle types (multiple niche puzzles).
You move left and right using the arrow keys. You move back using the square, and move forward using the circle.
This is an AR app that lets you place a wooden toy train set on flat vertical surfaces.
I wanted to fulfill a loooong-forgotten childhood aspiration. When I was younger, I thought that having a train set on a vertical surface would be cool. Alas, five-year-old me did not have enough duct tape and glue to make it happen. I figured I could get back to it sometime down the line, once I had more experience making toy train sets.
Unfortunately, in the present day, I don't have access to a wooden train set, so I had to make a virtual train set, to scale, complete with one of my favorite toy trains: The Polar Express.
Adam and I came up with an idea for a virtual forum where members can only access it in a certain location and through an AR app. Special thanks to Joshua Yeom for recording the over-the-shoulder shots.
Well, it depends on the weather! If it's raining in Pittsburgh, it's half empty. If there's nice weather, it's half full. Basically, depending on the weather, the reader will have either a pessimistic or an optimistic outlook on the world.
The machine learning model classifies the glass into three states: Full, Half, and Empty. Depending on the state, it queries the OpenWeather API for the current weather. If the weather is bad, the system says that the glass is half empty; otherwise, it says that it is half full.
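The weather-dependent decision described above can be sketched as a small, plain-JavaScript function. The names here (`describeGlass`, `glassState`, `isBadWeather`) are my own illustrations, not the project's actual code:

```javascript
// Sketch of the half-empty/half-full logic: only the "Half" state is
// ambiguous, so only it consults the weather. The state strings match
// the three labels mentioned above.
function describeGlass(glassState, isBadWeather) {
  // Full and Empty are unambiguous and pass through unchanged.
  if (glassState !== "Half") return glassState;
  // Bad weather -> pessimist's glass, nice weather -> optimist's.
  return isBadWeather ? "Half Empty" : "Half Full";
}

console.log(describeGlass("Half", true));  // "Half Empty"
console.log(describeGlass("Half", false)); // "Half Full"
console.log(describeGlass("Full", true));  // "Full"
```

In the real sketch, `isBadWeather` would come from the OpenWeather API response rather than being passed in directly.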
I suggest training the model with a dark liquid, because it is easier for the computer to differentiate between the bottle and the background.
Originally, I wanted to do face tracking, but I realized that having to retrain ML5 over and over again would prove too repetitive. Instead, I decided to make something that would tell me how full a bottle of water is.
When I was working on the accuracy of my project, I modified Professor Levin's variant of the ml5 p5.js classifier so it would tell me what items it saw. For example, if I trained the model using three different labels, it would list all the labels in order of confidence: the most confident label first, then the next most confident, and so on.
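The ranking described above boils down to sorting the classifier's results by confidence. Here is a minimal sketch in plain JavaScript; the `{label, confidence}` shape mirrors what ml5's `classify()` callback returns, but the sample data and the `rankLabels` helper are made up for illustration:

```javascript
// Sort classifier results so the most confident label comes first.
function rankLabels(results) {
  // Sort a copy, highest confidence first, leaving the input untouched.
  return [...results].sort((a, b) => b.confidence - a.confidence);
}

// Made-up results for three trained labels.
const results = [
  { label: "Empty", confidence: 0.12 },
  { label: "Full",  confidence: 0.71 },
  { label: "Half",  confidence: 0.17 },
];

console.log(rankLabels(results).map(r => r.label)); // ["Full", "Half", "Empty"]
```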
At one point, I was trying to have a trained model load at runtime because I didn't want to constantly retrain the classifier. Unfortunately, I couldn't get it to work in time.
My project is an asynchronous one-to-many experience. Users can interact with each other only if they encounter other people at the watercooler. Each player is the same as everyone else: no one has any special roles, and everyone is in constant danger of being discharged. Participants can write their own nicknames, which are used on the water cooler message board. This project allows for remote communication; people can use this app from anywhere in the world.
My project was designed to emulate social distractions at the workplace by giving users a long time between tasks so they have time to talk. If someone gets distracted and misses a couple of deadlines, they get fired! Normally, the tab the game runs in would close after the user gets fired. Problem is, Glitch doesn't let me do that.
Some notes on graphics:
The 3D graphics are pre-rendered because real-time rendering of 3D objects is beyond the scope of this project.
Imagine that you are working in an office building.
You are given a meaningless task with a long deadline.
You get bored, so you go to the water cooler to fill up your bottle. Your friends are there and you guys start talking.
But oh no! Look at the time, you should've submitted that project a little while ago...
While your friends shuffle away, your angry boss comes up to you. "You're fired!" he screams.
Type in your name.
Press buttons on the computer display.
If you press the wrong buttons, you get strikes.
Three strikes and you're fired.
You need to keep an eye on your water level.
If you run out, you can collapse from dehydration!
Fill up your bottle at the cooler by clicking the water cooler icon.
Talk to your friends while your bottle fills up.
I didn't embed my project because it uses window.alert(), so please press the button below.
1: The interface is a device designed and used to facilitate the relationship between systems.
An interface is a messenger.
Say we want to connect a button and a lamp. When pressed, the button must send an electric signal (a message) to the lamp to turn it on. This electric signal is an interface, specially designed to work between objects that need to be remotely enabled.
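The button-and-lamp idea can be modeled in a few lines of JavaScript, with the "signal" reduced to a function call. All names here are illustrative, not from any real library:

```javascript
// Toy model of the button-to-lamp interface: the press() method is the
// messenger that carries the "turn on" signal between the two objects.
const lamp = {
  on: false,
  receiveSignal() { this.on = true; }, // the lamp reacts to the message
};

const button = {
  // The interface: pressing the button delivers the signal to a target.
  press(target) { target.receiveSignal(); },
};

button.press(lamp);
console.log(lamp.on); // true
```

Note that the button knows nothing about lamps; it only knows how to send a signal to anything that can receive one, which is exactly what makes the signal an interface.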
To anyone who has worked with computer interfaces before, this seems obvious. Not so to the casual observer! In software, we use interfaces to connect one program (or different parts of a program) to another. In physical computing, interfaces are very important, which is why I chose this as my first tenet.
In the weather example for p5.js, the function gotWeather() is an interface designed to retrieve information from the weather API.
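The callback-as-interface idea looks roughly like this: in the p5.js example, `loadJSON(url, gotWeather)` hands the parsed API response to `gotWeather()`, which picks out the fields the sketch cares about. The response shape below follows OpenWeather's current-weather format, but the sample data (and the exact fields extracted) are my own illustration:

```javascript
// gotWeather() is the interface between the raw API response and the
// sketch: it translates the response into the values the sketch uses.
function gotWeather(weather) {
  const description = weather.weather[0].main; // e.g. "Rain"
  const temp = weather.main.temp;              // temperature value
  return { description, temp };
}

// In the p5.js sketch this would be wired up as:
//   loadJSON(url, gotWeather);
// Here we call it directly with a mock response to show the flow.
const mockResponse = {
  weather: [{ main: "Rain" }],
  main: { temp: 281.4 },
};
console.log(gotWeather(mockResponse)); // { description: "Rain", temp: 281.4 }
```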
In my Looking Outwards, I wrote about claytronics. The team wrote a software interface between the 3D modeling program and the catoms to shape the claytronics ensemble.
Claytronics is a physical computing concept that combines nano-scale robotics and computer science. It is programmable matter being developed and researched at Carnegie Mellon University by Professors Seth Goldstein and Todd C. Mowry, along with graduate and undergraduate students, in collaboration with Intel Labs.
Claytronics is a collection of solid-state components called catoms, which attract each other to create objects. Modern catoms (as of 2009) attract each other using electromagnets. Magnets, however, don't work well at the microscopic scale, so the team is exploring other possibilities such as electrostatic attraction.
The research team focuses on two main projects:
Creating basic catoms.
"To enable this, we adopt a design principle which we term the ensemble axiom: a robot should include only enough functionality to contribute to the desired functionality of the ensemble."
Designing and writing 3D software to manipulate catoms.
"Millions of sub-millimeter robot modules each able to emit variable color and intensity light will enable dynamic physical rendering systems, in which a robot ensemble can simulate arbitrary 3D scenes and models."
In the future, artists using claytronics would be able to create plays and movies using simulated people. A popular idea is a 3D fax machine, which scans an input object and "prints" the same object out of programmable matter on the other end. Robots and other items that break could fix themselves by simply reforming into their programmed shape (think of the T-1000 from Terminator 2).
"Such systems could have many applications, such as telepresence, human-computer interface, and entertainment."
I wasn't aware that there was an example showing an implementation of a weather API in p5.js. The API implementation I found was very difficult to work with, and I didn't know that there was an easier way right under my nose.