gray-final

Augmentation: Relaxation Transportation (ART)

This project is a revision of my AR sculpture project, and it's a lot closer to my original idea. I wanted to make the city bus experience more like a long car trip through the country. I focused on the best seat on the bus: the right side of the very back row. I changed that seat into a leather chair to make it more inviting, and I added a robin gliding up and down that can be used as a guide for a breathing exercise.

Unfortunately, I had to demonstrate this project in a lab environment, under strictly controlled circumstances. In the real world, the AR window gets left behind when the bus starts moving.  🙁

This is the ad:

Technicals

The two main tech-techs (technical techniques) that I used were image targets and stencils. For the image target, I just used Connie's examples, which are great. I replaced the "Reference Image" and "Prefab To Generate" fields in the GenerateImageAnchor object.

I learned that the Transform of the prefab doesn't matter, because the image anchor script overrides it on its own, so I made a wrapper object inside the prefab to be able to move and rotate the whole thing relative to the image. I also learned that you can imagine the image's center at 0,0,0 in the prefab's world, lying flat and facing up. I went through a lot of builds getting the orientation of the prefab right.

I got the window to work using stencils, which I learned from this wonderful man, PiratesJustAR, in his "How to Unity AR Portal" series: https://www.youtube.com/channel/UCuqVdyk3I8wUtqOAzCoQJIA

I didn't use all of his stuff, because I didn't need to be able to go through the portal. I basically just used two code snippets:

This is the shader that I put on the window itself, which is a quad in the middle of that white frame. I just put the shader on a new material, and then put that material on the window.
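It's the standard stencil-mask setup from the tutorial. Roughly, that shader looks like this (a minimal sketch with a placeholder name; the exact version is in the video):

```
Shader "Custom/PortalStencilMask"
{
    SubShader
    {
        // Draw the window before regular geometry so it fills the stencil buffer first
        Tags { "RenderType"="Opaque" "Queue"="Geometry-1" }

        // The quad itself stays invisible: no color, no depth writes
        ColorMask 0
        ZWrite Off

        // Mark every pixel the window covers with a 1 in the stencil buffer
        Stencil
        {
            Ref 1
            Comp Always
            Pass Replace
        }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                // Never shows up because of ColorMask 0
                return fixed4(0, 0, 0, 0);
            }
            ENDCG
        }
    }
}
```

All it does is stamp a 1 into the stencil buffer wherever the quad is on screen, without drawing anything itself.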

This next bit of code goes into every shader used by objects that you want to be outside the window (in my project, that's everything but the chair). With it, those materials only render where the window's stencil mask is, so outside the window they're invisible. You have to go through all the materials being used, which is kind of a pain, figure out which shader each one uses, download the standard shaders (https://unity3d.com/get-unity/download/archive), and search through them for the shader your object is using (for me, that was all the shaders on the trees, ground, mountain, and bird).

Then you make a copy of that file and add Stencil { Ref 1 Comp Equal } inside the SubShader {} brackets.

You also need to rename the shader on its first line; I changed mine to "Custom/[Whatever The Name Of The Shader Is] Stencil".
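As a simplified example (a plain Lambert surface shader standing in for whatever Standard or Nature shader your object actually uses), the copied and renamed file ends up looking something like this:

```
Shader "Custom/Diffuse Stencil"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 200

        // Only render where the window quad already wrote a 1 into the stencil buffer
        Stencil
        {
            Ref 1
            Comp Equal
        }

        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;
        fixed4 _Color;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```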

Then switch the materials in your project over to the new stencil versions of their shaders. The whole process is kind of a hassle, but it works. The video by PiratesJustAR explains it all really well and in depth.

gray-arsculpture

I started this project thinking about buses and the chaotic, fleeting feeling they represent. They're a big part of city life in my experience, and I wanted to respond to that chaos and rush by replacing the windows with a view into a forest or something similar, where the outside just slowly moves by in a straight line. I was inspired by AR portals like this one:

I did some sketches to see what a bus would look like with the windows switched, imagining somebody cutting off all the chaotic input and replacing it with the calming view out the window (sorry it's really hard to see; I drew lightly).

So I watched a lot of videos on how to do this, mainly from this great guy called Pirates Just AR, which is a great name, by the way.

I had some trouble extending his tutorials to a nice nature landscape though, and I never even got to the point of adding motion, which I'm sure would have been hard too.

I thought I could instead put a hole where the "Emergency Exit" on the roof of the bus is, with a tree actually inside the bus breaking out through the hole. I also didn't want to go out and look for a bus at that point in the night, and I decided an elevator was a similar enough idea, so I changed the image target to the roof of the elevator and added some calming music when the hole and tree appear. I think I can do a lot more with this concept, especially by adding elevator dings and bird sounds; I think that'd be a good contrast. It would also be nice to have flying birds, and to convey the motion of the elevator somehow.

Here's my imagined use of this sculpture:

gray-SituatedEye

Waving Robot

I really wanted this arm to be sticking out the side of my window:

But I first tried to use the feature classifier to tell whether people were walking towards or away, and it really did not work at all. It could barely even tell whether people were there. I spent a lot of time collecting a large dataset before really testing whether the approach would work, so most of that time was wasted.

So then I switched to a KNN classifier on top of PoseNet, to try to classify the poses of people as they walked in. I think that might have worked with a bigger dataset, but I tried with around 150 examples for each category and it just wasn't reliable, so I downsized again to just recognizing me in my room. Hopefully I can extend it to outside, because I think it would be cool to have a robot that waves at people. I think I'll just hack it together rather than using a KNN classifier, but for this project I had to train a model.
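The general pattern looks something like this (a rough p5.js + ml5.js sketch of the PoseNet-features-into-KNN idea, with made-up key bindings; it's not the exact example I built on):

```javascript
// Rough sketch: feed PoseNet keypoints into ml5's KNN classifier
// (assumes p5.js and ml5.js are loaded on the page)
let video, poseNet, knn;
let currentPose;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', poses => {
    if (poses.length > 0) currentPose = poses[0].pose;
  });

  knn = ml5.KNNClassifier();
}

// Turn a pose into a flat list of numbers the KNN can compare
function poseToFeatures(pose) {
  return pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
}

function keyPressed() {
  if (!currentPose) return;
  // 't' = add a "towards" example, 'a' = add an "away" example,
  // 'c' = classify the current pose
  if (key === 't') knn.addExample(poseToFeatures(currentPose), 'towards');
  if (key === 'a') knn.addExample(poseToFeatures(currentPose), 'away');
  if (key === 'c') {
    knn.classify(poseToFeatures(currentPose), (err, result) => {
      if (!err) console.log(result.label);
    });
  }
}

function draw() {
  image(video, 0, 0);
}
```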

I actually ended up using an example that didn't use p5.js, because the example I had been using wasn't registering poses properly when I took it outside. Since I had written my program around that example, I just kept using it indoors.

I was originally going to use a Raspberry Pi or an Arduino in conjunction with the browser somehow. I tried a lot of different ways of doing that and got kind of close, but it was just really complicated, so I ended up with one of the worst hacks I've ever done: the browser draws either a black or white square based on whether someone is waving, and a light sensor on the Arduino reads that square off the screen and makes the arm wave when it's white. But if it works, it works, I guess.
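The Arduino side is roughly this (a sketch with made-up pin numbers and a threshold you'd have to tune by hand):

```cpp
// Rough Arduino sketch: read a photoresistor taped over the square on the
// screen and wave a servo arm when the browser draws the white square.
#include <Servo.h>

Servo arm;
const int sensorPin = A0;    // photoresistor in a voltage divider
const int threshold = 600;   // "white square" level; tune for your screen and room

void setup() {
  arm.attach(9);             // servo signal wire on pin 9
}

void loop() {
  int light = analogRead(sensorPin);  // 0-1023
  if (light > threshold) {
    // White square detected: do one wave
    arm.write(150);
    delay(300);
    arm.write(30);
    delay(300);
  } else {
    arm.write(30);           // rest position
    delay(50);
  }
}
```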

PS. Sorry for the delay on this post.

gray-MachineLearning

Pix2Pix:

I tried a bunch of different things with the cat one. I like the eye of Sauron one. The emojis are cool too. I found that you really can learn the language of it and totally use it to draw, even if it doesn't always do what you expect.

With the facades one, I was most interested in how the rest of the building changed when you changed one element. The rest of the house got a lot fancier when you added windowsills and shutters and trim to the windows; the first one, with plain windows, was much grayer and even dirty-looking.

In GANBreeder, I tried to optimize for mouth size. I learned that you lose out on other factors like realism if you just maximize one trait. That was interesting. I tried to find a balance.

I also spent an embarrassingly long time trying to make myself, but I think it's interesting to see if the algorithm can make anyone. I wonder if it's possible to make any arbitrary person given enough iterations. Or any arbitrary image in general. My hunch would be definitely not, but maybe I'm wrong.

GANPaint

Before:

After:

Thoughts: needs more dome.

Text To Transformer

Everybody wants to be happy. But what most people don't know is that that's the hardest thing to get to be happy."

"I'm just doing my best out here. I'm just doing what I can do. ... It's been hard."

"It's just really rough to play here." -- Drew Bledsoe

So far, Bledsoe says, the NBA hasn't impacted him. He's never made more than $4 million playing in the NBA. But it's the job as a starting point guard that's been his biggest challenge.

"I just came to the NBA because I'm hungry," Bledsoe said. "I didn't make the NBA because I worked hard. I made the NBA because I play hard. At that position, playing with these guys and being together, I just tried to play my hardest and do whatever I can to keep this thing going."

Bledsoe says his teammates don't seem to mind his absence, which makes it easier for him to be a leader. Last year, the Celtics lost Brandon Bass and Al Horford to...

I think it's interesting that it started talking about the NBA even though that wasn't in the prompt at all.

It's clearly time for a revolution.  As a young man, I knew this for sure, or so I thought. But once we got there, the revolution never happened.
There's a very long way to go, but there's light at the end of the tunnel. And that's my point.
The more you talk about it in any detail, the more you realize that we're not fighting a single war -- we're fighting a series of wars, and they're all part of a much, much larger, much grander scheme, designed to do to me what I think they do to anyone who supports me.
I am fighting to preserve the culture of this country, the history and the legacy of America. I am fighting to ensure the integrity of the family, the home, the community, the job, and all the other pillars of our democracy.
I am fighting to preserve the soul and promise of this amazing country, the values that are the foundation upon which it has been built, and the freedoms that have been the bedrock upon which all Americans of every race, every political bent...

Inspiring.

Google AI Experiments

I messed around with the Infinite Drum Machine, and I think the visualization is really cool. It actually makes it super easy to find similar-sounding samples, but each one is still unique and interesting. Even without the machine learning it's a cool idea, but I also really like the visualization of the algorithm's grouping.

 

gray-Body

I made an annoying robot that withholds a cute picture from you for as long as it can. It lives in a terminal.

I wanted it to be a little unclear how much this robot knows. At one point it asks you to do something and then doesn't even check whether you've done it; it just moves on regardless. But I also just wanted to see how much you can make somebody work if you slowly add more and more tasks. It's like the sunk-cost fallacy: they've come this far, so they can't stop now. It's just a prototype, and I'd like to polish it up and make it longer, but I think it captures the idea.

Here are some videos of people playing it:

Here's the link to the project: https://editor.p5js.org/gray/sketches/XQDS1U6gE

gray-CriticalInterface

Critical Interface Manifesto, Tenet 11. The standard calls for a universal subject and generates processes of homogenization, but reduces the complexity and diversity. What is not standard?

I thought this one was especially interesting because one of its propositions actually linked to my Looking Outwards subject: "Surf the web closing your eyes." http://www.eyewriter.org/. Which is funny, because the link is actually about surfing the web using only your eyes.

I take the tenet to be focused mainly on diversity of interfaces, and mostly on diversity of accessibility options for people who can't use or have difficulty using the "standard" visual-based, keyboard and mouse interface. This is really important, and that's also why I found Eyewriter to be so cool. But the tenet isn't just about accessibility for people with disabilities. It's also about a diversity of ways of presenting information, regardless of the sensory aspect of the interface. For example, if you consider the education system to be an interface in some sense, it's very limited to the "standard" way of learning. Someone might have no disabilities and respond well to visual interfaces, but they might not respond well to the structure of the "interface;" that is, they might not learn much from lectures and tests and homework.

Diversity in all things is necessary to allow the most possible people to enjoy, benefit from, and make use of that thing. I agree with this tenet that computer interfaces seem to lack that diversity, although I think other interfaces, like books (audiobooks, Braille, horror, nonfiction, picture books, etc.), are doing much better. Maybe that's because the information behind other interfaces is simpler, or because computer interfaces haven't been around as long.

gray-LookingOutwards04

"Art is a tool of empowerment and social change, and I consider myself blessed to be able to create and use my work to promote health reform, bring awareness about ALS and help others."

- Tempt One


The Eyewriter uses eye-tracking hardware and free software to let the user draw with their eyes. It was developed by members of FAT, OpenFrameworks, the Graffiti Research Lab, and The Ebeling Group for TEMPT1, a graffiti artist from LA who was diagnosed with ALS in 2003 and has since been paralyzed except for his eyes. Since then, TEMPT1 has been able to make his art again and raise awareness for ALS. The Eyewriter was made in 2010 and grew into a larger project, Art By Eyes, which was funded through Kickstarter to support TEMPT1's art and awareness campaign.

The project is really well-documented, with a main website as well as a photos page that gives a lot of insight about the project. The documentation and the Art By Eyes campaign are the most important parts of the project in my opinion. Rather than just create a product, the team applied the product in a lot of different ways and imagined what it could do beyond just what they wanted to use it for. To me, this has a lot of potential for an eye-based operating system, which doesn't seem (based on a really quick search) to be that fleshed out yet, although it does seem like Samsung was working on some kind of "eye mouse" in 2014 (https://www.cnet.com/news/samsungs-eyecan-lets-your-eyes-control-your-computer/). I think that would be really cool, and they seem to have done a lot of the work already. Media art is very important in exposing different ways to use existing technology, and I think this is a great example of that. It seems like the tech industry as a whole is very focused on "forward" progress rather than progress in other directions, which is what Alan Warburton was talking about, and that can often lead to a lot of lost potential.

gray-Techniques

Looking through the p5js reference, the functions for drawing shapes caught my eye, because I haven't made my own polygons before; I've always just used quad or ellipse or whatever. So the beginShape(), vertex(), and endShape() functions are something I'll definitely plan to use more. I also looked at beginContour(), which is used for negative spaces within the shape you're drawing, and which would've been nice to know about when I spent some time in my clock project trying to draw a ring. Here's the beginContour() reference.
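For example, the ring I wanted for the clock is just an outer circle of vertices plus a beginContour() hole wound in the opposite direction. Here's a quick hypothetical sketch:

```javascript
// Drawing a ring as one shape: outer circle in one winding direction,
// inner "hole" via beginContour() wound the opposite way
function setup() {
  createCanvas(200, 200);
  noLoop();
}

function draw() {
  background(240);
  translate(width / 2, height / 2);
  fill(50);
  noStroke();

  beginShape();
  // outer edge
  for (let a = 0; a < TWO_PI; a += 0.1) {
    vertex(80 * cos(a), 80 * sin(a));
  }
  // inner edge (the hole) has to wind in the opposite direction
  beginContour();
  for (let a = TWO_PI; a > 0; a -= 0.1) {
    vertex(50 * cos(a), 50 * sin(a));
  }
  endContour();
  endShape(CLOSE);
}
```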

Out of all the p5js libraries, I was most interested in p5bots, which allows you to incorporate input from microcontrollers like Arduinos into your sketch. I've never used Arduino before, but I'm interested in trying, and once I learn more about it, it'll be cool to be able to use it with p5js.

This online socket.io chatroom is really cool, and chatrooms have a lot of potential. I was reading an article about someone who found an API for Tinder and would catfish guys, then have them actually message each other instead of the girl. Here's the article I read. Manipulating people's chats could be very powerful if done carefully.

gray-Clock

 

This is my clock, at 1x, 10x, and 100x speed.

p5.js editor link: https://editor.p5js.org/gray/sketches/wH8Vz6rR8

I'm pretty happy with my final product, although it's not very practical. I spent the last few hours of working on it trying to add practicality, and I was never happy with how it looked. In the end I had to balance aesthetics with practicality, and I leaned more toward aesthetics. I think it's enough to show a new way of measuring time and have people ponder that, without actually making a very useful clock. That said, it does encode the time. The hours (0-24) are mapped to red (0-255), the minutes to green, and the seconds to blue. The background, and the region where all three circles overlap, is that combined color. The size of each circle also corresponds directly to the hours, minutes, and seconds, as does each circle's individual color. The background also displays the month, year, and day in left-to-right order. I put the year in the middle because the middle line is often mostly obscured, and I figured most people know what the year is anyway.
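Stripped of all the design, the core mapping is basically this (a bare-bones sketch of just the encoding, not the actual clock code):

```javascript
// Bare-bones version of the time-to-color mapping:
// hours -> red, minutes -> green, seconds -> blue
function setup() {
  createCanvas(400, 400);
  noStroke();
}

function draw() {
  const r = map(hour(), 0, 24, 0, 255);
  const g = map(minute(), 0, 60, 0, 255);
  const b = map(second(), 0, 60, 0, 255);

  // the background is the combined color
  background(r, g, b);

  // each channel also gets its own circle, with size and color
  // driven by the same value
  fill(r, 0, 0);
  ellipse(width * 0.35, height * 0.4, 50 + r);
  fill(0, g, 0);
  ellipse(width * 0.65, height * 0.4, 50 + g);
  fill(0, 0, b);
  ellipse(width * 0.5, height * 0.65, 50 + b);
}
```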

Initially, I just wanted to swap time with some other dimension and see how it would look. I thought I would encode time as the width or height of the image in a slit-scan style, but I've already done a couple of projects like that and didn't want to repeat myself, so I decided to use color as the time axis somehow. There are definitely a lot of ways to do this, but I went with the simplest one I could think of: hours for red, minutes for green, and seconds for blue. A lot of people have done this before, such as Jacopo Colo in his Color Clock: http://www.jacopocolo.com/hexclock

The hex clock doesn't cover all possible colors, but other than that it's basically the same idea, so I wanted to make up for that by really thinking a lot about the design. I wanted to show each color individually so that you could at least get a sense for the time without needing the actual numbers, so I first thought of this sketch:

Or something similar, where there are bars in the background to show the seconds, minutes, and hours. But I thought it was kind of boring, so I came up with the three circles. I wanted it to look kind of like a color study. This was the second iteration:

I was pretty inspired by these Bauhaus color studies I had seen before:

I liked the complementary colors in the background, but they didn't really serve a purpose for the clock, which bothered me. It also felt a little cluttered, so I changed it to the third version, which I then made a lot of tiny changes to.

I kind of got really caught up in the details. The only thing I wish is that the clock was more functional, which I might work on in the future. Also looking at the top pictures above, I kind of think it looked better when the black bars were separated. I'll probably keep adjusting it.

gray-LookingOutwards3

"Knowledge Games"


This is a game called Quantum Moves, created by ScienceAtHome, where the goal is to move a ripply liquid into a highlighted area in a certain amount of time. What makes it interesting is that this ripply liquid represents the wave function of an atom, and your solution to moving this atom is recorded and analyzed, along with hundreds of other players' solutions, in order to create better paths to move quantum particles with lasers. Website: https://www.scienceathome.org/games/quantum-moves/


This is Foldit, created in 2008, a competitive online game in which players try to fold proteins as well as possible. According to Wikipedia, "In 2011, Foldit players helped decipher the crystal structure of a retroviral protease from Mason-Pfizer monkey virus (M-PMV), a monkey virus which causes HIV/AIDS-like symptoms, a scientific problem that had been unsolved for 15 years. While the puzzle was available for three weeks, players produced a 3D model of the enzyme in only ten days that is accurate enough for molecular replacement." Check out their website, which has more info and credits to their developers (nice!).

Karen Schrier is a professor of games and interactive media at Marist College, and she's written a book about these kinds of games. I've only read the introduction of her book, but she has come up with a classification for these games that I agree with. She essentially describes "knowledge games" as games that offer new perspectives on the world, and allow us to understand a topic or solve a problem through the structure of a game. These games are often used for crowdsourced data, like the two I talked about above, but personally I think they can also be helpful for the individual to gain an intuitive understanding of a topic that is not usually presented in an intuitive way. This interactivity really fascinates me. Encoding a complex problem or idea in a simple game is a really cool idea with a lot of practical applications. It's also an interesting example of humans and computers working together; by harnessing human intuition and using that as a guideline to develop a solution, problems that are difficult for both a computer and a human can be solved by a combination of the two.