Color Me Surprised is an experience designed by lsh & tli in which users describe the world around them by tapping the screen to sample the color at that location.

This project explores the ways in which we discover the world around us, and how information about our surroundings is stored digitally. By tapping the screen, one begins to catalog, or tag, one's surroundings, eventually building up a description of objects by color. Very early on, we knew we wanted to use a massive color-name API to express the world around us, but decisions about representation and interaction were what drove this project forward. We originally attempted to use OpenCV to reduce the image and constantly describe everything the camera could see, but the performance was awful even on a laptop, let alone on an iPhone. Another decision was whether or not to include the world being tagged; that decision rests on whether the interaction is an emergent experience or a descriptive tool. Lastly, we had a series of struggles with Unity and Xcode, which are still being ironed out. I would say this project is successful in that it creates a novel experience.
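The core mechanic, sampling a pixel and mapping it to the nearest named color, can be sketched roughly as follows. A tiny hard-coded palette stands in for the large color-name API, and all names here are illustrative, not the project's actual code:

```javascript
// Toy nearest-color naming: find the palette entry with the smallest
// squared RGB distance to the sampled pixel.
function nameColor([r, g, b], palette) {
  let best = null;
  let bestD = Infinity;
  for (const { name, rgb } of palette) {
    const d = (r - rgb[0]) ** 2 + (g - rgb[1]) ** 2 + (b - rgb[2]) ** 2;
    if (d < bestD) {
      bestD = d;
      best = name;
    }
  }
  return best;
}

// A real color-name API would supply thousands of entries; three suffice here.
const palette = [
  { name: 'red', rgb: [255, 0, 0] },
  { name: 'green', rgb: [0, 128, 0] },
  { name: 'blue', rgb: [0, 0, 255] },
];
```

In the actual experience, the sampled RGB would come from the camera pixel under the user's tap rather than a hard-coded value.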


Hopscotch, but slightly more dangerous.

Given that we could store vertical data, we decided to create a game of hopscotch that descends a staircase. Aside from balance being an issue, looking through the phone to see the tiles adds its own challenge. This would probably violate playground regulations.


Given that I had prior experience with Unity, I decided to use this assignment as a reason to watch a tutorial on ShaderGraph. While working this summer, the company I was at expressed incredible interest in the tool, but it did not fit any of our project pipelines. I followed a getting-started ShaderGraph tutorial, then moved on to a vertex displacement tutorial. My initial impressions: the graph seems useful, and I like the realtime feedback, but it can't yet hold a candle to, for example, Substance Designer or even Houdini. I also noticed a few odd bugs, but I'll chalk those up to ShaderGraph still being new and my laptop running OS X Catalina.

Inigo Quilez once said "Well, first let us consider a spherical cow..."


Last year, a visiting guest in the studio mentioned that they consider many of our interactions with smart assistants quite rude, and that these devices reinforce an attitude of barking commands without giving thanks. I think back to this conversation every so often and ponder to what extent society anthropomorphizes technology. In this project I decided to flip the usual power dynamic of human and computer. The artificial intelligence generally serves our commands and does nothing (other than ping home and record data) when not addressed. Simon Says felt like a fun way to explore this relationship by having the computer give the human commands, and chide us when we are wrong. I also deliberately made the gap between commands short, as a way to consider how promptly we expect a response from technology. I would say this project is fun to play: my housemate giggled as the computer told him he was doing the wrong motions. However, one may not consider the conceptual meaning of the dynamic during the game. Another issue I ran into during development was that when trained on more than three poses, the network's accuracy rapidly declined. In the end, I switched to training a KNN classifier on PoseNet data, which worked significantly better. There are still a few tiny glitches, but the basic app works.

New debug view with fewer poses
Old debug view with far too many params
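The final approach, nearest-neighbor voting over PoseNet keypoints, can be sketched like this. The shapes and labels are made up for illustration (the real project fed PoseNet's 17 keypoints per frame, flattened to 34 numbers, into the classifier):

```javascript
// Tiny k-nearest-neighbors classifier: label a query pose by majority
// vote among its k closest training poses (Euclidean distance over
// flattened keypoint vectors).
function knnClassify(trainX, trainY, query, k = 3) {
  const dist = (a, b) =>
    Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));

  // Pair each training example with its distance to the query, then
  // keep the k nearest.
  const nearest = trainX
    .map((x, i) => ({ d: dist(x, query), label: trainY[i] }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k);

  // Majority vote among the nearest labels.
  const votes = {};
  for (const { label } of nearest) votes[label] = (votes[label] || 0) + 1;
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}
```

A library such as ml5.js provides a ready-made KNNClassifier for exactly this pairing with PoseNet, which is likely the practical route in a browser sketch.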



The edges2cats filter was interesting to play with in regard to how it dealt with line and scale. Large open spaces were filled with texture, while finer, smaller details led to a blurry mess.

Oddly enough, the facades model felt more ambiguous in its output, maybe due to the Mondrian-style partitioning its UI requires.

GAN Paint

source: Carnegie Mellon University College of Fine Arts Facebook

When working with the default image, the effects were immediate no matter what tool I used. When I uploaded my own image, the effects became much harder to tease out.


Artbreeder is always a lot of fun, given the sheer amount of content one can generate within the tool. I do occasionally consider it through the following lens though:

Infinite Patterns


Do you know who I am?

You see, this is where things get a little confusing. I'm a lot more than I let on as a person.

Let me explain.

First, I'm not a monster.

Second - and this is important because it has implications for my next step - I'm the kind of person who is able to change.

Change is not the same thing as changing a person. A person can make a life-long change in such a short time that the change is invisible - it's not part of the world they know or understand or know how to interact or have compassion for.

Changing a person is hard; you're making the change for the sake of yourself and others.

And that's hard.

The best way I've found to change was in what I didn't know I was capable of.

It wasn't that I wanted to change, it was that I knew I couldn't change, and so I made a choice - a good, thoughtful choice - to not learn the lessons I could

What's love got to do with it? Well, like it's a question that's been answered many times over, it's an all-encompassing question that goes a number of ways and there is no right or wrong answer.

But for those who want to know, the answer starts with a simple premise:

"You should love everyone you meet."

And why would you want to say something so obvious and easy to say? Well, perhaps you need some extra help when it comes to saying it or you fear other people don't really know what they're doing when it comes to the most basic human needs... or maybe you can't quite make the case that it's not a simple idea, even though you know it to be so. But the good news is that once you say it, it's there for everyone to understand and respect, so there's really no reason to fear or deny saying it.

I'd argue that this simple premise applies to any relationship for both men and women so even if you're not ready for it right away, there can be no better idea

I usually don't think of AI as being used for a stylistic decision like font making, so I was curious to see what pairings the system suggested.


Experience footage

The concept for this project was a multiplayer game of jumprope within the browser. The idea was that each player has their own role (jumper or swinger).

In its original concept, the goal was for the experience to be a one-to-two-person connection, where two people swing the rope while a third jumps. Due to the difficulty of getting the physics to work across browsers, the scope has been limited to one-to-one connections in synchrony. The complementary roles of jumper and swinger would need to be fleshed out if I were to continue working on this sketch. There is a certain power dynamic between the one swinging the rope and the one jumping. Participants are anonymous, but anonymity does not have a significant impact on the experience. Location also matters less, though due to timing, being in the same room probably makes the experience easier to manage. The project is ultimately trying to explore a novel interaction over wireless connections in the browser. Unfortunately, the project is currently unfinished, as the logic turned out to be difficult to crack. To work around the physics issues, the rope swinger uses real physics, while the jumper just sees points mirroring the swinger's rope. If I were to take this further, I would implement a reward/interaction system for the jump itself.
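The workaround described above, where one client owns the simulation and the other simply mirrors its state, might look roughly like this. The callback channel stands in for whatever realtime transport the sketch uses (e.g. a websocket), and everything here is a hypothetical sketch, not the project's code:

```javascript
// The swinger is authoritative: it runs the (here, trivially faked)
// physics and broadcasts sampled rope points every step.
function makeSwinger(send) {
  let angle = 0;
  return function step(dt) {
    angle += 2 * Math.PI * dt; // constant-speed swing stands in for real physics
    const points = [];
    for (let t = 0; t <= 1; t += 0.25) {
      // Sample a handful of points along the rope's arc.
      points.push({ x: t, y: Math.sin(angle) * t * (1 - t) * 4 });
    }
    send({ points }); // broadcast authoritative state to the peer
    return points;
  };
}

// The jumper runs no physics at all; it just mirrors whatever arrives.
function makeJumper() {
  let mirrored = [];
  return {
    onMessage: (msg) => { mirrored = msg.points; },
    points: () => mirrored,
  };
}
```

Keeping a single authoritative simulation sidesteps the cross-browser physics divergence at the cost of some latency on the mirroring side.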


The interface responds to and embodies the economic logic of the system in which you enroll. It is a political device.

1. Do not use any free (as in free beer) service for a while. Who does that service really serve?

2. Use your search box to make statements instead of searching for something; they will be suggested to other users, e.g. "I am fed up with google tracking me".

3. Remember that Ronald Reagan once said, when he visited Nintendo Headquarters: "We'll have soldiers in every bedroom."

An unfortunate truth of any software or service is that it becomes subject to economic forces. We tend to think of open source software as a digital commons, offering "natural," infinite benefit to all users. In truth, open source software must be developed, usually in the developers' spare time. If a piece of software is being heavily developed, one must wonder how it sustains such activity (is a company funding its development? If so, why?). There is nothing inherently malicious about free technology, but one should always consider the motives behind its funding and development, or ask what product is being sold.


The tabletop projection interaction is nearly a trope at this point, but Kollision's Tangible 3D Tabletop adds some interactions that I found new and useful. The idea of treating a plane as a viewport for a map is a fairly natural interaction. I think the perpendicular nature of the plane could still use some tuning, but the execution is pretty incredible. I can imagine this interaction being used in reverse: a block representing a section line moving across an image of a brain to show MRI slices, for instance. The attractive part of this interaction is that it breaks the keyboard-mouse-screen paradigm. It also manages to stay in reality, unlike VR. I think this technology could have strong use in educational environments.

Link to project.


While browsing the p5.js docs, I found out that one can instantiate a p5 instance directly, so a program can contain multiple object-oriented p5 sketches, which is pretty cool, especially with respect to scoping.
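A minimal instance-mode sketch looks something like this. The sketch function receives its own p5 object, so state stays scoped to that instance instead of leaking into the global namespace (the guard at the end just avoids errors when the p5 library isn't loaded):

```javascript
// Instance mode: the sketch closes over its own p5 object ("p"),
// so several independent sketches can coexist on one page.
const sketch = (p) => {
  let x = 0; // state private to this instance

  p.setup = () => {
    p.createCanvas(200, 200);
  };

  p.draw = () => {
    p.background(220);
    p.circle(x % p.width, p.height / 2, 20);
    x += 1;
  };
};

// Attach the sketch only when the p5 library is actually loaded
// (e.g. in a browser with p5.js included).
if (typeof p5 !== 'undefined') {
  new p5(sketch);
}
```

Passing a second argument to `new p5(sketch, node)` mounts the canvas inside a specific DOM element, which is handy when running several instances side by side.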

p5.gui looks like a promising library for prototyping sketches or designs for those used to openFrameworks and Cinder.

The d3 block on Glitch looks useful for converting datasets between formats (such as the geodata I regularly use in CSV).