Chloe – LookingOutwards – Community Sourcing (Final Project)

As we approach the end of the semester and a period of high burnout rates, I wanted my final project to take on CMU’s stress culture and find ways to foster a greater sense of community. I’ve always been inspired by the projects of Candy Chang, who has taken on the mission of “making cities more emotional”. Some of her most well-known projects include “Before I die…” and “Confessions”.

A principle of her projects, and one I want to pursue, is the idea of crowd-sourcing. With the capabilities of the tools available to us, it should be easy to collect, store, and showcase such data. It’s only a matter of what that data is and how it’s gathered. One project I came across was Rafael Lozano-Hemmer’s “Voice Array” (2011), in which participants record their voices, which are then looped and layered with the collection of all the past recordings.

“Voice Array” at MCA Sydney (2011) by Rafael Lozano-Hemmer from bitforms gallery on Vimeo.

It is a beautiful display of both user input and history, like an updated oral tradition rich with meaning from those before. I think that it would be interesting if there were a prompt of some kind–like Candy Chang’s “Before I die I want to….” or the confessions that she prompts. But instead of displaying them as mere handwritten notes, how cool would it be to hear the voices–complete with intonation and personality–upon tapping into them?

Messa Di Voce (2003) by Zachary Lieberman, Jaap Blonk, Joan La Barbara, and our very own Prof. Golan Levin is an example of displaying user audio input, creating bubbles of audio that release the recorded sound when the bubbles are dropped. For a project based on confessions, displaying a participant’s confession this way gives a sense of anonymity, as well as a randomness when others try to tap into and listen to these confessions.

Possible prompts I have in mind for such a community confession booth would be things like “I wish ______”, “I dream ______”, or “Where I see myself in 5 years’ time”. It would be interesting to see what the hopes and dreams of CMU’s students are, and it would also be a good opportunity for the stress-ridden students of CMU to remember why they are here (i.e. the pursuit of their future, etc.). Another prompt could pay homage to Randy Pausch (“What are your childhood dreams?”), who was one of my biggest reasons for being here at CMU, and who fostered my love for the convergence of technology and art.

Another element of community building is fostering a sense of connection between strangers, making fleeting, transient interactions meaningful and/or interesting. I’m particularly inspired by Passing, a project by Jonathan Ota, a CMU Design graduate, made during his study abroad in the Netherlands.

I was equally intrigued by the extensive testing and research he did for this project, as well as the simple beauty of the interaction itself. Through this, connection between strangers is facilitated in a transient space.

This then got me thinking about the transient spaces on CMU’s campus, the closest one for me being the long hallway of Maggie Mo, a place that is truly interdisciplinary, both in the spirit of the Design school residing there and in the literal sense of the mix of students that pass through it: from design to architecture to music to business to economics, and so on. I thought about ways I could facilitate connections there, but struggled with not being too intrusive, as it is a very high-traffic zone and a dangerous place for any projector or Kinect to be installed.

Another idea was inspired by the Talking Doors project by Julijonas Urbonas, shown to us in a previous class: Maggie Mo’s popular revolving door could be used as a control for music playback. I’m not as keen on this one, as it is rather one-sided, and kind of a rip-off of its inspiration. As an artist seeking ways to facilitate meaningful community building, it would be important to seek two-way interactions: a give and a take/collect.

I think it would be interesting to have a device that demonstrates the goodwill of CMU students. On the giving end, participants would have some equivalent of a “big juicy red button” to push, with the prompt that they could make someone’s day. If pushed, somewhere else a Hershey’s Kiss, some sort of candy, compliment, note, or treat would be released, ready to be picked up by those around the device. Even cooler would be live video feeds between the separate stations, so that the giver could see the results of their goodwill. A compassion machine of sorts, if you will.

The context of the input and output of such a device could greatly change its intentions. For example, consider the difference between a student giver with a student receiver, and a student giver with a campus-employee receiver: whereas the former could be taken as a recognition of empathy, the latter is a form of thanks to the employee for their service to the students. And if the input is not a tangible button but rather a Facebook ‘Like’, that in itself already taps into the debatable realm of how meaningful/substantial “Likes” actually are.

Lots to think about and let incubate. I look forward to fleshing out some of these in studio later today!

Looking Outwards – Arduino Shields

During our first visit to Ali’s lab, I was pretty explicit about my interest in GPS capabilities in physical computing. For me, the idea of context awareness, particularly in place and time, is key to the development of our identities, as well as to the appropriation of our works. So, to my delight, I found this cellular shield, which extends those capabilities by giving the Arduino the same capabilities as a regular cellphone, easily adding SMS, GPRS/GSM, and TCP/IP to any project. Not only can one generate locational data, but one can also send it anywhere and everywhere.



D.I.Y. Touch Screen capabilities. Enough said. —Proceeds to daydreaming about building own Nintendo DS so as to finally be able to play Pokemon


Many of my favorite projects incorporate audio, so to have a dedicated audio component for Arduino is exciting. At the same time this allows the Arduino to multitask, playing audio/music as it performs whatever other functions it was made to do.

Looking Outwards: Hardware

Pulse Sensor

I think the fact that you could have something sync up with a person’s pulse is in itself already the beginnings of great poetry. I also find it interesting that, as a DIY biometric component, it begins to have implications for the medical device market. Perhaps in the future, medical devices could be built as needed: cheaper, and easier to access for doctors in more rural areas, instead of depending on the far-removed monopoly of medical device companies…

TFT Touch Shield for Arduino

I’m constantly blown away by how easy it is to access all these amazing devices and components. Being so removed from the manufacturing process as a consumer, it just blows my mind that I could build a physical interface with access to things like touch screen technology. It can give my users/participants a more natural form of interaction and navigation, depending on the project. On top of that, I’m really glad they make a note of the fact that the device comes with its own open-source graphics library, further enabling one to explore and expand a personal project.

Flora Wearable GPS Module

For me, fashion has a strong connotation with identity. On top of that, having moved around different cities in my life, the idea of place is also very important to me. So to have something that tells you where you are on something that can portray who you are—it really speaks to me.

Portrait(s) in a Projected Creature

Nervous from Chloe Chia on Vimeo.

In the rush of things, I seem to have forgotten a few things: 1) the ability of art to elegantly express that which is hard to express, 2) the power of art as catharsis, and 3) reaching out to people when in need of help.


I originally had numerous separate ideas for the assignments, but after numerous iterations and talk of being ‘poetic’, I realized that this creature I made, originally intended to be a “live stress ball”, could have so much more meaning when placed into context, especially if that context is my own studio in the Design school. Lately, I have been feeling very nervous about all of my classes, ever so precariously keeping the balance between them, outside activities, and my mental state. The stress can be overwhelming, and leave one feeling very trapped, as I do now.


But in the process of making the video, and watching my own creature almost ‘come to terms’ with his confines, I can’t help but wonder whether the stress and pressure I feel is merely an artificial boundary I place upon myself. It is up to me to figure out, given the limits on my skills and my time, what I am capable of doing, rather than focusing just on the limits and failures.


This is definitely the same sort of thinking with which I approached these assignments: keeping it as simple as possible, and working with the code I’m most comfortable with. On the programming end, there are still many things I would like to fix later as I get more fluent in Processing. One of those is figuring out how to get the creature attracted to the mouse, since at the moment a mouse press only activates a different gravity vector (which, I admit, I modified throughout the process of filming). I couldn’t quite figure out how to recall each node that makes up the spring skeleton of the creature…
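One minimal way to approach the mouse attraction would be to apply a small steering force to each node toward the target point every frame. A rough sketch of that update step, in plain Java rather than my actual sketch (the node layout {x, y, vx, vy}, the attraction constant, and the damping factor here are all illustrative assumptions):

```java
// A minimal sketch of steering a node toward a target point (e.g. the
// mouse). Node layout, constants, and damping are illustrative assumptions.
public class Attract {
    public static void step(float[] node, float targetX, float targetY, float k) {
        // node = {x, y, vx, vy}
        node[2] += (targetX - node[0]) * k; // accelerate toward the target
        node[3] += (targetY - node[1]) * k;
        node[2] *= 0.9f;                    // damping so the node settles
        node[3] *= 0.9f;
        node[0] += node[2];                 // integrate velocity into position
        node[1] += node[3];
    }
}
```

Looping this over every node in the spring skeleton, instead of swapping the global gravity vector, would let the whole creature drift toward the cursor while the springs keep its shape.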


Unfortunately JavaScript mode doesn’t seem to like my sketch and refuses to run in the OpenProcessing applet… I will have to work out that kink.


Chloe – LaserCut Screen (In Progress)

Inspired by this topography-style graphic I found while scrolling through Tumblr, as well as Ian Dixon’s ‘Roots’, I studied his code, along with other organic and Perlin-noise simulations, to try to recreate those organic, flowy blobs and enclose them within a circle, making for an interesting silhouette I wouldn’t have the patience to cut by hand.

Unfortunately I haven’t been able to figure out how to get actual outlines, as opposed to trying to be smart and drawing white ellipse trails over a black circle. I also have yet to figure out how to implement a repulsion system for each drawing particle so as to keep them a little more separate. Hopefully I can figure something out soon.
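For the repulsion system, the usual trick is to check each pair of particles and push any pair that is closer than some minimum distance apart, along the line joining them. A minimal sketch of that pass in plain Java (the particle layout and the strength constant are assumptions for illustration, not code from my sketch):

```java
// A minimal pairwise repulsion pass: any two particles closer than
// minDist get pushed apart along the line joining them.
public class Repulsion {
    public static void repel(float[][] pos, float minDist, float strength) {
        for (int a = 0; a < pos.length; a++) {
            for (int b = a + 1; b < pos.length; b++) {
                float dx = pos[a][0] - pos[b][0];
                float dy = pos[a][1] - pos[b][1];
                float d = (float) Math.sqrt(dx * dx + dy * dy);
                if (d > 0 && d < minDist) {
                    float push = strength * (minDist - d) / d; // stronger when closer
                    pos[a][0] += dx * push;  pos[a][1] += dy * push;
                    pos[b][0] -= dx * push;  pos[b][1] -= dy * push;
                }
            }
        }
    }
}
```

Running this once per frame before drawing would keep the Perlin-noise blobs from collapsing into each other, at the cost of an O(n²) loop over the particles.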


Chloe – LookingOutwards03


Personally, I’m always a fan of collaborations, particularly when a large corporation attempts to get in sync with the current culture and connect with its consumers. Here, Levi’s agency AKQA hired Fake Love to redesign antique objects as web-enabled tools that traveled with Levi’s Station to Station project across the country in the summer of 2013.

  • Still Camera (1939 Graflex) >> Instagram
  • Video Camera (1953 Bolex B8) >> Instagram Video
  • Typewriter (1901 Underwood No. 5) >> Twitter
  • Guitar (1953 Gibson E-125) >> SoundCloud

The objects relied on a combination of many new technologies, including the Raspberry Pi camera module, custom printed circuit boards, embedded AVR systems, Wi-Fi, Bluetooth, RFID, and OLED screens, as well as a variety of buttons, switches, knobs, and other input/output peripherals.

I loved the idea of revitalizing the old to update it for the now. On the hardware end, these pieces bring what would be merely virtual services into a tangible state, and their classical origins bring a new-found appreciation for what might otherwise be seen as old junk. At the same time, the fact that these devices connect their input to the social web adds a whole new dimension of community, further expanding the poetic effect the project has on me.


CHIAROSCURO — Installation by Sougwen Chung from sougwen on Vimeo.

In an attempt to bring the art of drawing into a modern, interdisciplinary context, Chung’s Chiaroscuro makes use of large installed drawings with projection mapping, sensors, and lights to immerse viewers in a world of contrasts. The project uses a Teensy 3.0 (an Arduino-compatible microcontroller) to monitor a light sensor, which adjusts the brightness to the ambient light intensity, while a frequency analyzer (from Bliptronics) analyzes the sound spectrum to enhance the interplay of the music, the forms of the drawings, and the lights of the projection mapping.

While the microcontroller’s subdued role as nothing more than a light controller was rather disappointing, I find myself strongly attached to the project simply for its mesmerizing, dream-like aesthetics. For me, it is a reminder that while the advent of technology in art is amazing, it is ultimately the human element that really makes a piece.


Super Angry Birds – a Tangible Controller from Andrew Spitz on Vimeo.

This project brings the tactile sensation of a slingshot back to the modern classic Angry Birds by using a force-feedback USB controller: essentially a hacked motorized fader, of the kind found in audio mixing consoles, that simulates the force one would feel when drawing a slingshot. For controlling the hardware, Spitz and Matsui used an Arduino-based microcontroller called Music & Motors, developed at CIID, programmed with Max/MSP.

I really appreciate the way the artifacts were designed to stay true to their original inspirations, making the device a far more effective bridge across the gap between the real and the virtual. On the programming end, I was pleased to see that the controller was quite precise yet stable despite its small scale (which I’d imagine would be quite difficult for those with shaky hands). One way this project could be extended would be for the tab on the slingshot to somehow change its graphics according to which bird one is using in the game. At the same time, part of me wonders whether there could be other applications for these types of controls beyond this particular game, or the realm of gaming at all.

Happy No Grumpy

Happy No Grumpy from Chloe Chia on Vimeo.

As a favor for a friend obsessed with Grumpy Cat, I decided I would bring the meme to life by having him appear and judge the participant whenever a ‘smile’ was detected. After all, I’ve always been curious about the cutesy ‘smile detection’ technologies some consumer cameras have, where the picture is only taken when all the faces detected are smiling. In my project, a ‘smile’ was determined by a rather crude set of ratios between the height and width of the mouth in proportion to other features of the face. Other data points from the OSC messages included position and size, which helped determine the behavior of the ‘neutral’ character, as well as the positioning of Grumpy Cat’s eyes, which, in person, makes it seem as though he is looking at you regardless of how you are positioned toward the computer.
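The crude ratio test could be sketched roughly like this, in plain Java (the feature names and thresholds here are illustrative assumptions, not the actual values from my project):

```java
// A rough illustration of a ratio-based smile test. Thresholds and
// features are assumptions, not the project's actual values.
public class SmileCheck {
    // A wide, shallow mouth relative to the face reads as a smile;
    // a narrow or very open mouth does not.
    public static boolean isSmiling(float mouthWidth, float mouthHeight, float faceWidth) {
        float widthRatio  = mouthWidth / faceWidth;    // mouth width relative to the face
        float aspectRatio = mouthHeight / mouthWidth;  // how open the mouth is
        return widthRatio > 0.45f && aspectRatio < 0.25f; // assumed thresholds
    }
}
```

The appeal of this approach is that using ratios rather than raw pixel sizes makes the test roughly invariant to how close the participant sits to the camera.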

If it weren’t for time constraints, I would’ve loved to pursue this further: smooth out the performance of the smile detection, perhaps make use of OSC’s 3D capabilities for a more natural neutral character, and shift Grumpy Cat so that his judgment may rain down ever more accurately.

Chloe – LookingOutwards02


Using the newly released Leap Motion, Ball Maze uses the ‘Leap Motion for Processing’ library from voidplus on the laptop, which talks to the Arduino in the maze. I admire this project for its tangibility, breaking the boundaries between the real and the digital and making for a much more intuitive gaming interface. The smoothness of the board’s movement really surprises me, as I have played with the Leap Motion before and found that, even on screen, it can be a bit glitchy.



TYPEFACE is a study of facial recognition and type design, creating a typeface that corresponds to each individual, like a typographic portrait. As a communication design student, I find this project pushes a lot of my buttons. There is a lot of talk about content driving form, the message defining the medium, etc. This is especially true for branding and identity, where the most effective systems allow for variation across many platforms while still maintaining a general level of quality and cohesion throughout. At the same time, having a human touch in the formation of the type, despite its digital interface, is yet another wonderful example of bridging the gap between humans and technology.



“I believe that most people respond more intuitively to simple colours than to the complex units of data found in weather reports and downloadable apps. My phone can instantly inform me of the current temperature outside in degrees of Celsius, but this reading tells me nothing of how warm or cold it actually feels. How warm is 18°C, exactly? Does that mean I need a jumper or a coat? We can access a multitude of different kinds of data relating to the weather, but can this information be used to create something beautiful or intuitive to read?”

Jed Carter used 64 public-access web cameras across Europe, recording the colour of the sky at each point at regular intervals, and produced a book collecting a week of paintings in which the cameras paint the weather, once every hour. Processing was used to map RGB values collected from the photos to geographic locations, rendering a huge sequence of colour maps of the sky.

With the erratic weather that has been plaguing my health and sanity this week, I find myself appreciating the quiet poetry of this work. Although it took me a while to figure out how exactly his system worked, and to let go of the final product’s disconnect from its initial driving idea, I think it is in itself a beautiful artifact. I would love to see this idea of rendering weather data in intuitive colours extended, with plays on the HSB scale depending not just on temperature, but on how that temperature is changed by windchill, humidity, etc. It certainly would have helped me wear the right things during this weird week and avoid getting sick!
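As a rough illustration of that HSB idea, a “feels-like” temperature could be mapped onto hue, running from blue for cold down to red for hot. The ranges and the linear formula here are my own assumptions, just to show the shape of the mapping:

```java
// Illustrative mapping from a temperature reading to a hue in degrees.
// The clamp range (-10..35 °C) and linear formula are assumptions.
public class WeatherHue {
    // Map temperature in °C onto a hue from 240 (cold blue) down to 0 (hot red).
    public static float tempToHue(float tempC) {
        float t = Math.max(-10f, Math.min(35f, tempC)); // clamp to the assumed range
        return 240f * (1f - (t + 10f) / 45f);           // linear interpolation
    }
}
```

Windchill or humidity could then perturb the input before the mapping, or modulate saturation and brightness instead of hue.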

Chloe – Schotter


int sqsize = 38;
int cols = 12, rows = 20;

void setup() {
  size(532, 836);
  background(255);
  noFill();
  noLoop();
}

void draw() {
  for (float i = 1.0; i <= rows; i++) {
    for (float j = 1.0; j <= cols; j++) {
      float angle = radians((i/rows)*random(-45, 45)); // more rotation lower down
      pushMatrix();
      translate(j*sqsize, i*sqsize);
      rotate(angle);
      rect(0, 0, sqsize, sqsize);
      popMatrix();
    }
  }
}