Feedback on Situated Eye

Reviewers

GL: Golan Levin
CH: Claire Hentschker
JL: Joey Lee


arwo

link

CH:
N/A

GL:
In your project's current state, it's not possible to provide feedback. Please provide screenshots, links, and other documentation as stated in the assignment.

JL:

* Remarks:
    * Technical:
        * A few methods to achieve this could be:
            * training a really "dumb" classifier 
            * stringing together a bunch of ML models, like "text to image" (https://digg.com/2018/text-to-image-generator-ai-cris-valenzuela) or StyleGAN (https://github.com/NVlabs/stylegan), which you can do in RunwayML (https://runwayml.com/)
    * Conceptual:
        * Did you manage to get this to work? It would be interesting to see if you could begin to build a dataset that takes people's classifications of clouds (e.g., I see a snake or a turtle) and create a classifier that can "imagine" things in the clouds. I'd be curious to see your approaches here!
    * References:
        * https://experiments.runwayml.com/generative_engine/

gray

link

CH:
I love the idea of building yourself a roommate in a dorm room to interact with. There is something sort of hilarious and melancholy about it that is ultimately compelling as a project narrative. If you run into time issues in the future, one way to polish the final result is to focus on the documentation (editing, cutting, lighting, etc.). This project also highlights how so much expression can be derived from a simple mechanical movement, and I think your work showcases that really well. Other fun examples you might like to check out are Double-Taker (Snout) by Golan, or the works of Arthur Ganson.

GL:
Fine work for a quickly-executed project. Conceptually: the overall interaction premise is low-hanging fruit -- by which I mean: the concept is workaday; someone else could have thought of this, and indeed, someone even did -- in the same classroom! Check out "HiBot" by Caroline Hermans, who graduated last year from CMU's Art+Engineering double-major program. (For her project, which was also very quickly executed, Caroline used Wekinator's ML classifier with a hand skeleton reported by the Leap Motion sensor, which meant that its sensing range was limited to less than a meter.) Technically: I really appreciate the ingenuity of using a light sensor to couple a physical actuator to a computer display screen. This is not such a jank idea as you might think; check out Kyle Machulis's discussion of this "movie sync" technique in his lecture about teledildonic interfaces, in particular, the 1998 "Safe Sex Plus" device at 41'20", here. Craftwise, the material display and documentation of this project are where I feel you fell short the most: the storytelling in the video is under-considered, and the care given to the visual design of the cardboard robot, even for cardboard, feels unconsidered as well. For example, by pushing this scenario to an extreme, you could have told a bleak story about a college student whose only friend was a waving robot (imagine if this video were twenty minutes long!).

JL:

* Remarks:
    * Technical:
        * Excellent combination of physical computing and ML! Great to see you put all these pieces together. 
        * You can totally use the p5.serialport library (linked below) to communicate with your Arduino; a rough sketch of this pipeline follows these remarks.
        * One way to stage this might be to situate your camera such that it zooms to a specific point on the sidewalk. You can use some tape to square off a section and maybe point an arrow in the direction that your audience member should face. In this case, the KNN classifier isn't a bad choice since you want to use the outputs from PoseNet's key point estimation. Again, the zoom will likely affect your results, so making sure you can get the right pose out of it will be crucial here.
    * Conceptual:
        * This is a lovely little interaction. Reminds me of Niklas Roy's "My Little Piece of Privacy."
    * References:
        * https://github.com/p5-serial/p5.serialport
        * https://itp.nyu.edu/physcomp/labs/labs-serial-communication/lab-serial-input-to-the-p5-js-ide/
        * My Little Piece of Privacy, Niklas Roy: https://www.niklasroy.com/project/88/my-little-piece-of-privacy
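
To make the serial suggestion above concrete, here is a rough, untested sketch of the PoseNet-to-KNN-to-Arduino pipeline. It assumes the ml5.js PoseNet and KNNClassifier APIs and the p5.serialport library linked above; the serial port name, the key bindings, and the "wave"/"idle" labels are placeholder assumptions, not details from the actual project.

```js
// Classify PoseNet keypoints with a KNN classifier and send the winning
// label to an Arduino over serial (via the p5.serialport library).
let video, poseNet, knn, serial;
let currentPose = null;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) currentPose = results[0].pose;
  });

  knn = ml5.KNNClassifier();

  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // placeholder port name
}

// Flatten the keypoint positions into a single feature vector.
function poseFeatures(pose) {
  return pose.keypoints.map((k) => [k.position.x, k.position.y]).flat();
}

function keyPressed() {
  if (!currentPose) return;
  if (key === 'a') knn.addExample(poseFeatures(currentPose), 'wave'); // collect examples
  if (key === 'b') knn.addExample(poseFeatures(currentPose), 'idle');
  if (key === ' ') {
    knn.classify(poseFeatures(currentPose), (err, result) => {
      if (err) return console.error(err);
      serial.write(result.label + '\n'); // Arduino parses the label and moves the actuator
    });
  }
}

function draw() {
  image(video, 0, 0);
}
```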

ilovit

link

CH:
This is such a delightful project. I love the sound effect; it reminds me of some of my favorite movies growing up, like the Marx Brothers' Duck Soup. The idea of gamifying the act of fussing with a window while Hollywood-style bombs are going off outside is amazing. It's a great trope to play with in this context. The pairing of the rickety window with the contemporary classification technology is also funny and ultimately thought-provoking. Congratulations!

GL:
Maybe it's my headspace, but there's a grim quality to your project. I want to be clear that this in itself isn't a bad thing, but it does mean that your *discussion* of your project interaction as a "game", and the extent to which it is "fun", could come off as a little glib. I'm imagining the project from the perspective of someone who lives in a part of the world (Aleppo, etc.) where there could literally be bombs dropping outside the window. People hear that whistling sound and duck for cover. I mean, you chose THAT SOUND; it's not like you made a Halloween game about keeping ghosts from coming in, or space aliens, etcetera. Formally speaking, I think it was original of you to go for an audio-only interaction. Technically, I approve of your detection regime; I think you've learned valuable skills in making a system that can detect whether the window is open or closed. Craftwise, there could be more contrast in the ways that the system responds to its inputs.

JL:

* Remarks:
    * Technical:
        * Intriguing use of ML to breathe life into the features of the physical environment.
        * In this case, classification might also have been a fine way to handle the state of the window, but regression also works (see the sketch after these remarks).
    * Conceptual:
        * Your solution is exciting because you demonstrate how different "states" of an object can be sensed and used as triggers in a completely different setting (e.g., your game). It would be great to see how you might string together a series of interactions between varying objects in a room to create a game that builds on top of these varying interactions.
    * References:
        * Wonderville is an indie arcade where game developers build their own creations: https://www.wonderville.nyc/
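
To illustrate the regression option mentioned above, here is a minimal, untested sketch using ml5's MobileNet feature extractor to estimate a continuous window "openness" value; the 0.0/1.0 training values, the key bindings, and the 0.5 threshold are illustrative assumptions, not details taken from the project.

```js
// Estimate window "openness" as a continuous value with transfer learning
// on MobileNet features (ml5 featureExtractor regression).
let video, featureExtractor, regressor;
let openness = 0;

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.hide();
  featureExtractor = ml5.featureExtractor('MobileNet', () => console.log('MobileNet ready'));
  regressor = featureExtractor.regression(video, () => console.log('video ready'));
}

function keyPressed() {
  if (key === 'c') regressor.addImage(0.0); // sample of the closed window
  if (key === 'o') regressor.addImage(1.0); // sample of the open window
  if (key === 't') regressor.train((loss) => { if (loss === null) predictLoop(); });
}

function predictLoop() {
  regressor.predict((err, result) => {
    if (err) return console.error(err);
    openness = Number(result.value); // roughly 0 (closed) .. 1 (open)
    predictLoop();
  });
}

function draw() {
  image(video, 0, 0, 320, 240);
  // In the game, an "open" reading (e.g. openness > 0.5) could trigger the bomb whistle.
  fill(openness > 0.5 ? 'red' : 'green');
  rect(10, 245, openness * 300, 10); // simple openness meter
}
```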

isob

link

CH:
This is such a great use of the microscopic camera. Using it as the input for the classification is really inspired. I love the gory, almost gross-out elements in this project. (Have you seen the animation work of Cool3Dworld? Not directly related, but it might be of interest if you're into that genre.) As someone with fingernails that look similar, I really appreciate the idea here, though I do find myself wishing the little bacterium was more visually sophisticated or expressive. The addition of the blood classification was a delightful surprise.

GL:
Strong work: the project is charming, improbable, and rigorously executed. You also did a great job in adaptively scoping the project to the time and resources available. Furthermore: I'll go out on a limb and say that your initial concept, the "Lacanian Mirror", was not only too ambitious -- I actually think it works best in the form of a speculative cartoon, and would have been less interesting *in practice* than your vampiric bacterium friend. Technically speaking, I was very pleased to see that you learned the use of both a classifier and a regressor.

JL:

* Remarks:
    * Technical:
        * ML is super funky and hard. Documenting these difficulties and ideas is super important, both for your own growth and so that others can point you to resources.
        * IMHO Ellen Nickles has one of the best blogs documenting her creative practice - https://ellennickles.site/blog?offset=1541125592190&category=Neural+Aesthetic 
    * Conceptual:
        * The fascination with "seeing what the machine sees" is very much a subject of interest for researchers and artists alike. Projects like "What a Neural Network Sees" are more literal explorations of this idea, looking at how data are processed through the layers of a neural network, while projects like Philipp Schmitt's Introspections explore the idea of "what machines want to see" by feeding the outputs of a model back in as an input. Helena Sarin does a lot of work exploring latent space - https://aiartists.org/helena-sarin - that is absolutely lovely. You might explore some of these references to see how you might poke at the ways that data are being processed in varying ML models and contexts.
        * Shifting the scale to the super macro view is a nice change in perspective. The advantage of doing so is that it allows you to create a new world from the small space in the field of view (FOV). ML is being used a lot for medical applications (https://ai.google/healthcare/), and we have yet to see both the potential and the pitfalls of these applications. Golan mentioned last class that ML models designed to find cancers and tumors would incorrectly classify images as "positive" for a certain ailment when a pencil was present in the image (because pencils are used to show relative size). Your project has the potential to explore some of these macro views and potentially comment on some of these funky artifacts of "zooming in."
    * References:
        * What a Neural Network Sees: https://experiments.withgoogle.com/what-neural-nets-see 
        * Tushar Goyal, Visualizing Neural Evolution: https://wp.nyu.edu/tushargoyal/2019/05/07/noc-final-learning-flappy-bird/ 
        * Latent Space visualization: https://ai-odyssey.com/2017/02/24/latent-space-visualization%E2%80%8A/ 
        * Philipp Schmitt, Introspections: https://medium.com/runwayml/introspections-9cb6660c0311 
        * Helena Sarin - https://aiartists.org/helena-sarin

lsh

link

CH:
Using the game of Simon Says as a way to subvert an expected interaction with a digital assistant is really interesting! I really like where this project is headed conceptually, and think the way we interact with these personified tools is ripe for critique and reevaluation. I found myself wanting to see more of a connection to the digital assistant idea in the final project. Maybe through including an element or two that signifies the digital assistants we are familiar with: a robotic voice, a physical presence, a name, etc.

GL:
The premise of using Simon Says shows a very keen intuition for the technical problem space, and is an excellent match for the project scope. But the current presentation design, a charmless debug view, is seriously underworked, and begs a real investment of consideration -- for example, with typography and sounds that make it look like a carnival game, or a Soviet-style design treatment that emphasizes the game's imperative nature.

JL:

* Remarks:
    * Technical:
        * NOTE: PoseNet will improve soon! Right now it is doing some funky things with image resizing that make it less accurate, but this will be updated in the near future.
    * Conceptual:
        * Nice usage of an age-old game/exercise that is remade in a modern flavor. You mention in your blog post that your roommate thought the game was fun to interact with, but might not have understood the deeper significance of the HCI. Oftentimes, successful projects have what some might call a "hop-on, hop-off" type of interaction, one where people can easily jump into or out of the project depending on their interest. For some, the project might just be "fun" or "entertaining," but those who stick around or read deeper into the project might start to tease out the nuances that you are trying to comment on. Designing projects in this way means you can cast a wide net that allows people to interface with a topic or domain they might not otherwise interact with. Not all projects need to be structured this way, but this style can be effective for translating difficult concepts to broader audiences.
    * References:
        * https://techcrunch.com/2018/11/29/google-assistant-please-thank-you-santa/
        * Self driving human, Dan Oved: https://vimeo.com/337308779
        * https://stealingurfeelin.gs/

lubar

link

CH:
Conceptually linking machine learning with tarot and psychics is fascinating! I love the way this concept marries an older method for trying to predict the future (tarot) with a more contemporary approach to a similar question of what will happen next (ML). If you don't have the app Costar, check it out! There is definitely a lot of thinking going on in contemporary culture at the intersection of these two fields.

GL:
Excellent project development, thoroughly considered. The tech works, the fortunes are charming, and the attention to setting up a physical space suitable for "creating the air of mystic" is marvelous.

JL:

* Remarks:
    * Technical:
        * Excellent storytelling + narrative. 
        * You might consider rendering the text more slowly so as to enhance the feeling of mysticism.
        * Add in a fog machine and you've got a very exciting interactive installation! 
    * Conceptual:
        * Wonderful narrative here. The usage of tangible objects (the cards) that have an established mysticism works really well with this type of classification task. Using the illuminated ball to render the text is a very nice touch. 
        * Whether as a game or as an installation, I can imagine this providing many hours of fun. 
        * Your project touches on a number of themes, but the most prominent is that of the mysticism of technology and the race towards modeling and prediction (regression) of the future. Part of the reason all of this data collection is occurring -- the "data is the new oil" mentality -- is this idea that companies can begin to predict your behavior based on your previous choices, preferences, etc. There is a huge market for trying to collect data as a way to build models that can predict where you will shop, what you will buy, who you will see, etc. In some ways, these old forms of "future/fortune telling" feel convincing because of the ways that fortune tellers practice deduction as well as the ways that people are always trying to center themselves in a narrative. Long story short, you might consider taking this further to explore the parallels between fortune telling and the truths and mythos around AI/ML/statistical prediction.
    * References:
        * The New Organs, Sam Lavigne & Tega Brain: https://lav.io/projects/the-new-organs/

momar

link

CH:
This project has such a fun and unexpected twist encoded into it. To me, it shows that you have really considered the interaction the viewer will have with the story of this piece, and I really appreciate that. As I'm sure you can imagine, at first glance I thought this work looked at the liquid level in a vessel to assess how full it is, but using the real-time weather instead was a fantastic way to subvert my expectation. One thought I have for you is to consider how documentation can be used to help play up the drama and the telling of this story. Check out Lauren McCarthy's documentation videos, or the video for Miranda July's app Somebody. They are both great examples of how storytelling can take place in documentation.

GL:
The conceptual premise is very strong -- at least, as a starting point. But the visual craft is poor, and undermines the project: the typography reeks, the visual "rain" effect is superfluous, and the documentation shows weak attention to detail (dim lighting, poor cropping, inadequate background, ripped label on bottle, etc.). I'm on the fence about whether or not you actually need to have the dependency on external weather data at all: On the one hand, I'm glad you took on the technical challenge, but on the other hand, these ML systems already have uncertainty -- it's really interesting to have the ML decide whether the glass is half-full or half-empty, given that we know it can assuredly be trained to detect the extremes (which you have done, very well). In any case, I think the piece would be stronger (and would show more restraint) if the weather information were used secretly in the background.

JL:

* Remarks:
    * Technical:
        * Ah! I will make a note & GitHub issue to address the model-loading issues you might have encountered. It seems this was an issue for many of your classmates as well. The p5 web editor handles files in a unique way that hasn't been addressed yet, I believe. Thanks for noting this!
    * Conceptual:
        * As a general comment, I think it can be really effective to use common sayings (e.g., "the glass is half full") as a way to inspire these types of project explorations. What is nice is that it immediately frames your project for your audience so that they can map in their brains how you've made that quote/saying tangible in one way or another.
        * The usage of the weather API is an intriguing additional layer to interpreting the literal state of the glass (or in this case, the bottle).  
        * Your project raises some potential future explorations examining the role that the environment might have in shaping the results of ML/AI and vice versa. Can there be optimistic AI systems? Can there be pessimistic ones? Can AI be temperamental? Is there something to Douglas Adams's depressed robot? By exploring this theme you're starting to move quickly into questioning the "objectivity" of technology and data. How do these deeply entangled systems riff off each other "in the wild" to return these kinds of results? How might you use this project as a meditation on things like Facebook's mood manipulation experiment (https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/)?
    * References:
        * Best of Marvin: https://www.youtube.com/watch?v=Eh-W8QDVA9s

sansal & meh

link

CH:
This is a great way to leverage technology like image classification to experience a familiar game in a totally new way. The use of hand gestures as input is really clever, and it looks like it made for a remix of a classic game that was also fun to play. If you are interested in alternative controllers used in games, check out the site shakethatbutton.com or the conference called 'Abandon Normal Devices'.

GL:
Good job! One thing that occurs to me is that we already use our hands for so many things, so there's a kind of unsurprising quality to having selected hands as the interface (e.g. why not just give the person a joystick?). What if you hadn't been allowed to use hands, and had to use (just for example) medial deviation of the jaw?

JL:

* Remarks:
    * Technical:
        * Wow! You did an incredible amount of work here. Bravo. 
        * Personally, I find the gesture control to be a bit awkward (bending the wrist that way feels and looks unnatural). Since you've already created your game, it should be a matter of exploring the HCI / UX with different models. This is definitely worth exploring!
    * Conceptual:
        * Fun usage of the way that gestures might be used as a new controller. The "bang" of the thumb as the hammer of a gun is a relevant analogy that maps well to what is happening in the game.
        * The gestures, however, are a bit awkward from a user perspective (see note above). You might consider reworking the gestures so that you don't have to bend your wrist inward and outward, which is not the most natural movement.
        * You might consider using a downward-facing camera so that people can move their hand horizontally left and right to move the spaceship, and then maybe flick their finger as a way to shoot the space invaders. Experimenting with alternative hand gestures would greatly improve the usability.
    * References:
        * https://www.yonaymoris.me/projects/airiflies
        * https://wp.nyu.edu/lillianritchie/2019/05/13/nature-of-code-final-food-chain-game/

sovid

link

CH:
I love this project so much, this is such great work. Using sign language as input to "play" a scale is brilliant. Taking the time to make an interface that makes the interaction really clear is brilliant. And building a musical tool you can actually master and play is brilliant. Congrats on this project!! Check out Onyx Ashanti's eyeo talk, it might be up your alley.

GL:
It's a strong beginning, technically well-developed, and well-scoped. For me the main shortcoming is one of documentation: having built your own instrument, you don't present a performance on it -- you only show a scale, the most basic demonstration. Also, it's a small point, but the specific mapping of fingers to indices seems disappointingly arbitrary (and undiscussed). Had you considered binary counting on fingers? Also worth considering, using a Gray Code to count on your fingers, since the LSB switches half as often.

JL:

* Remarks:
    * Technical:
        * Excellent work trying to standardize your background to ensure that your classifications perform as well as possible. 
        * If you were to try to make an installation from this, you might consider using different skin tones and hand shapes for the training. 
        * It is super satisfying to see how responsive the sound is to your gestures. Bravo!
    * Conceptual:
        * Lovely to see the vibrato, which is mapped to the shaking of the hand. This feature opens up a lot of ideas around other gestural combinations that might affect the nature of the sounds or feedback being given to the user.
        * A lot of people have done gestural classifications to actuate sound, but the addition of the vibrato mapping added another dimension. What might be neat is trying to unpack what it might be like to express additional qualities like "nervousness" or excitement. It could be worth exploring how you might be able to translate those kinds of latent signals in sign language to more audible or visual cues. Christine Sun Kim's work (linked below) speaks a lot to these ideas of expressiveness in sign. 
        * I'm linking to Yeseul Song's work on Invisible Sculptures as well.
    * References:
        * Yeseul Song's Invisible Sculptures: https://yeseul.com/Invisible-Sculptures-8
        * Christine Sun Kim's TED Talk: https://www.ted.com/speakers/christine_sun_kim

tli

link

CH:
I love your drawings so much! The bird is so endearing and it looks like the classification works well. I'm excited to see how this interaction might be used in unexpected ways. What if you linked up with 'zapra' and used their eye tracking as input for drawing identification?

GL:
I admire that you chose categories with legitimate ambiguity; it really is possible to make something which resembles both a bird and a plane, which becomes a possible goal for the player. The drawings are super-charming. To add more appeal to the software itself, you could play an audio clip from this video when a detection is made.

JL:

* Remarks:
    * Technical:
        * Although a doodle classifier already exists for ml5 (https://editor.p5js.org/ml5/sketches/ImageClassification_DoodleNet_Canvas), it is still very exciting that you were able to experiment and build your own version of an image classifier that can classify hand-drawn images (for comparison, a minimal sketch using the existing DoodleNet model follows these remarks). It's interesting that you chose birds and planes, which are similar in that they have wings and otherwise exhibit similar aerodynamic properties. What is neat is that your image classifier, despite the similarities of birds and planes, was able to differentiate between those two classifications (more or less).
        * Given that your doodle classifier was trained on your own drawings, how might this perform with other people's drawings?
    * Conceptual:
        * Your initial concept is quite intriguing - to apply categorizations/classifications of the more "subjective" qualities of what an image might contain. I totally encourage you to explore this as a research project as it raises a lot of interesting questions around "what is in an image." 
        * There is a new-ish methodology of doing machine learning called "federated learning," which is based on the idea that we can create and train models locally on the devices collecting the data, and only sync the models themselves with the "cloud." By doing this you keep your data, but allow those models to live across the various devices and services you choose to use. Basically, it is a way of better maintaining your privacy.
        * The reason I bring this up is that these more subjective avenues of image classification are specific to you, and can therefore help you learn about how you might classify images according to this chart.
        * Alternatively, creating an application that classifies images using this chart might allow you to see the range of ways that people would classify images.
    * References:
        * Federated learning: https://en.wikipedia.org/wiki/Federated_learning
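
For comparison with the custom classifier discussed above, here is a minimal, untested sketch of the existing ml5 DoodleNet model classifying a hand-drawn canvas; the key binding and canvas size are arbitrary choices, not taken from the project.

```js
// Classify a hand-drawn doodle with ml5's pretrained DoodleNet model.
let cnv, classifier;

function setup() {
  cnv = createCanvas(280, 280);
  background(255);
  classifier = ml5.imageClassifier('DoodleNet', () => console.log('DoodleNet ready'));
}

function draw() {
  // Simple drawing surface: drag the mouse to doodle.
  if (mouseIsPressed) {
    strokeWeight(8);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}

function keyPressed() {
  if (key === 'c') {
    // Classify whatever is currently on the canvas.
    classifier.classify(cnv, (err, results) => {
      if (err) return console.error(err);
      console.log(results[0].label, results[0].confidence); // top guess, e.g. "bird"
    });
  }
}
```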

vikz & szh

link

CH:
I really like the way you created a physical, hand-drawn data set as a way to train your system. There is something so lovely about seeing a dataset, which is usually invisible in an ML system, exist as a tangible, handmade object. There is also something conceptually really interesting in thinking about how we geographically place characters we come across in the world, especially handmade ones.

GL:
I'm glad you did this. The project comes from an interesting place, reflecting both curiosity and anxiety about hybrid ethnic identities, profiling and stereotyping, and machine discrimination. I feel it needs better training data, and I wonder why you used handwriting instead of printed materials. If I understand correctly, I think it could have helped to have an additional category for 'blank page'.

JL:

* Remarks:
    * Technical:
        * As you noted in your future directions, having more training data, examining rotation and trying out different arrangements might help with your classifications. 
        * If this topic continues to be of interest, one thing you might look into is the use of Recurrent Neural Networks or time series analyses to explore the "westernness" or "easternness" from the way people write rather than how the text looks. Looking at the data points that compose a word or a letter might reveal some interesting patterns. See: https://experiments.withgoogle.com/font-map and https://experiments.withgoogle.com/handwriting-with-a-neural-net 
    * Conceptual:
        * Interesting experiment! Trying to encapsulate "easternness" vs. "westernness" is difficult given how variable the type and structure of the characters are across languages. Did you train the model specifically on those words?
        * In an applied setting, I wonder if, given a more robust training set, you could imagine doing a project that looks at the diversity of street signage in a city - of course this is not so easy given how much the classification might be affected by lighting changes, the color of the signage, etc., but as an exercise, it might be interesting to think about how this might be applied in other contexts.
    * References:
        * See https://vimeo.com/304131671 (around the 20-minute mark) for the discussion around creating a map of relationships between languages

vingu

link

CH:
This project is so much fun; I love how you were able to build a portrait out of this seemingly mundane action (taking ramen out of the cabinet), especially because once the images start to build in the Twitter feed, the result is anything but mundane. I would love to see how this project can continue over time. This also reminds me a bit of a favorite project of mine by CMU alumnus Dan Sakemoto, who created an app that took a picture of your face each time you typed 'lol' on the computer and put it on Tumblr. The sea of blank faces that were supposedly 'loling' grew over time into a really wonderful data set, and I can totally see this project headed in a similar direction.

GL:
You've made an extremely impressive technical pipeline. I really admire your fearlessness in creating a bot, connecting to Twitter APIs, node.js, etcetera. There's something really ambivalent about your approach to the subject. On one hand, it's humorous to build a whole surveillance apparatus for something as banal as a package of ramen. On the other hand, it's not, really; I could imagine that someone might build this to actually monitor their roommates' behavior. I'm left wondering "why Ramen?", and whether there is something more *memorable* you could have built a detector for.

JL:

* Remarks:
    * Technical:
        * > I struggled with connecting the two programs, since one runs on pj5s (client-based?) and the other on node (local computer?). 
        * Ah yes, we are in the process of planning to support ml5 for node.js, but at the moment we're not there yet! Great to see you working through these constraints though!
    * Conceptual:
        * There are obvious privacy concerns that this kind of project raises, so this is just a note to ensure that your roommates are OK with you posting images of them to the internet, and that they are OK with knowing that they are being captured in your home. I guess if it were me, I would not be OK with being photographed at home!
        * What this project raises for me is the idea that every part of this interaction is automated and running "in the background" - the sensing, the photographing, the public nature of one's behavior is just running and running. I wonder if there are other mechanisms you might explore on this topic of automating one's public-facing "profile" using ML and other APIs from the web. Do these mechanisms help you in some ways, do they harm you, and can your friends, family, or other people be implicated as a result of this automation?
        * One thing you might think about is the performative nature of media art and how the performance can be as much or more interesting than the outcomes that are produced. The work of Alberto Frigo or other people interested in the quantified self come to mind here. 
    * References:
        * Alberto Frigo: https://www.fastcompany.com/3042784/for-11-years-this-man-has-taken-photos-of-everything-his-right-hand-touches

zapra

link

CH:
This is a really great eye tracker! It looks like it works really well, congrats! Eye tracking is a really powerful tool and I'm excited to see what this could be used for in the future. A favorite project of mine that you might like is Eyewriter by the Graffiti Research Lab. It used a similar kind of eye tracking to allow someone without the use of their limbs to "draw" with light onto a building. What if you linked up with 'tli' and used their eye tracking as input for drawing identification?

GL:
I would have *killed* for an eyetracker this good in 2007. Note that the artist-led project Eyewriter (http://www.eyewriter.org/) won the Golden Nica at Ars Electronica 2010, the world's largest and most prestigious prize for computer arts. You learned a lot and I'm proud. The primary shortcoming of your project is similar to sovid's: having built your own eyetracker (HFS!), you don't then use it to control a *thing*, whatever that thing might be. You don't even use it to make a drawing! Consequently, the project, while impeccably technologized, is missing your voice.

JL:

* Remarks:
    * Technical remarks:
        * Great documentation and exploration of the affordances (and difficulties) that training the regressor added to your project. 
        * Really nice use of debugging the "grid" and also using a grid to set your data points for training. 
    * Conceptual remarks:
        * The framing of the camera is intriguing. It creates a sense of intimacy and offers a vantage point that we don't often get to see and experience.
        * A happy accident of the inaccuracies of the training creates a question for me: "who/what is actuating who/what?" The erratic nature of the circle/line could be read in two ways: that your eye is controlling the drawing, or that your eye is controlled by the drawing.
        * One project this reminds me of is Hochi Lau's "Learn to be a machine" project (lucky for you he is your TA!) in which the installation piece speaks to this dynamic of controlled/controller.
        * I think your project on a more meta level could be taken further to think about the ways that we are both actuating and actuated by ML. The messiness and the scribbles can reflect that exchange. 
    * References:
        * http://archive.j-mediaarts.jp/en/festival/2013/art/works/17a_learn_to_be_a_machine_distantobject_1/