Being a connoisseur of space exploration, robotics, and linguistics, I found this video hit home with me. There's one thing in particular I'd like to mention here, however, and that is Allison's beautiful use of metaphor. The image of 'bots' we're sending out on a journey into linguistic space, and who are sending us signals back from their exploration, is an incredibly powerful one in my mind, and it really helped me shape my project. It allowed me to think of my work as all of these little creatures I was sending out into the void and getting answers back from - quite a novel approach to creative thought.


Above all, I think the way she defines her text robots as explorers that explore "whatever parts of language that people usually find inhospitable" is very inspiring to me. Personally, I have never thought of generative texts in this way. I agree that we often want to create robots that closely resemble humans, no matter if it's a bot that plays chess or reconstructs languages. Thus, embracing the rawness and awkwardness of content created by a machine and, more importantly, finding meaning within it can in some sense open up possibilities for new ways of understanding languages, and more specifically, the holes that we had naturally avoided.

Another thing that stuck with me is the phrase that "gaps could have been there for a reason," as they might indicate violence or other harmful things. I think this is an important point to make. When we make automated machines and let them out into the world, we often consider what they create to be out of our control, that it just is what it is (e.g. TayTweets). However, I totally agree with the speaker that we, as the creators of the bots, need to take on the responsibility of actively taking precautions against those undesirable outputs.


The part of the lecture that stuck with me the most was the discussion on the mapping of the WordNet hierarchy to the cortex of the brain. The fact that the brain has this topographic representation of words and concepts is quite incredible. There are other mappings in the brain too; visual and auditory information is represented in a sort of hierarchy throughout the cortex. The fact that this also extends to language is even more impressive given that language is evolutionarily newer, and therefore less ingrained in the structure of the brain. These findings suggest a sort of innate mapping of the abstract meanings of words that affects how they are perceived and processed. Very interesting bit of research presented there.


Two things stuck with me after watching this video. The first was how she ended the talk by discussing responsibility. To recap her opinion in my own way: people who are empowered by technology should be more thoughtful about the results they are putting out, and humble about their achievements--even if they are making art. I think this is an exemplary attitude for anyone involved in tech. The second was her method of detecting the unknown. As she drew a clear comparison between the exploration of physical space and word spaces, I thought the way she framed linguistics was genius. Wondering about outer space is something all of us have done at some point. But reassembling vocabularies in n-dimensional space to locate empty spots seemed like a totally new and inspiring way to look at a commonplace thing.


I think Calvin's got a point here. It's significantly more difficult to chop things we personify into entrée-sized bits. It's also precisely what I loved about Allison Parrish's talk and her approach to her work. Allison regards exploration bots and generative programs as personified robot beings that serve humanity by venturing out into spaces too hostile for us. What struck me the most were the comments some of the Twitter bots were getting, where followers spoke directly to the bot as a person no less real than the rest of us. I tend to get attached to my work pretty easily, but perhaps that will allow me to create generative art and AI that feel real and have a unique personality. Every one of the automatons shown in class recently was far more than a set of servos and scrap material from CCR; it was even fitting for most of them to have names, because ultimately in critique we referred to the automatons by name and commented on their character. After seeing Allison's talk, I'm certain the same could be achieved with the generative text assignment. Given the relationship humans have had with reading so far, there's an inherent expectation that an author wrote the generated text.

The Deep Questions bot truly felt like a Conspiracy Keanu-esque individual with too much time on their hands sharing their thoughts on the internet, and going forward with the generative text assignment I want to anthropomorphize the perceived "author" of the text just as much as the text itself.


I feel that the concept that stuck with me the most in watching the lecture was Parrish's conception of nonsense. This element of challenging the human brain to dissect that which has never been spoken before, or which is indeed inhospitable to human reading altogether, was a striking new way to look at poetry. In some respects, poetry for me is making that which is subconscious more accessible to the reader, so to subvert this function for the sake of unique iterations and images within the content was a powerful lesson in the fragile nature of language.


After viewing Allison Parrish's 2015 Eyeo lecture, there were a few things that stuck with me. The first, and what probably struck me the most, was her opinion that the role of the AI bot in art should not be to replicate what a human artist might do, but rather to go into spaces that a human would not go. Parrish demonstrated this point with her example of semantic space and how her bots typically attempt to navigate into the empty parts of that space, effectively going somewhere people would not be able to go systematically on their own.

I was also struck by how humorous generative literature can be. Much of Parrish's work appears to have a humorous element to it, and this might be by design, but I would not be surprised if much of it is actually the result of how we interpret language. Oftentimes, absurd, nonsensical pairings of words are humorous to us. This is, of course, exactly what Parrish's bots produce.


Watching Allison Parrish's talk before actually doing the project gave me a very informative and helpful introduction to the significance of text in my daily life. Coming from a decision-science background, I did research on word recognition and verbal perception. During that research and learning process, what I focused on was human beings' impact on words - for example, how our opinions and personalities influence the way we talk, write, and communicate. However, when Allison talks about "lexical space," what is being reflected is how words reveal our behaviors. It focuses on words and their patterns to discover human beings, instead of focusing on humans and seeing their impact on words. This change in perspective is really interesting. While listening to Allison's talk, my mind jumped to reflections on how our language reveals our political standing, our emotions, etc. Then an idea came to mind: it would be interesting to create an algorithm where drawings are generated based on the keywords in a text, with the text captured through social media, email, messaging, or any other medium of communication. Then, without seeing the subject of the sentence, what interpretations can be formed from the keywords, the word choice, the tenses, and other verbal patterns?