I really like her point about how "nonsense is frustrating and scary." People resist nonsense, in this case odd or previously-unspoken orders of words, because it makes them uncomfortable. This makes me think about where something stops being sensible and becomes nonsensical. By an extreme definition, anything original is nonsense, since what people see as sensible comes from what they know. It is the familiar parts of something original that make the whole thing sensible. Parrish's examples have so few familiar parts that they are often dismissed as nonsense.

But should I create nonsensical work? I don't think that I should take nonsense to the far extreme. If I am trying to convey a message, then I may be trying to be relatable. True nonsense cannot be relatable beyond the fact that it is nonsense at all.


What stuck with me about Parrish's talk was the overall theme of the exploration of uncertainty. It's something that's commonly done in science/math areas, yet not so much in the humanities. Parrish gives many options for exploring the unknown, from the most literal way--for example, mapping out semantic space through n-grams and finding the empty spaces--to more abstract ways, such as creating new words by splicing existing words together and generating new definitions. It was something I thought about a lot as I was working on the book project as well, since I felt pulled towards making something more generative/abstract, yet ultimately made something that explored connections between existing language. These two sides of the spectrum feel like the two types of infinity: one that exists between numbers and goes towards the infinitely small, and the other that extends towards infinite largeness.


My biggest takeaway from this lecture was that the greatest tool for understanding chaos is to frame it in as many different ways as possible, finding which frame is the most compatible with our existing tools of perception. It was really interesting to see how organizing word relationships in 3D space gives you the ability to judge those relationships in a really tangible way. It takes information that exists in our minds (as reflected by the medium of recorded text) and represents the most significant aspects of that data in a way that utilizes our spatial reasoning to show us those relationships. I also appreciated the connection she made between these data extrapolation fields and the autonomous recording of weather data from balloons. It's important to realize what these systems are at their lowest level: systems that process information on our behalf when we are unable or too impatient to do so.


There were two points that stuck out to me about this talk, each of which I'll go over below:

  1. Relating generative language to exploration. I really liked Parrish's theme of space exploration throughout the talk, partially because it gave an exciting tone to the lecture, but mostly because it alluded to how much we can learn from generative writing. Parrish was astute in observing that since bots aren't constrained to norms the way that humans are, they can come up with things humans in their current state would be unable to.
  2. What is known and unknown is in the eye of the beholder. I really liked how Parrish talked about how what we deem as nonsense or sensical depends on who we listen to. Recent movements such as #MeToo or Gay Marriage Equality would have been completely dismissed 100 years ago. Conversely, we now look back at certain midcentury opinions that were held as fact and gawk. This shows the importance of both lending a voice to people who have none and maintaining an open mind.


The thing that stuck with me from Allison Parrish's talk was that her goal as a poet is "not to imitate existing poetry but to find new ways for poetry to exist." This reminded me a bit of the talk given by Robbie Barrat, who also uses computers to generate art. The goal of AI-driven art isn't to replace or mimic artists, but to create new things that could never be conceived by a human. I really liked the example of the cute robot explorer going into the unknown, because robots are truly helping us to explore new areas and should be thought of as our helpers rather than our competitors.


I find the idea of coding as exploration in her talk very compelling, not only as it applies to poetry or generative text, but as it applies to creative coding in general. Thinking of it as exploration opens up the world of "happy accidents": if you are exploring, you might not completely know what it is you're looking for, and I find that exciting. I love her discussion of the creation of "nonsense," and the idea that "what you thought was nonsense was actually sensical all along, you just had to learn how to look at it right." Robots being reporters is also a funny idea - that you are asking a bot or robot to go find things out for you.


I love Allison Parrish's comparison of literature with space exploration: there are places even in literature that are mostly unexplored because they are "taboo," such as books that only repeat a single word or speak in a made-up generated language. This particular point stuck with me because it made me realize that other fields, not just space and literature, have this exciting opportunity. It makes me imagine what it would be like to explore vastly different fields with automatic systems, programs, or robots in ways that people had never thought of or never thought were worthy of much exploration.



Being a connoisseur of space exploration, robotics, and linguistics, I found this video hit right at home with me. There's one thing in particular I'd like to mention here, however, and that is Allison's beautiful use of metaphor. The image of 'bots' we're sending out on a journey into linguistic space, and which are sending us back signals from their exploration, is an incredibly powerful one in my mind, and it really helped me shape my project. It allowed me to think of my work as all of these little creatures I was sending out into the void and getting answers back from - quite a novel approach to creative thought.


Above all, I think the way she defines her text robots as explorers that explore "whatever parts of language that people usually find inhospitable" is very inspiring to me. Personally, I have never thought of generative texts in this way. I agree that we often want to create robots that can closely resemble humans, no matter if it's a bot that plays chess or one that reconstructs languages. Thus, embracing the rawness and the awkwardness of the content created by a machine and, more importantly, finding meaning within it can in some sense open up possibilities for new ways of understanding languages, and more specifically, the holes that we had naturally avoided.

Another point that stuck with me is the phrase "gaps could have been there for a reason," as those gaps might indicate violence or other harmful things. I think this is an important point to make. When we make automated machines and let them out into the world, we often consider what they create to be out of our control, and just what it is (e.g., TayTweets). However, I totally agree with the speaker that we, as the creators of the bots, need to take on the responsibility of actively taking precautions against those undesirable outputs.


The part of the lecture that stuck with me the most was the discussion of the mapping of the WordNet hierarchy onto the cortex of the brain. The fact that the brain has this topographic representation of words and concepts is quite incredible. There are other mappings in the brain too; visual and auditory information is represented in a sort of hierarchy throughout the cortex. The fact that this also extends to language is even more impressive given that language is evolutionarily newer, and therefore less ingrained in the structure of the brain. These findings suggest a sort of innate mapping of the abstract meanings of words that affects how they are perceived and processed. A very interesting bit of research presented there.