iSob-LookingOutwards1

The new and improved Artbreeder UI.

For this blog post, I wanted to go into detail about GANbreeder (soon to be renamed Artbreeder), my favorite software art tool/project and one I have used extensively. GANbreeder is a web application developed by Joel Simon, a BCSA alum, that allows users to interact with the latent space of BigGAN.

BigGAN is a generative adversarial network (GAN) that benefits from many more parameters and a larger dataset than a traditional GAN. As a result, its creations achieve a higher Inception score, a measure of the diversity and photorealistic quality of a generative network's output. On GANbreeder, users can mix "genes" (ImageNet categories, say 'Siamese cat' and 'coffeepot'), and BigGAN will create novel images within the latent space between those categories. By "editing genes" (adjusting sliders), the user can modulate the relative weight the network gives to one category or another. The user can "crossbreed" two prior GANbreeder creations to mix their categories into one image, or spawn "children," which are randomized variations that tweak the image slightly.
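
To make the mechanics concrete, here is a minimal sketch of how this interface might map onto BigGAN's inputs. This is not Simon's actual code: `biggan_generate` is a hypothetical stand-in for a pretrained generator, and the latent size, class indices, and perturbation scale are illustrative.

```python
import numpy as np

NUM_CLASSES = 1000   # ImageNet-1k categories (the "genes")
LATENT_DIM = 128     # BigGAN latent size (illustrative)

def biggan_generate(z, y):
    """Hypothetical stand-in for a pretrained BigGAN generator:
    maps (latent z, class vector y) -> image. This stub returns a
    blank placeholder so the sketch runs without model weights."""
    return np.zeros((256, 256, 3))

def one_hot(idx):
    y = np.zeros(NUM_CLASSES)
    y[idx] = 1.0
    return y

rng = np.random.default_rng(0)

# Mixing "genes": blend the class vectors of two ImageNet categories.
# The blend weights play the role of GANbreeder's sliders.
SIAMESE_CAT, COFFEEPOT = 284, 505   # ImageNet class indices
y_mixed = 0.7 * one_hot(SIAMESE_CAT) + 0.3 * one_hot(COFFEEPOT)

# A latent vector picks one point between the categories; "children"
# are small random perturbations of the parent's latent.
z_parent = rng.standard_normal(LATENT_DIM)
z_child = z_parent + 0.1 * rng.standard_normal(LATENT_DIM)

parent_image = biggan_generate(z_parent, y_mixed)
child_image = biggan_generate(z_child, y_mixed)
```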

What I find most inspiring about GANbreeder as a project is the magical, surreal quality of the images that come out of it. These dreamlike images provoke responses ranging from awe to fear, and call into question our straightforward sense of how meaning is created in the mind. Perceptual information in the visual system provokes memories and emotions in other parts of the brain. But where in this process does a slightly misshapen dog make us feel so deeply uncomfortable?

As a tool, GANbreeder is inspiring because it democratizes a cutting-edge technology -- the user doesn't have to write a single line of code, much less possess a graduate ML degree. I've been interested in AI art since high school, but coding doesn't come naturally to me, so I have this project to thank for keeping my interest alive and helping me get a sense of what work I want to make.

From a conceptual standpoint, GANbreeder raises complicated questions about authorship. I chose the categories that make up 'my' creations and messed with sliders, but who 'made' the resulting image? Was it me, Joel Simon, the researchers who developed BigGAN, or the network itself? What about Ian Goodfellow, who is said to have 'invented' GANs in 2014, or all the researchers going back to the early days of AI? You can read here about a dispute between Danielle Baskin, an artist and active GANbreeder user, and Alexander Reben, who painted (through a commissioned painting service) a near-exact copy of one of Baskin's generated 'works.' At the time, GANbreeder was anonymous, but Simon has since implemented user profiles. It's not clear whether this will solve the question of authorship or merely complicate it further. As the case of Edmond de Belamy showed, any given person or group's ownership of an AI artwork is tenuous at best.

Simon is currently at work on a GANbreeder overhaul. Not only will the project be renamed Artbreeder, but it will also expand to include more categories, an improved ML approach (BigGAN-deep), and specialized models for better generation of album covers, anime faces, landscapes, and portraits. I'm in the Artbreeder beta, and I still think the standard BigGAN model ('General') produces the most exciting images. Maybe the lack of commonality between its categories leads to weirder and more unexpected imagery. But overall, as a sort of participatory, conceptual AI art project, GANbreeder is one of my favorite things created in the last two years.

Here's a collection of my GANbreeder creations that I'm most satisfied with (I like to make weird little animals).


There isn't a single artist I would say is making the 'best' GANbreeder work, but you can find great stuff on the Reddit page or the featured page on the site.

MoMar-lookingoutwards01


I often look for inspiration in the 1996 video game "The Elder Scrolls II: Daggerfall." It was developed by only 27 people (excluding beta testers), a tiny team compared to modern studios. Daggerfall ran on XnGine, one of the first true 3D game engines. In terms of gameplay, it draws heavily on the tabletop roleplaying game Dungeons and Dragons and on earlier first-person PC RPGs like Ultima Underworld. Aesthetically speaking, the art direction draws clear inspiration from medieval fantasy art; the lead artist, Mark Jones, worked on many DnD-themed games. As for the game's immersion, consider what players can do: buy and sell buildings, borrow money from banks, barter with merchants, own boats, and explore a region the size of the United Kingdom. If all this was possible in 1996, why can't we do it now? I admire this project for its ambitious world and hand-drawn art.

It's worth mentioning that a small community of modders has ported the game to Unity, and Daggerfall has never looked better!

gray-LookingOutwards1

Generating Human Faces From Sliders


This is a machine learning application that was given a database of high school yearbook photos and learned to generate new yearbook photos based on what it decided were the most important components of those photos. Some of these components can be easily identified and labeled, meaning that theoretically, with some fine-tuning, unique faces could be generated given gender, hair color, height, complexion, etc. The application was developed by CodeParade, a self-described software hobbyist. The program is showcased on their YouTube channel, and is available for anyone to download.
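
As a rough sketch of how such a slider interface maps onto an autoencoder's latent space (this is a guess at the structure, not CodeParade's actual code; `decode` and `LATENT_DIM` are hypothetical):

```python
import numpy as np

LATENT_DIM = 80   # number of sliders / latent components (illustrative)

def decode(latent):
    """Hypothetical decoder half of a trained autoencoder: maps a
    small latent vector (the slider values) to a face image. This
    stub returns a blank placeholder so the sketch runs."""
    return np.zeros((64, 64))

# Each UI slider directly sets one latent component. Components the
# training discovered tend to correlate with interpretable features
# (hair color, lighting, head tilt, ...), which is what makes the
# sliders feel meaningful.
sliders = np.zeros(LATENT_DIM)
sliders[0] = 1.5     # e.g. might darken the hair
sliders[3] = -0.8    # e.g. might tilt the head

face = decode(sliders)
```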

This project was especially impressive to me because even though it's just a single developer exploring an idea they're interested in, the product works surprisingly well, and it's pretty fun to play with. CodeParade seems genuinely curious about their projects, and they put serious effort into them. I think this project is a really novel and fairly intuitive way to showcase and experiment with this aspect of machine learning (autoencoders). They've actually put the project online since I last looked at it: http://codeparade.net/faces/. I really appreciate that they made the code open source and available. It would also be interesting to see what most people consider the most important components of a human face, and compare those with what the computer thinks. CodeParade has also applied this program to other datasets, such as Garfield comics and names of classes. Check out their website (http://codeparade.net) and the video introducing this project:

Computer Generates Human Faces


tli-lookingoutwards01

https://www.choiceofgames.com/creatures-such-as-we/#utm_medium=web&utm_source=ourgames

For this Looking Outwards, I'd like to share a charming text-based adventure game site I found around 2009, along with my favorite game from their collection. Choice of Games is a company that produces and hosts text-based adventure games created using their scripting language, ChoiceScript. I like these works because they are lengthy, well-written, fantasy-esque games with meaningful choices, but there are also a couple of interesting characteristics of this platform that are relevant to this class.

First is how Choice of Games hosts curated user-created games written with ChoiceScript. I find this relevant to how media art seeks to be completely democratized and accessible, but I am reminded of a post I saw a long time ago complaining about ChoiceScript ruining the sanctity of text-based adventures. This makes me wonder why creative technology seems particularly prone to gatekeeping. Perhaps this is because art in general is prone to gatekeeping, but the exclusiveness is highlighted when juxtaposed with technology's affinity for mass adoption.

Second, I cannot help drawing comparisons to Twine and wondering why Twine, rather than ChoiceScript, has become the creative's tool of choice. The most straightforward conclusion is that ChoiceScript was created specifically for text-based adventure games, whereas Twine supports choice-based games more broadly: HTML, CSS, and visual languages are all tools Twine supports, while with ChoiceScript, narrative trees and eloquence are the only tools available. I don't see this as a downside, because tighter constraints make for rich storytelling, but ChoiceScript work definitely offers less interactivity and functions more like a novel.
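
To illustrate what "narrative trees as the only tool" means in practice, here is a toy choice tree in Python. It is illustrative only, neither ChoiceScript nor Twine, whose actual syntaxes differ; the passages are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str
    choices: dict = field(default_factory=dict)  # choice label -> next Passage

# A tiny branching story: each node is a passage, each edge a choice.
ending = Passage("You set the controller down, satisfied.")
story = Passage(
    "The moonbase hums around you.",
    {
        "Keep playing": ending,
        "Step outside": Passage("The Earth hangs overhead."),
    },
)

def play(node):
    """Walk the tree, printing each passage and prompting for a choice."""
    while True:
        print(node.text)
        if not node.choices:
            return
        for i, label in enumerate(node.choices, 1):
            print(f"{i}. {label}")
        pick = int(input("> ")) - 1
        node = list(node.choices.values())[pick]

# play(story)
```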

What I desire more from Choice of Games are narratives that are not standard fantasy-adventure stories but something more introspective or exploratory. This is why I link Creatures Such as We as an example instead of one of their more popular games, like Choice of the Cat (yes, it's exactly what you'd expect). Creatures Such as We is "a philosophical interactive romance novel by Lynnea Glasser" about video games on the moon. I don't have any justification for liking this story besides being a sap.

ilovit-lookingoutwards01

"The Purpose of Water" is a browser game that Stephen Lavelle made in a month. It is one of the many small games that he publishes for free on his website. It's a kind of puzzle game, but an obtuse one, where the real puzzle is figuring out how to interact with the logic of the game. All the pieces are packed with symbolism which doubles as game logic, so the solution to the puzzle is also the story being told. The way all the parts interact is so elegant and complete that the experience lasts after you finish the relatively short amount of time it takes to play. The ending is especially impactful in how it uses the vocabulary that the rest of the game introduces to deliver something genuinely meaningful and surprising.

To create it, he used a free programming library called Haxegon, but presumably wrote the game logic himself. The pixel-art aesthetic and grid-based format seem inspired by old video games, and the content references folklore.

The project points to other ways simple game logic can be harnessed for powerful symbolism.

"The Purpose of Water" by Stephen Lavelle
https://www.increpare.com/game/the-purpose-of-water.html

rysun-lookingoutwards01


Project: Lingdong Huang, CMU 15-112 Term Project: Hermit (2015)
Link to Video: https://www.youtube.com/watch?v=mPYeTJd8klQ

Link to GIF: https://giphy.com/embed/XBuy5KwySYOAruoFrN


Lingdong Huang, Hermit (2015)

One person, Lingdong Huang, was involved in the making of this project. What is particularly inspiring about this project is that its creator is not (or at least, wasn't at the time) a professional artist, but rather a fellow CMU student. I assume it took Huang only a few weeks to complete this simulation, as I also took 15-112 and am familiar with the term project. Huang wrote the project in Python, and was likely inspired by preexisting procedural generation projects that mimic nature.

A student showed me this video when I visited CMU as a high school senior. The combination of aesthetics and algorithmic complexity in this project left a lasting impression that led me to apply to CMU. I was drawn to the minimalistic yet effective use of monochromatic tones to evoke a quiet, fantastical atmosphere, and to the use of simple polygons to create organic, lifelike creatures. The technique of building a "skeleton" to animate the creatures can also be seen in Lingdong's later work, doodle-place, which was shown in lecture today.
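
I can only guess at how Hermit's creature animation works, but a skeleton-driven approach might look something like the following sketch. This is hypothetical, not Huang's actual code; the bone counts and oscillation constants are invented.

```python
import math

def skeleton_points(num_bones, bone_length, t):
    """Return 2D joint positions for a chain of bones at time t.
    Each joint's angle sways sinusoidally, phase-shifted down the
    chain, producing an organic, gait-like motion."""
    x, y, angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for i in range(num_bones):
        angle += 0.4 * math.sin(2.0 * t + 0.8 * i)  # per-joint sway
        x += bone_length * math.cos(angle)
        y += bone_length * math.sin(angle)
        points.append((x, y))
    return points

# Drawing simple polygons around successive joints would then yield
# a creature whose body follows the skeleton's motion frame by frame.
for frame in range(3):
    print(skeleton_points(num_bones=4, bone_length=10.0, t=frame * 0.1))
```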