Work

lsh-Reading01

The first tenet of the Critical Engineering Manifesto by Julian Oliver states:

The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.

This tenet could be explained as follows.

Any convenience comes at a price, and the more integral a piece of technology becomes in one's life, the more suspicious its user should be. The critical engineer's task is to discover the full extent of that price and its tradeoffs.

I appreciate this tenet for its relevance to contemporary life with technology. As we pour more of our personal lives into our cellphones and grow reliant on smart assistants, convenience and privacy increasingly seem to sit at opposite ends of a spectrum.

A real-world example of this tradeoff is using a smart assistant for location-based dining. Asking the assistant to find nearby restaurants gives the convenience of a quick search, but without knowing the device's biases in deciding which restaurants to list, the user cannot tell whether the manufacturer is promoting certain restaurants over others.

iSob-LookingOutwards1

The new and improved Artbreeder UI.

For this blog post, I wanted to go into detail about GANbreeder (soon to be renamed Artbreeder), my favorite software art tool/project and one which I have used extensively. GANbreeder is a web application developed by Joel Simon, a BCSA alum, that allows users to interact with the latent space of BigGAN.

BigGAN is a generative adversarial network (GAN) that benefits from many more parameters and larger datasets than a traditional GAN. As a result, its creations achieve a higher Inception score, a measure of the diversity and photorealistic quality of a generative network's output. On GANbreeder, users can mix "genes" (ImageNet categories, say 'Siamese cat' and 'coffeepot'), and BigGAN will create novel images within the latent space between those categories. By "editing genes" (adjusting sliders), the user can modulate the network's relative confidence in one category or another. The user can "crossbreed" two prior GANbreeder creations to mix their categories into one image, or spawn "children": randomized variations on the confidence levels that tweak the image slightly.
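The gene-mixing idea can be sketched in a few lines of numpy. BigGAN conditions each image on a latent vector and a class embedding, so blending two categories amounts to a weighted combination of their class vectors (the sliders set the weights), and "children" are small random perturbations of the current settings. Everything below is a toy stand-in, not the real BigGAN API: the embeddings are random placeholders, and an actual generator network would render the final image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for BigGAN's class-embedding table: one vector per ImageNet category.
# (Random placeholder values; the real embeddings live inside the trained network.)
EMBED_DIM = 128
class_embeddings = {
    "Siamese cat": rng.normal(size=EMBED_DIM),
    "coffeepot": rng.normal(size=EMBED_DIM),
}

def mix_genes(weights):
    """Blend class vectors by slider weights, e.g. {"Siamese cat": 0.7, "coffeepot": 0.3}."""
    return sum(w * class_embeddings[name] for name, w in weights.items())

def spawn_children(z, n=4, scale=0.1):
    """'Children' are slight random tweaks of the current latent vector."""
    return [z + rng.normal(scale=scale, size=z.shape) for _ in range(n)]

z = rng.normal(size=EMBED_DIM)   # latent vector, held fixed while "editing genes"
c = mix_genes({"Siamese cat": 0.7, "coffeepot": 0.3})
children = spawn_children(z)
# A real BigGAN would now render generator(z, c) into an image.
```

Crossbreeding two creations works the same way: take the union of both images' gene weights and mix them into a single class vector.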

What I find most inspiring about GANbreeder as a project is the magical, surreal quality of the images that come out of it. These dreamlike images provoke responses ranging from awe to fear, and question the straightforward sense of how meaning is created in the mind. Perceptual information in the visual system provokes memories and emotions in other parts of the brain. But where in this process does a slightly misshapen dog make us feel so deeply uncomfortable?

As a tool, GANbreeder is inspiring because it democratizes a cutting-edge technology -- the user doesn't have to write a single line of code, much less possess a graduate ML degree. I've been interested in AI art since high school, but coding doesn't come naturally to me, so I have this project to thank for keeping my interest alive and helping me get a sense of what work I want to make.

From a conceptual standpoint, GANbreeder raises complicated questions about authorship. I chose the categories that make up 'my' creations and adjusted the sliders, but who 'made' the resulting image? Was it me, Joel Simon, the researchers who developed BigGAN, or the network itself? What about Ian Goodfellow, who is said to have 'invented' GANs in 2014, or all the researchers going back to the early days of AI? You can read here about a dispute between Danielle Baskin, an artist and active GANbreeder user, and Alexander Reben, who painted (through a commissioned painting service) a near-exact copy of one of Baskin's generated 'works.' At the time, GANbreeder was anonymous, but Simon has since implemented user profiles. It's not clear whether this will solve the question of authorship or merely complicate it further. As the case of Edmond de Belamy shows, any given person or group's ownership of AI artwork is tenuous at best.

Simon is currently at work on a GANbreeder overhaul. Not only will the project be renamed Artbreeder, it will expand to include more categories, an improved ML approach (BigGAN-deep), and specific models for better generation of album covers, anime faces, landscapes, and portraits. I'm in the Artbreeder beta, and I still think the standard BigGAN model ('General') produces the most exciting images. Maybe it's because the lack of commonality between the categories leads to weirder and more unexpected imagery. But overall, as a sort of participatory, conceptual AI art project, I think GANbreeder is one of my favorite things created in the last two years.

Here's a collection of the GANbreeder creations I'm most satisfied with (I like to make weird little animals).


There isn't a singular artist who I would say is making the 'best' GANbreeder work, but you can find great stuff on the Reddit page or the featured page on the site.

MoMar – reading01

  1. The Critical Engineer looks to the history of art, architecture, activism, philosophy and invention and finds exemplary works of Critical Engineering. Strategies, ideas and agendas from these disciplines will be adopted, re-purposed and deployed.

I see this rule as the following: "An Engineer should look back to the past to learn how to implement ideas in the present."

Interestingly, I had unknowingly been following this rule for a couple of years. I tend to look at how work was done in the past and consider how I can repurpose those ideas to work in the present.

When I made Dungeon Crawler VR, I created maze-like dungeons from modular pieces, which sped up the development process. This tactic is used in professional game development. The actual gameplay derived from the dungeon crawlers of the 1990s.

rysun-CriticalEngineeringManifesto

"5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user's dependency upon it."

I interpret this tenet to mean that the Critical Engineer is responsible for understanding the potential effects their work may have on the people who use it. Art and technology have the capacity to change people, with this capacity being "proportional to that user's dependency upon it": the more invested the consumer is in the product, the more likely it is to change them. I thought this tenet was interesting because it acknowledges that inanimate objects engineer people just as often as people engineer inanimate objects.

A real example of this is the modern person's relationship with their smartphone. It's clichéd at this point, but most people living in the developed world today depend on their smartphones for communication, information, entertainment, transportation, and more. This dependency can lead to addiction, and it can alter the user's lifestyle and relationships with other people. At the same time, this new way to connect with people around the world, and to access a growing wealth of information unavailable to previous generations, can contribute to the user's cultural and intellectual growth.

MoMar-lookingoutwards01


I often look for inspiration in the 1996 video game "The Elder Scrolls II: Daggerfall." It was developed by only 27 people (excluding beta testers), a small team compared to modern studios. Daggerfall ran on XnGine, one of the first true 3D game engines. In terms of gameplay, it draws heavily on the tabletop roleplaying game Dungeons and Dragons and on earlier first-person PC RPGs like Ultima Underworld. Aesthetically, the art direction takes clear inspiration from medieval fantasy art; the lead artist, Mark Jones, worked on many DnD-themed games. As for the game's immersion, consider what players can do: buy and sell buildings, take out loans from banks, barter with merchants, own boats, and explore a region the size of the United Kingdom. If all this was possible in 1996, why can't we do it now? I admire this project for its ambitious world and its hand-drawn art.

It's worth mentioning that a small community of modders has ported the game over to Unity 3D, and Daggerfall has never looked better!

gray-LookingOutwards1

Generating Human Faces From Sliders


This is a machine learning application that was given a database of high school yearbook photos and learned to generate new yearbook photos based on what it decided are the most important components of those photos. Some of these components can be easily identified and labeled, meaning that, theoretically, with some fine-tuning, unique faces could be generated given gender, hair color, height, complexion, etc. The application was developed by CodeParade, a self-described software hobbyist. The program is showcased on their YouTube channel and is available for anyone to download.
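The slider idea maps directly onto how a latent space works: the model compresses each photo into a handful of numbers, and a decoder maps any such vector back to an image, so moving one slider means varying one latent coordinate. Below is a toy sketch of that idea using PCA as a linear stand-in for the learned autoencoder; the "faces" are random placeholder data, not the actual yearbook set, and the function names are my own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in data: 200 tiny 8x8 "face" images, flattened. (Random placeholders,
# not real yearbook photos.)
faces = rng.normal(size=(200, 64))
mean_face = faces.mean(axis=0)

# PCA via SVD: the principal components play the role of the learned latent
# directions that each slider controls.
_, _, components = np.linalg.svd(faces - mean_face, full_matrices=False)
n_sliders = 5

def decode(sliders):
    """Map slider settings (one number per latent direction) back to an image vector."""
    return mean_face + sliders @ components[:n_sliders]

# All sliders at zero reproduces the average face; nudging one slider moves the
# image along one learned direction (e.g. hair color, in the real application).
image = decode(np.array([1.5, -0.3, 0.0, 0.0, 0.2]))
```

The real project uses a nonlinear autoencoder rather than PCA, so its directions capture richer features, but the slider-to-image pipeline is the same shape.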

This project was especially impressive to me because, despite being a single developer's exploration of an idea they find interesting, the product actually works surprisingly well, and it's pretty fun to play with. CodeParade seems genuinely curious about their projects and puts serious effort into them. I think this project is a really novel and fairly intuitive way to showcase and experiment with this aspect of machine learning (autoencoders). They've actually put the project online since I last looked at it: http://codeparade.net/faces/. I really appreciate that they made the code open source and available. It would also be interesting to see what most people think of as the most important components of a human face, and to compare those with what the computer picks out. CodeParade has also applied this program to other datasets, such as Garfield comics and names of classes. Check out their website (http://codeparade.net) and the video introducing this project:

Computer Generates Human Faces