gray-Reading3

One main purpose of art, in my mind, is to show something that has never been seen before. Art as communication of an idea is extremely powerful, and if an idea is represented in a way that has never been seen before, or a completely novel idea is represented, it can catch the attention of the viewer/experiencer and let them understand it in a way nothing else can. In this respect, new technology is crucial to art, in that it can express ideas that can't be expressed any other way.

I don't think my own work is first word art or last word art, really, because I haven't done anything that hasn't been done before, but I also haven't done anything in an already existing medium that is good enough to stand out much in that medium. However, I definitely would lean more toward first word art. I do find it a little daunting to try to use a medium that has been exhausted so fully, or at least to use it in a way that stands out. I get more out of trying to do something that's never been done before than just trying to perfect my technical skills, even though sometimes I wish I didn't feel that way.

One danger I see in using cutting-edge technology is losing your message or losing yourself in the tech. It's easy to come up with an idea that fits the medium rather than vice versa, and while that's a good exercise and can sometimes even have artistic value, you're much more likely to produce something worthwhile if you choose the medium to match the idea.

gray-AnimatedLoop

This piece took a lot longer than I thought it would. In the end, I think I'm happy with it, but I would probably try to add more interesting parts to the fire if I could. It was very difficult to get a shape where I wanted it and moving how I wanted it to, so my initial idea of having a lot of individually moving parts wasn't super feasible. My vision was of the fire being more chaotic. Because fire is so organic, it feels wrong to have this uniformly moving, symmetrical set of shapes representing it, so if I could change that I probably would.

I used the adjustable-center exponential sigmoid function, with a different center and different phases for different parts of the piece. At first, I wanted to use a bunch of different easing functions, but I realized that this easing in and out function was what I wanted, and that the different parts of the fire are all reacting to the same thing, so it makes sense for them to behave similarly.
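For reference, the general shape of that function looks something like this (a rough sketch of the usual shaping-function formulation, with illustrative parameter names, not the exact code from my sketch, which is linked below):

// Rough sketch of an adjustable-center exponential sigmoid easing
// function (illustrative, not the exact code from the piece).
// x is normalized time in [0, 1], a in (0, 1) controls steepness,
// and center sets where the ease-in hands off to the ease-out.
function adjustableCenterSigmoid(x, a, center) {
  const epsilon = 0.0001;
  a = constrain(a, epsilon, 1 - epsilon); // p5's constrain()
  if (x <= center) {
    // exponential ease-in on [0, center]
    return center * pow(x / center, 1 / a);
  }
  // mirrored exponential ease-out on (center, 1]
  return 1 - (1 - center) * pow((1 - x) / (1 - center), 1 / a);
}

Each part of the fire can then be driven with its own phase, e.g. something like adjustableCenterSigmoid((frameCount / period + phase) % 1, 0.3, 0.5), where period and phase are illustrative names.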

p5.js code: https://editor.p5js.org/gray/sketches/g8tvFT-kn

gray-LookingOutwards2

AARON

AARON is an AI developed by Harold Cohen to create original paintings. Cohen has been developing the same program since 1973. There are a lot of things I find fascinating about this project. The algorithm is really an evolving thing, with a lot of complex parts. It's a little difficult to find information, but it seems that AARON has some simple imperative rules as well as some learning functions. It's amazing how much the algorithm has changed since its inception. At first it just did line drawings, then color, then more and more abstract shapes. Its most recent works look like they were made by a different artist than the early ones:

Painting by AARON from 1995

04052, a painting done in 2004

AARON's paintings, I think, are actually simpler than the output of many generative algorithms, and that's something that I find impressive. They are concise.

Harold Cohen has some articles he's written about AARON on his website: http://www.aaronshome.com/aaron/index.html

Here's a video of one of AARON's older paintings: https://www.youtube.com/watch?v=3PA-XApZkso

gray-Interruptions

Observations:

  1. The composition is square.
  2. The composition consists of black lines on a white background.
  3. All the lines are the same length.
  4. The lines have randomized slopes, but tend towards vertical.
  5. There are margins that are about the length of two lines.
  6. The height of the rows and the width of the columns are half the length of the lines.
  7. There are gaps where multiple lines are missing from a section of the grid, but the number of missing lines and the shape and position of each gap are fairly random.
  8. About 5-10% of the grid consists of gaps.
  9. Some lines within gaps are isolated, not adjacent to any other lines.
  10. There are 56 rows and 56 columns.

So I first made the grid of randomized lines that tend toward vertical. Then I wanted to write a weird recursive function to generate the voids as an array of points, starting with one point and then including adjacent points based on a probability. That got really complicated, so when I saw someone talking about just picking a random radius and turning the points inside that radius into voids, I tried that instead. I added a weighted probability that those points wouldn't actually become voids, based on how far they were from the center of the void circle, which made the edges of my circles fuzzier. I added a couple of other things to add fuzziness. It's still definitely not the same program as Molnar's. Hers is very complex from what I can see. It's really impressive that she did that; it definitely seems like some kind of noise function, and mine is pretty far from that. When I squint, I still just see a bunch of circles in mine.
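In p5.js, the core of that approach looks roughly like the following (a simplified sketch with made-up sizes and counts; the real code is at the link below):

// Simplified sketch of the approach described above (illustrative
// values; not the code from the linked sketch).
const ROWS = 56;
const COLS = 56;
const LINE_LEN = 12;           // line length in pixels (made up)
const SPACING = LINE_LEN / 2;  // observation 6
const MARGIN = LINE_LEN * 2;   // observation 5

function setup() {
  createCanvas(2 * MARGIN + COLS * SPACING, 2 * MARGIN + ROWS * SPACING);
  noLoop();
}

function draw() {
  background(255);
  stroke(0);

  // Pick a few random circular voids.
  const voids = [];
  for (let i = 0; i < 5; i++) {
    voids.push({ x: random(width), y: random(height), r: random(20, 60) });
  }

  for (let row = 0; row < ROWS; row++) {
    for (let col = 0; col < COLS; col++) {
      const cx = MARGIN + col * SPACING;
      const cy = MARGIN + row * SPACING;

      // Erase lines inside a void, with a survival chance that grows
      // toward the void's edge so the boundary comes out fuzzy.
      let erased = false;
      for (const v of voids) {
        const d = dist(cx, cy, v.x, v.y);
        if (d < v.r && random() > d / v.r) {
          erased = true;
          break;
        }
      }
      if (erased) continue;

      // Random slope, biased toward vertical (observation 4).
      const angle = PI / 2 + randomGaussian(0, 0.3);
      line(cx - cos(angle) * LINE_LEN / 2, cy - sin(angle) * LINE_LEN / 2,
           cx + cos(angle) * LINE_LEN / 2, cy + sin(angle) * LINE_LEN / 2);
    }
  }
}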

link: https://editor.p5js.org/gray/sketches/LFzBlbk2c

gray-Reading2

I saw a really cool video by Veritasium recently about randomness and information theory, and Galanter's article reminded me of it. In it, Veritasium talks about the effective complexity of the universe, although he doesn't use that term. After watching the video, I think that the universe is in a pretty good sweet spot between a crystal lattice and complete randomness. Maybe because we evolved at this time in the universe, we are well suited for this stage, and that's why I think we're at a pretty good point. But since the universe is trending from the Big Bang (simple) toward total entropy (random), I think our ability to create effectively complex things might come from how effectively complex the universe is right now.

The Problem of Intent: Why is the artist working with and ceding control to generative systems?

I've thought about this before, and I think randomness is crucial to progress. Most original ideas, whether they be inventions or artistic pursuits, come from accidents or random circumstance. Usually we can only work off what we already know, and that can usually only create things we already know. New information, just like Veritasium says in the video above, comes from random chance, because at a certain level, information is randomness.

For this reason, I am ceding control to a partially random system. Hopefully this can lead to new insight that I would never have gotten through work that I fully control.

gray-Reading1

Critical Engineering Manifesto, Tenet 4: "The Critical Engineer looks beyond the 'awe of implementation' to determine methods of influence and their specific effects."

I take this tenet to mean, basically, "Don't believe the hype." Especially in the tech industry, there's an almost religious obsession with new products and new systems (blockchain, cryptocurrency, machine learning, VR, quantum computing, etc.). Often, despite the novelty of these inventions, the applications they are used for quickly become cliché, and the tech becomes much more important than the project.

I'm glad that I found this advice, and I hope to follow it. I want to be sure that each creative project I do has a purpose beyond trying out a new piece of technology. I'm also interested in the second half of this tenet; the authors emphasize influence strongly, as if it were the most important aspect of a project. I definitely want to consider influence more, but I don't know if I feel it should be the top priority.

gray-LookingOutwards1

Generating Human Faces From Sliders

This is a machine learning application that was given a database of high school yearbook photos and learned to generate new yearbook photos based on what it decided are the most important components of those photos. Some of these components can be easily identified and labeled, meaning that theoretically, with some fine-tuning, unique faces could be generated given gender, hair color, height, complexion, etc. The application was developed by CodeParade, a self-described software hobbyist. The program is showcased on their YouTube channel and is available for anyone to download.
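To make the slider idea concrete: a trained decoder network maps a short vector of latent components to an image, and each on-screen slider edits one component, so the network (not the programmer) decides what each slider ends up controlling. Here's a rough conceptual p5.js sketch of that interaction, where decode() is a hypothetical stand-in for the trained network and the latent size is assumed:

// Conceptual sketch of generating faces from sliders (not
// CodeParade's actual code).
const LATENT_DIM = 8; // assumed number of latent components
let sliders = [];

function setup() {
  createCanvas(256, 256);
  for (let i = 0; i < LATENT_DIM; i++) {
    const s = createSlider(-3, 3, 0, 0.01); // roughly unit-normal range
    s.position(10, height + 20 + i * 25);
    sliders.push(s);
  }
}

function draw() {
  // decode() is a hypothetical stand-in: latent vector -> p5.Image.
  const latent = sliders.map(s => s.value());
  image(decode(latent), 0, 0, width, height); // redraw as sliders move
}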

This project was especially impressive to me because despite the fact that it's just a single developer exploring an idea they're interested in, the product actually works surprisingly well, and it's pretty fun to play with. It seems like CodeParade is genuinely curious about the projects they do, and they put serious effort into them. I think this project is a really novel and somewhat intuitive way to showcase and experiment with this aspect of machine learning (autoencoders). They've actually put the project online since I last looked at it: http://codeparade.net/faces/. I really appreciate that they made the code open source and available. It would also be interesting to see what most people think of as the most important components of a human face, and compare those with what the computer thinks. CodeParade has also applied this program to other datasets, such as Garfield comics and names of classes. Check out their website (http://codeparade.net) and the video introducing this project:

Computer Generates Human Faces