Zarard-LookingOutwards02

Inside Social Soul
Inside Social Soul (Perspective 2)

Social Soul is a social media installation created in collaboration by Lauren McCarthy and Kyle McDonald. It brings a user's Twitter stream and profile to life in a 360-degree environment of monitors, mirrors, and sound.

More impressively, they created an algorithm to match users with other attendees and speakers at the conference where it was installed, and displayed their match's stream. The user was then allowed to connect with their social soul mate.

They used seven different programming languages to make it, and the visual and audio arrangements were computationally generated live.

What inspires me about this piece is the fact that it is personal. Every user can step in and feel that this is a piece made for them. And when you leave, the experience isn't over, because it connects you to a real, live person, so participating in Social Soul outlives the installation itself. I really enjoy the idea of connecting people and exposing the differences and similarities among social circles, which is why this piece fascinates me.

Zarard-lookingoutwards09

For this project I decided to look at something related to my Teenie Harris Research.

The Loop Suite
Kids with Santa

Jason Salavon centers his artistic practice on visual averages and overlays. Because he chooses datasets with high similarity, layering all the images together lets you pull out key insights about the situations depicted in the photographs. In the top image, you get a sense of the shapes that come through in Chicago's inner loop: very tall, long buildings. In the Santa photo, you notice that many of the children sit on the same leg and are very small, probably all less than 6 years old. I might end up trying this in my visual annotations of the Teenie Harris archive.
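
Out of curiosity, the core of an image average like Salavon's is simple enough to sketch. Below is a rough Python/Pillow version of the idea, assuming a folder of photos that have already been resized to identical dimensions; this is my own sketch, not Salavon's actual process.

    # average_sketch.py -- rough sketch of a Salavon-style image average.
    # Assumes ./photos/ holds images already resized to identical dimensions
    # and the same color mode; the folder name is a placeholder.
    import glob
    import numpy as np
    from PIL import Image

    paths = sorted(glob.glob("photos/*.jpg"))
    stack = np.zeros(np.array(Image.open(paths[0])).shape, dtype=np.float64)

    for p in paths:
        stack += np.array(Image.open(p), dtype=np.float64)   # accumulate pixel values

    mean = (stack / len(paths)).astype(np.uint8)              # per-pixel average
    Image.fromarray(mean).save("average.png")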

Zarard-lookingoutwards07

A visual study from the artist on coherence, plausibility, and shape.

I decided to look at Moritz Stefaner because he created one of my favorite data visualizations, Selfiecity. While surfing his website I found the OECD Better Life Index visualization.

Countries represented by their wellness components.

THIS IS SUCH A GOOD DATA VIZ. The link to it is here: http://www.oecdbetterlifeindex.org/#/11111111111

It uses a flower. Flowers are pretty and aesthetically pleasing, but also a simple enough shape to still be legible. Additionally, each petal represents a different factor in a country's wellness. Not to mention, it is interactive and customizable. People are much more engaged when they can see how the data relates to them personally; I think that might be one key insight that will stick with me through my data viz pursuits.
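
To make the encoding concrete: each flower is essentially a polar bar chart, with one petal per wellness factor and the petal's length mapped to that factor's score. Here is a rough matplotlib sketch of that idea, using made-up factor names and values rather than the actual OECD data.

    # flower_glyph_sketch.py -- one petal per factor, petal length = score.
    # Factor names and scores below are invented for illustration, not OECD data.
    import numpy as np
    import matplotlib.pyplot as plt

    factors = ["housing", "income", "jobs", "community",
               "education", "environment", "health", "safety"]
    scores = [6.0, 4.5, 7.2, 8.1, 6.8, 5.5, 7.9, 6.3]   # fake 0-10 scores

    angles = np.linspace(0, 2 * np.pi, len(factors), endpoint=False)
    ax = plt.subplot(projection="polar")
    ax.bar(angles, scores, width=2 * np.pi / len(factors) * 0.8,
           bottom=1.0, color="mediumseagreen", edgecolor="white")   # the petals
    ax.set_xticks(angles)
    ax.set_xticklabels(factors)
    ax.set_yticks([])
    plt.savefig("flower.png", bbox_inches="tight")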

Zarard-lookingoutwards06

The bot I like is the ASCII Art bot. I'm always into computationally generated art, and even though it doesn't say how it is made, since a piece is posted every 30 minutes I'm assuming it is computationally generated. One thing this bot does extremely well is create character and emotion. It's not like these pieces are just stick figures; they are overwhelmingly personified by the simplest of characters.

Although I really enjoy this bot, I'd like it more if it played with font, boldness, italics, or formatting. I think it could stand to use the full range of text-editing options.

Zarard-lookingoutwards08

and the wind was like the regret for what is no more by João Costa

What it is: “This work consists of a set of sixteen bottles – with air blowers attached to each one of them – and a wind vane. The vane is fixed on the outside of a window and detects the direction the wind is blowing. Inside of the room, the motor starts blowing air into the bottle that corresponds to that particular direction. This event generates a smooth sound, and each direction has its own pitch. The bottles are arranged in a circle, similar to the shape of the compass rose, depicting the eight principal winds and the eight half-winds.” – Costa

To be honest, I thought this was referencing some important historical monument, but I did some research and I was actually just thinking of a SpongeBob episode.

The episode "SpongeHenge." Not the monument Stonehenge. Honest mistake.

I think what makes the project so effective is that it requires your full attention to really be aware of what's going on. The artist is capturing wind direction with sound, which is something you probably wouldn't notice if you weren't fully present in the moment. Wind direction isn't something people are generally attuned to, so for us it is something like the invisible.

The capturing of the invisible, which is what the artist claims to get across, isn't quite there for me. The sound is obvious, but I don't think it would be immediately clear to me that the sound is linked to the wind direction (at least from the documentation). I think the winds would have to be more forceful and controlled than what that environment provides.

However, I think the project is technically sound.

Zarard-Manifesto

The tenet I chose is the one where the critical engineer doesn't just marvel at a new technology because it is new and combines cool elements of technology. The tenet says that the critical engineer looks beyond how their work is implemented to see how it will actually have an impact; more specifically, the critical engineer digs into the specifics of that impact. An example of this is the advent of the social media platform. When Facebook, Twitter, Snapchat, et cetera became popular, they were initially conceived of as ways to share light-hearted photos, jokes, and stories. However, the engineers didn't really look into the depth of what it means to be social. Being social sometimes means being envious, which is why people who spend unhealthy amounts of time on social media are prone to depression. Being social means competing to get the most friends, asking people for money, arguing, and ignoring. But because the focus was originally on the technology and the capability, it wasn't until years later that the apps were refined to account for things like fraud, hate speech, and suicide-related posts.

Zarard- LastProject

Over the semester I've been working with the Carnegie Museum of Art to analyze artwork by the photographer Teenie Harris. Teenie Harris was an amazing photojournalist who captured one of the most comprehensive visual records of Black American life from the 1930s to the 1970s. Because I will be working on this project for the next 1.5 years, I wanted my last project to lay the foundation for future explorations.

So my project was essentially to create a collection of scripts to aid me in visually annotating the Teenie Harris archive, and to create a system for storing that information.

Things I did over the 3 weeks:

Got code working with Microsoft Azure to get face, emotion, and tag data for the Teenie Harris archive, which involved debugging their starter code and working with tech support to figure out why my API keys didn't work.

Figured out Jupyter

Installed and set up a MongoDB database to hold data from Teenie Harris Archive.

Learned the Pymongo driver for interacting with MongoDB through python.

Learned multithreading so that the code could run 12 times as fast (hours instead of weeks); a rough sketch of the resulting loop appears after this list.

Integrated the data and descriptions from the Carnegie Museum of Art into the database.

Integrated the data and descriptions from dlib into the database.

Got familiar with the OpenCV library and the Pillow library for annotation and photo manipulation.

Created images that combined CMOA, Dlib, Azure, and OpenCV data and inserted them into a database.
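
The heart of that pipeline is a small loop: a thread pool fans the photos out to an analysis step, and each result gets written into MongoDB through PyMongo. Here is a stripped-down sketch of that shape; analyze_photo() is a hypothetical stand-in for the actual Azure/dlib/OpenCV calls, and the paths and database names are made up.

    # pipeline_sketch.py -- rough shape of the annotate-and-store loop, not the
    # actual project code. analyze_photo() is a hypothetical placeholder for the
    # Azure / dlib / OpenCV calls; paths and database names are invented.
    import glob
    from concurrent.futures import ThreadPoolExecutor
    from pymongo import MongoClient

    photos = MongoClient("mongodb://localhost:27017")["teenie"]["photos"]

    def analyze_photo(path):
        # placeholder for the real face / emotion / tagging requests
        return {"path": path, "faces": [], "tags": []}

    def process(path):
        doc = analyze_photo(path)
        photos.update_one({"path": path}, {"$set": doc}, upsert=True)

    paths = glob.glob("archive/*.jpg")
    with ThreadPoolExecutor(max_workers=12) as pool:    # threads give the ~12x speedup
        list(pool.map(process, paths))                  # force completion / surface errors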

All of this work sets me up to do meaningful composition analysis on the data. View the results below:

Zarard-final-proposal

Preface: The Teenie Harris Archive is a collection of photographs currently held by the CMOA from the photographer Teenie Harris, who captured African-American life in Pittsburgh from the 1930s through the 1980s. His work is one of the most comprehensive lenses we have for viewing African-American life during that period in America.

So as I’ve been working on computationally analyzing the Teenie Harris Archive, I’ve realized that my artistic practice lies more in the realm of the artifact. My main interest is information visualization, but not in the way of conveying data and ideas to other people, but in the imagery and aesthetics of data to the computer. The images below are just a few of the artifacts produced through my analysis of the archive, but they are a far cry from what we usually think of when we talk about data visualization. However these images below are what the data looks like. So my final project is going to be the creation of another artifact: 8 bit image tags. Actually, it’d be somewhat like QR codes for photos. The way i see it, Each pixel has 6 dimensions (row, column, red, green, blue, and alpha), Each of those dimensions can be used to represent a feature of the data, and multiple pixels can be used to represent the multiple features that the photo is comprised of.

Features that might be used

  • Faces (Number of them, male or female, approximate age, position in photo)
  • Line of best fit (number of them, rho, theta, sum of squared residuals)
  • Hough Lines (see above)
  • Tags by Microsoft API (tag, confidence of tag, location of tag in photo)
  • Classification of Courier vs. Non-Courier
  • Edge Detection results
  • Fourier Transform results
  • More will be considered, but the end result could lead to new methods of searching, sorting, or storing this information, hopefully leading to a fuller understanding of the data.
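
As a first pass at what such a tag could look like in code, the sketch below quantizes each feature to a byte and lays the bytes out as pixels of a tiny grayscale image. The feature values and the 8×8 layout are placeholders for illustration, not a final encoding scheme.

    # tag_sketch.py -- toy version of the "QR code for a photo" idea: quantize
    # each feature to 0-255 and store the bytes as pixels of a tiny image.
    # The example feature vector below is invented for illustration.
    import numpy as np
    from PIL import Image

    def encode_tag(features, size=8):
        """Pack a list of 0-255 feature values into a size x size grayscale tag."""
        data = np.zeros(size * size, dtype=np.uint8)
        data[: len(features)] = features          # unused pixels stay 0
        return Image.fromarray(data.reshape(size, size), mode="L")

    def decode_tag(img):
        """Read the feature bytes back out of a tag image."""
        return np.array(img).flatten().tolist()

    # e.g. [face count, mean age, hough-line count, Courier flag, ...] as bytes
    tag = encode_tag([3, 27, 14, 1, 180, 92])
    tag.resize((80, 80), Image.NEAREST).save("tag_preview.png")   # enlarged for viewing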

    resizedequalized_2035

    copy_2010

    final_54021

    screen-shot-2016-11-18-at-12-06-56-pm

    zariacontrasts2052

    Zarard-Visualization

    This is a visualization of the spatial locations of faces in the Teenie Harris archive. Teenie Harris photos are generally taken in a 4:3 ratio, which means I only had to plot two arrangements: the vertical case and the horizontal case. Since all of the photos were resized to have a maximum side length of 1600 pixels, I just plotted where the faces would land on a 1600×1600 pixel grid, assuming the photos were aligned to the upper left-hand corner.

    Each box represents a 40×40 pixel cell, and its color represents how many faces overlap with that cell.

    I used R to create a matrix of where the faces were, exported it as a TSV, and then imported it into JavaScript. Although I think the visualization is effective, one thing I don't like is how the gradient of colors doesn't really reflect the gradient of values. It makes some rings look like they contain more faces than are actually located there.
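
    For reference, the binning step is essentially a 2D histogram. Below is a minimal Python/numpy sketch of it using face centers (a simplification: the piece counts overlap of the whole face box); the project itself built the matrix in R and rendered it in JavaScript.

        # face_grid_sketch.py -- count face positions into 40x40-pixel cells on a
        # 1600x1600 canvas. Simplified to face centers; the real piece counts how
        # many face boxes overlap each cell, builds the matrix in R, and renders
        # it in JavaScript. The coordinates below are invented.
        import numpy as np

        face_centers = [(812, 430), (790, 455), (1203, 610), (260, 980)]
        xs = [x for x, y in face_centers]
        ys = [y for x, y in face_centers]

        counts, _, _ = np.histogram2d(xs, ys, bins=40, range=[[0, 1600], [0, 1600]])

        # counts[i][j] is the number of faces in that 40x40 cell -- the matrix
        # that gets exported as a TSV and colored on the grid.
        np.savetxt("face_counts.tsv", counts, delimiter="\t", fmt="%d")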

    The First Button is for Horizontal Pictures


    The Second Button is for Vertical Pictures

    Zarard-Book

    PDF Download of The Faces of Teenie Harris

    Teenie Harris was a leading African-American photographer and photojournalist who was active in Pittsburgh from roughly the 1940s to the 1970s. Through my internship at the Carnegie Museum of Art, I've been lucky to have access to the museum's large database of Teenie's photographs.

    This book is a catalog of all the faces in a small but representative sample of 2,000 photos by Teenie Harris. I used the dataset of photos and the JSON output of a face detector to isolate all the faces in each image, compute the brightness of each face, and sort the faces by brightness. The effect is a book that goes from dark to light, eventually fading into the background color of the page.

    Most of my process was oriented around writing the code to generate the book. First I needed to figure out how to retrieve all of the pixels that contained a face, so I used the bounding box from the face detector's JSON output. For each set of pixels I created a face object holding the original size, the pixels, and the brightness, resizing each face to a standard 20×20 pixels along the way. I then sorted the face objects by brightness and looped through them to create grids of faces that look like this:

    screen-shot-2016-10-31-at-12-27-44-pm
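
    Roughly, the generation step works like this (sketched below in Python/Pillow for readability; the book itself was generated in Processing/Java, and the JSON field names here are assumptions): crop each detected bounding box, shrink it to 20×20, measure its mean brightness, sort, then tile the results into a grid.

        # faces_book_sketch.py -- crop faces from their bounding boxes, resize to
        # 20x20, sort dark-to-light, and tile into a page. A Python/Pillow sketch
        # of the idea; the actual book was generated in Processing, and the JSON
        # structure shown here is assumed.
        import json
        from PIL import Image, ImageStat

        detections = json.load(open("faces.json"))   # assumed: [{"photo": ..., "box": [x, y, w, h]}, ...]

        faces = []
        for d in detections:
            x, y, w, h = d["box"]
            face = Image.open(d["photo"]).crop((x, y, x + w, y + h)).convert("L")
            face = face.resize((20, 20))
            faces.append((ImageStat.Stat(face).mean[0], face))   # (brightness, thumbnail)

        faces.sort(key=lambda pair: pair[0])                     # dark to light

        cols = 25
        rows = (len(faces) + cols - 1) // cols
        page = Image.new("L", (cols * 20, rows * 20), 255)       # white background
        for i, (_, face) in enumerate(faces):
            page.paste(face, ((i % cols) * 20, (i // cols) * 20))
        page.save("faces_page.png")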

    What I discovered was that the charm of the book wasn't really the gradient but all of the variation within it: the hidden pockets of light and shadow, the variation in face orientation, the expressions that deviate from the traditional portrait smile, and the occasional (and very rare) misclassified "face." My inspiration stemmed from the practice of African Americans lightening their photos to look "light-skin" or "passing white." At first I wanted to see if I could catch any hint of that by sorting the faces, but then I realized I had no way of knowing the actual skin tones of the people in the photos versus the exposure that Harris used.

    All things considered, I think this was a successful project. If I were to change anything, I'd probably make the book double-sided instead of flip-book style. Additionally, I would run the script on the full dataset of images (which I did not have at the time) instead of a sample. I also struggled a lot with sorting because I couldn't figure out how to bring in built-in Java libraries, so I implemented quicksort myself, which gave me a lot of bugs; in retrospect I would have just asked someone for help sooner.

     

     

    Zarard-FaceOSC

    So my inspiration for this project was the Wizard from The Wizard of Oz. One idea that I really enjoy is the mystery of the floating head, even though it looks really cheesy in hindsight. I wanted to see if I could create the same feeling of grandness that the special effects in the picture below convey. To me the grandeur doesn't come from the fire or the pedestals; it actually comes from the magic conveyed by floating and transparency.

    wizardofoz1

    Those are the simple things I was most focused on capturing. Something I also played with was different ways to create depth. Creating the depth and complexity of a face in 2D is really hard, so I took inspiration from the person most famous for doing it: Picasso. With cubism he just broke everything into shapes (without particularly accurate placement or shading), yet the humanistic aspect was still conveyed. In trying to borrow from his cubism, I realized that color would play a big part in how well this effect came across. A monochromatic palette stayed true to the single-hue quality of a face, so I kept that, but since the triangles made the polygons look kind of glassy, I went for more modern colors. Additionally, I tried to make sure the mask was more dynamic than the usual one-to-one mappings, such as "if you blink, the mask blinks" or "if you smile, the screen turns yellow." So with my computation I tried to make the mask evolve by changing the polygons composing the face, but not necessarily in direct response to your movements.

     

    A couple of technical notes:

    Because the face tracker constantly lost my face in different lighting environments, I set the program to simply pause when it loses your face.

    Additionally, the mask uses all of the data points given by the FaceOSC tracker, which is why it reflects the face so well.

    
    import oscP5.*;
    OscP5 oscP5;
    int found;
    float[] rawArray;
    int highlighted; //which point is selected
    int pickpoint = 0;
    IntList pickpoints = new IntList();
    int numpoints = 300;
     
    //--------------------------------------------
    void setup() {
     size(640, 480);
     frameRate(30);
     oscP5 = new OscP5(this, 8338);
     oscP5.plug(this, "found", "/found");
     oscP5.plug(this, "rawData", "/raw");
     
    }
    int time=0; 
    //--------------------------------------------
    void draw() {
     pushMatrix();
     scale(1.75);
     translate(-150,-150);
     
     
     //fill(random(0,20),random(200,244),random(140,150),100);
     noStroke();
     
     if (found != 0 && rawArray != null) {  // skip drawing until a face and raw data have arrived
     background(230,230,250);
     // fill in cubist mask
     
     fill(20,30);
     beginShape();
     for (int edge = 0; edge <= 32; edge +=2){
        vertex(rawArray[edge], rawArray[edge+1]);
     } 
     vertex(rawArray[52], rawArray[53]);
     vertex(rawArray[48], rawArray[49]);
     vertex(rawArray[38], rawArray[39]);
     vertex(rawArray[34], rawArray[35]);
     endShape();
     
     // fill in eyebrows
     //strokeWeight(5);
     //strokeJoin(MITER);
     ////strokeCap(SQUARE);
     //stroke(100);
     fill(random(0,50),140);
     beginShape();
     for (int brow = 34; brow < 42; brow +=2){
       vertex(rawArray[brow],rawArray[brow+1]);
     }
     endShape();
     
     beginShape();
     for (int brow = 42; brow < 52; brow +=2){
       if (brow != 42){
         vertex(rawArray[brow],rawArray[brow+1]);
       }
     }
     endShape();
     noStroke();
     //fill in nose
     fill(random(0,50),180);
     beginShape();
     vertex(rawArray[54], rawArray[55]);
     vertex(rawArray[70], rawArray[71]);
     vertex(rawArray[66], rawArray[67]);
     vertex(rawArray[62], rawArray[63]);
     endShape();
     
     //fill in left eyes
     fill(0, random(50,200));
     beginShape();
     for(int eye = 72; eye < 82; eye +=2){
       vertex(rawArray[eye], rawArray[eye+1]);
     } 
     endShape();
     
     //fill in right eyes
     fill(0, random(50,200));
     beginShape();
     for(int eye = 84; eye < 94; eye +=2){
       vertex(rawArray[eye], rawArray[eye+1]);
     } 
     endShape();
     
     if (pickpoints.size() == 0){
       for(int k = 0; k < numpoints; k += 3){
         pickpoint= int(random(rawArray.length));
         float x,y;
         if (pickpoint%2 == 1){
           x = pickpoint -1; 
           y = pickpoint;
         } else {
           x = pickpoint; 
           y = pickpoint + 1;
         }
       pickpoints.set(k,int(x));
       pickpoints.set(k+1,int(y));
       pickpoints.set(k+2, int(random(100)));
       }
     }
     for (int val = 0; val < rawArray.length -1; val+=2) {
       if (val == highlighted) { 
         fill(255, 0, 0);
       } else {
         fill(100, random(255));
       }
    
     }
     if (time % 3 == 0){ 
     for(int k = 0; k < numpoints; k += 3){
       if( int(random(0,8)) == 0){
         pickpoint= int(random(rawArray.length));
         float x,y;
         if (pickpoint%2 == 1){
           x = pickpoint -1; 
           y = pickpoint;
         } else {
           x = pickpoint; 
           y = pickpoint + 1;
         }
       pickpoints.set(k,int(x));
       pickpoints.set(k+1,int(y));
       pickpoints.set(k+2, int(random(100)));
     }
     }
     time = 0;
     print(pickpoints);
     }
     //pickpoints: x, y, alpha
     noStroke();
     
     for (int i = 0; i+7 < pickpoints.size(); i+=9){
       if(pickpoints.size() != 0){
       //make triangles by hopping every 9 points?
         fill(0,pickpoints.get(i+2)); 
         beginShape();
         vertex(rawArray[pickpoints.get(i)],rawArray[pickpoints.get(i+1)]);
         vertex(rawArray[pickpoints.get(i+3)],rawArray[pickpoints.get(i+1+3)]);
         vertex(rawArray[pickpoints.get(i+3+3)],rawArray[pickpoints.get(i+1+3+3)]);
         endShape();
       }
     }
     
     }
     time += 1;
     popMatrix();
    }
     
    //--------------------------------------------
    public void found(int i) {
     
     found = i;
    }
    public void rawData(float[] raw) {
     rawArray = raw; // stash data in array
    }
    
    

     

     

    Zarard-Plot

    You can find the code here:

    https://github.com/ZariaHoward/EMS2/blob/master/FinalBLMPlot/FinalBLMPlot.pde

    You can find the data here:

    https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database

    My original inspiration for this piece was the shooting of Alfred Olango last Wednesday, which you can read about here: https://www.theguardian.com/us-news/2016/sep/30/san-diego-police-shooting-video-released-alfred-olango . Honestly, I really just wanted to do something to show respect for the lives that are being destroyed publicly approximately every few days. This piece is a representation of the 198 lives that have been taken this year as of 9/28/2016.

    One aspect of the publicity around the Black Lives Matter movement is that when stories are reported around the shootings, they usually come either from a statistics-and-figures perspective or from a perspective that essentially serves to dehumanize the victim and make the victim's death seem justified. No sufficient homage is paid to the lives these people once lived, and no one recognizes who they could have been beyond their current Twitter hashtag.

    So in this project I created a narrative of a life that could’ve been given to these black men and women, then repeated it line by line for every victim, and cut it off short where the police cut off their lives. The narrative is as follows:

    Met Best Friend. Performed On Stage. Had First Kiss. First Day Of High School. Joined Basketball Camp. First Romantic Relationship. First Trophy. First Paycheck. Prom. Finished Freshman Year of College. Pledged To a Fraternity. Voted For The First Time. Celebrates 20th Birthday. First Internship. First Legal Drink. Graduation. Paid First Rent Check. Cooked First Real Meal. First Car. Got Off Of Parents Health Insurance Plan. Spent First Christmas Away From Home. Married the Love of Their Life. Bought First Home. Beginning of Hair Loss. Stopped Wearing Converse Sneakers. First Family Vacation. Watched The Birth Of First Child. Starts Volunteering In The Community. Had Second Child. Invests In The Stock Market. Buys Life Insurance. Awarded Huge Promotion. Got An Office With A View. Moved Into New Home. Started Their Own Company. Went to High School Reunion. Parents Passed Away. Tries To Get Fit Again. Joined A Church. Celebrated Golden Anniversary. Both Kids Move Out Of The House. Hosted Thanksgiving At Home. Creates A Will. Bought Another Pet. Babysat Grandchildren. Retired. First Grandchild. Openly Uses AARP Discounts.
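
    One way to make the rule concrete: repeat the narrative once per victim and truncate each copy in proportion to how far into a full life the person got. The sketch below does that in Python, with an assumed 80-year "full" life and a hypothetical CSV export of the Guardian data; the actual piece is the Processing sketch linked above, and its exact cutoff rule may differ.

        # narrative_cutoff_sketch.py -- repeat the life narrative once per victim
        # and cut each copy short. The 80-year "full life", the CSV filename, and
        # the age-proportional cutoff are assumptions for illustration; the actual
        # piece is the linked Processing sketch.
        import csv

        NARRATIVE = ("Met Best Friend. Performed On Stage. Had First Kiss. "
                     "First Day Of High School. ...")   # the full text given above

        FULL_LIFE = 80.0
        lines = []

        with open("the_counted.csv") as f:               # hypothetical export of the dataset
            for row in csv.DictReader(f):
                age = float(row["age"])
                cutoff = int(len(NARRATIVE) * min(age / FULL_LIFE, 1.0))
                lines.append(NARRATIVE[:cutoff])

        with open("plot_lines.txt", "w") as out:
            out.write("\n".join(lines))                  # one truncated life per victim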

     

    Below are the results of the Processing sketches that are visual representations of this project. In these first two sketches I played around a lot with how much of the narrative should be visible by overlaying the stories. I also played with removing the narrative entirely and just letting the numbers be the narrative, but that goes back to reducing these lives to numbers and their deaths. One reason I considered keeping these aesthetics is that they both have a ghostly effect that really screams death to me, and I wasn't sure whether I should place emphasis on the idea of death.
    screen-shot-2016-09-30-at-11-06-34-am screen-shot-2016-09-30-at-11-05-31-am

    The next two sketches are my experimentation with color. I wasn't sure whether I wanted to do some type of symbolism through color: I considered red because it symbolizes blood, and also the silver-blue of gunmetal, or alternatively of the police. Ultimately it didn't add anything, in my opinion, though when I print the piece I still want it printed in silver ink.

    screen-shot-2016-10-02-at-9-34-26-am screen-shot-2016-10-02-at-9-34-02-am

    This is an up-close snapshot of the composition. The clutter and slight illegibility are intentional. The cutoff of the narrative is intentional. The spacing and gaps are also intentional. The size at which you're viewing it on your screen (if you are viewing it at 100%) is essentially how small I wanted the text to be, to force the viewer closer to the piece.

    screen-shot-2016-10-02-at-9-29-36-am

    This is what the composition looks like as a whole. It's approximately 24 inches by 72 inches. Due to the scale, I had to use an unusual plotting method that I honestly haven't gotten the hang of yet, so I haven't been able to get the large-scale plotting machine working. I also can't reduce the size, or you wouldn't be able to see the narrative. And unfortunately, because 198 black people were killed by the police this year, I have to print the entire 198 lines. Overall my biggest criticism of myself is that I couldn't get the large plotting machine working; this would have been so much more powerful with the handwritten quality of silver pen on paper.

    screen-shot-2016-10-02-at-9-32-52-am

    Zarard-LookingOutwards04

    Nike+ Collab: City Runs

    YesYesNo Team: Zach Lieberman, Emily Gobeille and Theo Watson.

    http://www.yesyesno.com/nike-collab-city-runs/

    This project is fascinating to me because it takes real-time data from Nike+ sensors in people's shoes to light up a map. What makes the project so fascinating, though, is the scale at which this data is produced. Since Nike has thousands of people wearing their shoes, the data generates all kinds of variation, but also a consistency wherever they hold their Nike training sessions. One thing that makes the interactivity so captivating is how the lines evoke the energy they are representing. It is one of the few monochromatic projects I've seen that actually works, and a lot of that comes from the glowing style of the lines, but also from the layering with street life.

    If I were to change something about the project, I’d probably layer current runs over previous runs or do something to highlight the change over time. I can imagine that at certain times of the day the project “dims out” because no one is running. An even cooler idea would be to highlight runners running the same path so you could look at it like a race.

    Zarard-ResponseToClockFeedback

    I liked the clock feedback mostly because it showed me what other people are evaluating for. When I start pieces in this class, I don't really know which parts are critical to pay attention to, and the things that people commented on (e.g., color scheme) really hadn't occurred to me. Even when I completely failed at the concept, it was nice that people could find good things to say about it. I think I actually learned the most from looking at the feedback on other people's projects, as opposed to just my own, because that lets me see what is and isn't successful in a particular assignment.

    Zarard-AnimatedLoop

    My initial inspirations for the piece were orbits, constellations, compasses, and time. When I first thought of looping, I thought about how hands loop around the face of a clock. Even in the absence of numbers, the looping still gives a sense of time passing. As I sped up the loops, I noticed a pendulum effect starting to emerge. I really just wanted to experiment with the aesthetics of time passing, and as I tweaked different details, I noticed that I could make myself feel like time was moving slower or faster, based solely on my mind's association between the pace of the orbiting lines and time. When I think of where I fell short in this project, I think of detail. When I was designing this loop, I had pictures of chronograph watches in my head, which is the classic and refined style I was looking for. It was just hard to accomplish that aesthetic through graphics. So even though it conveys a sense of time and space, it looks more like a diagram than a design piece.

    Animated Loop

    Zarard-Interruptions

  • The artwork is square.
  • The artwork consists of many short black lines on a white background.
  • The lines all have the same length.
  • There are certain regions from which lines are not generated.
  • Those regions do not appear to have any particular polygonal shape.
  • Sometimes the gaps in the picture run off the page.
  • All the lines start and stop within the boundary.
  • Each line is intersected at most once.
  • There are a lot of triangular and diamond shapes.
  • It looks like the gaps can sometimes coincide with each other.
  • The lines almost strike me as if they were randomly drawn in a grid,
  • because the lines are almost evenly spaced apart.
  • No line is 0, 90, 180, or 270 degrees,
  • or even within 10 degrees of those numbers.
  • Most of the lines tend to be more vertically oriented.
  • Click the drawing to see it animate. Sometimes it creates the interruptions, sometimes it doesn't; it's random. Upon reflection, maybe if the lines were smaller it would read closer to the original.

    This piece was really hard to create only because it is intuitively a subtractive work. The first thing I thought to do was to draw the lines and then randomly erase them, but when I looked closer at her works, the erasure was clustered. It also became clear to me that her lines weren't purely random; they had a randomized structure. The article on effective complexity really helped me look closer. My process was essentially creating, seeing, and revising multiple times until it looked closer and closer to the piece Interruptions. I'm still having difficulty seeing the actual shape of the interruptions, but overall I think it is a good representation.
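
    For the record, the structure I kept converging on can be summarized as: an evenly spaced grid of points, one short line per point at a near-vertical angle that avoids exact vertical, and clustered gaps made by dropping every line that falls near a handful of random "hole" centers. The Python sketch below is a condensed version of that reconstruction, not Molnár's actual algorithm (and the class piece itself was a Processing sketch).

        # interruptions_sketch.py -- grid of short, mostly-vertical lines with
        # clustered gaps. A rough reconstruction of the observed structure, not
        # Molnar's actual algorithm; the class piece was a Processing sketch.
        import math
        import random
        import matplotlib.pyplot as plt

        N, SPACING, LENGTH = 56, 10, 9
        holes = [(random.uniform(0, N * SPACING), random.uniform(0, N * SPACING))
                 for _ in range(6)]                          # centers of the interruptions
        radii = [random.uniform(20, 60) for _ in holes]

        fig, ax = plt.subplots(figsize=(6, 6))
        for i in range(N):
            for j in range(N):
                x, y = i * SPACING, j * SPACING
                if any(math.hypot(x - hx, y - hy) < r
                       for (hx, hy), r in zip(holes, radii)):
                    continue                                 # inside a gap: skip the line
                # lean away from exact vertical, per the observations above
                off = random.choice([-1, 1]) * random.uniform(math.radians(12), math.radians(40))
                a = math.pi / 2 + off
                dx, dy = math.cos(a) * LENGTH / 2, math.sin(a) * LENGTH / 2
                ax.plot([x - dx, x + dx], [y - dy, y + dy], color="black", linewidth=0.6)

        ax.set_aspect("equal")
        ax.axis("off")
        plt.savefig("interruptions.png", dpi=200)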

    clock

    Zarard-LookingOutwards03

    Videorative Portrait of Randall Okita from Sergio Albiac on Vimeo.

    Sergio Albiac created a project he calls a Videorative Portrait.

    “Painting a Videorative portrait (a generative, narrative and interactive video portrait) starts with collecting personal videos of the person portrayed, tagged by him/her with relevant concepts and descriptions. Then, using a custom developed tool, the artist “paints with meanings” and generates a video portrait, subtitled with generative personal narratives.” -quoted from his website.

    What I find most inspiring about the project is its ability to take something standard, traditional, even antiquated, and turn portraiture back into a living, breathing medium. As we become able to represent life more accurately and thoroughly, for most people it is no longer enough to simply have a picture. By encompassing time and memories within his piece, he takes the timeless medium of portraiture into the fourth dimension.

    One critique I have of the piece is that only one final product was made. Had he done these portraits of several people, we would be able to get a sense of the real power of the tool. Additionally, if he had released the tool so that people could make their own self-portraits, it would have been much more powerful. Unfortunately, the algorithm behind the tool he created isn't really disclosed through the video (or anything else). I know it responds to whatever photos you are tagged in and how you interact with the tool. Albiac is the one who created the response the user gets from the tool; since he created the rules, his artistic sensibilities are pervasive. The effective complexity leans more toward the realm of disorder. The only structure the piece really gets order from is the viewer's ability to identify eyes, noses, ears, and mouth. Because those elements are always there, the viewer can always classify the image as a face, which brings comfort and consistency.

    Zarard-Reading03

    Portrait of Kara Walker by Chuck Close

    Effective Complexity

    I like Chuck Close's paintings. They exhibit effective complexity because the shapes and orientations within the grid are random. The colors are semi-random, rotating between skin tones and pure saturated hues. But the subject and content are always portraiture, which is what gives his work order. If I had to choose, his work does lie more toward order, but the fact that you can't identify a pattern in the color choices is what helps his work flourish in chromatic chaos.

    The Problem of Meaning

    Should generative art be about more than generative systems? This issue in the chapter takes a very strong "what art should be" approach. I personally think that art should always go beyond just being about the process, function, and properties of art. In my practice, art is a statement: if you aren't saying something, then what are you really contributing? This applies even more in the case of generative systems, because the code and the autonomy are just a means to an end. The real fascination lies in what the code and autonomy are able to say, and whether it's something that people couldn't say for themselves.

    Zarard-LookingOutwards01

    Eyeo 2016 – Gene Kogan from Eyeo Festival // INSTINT on Vimeo

    http://www.genekogan.com/

    So I looked at a lecture by Gene Kogan. He is an artist who looks at generative systems and software as fuel for creativity. What intrigued me is that he's writing a book called Machine Learning for Artists. Since I'm finding it difficult to figure out how to merge ML and art, his lecture seemed like the way to go.

    This piece is called Deepdream prototypes; it uses Google's Inceptionism code and artificial neural networks. In his own words, “The code accepts images as inputs and iteratively evolves the pixel values towards some coherent resemblance to the image classes it knows, producing wild images of “pig-snails,” “camel-birds,” “dog-fish,” and the famous “puppy slugs,” among many other categories.” I really liked this piece because it requires a sort of technical mastery of ML techniques as well as an aesthetic sense to produce the kinds of images you want to create. In my opinion it also lies in the uncanny valley, because the resulting pieces resemble human art so closely that it's very awkward seeing the computational modification.

    My favorite quote: he compared style transfer with machine learning to being “like if you rewrote the Book of Genesis, Edgar Allan Poe style.”

    Zarard-FirstWordLastWord

    I am personally interested in last-word art, specifically that which stands the test of time. I consider experimentation to be important, but to me it feels like JUST the beginning, lacking mastery. While new technology allows constant experimentation and a never-ending feeling of newness, it is still slow to be accepted into established and more prestigious museums (MoMA, Guggenheim, Louvre). This is because the works are so new and experimental that they don't evoke the same feeling of completeness as when you look at a Jackson Pollock piece. Part of the reason that sense of completeness isn't there is that we as a culture are still shaping and defining what it means to make quality pieces that lie at the intersection of art and technology. Although some may argue that technology is essentially an ephemeral medium anyway, one that doesn't require a piece to stand the test of time, I would argue that the digital footprint we leave is more thorough and permanent than ever before, and that in the age of data, where everything is recorded and stored… our art might last forever… and maybe that means it should have the last word.