Xastol – Last Project


For the last project I created a “scene generator”. My initial idea consisted of developing a program that would generate random scripts and movie ideas given a database of well-known films. However, after doing more research on LSTMs and recurrent neural networks, I found that it would take too much time to train the network.


After conversing with Professor Golan, I began to pursue a similar idea. Utilizing a database of various photos and captions, I introduced two chat bots to a random photo. One bot would say the caption associated with the provided photo, setting the scene for the two bots to converse. After the “scene” ended, the entire script would be saved into a text file, formatted like a film script.

For coding purposes, I decided to use Python. Although not the best for visualization, Python has a lot to offer in terms of collecting data and presenting it to an AI. Regarding the AI, I found the cleverbot module to be the most responsive. Additionally, the program worked particularly well when the bots shared the same database of responses (even though they shared the same database, the bots were initialized differently so as to not respond with the exact same things every time).
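The core of the program is just a turn-taking loop between the two bots. Here's a minimal sketch of that structure, with a stand-in EchoBot class in place of the cleverbot module (which requires network access); the seeds and canned replies are made up for illustration:

```python
import random

class EchoBot(object):
    """Stand-in for Cleverbot: samples canned replies, seeded differently per bot."""
    def __init__(self, seed, replies):
        self.rng = random.Random(seed)   # different seed -> different behavior
        self.replies = replies           # shared "database" of responses

    def ask(self, prompt):
        # A real chat bot would condition on the prompt; here we just sample.
        return self.rng.choice(self.replies)

# Two bots share the same response database but are initialized differently,
# so they don't echo identical lines every turn.
shared_db = ["Hello.", "Why do you say that?", "Tell me more.", "Bye."]
bot1, bot2 = EchoBot(1, shared_db), EchoBot(2, shared_db)

script = []
line = "A dog sits in the grass."   # a photo caption seeds the scene
for turn in range(4):
    script.append(("BOT 1", line))
    reply = bot2.ask(line)           # bot 2 answers bot 1's last line
    script.append(("BOT 2", reply))
    line = bot1.ask(reply)           # bot 1 answers back

for name, text in script:
    print("%s: %s" % (name, text))
```

Swapping `EchoBot` for two differently-initialized `Cleverbot()` instances gives the structure of the real program.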


I actually really enjoyed the process for this project. Although I felt lost about the direction of my project at times, I really enjoyed the outcome and look forward to developing it further to give more humanistic qualities to the two “actors” (i.e. – text sentiment analysis, vocal inflections, etc.).



Favorite scenes.

Another example.

In program conversation/picture change.

A short conversation about genders (saved script).

Github: https://github.com/xapostol/60-212/tree/master/scriptingScenes


# Xavier Apostol
# 60-212 (Last Project)
    # NOTE: runs in python 2.7

import os
import re
import time
import msvcrt
import random
import pyttsx
import pygame
from textwrap import fill
from cleverbot import Cleverbot

# initializing chat bots
bot1Name = "ROBOTIC VOICE 1"
cb1 = Cleverbot()
bot2Name = "ROBOTIC VOICE 2"
cb2 = Cleverbot()

# getting started with voice recognition
engBots = pyttsx.init()
voices = engBots.getProperty('voices')

# misc
sleepTime = 1

# conversation lists
bot1Conversation = []
bot2Conversation = []

# max length for text
maxTextLen = 60

# formats txt appropriately (text wrapping)
def formatTxt(text):
    lstSpace = []

    text = fill(text, maxTextLen)
    for char in range(0, len(text)):
        if text[char] == "\n":
            lstSpace.append(char)  # record where each wrapped line breaks
    return lstSpace

# change to location of "photo_database" folder
picsFldr = "C:/Users/Xavier/Desktop/60-212/Class Work/FINAL PROJECT/scriptingScenes/photo_database" 
filenameLst = []

# collect photo names
for f in os.listdir(picsFldr):
    fileName, fileExt = os.path.splitext(f)
    filenameLst.append(fileName)

# collect captions
fo = open("SBU_captions_F2K.txt", 'r')
captionsList = fo.read().split('\n')
fo.close()

# all image titles are numbers
def grabCaption(imgTitle):
    indx = int(imgTitle)
    return (captionsList[indx])

# initiating pygame
pygame.init()

# start window/set values
running = True
windSz = winW, winH = 1280, 720
#windSz = winW, winH = 1920, 1080
window = pygame.display.set_mode(windSz, pygame.RESIZABLE)
pygame.display.set_caption("Robotic Voices Script")

imgSz = imgW, imgH = 450, 400
#imgSz = imgW, imgH = 600, 550

backGClr = (0, 0, 0)

# optimize frame rate
clock = pygame.time.Clock()
framesPSec = 30
clock.tick(framesPSec)  # change FPS

# font implementation
fontSz = imgW / 10
font  = pygame.font.SysFont("Arial", fontSz)
fontClr = (255, 255, 255)

# bot X and Y
displayTextX = winW/2
displayTextY = winH/2 + fontSz*3 + 10

# loads and displays picture of interest
def displayPicture(pictureName):
    imgLoad = pygame.image.load(picsFldr + "/" + pictureName + ".jpg").convert()
    imgLoad = pygame.transform.scale(imgLoad, (imgW,imgH))
    window.blit(imgLoad,(displayTextX-imgW/2, displayTextY-(imgH + fontSz/1.5)))

# displays text for each bot on screen
def displayConvo(botName, botVoice, botText, pictureName):
    # initializing variables
    botTextLH1 = ""  # last half of botText (if too big)
    botTextLH2 = ""  # last half of botText (if bigger than twice the maxLen)
    indxChng1 = 0
    indxChng2 = 0

    # for testing
    #print(botName + " - " + botText)

    # set voice and what to say
    engBots.setProperty('voice', voices[botVoice].id)  # pick this bot's voice
    engBots.say(botText)

    # start writing text
    if len(botText) > maxTextLen*2:
        # formats to three lines
        indxChng1 = formatTxt(botText)[0]
        indxChng2 = formatTxt(botText)[1]
        botTextLH2 = botText[indxChng2+1:]
        botTextLH1 = botText[indxChng1+1:indxChng2]
        botText = botText[:indxChng1]

    elif len(botText) > maxTextLen:
        # formats to two lines
        indxChng1 = formatTxt(botText)[0]
        botTextLH1 = botText[indxChng1+1:]
        botText = botText[:indxChng1]

    # sets up vocalization of text
    vocTxt = font.render(botText, False, fontClr)
    vocTxtLH1 = font.render(botTextLH1, False, fontClr)
    vocTxtLH2 = font.render(botTextLH2, False, fontClr)

    # displays text
    window.blit(vocTxt,    (displayTextX - vocTxt.get_rect().width/2,
                            displayTextY))
    window.blit(vocTxtLH1, (displayTextX - vocTxtLH1.get_rect().width/2,
                            displayTextY + fontSz))
    window.blit(vocTxtLH2, (displayTextX - vocTxtLH2.get_rect().width/2,
                            displayTextY + fontSz*2))

    displayPicture(pictureName)  # display subject
    pygame.display.update()      # update display
    engBots.runAndWait()         # vocalize text
    time.sleep(sleepTime)        # wait time
    window.fill(backGClr)        # reset canvas (set to black to erase prev msg)

# runs entire scene (program)
def runScene():
    # setting counter and magic numbers
    count = 1
    maxRuns = 200  # free to change

    ### CONVERSATION ###
    # bot 1 starts conversation
    ranPicName = random.choice(filenameLst)

    bot1Response = grabCaption(ranPicName)
    displayConvo(bot1Name, 0, bot1Response, ranPicName)
    bot1Conversation.append(bot1Response)

    while (count <= maxRuns):
        # chances of implementing item
        ranInt = random.randint(5, 10)
        result = count % 4

        # testing purposes
        print("Random Int: " + str(ranInt))
        print("Result: " + str(result))

        # check if randomly apply item from "Table of Responses"
        if (result == 0):
            # collects random picture and caption
            ranPicName = random.choice(filenameLst)
            bot2Response = grabCaption(ranPicName)
        # check if it's time to say goodbye.
        elif (count == maxRuns):
            bot2Response = "Bye."
        # else keep responding
        else:
            bot2Response = cb2.ask(bot1Response)

        # bot 2 responds
        displayConvo(bot2Name, 1, bot2Response, ranPicName)
        bot2Conversation.append(bot2Response)

        # bot 1 responds
        bot1Response = cb1.ask(bot2Response)
        displayConvo(bot1Name, 0, bot1Response, ranPicName)
        bot1Conversation.append(bot1Response)

        count += 1

        # press anything to stop program (break out of loop)
        if msvcrt.kbhit():
            break

    # save the finished conversation as a script
    saveConversationToScript()

# writes conversation to a .txt file (script)
def saveConversationToScript():
    file = open("robotic_voices_script.txt", "w")

    file.write("SCENE 1")
    file.write("INT. DARKNESS")

    file.write("There is nothing but darkness.")
    file.write("Suddenly, two robot voices emit into conversation.")
    file.write("The first, ROBOTIC VOICE 1, speaks.")

    for i in range(0, len(bot1Conversation)):

        if i == len(bot1Conversation) - 1:

    file.write("The voices stop.")
    file.write("There is nothing but darkness.")
    file.write("END SCENE")



Xastol – LookingOutwards09

As stated in my final project proposal, I want to create a program that generates movie plot-lines. My initial influence for this project came from the short science-fiction film Sunspring. Unlike other films, Sunspring was generated by an AI named Benjamin, which uses Long Short-Term Memory (LSTM) networks to develop a script based off of other scripts fed into it. Although Benjamin was created over the course of a year, I hope to develop a similar AI, or algorithm, that can do the same thing on a smaller scale. Rather than generating the entire script of a movie, I hope to at least generate movie plot-lines with their corresponding characters, conflict, resolution, etc. (i.e. – a film synopsis).


Very similar to this idea is the Story Idea Generator – Automatic Plot Generator. This plot generator creates a small synopsis and a few lines of “praise” for the generated film. I think this provides a good start and could definitely be improved, as the generator isn’t entirely generative (it fills random characters, items, etc. into already-written templates for “Paranormal Romances”, “Comedies”, etc.).

Sunspring: http://arstechnica.com/the-multiverse/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/

Plot Generator: http://www.plot-generator.org.uk/

Xastol – Proposal

For my final project, I want to revisit generative works. Specifically, I want to create a program that randomly generates movie concepts. I am currently researching machine learning algorithms and searching for a large movie database, as these will be imperative to my program.

Xastol – ManifestoReading

From the manifesto reading, Tenet #1 is the most compelling to me. Tenet #1 says that the Critical Engineer looks at technology and its effects on the well-being of society. If a technology proves to be a possible threat to that society, then the Critical Engineer’s job is to evaluate the threat and propose a change/solution regardless of any legal protections. I think this is interesting because a Critical Engineer could be anyone in society. I feel like this tenet says it’s up to the people who make up the social structure to determine whether a technology is a possible threat and whether to abolish, change, or keep it.

An example of this tenet is obvious in intellectual property law. Although the entire point of intellectual property law is to assign ownership of technology/work, there are cases where information/technology is seen as public domain and it is deemed imperative that citizens have access to it.

Xastol – LookingOutwards08

A project from Hiroshi Ishii and the Tangible Media Group (at the MIT Media Lab) that I became very interested in is Materiable. The project is based on Hiroshi Ishii’s concept of tangible works called radical atoms. The idea behind radical atoms is a combination of computational screen work and actual physical work: using technology to make previously intangible data tangible. Materiable exemplifies this idea very well. In this project, interactive prisms/pins come together to create a larger malleable surface that is responsive to touch and is able to replicate dynamic materials (i.e. – sponge, elastic surface, etc.). This work, among many of Hiroshi Ishii’s other works, is on the forefront of new technology/dynamic works. I’m excited to see how this concept of radical atoms can expand farther from its roots and affect other previously intangible media like film.

Xastol – Mocap









For this project, I really wanted to alter some characteristics of previously created narratives, in hopes of changing their concepts. My initial idea consisted of imitating lead roles in films and switching their living forms with inanimate objects (i.e. – replacing movie characters with the tools/objects they use).


When coming up with possible movies to imitate, I considered key objects (i.e. – staff, gun, etc.) and how they related to their character’s role in the film (i.e. – police guard, wizard, etc.). The film that I thought would convey this best was Quentin Tarantino’s Pulp Fiction. More specifically, I aimed to re-create Jules, played by Samuel L. Jackson, and a specific dialogue he has with one of his boss’s “business partners”. After reviewing the scene multiple times, I decided to change up my concept and replace the main characters with a sort of visual pun (Hint: Pulp Fiction and Oranges).

After finalizing details, I recorded multiple BVH files of Jules and the business partner, Brett. This process was a bit difficult since the camera used (Kinect V2) didn’t particularly like the fast movements I was trying to imitate while standing and sitting. As a result, some of the movements came out a little glitchy and some of the previous “aggressive” movements had to be slowed down.

After recording, I inputted the BVH files and adjusted camera angles similar to those in the actual scene. This took quite a while, as timing was key. After the scenes were lined up, I proceeded to create a set that would fit the new concept I was aiming for (i.e. – kitchen counter). I then rendered out the figures and adjusted certain characteristics at certain points of the film. For example, when the Brett Orange is shot, his color begins to change to a greener, more vile color.
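The color shift itself is just a linear interpolation between two RGB colors as the scene plays; a quick sketch of the idea (the specific shades here are made up, not the ones used in the actual render):

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors, t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

ORANGE = (255, 165, 0)        # the healthy Brett Orange
VILE_GREEN = (110, 140, 40)   # assumed "vile" target shade

# As the scene progresses past the gunshot, t ramps from 0 to 1,
# so the character's color slides from orange toward green.
for t in (0.0, 0.5, 1.0):
    print(t, lerp_color(ORANGE, VILE_GREEN, t))
```

In the Processing sketch, t would be derived from the elapsed milliseconds since the gunshot moment.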


I am particularly happy with the results I created. Although the rendering of the characters is not as high of quality as I would like for it to be, I am happy with the results given a rather chaotic week.

I will definitely continue to make this project better in the future (i.e. – work on developing software to automatically rotoscope an inputted scene, make adjustments to character rendering for smoother movement, etc.). Once I have a better understanding of the bugs I’m facing and have created more efficient programs to render out these scenes, I may even continue on to recreate the entire film!


GitHub Link: https://github.com/xapostol/60-212/tree/master/Deliverables%208

// Renders a BVH file with Processing v3.2.1
// Note: mouseX controls the camera.
import ddf.minim.*;

PBvh1 orngJ;
PBvh2 orngB;
PImage bg1; // background
PImage bg2; // background 2

// Time
int m;

AudioPlayer player;
Minim minim; // audio context

void setup() {
  size( 1280, 720, P3D );
  // Load a BVH file recorded with a Kinect v2, made in Brekel Pro Body v2.
  orngJ = new PBvh1( loadStrings( "jules_00.bvh" ) );
  orngB = new PBvh2( loadStrings( "brett_00.bvh" ) );
  // Load the soundfile and start the scene audio.
  minim = new Minim(this);
  player = minim.loadFile("Pulp Fiction - Jules and his Bible Verse1_01.mp3", 2048);
  player.play();
  bg1 = loadImage("background_02.jpg");
  bg2 = loadImage("background_01.jpg");
}

void draw() {
  m = millis();
  //println(m);   // Purposes of testing/timing for camera angles and effects.
  setMyCamera();        // Position the camera. See code below.
  //drawMyGround();     // Draw the ground. See code below. (Purposes Of Testing)
  updateAndDrawBody();  // Update and render the BVH files. See code below.
}

void updateAndDrawBody() {
  // Stop the scene (reset the clock so playback loops)
  if (m > 118800) {
    m = 0;
  }
  translate(width/2+50, height/2, 10); // position the body in space
  scale(-1, -1, 1);                    // correct for the coordinate system orientation
  orngJ.update(m);                     // update the BVH playback
  orngJ.drawBones();                   // draw the Jules figure
  translate(width/2, height/2, -250);
  scale(-1, -1, -1);
  orngB.update(m);                     // update and draw the second figure (Brett)
  orngB.drawBones();
}

void setMyCamera() {
  // Adjust the position of the camera
  float eyeX = width/2;            // x-coordinate for the eye
  float eyeY = height/3.0f - 500;  // y-coordinate for the eye
  float eyeZ = 500;                // z-coordinate for the eye
  float centerX = width/2.0f;      // x-coordinate for the center of the scene
  float centerY = height/2.0f;     // y-coordinate for the center of the scene
  float centerZ = -400;            // z-coordinate for the center of the scene
  float upX = 0;                   // usually 0.0, 1.0, or -1.0
  float upY = 1;                   // usually 0.0, 1.0, or -1.0
  float upZ = 0;                   // usually 0.0, 1.0, or -1.0

  //                          CAMERA ANGLES                              //
  // Angle #1 (Over Shoulder - BRETT)
  camera(eyeX-70, 0, -eyeZ, centerX, centerY, -1*centerZ, upX, upY, upZ);
  // Angle #2 (Over Top - JULES)
  if (m > 6600) {
    camera(width/2, height/3.0f - 250, 200, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 9500) {
    camera(eyeX-70, 0, -eyeZ, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #3 (Wide)
  if (m > 10300) {
    camera(width/2, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 17000) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #4 (Close Up - JULES)
  if (m > 24600) {
    camera(width/2 + 50, height/3.0f - 250, -60, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 31500) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #4 (Close Up - JULES)
  if (m > 36000) {
    camera(width/2 + 50, height/3.0f - 250, -60, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 44800) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #2 (Over Top - JULES)
  if (m > 48850) {
    camera(width/2, eyeY, 200, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #4 (Close Up - JULES)
  if (m > 52000) {
    camera(width/2 + 50, height/3.0f - 250, -60, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 61000) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #4 (Close Up - JULES)
  if (m > 62000) {
    camera(width/2 + 50, height/3.0f - 250, -60, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #4 (Close Up - JULES)
  if (m > 79000) {
    camera(width/2 + 50, height/3.0f - 250, -60, centerX, centerY, centerZ, upX, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 93000) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #5 (Tilt - JULES)
  if (m > 97000) {
    camera(width/2 + 50, height/3.0f - 300, -80, centerX, centerY, centerZ, -0.5, upY, upZ);
  }
  // Angle #1 (Over Shoulder - BRETT)
  if (m > 110000) {
    camera(eyeX - 100, 0, -eyeZ + 200, centerX, centerY, -1*centerZ, upX, upY, upZ);
  }
  // Angle #3 (Wide)
  if (m > 112800) {
    camera(width/2, height/6.0f - 1000, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
  }
}

void drawMyGround() {
  // Draw a grid in the center of the ground
  translate(width/2, height/2, 0); // position the grid in space
  scale(-1, -1, 1);

  float gridSize = 400;
  int nGridDivisions = 10;
  for (int col=0; col<=nGridDivisions; col++) {
    float x = map(col, 0, nGridDivisions, -gridSize, gridSize);
    line(x, 0, -gridSize, x, 0, gridSize);
  }
  for (int row=0; row<=nGridDivisions; row++) {
    float z = map(row, 0, nGridDivisions, -gridSize, gridSize);
    line(-gridSize, 0, z, gridSize, 0, z);
  }
}


Xastol – Visualization


For the data visualization of Healthy Ride Pittsburgh, I wanted to dig into the specifics of the bikes themselves and answer the question: “What’s the most popular bike?”


Given the data for Quarter 1 of 2016, I used Excel and D3 to solve this question. Using Excel, and its formulas, I was able to deduce what bike was taken for the most rides. I then divided the data between the two user types: Subscribers and Customers. After creating these two separate files, I totaled the data for each user type and then concatenated it into a final file. Using this final information, I then used the bl.ocks example for Pie Charts and Bar Graphs (http://bl.ocks.org/NPashaP/96447623ef4d342ee09b) to represent the information.
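The tallying step can be sketched in a few lines of Python (the real tallies were computed with Excel formulas, and the column names here are assumptions about the shape of the Healthy Ride trip data):

```python
from collections import Counter

# A few hypothetical rows in the shape of the Healthy Ride trip data;
# the real Q1 2016 CSV was processed in Excel.
trips = [
    {"bikeid": "70342", "usertype": "Customer"},
    {"bikeid": "70342", "usertype": "Subscriber"},
    {"bikeid": "70342", "usertype": "Customer"},
    {"bikeid": "70001", "usertype": "Subscriber"},
]

# Most popular bike = the one appearing in the most rides.
by_bike = Counter(t["bikeid"] for t in trips)
top_bike, n_rides = by_bike.most_common(1)[0]

# Split that bike's rides between the two user types.
by_user = Counter(t["usertype"] for t in trips if t["bikeid"] == top_bike)
print(top_bike, n_rides, dict(by_user))
```

The per-user-type totals are what feed the pie chart and bar graph in the D3 visualization.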


After observing the data, I found that of the 61 rides on the most popular bike (Bike ID: 70342), the majority were initiated by customers (36:25 customer:subscriber). This ratio seems to hold across the entire Healthy Ride Pittsburgh system. Additionally, I found that the bike had a lot of minutes on it compared to most bikes. This is primarily because it is the most popular bike. However, this bike also logged one of the longest trips recorded in the entire system (initial Healthy Ride Pittsburgh Q1 2016 data).


From this quantitative data, I then began to question more about the bike:

Does it have the most comfortable seat of the surrounding bikes? Does it ride the smoothest? Are there certain aesthetic qualities that make it more appealing than other bikes? Or is it just by chance that this bike has become the most popular, and is it actually identical in quality to the others?

Only further research into the bike’s physicality can help me answer these questions. I hope that one day, if I do find myself using Healthy Ride Pittsburgh, I’ll come across Bike 70342 and determine for myself whether its popularity is based on chance or fact.

github link: https://github.com/xapostol/60-212/tree/master/70342%20BIKE%20DATA%20-%20xastol

Xastol – LookingOutwards07

A project that I’m particularly interested in is Lev Manovich’s “SelfieSaoPaulo” (2014). In this project, Manovich collects thousands of selfies taken from individuals living in Sao Paulo, Brazil and displays them on a building within the city. Although his collection of mass data is intriguing, I’m particularly interested in how he is able to further his audience by involving people of the city. Additionally, the topic of facial recognition still interests me, as his program is able to recognize thousands of selfies (with varying quality of photos).

Another project of his that seems connected to “SelfieSaoPaulo” is his most recent piece, “Inequaligram”. In this project, he collects social media data from New York City and finds patterns of inequality. How he is able to recognize these patterns in such a large concentration of data is astonishing and sets the current bar for mass data collection.

SelfieSaoPaulo: http://lab.softwarestudies.com/2014/06/selfiesaopaulo-new-project-by-moritz.html

Inequaligram: http://inequaligram.net/

Lev Manovich: http://manovich.net/

Xastol – LookingOutwards06







A Twitter bot that I enjoyed was reverseocr. The bot selects a random word and then draws random strokes until the OCR library recognizes the drawing as the given word. This process occurs four times a day.

The algorithm used intrigues me because it’s based on the probability of the cursor drawing shapes that are similar to letters; the algorithm is built on randomness. I’m interested in learning more about generativity and applying it to my own work (particularly film). I find this algorithm to be a good starting point for developing new, randomly generated aesthetics/filters that can be applied to the video form.
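The generate-and-test idea behind reverseocr can be sketched abstractly. In this sketch, a trivial stand-in recognizer replaces the actual OCR library, and “drawing” is reduced to picking random letters; the real bot draws pen strokes and runs genuine OCR on the image:

```python
import random

TARGET = "hi"

def recognize(strokes):
    """Stand-in for OCR: reads off whichever target letters appear in the strokes."""
    return "".join(s for s in strokes if s in TARGET)

def draw_until_recognized(target, max_tries=100000):
    rng = random.Random(0)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    for attempt in range(1, max_tries + 1):
        # "Draw" a random sequence of strokes (random letters, for this sketch)...
        strokes = [rng.choice(alphabet) for _ in range(6)]
        # ...and keep the drawing only if the recognizer reads the target word.
        if recognize(strokes) == target:
            return attempt, strokes
    return None

attempt, strokes = draw_until_recognized(TARGET)
print("accepted drawing #%d: %s" % (attempt, "".join(strokes)))
```

The interesting property is the same as in the real bot: the output is accepted by the recognizer, not designed for it.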

reverseocr – https://twitter.com/reverseocr

Xastol – Book




#RIP is a generative book that showcases the use of the hashtag “#rip” on Twitter for October 21st, 2016. Using Temboo to gain easy access to the Twitter API for collecting data, random tweets with the given hashtag are accessed and placed over 8-bit tombstones. Paired with the randomly selected tweets are randomly generated backgrounds. The tombstones, on which the tweets are placed, are the same size and shape and flip horizontally every page (even pages vs. odd pages). Additionally, the background colors are randomly generated within a dark color range to give off a somber feel that contrasts with the generally sarcastic mood of the tweets.


The idea for my book actually sprang up while in the studio. After coming up with many over-complicated ideas and trying to figure out how realistic they were while facing the learning curve of basil.js, I was stumped. That’s when a friend (takos) went into detail about our suffering and how we were probably going to spend the entire night in the studio. I responded with “rip” and then got to thinking about how the term is used in slang. Initially, the term comes from “R.I.P.”, or “Rest In Peace”, generally used when expressing one’s condolences at the end of another’s life. However, the term has evolved into the singular word “rip”, and is often used to express sympathy over minor incidents of negative connotation.


Person 1: “I just stubbed my pinkie toe and it hurts.”

Person 2: “rip.”

To help this come to life, I created two programs: one that would scrape data from Twitter and one that would generate the visuals. After being introduced to the website Temboo, gaining access to tweets on Twitter became trivial, as the website provides code for accessing the Twitter API and its Search features. This was completed in Processing.

I wanted to create visuals that would connect the traditional and modern usage of the term and found that Processing would also be useful for this. To connect the usages, I decided to create an 8-bit tombstone against a dark background (modern visuals against traditional feel).

Lastly, I converted my information into .json files (also done through Processing), so that data would be easily transmitted into InDesign (using basil.js).
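The hand-off between Processing and basil.js is just a JSON file of tweet records. A small sketch of what that export looks like (the tweet contents and filename here are made up, but the field names match the ones the basil.js layout script reads):

```python
import json

# Each scraped tweet becomes one page of the book; the field names
# (screenName, date, text, image) are what the basil.js script expects.
tweets = [
    {"screenName": "@someone", "date": "Oct 21, 2016",
     "text": "just stubbed my pinkie toe #rip", "image": "bg_01.jpg"},
]

with open("clean_tweets.json", "w") as f:
    json.dump(tweets, f, indent=2)

# Round-trip check: basil.js will b.JSON.decode() the same structure.
with open("clean_tweets.json") as f:
    loaded = json.load(f)
print(len(loaded), loaded[0]["screenName"])
```

In the real pipeline this export was written from Processing rather than Python, but the structure of the file is the same.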

In terms of my final product, I am satisfied, but not fully pleased. I am satisfied with the outcome of my project, given my limited knowledge of formatting print media and the limited window I had to work on the project due to other class projects. Mainly, I think it’s the aesthetics of the book that are lacking, compared to the concept itself. I think if given more time to work with basil.js and print media, I’ll be able to create a much more aesthetically pleasing and content-rich piece.


Book Link: http://www.blurb.com/bookstore/invited/6594146/096a0b83792047730ec84f6dc02ab9d8dcb10012

PDF Link: rip-xapostol

Github Link: https://github.com/xapostol/60-212/tree/master/GenBook-xastol



import com.temboo.core.*;
import com.temboo.Library.Twitter.Search.*;

// Create a session using your Temboo account application details
TembooSession session = new TembooSession("xastol", "myFirstApp", "WHATEVER YOURS IS");

void setup() {
  // Run the Tweets Choreo function
  runTweetsChoreo();
}

void runTweetsChoreo() {
  // Create the Choreo object using your Temboo session
  Tweets tweetsChoreo = new Tweets(session);

  // Set inputs
  String myQuery  = "rip";
  tweetsChoreo.setQuery(myQuery);
  tweetsChoreo.setAccessToken("YOUR STUFF");
  tweetsChoreo.setConsumerKey("THEIR STUFF");
  tweetsChoreo.setConsumerSecret("SECRET THEIR STUFF");
  tweetsChoreo.setAccessTokenSecret("SECRET YOUR STUFF");

  // Run the Choreo and store the results
  TweetsResultSet tweetsResults = tweetsChoreo.run();

  // Print results
  String[] result = {tweetsResults.getResponse()};
  println(result[0]);
}



// Template Provided by Golan
// Xavier Apostol (Xastol)
// 60-212 (8:30 - 11:20am & 1:30 - 4:20pm)
// xapostol@andrew.cmu.edu
// Random Character Generation

import processing.pdf.*;
boolean bRecordingPDF;
int pdfOutputCount = 1; 

float count = 1;
float changeFac = 40;
void setup() {
  size(750, 750);
  rectMode(CENTER);  // the rects below are positioned by their center points
  bRecordingPDF = true;
}

void keyPressed() {
  // When you press a key, it will initiate a PDF export
  bRecordingPDF = true;
  count += 1;
  if (count % 2 == 1) {
    changeFac = 40;
  } else {
    changeFac = -40;
  }
}

void draw() {
  if (bRecordingPDF) {
    float ranR = random(50);
    float ranG = random(50);
    float ranB = random(50);
    background(255); // this should come BEFORE beginRecord()
    beginRecord(PDF, "tombstone" + pdfOutputCount + ".pdf");

    // Make all drawings here.
    float tombX = width/2;
    float tombY = height/2;
    float tombW = tombX*1.15;
    float tombH = tombY*1.5;

    // Tombstone
    rect(tombX,tombY, tombW,tombH);
    rect(tombX,tombY, tombW-20,tombH+20);
    rect(tombX,tombY, tombW-40,tombH+40);
    rect(tombX,tombY, tombW-60,tombH+60);
    rect(tombX+changeFac,tombY, tombW,tombH);
    rect(tombX+changeFac,tombY, tombW-20,tombH+20);
    rect(tombX+changeFac,tombY, tombW-40,tombH+40);
    rect(tombX+changeFac,tombY, tombW-60,tombH+60);
    rect(width/2,height/2, width,height);

    endRecord();
    pdfOutputCount++;
    bRecordingPDF = false;
  }
}



#includepath "~/Documents/;%USERPROFILE%Documents";
#include "basiljs/bundle/basil.js";
var jsonString = b.loadString("clean_tweets_sensored.json");
var jsonData;
function setup() {
// Clear the document at the very start.
b.clear (b.doc());
// Initialize some variables for element placement positions.
// Remember that the units are "points", 72 points = 1 inch.
var titleX;
var titleY = 72;
var titleW = 1080;
var titleH = 72;
var captionX;
var captionY = b.height/2;
var captionW = b.width - b.width/2;
var captionH = 180;
var imageX = 6;
var imageY = 10;
var imageW = 72*7;
var imageH = 72*7;
var coverFileName = "images/tomb1.png";
var coverImage = b.image(coverFileName, imageX, imageY, imageW, imageH);
// Make a title page.
b.text("#RIP", 12,90,480,360);
b.text("Xavier Apostol", 153,339,360,72);
b.text("Fall 2016", 153,369,360,72);

// Make an info page.
b.text("A Generative Book Using Twitter Hashtags", 72,162,360,180);

b.text("For Friday October 21st, 2016", 72,191,360,121);
// Parse the JSON file into the jsonData array
jsonData = b.JSON.decode( jsonString );
b.println("Number of elements in JSON: " + jsonData.length);
// Loop over every element of the book content array
// (Here assumed to be separate pages)
for (var i = 0; i < jsonData.length; i++) {

// Create the next page.
b.addPage();

// Load an image from the "images" folder inside the data folder;
// Display the image in a large frame, resize it as necessary.
b.noStroke(); // no border around image, please.
var anImageBFilename = "background/" + jsonData[i].image;
var anImageFilename;
if (i % 2 == 0) {
anImageFilename = "images/tomb1.png";
titleX = -b.width/2 - 10;
captionX = b.width/2 - b.width/5;

} else {
anImageFilename = "images/tomb2.png";
titleX = -b.width/2 - 40;
captionX = b.width/2 - b.width/4 - 10;
}

var anImageBack = b.image(anImageBFilename, imageX - 6, imageY-10, imageW+1, imageH);
var anImage = b.image(anImageFilename, imageX, imageY, imageW, imageH);

// Create textframes for the "screenName" field.
b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN );
b.text(jsonData[i].screenName, titleX,titleY,titleW,titleH);

// Create textframes for the "date" fields
b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.TOP_ALIGN );
b.text(jsonData[i].date, captionX,titleY+108,captionW,captionH);

// Create textframes for the "text" fields
b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.TOP_ALIGN );
b.text(jsonData[i].text, captionX,captionY,captionW,captionH);
}

// For even amount of pages.
var endPageFileName = "background/tombstone1-page-001.jpg";
var endPage = b.image(endPageFileName, imageX - 6, imageY-10, imageW+1, imageH);

}

// This makes it all happen:
b.go();
Video of the professor flipping through my book:

Xastol – LookingOutwards05

Among my favorite projects was PoopVR, created by Laura Juo-Hsin Chen. The project uses a Google Cardboard, a phone, and a seat (a toilet). Using the Google Cardboard, users enter an online VR world she has created through their own phone. The VR world is rather lighthearted, with encouraging “poops” and psychedelic patterns, and serves as “motivation” when the user finds themselves in a rather “congested” situation. Additionally, the work allows other individuals partaking in this daily task to connect with one another and, as a result, encourage each other. Personally, I’ve enjoyed the process of defecation a lot more with her project.

In terms of her approach to work, I appreciate Laura’s use of low-tech, open-source technologies to create charming work that attracts all audiences. The user doesn’t have to mentally prepare themselves to invest in her work because her playful style handles that already.

Website: http://www.jhclaura.com/

Xastol – FaceOSC


For the FaceOSC project, I decided to build off of my plotting project (http://cmuems.com/2016/60212/xastol/09/29/xastol-plot/). I decided to develop the characters I generated in the plotting project as “wear-able” identities.


Every face is randomly generated and changes when the user presses the UP key. The characters basically follow all movements made by the user’s head (rotation and translation along all axes: x, y, z). Additionally, the mouth moves in relation to the user’s mouth (height and width), and the eyes change size based on eyebrow movement. This was initially going to be driven by the actual eye-openness of the user; however, I noticed I got a better effect while tracking the eyebrow position.
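The eyebrow-to-eye mapping is just a linear remap of one tracked value onto another range; a sketch of that idea (the FaceOSC value ranges here are guesses, not the actual calibration):

```python
def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / float(in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical ranges: suppose the eyebrow-height gesture value sits roughly
# in [7.0, 9.5]; raised eyebrows then map to wider-open cartoon eyes.
eye_size = remap(8.25, 7.0, 9.5, 10.0, 30.0)
print(eye_size)
```

Processing provides this as the built-in map() function; the clamping keeps a twitchy tracker from producing negative or huge eyes.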

Random Face Generation Demo


Random Face Generation (Sound of Silence Performance)

My main goal for this project was to expand upon previous work and find new/interesting ways of presenting a concept. I felt this project was important in realizing these new ideas. In the overall scheme of things, I think I achieved my goal fairly well. However, I’m not sure if I did well in terms of maintaining the originality of the initial concept (from the plotting project). I had a hard time deciding whether to strictly maintain the initial concept, or to use it as a catalyst and shoot for an entirely new way of presenting the initial idea. In the end, I came up with a project that is still very close to the initial idea (i.e. – generative faces, face shapes, sizes, etc.) but also has some detail changes (i.e. – new colors, slight differences in shape movement, etc.).



// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
// adapted from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230

//Xavier Apostol
//Generative Faces: Plotter Project Concept

import oscP5.*;
OscP5 oscP5;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

float sz = 1;
float spacing = 100;
float genSz = spacing/4;
float fcOff = genSz/2;

//Initialization of Colors
float R = random(255);
float G = random(255);
float B = random(255);

//Initialization of Head
float rotInt = 15;
float hdX = cos(sz) + random(genSz, 3*genSz);
float hdY = sin(sz) + random(genSz, 3*genSz);
float rotAngle = random(-rotInt,rotInt);

//Initialization of Eyes
float lEyeX1 = sin(sz*0) + random(genSz);
float lEyeY1 = cos(sz*0) + random(genSz);
float rEyeX1 = sin(sz*0) + random(genSz);
float rEyeY1 = cos(sz*0) + random(genSz);
float lEyeX2 = sin(sz*1) + random(genSz);
float lEyeY2 = cos(sz*1) + random(genSz);
float rEyeX2 = sin(sz*1) + random(genSz);
float rEyeY2 = cos(sz*1) + random(genSz);
float ranREye = random(7, 9);
float ranLEye = random(7, 9);

//Initialization of Mouth
float mthX = cos(sz) + random(genSz);
float mthY = sin(sz) + random(genSz);
float ranM = random(-0.1, 1.5);

//Initialization of Spine
float hdOffset = hdY/1.5;
float spineSz = random(genSz/2);
float spXOff1 = random(-8, 8);
float spYOff1 = hdOffset + random(genSz/3);
float spXOff2 = random(-8, 8)+spXOff1;
float spYOff2 = random(genSz/3)+spYOff1;
float spXOff3 = random(-8, 8)+spXOff2;
float spYOff3 = random(genSz/3)+spYOff2;
float spXOff4 = random(-8, 8)+spXOff3;
float spYOff4 = random(genSz/3)+spYOff3;
float spXOff5 = random(-8, 8)+spXOff4;
float spYOff5 = random(genSz/3)+spYOff4;

void setup() {
  size(800, 600, OPENGL);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      //Create an entirely new character.
      //For Eyes
      lEyeX1 = sin(sz*0) + random(genSz);
      lEyeY1 = cos(sz*0) + random(genSz);
      rEyeX1 = sin(sz*0) + random(genSz);
      rEyeY1 = cos(sz*0) + random(genSz);
      lEyeX2 = sin(sz*1) + random(genSz);
      lEyeY2 = cos(sz*1) + random(genSz);
      rEyeX2 = sin(sz*1) + random(genSz);
      rEyeY2 = cos(sz*1) + random(genSz);
      ranREye = random(7, 9);
      ranLEye = random(7, 9);
      //For Mouth
      mthX = cos(sz) + random(genSz);
      mthY = sin(sz) + random(genSz);
      ranM = random(-0.1, 1.5); 
      //For Spine
      spineSz = random(genSz/2);
      spXOff1 = random(-8, 8);
      spYOff1 = hdOffset + random(genSz/3);
      spXOff2 = random(-8, 8) + spXOff1;
      spYOff2 = random(genSz/3) + spYOff1;
      spXOff3 = random(-8, 8) + spXOff2;
      spYOff3 = random(genSz/3) + spYOff2;
      spXOff4 = random(-8, 8) + spXOff3;
      spYOff4 = random(genSz/3) + spYOff3;
      spXOff5 = random(-8, 8) + spXOff4;
      spYOff5 = random(genSz/3) + spYOff4;
      //For Head
      hdX = cos(sz) + random(genSz, 3*genSz);
      hdY = sin(sz) + random(genSz, 3*genSz);
      rotAngle = random(-rotInt,rotInt);
      //For Colors
      R = random(255);
      G = random(255);
      B = random(255);
    }
  }
}

void draw() {  
  if(found != 0) {
    translate(posePosition.x, posePosition.y);
    //Scales head and allows for rotations
    rotateY(0 - poseOrientation.y);
    rotateX(0 - poseOrientation.x);
    ellipse(0,0, hdX,hdY);
    translate(posePosition.x, posePosition.y);
    float eyeFac = 1;
    float eyeBL = eyebrowLeft * 2;
    float eyeBR = eyebrowRight * 2;
    ellipse(-20,eyeLeft * -ranLEye, lEyeX1*eyeFac + eyeBL,lEyeY1*eyeFac + eyeBL);
    ellipse(20,eyeRight * -ranREye, rEyeX1*eyeFac + eyeBR,rEyeY1*eyeFac + eyeBR);
    ellipse(-20,eyeLeft * -ranLEye, lEyeX2*eyeFac + eyeBL,lEyeY2*eyeFac + eyeBL);
    ellipse(20,eyeRight * -ranREye, rEyeX2*eyeFac + eyeBR,rEyeY2*eyeFac + eyeBR);
    ellipse(0, 20*ranM, mouthWidth* mthX/3, mouthHeight * mthY);
    ellipse(spXOff1,spYOff1, spineSz,spineSz);
    ellipse(spXOff2,spYOff2, spineSz,spineSz);
    ellipse(spXOff3,spYOff3, spineSz,spineSz);
    ellipse(spXOff4,spYOff4, spineSz,spineSz);
    ellipse(spXOff5,spYOff5, spineSz,spineSz);
  }
}


public void found(int i) {
  println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y);
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  println("mouth width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}

Xastol – LookingOutwards04

Kimchi and Chips’ piece, Light Barrier, concerns a form of real-time processing involving the reflection of light.

The piece involves sound and the reflection of light through mirrors. Not only is this an interaction between the user who experiences the reflections, it’s an interaction between space and time. (“The light installation creates floating graphic objects which animate through space as they do through time.”)

I enjoy this interaction mainly because it allows for new forms of visuals to be experienced in open space. As a filmmaker, I know light plays a huge role in filming and projecting movies. I also believe this project leaves open the discussion of using light and open space, as seen in the video, as a medium for having films/images “appear out of thin air”. In general, Kimchi and Chips’ work sets a new standard for interactive visual experiences and creates the opportunity for artists to develop new forms of media through light.

Website: http://kimchiandchips.com/#lightbarrier

Xastol – Plot


I decided to generate some faces…or rather some “creepers” entirely out of ellipses.


Initial sketches and thoughts.

The first prototype for the faces.


Another idea I had resulted in using these “creepers” as pixels and generating another grid-like visual. However, there were some complications in how it would render out on paper, and it also wasn’t as charming as the previous idea.


Some action shots of the plotter. (Action Shot #1)

Action Shot #2

I ended up rendering a PDF entirely for the creeper’s lower region (i.e. – spine, vomit, etc.). This made it easier when switching out the colors (pens) used on the plotters.

Here is the other PDF of the creeper faces.

In program shot.

First rendering.


The process for generating these creeper faces involved a lot of trial and error (specifically in the early programming stages). A grid of these creepers was my initial idea. However, I was unsatisfied with what I had, so I began to play around with the number of generated creepers and ended up making a mess of an image (see the last picture). After some consulting with Golan, I went back to my original idea and made some changes to the creepers to make each individual one more unique and awkward (differences in “bodies” and angles of faces).

Much different from the almost instant timing of a program, the rendering of the image using a plotter took a while. Although my particular image didn’t require a lot to plot, there were complications with getting the plotter to align evenly throughout the page. If I had the chance to go back in and make some changes, I think I would change the weight of the pens (uneven because of pressure of plotter at different points). Overall, I was satisfied with what I came up with and glad I went back to my initial idea.


//Template Provided by Golan
// Xavier Apostol (Xastol)
// 60-212 (8:30 - 11:20am & 1:30 - 4:20pm)
// xapostol@andrew.cmu.edu
// Composition For A Line Plotter (Processing)

import processing.pdf.*;
boolean bRecordingPDF;
int pdfOutputCount = 11; 
void setup() {
  size(1000, 1000);
  bRecordingPDF = true;
}

void keyPressed() {
  // When you press a key, it will initiate a PDF export
  bRecordingPDF = true;
}

void draw() {
  if (bRecordingPDF) {
    background(255); // this should come BEFORE beginRecord()
    beginRecord(PDF, "weird" + pdfOutputCount + ".pdf");

    //Make all drawings here.
    float offset = 100;
    for (float x=offset; x <= width - offset/2; x+=offset) {
      for (float y=offset; y <= height - offset/2; y+=offset) {
        creeper_pixel(x, y, 1, offset);
      }
    }

    endRecord();      // close the recording so the PDF file is written
    pdfOutputCount++; // the next export gets a fresh filename
    bRecordingPDF = false;
  }
}

void creeper_pixel(float x, float y, float sz, float spacing) {
  float genSz = spacing/4;
  float fcOff = genSz/2;
  float spineX = x;
  float spineY = y;
  // Eyes
  for (float i=0; i <= sz; i++) {
    float lEyeX = sin(sz*i) + random(genSz);
    float lEyeY = cos(sz*i) + random(genSz);
    float rEyeX = sin(sz*i) + random(genSz);
    float rEyeY = cos(sz*i) + random(genSz);
    ellipse(x-fcOff, y, lEyeX, lEyeY);
    ellipse(x+fcOff, y, rEyeX, rEyeY);
  }
  // Head and mouth
  for (float j=0; j < sz; j++) {
    float hdX = cos(sz*j) + random(genSz, 3*genSz);
    float hdY = sin(sz*j) + random(genSz, 3*genSz);
    spineY += (hdY/2);
    ellipse(x, y, hdX, hdY); // head centered on the face, not at the origin
    float mthX = cos(sz*j) + random(genSz);
    float mthY = sin(sz*j) + random(genSz);
    ellipse(x, y+fcOff, mthX, mthY);
  }
  // Spine
  for (float s=0; s < 5; s++) {
    float spineSz = random(genSz/2);
    spineX += random(-8, 8);
    spineY += random(genSz/3);
    ellipse(spineX, spineY, spineSz, spineSz);
  }
}

Xastol – Clock-Feedback

The feedback I received for the clock project was very helpful. In particular, the comments got me thinking about my design and concept. A lot of the comments seemed to support my concept, but would have liked to see more in terms of making it come to life (i.e. – having the times start off screen, differences in numbering, etc.). If I were to do this project over, I would definitely reconsider how to make the aesthetics more void-like, or maybe move towards a different aesthetic (i.e. – deep space).

Xastol – AnimatedLoop



I actually really enjoyed creating this piece. Initially, I was having a hard time coming up with a concept. However, after working on the “Interruptions” re-code, I accidentally came across having all the lines point in one random direction. This eventually got me thinking about the potential of having different shapes point in a singular direction.

While thinking of different ways to further this concept, I was listening to a jazz musician by the name of Trombone Shorty (http://www.tromboneshorty.com/) and thinking about how wide he allows his mouth to open in order to get new and interesting tones out of his voice and instruments. One thing led to another, and I decided to create a creature that was constantly talking, or “quaking” in this case, and that didn’t really have control over how wide it allowed its aperture to open.


I really wanted to create something cute and kind of funny-looking in this project, which I think I achieved very well. I think I fell short on how expressive the creature (duck) could be with regard to other parts of its body, even though my intent was to focus on the aperture and face of the creature. If I were to continue working on this piece, I think I would create a body for it and have that flail around as well.

GitHub Link: https://github.com/xapostol/60-212/tree/master/Deliverables%203%20-%20xastol/Animated_Loop


Xastol – LookingOutwards03

A generative piece that I really enjoyed is Aether by Thomas Sanches and Gilberto Castro. The piece is, “A series of studies in geometric symmetry, dynamic particles and interactivity on a large multitouch screen.”

I really admire the interactivity and how the artists allow for users to explore different ways of altering the geometric structures. Interactivity is something I’ve always been interested in, and I believe that giving the user the power to control the outcome of pieces is really important for developing a more personal relationship with the consumer.

The algorithm that generated the work had to involve a lot of geometry and physics, and had to account for how certain points react to one another and to the entire geometric system.

The artwork’s effective complexity lies in the geometric shape that the user interacts with. At first glance, it looks like just a random shape. However, the system becomes complex when the user alters it: each user will manipulate the piece in a different way, allowing for more complexity and a different outcome. The idea of order and disorder is balanced by letting the system return to its original state (or a similar state) even after it has been altered.

Aether Website – http://codigogenerativo.com/works/aether/

Xastol – Reading03

1) As a young boy growing up in a desert wasteland, clouds always caught my attention. I believe they’re a good example of effective complexity because they’re easy to depict; everyone can tell a cloud by its random shape. At the same time, clouds are very complex in their formations and volumes of gases (each cloud is unique).

Here are some clouds formed in the Arizona deserts:




2) The problem I have the most internal conflict with is “The Problem of Creativity”. I often question how my work is unique from that of others (i.e. – how my aesthetics/purpose differ while creating art in a systematic medium). In terms of this problem, my goal is to create work that counters it; I want my work to be undoubtedly me.


1) There are small random lines.
2) The lines all have the same size.
3) The lines are all black.
4) The background is white (tint of gray).
5) For the most part, the lines seem to be evenly spaced.
6) The lines are generated at different angles than the ones around them.
7) There are some interruptions (spaces without lines).
8) The lines follow one general direction (may be vertical or horizontal).


Looking at the piece, I noticed a few difficulties I would have to deal with, the obvious one being how to get lines to show up across the board relatively evenly. My first attempt started with making an entirely new line class that would give a random angle to each line. However, this became difficult when storing the lines in a list and trying to display them (the display function was buggy). Instead, I took a more direct approach and created a double for-loop. This ended up being a lot more time-efficient and easier to read. The problem I still struggle with is getting larger random white-space to occur. In my code, I applied randomGaussian() to the y-values of each line. This creates some disruption in the general pattern, but the gaps are still not big enough.
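The approach described above — a double for-loop over a grid, one fixed-length line per cell at a random angle, with randomGaussian() jitter on the y-values — can be sketched as a self-contained Java program. Note that the per-cell skip probability used here to force larger white-space gaps is my own assumption, not something from the original code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Interruptions {
    // One short line, stored by its two endpoints.
    static class Segment {
        final double x1, y1, x2, y2;
        Segment(double x1, double y1, double x2, double y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
    }

    // Double for-loop over a grid: each cell gets one fixed-length line at a
    // random angle; Gaussian jitter perturbs y, and a small skip probability
    // (an assumption here) carves out the occasional interruption.
    static List<Segment> generate(int cols, int rows, double spacing,
                                  double halfLen, double skipProb, long seed) {
        Random rng = new Random(seed);
        List<Segment> lines = new ArrayList<>();
        for (int i = 0; i < cols; i++) {
            for (int j = 0; j < rows; j++) {
                if (rng.nextDouble() < skipProb) continue; // interruption
                double cx = i * spacing;
                double cy = j * spacing + rng.nextGaussian() * spacing * 0.2;
                double a = rng.nextDouble() * Math.PI;     // random angle
                lines.add(new Segment(cx - Math.cos(a) * halfLen,
                                      cy - Math.sin(a) * halfLen,
                                      cx + Math.cos(a) * halfLen,
                                      cy + Math.sin(a) * halfLen));
            }
        }
        return lines;
    }

    public static void main(String[] args) {
        List<Segment> lines = generate(50, 50, 10, 4, 0.05, 42L);
        System.out.println(lines.size() + " lines generated");
    }
}
```

Drawing is then just one line() call per Segment, which is what makes the double-loop version so much easier to manage than a line class with its own display logic.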

I really appreciate how such a simple-looking piece is actually pretty complex when translated into code. I really admire Molnar’s ability to make the computer produce human-like patterns (it seems as though a person drew the work by hand instead of programming it).

Github Code: https://github.com/xapostol/60-212/tree/master/Deliverables%203%20-%20xastol/Interruptions_p5



For this week’s project, I decided to develop a clock in a setting that seems indifferent to time: the void. This idea of a void being a placeholder for time came to be when I realized that these seemingly empty/endless places (voids, black holes, etc.) only seem empty and endless because there is no evidence of anything existing in them. However, just because our understanding of a subject is restricted doesn’t mean the subject doesn’t exist.

Imagining a void with characteristics completely different from most expectations, I decided to give the void boundaries and make the colors of the three different creatures vibrant. The hour creatures are the largest of the three and move very slowly. The minute creatures are middle-sized and move at a calm pace. The second creatures are the smallest and the fastest, as they appear and disappear the most. Additionally, each creature has a number placed on it (representing its number in terms of time). However, these numbers are purposefully placed in hard-to-read areas of the creature. This ties directly into the previous theme: just because something is not visible (or we don’t understand it) doesn’t mean it doesn’t exist.
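The size/speed relationship described above boils down to an inverse mapping: the bigger the creature, the slower it moves. The following Java sketch is illustrative only; the specific sizes and the speed constant are assumptions, not the values my clock actually uses:

```java
public class VoidClock {
    // Illustrative creature sizes (assumptions, not the sketch's real values):
    // hour creatures are the largest, second creatures the smallest.
    static double creatureSize(String unit) {
        switch (unit) {
            case "hour":   return 120;
            case "minute": return 60;
            case "second": return 25;
            default: throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }

    // Speed is inversely proportional to size, so the smallest creatures
    // (seconds) dart around while the hour creatures barely drift.
    static double creatureSpeed(String unit) {
        return 300.0 / creatureSize(unit);
    }

    public static void main(String[] args) {
        for (String unit : new String[] {"hour", "minute", "second"}) {
            System.out.println(unit + ": size " + creatureSize(unit)
                    + ", speed " + creatureSpeed(unit));
        }
    }
}
```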

Github Code Link: https://github.com/xapostol/60-212/tree/master/Deliverables%202%20-%20xastol/clock_p5



A computational project that got me interested in taking this class was SketchSynth by Billy Keyes. The project is basically a drawable controller. The user takes a piece of paper and draws various buttons, sliders, and toggle switches. The program recognizes these drawn controls and then makes them functional by tracking human interactions with them. I admire how the artist connects the physical and virtual worlds through the nature of the project, and also how the user has control over what type of controller they develop. I also think it’s cool that this project actually sprang from a class held at CMU in 2012. It goes to show how we, although intermediate programmers, have the resources to develop such exciting programs.

The project was created/developed entirely by one student. In terms of the software used to develop the program, Keyes primarily used open-source software (openFrameworks) along with add-ons developed by other artists. This project aids in developing a stronger user influence and sets the stage for more complex works where users can change the outcome of the program based on minor decisions.


Eyeo 2016 – Kyle McDonald from Eyeo Festival // INSTINT on Vimeo.

Kyle McDonald is a media artist based in Brooklyn, New York. He attended Rensselaer Polytechnic Institute, where his studies spanned philosophy and computer science. Kyle’s work has a lot to do with experimenting with visuals and how they affect the world. As a video artist, I find this resonates with me. Among my favorites of Kyle’s works are “Exhausting a Crowd” and, more recently, “pplkpr”. Both deal with creating new visual statements and helping others develop different perspectives from these statements.

Something Kyle said that really stuck with me was, “If Ella Fitzgerald never sang a single song, and a bot synthesized her tomorrow, would it feel the same?…I think it might and that’s kind of scary.” Although this is his response to a question he posed to himself, it still holds a lot of weight. It shows his internal struggle in aiding the advancement of AI at the expense of losing humanity (the basis of art). As artists living in an era of technology, I think we should all ask ourselves similar questions in order to determine whether we’re guiding the tech for our artistic purposes, or the tech is guiding us.

Kyle McDonald Website – kylemcdonald.net


First word art and last word art are constantly battling one another in today’s society. New innovations and discoveries in technology create the basis for first word art, while improving upon these technologies to create even more elaborate works is central to last word art.

Personally, I don’t believe one side is more important than the other. The two forms of art are essentially the same thing in that they are a part of the same timeline of events and have no end: art is always “becoming” and “evolving”. In terms of technology, the advancement of tech allows for more diverse works of art to be made, which has a direct effect on culture (tech influences how society acts with the world/perception of the world is changed). For example, advancements made in VR (Virtual Reality) will change how entertainment (video games, movies, etc.) will be consumed. Already, our culture reflects this with more video games having compatibility with VR products such as the Oculus Rift. In turn, the needs and wants of society will push these technologies to further develop. Since people want to consume entertainment in a more realistic way, tech will improve to match these desires.