Catlu – Last Project

Please click on images to play GIF (for some reason they won’t play otherwise):

For this project, I wanted to do more coding with Python in Maya: to practice scripting and get to know the Maya-specific commands, like the basic polygon commands and how to query information about objects and control them. At first I thought I wanted to construct a generative city in Maya, but I later decided against it; with the knowledge of Maya Python I could realistically pick up in the time I had, I didn't think I'd be satisfied with how complex a city I could make. Instead, I decided to explore another useful feature: making objects move in relation to another object. Originally I wanted to make a projectile object that scattered particles in a field, so I started with more basic movements. I had a really hard time getting things to work this project. Last time I used code to mass-produce objects at different angles; this time it was moving objects, and mass-generating objects was definitely a lot easier. Even though the things I was trying to do weren't supposed to be that hard, they turned out harder and more time-consuming than I expected, and figuring out Maya's kinks without a good guide was also challenging. In the end, I could only get basic animation code to sort of work. I generated the lanterns in the scene in their formation using code, and made them move in relation to the mask using code. I think I learned more about coding in Maya and am more comfortable with it, but I definitely need to practice tons more.
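Since the final script was lost in the crash, here is a minimal sketch of the kind of placement math a lantern-generation script involves. The function name, the ring layout, and the `maya.cmds` calls in the comment are my assumptions, not the original code:

```python
import math

def lantern_positions(count, radius, height):
    """Evenly space `count` lantern positions around a circle of `radius`,
    all floating at `height` (y-up, matching Maya's default axis)."""
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        positions.append((radius * math.cos(angle), height, radius * math.sin(angle)))
    return positions

# Inside Maya, each position would be handed to the command module, e.g.:
#   import maya.cmds as cmds
#   for x, y, z in lantern_positions(12, 10.0, 5.0):
#       lantern = cmds.polyCube()[0]   # stand-in for the lantern model
#       cmds.move(x, y, z, lantern)
```

Keeping the layout math separate from the `cmds` calls also makes the formation easy to tweak without touching the scene.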

 

Here are the links to the code on Github. Once again WP-Syntax has failed me.

These are not the final versions of the code I used for the animation and creation. Unfortunately Maya crashed before I saved the final code, so the versions linked below are the not-quite-final ones.

Maya lantern move code

Maya lantern generate in pattern code

Catlu – Final – Proposal

For my final project I'd like to continue working in Maya and learning more about Python scripting. I'm not sure whether I want to continue with my mocap project and improve it (then I wouldn't have to remodel things) or start a new project instead. I'm leaning towards continuing the mocap project so I can really dig into Python scripting in Maya.

Catlu – Arialy – Object

IoT – Umbrella

https://vimeo.com/192148854

Physical Object Documentation:

dsc02195 dsc02209 dsc02210

If This Then That Documentation:

screen-shot-2016-11-18-at-12-05-44-pm screen-shot-2016-11-18-at-12-09-59-pm screen-shot-2016-11-18-at-12-10-45-pm screen-shot-2016-11-18-at-12-12-09-pm

Diagram:

file_000

For our project, we were drawn to the idea of using the internet and littleBits to establish a connection between two people. We thought a lot about how we could use the bits we had (mostly lights, buttons, and a speaker) to establish a relationship between different places. In the end we decided to use the sound of rain. When people live apart in different places, it's often hard to feel connected because you no longer experience things together. The final idea we settled on was a set of two umbrellas, each of which uses a cloud bit to connect to If This Then That (IFTTT). Each umbrella would be set to the place where the other person is, so when that person is experiencing rain, you hear the rain too. Sometimes it only takes a little to establish a greater feeling of intimacy, and our project was an attempt to do so. For the umbrella, we connected a wall-charger power bit to a cloud bit, a speaker bit, and an mp3 player. We then connected the cloud bit to IFTTT and created an applet with Weather.com that lets you set a location. If the status of that location changes to "rain," IFTTT sends a signal to the cloud bit, and the speaker starts playing the sound from the mp3 player. Ideally we would have used something small like an iPod Nano or Shuffle, but we had to make do with a phone. Originally we also wanted to use a battery for the power supply, but the p1 battery power didn't work with the cloud bit, which then wouldn't connect to IFTTT. We ended up using the wall-charger bit and running the strip along the edge of the wall.

Catlu – Mocap

 

Final rendered video on Vimeo:

https://vimeo.com/191498194

Screenshot of the work right before rendering:

skeleton

Screenshot of the scripting device in Maya:

scriptsnip

Gif of time slider animation pre-render in Maya (you may need to click it to see it run):

movement-of-air

Sketches I did of the characters:

20161114_003659 20161114_003704

For my project, Golan suggested that instead of using Processing or Three.js, I could learn scripting in Maya because of my interest in animation. I was very excited to start this project, and approached the motion capture with a more story-focused mindset than I think most of the class did. I wanted to use scripting to do things in Maya that I couldn't do by hand (or at least couldn't bear to do by hand in the given time frame) that would supplement a story, no matter how short. My initial idea was a pair of disgraced/fallen/unfit samurai circling each other in blame, getting closer and farther apart, with an audience of masks always turning to look at the two of them and gradually closing in. Eventually, I realized I didn't have time to model two samurai, so I settled on modelling the shell (mask, gloves, socks, cape) of a single disgraced/fallen/unfit samurai warrior and trying to achieve a feeling of melancholy and nostalgia for a better time. I wanted to use Python scripting to generate and randomly place another modelled mask, and to make the masks turn their faces to follow the main mocap samurai whenever he moved. Starting this project, I watched and followed along with video tutorials on Python scripting in Maya. After confirming I could do what I wanted (which the tutorials actually mostly covered), I started modelling. Before this project, I had only a bit of basic modelling experience and a broad overview of what Maya could do, and the modelling took more time than I expected. Afterwards, I learned how to import a BVH file into Maya and how to rig/bind a model to the BVH skeleton. When I got to coding, I ran into an unexpected problem: although the masks would turn to face the samurai, once the samurai was bound to the skeleton, this no longer worked.
At first I tried binding the skeleton different ways, but in the end I made a separate, fully transparent object that I hand-animated to follow the samurai around, and had the masks follow that object instead. Ultimately I didn't like the effect of the turning masks: they didn't turn enough to be clearly noticeable, which just made the scene more confusing. After finally getting everything set up and moving, I learned how to render. This was the first time I'd rendered a scene, and I didn't expect the final count to be around 2000 frames, which took longer to render than I thought. I tried to change the frame rate to 24 fps, but doing so significantly slowed down the mocap. The final step was to take my rendered scenes and stitch them together in Premiere. The end product played slower than it looked in Maya, so I sped it up, ultimately shortening it by half; it also rendered darker than my test frames, and I didn't have time to re-render, but it was good experience for the next time I render something. In the end I'm satisfied with the project, but given more time I'd definitely like to do more with it: really getting things to move, thinking more interactively along with my story focus, and leaving enough time for when things don't work out. I want to utilize code more, dig deeper into what I can do with it, and learn more of the Maya Python vocabulary.
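The mask-turning behavior boils down to a look-at computation. As a rough illustration (my own reconstruction, not the lost script), the yaw needed to face a target can be computed like this, assuming a mask's face points down +Z at zero rotation:

```python
import math

def yaw_toward(mask_pos, target_pos):
    """Y-axis rotation in degrees that turns an object at mask_pos to face
    target_pos, assuming its front points along +Z when unrotated."""
    dx = target_pos[0] - mask_pos[0]
    dz = target_pos[2] - mask_pos[2]
    return math.degrees(math.atan2(dx, dz))

# In Maya this could drive each mask per frame, e.g.:
#   cmds.setAttr(mask + ".rotateY", yaw_toward(mask_pos, samurai_pos))
# or be replaced entirely by an aim constraint on the follow object:
#   cmds.aimConstraint(follow_target, mask)
```

An aim constraint pointed at the transparent follow object is essentially what the hand-animated workaround achieves, without per-frame scripting.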

Once again the WP-Syntax tool still hates me, and so here is the Github link to the code:

Code

Catlu – LookingOutwards08

inFORM

Tangible Media Group – Daniel Leithinger, Sean Follmer, Alex Olwal, Akimitsu Hogge, Hiroshi Ishii / 2013

inFORM project link:

http://tangible.media.mit.edu/project/inform/

inFORM is a project by the Tangible Media Group that I really enjoyed. The project aims to create a relationship between a user's digital information and tangible space. I found it amazingly playful and fun, and the idea of transferring digital data into physical form is very interesting to me. In a digital world, we often don't make much of an effort to be physical anymore, even though the physical world is so important and integral to living. While we stare at our screens, we're almost living in a virtual world that is constantly evolving to fit us better, to be more addicting, and to not let us go. When everyone is making such an effort to turn everything digital, it's refreshing to see a way physicality can factor back into digital space and possibly make it better. I think the thought processes and technology behind this project have great potential going into a future of digital takeover, and that they will help us develop combined digital-physical spaces, interactions, and interfaces that satisfy both our invisible and our tangible needs, which, truly, I think are the best kind.

Also, they really made it look damn good in the video.

Catlu – Manifesto

One tenet of the manifesto I found interesting was tenet number 1. It basically says that every piece of technology we depend on must be considered both a "challenge and a threat." Because we depend on these objects, it is imperative that we know them inside and out, all their workings, so we can rise to the challenges our dependency creates, perhaps even shake ourselves from their shackles, and be prepared for their failure or side effects. This should be done with all technology regardless of "ownership or legal provision." To me, this tenet is extremely important as we continue in the technological era. Technology surrounds us so completely that many of us don't think much about it anymore; we assume all our commodities will keep working forever. This dependency combined with our mindlessness could end in catastrophe. Therefore I agree that a critical engineer must think not only about the great effects a new or old invention may have, but also about the negative effects of its existence, along with the effects of its absence after a prolonged period. If the internet had suddenly crashed a year after it was invented, it might have been inconvenient, but if the internet crashed tomorrow, there would be a global crisis. Panic would spread: information would be lost, communication would go down, and a massive number of services the internet provides would vanish, services we simply don't know how to live without. I know people who can't even get around without Google Maps. For this perhaps-eventual crisis, I don't know if we have a backup. I don't know what the damage could be, and that is terrifying. To responsibly bring in new technology, we have to first be the critical engineer and look further than the technology itself, so we can gauge the cost of dependency and the perhaps unexpected costs of its existence.

Catlu – Visualization

datavisualization

On this project, I was initially curious about how many times each bike returned "home," based on the station it was parked at at the beginning of the year. I later realized I couldn't do this because the Healthy Ride data did not come with dates. I then focused on the idea of bike "diversity": I wondered how many different bikes had visited each station. I thought this information would be best shown in a bar graph for clear comparison. More than information, I guess I was trying to draw out a story. First, I pulled the Healthy Ride file into Processing and used it to calculate the "diversity" per station, which I saved as a TSV file. As for making the visualization, D3 proved a bit confusing. I tried to load the TSV into my code but just couldn't get it to show up. Since the data I wanted to use with D3 wasn't that long, I ended up hard-coding it as two arrays. I changed and tested a lot of things in the code (taken from the Workergnome D3 bar graph example) and ended up with my graph.
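The Processing pass that computes "diversity" is essentially counting distinct bikes per station. A sketch of that logic (in Python for brevity; the pair layout is a placeholder, not the actual Healthy Ride column names):

```python
from collections import defaultdict

def bike_diversity(trips):
    """Count how many distinct bikes have visited each station.
    Each trip is a (station, bike_id) pair; with the real data, each
    ride would contribute both its start and end stations."""
    seen = defaultdict(set)               # station -> set of bike ids
    for station, bike_id in trips:
        seen[station].add(bike_id)
    return {station: len(bikes) for station, bikes in seen.items()}
```

Using a set per station means repeat visits by the same bike are counted only once, which is exactly what "diversity" needs.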

The screenshot is a little blurry for some reason. Here is a link to a clearer version you can zoom in on:

localhost

Here is the Processing code for the bike diversity calculations (github):

bike calculations

Here is the D3 code used to make my bar graph (github):

bike D3 code

Catlu – LookingOutWards07

Fleshmap (2008) – Martin Wattenberg collaborating with Fernanda Viegas and Dolores Lab

capture

I found the Fleshmap project very interesting because of the nature of the investigation. Sex and desire are deeply embedded in our culture right now, but at the same time they're still treated as heavily taboo, both in daily conversation and in the school system's treatment of sexual education. Visually, I find the first graphic of the most desirable places to be touched (man and woman) very engaging, and although the data (from Mechanical Turk) isn't surprising, it does help the viewer understand the erotic nature of the data more palpably. The other part of the project I found really interesting was the analysis of how often different body parts are mentioned in different genres of music. The information you can draw from this visualization (second image) gives surprising insight into the way sex factors into cultural and musical trends. It's interesting to consider the current popularity of hip hop alongside the finding that it's one of three genres that does not mention the eyes most, and instead mentions the ass most, as opposed to the other two, which mention hands most. This is fascinating when you think about how the eyes are called "the window into the soul" and are often associated with love and emotion, while asses are most often an object of desire. I think Fleshmap is a meaningful project that shows the pervasive and popular nature of sex in the current cultural environment.

Catlu – LookingOutwards06

A pair of bots I really liked were Thricedotted’s The Rhymin’ Riddler, and the Riddler’s Apprentice. The Rhymin’ Riddler posts riddles that always start off with “What do you call a…” and then the other bot the Riddler’s Apprentice will respond with an answer to that riddle that will rhyme. One thing I like about these bots is how funny they are. Often, the answers to the riddles make more sense than I expect them to, all while rhyming. I like that there are two bots that interact with each other. I think the idea of bot interaction is really nice, because even though the twitter accounts are just bots, they seem to take on more personality when there’s plausible interaction between the two, and the tweets seem mysteriously more intriguing automatically. It’s funny how you can almost see the relationship between the riddler and the apprentice.

joke2

joke3

riddle

Catlu – Book

MythMash is a whimsical mashup of the styles of major world mythologies. It pulls from a list of nine mythologies: African, Arabian, Asian, Celtic, Classical, Egyptian, Native American, Icelandic, and Polynesian. Each page contains a small myth generated using Markov chains and two randomly chosen mythologies. The idea is to think about the place of stories and mythology in the world: the different mythologies share many similarities while retaining individuality and cultural flavor. I encourage the reader to keep this in mind while enjoying fun, entertaining, newly generated stories.

20161028_004932

https://vimeo.com/189265806

The idea for this book came as I was thinking about how to incorporate something I really loved and cared about into this generative book project. I think mythology is fascinating, beautiful, and amazing in scope. As I thought about it, I considered how certain stories carry across cultures and regions: hero stories, creation myths, god figures, etc. That made me think about how storytelling is universal, a vessel for creativity and empathy. Eventually, I decided to mash mythology styles together to show these similarities in an interesting and whimsical way, while hoping each still retained enough individual flavor to be recognizable within the whole. Starting this book, I read a bit on Markov chains and decided to use RiTa.js' Markov functions. For each mythology, I went through online sources and our school library to find texts of myths, compiling about 60 pages in MS Word for each. My program randomly selects two different mythologies for each page and mashes them together. For accompanying pictures, at first I thought I could use patterns (also universal), but Golan suggested this could be cliché and I agreed. He then suggested choosing random images vaguely related to my myths, which might add a humorous and mysterious element. I liked the idea, so Golan gave me a file of 1 million Flickr photos and their captions. For each myth, I picked a random noun from the first 15 or so words, looked for it in the captions Golan provided, and saved everything to a JSON. I had a lot of problems with the million-image file, which took 2 hours to transfer every time I had to move it. A small percentage of the images were missing (and way more of the missing photos got pulled than I expected), so I had to manually substitute existing images when importing into InDesign. Eventually, as I was checking through, I found that some of the caption numbers were offset by 1 while others were not, seemingly at random. Very frustrated at this point, I took a hard look at the pictures and decided they weren't relevant enough to be funny or to merit being in the book, so in the end it's just the text. I was very excited about this project to begin with, and wish it had turned out a little better. I couldn't figure out how to write my own Markov code, so I had to use RiTa's, with little room for personalization; Dan Shiffman released videos explaining how to write Markov chains right when I was rushing to get the pictures working with the Markov I already had. As for the pictures, I spent maybe 25-30 hours transferring the files several times (8 hours alone), figuring out the code, and working out all the insidious little bugs, like the shifting zeros across the 1000 folders of 1000 pictures in the million-image file. In the end, I was disappointed that I didn't love the effect of the pictures and didn't use them after spending so long on them. I really like the idea of the generative book and how the myths came out, and I may revisit this project when I have more time to get everything right. The content of this book is important to me, and in the future I plan on coming back and making the book every bit as good as it can be: writing the Markov code myself and thinking through the right way to illustrate the myths.
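For the record, a hand-rolled word-level Markov generator, the thing I ran out of time to write in place of RiTa's, could look roughly like this (all names and parameters here are illustrative, not RiTa's internals):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Word-level Markov chain: map each `order`-word prefix to the list
    of words that follow it somewhere in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def mash(text_a, text_b, length=50, order=2, seed=None):
    """Generate text from a chain built over two corpora at once, so the
    output can wander between the two mythologies' styles."""
    rng = random.Random(seed)
    chain = build_chain(text_a + " " + text_b, order)
    prefix = rng.choice(list(chain))      # random starting prefix
    out = list(prefix)
    for _ in range(length - order):
        choices = chain.get(tuple(out[-order:]))
        if not choices:                   # dead end: no known continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Concatenating the two corpora before building the chain is the simplest way to get cross-mythology transitions; a fancier version would weight them or interleave chains.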

MythMash PDF:

mythMash2.pdf

The code can be found in the below Github links:
Github Code:

Processing:

https://github.com/catlu4416/60-212/blob/master/markov4.pde

import processing.pdf.*;
import rita.*;


String captions[];

int rand1;
int rand2;
String myth1;
String myth2;

String mythsTitlePart1;
String mythsTitlePart2; 
String mythsTitle;

//lexicon initialization
RiLexicon lexicon = new RiLexicon();

//markov initialization
RiMarkov markov;
String mythText = "click to (re)generate!";
int x = 160, y = 240;

//array of myth types
String[] mythTypes = {"africanMyths.txt", "nativeAmericanMyths.txt", "asianMyths.txt", "arabianMyths.txt", 
  "celticMyths.txt", "egyptianMyths.txt", "norseIcelandicMyths.txt", "polynesianMyths.txt", "classicalMyths.txt"};

String[] mythTypesText = {"African", "Native American", "Asian", "Arabian", 
  "Celtic", "Egyptian", "Icelandic", "Polynesian", "Classical"};


//caption stuff
String captionPull;
//caption stuff try 2
String mythCaption1;
String mythCaption2;

String searchNoun;

int foundCount = 0;
IntList picNumbers;

//JSON stuff
JSONArray bookStuff;
String jsonFinal;


void setup()
{
  size(500, 800);

  beginRecord(PDF, "everything.pdf");

  fill(0);
  textFont(createFont("times", 16));
  bookStuff = new JSONArray();
}

void captions() {
  //caption stuff//Golan

  picNumbers = new IntList();

  String captionsFilename = "SBU_captioned_photo_dataset_captions.txt"; 
  String captions[] = loadStrings(captionsFilename); 

  int answerCount = 0; 
  String wordsIWant[] = { searchNoun }; 
  // search every caption line for the chosen noun
  for (int i=0; i<captions.length; i++) {
    if (captions[i].indexOf(searchNoun) != -1) {
      picNumbers.append(i);
      answerCount++;
    }
  }
  foundCount = answerCount;
}

Basil.js JavaScript for InDesign (.jsx):

https://github.com/catlu4416/60-212/blob/master/myBook.jsx

#includepath "~/Documents/;%USERPROFILE%Documents";
#includepath "D:\\Documents";
#include "basiljs/bundle/basil.js";

// Load a data file containing your book's content. This is expected
// to be located in the "data" folder adjacent to your .indd and .jsx. 
var jsonString = b.loadString("book.json");
var jsonData;
var imageFolder;
var anImageFilename;
var anImage;

//--------------------------------------------------------
function setup() {

  // Clear the document at the very start. 
  b.clear (b.doc());
  
  // Make a title page. 
  b.fill(0,0,0);
  b.textSize(24);
  b.textFont("Garamond"); 
  b.textAlign(Justification.CENTER_ALIGN); 
  b.text("MythMash", 72,72,360,36);
  b.textSize(16);
  b.text("Is a journey into the many mythologies of our world; a whimsical examination of their similarities and differences through generative mashup myth.", 72,108,360,300);

  
  // Parse the JSON file into the jsonData array
  jsonData = b.JSON.decode( jsonString );
  b.println("Number of elements in JSON: " + jsonData.length);
  b.println(jsonData);


  // Initialize some variables for element placement positions.
  // Remember that the units are "points", 72 points = 1 inch.
  var titleX = 320; 
  var titleY = 415;
  var titleW = 150;
  var titleH = 50;

  var captionX = 54; 
  var captionY = 54;
  var captionW = 396;
  var captionH =396;

  var imageX = 72; 
  var imageY = 72-30; 
  var imageW = 72*3; 
  var imageH = 72*3;


  // Loop over every element of the book content array
  // (Here assumed to be separate pages)
  for (var i = 0; i < jsonData.length; i++) {

    // Create the next page. 
    b.addPage();

    // Load an image from the "images" folder inside the data folder;
    // Display the image in a large frame, resize it as necessary. 
    b.noStroke();  // no border around image, please.
    
    //6 digits (hundreds of thousands)
    if (b.floor((jsonData[i].image/1000000)) == 0) {
        imageFolder = "00" + b.floor((jsonData[i].image/1000));
    }
    //5 digits (tens of thousands)
    if (b.floor((jsonData[i].image/100000)) == 0) {
        imageFolder = "000" + b.floor((jsonData[i].image/1000));
    }
    //4 digits (thousands)
    if (b.floor((jsonData[i].image/10000)) == 0) {
        imageFolder = "0000" + b.floor((jsonData[i].image/1000));
    }
    //first 1000
    if (b.floor((jsonData[i].image/1000)) == 0) {
        imageFolder = "0000" + b.floor((jsonData[i].image/10000));
    }

    
    
    var anImageFilename = "images/" + imageFolder + "/" + jsonData[i].image + ".jpg";
    var anImage = b.image(anImageFilename, imageX, imageY, imageW, imageH);
    anImage.fit(FitOptions.PROPORTIONALLY);
   
    

    // Create textframes for the "title" field.
    // Draw an ellipse with a random color behind the title letter.
    //b.noStroke(); 
    //b.fill(b.random(180,220),b.random(180,220),b.random(180,220)); 
    //b.ellipseMode(b.CORNER);
    //b.ellipse (titleX,titleY,titleW,titleH);
    
    b.addPage();
    
    b.fill(0);
    b.textSize(10);
    b.textFont("Garamond"); 
    b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN );
    b.text(jsonData[i].title, titleX,titleY,titleW,titleH);
    //b.println(jsonData[i].title);

    // Create textframes for the "caption" fields
    b.fill(0);
    b.textSize(12);
    b.textFont("Garamond"); 
    b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN );
    b.text(jsonData[i].caption, captionX,captionY,captionW,captionH);

  };
}

// This makes it all happen:
b.go(); 


Here's a video of Golan flipping through my book:

Catlu – LookingOutwards05

Jeremy Bailey – Preterna (AR Pregnancy)

jeremy-bailey

One of the projects I found really interesting at the VR salon during Weird Reality was Jeremy Bailey's pregnancy simulator, Preterna. When I put the VR headset on, I was transported to a calm plain of grasses and wildflowers. As I looked down, I saw the body of a pregnant woman. I thought the premise and execution of this project were really smart. By placing the mesh of a pregnant woman at a certain spot and having us stand at that same spot, it really did feel natural to look down and see a body that could be ours. I appreciated that we could see funky versions of our arms and hands without having to hold a remote or controller; it made the feeling of holding my hands to my "pregnant belly" more real. Although I couldn't actually feel the belly, I did get an odd sense of happiness and contentment, probably because of general associations between motherhood and happiness, and also because of the calm environment. Many people have wished to step into the body of someone of another gender, and I think this is a great way for men to see at least a little of what it's like to be pregnant. I thought it was very smart and thought-provoking.

Catlu – FaceOSC

FaceOSC Project Video:

https://vimeo.com/187303953

GIF:

movement-of-air

I couldn’t get the WP-Syntax Plugin to work correctly for my code, so here is a link to the code on Github where it looks decently nice:
https://github.com/catlu4416/60-212/blob/master/Face_thing.pde

20161014_04242720161014_042449

 

For this project I began at a loss for what to do. At first I did some research into importing 3D models into Processing, but quickly realized I didn't have time to figure that out. I thought about making a game, but felt weird about controlling it with my face; personally, I find moving parts of my face like my eyebrows very hard and awkward. In the end I decided to make a small devilish face that hides behind energy particles. When the devil face opens its mouth, the particles gather, and once they've collected, it shoots them back at the screen. Afterwards, the particles return to where they were before, albeit closer to the devil face's mouth. If you don't let the particles gather long enough and don't keep your mouth open wide enough, they slowly drift back to their positions. I feel alright about this project, but not great. I didn't have a very intriguing inspiration for it, and although I think the end result is fun, I didn't get to do as much with it as I wanted. Making the particles gather, disperse, and follow the sequence of steps took a lot longer than I thought it would. I added a few small nuances like randomized particle speed and size changes, but I feel there could have been more attention to making it really shine.
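The gather/shoot sequence is a small state machine driven by FaceOSC's mouth-height value. Here is a sketch of that logic (the state names, thresholds, and rates are my guesses, not the actual Processing code):

```python
def update_face(state, charge, mouth_open, charge_rate=0.02, full=1.0):
    """One frame of the devil-face logic: particles gather while the
    mouth is held open, shoot once fully charged, and drift home
    otherwise. Returns the new (state, charge) pair."""
    if state == "idle":
        charge = max(0.0, charge - charge_rate)  # particles drift back
        if mouth_open:
            state = "gathering"
    elif state == "gathering":
        if not mouth_open:
            state = "idle"                       # released too early
        else:
            charge = min(full, charge + charge_rate)
            if charge >= full:
                state = "shooting"
    elif state == "shooting":
        state, charge = "idle", 0.0              # one-frame burst in this sketch
    return state, charge
```

In the real sketch each particle would also carry its own position and home point; this only captures the mode switching that took the longest to get right.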

Catlu – Plot

Final Product:

budha-face-physical-plot

Link to code on Github:

https://github.com/catlu4416/60-212/blob/master/catluBuddhaPlot.pde

For this project, I had two main ideas. The first was a falling leaf pushed around by an invisible wind; the leaf would leave a trail on the drawing and travel from top to bottom, chaos to order. I started coding this idea until I realized it didn't look very interesting, and it gradually became boring. The second idea was inspired by my trip to see family in China this past summer. One day we toured the Mogao Caves, and I distinctly remember the walls full of mini Buddhas that were almost identical yet ever so slightly different. Golan's second prompt reminded me of them, and I decided to let this guide me. I made a program so that with every click, a new, slightly different set of Buddha heads appeared. I went with a very stylized, simple look for the small Buddha heads, wanting to create something with a more ancient feeling. To achieve this, I first tried paper and then ultimately wood veneer, with a laser cutter instead of a plotter, so I could get the distinct mark of burnt wood. Having to produce something physical was different, and made me think both about how the code would look on screen and how it would translate onto a physical object. Overall, I'm happy with my result. I think the laser cutting on the wood veneer was successful, although there could definitely be a few more tweaks and maybe more detail/decoration.
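The per-click variation amounts to re-rolling a small set of parameters for each head in the grid. A sketch of that idea (the parameter names and ranges are invented for illustration; the real code is in the Github link above):

```python
import random

def buddha_heads(rows, cols, seed=None):
    """One click's worth of Buddha heads: a grid of parameter sets that
    are almost identical but ever so slightly different."""
    rng = random.Random(seed)
    heads = []
    for row in range(rows):
        for col in range(cols):
            heads.append({
                "row": row,
                "col": col,
                "face_width": rng.uniform(0.9, 1.1),  # near-identical...
                "eye_tilt": rng.uniform(-0.1, 0.1),   # ...but never the same
                "smile": rng.uniform(0.2, 1.0),
            })
    return heads
```

Each mouse click would call this again with a fresh seed, then the drawing code renders one head per parameter set.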

catlubuddhaplotjpg

I couldn’t get good video of the laser cutter doing my code because the cover on the laser cutter is dark to protect your eyes. The camera brightness never adjusted right, but you can still see a bit.

 

Here’s a picture of the laser cutter in action.

20160930_031425

Here are inspiration and sketch pictures.

mogao-caves

20160929_210843 20160929_210941 20160929_210934


Catlu – LookingOutwards04

Superfeel 2014 at Cinekid by Molmol

superfeel

I was drawn to Superfeel by Molmol because of its fun nature. Superfeel is an interactive stage where people wear devices embedded with sensors that pick up muscle movement and body gesture. These devices then send that information to the mechanical elements of the stage to give the users an interactive experience. By moving and flexing, users can cause gusts of air, wind, fog, and vibration that let them feel and understand their body's movement in a new way; the devices give them new power. The project was commissioned by the Cinekid Festival in Amsterdam in October 2014. What I admire most is the exciting and unique way it gets kids interested in the electronic and interactive arts. It shows children that the electronic arts aren't limited to the screen, or to games and videos. By giving them super powers they must be amazed to have, the project inspires them to think about what is possible in the realm of new media and interactive computer art. The project is perfectly designed to capture the energy and excitement of kids and use it to its advantage, and I really appreciate the thought put into that.

Catlu – Clock-Feedback

I thought the feedback on my clock project was very insightful. It was interesting to see how much opinion differed on my clock, from the concept to the visual execution. Looking back, I agree with the points that the white frame might have been too much, and that perhaps I could have pushed the visuals of the fire a little more abstract, or further. The concept seemed generally well-received by my classmates, while the feeling was more mixed among our professional reviewers, which I found interesting. Overall I enjoyed the project.

Catlu – Interruptions

The artwork is square
There are short lines
More of the lines are vertical or horizontal than at an extreme angle
There’s a certain flow to the piece
There are patches of white, interruptions, in the lines that seem to be random and taper off
The lines are sort of in a grid
The lines form vague polygonal shapes
There’s a white background

PLEASE CLICK IN WHITE SPACE TO GENERATE
catluinterruptions

I had a really hard time with this copy code. I fiddled around a lot with the lines to get them as close as I could to Molnar’s work. I wasn’t able to get the actual gaps in the lines working, so I left them out. For some reason, even though I know how to use arrays, no matter how I wrote them they would just crash P5, and I couldn’t think of any other way to loop through the lines and subtract patches. I really want to figure out why my arrays weren’t working with this code, and I intend to eventually. I think it might have been weird nesting within the loops.
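For what it’s worth, the gap-carving logic doesn’t strictly need nested arrays: a single flat pass over the grid can skip any line that falls inside a randomly placed “interruption” patch. Here is a minimal sketch of that idea in plain JavaScript; all the names and numbers are my own illustration, not the original sketch’s code, and the actual p5.js drawing calls are left as a comment so the logic stands alone.

```javascript
// Sketch of the "interruptions" idea: build a grid of short, mostly
// vertical lines, then carve out circular patches (the gaps) before drawing.
// Names and parameters here are illustrative, not from the original sketch.

function buildLines(cols, rows, gapCenters, gapRadius) {
  const visible = [];
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      // Skip any line whose grid cell falls inside an "interruption" patch.
      const inGap = gapCenters.some(
        (g) => Math.hypot(i - g[0], j - g[1]) < gapRadius
      );
      if (!inGap) {
        // Mostly vertical angle with a small random tilt.
        const angle = Math.PI / 2 + (Math.random() - 0.5) * 0.6;
        visible.push({ col: i, row: j, angle });
      }
    }
  }
  return visible;
}

// In a p5.js draw(), you would then translate() to each line's grid cell
// and draw a short segment rotated by its angle.
const lines = buildLines(56, 56, [[10, 10], [40, 25]], 4);
```

With this shape, regenerating the piece on a mouse click is just a matter of calling `buildLines` again with freshly randomized gap centers.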

Github code:
https://github.com/catlu4416/60-212/blob/master/interruptions

Catlu – Reading03

blues-scale

1a:
Something I like that exhibits effective complexity is improvisational jazz. Jazz was a huge part of my hometown, which is home to a Grammy Award-winning college jazz band. I’d go to jazz events several times a year, and listen to my friends practice their skills and eventually go on to pursue music in college. What I learned from them is that improv jazz is a mix of exactly what the question asks about: chaos and order. Improv jazz is based on the skill, style, and mood of the musician, but it is also usually a variation on a blues scale, or on the main melody or harmony of the song within which the improv solo resides. Some musicians go completely improv, but most rely on some knowledge of the music they are playing, music they know, music standards, and the mood they want to achieve. Despite this underlying foundation, no two musicians sound the same when improvising on the same scale or song, which I think lends jazz its energetic dynamism.

1b:
The Problem of Dynamics
The problem of dynamics is very interesting to me because I have found exceptional beauty both in things that are still and in things that move. There are advantages to each, and often when thinking about projects I find myself thinking about this. A still frame from generative code can be beautiful, as seen in Molnar’s Interruptions, but imagining her piece not as a print but on a screen or projection, constantly morphing, is just as beautiful. While static artifacts are not themselves continuously generative, they come from an algorithm that has the potential to be. Static artifacts give you time to look over them and soak in the details, while changing exhibits grab your attention and sweep you away in motion. Especially in society today, where everything is always moving and hurried, I wonder which suits us more: continually generative art to keep up with us, or static generative artifacts to make us slow down? I think in the end it depends on each specific piece and situation, and neither is better than the other.

Catlu – LookingOutwards03

The Movement of Air: A New Dance Performance Incorporating Interactive Digital Projection from Adrien M & Claire B

Résidence création, Cie AM-CB (Adrien Mondot / Claire Bardainne), Le Mouvement de l’Air, Théâtre de l’Archipel / Scène Nationale, Perpignan, 01/02 October 2015.

movement-of-air

Link to work:
http://www.thisiscolossal.com/2015/11/movement-of-air-dance/

This work, The Movement of Air, I found fascinating. Whenever I’ve thought of computer art or generative art in the past, it’s always been on a screen, or some sort of print. I don’t think of computer art being mixed with performance art. I admire the simple beauty of the piece. The code makes tantalizing images that the dancers/artists react to. Essentially, the computer and code are the dance partners of the people. The dance is ever-changing because of its generative nature, and so the performers must always be prepared. The black background of the room grounds the light projections beautifully, while the small chaos of human and machine working together somehow creates balance. The music is also gorgeous. In a coding sense, I must admit I have no idea how they randomly generate such diverse, changing imagery. I can imagine maybe doing one of the scenes, but all the different scenes together seems incredible to me.

Catlu – AnimatedLoop

ANIMATED GIF OF CODE:

This particular GIF is very slow because I had to optimize it to be able to upload it to WordPress. A live code version is below too.

output_tjmnuu

LIVE EMBEDDED CODE:
catluloopinggif

14466343_1292002557478951_1466845774_o

With this GIF project, I wanted to create something relatively clean and simple to look at. At first, I toyed with the idea of wind blowing cloth, and did a few sketches of the frames of that idea. I thought it would create a nice sense of movement in the confines of the GIF. Later, I remembered a video I watched in my concept studio about origami, and started thinking about that instead. I thought about how folding paper in half always transforms it into itself, and about the circular arc square paper traces as it folds forward. I decided to go with this idea because I thought the simplicity would make it intriguing. As I played with the idea in code, I started out just having the paper fold. Then, curious about its path, I removed the background redraw and found myself liking the trails the paper/box left as it folded each time. The interesting shape it created was almost painterly, and reminded me of some sort of one-eyed cartoon character. Overall I’m happy with the GIF. It was fun to make and pleasing to my eye. One thing I couldn’t do was get the code to start with the trail marks already behind the paper, so the trail only fills in during the first rotation, after which the animation loops endlessly.
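The two ingredients described above are simple to sketch: the fold is a rotation of the paper’s free edge through a half-circle about the crease, and the trails come for free in p5.js if you never call `background()` inside `draw()`, so each frame’s marks stay on the canvas. Here is a small plain-JavaScript helper showing the arc math; the function name and coordinates are my own illustration, not the original sketch’s code.

```javascript
// The folding edge traces a half-circle: as the fold angle goes from
// 0 to PI, the free edge of the paper swings over the crease line.
// Illustrative helper only; not the original sketch's code.

function foldEdge(creaseX, creaseY, paperWidth, angle) {
  // Position of the paper's free edge, rotated `angle` radians
  // about the crease (angle = 0: lying flat open, PI: folded flat over).
  return {
    x: creaseX + paperWidth * Math.cos(angle),
    y: creaseY - paperWidth * Math.sin(angle),
  };
}

// In a p5.js draw(), you would advance `angle` a little each frame,
// draw the paper between the crease and foldEdge(...), and skip the
// background() call so earlier frames remain visible as trails.
const start = foldEdge(200, 300, 100, 0);        // flat: edge at (300, 300)
const folded = foldEdge(200, 300, 100, Math.PI); // folded over: edge near (100, 300)
```

Seeding the canvas with one full silent rotation before recording would also be a way around the empty-trail first loop.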

Github code link:
https://github.com/catlu4416/60-212/blob/master/catluloopinggif.js

Catlu – LookingOutwards02

The thing that may have most inspired me to begin learning to code was the program Meander, which was used in Disney’s short film Paperman. I have always loved animation, but before I saw Paperman, there was always such a clear distinction for me between 2D and 3D. 2D had distinct stylistic advantages, while 3D could achieve gorgeous and realistic effects. Although the new wave of 3D animation was exciting, a bit of nostalgic longing for the old 2D movies always stayed with me. Hand-drawn lines have an appeal and stylish nature that I’ve never seen really matched in a 3D film. Nuances aside, most 3D films give off the same sort of feeling. With Paperman, it was different. I was amazed. Even if the style was classic Disney, the feeling the animation gave off, the atmosphere of the lighting and lines, was unique. I looked up the short later and found that the effect was achieved using a program called Meander, made in Disney’s R&D department. Meander allowed the short to be animated in 3D while 2D lines were hand-drawn on top, lines that would morph, contour, and follow the curves of the 3D spaces and characters. They had essentially given a 3D movie a further 2D appeal. It was because of Paperman that I realized the possibilities that computer science brought to the field of animation, and that by learning computer science, I could open up those possibilities for myself.

Catlu – Clock

catluclock

For this project I went through several ideas. First I thought about human biology and considered clocks measured in heartbeats and breaths. Later, I thought about making a clock based on the average 28-day menstrual cycle. Slowly, I moved away from this biological theme as I shifted to a more personal view of time. The clock I chose to make in the end was based on emotions I was feeling while brainstorming: feelings of a lack of time. Often I feel that there is much to do in the day, but the time gets away from me and just burns away, and I end up getting close to nothing done. This clock represents the feeling of watching your blank day being eaten up before you can make a memorable mark on it, leaving only the nothingness underneath. I hope it is a reminder that time is precious, and a bit of a fright when you check back and see the fire has consumed more than you thought.

Making the clock involved a lot of research. Initially, I had no idea how I would make the fire. After looking around the p5.js website and trying a few methods, including particles and shapes, I decided that Perlin noise would be the best way to handle it. I watched all of Dan Shiffman’s videos on Perlin noise, and also found some useful reference code at http://p5js.org/reference/#/p5/noise. The clock is on a 24-hour schedule meant to symbolize the passing of the day. Time can be estimated by the amount of un-burned paper left within the white frame, the frame of time.
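The 24-hour mapping itself can be sketched very compactly: the un-burned fraction of the frame shrinks linearly from 1 at midnight to 0 at the next midnight, and Perlin noise only perturbs where that burn line is drawn. The helper below is my own reconstruction of that idea in plain JavaScript, not the sketch’s actual code.

```javascript
// Sketch of the clock's time mapping (names are mine, not the original's):
// the fraction of paper still un-burned shrinks linearly over a 24-hour day.

function unburnedFraction(hours, minutes, seconds) {
  const elapsed = hours * 3600 + minutes * 60 + seconds; // seconds since midnight
  const DAY = 24 * 3600;
  return 1 - elapsed / DAY; // 1.0 at midnight, 0.0 just before the next midnight
}

// In a p5.js draw(), the burn line's baseline height would then be
// height * unburnedFraction(hour(), minute(), second()),
// with noise() jittering the edge each frame so it flickers like fire.
```

Because the mapping is pure arithmetic on the current time, the clock stays accurate no matter how long the sketch has been running.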

For some reason my pictures refused to upload in the correct orientation. Please click on them to see them correctly. Thanks!

20160916_030503

20160916_030624

20160916_030642

Catlu – LookingOutwards01

Rajat Bhatnagar’s Website:

About

Rajat Bhatnagar is an interactive and sound artist based in New York. Originally from the San Francisco Bay Area, he became interested in sound at a young age, listening to weird late night programs on the 94.1 KPFA radio station. Eventually he got his BA from U.C. Berkeley and an MS from the University of Pennsylvania.
Many of Bhatnagar’s projects caught my interest, and his lecture in particular, because I have played the clarinet for eight years now and have come to really appreciate the amazing ability to make sound. I’ve always thought that sound, or the lack of it, adds crucial points to any experience, and that incorporating sound into art very much enhances it. In one commissioned project, Bhatnagar set up a light sensor to capture and interpret sound from the wavering smoke of burning incense. The sound produced was calming and in perfect harmony with the imagery of the burning incense, creating an intensely quiet, compact, meditative environment. I am interested in environments and their ability to provoke certain reactions in people. Another of Bhatnagar’s projects involved the experience of creating an instrument, and the relationship of that process to the sound it produced. Every day each February, year after year, he would make a new, small handheld instrument. Many of these were surprisingly successful, and Bhatnagar found hundreds of unique ways to produce sound from his environment over the course of almost ten years. I admire his dedication to fully exploring sound in every way, and his willingness to learn new techniques such as 3D printing and laser cutting in order to further that exploration.
Overall his lecture was very interesting, more about relaying his experiences with sound art than anything else. The important message he left behind was to really experiment: to push, and not be afraid of producing something that sounds bad. As a speaker, he could be smoother, but what shone through in the end was his sense of fun and exploration. You could hear the excitement in his voice as he talked about his projects, and that in itself was wonderful.

Catlu – FirstWordLastWord

Rather than compare first word art and last word art, I think it’s more important to acknowledge them both and how they impact the direction of art as a whole. Groundbreaking ideas and technology are ever present in art, yet they themselves cannot sustain it. The first word is the spark of a movement, yet if it alone is seen as valuable, with no future endeavors of its kind in art or culture, it cannot truly be said to have had an influence. A great leap forward has never been contained to one person or one piece of artwork. It has always been sparked by novelty and inspiration, and perpetuated by further exploration and refinement. Indeed, it’s the last word, if there can be a last word, that leads many to seek a new first word. Without refinement, a first is just that. Without a first, a last will never be reached. It’s through the combination of the two that great and lasting change is made. When the novelty has worn off, what mark has been left underneath? What truly matters is the impact after the excitement.