kramser – Final Project – Speed/Scroll/Stutter

Until this semester, my experience with code consisted mostly of changing the background color of the Xanga account I had in 7th grade. I think I’d always put off learning how to code because it kind of scared me. I thought–like many people, I suppose–that coding takes some kind of innate skill or intelligence that I didn’t have, when actually all it takes is someone to force me to do it.

My background is in video production and live video mixing, so for the final project I wanted to make a simple tool that I could use to manipulate video on the fly.

The video above is a screen capture of my software in action. A row of 30 image sequences, each one second long, can be manipulated in real time using sliders that control frame rate, left/right motion, and loop time. All the image sequences in my sketch are excerpted from original footage that I shot using a multi-camera system and mixed live with an analog mixer. The performer is a performance artist, close friend, and CMU alum named Audra Wist.

One of my favorite video pieces–and one which definitely influenced my project–is Modell 5 by Granular Synthesis. I find it incredibly hypnotic, despite being jarring and frenetic, even abrasive. I love the piece because I think it’s a great study of motion and time at the micro scale–how meaning can be constructed, deconstructed, and reinterpreted by repetition, fragmentation, and changes in speed.

I think the big thing missing from my sketch–and the thing that really elevates Modell 5–is a soundtrack. I didn’t really realize how much the sketch needed audio until I had finished it, since I always listen to hype music while coding. Audio has always kind of been an afterthought for me in my video work because most of the visual content I make is created to work as live visuals for musicians. I think that this project would have been a great opportunity to consider how to combine audio and video and intensify the effect of both.

The other big weakness of my project is the way that I coded it. I figured out how to create a loop that would load each frame of video, but not how to create a loop that would rename the variables holding the frames of each image sequence. So my code looks insane and took a lot of copying and pasting to complete. I’m sure there’s a way to do this that would require about 150 fewer lines of code, but I couldn’t figure it out, even after going to office hours. But it works. Most of the time. Sometimes you have to refresh two or three times to get it to open in the browser.
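In hindsight, I think the fix is nested arrays instead of renamed variables. Here’s a rough sketch of the idea (not my actual code–the filename pattern and frame count are made-up placeholders, and in the real sketch each name would go through p5.js’s loadImage()):

```javascript
// Instead of one variable per frame (frame0_0, frame0_1, ...), build a
// 2D array: sequences[s][f] holds frame f of sequence s.
const NUM_SEQUENCES = 30;  // one row of 30 one-second clips
const FRAMES_PER_SEQ = 15; // assumed frame count per clip

function frameName(seq, frame) {
  // zero-pad the frame number: 7 -> "07"
  const f = String(frame).padStart(2, "0");
  return `frames/seq${seq}/frame${f}.png`; // assumed naming scheme
}

function buildFrameGrid() {
  const sequences = [];
  for (let s = 0; s < NUM_SEQUENCES; s++) {
    const frames = [];
    for (let f = 0; f < FRAMES_PER_SEQ; f++) {
      // in a p5.js sketch this would be loadImage(frameName(s, f))
      frames.push(frameName(s, f));
    }
    sequences.push(frames);
  }
  return sequences;
}
```

Then sequence 3, frame 7 is just `sequences[3][7]`–no copy-pasted variable names, and about 150 lines collapse into two loops.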

Overall, I’m happy with my final project and super happy that I finally learned to do this thing that I’ve avoided for so long. I feel like it’s changed the way I think about problems and how to solve them, which is really empowering. I felt confident enough in the skills I’ve learned this semester to code a final project for another class in Unity using C#, and was even able to help a friend debug a project of their own. I’m really excited to keep building on these skills and apply them in my work.



To be fair I didn’t really follow the instructions for the assignment because I didn’t study a curve and use a formula to replicate it. Instead I played around with a recursion example we looked at in class to see what kind of forms I could generate from animating nested ellipses. I think the result is pretty–I especially like the way that the layers of transparency distort the colors to create what look like afterimages–though the amount of user interaction is minimal.
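For anyone curious, the basic shape of the recursion looks something like this (a simplified stand-in for the class example, with made-up offsets and shrink factors–instead of drawing, it just collects each ellipse’s parameters):

```javascript
// Each level nests a smaller, slightly offset ellipse inside the last;
// in p5.js each entry would become an ellipse() call with transparency,
// which is where the afterimage-like layering comes from.
function nestedEllipses(x, y, w, h, depth, out = []) {
  if (depth <= 0 || w < 1 || h < 1) return out; // stop when too deep or too small
  out.push({ x, y, w, h, depth });
  // shrink by 80% and drift right a little (arbitrary illustrative values)
  nestedEllipses(x + w * 0.05, y, w * 0.8, h * 0.8, depth - 1, out);
  return out;
}
```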


I chose A Cable Plays (2008) by Chris Sugrue because while browsing through the list of artists I saw that she had been involved in Eyewriter–a project I love–and I wanted to see what else she had done.

Sugrue creates interactive installations, audio-visual performances, and experimental interfaces. She studied at Parsons, where she received her MFA in Design and Technology. She has been an Eyebeam Fellow and a creative engineer at the Ars Electronica Futurelab, and has taught at the Parsons School of Design and the Kunstuniversität in Linz, Austria.

In A Cable Plays, developed in collaboration with Damian Stewart, two performers sit across from one another and use pins and yarn to draw shapes on a game board. A video camera captures a bird’s eye view of the board and augments the resulting composition with animations that respond in real time to new formations.

I really like how the artists mimicked the grainy film look of the video in their animations. It helps to merge the documented and augmented images, which initially confused me because I couldn’t quite tell what the camera was capturing and what was being added over the camera image. I also loved the fluidity and responsiveness of the animations, and thought that this was a clever way to explore and activate negative space.



I decided that I wanted to use emojis for this project because there are so many beautiful and expressive little ready-made characters that I hadn’t seen anyone in the class use. After perusing my phone to find a couple that I liked, I decided to make a sketch depicting a kitten-killing monster’s litter box.

I think what I ended up with is fine. It doesn’t have nearly all the functionality I had planned for, but believe it or not it took me a really long time. I spent sooo long trying to figure out why the little turds weren’t showing up. For a while I couldn’t push them into an array at all; then they were being pushed into the array but not displaying properly, and so on. I still don’t know why exactly it started working. It seems equally likely that it was either a bug or me being tired and careless.

In any event, I’m bummed that I didn’t have the energy to add more of the features I wanted. I would have liked to have allowed the user to add a bunch of cats at once–rather than one at a time–and have the monster chase after them based on either the order in which they appeared or whichever was closest. I also wanted to have the turds go away and make a little sparkle emoji fly up off the screen when the user put their mouse over them. Unfortunately I’m a bit rusty with object array programming and just ran out of time and energy.
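For future reference, the object-array pattern I was rusty on looks roughly like this (coordinates, radius, and function names are all invented for illustration):

```javascript
// Turds live in an array of plain objects; mousing over one removes it
// and spawns a sparkle rising from the same spot.
function makeTurd(x, y) {
  return { x, y, r: 10 }; // r = hover radius (arbitrary)
}

function hoverIndex(turds, mx, my) {
  // index of the first turd under the mouse, or -1 if none
  return turds.findIndex(
    (t) => (mx - t.x) ** 2 + (my - t.y) ** 2 <= t.r ** 2
  );
}

function cleanAt(turds, mx, my, sparkles) {
  const i = hoverIndex(turds, mx, my);
  if (i !== -1) {
    // swap the turd for a sparkle drifting upward (negative y velocity)
    sparkles.push({ x: turds[i].x, y: turds[i].y, vy: -2 });
    turds.splice(i, 1);
  }
}
```

In a p5.js sketch, `cleanAt(turds, mouseX, mouseY, sparkles)` would run each frame, and the draw loop would just iterate over both arrays.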



I started with the Perlin noise generators we worked with on the flag assignment, but everything I tried with them looked flat and boring. I thought that instead I would make an abstract design that was based on the Perlin noise landscapes, but which had more fluidity and dimension to it.

I used thin lines to trace the paths of the landscape across the screen, with one landscape moving on the y-axis and the other on the x-axis. The result is moiré patterns that look like flowing sheets of fabric. I added some slight random variation to the color of the lines to lend some freneticism to what is otherwise a very smooth and slow animation.
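The core of the line-tracing idea, very roughly (this plain-JavaScript stand-in fakes p5.js’s noise() with a simple smoothed hash, and all the step sizes are arbitrary):

```javascript
// Deterministic pseudo-random value in [0, 1) for integer i
function hashNoise(i) {
  const x = Math.sin(i * 127.1) * 43758.5453;
  return x - Math.floor(x);
}

// Smooth 1D value noise: ease between the values at the two nearest integers
function smoothNoise(t) {
  const i = Math.floor(t);
  const f = t - i;
  const u = f * f * (3 - 2 * f); // smoothstep easing
  return hashNoise(i) * (1 - u) + hashNoise(i + 1) * u;
}

// Trace one landscape line: a y offset at each x step, scrolled by `phase`.
// Drawing many of these with slightly different phases is what produces
// the moiré / fabric effect.
function tracePath(steps, phase, amp = 100) {
  const pts = [];
  for (let x = 0; x < steps; x++) {
    pts.push({ x, y: smoothNoise(x * 0.05 + phase) * amp });
  }
  return pts;
}
```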

kramser – Looking Outwards 11

While formulating an idea for my final project, I looked up a lot of work that used generative and/or audio-reactive visuals. One of the first people I came across after looking through some of the links on the course website was Tina Frank. I loved her piece Vertical Cinema, colterrain (2013) but realized I mostly admired it because of the way it was exhibited–in a gorgeous cathedral on a tall thin screen.

I started looking instead for web-based work and came across the Echo Nest Remix API, which hosts a bunch of projects that use JavaScript and Python for audio-visual mashups. Below are a couple of the projects I liked and thought I could feasibly replicate aspects of for my final project.

Check out a browser app version of this project here.

I like these projects because they use jarring cuts to create a sense of freneticism and sensory overload. I’d like my final project to have a similar kind of intensity.


kramser – Final Project Proposal

For my final project I want to write a program that generates audio/video mashups by using audio analysis to automate video cuts. The program could use changes in volume, levels, and/or waveform to trigger forward/backward playback, frame jumps, and loop points. I’d like for the program to be able to accept either live audio input or a pre-recorded track. If possible, I’d also like to give the user the ability to adjust the sensitivity of these functions using keypresses. Finally, it would be great to have some kind of visual readout of where the playhead is in the video.
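A first, very rough sketch of the trigger logic (all names and thresholds here are placeholders–in the real thing the level samples would come from live audio analysis):

```javascript
// Scan an amplitude envelope and emit a cut wherever the level crosses
// the threshold upward (quiet -> loud), i.e. on a rising edge.
function findCuts(levels, threshold) {
  const cuts = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i - 1] < threshold && levels[i] >= threshold) {
      cuts.push(i);
    }
  }
  return cuts;
}

// Sensitivity adjustment via keypress would just nudge the threshold,
// clamped to the 0..1 range.
function adjustThreshold(threshold, delta) {
  return Math.min(1, Math.max(0, threshold + delta));
}
```

Each index in `cuts` could then map to a frame jump, loop point, or playback reversal.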

I’ve done a lot of live video mixing, but never using my own custom software, so I think that this would be a good opportunity for me to begin to develop something that I could use to automate my mixing in a way I haven’t been able to achieve using VJ software. I’m not sure if all of the functionality I’ve outlined is too ambitious for the time allotted, so I’d like to meet with a TA after the break to figure out what’s feasible and where to begin.

Here’s a video still with the bar at the bottom indicating the position of the playhead in the video.




For my composition I elaborated on my computational portrait to make an animated and interactive composition using an array of turtles to reveal the colors of an image. I used a very thin line weight so that the image takes minutes to appear as the turtles’ angles change.

The spawn point for the turtles is tied to the user’s mouse. The image will gradually appear whether or not the user draws with the mouse. The still at the top of the post shows what the image looks like if the user uses their mouse to draw all over the image. The still below is an example of what results if the mouse is completely stationary for about 5 minutes.


And below is the finished sketch.



I chose a post by Jo McAllister from Week 3 about a structure created by ICD/ITKE. A transparent shell is threaded with black fibers unspooled by a robotic arm, which moves according to algorithms derived from analyses of water spider webs. These spiders suspend themselves underwater in air bubbles to make their webs.

It seemed that Jo wasn’t terribly impressed with this project because it was based on human analysis and modeling of an actual web, rather than being created from scratch by a computer program, but I think it’s pretty amazing nonetheless.

I like the idea of scaling up naturally-occurring micro structures using computational analysis. I wonder if these sorts of architectural projects might encourage greater appreciation not only for organic forms but also for more organic and eco-friendly construction methods.




I embedded the image above because my sketch isn’t interactive and it kept crashing the page. When I was creating this image it took about 4 seconds for Chrome to compute and display it, but when I tried to embed and preview my sketch in this post, the page froze. I assume it’s because my sketch does about 225,000 turtle calculations in setup, which is probably dumb. But it ran fine while I was editing, so I assumed it would work here too.

In any case…

Aside from my code being clunky, I’m really happy with how this sketch turned out. The image I chose is a photo that my photographer friend took of me over the summer when I sat for him in his studio. He caught me making a really ugly, dead-faced expression, so I took the photo and re-touched it to make myself look even grosser and more discolored than I did in the original. I chose it for this assignment because I think it’s a really beautiful photo, even though I look like I’m deathly ill, and I liked the idea of taking a “bad” photo of myself and re-manipulating it to make it beautiful.

I realized after beginning to work with the code that the exaggerated skin discoloration in the photo really helped define the features of the face, which allowed me to abstract the image quite a bit while still keeping the basic features of the face legible. Here are a few of my earlier attempts:

[Images of earlier attempts: kramser_05, kramser_07, kramser_15, kramser_09, kramser_25, kramser_03, kramser_27]

I used Roger’s example code for turtles to create 16 turtles that move more or less randomly across the canvas, drawing pixels as they go. I played around with different line weights, lengths, and densities to achieve different amounts of detail and abstraction. I settled on the version at the top because I liked the dimensionality I created by varying the weights of the turtles.
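The basic turtle loop, roughly (a simplified stand-in rather than Roger’s actual code, with made-up step and wobble values–the real version samples the image’s color at each position and draws a weighted stroke):

```javascript
// Each turtle keeps a position and heading, turns by a small random
// amount each step, and records the pixel it lands on.
function makeTurtle(x, y) {
  return { x, y, angle: Math.random() * Math.PI * 2, trail: [] };
}

function stepTurtle(t, stepLen, wobble, w, h) {
  t.angle += (Math.random() - 0.5) * wobble; // random-ish turn
  // advance, clamped to the canvas bounds
  t.x = Math.min(w, Math.max(0, t.x + Math.cos(t.angle) * stepLen));
  t.y = Math.min(h, Math.max(0, t.y + Math.sin(t.angle) * stepLen));
  // in the sketch, this is where the turtle would sample the image's
  // color at (x, y) and draw a short stroke with its line weight
  t.trail.push({ x: t.x, y: t.y });
}

// 16 turtles starting at the canvas center (480x480 assumed)
const turtles = [];
for (let i = 0; i < 16; i++) turtles.push(makeTurtle(240, 240));
for (let step = 0; step < 100; step++) {
  for (const t of turtles) stepTurtle(t, 3, 0.8, 480, 480);
}
```

Varying `stepLen`, `wobble`, and each turtle’s line weight is what controls the amount of detail versus abstraction.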

Here are a few different versions of the code I settled on with some parameters changed:

[Images of variations: kramser_22, kramser_21, kramser_23]

I think if I had more time I would–first of all–figure out how to optimize my code so that I could post it to this page without crashing Chrome, and then add some interactive functions so that you could press a key to swap between different draw modes.

Here are screen caps of my code since I couldn’t embed it: kramser-08-code-1, kramser-08-code-2