For this project, I really wanted keys to correspond to body parts so that typing creates “creatures.” I had a lot of fun drawing everything out in Illustrator, but I had a hard time keeping everything from overlapping. The scale and the way the creature would be presented were also problems at times. However, I am quite happy with how it looks in the end. The code doesn’t run here because all the images are local.
I created a creature that interacts with the user: it grows when fed and speaks using the user’s information, which thus far only includes first name and age. I had originally wanted it to engage with user information that did not have to be entered manually, and I have also included the last email I sent. The creature makes noise, originally intended to be connected to its y position, and as she is fed, her dialogue changes. I wanted to play with narrative and ghosts of oneself / the abstraction of one’s digital self using the creature I started earlier; this is a preliminary phase of what the project will eventually become.
This is an animation based on the K-T extinction. It depicts a dinosaur called a Conchoraptor running through the desert looking for food or water. Clouds are raining down molten glass and there are dead Tyrannosaurs lying everywhere. I wanted to emulate a scene from the movie Fantasia where dinosaurs lurch aimlessly towards death. For the landscape, I made a really neat effect by using angled triangles. It’s pretty busy in terms of objects, so the frame rate is probably terrible.
For my final project, I created an interactive environment that responds to scrolling. I paired my own art and collage elements with Pixies lyrics to create a disorienting, surreal experience.
Skills that I used in this project include creating objects, manipulating images, storing information in arrays, and enabling interactivity via the mouseWheel (scrolling) function.
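The scrolling interactivity can be sketched as a simple offset accumulator. This is only an illustration of the idea, not the project’s actual code: in p5.js, mouseWheel(event) receives event.delta, and all the names and numbers below (scrollOffset, maxOffset, the 3000-pixel scene height) are invented for the example.

```javascript
let scrollOffset = 0;    // how far the scene has scrolled so far
const maxOffset = 3000;  // assumed total scrollable height of the scene

// in p5.js this logic would live inside mouseWheel(event), using
// event.delta; here it is a plain function so it can run anywhere
function handleWheel(delta) {
  // accumulate the wheel delta, clamped to the scene's bounds
  scrollOffset = Math.min(maxOffset, Math.max(0, scrollOffset + delta));
  return scrollOffset;
}

// each scene element is drawn at its world y minus the offset,
// so scrolling moves everything upward together
function screenY(worldY) {
  return worldY - scrollOffset;
}

handleWheel(120);
console.log(scrollOffset); // 120
```

Clamping the offset keeps the user from scrolling past either end of the collage.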
In order to view my project, please refer to the zipped file on Autolab. It contains the index, the sketch, and a folder called “assets” with all of my images. Once you have that, run a local server so that the images are able to load.
Here’s a video capture of it in action:
It seems a bit jerky, but that’s just because of the way my trackpad lets me scroll. With an actual mouse, it would probably be better!
Due to the multiple files, the project is hosted on my website HERE. Code has been submitted to Autolab.
For my final project I created an info vis / digital memorial to migrants who went missing while traveling across the Mediterranean. The data came from the Missing Migrants Project, which tracks and locates migration around the world.
This project turned out to be more ambitious than I had expected, both technically and conceptually. After looking at several data vis projects, I wanted to try something that was a bit less “data vis.” I used the idea of being “lost at sea” as an experience by having the user scroll around to find the flower memorials floating in the space. When the mouse is over a flower, a map and information appear that give you details about the incident that occurred in that location. I am still unsure how I feel about the overall experience, but I am interested in further pursuing alternative ways of showing information.
While coding, I came across multiple technical issues that I had to resolve. I created several functions and objects for this project and had to deal with placement and mapping things across the screen. It felt great to put together different things I learned throughout the semester and to be comfortable enough with p5.js to create something a little more elaborate. I would be interested in continuing to work with programming to see how to further integrate it into my practice.
In this project, the user touches the lines through the webcam to play the harp. The user knows that he or she touched a certain line because it not only bounces but also produces colorful particles. The way it works is that the program subtracts each pixel of the current frame from the corresponding pixel of the previous frame, and if the absolute value of that difference is greater than 100, the sound plays. To prevent the program from slowing down, we remove particles once they leave the canvas. Originally we created a virtual piano, but we realized that a string sound would be more interesting, so we recorded the harp sounds using an electronic piano. (This project does not work on Chrome but seems to work fine on Edge.)
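The two performance-critical ideas here, frame differencing and off-canvas particle culling, can be sketched in a few lines. The threshold of 100 comes from the write-up, but the function names, the greyscale-array representation, and the particle fields are assumptions for illustration:

```javascript
const THRESHOLD = 100; // per-pixel change needed to count as motion

// returns true if any pixel changed by more than THRESHOLD between
// two frames, given as flat arrays of 0-255 greyscale values
function motionDetected(prevFrame, currFrame) {
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > THRESHOLD) {
      return true; // enough movement here to pluck the string
    }
  }
  return false;
}

// drop particles that have drifted off the canvas so the particle
// array (and the frame rate) doesn't grow without bound
function cullParticles(particles, width, height) {
  return particles.filter(p =>
    p.x >= 0 && p.x <= width && p.y >= 0 && p.y <= height);
}

console.log(motionDetected([20, 30], [25, 35]));  // small jitter: false
console.log(motionDetected([20, 30], [25, 200])); // big change: true
```

In the real sketch the frames would come from the webcam’s pixel array each draw() call rather than from hand-written arrays.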
To those who are crying over our hard-coding, we are sorry for making your eyes hurt. We spent the first week figuring out how to detect motion, and the second week adding particles and figuring out how to keep the sound from retriggering too frequently. By the time we figured this out, we didn’t have enough time to clean up our code.
My project is an interactive data visualization that illustrates the number of unsheltered homeless in the Seattle area. It shows how the number of unsheltered homeless people in Seattle has changed over time and how it has increased dramatically in recent years. Every year there is a “one night count” in Seattle and surrounding areas where all of the unsheltered homeless are counted. The Seattle/King County Coalition on Homelessness publishes a summary of the data collected which keeps track of the number of people found in different places throughout the city. This is the data I used to create this project.
In my project, each scene shows a number of images which each represent a place where unsheltered homeless people are living. Each image is labeled with the number of homeless people living there. As the directions instruct, you can use your keyboard to change the scene to a different year. The dimensions of each image change for each year in accordance with the number of homeless people found living in each location that year. This project illustrates data from 2006 through 2015 so that we can see how the number and distribution of unsheltered homeless has changed over time.
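Scaling each image by that year’s count is essentially a linear mapping in the spirit of p5.js’s map(). The following is a hypothetical sketch: the 20–200 px range, the function names, and the numbers are all invented, not taken from the actual project.

```javascript
// linear interpolation from one range to another, like p5.js map()
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

// scale a location's image between 20 and 200 px based on how many
// people were counted there that year (ranges are illustrative)
function imageSize(count, maxCount) {
  return mapRange(count, 0, maxCount, 20, 200);
}

console.log(imageSize(0, 1000));    // 20  (smallest image)
console.log(imageSize(1000, 1000)); // 200 (largest image)
```

Switching years then just means re-running this mapping against that year’s counts before drawing.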
As a Seattle native, the visibly large increases in homelessness are troubling to me. But I think that this issue is being largely overlooked by people in the greater Seattle area, specifically those from the more affluent cities surrounding Seattle who have more resources with which to try to stop this homeless epidemic. By creating a visual representation of the number of people living in the different unsheltered areas throughout the city, I hope to help others understand how big of an issue this is. I think that my project is both informative and visually stimulating. I think that it does a good job of showing the increase in homelessness, and I am proud of the time and effort that I put into it. If this were a longer-term project I would add more detail to the images and the overall scene, but what I have done so far meets my goal of communicating an important message.
A number of different projects inspired me and helped me to come up with my project idea. The first is Nathan Yau’s project on FlowingData called “Years You Have Left to Live, Probably” which is an interactive graph that shows the user the range of statistical possibilities for how much longer he/she will live. The second is a visual simulation of traffic patterns done by Lewis Lehe called “Gridlock vs. Bottlenecks” which uses graphics to simulate the processes that cause traffic congestion. The third, called “Dencity,” is a data visualization of population density across the world done by 3rd Floor at Fathom. I was inspired by how these projects use simple visual elements to display data in an interesting and informative way. I admire how these projects help people to understand the data that surrounds them in their everyday lives. With my project I wanted to use visual elements based on data to “paint a picture” for people that would help them to understand the issue of homelessness that is surrounding their daily lives. And I think I have done this.
Note: The canvas width is too large to display fully here, but the video above shows the canvas as it should appear.
For my final project, I chose to make an interactive piece where the participant can write something into the project and progressively reveal the image captured through the text. I knew I wanted to utilize the camera in some way, and I wanted my final project to incorporate text, so I thought the best way to do this would be to create essentially a blank document that could be written on to reveal an image.
My basic inspiration for this project was old-style text-based games, where the player would just be looking at a screen covered in letters and simply have to imagine the world they were playing in. In this way, I wanted the experience to be primarily text-based.
To begin, I used the video code we had written for the Text Rain assignment, and then I incorporated the posterizing effects we were asked to make during the last exam.
After implementing this, I used an array and a for() loop to create a large display that the writer can push new letters onto as keys are pressed. Using the posterize code, I made the letters change color depending on the greyscale average of the region being written over.
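The colour step can be sketched as: average the greyscale of the region under a letter, then snap that average to a few posterized levels. The region contents, the level count, and the function names below are illustrative assumptions rather than the project’s actual code:

```javascript
// average brightness of a region, given as a flat array of 0-255
// greyscale values sampled from under the letter
function regionAverage(pixels) {
  const sum = pixels.reduce((a, b) => a + b, 0);
  return sum / pixels.length;
}

// posterize: snap a 0-255 value to one of `levels` evenly spaced values
function posterize(value, levels) {
  const step = 255 / (levels - 1);
  return Math.round(Math.round(value / step) * step);
}

const avg = regionAverage([10, 200, 90, 60]); // average is 90
console.log(posterize(avg, 4)); // 85: nearest of 0, 85, 170, 255
```

Each typed letter would then be filled with the posterized value (or a colour keyed to it), so the text gradually reproduces the camera image.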
This was my basic outline going into the creation of my final product, and it led me to the project as you see it today. I wanted the individual to write about themselves, and as they constructed a more detailed bio, they would also build up a more detailed portrait of themselves. In this way, my project was designed to truly incorporate the individual into an experience.
The program proved too graphically demanding to upload to WordPress, so I uploaded it to Autolab and used this video to better illustrate the project’s capabilities. To give the writer more control, I included the ability to delete text and to press enter to skip a line. I did this partly to compensate for the fact that it takes a little while to actually fill the screen with letters. This is probably what most frustrates me about the project: that it is so hard to fill the screen with text. It is something I would want to tackle if I revisit this work. Also, the project seems to run much more smoothly on Firefox than on Internet Explorer… All in all, this project was an immense amount of fun, and I loved the freedom it provided me in creating something really representative of my interests!
My project is a generative art piece where every letter of the alphabet produces a different kind of artistic command. Thus, by putting letters together to create words and phrases, an art piece is created that is a visual metaphor for the words that have been typed.
For this project, I chose to use what we learned about turtles to draw the image, because I felt that using a single turtle to draw everything would create continuity.
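The letter-to-command idea can be sketched as a lookup table dispatching onto a toy turtle. All of the mappings below are invented; the actual project assigns its own command to each of the 26 letters, and a real p5.js turtle would also draw a line as it moves:

```javascript
const turtle = { x: 0, y: 0, angle: 0 }; // angle in degrees, 0 = +x axis

// move the turtle forward along its current heading
function forward(dist) {
  const rad = turtle.angle * Math.PI / 180;
  turtle.x += dist * Math.cos(rad);
  turtle.y += dist * Math.sin(rad);
}

// each letter triggers one drawing command (mappings are examples only)
const commands = {
  f: () => forward(10),             // step forward
  l: () => { turtle.angle -= 90; }, // turn left
  r: () => { turtle.angle += 90; }, // turn right
};

// interpret each typed letter as its drawing command
function run(word) {
  for (const ch of word) {
    if (commands[ch]) commands[ch]();
  }
}

run("frf"); // forward, turn right, forward: ends near (10, 10)
console.log(turtle.x, turtle.y, turtle.angle);
```

Because every letter reuses the same turtle, each new word continues the path from wherever the last one ended, which is the continuity described above.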
I’m pretty happy with how it turned out with the time that I had. I think if I were to improve on this further, I’d like to implement a way for the user to “backspace” and delete previously typed letters.
Move the UFO with your mouse and click the mouse to abduct cars!
For our final project, we created a generative landscape with a slight twist: an alien (you, the user) shows up and abducts all the cars, because you have the power of mouseX, mouseY, and mouseIsPressed at your fingertips! The most difficult part was getting the car to fade away (at first the car simply popped off the screen, but since we had more time we decided to make it disappear slowly so it actually feels like the UFO is abducting it). Moving the bridge was a lot more difficult than it should have been, because we first constructed a class of objects but had trouble with the code. If we had worked on it more, I think we probably would have made it more like a game and added sound effects. This project is very different from what Lexi and I planned to do initially, but we’re really happy with the results!
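The fade-away effect boils down to decrementing an abducted car’s alpha each frame until it reaches zero and can be removed. This is only a sketch of that idea; the field names and the fade rate of 5 per frame are assumptions, not the project’s actual values:

```javascript
// a car starts fully opaque and not yet grabbed by the UFO
function makeCar(x) {
  return { x, alpha: 255, abducted: false };
}

// called once per frame; returns true once the car has fully vanished
// and can be dropped from the array of cars
function updateCar(car) {
  if (car.abducted && car.alpha > 0) {
    car.alpha = Math.max(0, car.alpha - 5); // fade a little each frame
  }
  return car.alpha === 0;
}

const car = makeCar(100);
car.abducted = true; // the UFO grabs it (mouse press in the real sketch)
let frames = 0;
let done = false;
while (!done) { done = updateCar(car); frames++; }
console.log(frames); // 255 / 5 = 51 frames to fully disappear
```

In the real sketch the alpha would be passed to the car’s fill() each frame, so the car visibly dissolves under the UFO instead of popping away.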