
Kasem Kydd Final Project

For my Final Project I created a Twitter bot that I've been working on for some time, building up a library of tweets and adding functionality for it to interact with other users on Twitter.
The bot is called Black Excellence Bot, but its Twitter handle is @BlackLoveBot. I wanted to create something that was a bot but actually contributed something meaningful online that was relevant to my interests. I consider myself an artist who works with political and social issues, especially those pertaining to race and different intersectionalities. I wanted to make a bot that created a loving atmosphere for people of color, specifically black people, because I think the online sphere sometimes creates an environment where people feel free to exhibit the most racist and disgusting intentions, whether in an attempt to "troll" others or to truly display their real mindset. I wanted to bring in what I work on in my art: black liberation, celebration, and my tendency to actively address destructive systems that continue to oppress our people.

The bot tweets different figures that I associate with black excellence along with the hashtag #blackexcellence. It also tweets ideas and moments in history that are relevant to my idea of black excellence. The bot has been more or less functional for some time, but I have continuously been curating the library and adding more functionality, such as responding to followers and replying to @ mentions. This is my first time using Twitter, so it was a slightly strange process for me, but I honestly think I learned a lot from this small project.
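As a rough illustration of how a bot like this can post from a curated library and reply to @ mentions, here is a minimal sketch in Python with the tweepy library (this is not the bot's actual code; the credentials, library entries, reply text, and timing are all placeholders):

```python
import random
import time

import tweepy  # assumes the tweepy library; any Twitter API client would work

# Placeholder credentials from a registered Twitter app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# A tiny stand-in for the curated library of tweets.
LIBRARY = [
    "Celebrating a figure of black excellence ... #blackexcellence",
    "A moment in history worth remembering ... #blackexcellence",
]

def tweet_from_library():
    """Post a random entry from the curated library."""
    api.update_status(random.choice(LIBRARY))

def reply_to_mentions(since_id=None):
    """Reply to anyone who @-mentions the bot."""
    for mention in api.mentions_timeline(since_id=since_id):
        api.update_status(
            "@{} Thank you for the love! #blackexcellence".format(mention.user.screen_name),
            in_reply_to_status_id=mention.id,
        )
        since_id = max(since_id or 0, mention.id)
    return since_id

if __name__ == "__main__":
    since = None
    while True:
        tweet_from_library()
        since = reply_to_mentions(since)
        time.sleep(60 * 60)  # post roughly once an hour (placeholder interval)
```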
You can view the bot here

BlackExcellenceTwitterBot

blackexcellencebot

blackexcellencebotinteraction

Final Project: Audiovisual mixer

For this project, I wanted to create a program that takes sounds and relates images to them in a mixer- or board-style interface. The keys used are a, s, d, f, j, k, and l. Each key corresponds to a component of a short song I made using Logic Pro. When a key is pressed, its sound turns on, and when it is released, the sound stops, so it works more like a killswitch board. I am using primitive shapes right now and would like to eventually develop them into gifs or images to make it more complex. The current result, I feel, is not optimal since the synchronization is quite awful at the moment (not a p5.js problem – I trimmed one of the audio clips too short by mistake), so please excuse the urge to vomit. Here is a quick demo:

sketch to be uploaded in a minute
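While the sketch upload is pending, the core keyboard-killswitch logic looks roughly like this (sketched in Python with pygame purely as an illustration rather than the p5.js I actually used; the stem file names are placeholders, and the shape drawing is left out):

```python
import pygame

pygame.init()
pygame.mixer.init()
screen = pygame.display.set_mode((640, 480))  # window needed to receive key events

# Each key maps to one stem exported from the Logic Pro session (placeholder file names).
KEY_TO_SOUND = {
    pygame.K_a: pygame.mixer.Sound("stem_drums.wav"),
    pygame.K_s: pygame.mixer.Sound("stem_bass.wav"),
    pygame.K_d: pygame.mixer.Sound("stem_keys.wav"),
    pygame.K_f: pygame.mixer.Sound("stem_vocals.wav"),
    pygame.K_j: pygame.mixer.Sound("stem_lead.wav"),
    pygame.K_k: pygame.mixer.Sound("stem_pad.wav"),
    pygame.K_l: pygame.mixer.Sound("stem_fx.wav"),
}
channels = {}  # key -> the Channel currently playing that stem

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in KEY_TO_SOUND:
            # Key down: start the stem (looping so it holds while the key is held).
            channels[event.key] = KEY_TO_SOUND[event.key].play(loops=-1)
        elif event.type == pygame.KEYUP and event.key in channels:
            # Key up: kill the stem immediately -- the "killswitch" behavior.
            channels.pop(event.key).stop()

pygame.quit()
```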

Project 10: Creature

For a while I had this image of a fat, fleshy creature sitting in a box, going about his business, possibly living in another dimension or world. (The sandwich he is eating is invisible because it exists in a virtual non-space.) In this project, I wanted to give the sense of invading someone else's living space, and to create a little bit of humor that refers to the aesthetic of my drawings through movements that make the creature seem alive. Using the spring template, I added a trait where the spring ball hides every time the cursor moves toward him. He might try to peek out if you wait for a few seconds. You can try to pick him up, but he will eventually break from terror.
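The behavior is a few simple rules layered on top of the spring. Here is a rough sketch of that logic in plain Python (no drawing; it uses simple cursor proximity instead of cursor direction, and the distance and timing thresholds are made-up values):

```python
import math
import time

HIDE_DISTANCE = 120     # how close the cursor can get before he ducks (placeholder)
PEEK_DELAY = 3.0        # seconds of stillness before he peeks back out (placeholder)
BREAK_HOLD_TIME = 2.0   # how long he can be held before he breaks from terror (placeholder)

class Creature:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.hidden = False
        self.broken = False
        self.last_scare = 0.0
        self.grab_start = None

    def update(self, mouse_x, mouse_y, mouse_pressed):
        if self.broken:
            return
        now = time.time()
        dist = math.hypot(mouse_x - self.x, mouse_y - self.y)

        # Hide whenever the cursor gets close.
        if dist < HIDE_DISTANCE:
            self.hidden = True
            self.last_scare = now
        # Peek back out after a few quiet seconds.
        elif self.hidden and now - self.last_scare > PEEK_DELAY:
            self.hidden = False

        # Being picked up is terrifying; hold him too long and he breaks.
        if mouse_pressed and dist < HIDE_DISTANCE:
            self.grab_start = self.grab_start or now
            if now - self.grab_start > BREAK_HOLD_TIME:
                self.broken = True
        else:
            self.grab_start = None
```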

Project10

Project 09: Turtle Graphics

This is a self-generative scribble graphic, where the user is only in control of the thickness of the line. I thought the randomness of the scribbles would give it a human or endearing quality, almost like witnessing a toddler scribbling on a wall. I enjoyed playing with this project, as I've found you can make some interesting compositions.
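The generative part is essentially a random walk. A minimal illustration of the idea using Python's built-in turtle module (not the p5.js code from the project; the step sizes and turn angles are placeholders):

```python
import random
import turtle

pen_size = 3  # in the sketch, this thickness is the one thing the user controls

scribbler = turtle.Turtle()
scribbler.speed(0)
scribbler.pensize(pen_size)

# Random walk: small forward steps with random turns give the
# toddler-scribbling-on-a-wall quality.
for _ in range(1500):
    scribbler.left(random.uniform(-120, 120))
    scribbler.forward(random.uniform(5, 25))

turtle.done()
```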


Project09

Project 08: Generative Portrait

For this project, I chose to render my brother. I originally wanted to use particle systems to create a water-like simulation on top of the loaded pixels using a mutual repulsion force. In this sketch, however, I used a Gaussian distribution to create a black hole in which an array of tiny pixel ellipses forms and disappears. The sketch did not turn out as I had hoped: I did not give the particles a gradually applied velocity, just layered the Gaussian particles on top of the grid, and used too many ellipses in the for loops, which made the transition abrupt and the program slow.
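The core of the effect is sampling positions from a Gaussian centered on one point and coloring tiny ellipses with the pixels underneath them. A static sketch of that idea in Python with Pillow (not the p5.js sketch itself; the file name and parameters are placeholders):

```python
import random

from PIL import Image, ImageDraw  # assumes the Pillow library

source = Image.open("brother.jpg").convert("RGB")  # placeholder file name
canvas = Image.new("RGB", source.size, "black")
draw = ImageDraw.Draw(canvas)

center_x, center_y = source.width // 2, source.height // 2
spread = source.width / 6  # standard deviation of the "black hole"
radius = 3                 # size of each tiny ellipse

# Sample positions from a Gaussian around the center and color each dot with
# the pixel underneath it, so the portrait emerges densest at the middle.
for _ in range(20000):
    x = int(random.gauss(center_x, spread))
    y = int(random.gauss(center_y, spread))
    if 0 <= x < source.width and 0 <= y < source.height:
        color = source.getpixel((x, y))
        draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill=color)

canvas.save("portrait_out.png")
```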

Project08Face

Project 07: Generative Landscape

When asked to plan a generative moving landscape, I immediately thought of the game Journey. It is a visually stunning, emotionally oriented game in which you, a cloaked wanderer, fly through a landscape of sand dunes, old buildings, ruins, and caves.

In this sketch I tried to reference the game's color scheme and my favorite structures, the posts. I used noise functions to render the scenery, and a random function for the color of the posts to create a dust-wind effect.
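The scenery comes from sampling a smooth noise function as the landscape scrolls by. A rough illustration in plain Python, with a tiny 1D value-noise function standing in for p5.js's noise() (all constants are placeholders):

```python
import math
import random

random.seed(7)
_lattice = [random.random() for _ in range(256)]  # fixed random values to interpolate between

def value_noise(x):
    """Smooth 1D noise: cosine-interpolate between random lattice values
    (a simple stand-in for p5.js's noise())."""
    i = int(math.floor(x))
    a, b = _lattice[i % 256], _lattice[(i + 1) % 256]
    t = x - i
    t = (1 - math.cos(t * math.pi)) / 2  # cosine easing for a smooth curve
    return a * (1 - t) + b * t

# Sample the dune silhouette; increasing `offset` each frame scrolls the terrain.
WIDTH, MAX_HEIGHT = 80, 20
offset = 0.0
for col in range(WIDTH):
    height = int(value_noise((col + offset) * 0.1) * MAX_HEIGHT)
    print("#" * height)  # one column of the dune profile per line
```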

(Screenshot from the game Journey)

sketch

Final Project – Jo McAllister – Mosaic Maker

My Mosaic Maker is a Photo-Booth-like program that creates image mosaics out of images or videos taken from your computer. There are different modes that let you control what the image mosaic is made up of. After struggling to code this in Python using OpenCV and NumPy, I've come to better appreciate those who have developed p5.js, Processing, OpenFrameworks, and other tools that ease the creation of quality graphics and image manipulation. Even the simple task of displaying an image takes a lot of thought to execute in a convenient way.

This first video shows image mosaics created with a simple tint function to change the colors of the pixel-images.

This second video shows a mode that uses an algorithm that sorts recorded images by their average grayscale color, and then finds the most appropriate pixel-picture to place at each pixel of the mosaic.
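Roughly, the matching works like this. Here is a simplified sketch with OpenCV and NumPy (the tools the project is written with), though the tile size, names, and structure are placeholders rather than the actual program:

```python
import cv2
import numpy as np

TILE = 32  # each mosaic cell becomes one 32x32 pixel-picture (placeholder size)

def average_gray(img):
    """Mean grayscale value of an image, used as its brightness key."""
    return float(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean())

def build_mosaic(target, pixel_pictures):
    """Replace each TILE x TILE cell of `target` with the recorded picture
    whose average grayscale value is closest to that cell's."""
    tiles = sorted((cv2.resize(p, (TILE, TILE)) for p in pixel_pictures), key=average_gray)
    keys = np.array([average_gray(t) for t in tiles])  # ascending, since tiles are sorted

    out = target.copy()
    for y in range(0, target.shape[0] - TILE + 1, TILE):
        for x in range(0, target.shape[1] - TILE + 1, TILE):
            cell = average_gray(target[y:y + TILE, x:x + TILE])
            # Binary-search the sorted keys, then pick the closer neighbor.
            i = int(np.searchsorted(keys, cell))
            if i == len(keys) or (i > 0 and cell - keys[i - 1] < keys[i] - cell):
                i -= 1
            out[y:y + TILE, x:x + TILE] = tiles[i]
    return out

# Example usage (placeholder file names):
# target = cv2.imread("portrait.jpg")
# captures = [cv2.imread(p) for p in ["cap0.jpg", "cap1.jpg", "cap2.jpg"]]
# cv2.imwrite("mosaic.jpg", build_mosaic(target, captures))
```

Sorting the recorded images up front means each mosaic cell only needs a binary search rather than a scan over every capture.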


The rest of these are screenshots of my friends and me playing with the Mosaic Maker.



Final Project – Lidar Visualization

Over the past month or two, I’ve been scanning people and environments—both urban and natural—using a form of light-based rangefinding called LIDAR. Over Thanksgiving break, I began to “shoot” 360˚ environments at a small farm in Virginia using a rig I built that allows the LIDAR, which captures points in a plane, to rotate about a vertical axis. This additional movement allows the LIDAR to “scan” over a space, capturing points in all directions.

(Photo: the bamboo forest capture site)

My favorite capture by far was taken in a bamboo forest (pictured above) at an extremely high resolution. The resulting point cloud contains over 3 million points. Rendering the points statically with a constant size and opacity yields incredibly beautiful images with a fine-grained texture. The detail is truly astonishing.

(Screenshot: a static render of the bamboo forest point cloud)

However, I wanted to create an online application that would allow people to view these point clouds interactively as 2.5D forms. Unfortunately, I was not able to develop a web app to run them, as I underestimated (1) how difficult it is to learn to use shaders well and (2) how much processing it takes to render a 3-million-point cloud. One possible solution is to resample the point cloud: cull every nth scan and, in addition, remove all points within a certain distance of each other.
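A sketch of that resampling idea with NumPy (not the project's code; the stride and minimum distance are placeholder values), keeping every nth point and then thinning anything closer than a minimum distance with a coarse voxel grid:

```python
import numpy as np

def resample(points, stride=8, min_dist=0.02):
    """Downsample an (N, 3) point cloud: keep every `stride`-th point, then
    keep at most one point per `min_dist`-sized voxel so nothing sits closer
    than roughly `min_dist` to its neighbors."""
    points = points[::stride]

    # Snap each point to a coarse grid cell and keep the first point in each cell.
    cells = np.floor(points / min_dist).astype(np.int64)
    _, keep = np.unique(cells, axis=0, return_index=True)
    return points[np.sort(keep)]

# Example: thin a 3-million-point cloud down to something a browser can handle.
cloud = np.random.rand(3_000_000, 3).astype(np.float32)  # stand-in for the bamboo forest scan
print(resample(cloud).shape)
```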

Even so, I developed an application that runs locally using OpenFrameworks (see here for the full code). It performs operations to mimic depth and blur on every point, including making points larger and more transparent the closer they are to the eye coordinates (camera). It also introduces a small amount of three-dimensional Perlin noise to certain points to add a little movement to the scene.
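The per-point depth treatment boils down to a distance-to-camera mapping. Roughly, as a Python sketch of just that mapping (made-up constants, not the OpenFrameworks code, and leaving out the Perlin-noise displacement):

```python
import math

MAX_SIZE, MIN_SIZE = 12.0, 1.5   # point size in pixels (placeholder values)
MIN_ALPHA, MAX_ALPHA = 30, 255   # 0-255 opacity range (placeholder values)
NEAR, FAR = 0.5, 40.0            # distance range of interest, in scan units

def point_style(point, eye):
    """Map a point's distance from the camera to a (size, alpha) pair:
    nearer points are drawn larger and more transparent, mimicking out-of-focus blur."""
    dist = math.dist(point, eye)
    t = max(0.0, min(1.0, (dist - NEAR) / (FAR - NEAR)))  # 0 at NEAR, 1 at FAR
    size = MAX_SIZE + t * (MIN_SIZE - MAX_SIZE)      # big when near, small when far
    alpha = MIN_ALPHA + t * (MAX_ALPHA - MIN_ALPHA)  # faint when near, opaque when far
    return size, int(alpha)

print(point_style((0.0, 0.0, 1.0), (0.0, 0.0, 0.0)))   # close to camera: large, faint
print(point_style((0.0, 0.0, 30.0), (0.0, 0.0, 0.0)))  # far away: small, solid
```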

To allow others to see and explore the data from the bamboo forest capture, I made a three.js demo that visualizes an eighth of it (since the browser doesn’t like loading 3 million points, 200k will have to suffice).

(Screenshot of the three.js demo)