Final Project – Lidar Visualization

Over the past month or two, I’ve been scanning people and environments, both urban and natural, using a form of light-based rangefinding called lidar. Over Thanksgiving break, I began to “shoot” 360˚ environments at a small farm in Virginia using a rig I built that allows the lidar, which captures points in a single plane, to rotate about a vertical axis. This additional movement lets the lidar sweep over a space, capturing points in all directions.


My favorite capture by far was taken in a bamboo forest (pictured above) at an extremely high resolution. The resulting point cloud contains over 3 million points. Rendering the points statically with a constant size and opacity yields incredibly beautiful images with a fine-grained texture. The detail is truly astonishing.


However, I wanted to create an online application that would let people view these point clouds interactively as 2.5D forms. Unfortunately, I was not able to develop a web app to run them, as I underestimated (1) how difficult it is to learn to use shaders well, and (2) how much processing it takes to render a 3-million-point point cloud. One possible solution is to resample the point cloud: cull every nth scan and, in addition, remove all points within a certain distance of each other.

Even so, I developed an application that runs locally using OpenFrameworks (see here for the full code). It performs operations on every point to mimic depth of field, making points larger and more transparent the closer they are to the eye coordinates (the camera). It also introduces a small amount of three-dimensional Perlin noise to certain points, adding a little movement to the scene.

To allow others to see and explore the data from the bamboo forest capture, I made a three.js demo that visualizes an eighth of it (the browser doesn’t like loading 3 million points, so 200k will have to suffice).


Looking Outwards 11 (for Final Project)

Two pieces I am inspired by are “In the Eyes of the Animals” by Marshmallow Laser Feast and the Fursuit Parade Point Cloud by Kyle Machulis. Both pieces involve lidar, though in different ways and for different purposes.

The former uses lidar to capture a still forest in striking detail, then reinterprets the resulting point cloud to create an entirely new, yet strangely familiar, virtual forest that responds to the user in VR. The piece is incredibly beautiful and offers a novel reinterpretation of reality, but seems to be, more than anything, “for show.”


The latter project is more interesting to me, for it captures an incredibly large scene (or environment) in 3D, with people in it, in high detail. Its concern is less aesthetics and more the open-sourcing of data and capture methods for experimentation. For me, this project has greater potential to be used in interesting ways. It was also captured from a truck as it drove past the scene, a method more applicable to the applications I’m after.


Final Project Proposal

For my final project, I’m planning to use lidar to scan spaces and people, and to use the resulting data to create interactive software pieces. I plan to use OpenFrameworks (with the permission of Golan Levin) to develop these applications. I’ve already begun exploring the lidar and will continue to iterate on these concepts to discover new creative applications for the technology. One area I am keen to focus on is the relationship between the observer and the observed, and how it manifests in the capture itself.

You can find my process up until this point documented here and here. Also, here are a few photos for reference:




Project 10 – Creature – Ben Snell

I sought to create a swarm that could simulate the movement of amoeba-like creatures, similar in form and interaction to droplets of oil in water. Fortunately, I was able to develop my own swarm program with about 500 “particles.” However, I ran out of time and did not develop the swarm into the blobs I had hoped for. That being said, the swarm is still fully operational: movement of the mouse creates a “wind” that pushes the swarm in a specific direction.

The movement of the swarm really surprised me when the program first came alive. The emergent properties of the system are incredibly sensitive to even the tiniest changes in parameters. It took quite some time to get all the parameters aligned properly to create the demo below.

I again programmed in OpenFrameworks (C++), with the permission of Golan Levin. OF is an open-source arts-engineering toolkit similar to Processing and p5.js, but much more powerful. If you’d like to play with the code yourself, you can find it on GitHub.

Sketches:

Golan’s explanation of the marching squares method to approximate isolines around a collection of points:

Demo Videos (Demos 2-5 added later)


Pauline Oliveros is a famous electronic musician who has been working in this realm of art and technology since the 1950s. She’s well known for what she calls “deep listening,” which, as one might imagine, is the process of awareness and awakening to sounds unheard. It is this fascination with the nuances of daily life, those things most people don’t even notice, that I love so much about her work. Many of her pieces, including a more recent one at the Whitney Biennial (see below), give viewers and listeners a whole new perspective and stage from which to observe.

Here’s another composition she made from way back in the day when she worked at the San Francisco Tape Music Center:

And here is Pauline’s website.

Project 09 – COMPOSITION

Inspired by Casey Reas, I set out to create my own generative environment governed by two independent factors. Here’s the idea: little reactive agents (turtles) want food. The mouse cursor represents the food, so they try to get as close to it as possible. Here’s the catch: the environment they live in is full of hills and valleys, and they know nothing of the landscape beyond their immediate vicinity. Furthermore, the agents prefer to move around mountains or down inclines, and rarely straight up a steep one.

Note: With the permission of Golan, I developed my project in OpenFrameworks (OF), a powerful open-source arts-engineering toolkit. I also modified the turtle API to be compatible with C++ and added an “angleToward()” function, which turns the turtle a certain amount toward a specific direction (specified as an angle, not a coordinate pair). Since the sketch is written in C++ and not based in the browser, it’s a lot faster. I’ve attached video captures of it below. Here’s a link to the GitHub repository.

Process sketches:


Screenshots (Noise field, Noise-derived Topology, and following process screenshots):


Video Demos:

Crucial Code Snippets (in C++):

#include "ofApp.h"

void ofApp::setup(){
    // ------------------------------
    // ------ REFERENCE NOISE -------
    // ------------------------------
    // create terrain (black = high, white = low)
    float noiseScale = .003; // 1 = one pixel is one unit
    float noiseOffset = 0.0;
    topoNoise.allocate(ofGetWidth(), ofGetHeight(), OF_IMAGE_GRAYSCALE);
    int nPixels = ofGetWidth() * ofGetHeight();

    // pointer to beginning of pixels array
    // this stores the address of the 0th element
    // (an array is really just a pointer to a contiguous block of memory)
    unsigned char* pixelPointer = topoNoise.getPixels();
    // go through all pixels
    // go through all pixels
    for (int i = 0; i < nPixels; i++) {
        // set the pixel this pointer points to (perlin noise)
        // note: the data type pointed to must match the pointer type;
        // the loop index is a separate int, so it can exceed the range of a char
        float px = noiseScale * (i % ofGetWidth());
        float py = noiseScale * ((int)(i / ofGetWidth()));
        *pixelPointer = (unsigned char)(round(ofNoise(px + noiseOffset, py + noiseOffset) * 255.));
        // print the value just assigned (the value pointed to by the pointer)
        // note: convert to int since the stored value is char
//        cout << (int)*pixelPointer << endl;
        // increment the pointer (the "house" that stores the address)
        pixelPointer++;
    }

    // ------------------------------
    // ---------- TOPOLOGY ----------
    // ------------------------------
    // create a fresh pointer at the beginning (pixelPointer now points past the end)
    unsigned char* pointer = topoNoise.getPixels();
    // allocate space for directions array
    topoNormals.allocate(ofGetWidth(), ofGetHeight(), OF_IMAGE_GRAYSCALE);
    unsigned char* normals = topoNormals.getPixels();
    // based on topoNoise, create a new toplogy map of the direction of declines (i.e. normals projected down into 2d)
    // loop through pixels to find their directions (0 to 255 ~ 0 to 2PI radians)
    for (int i = 0; i < ofGetHeight(); i++) { // rows
        for (int j = 0; j < ofGetWidth(); j++) { // cols
            // --------- HORIZONTAL -----------
            // pixel to Left: find the index
            int xL = (j - 1 + ofGetWidth()) % ofGetWidth(); // x coordinate (add width so % wraps instead of going negative at j = 0)
            int yL = i;                                     // y coordinate
            int indexL = yL * ofGetWidth() + xL;    // index
            // move pointer to this index
            pointer = pointer + indexL;
            // find pixel value at this location
            int pixelL = (int)*pointer;             // pixel value to left
            pointer = pointer - indexL;             // return pointer to start
            // pixel to Right
            int xR = (j + 1) % ofGetWidth();
            int yR = i;
            int indexR = yR * ofGetWidth() + xR;
            pointer = pointer + indexR;
            int pixelR = (int)*pointer;
            pointer = pointer - indexR;
            // find the x axis normal
            int xNormal = pixelL - pixelR; // facing to right is positive
            // ---------- VERTICAL -----------
            // pixel Up
            int xU = j;
            int yU = (i - 1 + ofGetHeight()) % ofGetHeight(); // add height so % wraps instead of going negative at i = 0
            int indexU = yU * ofGetWidth() + xU;
            pointer = pointer + indexU;
            // find pixel value at this location
            int pixelU = (int)*pointer;
            pointer = pointer - indexU;
            // pixel Down
            int xD = j;
            int yD = (i + 1) % ofGetHeight();
            int indexD = yD * ofGetWidth() + xD;
            pointer = pointer + indexD;
            int pixelD = (int)*pointer;
            pointer = pointer - indexD;
            // find the y axis normal
            int yNormal = pixelU - pixelD; // facing down is positive
            // ----------- NORMAL -----------
            // convert these into a direction
            // NOTE: this discards the amplitude (steepness of incline)
            // wrap atan2's [-PI, PI] range into [0, 2PI), then map to [0, 255]
            int angle = round(fmod(atan2((double)yNormal, (double)xNormal) + 2 * M_PI, 2 * M_PI) / (2 * M_PI) * 255.0);
            // set the pixel value to the angle
            *normals = (unsigned char)angle;
            // increment pointer
            normals++;
        }
    }

    // ------------------------------
    // -------- SETUP AGENTS --------
    // ------------------------------
    // initialize the agents
    for (int i = 0; i < nAgents; i++) {
        turtle tempAgent;
        tempAgent.setPosition(ofRandom(ofGetWidth()), ofRandom(ofGetHeight()));
        tempAgent.angle = ofRandom(2 * PI);
        tempAgent.setColor(round(ofRandom(1)) * 255);
        agents.push_back(tempAgent);
    }
}

void ofApp::draw(){
//    topoNoise.draw(0, 0);
//    topoNormals.draw(0, 0);
    // ------------------------------
    // ---- UPDATE & DRAW AGENTS ----
    // ------------------------------
    // agents want to "eat" the mouse; however, their environment influences the path they must take
    if (bStart) {
        unsigned char* pointer = topoNormals.getPixels();
        for (int i = 0; i < nAgents; i++) {
            // find the slope downward at agent's location
            int px = round(agents[i].x - 1);
            int py = round(agents[i].y - 1);
            int index = px + py * ofGetWidth();
            pointer = pointer + index;
            int tempAngle = (int)*pointer;
            double topoAngle = (double)tempAngle / 255. * 2 * M_PI;
            pointer = pointer - index;
            // find direction to mouse
            double mouseAngle = agents[i].angleTo(mouseX, mouseY);
            // turn toward mouse by averaging terrain and food source
            double avgAngle = (0.7 * mouseAngle + 0.3 * topoAngle) / 2;
            agents[i].angleToward(avgAngle, 0.75); // low lerps are really interesting
            // move forward a bit (step size assumed; this line was elided in the original)
            agents[i].forward(1);
            // wrap the agents around the screen edges
            agents[i].x = fmod(agents[i].x + ofGetWidth(), ofGetWidth());
            agents[i].y = fmod(agents[i].y + ofGetHeight(), ofGetHeight());
        }
    }
//    ofDrawBitmapStringHighlight(ofToString(ofGetFrameRate()) + " fps", 10, 20);
}


Project-08-Portrait-Ben Snell

Multidimensional Image Extrusions

I chose to explore the visualization of color photography as a multidimensional form of data capture. Using the form of a tetrahedron, which is well suited to interfacing with four-dimensional data sets, I projected the individual R, G, and B channels of a single face outward onto their own separate faces. This can be thought of, in simple terms, as a three-dimensional bar chart in which differently colored bars originate at the same location but are extruded at different angles. The result is a particularly interesting, crystal-like form that allows us to view and understand color photography in a whole new light.

The portrait is of myself, and has, strangely enough, turned my hair white. Could this be a manifestation of my future self living within a purely digital world?

Here are some of my process sketches and screengrabs:



And here is the final piece:

I used Three.js and a few functions written by the illustrious Mr.Doob. If the above visualization is not displaying correctly, try reloading the page, or clicking and dragging on the image.

Tetrahedron using Three.js revision 73


Looking Outwards 08

I found Aman’s recent Looking Outwards really interesting. It’s called NeuroViz, and it’s an incredibly simple tool for visualizing the activation and inhibition of nodes within neural networks such as those in our brain. It does this in a very forthcoming manner that’s extremely easy to understand. It reminds me of the projects by Nicky Case that attempt to explain complex topics in ways that make learning fun. My only regret is that I had hoped it would allow more complexity, but there’s only so far one can get with this “tool.”


Here’s Aman’s original post.

Three.js in WordPress

Three.js is a powerful tool that makes it easy to do 3D graphics in the browser. Check out some of the amazing projects made with it here.

To add it to your WordPress post, simply create an HTML file using the following format, kindly provided by Mr.Doob, in your preferred text editor. Then upload this file to WordPress and retrieve the link to the upload. Include in your post the following text, with your link inserted into the spot labeled “YOUR_LINK_HERE”:

[Screenshot of the embed template]