Final Project – Lidar Visualization

Over the past month or two, I've been scanning people and environments, both urban and natural, using a form of light-based rangefinding called LIDAR. Over Thanksgiving break, I began to "shoot" 360° environments at a small farm in Virginia using a rig I built that rotates the LIDAR, which captures points in a single plane, about a vertical axis. This additional motion sweeps the scan plane through the space, capturing points in all directions.
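The geometry behind this is simple: within each planar scan, the LIDAR reports a beam angle and a range, and the rig's rotation adds a pan angle about the vertical axis. Together those three values are spherical coordinates. A minimal sketch of the conversion (hypothetical names, not the rig's actual code) might look like this:

```cpp
#include <cmath>

struct Point3 { float x, y, z; };

// theta: beam angle within the scan plane, measured from horizontal (radians)
// phi:   rig rotation about the vertical axis (radians)
// r:     range reported by the LIDAR for that beam
Point3 scanToCartesian(float theta, float phi, float r) {
    float horiz = r * std::cos(theta);   // projection onto the ground plane
    return { horiz * std::cos(phi),      // x
             horiz * std::sin(phi),      // y
             r * std::sin(theta) };      // z (height)
}
```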

[Photo: the bamboo forest (IMG_5550)]

My favorite capture by far was taken in a bamboo forest (pictured above) at an extremely high resolution. The resulting point cloud contains over 3 million points. Rendering the points statically, with a constant size and opacity, yields incredibly beautiful images with a fine-grained texture. The detail is truly astonishing.

[Screenshot: static render of the bamboo-forest point cloud]

However, I wanted to create an online application that would let people view these point clouds interactively as 2.5D forms. Unfortunately, I was not able to build such a web app, because I underestimated (1) how difficult it is to learn to use shaders well and (2) how much processing it takes to render a 3-million-point cloud. One possible solution is to resample the point cloud: cull every nth scan and, on top of that, remove any points that lie within a certain distance of each other (sketched below).
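Here is one way that resampling could work (a hypothetical helper, not from the project repo): keep only every nth scan line, then snap the survivors to a coarse voxel grid so that at most one point is retained per cell, which approximates removing points within `minDist` of each other:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point3 { float x, y, z; };

// scans:   the capture, grouped as one vector of points per scan line
// n:       keep every nth scan line (n >= 1)
// minDist: approximate minimum spacing between retained points
std::vector<Point3> decimate(const std::vector<std::vector<Point3>>& scans,
                             int n, float minDist) {
    std::unordered_set<uint64_t> occupied;  // one entry per filled voxel
    std::vector<Point3> out;
    for (size_t s = 0; s < scans.size(); s += n) {   // every nth scan
        for (const Point3& p : scans[s]) {
            // Hash the voxel this point falls in; keep the point only
            // if its voxel is still empty.
            int64_t ix = (int64_t)std::floor(p.x / minDist);
            int64_t iy = (int64_t)std::floor(p.y / minDist);
            int64_t iz = (int64_t)std::floor(p.z / minDist);
            uint64_t key = ((uint64_t)(ix & 0x1FFFFF) << 42) |
                           ((uint64_t)(iy & 0x1FFFFF) << 21) |
                           ((uint64_t)(iz & 0x1FFFFF));
            if (occupied.insert(key).second) out.push_back(p);
        }
    }
    return out;
}
```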

Even so, I developed an application that runs locally using openFrameworks (see here for the full code). To mimic depth-of-field blur, it operates on every point, making points larger and more transparent the closer they are to the eye coordinates (camera). It also applies a small amount of three-dimensional Perlin noise to certain points, adding a little movement to the scene.
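To make the two per-point operations concrete, here is a minimal openFrameworks-style sketch (hypothetical names and parameters, not the actual project code): alpha fades with distance to the camera, and `ofSignedNoise()` jitters positions each frame. Per-point size would additionally require a vertex shader writing `gl_PointSize`, so only the alpha and noise parts are shown:

```cpp
#include "ofMain.h"

class PointCloudApp : public ofBaseApp {
public:
    ofMesh cloud;    // original scanned points, loaded once
    ofMesh display;  // per-frame copy with noise + alpha applied
    ofEasyCam cam;

    void setup() override {
        cloud.setMode(OF_PRIMITIVE_POINTS);
        // ... load the scanned points into `cloud` here ...
    }

    void update() override {
        // Copying millions of points each frame is expensive; this is a
        // sketch, not an optimized implementation.
        display = cloud;
        display.clearColors();
        float t = ofGetElapsedTimef();
        ofVec3f eye = cam.getPosition();
        for (size_t i = 0; i < display.getNumVertices(); ++i) {
            ofVec3f& v = display.getVertices()[i];
            // Small 3D Perlin jitter so the scene shimmers slightly.
            v.x += ofSignedNoise(v.x * 0.01, v.y * 0.01, t) * 0.5;
            v.y += ofSignedNoise(v.y * 0.01, v.z * 0.01, t) * 0.5;
            // Points nearer the eye become more transparent (and, in the
            // real app, larger), mimicking shallow depth of field.
            float d = eye.distance(v);
            float alpha = ofMap(d, 0, 500, 40, 255, true);
            display.addColor(ofColor(255, alpha));
        }
    }

    void draw() override {
        ofEnableAlphaBlending();
        cam.begin();
        display.draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new PointCloudApp());
}
```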

To allow others to see and explore the data from the bamboo forest capture, I made a three.js demo that visualizes an eighth of it (since the browser doesn’t like loading 3 million points, 200k will have to suffice).

[Screenshot: the three.js point-cloud demo]