Briley Newell – Final Project (Sticks and Stones)

Of everything we learned this semester, I enjoyed the generative portraits and the text-rain assignment the most. I like the idea of a canvas that reacts to what it sees, so I decided to combine the two and create a generative portrait that responds to my computer's camera.

For this project, I created a program that generates filled-in black-and-white circles to represent pixels below a certain brightness threshold, and larger open circles in an amplified color to represent pixels above that threshold. In my original portrait project, I found that I most liked the compositions that involved both lines and circles, so for this project I implemented both, but put the lines only at the extreme lights and extreme darks. I also flipped the camera so that it provides a mirror image, and I created a basic frame to tie all the loose shapes together. I would have liked to determine what type of circle is drawn based on brightness relative to the surrounding pixels rather than brightness relative to the whole canvas, but that would have meant for() loops inside of for() loops and probably would have given my computer a heart attack.
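(One way to sidestep the loops-inside-loops worry would be a summed-area table: a single precomputation pass over the frame, after which the average brightness of any window around a pixel costs only four array lookups. Below is a rough sketch in plain JavaScript, with hypothetical function names and a flat grayscale array standing in for the camera pixels; none of this is in the actual project code.)

```javascript
function summedAreaTable(gray, w, h) {
  // sat[y][x] holds the sum of all gray values above and to the left of (x, y)
  var sat = [];
  for (var y = 0; y <= h; y++) {
    sat.push(new Array(w + 1).fill(0));
  }
  for (var row = 1; row <= h; row++) {
    for (var col = 1; col <= w; col++) {
      sat[row][col] = gray[(row - 1) * w + (col - 1)]
                    + sat[row - 1][col] + sat[row][col - 1]
                    - sat[row - 1][col - 1];
    }
  }
  return sat;
}

function localAverage(sat, x, y, radius, w, h) {
  // average brightness of the window around (x, y), clamped to the image edges
  var x0 = Math.max(0, x - radius), y0 = Math.max(0, y - radius);
  var x1 = Math.min(w, x + radius + 1), y1 = Math.min(h, y + radius + 1);
  var sum = sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0];
  return sum / ((x1 - x0) * (y1 - y0));
}
```

With something like this, the circle type at each sampled point could be chosen by comparing the pixel's brightness to `localAverage` of its neighborhood instead of to a single global threshold.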

var myCaptureDevice;
var brightnessNum = 50; 
var darkestNum = 30; 
var lightestNum = 80; 

function setup() {
  createCanvas(640, 480);
  myCaptureDevice = createCapture(VIDEO);
  myCaptureDevice.size(640, 480);
  myCaptureDevice.hide(); // this hides an unnecessary extra view
}

function draw() {
  reverseCamera(); //makes camera reflect like mirror

  drawLightLines(); //lines from lightest points
  drawCircles(); //circles in mid-range brightness
  drawDarkLines();  //lines from darkest points 
  drawFrame(); //white frame
}

function drawFrame(){
  stroke(255); //white, unfilled frame to tie the shapes together
  noFill();
  rect(20, 20, 600, 440);
}

function reverseCamera(){
  for (var r = 0; r < myCaptureDevice.width/2; r++){
    for (var t = 0; t < myCaptureDevice.height; t++){
      var rightColor = myCaptureDevice.get(r, t); 
      var leftColor = myCaptureDevice.get(myCaptureDevice.width-1-r, t); 
      myCaptureDevice.set(r, t, leftColor); 
      myCaptureDevice.set(myCaptureDevice.width-r-1, t, rightColor); 
    }
  }
  myCaptureDevice.updatePixels(); //applies the changes to the video image rather than the canvas
}

function drawCircles(){
  //filled circles
  for (var j = 0; j < 4000; j++){
    var w = random(7, width-7); 
    var h = random(7, height-7); 
    var iw = constrain(floor(w), 0, width-1); 
    var ih = constrain(floor(h), 0, height-1); 
    var sSize = random(1, 25); 

    var whColor = myCaptureDevice.get(iw, ih); 
    var whBW = (red(whColor)+green(whColor)+blue(whColor))/2; //brightened grayscale value
    var whBrightness = brightness(whColor); 

    if (whBrightness < brightnessNum){ //mid-dark range brightness 
      noStroke(); 
      fill(whBW); //black-and-white fill
      ellipse(w, h, sSize, sSize); 
    }
  }

  //open circles  
  for (var i = 0; i < 1500; i++){
    var x = random(7, width-7); 
    var y = random(7, height-7); 
    var ix = constrain(floor(x), 0, width-1); 
    var iy = constrain(floor(y), 0, height-1); 
    var size = random(5, 40); 

    var xyColor = myCaptureDevice.get(ix, iy); 
    var xyBrightness = brightness(xyColor); 
    var brighterColor = color(red(xyColor)*1.4, green(xyColor)*1.4, blue(xyColor)*1.4); //generates an amplified color

    if (xyBrightness > brightnessNum){ //mid-light range brightness 
      noFill(); 
      stroke(brighterColor); 
      strokeWeight(random(1, 6)); 
      ellipse(x, y, size, size); 
    }
  }
}

function drawLightLines(){
  for (var s = 0; s < 500; s++){
    var l = random(width); 
    var m = random(height); 
    var il = constrain(floor(l), 0, width-1); 
    var im = constrain(floor(m), 0, height-1); 

    var el = l + random(-75, 75); 
    var em = m + random(-75, 75); 

    var lmColor = myCaptureDevice.get(il, im); 
    var lmB = (red(lmColor)+green(lmColor)+blue(lmColor))/3; 
    var lmBrightness = brightness(lmColor); 

    if (lmBrightness > lightestNum){
      stroke(0, lmB*1.1, lmB*1.1); 
      strokeWeight(random(1, 10)); 
      line(l, m, el, em); 
    }
  }
}

function drawDarkLines(){
  for(var g = 0; g < 1000; g++){
    var p = random(width);
    var d = random(height); 
    var ip = constrain(floor(p), 0, width-1); 
    var id = constrain(floor(d), 0, height-1);

    var ep = p + random(-50, 50); 
    var ed = d + random(-50, 50); 

    var pdColor = myCaptureDevice.get(ip, id); 
    var pdB = (red(pdColor)+green(pdColor)+blue(pdColor))/3;
    var pdBrightness = brightness(pdColor); 

    if(pdBrightness < darkestNum){
      stroke(0, pdB, pdB*2); 
      strokeWeight(random(1, 10)); 
      line(p, d, ep, ed); 
    }
  }
}


Briley Newell – Final Project Looking Outwards

Wooden Mirror, Daniel Rozin

Besides text rain, my project is also inspired by another type of reactive art. In Thibaut Sld's HEXI (2014) and Daniel Rozin's Wooden Mirror (1999), the surface of a sculpture mounted on the wall reacts to nearby "motion" (actually a change in perceived brightness).

I have seen similar projects in museums, specifically the Children's Museum in downtown Pittsburgh, and I am always fascinated by them. HEXI uses large hexagonal tiles that tilt in a certain direction depending on how the sculpture is programmed to react to changing brightness, and Wooden Mirror uses smaller wooden tiles that tilt a certain degree based on perceived brightness in order to create shadows and thus mimic the planes of a face.

Hexi – responsive wall from Thibaut Sld on Vimeo.

For my project, I won’t be creating a tangible wall sculpture, but I intend to use similar methods to these two projects to create similar on-screen effects. I want the particles of my project to trail behind the motion of the brightness on the camera like HEXI, and I also want the particles to be colored based on the color of the camera like the tile shadows in Wooden Mirror.

Briley Newell – Final Project Proposal

For my final project, I want to write a program that behaves similarly to my generative portrait but is based on live video input, like the text rain assignment. My program will generate some sort of shape or line to represent the color at a given pixel, but the density of the shapes will depend on the brightness of the surrounding area, so that if a person runs the program while standing against a white wall, particles will cluster around the person's face and disperse around the background, as well as move when the person does. I plan on using code similar to that of the text rain assignment to access the brightness of each pixel, which I will edit to take the average brightness of a certain area around the given pixel (10×10 pixels?). I also intend to use part of the code that made pixels that "got stuck" in the wrong place move toward the right place. When I originally wrote my code for the generative portrait, I wanted to use probability to affect the density of particles, but I couldn't remember how, so this time, because I have more time to work on it, I intend to get help making the probability (and therefore the density) of particles dependent on the brightness of the area. Finally, I want the particles to only appear for a short amount of time and then fade, but I'm not sure how to go about that (perhaps by loading them into an array and then pushing them out like a queue).
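(As a rough sketch of those last two ideas, in plain JavaScript with hypothetical names, not part of the actual proposal code: a spawn probability that scales with darkness, and an array used as a queue so the oldest particles fade and drop off first.)

```javascript
function spawnChance(bright) {
  // bright in 0..255; darker areas get a higher chance of spawning a particle
  return 1 - bright / 255;
}

function updateParticles(particles, maxAge) {
  // age every particle and fade its opacity toward zero
  for (var i = 0; i < particles.length; i++) {
    particles[i].age++;
    particles[i].opacity = 255 * (1 - particles[i].age / maxAge);
  }
  // the oldest particles sit at the front of the array, so expired ones
  // can be shifted off the front like a queue
  while (particles.length > 0 && particles[0].age >= maxAge) {
    particles.shift();
  }
  return particles;
}
```

Each frame, a new particle would be pushed when something like random(1) < spawnChance(areaBrightness), and updateParticles would run once per frame to age, fade, and retire the existing ones.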


Briley Newell – Looking-Outwards-10

Yael Braha’s Tree of Changes is a life-size tree that is lit from the inside and changes colors and emits sound based on the thoughts and sentiments of the people nearby. I can’t figure out exactly what input the tree responds to, but it seems to register sound. This project stood out to me because it is a public installation of art meant to be enjoyed by everyone and to inspire change, rather than a piece of art in a museum meant to be admired.

Yael Braha is a freelance artist from Italy who dabbles in design, filmmaking, and art. She works a lot in interactive media, including pieces like the Tree of Changes. She studied graphic design at the European Institute of Design in Rome, and fine arts in cinema at San Francisco State University.


Briley Newell (Project-10)

I immediately thought of feeding a fish in a fish bowl for this one.
I ran into problems because I tried to draw tiles in the background, but they made the whole program run super slow. Other than that, I really enjoyed problem solving with this one. I also finally got opacity to work!

Briley Newell (Project-09)

I really liked the golden spiral thing with the shapes, and even in my deliverable I was experimenting with the way the colors might be changed, as well as the size and weight according to the step size, so I decided to go further with that. I discovered some lovely curves, and I don’t 100% understand how each factor affects them, but I get most of it. I can’t for the life of me figure out how to get a spiral to start in the other direction.


Briley Newell – Looking Outwards-08

I really like this project because, although 3D printing is familiar to me, it is still outside of what I’ve done with a 3D printer, since it uses randomness generated by an algorithm. The project was developed cooperatively by MIT and Harvard, and it explores new possibilities in design and architecture, as well as the novel use of glass as a material that can be 3D printed.

Sydney’s Looking Outwards:

Original work: 3-D Glass Printing

Briley Newell – Looking Outwards-07

Wes Grubbs founded Pitch Interactive, a data visualization studio that mixes creativity with programming skills to produce data representations that are visually interesting and accurate. Grubbs values visual metaphors that help to convey the complex information his data visualization projects aim to represent. Grubbs studied Information Systems and International Economics at the University of Arkansas. His work focuses on visually representing and revealing the patterns in human behavior, with an interest in how human action and interaction affect the surrounding environment. In his company and in his own life, Grubbs strives to connect the creative and technical sides of things. He also works in several different tech “mediums”, from illustrations to apps. As someone who appreciates the combination of technology and the arts, I usually find data visualization pretty fascinating, and Grubbs’s work is no exception. I really appreciate his focus on representing data in a way that reflects the content – the “visual metaphors”, so to speak.

Pitch Interactive’s Website

Briley Newell (Project-07)

Originally I planned to do a scrolling kind of landscape, but when I was reading through the description of the project, the mention of a glass-bottom boat made me think of the view out of a stationary window, and how that changes just as randomly as a moving one. I wanted to do bugs on a window in summer.
Admittedly, I was not able to spend as much time as I would have liked on this project, but if I had been able to do more, I would have liked to use sin or cos patterns to dictate the motion of the ladybugs and bees, as well as create rare falling leaves from the tree, a more detailed background, and pulsing rays around the sun. Overall, though, I definitely understand how to build and use objects much better now.