Category: FinalProject

Final Project – GroupAlytics – CSB

GroupAlytics solves the problem of acquiring useful analytics about Facebook Groups from Facebook’s unwieldy Graph API. It also expands on my ongoing durational work Productive Substitute, providing another facet to the documentation I have been compiling.

Productive Substitute:

GroupAlytics is a pipeline that pulls every post from the entire life span of any Facebook Group out of the Graph API and visualizes the users’ engagement over time in a 3D scatterplot. A user of the service can input any Facebook Group ID (found in the URL of the group) and receive useful and often enlightening analytics as well as a visualization. The order of the pipeline is Graph API -> Python -> MATLAB. I’m still working on developing and perfecting it so it is more automatic and so that the end visualization is also web-based (possibly using D3). It is interesting to compare the user engagement over time by author ID with the power law of participation, shown below.
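As a quick illustration of that comparison, the per-author post counts can be tallied and ranked in a few lines of Python; a power-law-like group shows a steep head and a long tail of one-time posters (a sketch with made-up author IDs, not real group data):

```python
import collections

# hypothetical author IDs pulled from a group's feed (illustrative only)
author_ids = [101, 101, 101, 101, 102, 102, 103, 104, 104, 105]

# tally posts per author and rank them by activity
ranked = collections.Counter(author_ids).most_common()
print(ranked)  # a few heavy posters, then a long tail

top_share = ranked[0][1] / float(len(author_ids))
print("top poster wrote %.0f%% of posts" % (top_share * 100))
```
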

Power Law of Participation:

Another 2D version of the visualization:

*Thanks to Maddy Varner for helping me figure out the back-end, python side of things*

Python script that scrapes all info from the Facebook’s Graph API:

import json, collections, facebook, urllib2

'''replace token with a fresh token before running'''
oauth_access_token = "YOUR_ACCESS_TOKEN"
graph = facebook.GraphAPI(oauth_access_token)

def get_memes():
    '''walk the group's feed, one page at a time'''
    group = graph.get_object("231149907051192/feed", limit=999)
    current = group
    iteration = 0
    while current != False:
        saveName = "%d.json" % iteration
        download_memes(current, saveName)
        if "paging" in current and "next" in current["paging"]:
            url = current["paging"]["next"]
            page = urllib2.urlopen(url)
            next = json.load(page)
        else:
            next = False
        iteration += 1
        current = next

def download_memes(data, saveName):
    print type(data)
    # ok now expand all comment/like data ^______^
    for post in xrange(len(data["data"])):
        datum = data["data"][post]
        # it's a dictionary, so make sure the key exists before accessing it
        if "likes" in datum:
            if "next" in datum["likes"]["paging"]:
                excess = slurp(datum["likes"]["paging"]["next"])
                datum["likes"]["data"].extend(excess)
        if "comments" in datum:
            if "next" in datum["comments"]["paging"]:
                excess = slurp(datum["comments"]["paging"]["next"])
                datum["comments"]["data"].extend(excess)
    with open(saveName, 'w') as save:
        save.write(json.dumps(data, sort_keys=True, indent=2))

# this crawls through the "next" links and then merges all the data
def slurp(next):
    response = urllib2.urlopen(next)
    data = json.load(response)
    laterData = []
    currentData = []
    if "data" in data:
        currentData = data["data"]
    if "paging" in data and "next" in data["paging"]:
        laterData = slurp(data["paging"]["next"])
    return currentData + laterData

def combine_memes():
    with open('0.json') as json_data:
        root = json.load(json_data)
    for i in xrange(1, 17):
        name = "%d.json" % i
        with open(name) as lil_data:
            data = json.load(lil_data)
        root["data"].extend(data["data"])
    with open("internet_lonely_ULTIMATE.json", 'w') as save:
        save.write(json.dumps(root, sort_keys=True, indent=2))

Python script that takes the JSON file from Facebook’s Graph API and outputs both some analytics in text form and a spreadsheet of the parameterized data:

import json
import collections
import xlwt

# open the JSON file safely
with open('internet_lonely_ULTIMATE.json') as data_file:
    data = json.load(data_file)

postCount = len(data["data"])
likeCount = 0
mostLiked = 0
mostLikedInfo = None
commentCount = 0
dates = []
posters = []
engagement = []

# iterate through each post in the dataset
for post in xrange(len(data["data"])):
    datum = data["data"][post]
    # it's a dictionary, so make sure the key exists before accessing it
    if "likes" in datum:
        likeCount += len(datum["likes"]["data"])
        if mostLiked < len(datum["likes"]["data"]):
            mostLiked = len(datum["likes"]["data"])
            mostLikedInfo = datum
    if "comments" in datum:
        commentCount += len(datum["comments"]["data"])
    if "created_time" in datum:
        time_made = str(datum["created_time"])[:10]
        dates.append(time_made)
    if "from" in datum:
        posters.append(datum["from"]["name"])
    if "from" in datum and "created_time" in datum and "id" in datum:
        if "likes" in datum:
            yay = len(datum["likes"]["data"])
        else:
            yay = 0
        time = datum["updated_time"]
        year = int(time[:4])
        month = int(time[5:7])
        day = int(time[8:10])
        hour = int(time[11:13])
        minute = int(time[14:16])
        engagement.append([month, day, year, hour, minute,
                           datum["from"]["name"], yay,
                           int(datum["id"][16:]), int(datum["from"]["id"])])

# write the parameterized data to a spreadsheet
book = xlwt.Workbook(encoding="utf-8")
sheet1 = book.add_sheet("Sheet 1")
headers = ["Month", "Day", "Year", "Hour", "Minute",
           "Posted By", "Likes", "Post ID", "Author ID"]
for col in xrange(len(headers)):
    sheet1.write(0, col, headers[col])
for x in xrange(len(engagement)):
    for col in xrange(9):
        sheet1.write(x + 1, col, engagement[x][col])
book.save("test.xls")

# aggregate total comments, likes, and posts for the JSON file
print commentCount, "comments,", likeCount, "likes and", postCount, "posts ALL"
dates = sorted(dates)
date_counter = collections.Counter(dates)
poster_counter = collections.Counter(posters)

# print counts for amount of posts
print len(poster_counter), "people have ever posted to the group!"

# print the top posters in descending order
print "Top Posters:"
for name, count in poster_counter.most_common(15):
    print ' %s: %d' % (name, count)
print

# print the dates when the most posts were made, in descending order
print "Most Popular Days to Post:"
for day, count in date_counter.most_common(3):
    print ' %s: %d' % (day, count)
print

# most liked post
print mostLikedInfo["from"]["name"], "with", mostLiked, "likes, and here's the link:", mostLikedInfo["actions"]

MATLAB code that processes the data from the spreadsheet generated in Python, to be visualized in the 3D scatterplot:


for i = 1:length(struct_psbetter)

    a = struct_psbetter(i).CreatedTime;
    CreatedTime_var(i,:) = cellstr(a);

    b = struct_psbetter(i).PostedBy;
    PostedBy_var(i,:) = cellstr(b);

    c = struct_psbetter(i).Likes;
    Likes_var(i,:) = c;
end

% duplicate the struct and turn it into a cell array
struct_psbetterCopy = struct_psbetter;
cells_psbetter = struct2cell(struct_psbetterCopy);

% change the 3 fields to doubles/matrices

like_counts_cell = cells_psbetter(3,:);
like_counts_mat = cell2mat(like_counts_cell);
like_counts_double = double(like_counts_mat);

% note: double() on a char array yields character codes, not numeric values,
% which is why the two string fields below come out as ints
CreatedTime_cell = cells_psbetter(1,:);
CreatedTime_mat = cell2mat(CreatedTime_cell);
CreatedTime_double = double(CreatedTime_mat);

PostedBy_cell = cells_psbetter(2,:);
PostedBy_mat = cell2mat(PostedBy_cell);
PostedBy_double = double(PostedBy_mat);

sparse_mat_CreatedTime = sparse(CreatedTime_double);
sparse_mat_PostedBy = sparse(PostedBy_double);
sparse_mat_like_counts = sparse(like_counts_double);

for i = 1:length(day)
    timestamp(i) = datenum(year(i),month(i),day(i),hour(i),minute(i),second(i));
    timestamp(i) = timestamp(i) * .005;
end

MATLAB code to automatically generate 3D scatterplot of analytics from parameterized JSON data from Facebook’s Graph API:

function createfigure(timestamp, authorid, likes)
% TIMESTAMP1: scatter3 x
% AUTHORID1: scatter3 y
% LIKES1: scatter3 z
% S1: scatter3 s
% AUTHORID2: scatter3 c

% Auto-generated by MATLAB on 03-Dec-2014 17:49:19

% Create figure
figure1 = figure;

% Create axes
axes1 = axes('Parent',figure1,'CLim',[544321851 922337203685478],...
'FontName','Caviar Dreams');
view(axes1,[50.5 38]);

% Create zlabel
zlabel('Likes (per post)','FontSize',15.4);

% Create ylabel
ylabel('Author (ID)','FontSize',15.4);

% Create xlabel
xlabel('Time (1 Year)','FontSize',15.4);

% Create title
% Create title
title({' Facebook Group Analytics - User Engagement Scatterplot'});

% Create scatter3
scatter3(timestamp, authorid, likes);

Final Project

For my final project, I created a generative text program that recognizes a face when a person approaches the computer, then “interacts” with them and dispenses a snippet of generated prose.
My original intent was to create a very complex program that would take data from your face and assign elements of your face to specific strings of words, but I ended up straying from that entirely, and the result turned into more of a tool, a way for fresher ideas to be conjured up. The words within the string sets all relate to various other ideas I have been thinking of lately, or things I have been writing. Shuffling these words up allowed me to view them differently, and while this is a simpler program than I planned, I am still pleased.
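The core of the sketch is just that template: pick one word from each part-of-speech array and concatenate. The same idea reads clearly in a few lines of Python (word lists abbreviated from the sketch's arrays):

```python
import random

# abbreviated versions of the sketch's word arrays
nounbegin = ["morning", "we", "hiss", "the calling"]
adverb = ["slightly", "oddly", "closely"]
verb = ["return", "pour", "devour"]
other = ["above", "through", "into"]
adj = ["lucid", "slippery", "uneven"]
nounnon = ["vulnerability", "windowsill", "fern"]

# one line of generated prose: one random pick per word class
prose = " ".join(random.choice(words) for words in
                 [nounbegin, adverb, verb, other, adj, nounnon])
print(prose)
```
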

import oscP5.*;
OscP5 oscP5;

// num faces found
int found;
int stage = 0;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

 PFont font;

String [] nounbegin = {"morning","we"," you", "i", "you", "we", "i", "stop", "the calling", "hiss", "keep", "ever", "never", "this cannot", "this", "please", "convenience", "bedding", "resting place", "womb", "part", "projections", "interlude", "styrofoam", "night", "lives", "invisible", "fingernails", "fruit", "return", "wet", "vulnerable", "metal", "unknown", "physical", "perpetual", "away", "way", "immesurable", "not found", "always", "exist", "exits", "exit", "think", "distance", "cold", " tepid", "lukewarm", "south", "equation", "difficulty", "tough", "message", "air", "humidity", "touch", "deathwish", "puddle", "sunspots", "wheat", "ration", "disbelief", "never", "you", "sleeping", "number", "mapping", "pressed", "i", "you", "a", "it", "i'll", "paths", "if", "light", "a final", "skin", "they", "curtain", "this horizon", "holes", "a patch", "the wind", "a carpet", "seam", "blaze", "your", "dog", "this", "pitter", "the cusp", "do not", "hue", "eye", "end", "one of", "a space", "afterglow"};
String [] nounnon = {"3:", "vulnerability", "principle,", " forever", "slipped", "credit", "line", "people", "tucked", "away", "find", "36", "wood", "choice", "dwelling", "mama", "knee", "spillage", "slope", "shirts", "seat", "trees", "blanket", "this", "nothing", "gold", "excrement", "windowsill", "bread", "hair", "exchange", "fern", "beings", "blue", "touch", "face", "mechanisms", "me", "mouth", "you", "2008", "core", "room", "awning", "sunspots", "flesh", "streams", "water", "feeling", "it", "text"};
String [] adj = {"zero", "oddly", "elongated", "lucid", "crippled", "slippery", "black", "uneven", "white", "pink", "quaint", "five", "dry", "bright", "off", "crumbly", "soft", "furrowed", "spread", "burrowed", "limp", "jaded", "tumultuous", "tiny", "open", "curled", "multiple", "raw", "still", "strange", "circular", "leaned", "sleeping", "lathed", "blurry", "translucent", "plastic", "babbling", "bug-like", "no", "bleached", "chewed", "closer", "slight", "sun-kissed", "shadowed", "dead", "grassy", "giddy", "bent", "sloppy", "flickering"};
String [] verb = {"return","replace", "gift", "pour", "blame", "trap", "solidify", "obliterate", "concern", "sink", "got", "steaming", "sleepwalking", "breaking", "lengthen", "devour", "swallow", "piece", "fault", "kill", "destroy", "choose", "rain", "shift", "remember", "admit", "lean", "bother", "glint", "glints", "notice","v.s.", "sat", "sentimentalize", "drag", "agree", "sang", "found", "hum"," muddle", "bore", "touch", "go", "repeat", "bear", "is", "asleep", "looming", "curves", "listening", "want", "emulate", "press", "lag", "speak", "know"};
String [] adverb = {"slightly","next", "oddly", "mid","before,", "closely", "guilty", "vehemently", "unexplainable", "after", "far", "fully", "during", "however", "just", "only", "once", "later", "never", "not", "now","right", "from", "less", "watery", "kindly", "numb", "quite", "soon", "so", "subtly", "there", "then", "too", "twice", "tensely", "up", "very", "when", "where", "well", "yet", "over", "tenderly", "softly", "painfully", "slowly", "always", "dewy", "fluorescently", "never", "thickening", "blindly", "deftly", "exactly", "evenly", "finally", "madly" };
String [] other = {"his","above", "about", "during", "is", "her", "amongst", "along", "before", "through", "to", "and", "if", "this", "on", "at", "after", "but", "how", "into", "the", "as", "in"};
String [] punc = {",", ".", "/", "--", ";", ":", ""};
String [] punc2 = {",", "--", ";", ":", " ", "", ""};

String prose = "";

int startTime ;
int currentTime ;

void setup() {

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
  size (1400,1100);
  background (180, 204, 188);
  smooth ();
  fill (255);
  textSize (40);
  textAlign ( CENTER );
  text ("please, come closer", width/2, height/4);
}

void draw() {  
  currentTime = millis();
  if (found > 0) {
    if (stage == 0) { 
      stage = (stage + 1) % 4;
      startTime = millis();
    }
    switch (stage) {
      case 1:
        background (180, 204, 188);
        text ("let me get a good look at you", width/2, height/4); // this is initiated after a face is detected
        // 10 sec delay
        break;
      case 2:    
        // after the face is detected and the text is shown, a circle appears on screen 
        // 5 sec delay
        background (180, 204, 188);
        noFill ();
        stroke (255);
        strokeWeight (10);
        ellipse (width/2, height/4, 200, 200);
        textSize (55);
        if ( ((currentTime - startTime) / 1000) >= 1) {
          prose = nounbegin[int(random(0, nounbegin.length))] + " " + punc2[int(random(0, punc2.length))] + " " + adverb[int(random(0, adverb.length))] + " " + verb[int(random(0, verb.length))] + " " + other[int(random(0, other.length))] + " " + adj[int(random(0, adj.length))] + " " + nounnon[int(random(0, nounnon.length))] + " " + punc[int(random(0, punc.length))] + "\n" +
                  (adverb[int(random(0, adverb.length))] + " " + verb[int(random(0, verb.length))]);
        }
        break;
      case 3: 
        // 20 sec delay     
        background (180, 204, 188);
        fill (255);
        text (prose, width/2, height/4);
        break;
    }
  } else if (stage == 0) {
    background (180, 204, 188); 
    text ("please, come closer", width/2, height/4); 
  }
  if ( ((currentTime - startTime) / 1000) >= 7) {
    stage = (stage + 1) % 4;
    startTime = millis();
  }
}


public void found(int i) {
  println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  /* print the address pattern and the typetag of the received OscMessage */
  println("#received an osc message");
  println("Complete message: " + m);
  println(" addrpattern: " + m.addrPattern());
  println(" typetag: " + m.typetag());
  println(" arguments: " + m.arguments()[0].toString());
  if (m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}

Final Project

For my final project I created an interactive piece of clothing. There are many garments that work with the LilyPad and lights, but few that use outside materials beyond the lights themselves. Using three servo motors, I created a garment with fabric flowers that bloom when pressure is detected. I originally wanted to use a LilyPad for the project, but I couldn’t figure out how to make it work, so I used an Arduino instead (which sits on the ground and is not shown in the video).

What I was trying to express was the feminine shape of a flower blooming on a dress, but the spring and the mechanics around it add the gross look of an alien being born from the dress; the more I look at it, the more I like it. I think this paradox between the beauty and the gross alien look adds more to the work than it had without it.





 Fritzing Diagram




#include <Servo.h>

Servo servoMotor;
int servoPin = 3;
Servo servo_0;
Servo servo_1;

//int pos = 0;

//int sensePin = 2;    // the pin the FSR is attached to
//int motorPin = 9;

void setup() {
  servoMotor.attach(servoPin);
}

void loop() {
  float value = millis();
  int analogValue = analogRead(A0);  // read the pressure sensor
  int servoAngle = map(analogValue, 0, 1023, 94, 180);
  //servoMotor.writeMicroseconds(1700);  // Counter clockwise
  //delay(2000);                      // Wait 2 seconds
  //servoMotor.writeMicroseconds(1300);  // Clockwise
  //servoMotor.writeMicroseconds(1500);  // Stop
  if (value >= 3500) {  // was == 3500; millis() rarely lands on an exact value
    for (int i = 0; i <= 170; i++) {
      servoMotor.write(i);  // (reconstructed) sweep the servo so the flower blooms
      delay(15);
    }
  }
}

Final Project- Nathalie

So for my final project, I’ve created an audiovisual actualizer.

(Seriously, watch the whole video; about 2/3 of the way through, during the breakdown, it gets really cool. And I just realized I was talking in the video; I forgot about that. My internet is a mess right now due to intermittent power outages; I’ll upload a different video that doesn’t have me gushing over my own project in the middle of it once it’s working again.)

How it works: music goes in through the audio jack; any music you want, off your laptop, your phone, an mp3 player, anything. It gets fed through an Arduino FFT library, which analyzes the audio’s frequencies. The results of that analysis are then mapped, in 6 sections on the 24-pixel LED ring, to various colors that change dramatically with the frequencies.
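The grouping step can be sketched independently of the hardware: average a run of FFT bins into one level per ring section, then drive each section's color from its level. A Python sketch of the idea (the 6-section split matches the code below; the function name and the fake spectrum values are mine):

```python
def group_levels(fft_log_out, n_groups=6, bins_per_group=8, start=3):
    """Average each run of FFT bins into one brightness level per section.
    start=3 skips the first few bins, which barely change."""
    levels = []
    for g in range(n_groups):
        lo = start + g * bins_per_group
        chunk = fft_log_out[lo:lo + bins_per_group]
        levels.append(sum(chunk) / len(chunk))
    return levels

# a fake 51-entry log-magnitude frame: 3 unused bins, then 6 groups of 8
spectrum = [0]*3 + [60]*8 + [30]*8 + [20]*8 + [15]*8 + [25]*8 + [10]*8
print(group_levels(spectrum))  # one level per LED-ring section
```
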


My inspiration for this was that, for a lot of my recent artistic career, I’ve been looking for a way to visualize synesthesia in a way that’s meaningful and understandable to someone who isn’t perpetually inside my head with it. Unfortunately, I don’t think I’ve succeeded; the only real way to make something look like what music actually looks like to me would probably be a tone/pitch analysis instead. That said, it still looks pretty darn cool. I think I learned more working on this project than I have all year in any class; I learned how to actually program, for one, without people sitting over my shoulder and telling me what to type next, or following a pattern I found on the internet. I learned a whole lot about audio analysis, and inadvertently about a decent portion of the math behind it (although I think I’ve forgotten most of that now), and I learned a lot about patience and improvisation when things don’t appear to be going the right way.

I also learned that sometimes really strange things happen with code that make your projects work even when nothing is plugged into them. But I’ll figure out why it does that eventually, I think.


(sorry the diagram is a little funky; Fritzing doesn’t have any neopixel stuff in its libraries so I had to photoshop it in afterwards)

//example sketch modified from FFT library's examples
//example: fft_adc_serial
//butchered and frankensteined together 
//by yours truly

#define LOG_OUT 1 // use the log output function
#define FFT_N 64 // set to a 64 point fft

#include <FFT.h> // include the FFT library
#include <FastLED.h> // include the FastLED library

#define LED_PIN     6 //pin attached to Neopixels
#define NUM_LEDS    24 //number of lights per Neopixel
#define LED_TYPE    WS2812 //type of LED
#define COLOR_ORDER RGB //these are pretty obv
CRGB leds[NUM_LEDS]; // the pixel array FastLED writes colors into

#define UPDATES_PER_SECOND 100  //how many times the LEDs refresh color per second

void setup() {
  Serial.begin(115200); // so the raw FFT bins can be watched over serial
  FastLED.addLeds<NEOPIXEL, LED_PIN>(leds, NUM_LEDS); //this is copied from FastLED's example codes
  //it's in all of them so I figure it's necessary. It looks necessary. 
  //I will readily admit my lack of comprehension to retain honesty.
  TIMSK0 = 0; // turn off timer0 for lower jitter
  ADCSRA = 0xe5; // set the adc to free running mode
  ADMUX = 0x40; // use adc0
  DIDR0 = 0x01; // turn off the digital input for adc0
}

void loop() {

  cli();  // UDRE interrupt slows this way down on arduino1.
  for (int i = 0 ; i < 128 ; i += 2) { // save 128 samples (64 real + 64 imaginary)
    while(!(ADCSRA & 0x10)); // wait for adc to be ready
    ADCSRA = 0xf5; // restart adc
    byte m = ADCL; // fetch adc data
    byte j = ADCH;
    int k = (j << 8) | m; // form into an int
    k -= 0x0200; // form into a signed int
    k <<= 6; // form into a 16b signed int
    fft_input[i] = k; // put real data into even bins
    fft_input[i+1] = 0; // set odd bins to 0
  }
  fft_window(); // window the data for better frequency response
  fft_reorder(); // reorder the data before doing the fft
  fft_run(); // process the data in the fft
  fft_mag_log(); // take the log-magnitude output of the fft
  sei(); // re-enable interrupts
  int numLedsToLight = 24;


  //using: start i = 3 because the first 3 bins pretty much never change and as such are unusable
  //groups of 4; 3456, 78910, 11-14, 15-18, 19-22, 23-26
  //only going up by 2s so 3-10, 11-18, 19-26, 27-34, 35-42, 43-50
  //3-10 blue group: high bound 87, low bound 44
  //11-18 teal group: high bound 52, low bound 8/19
  //19-26 pink group: high bound 42, low bound 0/8
  //27-34 purple group: high bound 38, low bound 0
  //35-42 green group: high bound 46, low bound 0
  //43-50 yellow group: high bound 37, low bound 0

  //blue, LEDs 0-3; bass
  for (int i = 3 ; i < 11 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 70){
      int blueR = 0;
      int blueG = map(val, 30,90, 0,5);
      int blueB = map(val, 30,90, 0,45);
      leds[i/2].setRGB(blueR, blueG, blueB);
    }
    else if (val > 50){
      int blueR = 20;
      int blueG = map(val, 30,90, 0,3);
      int blueB = map(val, 30,90, 0,27);
      leds[i/2].setRGB(blueR, blueG, blueB);
    }
    else {
      int blueR = 35;
      int blueG = map(val, 30,90, 0,0);
      int blueB = map(val, 30,90, 0,15);
      leds[i/2].setRGB(blueR, blueG, blueB);
    }
  }

  //cyan, LEDs 4-7; lower medium
  for (int i = 11 ; i < 19 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 30){
      int cyanR = 0;
      int cyanG = map(val, 30,90, 0,25);
      int cyanB = map(val, 30,90, 0,25);
      leds[i/2].setRGB(cyanR, cyanG, cyanB);
    }
    else if (val > 20){
      int cyanR = 17;
      int cyanG = map(val, 30,90, 0,20);
      int cyanB = map(val, 30,90, 0,13);
      leds[i/2].setRGB(cyanR, cyanG, cyanB);
    }
    else {
      int cyanR = 30;
      int cyanG = map(val, 30,90, 0,20);
      int cyanB = map(val, 30,90, 0,0);
      leds[i/2].setRGB(cyanR, cyanG, cyanB);
    }
  }
  //pink, LEDs 8-11; medium
  for (int i = 19 ; i < 27 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 30){
      int pinkR = map(val, 30,90, 0,45);
      int pinkG = 0;
      int pinkB = map(val, 30,90, 0,5);
      leds[i/2].setRGB(pinkR, pinkG, pinkB); // args were (R, B, G); fixed to (R, G, B)
    }
    else if (val > 20) {
      int pinkR = map(val, 30,90, 0,27);
      int pinkG = 20;
      int pinkB = map(val, 30,90, 0,3);
      leds[i/2].setRGB(pinkR, pinkG, pinkB);
    }
    else {
      int pinkR = map(val, 30,90, 0,15);
      int pinkG = 35;
      int pinkB = map(val, 30,90, 0,0);
      leds[i/2].setRGB(pinkR, pinkG, pinkB);
    }
  }

  //purple, LEDs 12-16; higher medium
  for (int i = 27 ; i < 35 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 30){
      int purpleR = map(val, 30,90, 0,25);
      int purpleG = 0;
      int purpleB = map(val, 30,90, 0,25);
      leds[i/2].setRGB(purpleR, purpleG, purpleB);
    }
    else if (val > 20) {
      int purpleR = map(val, 30,90, 0,13);
      int purpleG = 17;
      int purpleB = map(val, 30,90, 0,20);
      leds[i/2].setRGB(purpleR, purpleG, purpleB);
    }
    else {
      int purpleR = map(val, 30,90, 0,0);
      int purpleG = 30;
      int purpleB = map(val, 30,90, 0,20);
      leds[i/2].setRGB(purpleR, purpleG, purpleB);
    }
  }

  //green, LEDs 17-20; high, also one of these is the bass drum and I don't know why
  for (int i = 35 ; i < 43 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 33){
      int greenR = map(val, 30,90, 0,5);
      int greenG = map(val, 30,90, 0,45);
      int greenB = 0;
      leds[i/2].setRGB(greenR, greenG, greenB);
    }
    else if (val > 22) {
      int greenR = map(val, 30,90, 0,3);
      int greenG = map(val, 30,90, 0,27);
      int greenB = 20;
      leds[i/2].setRGB(greenR, greenG, greenB);
    }
    else {
      int greenR = map(val, 30,90, 0,0);
      int greenG = map(val, 30,90, 0,15);
      int greenB = 35;
      leds[i/2].setRGB(greenR, greenG, greenB);
    }
  }

  //yellow, LEDs 20-23; highest
  for (int i = 43 ; i < 51 ; i = i + 2) {
    int val = fft_log_out[i];
    if (val > 25){
      int yellowR = map(val, 30,90, 0,25);
      int yellowG = map(val, 30,90, 0,25);
      int yellowB = 0;
      leds[i/2].setRGB(yellowR, yellowG, yellowB);
    }
    else if (val > 15){
      int yellowR = map(val, 30,90, 0,13);
      int yellowG = map(val, 30,90, 0,20);
      int yellowB = 17;
      leds[i/2].setRGB(yellowR, yellowG, yellowB);
    }
    else {
      int yellowR = map(val, 30,90, 0,0);
      int yellowG = map(val, 30,90, 0,20);
      int yellowB = 30;
      leds[i/2].setRGB(yellowR, yellowG, yellowB);
    }
  }

  FastLED.show(); // push the computed colors to the ring (reconstructed; the LEDs never update without it)

  for (byte i = 0 ; i < FFT_N/2 ; i++) { 
    Serial.println(fft_log_out[i]); // send out the data to the serial port so I can see what it's doing
  }
}


Final Project- Follower Robot


My project is a robotic 1/10th-scale car that follows specific colors and items. For this project, I purchased a Superman cape with hopes of making this into a cool interactive project that could be put out in a public space, where people can try running away from the robot. However, due to technical difficulties and many issues, I’ve only been able to get the project working to a certain degree.





The Technology

My project consists of three main parts: an Arduino to control everything, an RC car base and its electronics (motor, servo, etc.), and a Pixy camera (a specialized camera developed by CMULabs and built for Arduino).
The Pixy cam follows the red cape, and the Arduino estimates distance based on the apparent size of the cape. This is slightly problematic, as the cape flaps around and changes size when the person wearing it runs faster.

Below is the Fritzing diagram for the breadboard controlling my robot. Signals are sent from the Arduino to the servo for steering and to the ESC on my car for motor speed. The servo is powered by the ESC rather than the Arduino because the ESC outputs more voltage. The Arduino receives power from a standard power source, but the ESC on the robot is powered by a two-cell LiPo battery. The two buttons control the base size of the cape (resets the distance) and the power (pressing it starts the robot; pressing again pauses it, stopping all servo and motor use).
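The distance logic reduces to a dead-band controller on blob size. A Python sketch of the idea (the function name and step size are mine; the 8000 to 11000 band matches the sizeMIN/sizeMAX defaults in the code below, and 90 is the neutral servo value):

```python
def follow_speed(blob_size, size_min=8000, size_max=11000,
                 cruise=90, step=3):
    """Map the cape's apparent size to a motor command (90 = neutral)."""
    if blob_size < size_min:
        return cruise + step   # cape looks small -> it is far away -> speed up
    if blob_size > size_max:
        return cruise - step   # cape looks large -> it is close -> slow down
    return cruise              # inside the dead band -> hold speed

print(follow_speed(5000), follow_speed(9000), follow_speed(12000))
```

A flapping cape makes `blob_size` jumpy, which is exactly why the sketch averages several recent sizes before changing speed.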


#include <SPI.h>
#include <Pixy.h>
#include <Servo.h>

// Call Pixy and Servo
Pixy pixy;

//Initialize Variables and objects
Servo motor; 
Servo servo; 
int width = 319;
int height = 199;
int speed = 90;
int dir = 90;
int sizeMIN = 8000;
int sizeMAX = 11000;
int sizeSetPin = 3;
int stopPin = 2;
boolean STOP = true;
int lastSize;
int refresh = 5;
int refreshSpeed = 50;
int sizes[50];

void setup() {
  Serial.begin(9600);
  Serial.println("Starting program...\n");
  Serial.print("Initializing Pixy...\t\t");
  pixy.init();
  Serial.print("Mounting Servos...\t\t");
  Serial.print("Calibrating ESC...\t\t");
  Serial.print("Setting Servo... \t\t");
  Serial.println("\n\n Standing By...");
  pinMode(sizeSetPin, INPUT);
  pinMode(stopPin, INPUT);
}

void loop() {
  if (digitalRead(sizeSetPin) == LOW) {
    sizeMIN = lastSize - 1500;
    sizeMAX = lastSize + 1500;
    Serial.print("New ideal size: ");
    Serial.print(sizeMIN);
    Serial.print(" - ");
    Serial.println(sizeMAX);
  }
  if (digitalRead(stopPin) == LOW) {
    if (!STOP) {
      STOP = true;
      speed = 90;
      Serial.println("Program Halted.");
    }
    else {
      STOP = false;
      Serial.println("Program Resumed.");
    }
  }

  static int i = 0;
  int j;
  uint16_t blocks;
  char buf[32]; 
  int size = 500;
  int largest = 0;
  blocks = pixy.getBlocks();
  if (blocks) {
    if (i % refresh == 0) {
      // find the largest detected block (the cape)
      for (j = 0; j < blocks; j++) {
        int cur_size = pixy.blocks[j].width * pixy.blocks[j].height;
        if (cur_size > size) {
          size = cur_size;
          largest = j;
        }
      }
      if (size > 500) {
        lastSize = size;
        //       Serial.print("Size (");
        //       Serial.print(size);
        //       Serial.print(")\t\t");

        if (i % refreshSpeed == 0) {
          int sum = 0; 
          int aveSize;

          // running average of the recorded sizes
          for (int k = 0; k < 50; k++) {
            if (sizes[k] != 0) {
              sum = sum + sizes[k];
              sum = sum / 2;
            }
          }
          aveSize = sum;

          // change speed
          if (!STOP) {
            // checkSpeed(aveSize);
            speed = 107;
          }
          resetArray();
        }

        // change direction based on the cape's x position (reconstructed call)
        checkDir(pixy.blocks[largest].x);

        // record this size in the first empty slot
        for (int k = 0; k < 50; k++) {
          if (sizes[k] == 0) {
            sizes[k] = size;
            break;
          }
        }
      }
    }
    if (!STOP) {
      // write servo and motor
      servo.write(dir);
      motor.write(speed);
    }
    else {
      speed = 90;
      dir = 90;
    }
  }
  i++;
}

void resetArray() {
  for (int i = 0; i < 50; i++) {
    sizes[i] = 0;
  }
}

// the branch bodies here were lost; the calls below are a reasonable reconstruction
void checkSpeed(int size) {
  if (size <= sizeMAX && size >= sizeMIN) {
    // cape is at the ideal distance: hold speed
  }
  if (size <= sizeMIN) {
    increaseSpeed(); // cape looks small, so it is far away
  }
  if (size >= sizeMAX) {
    reduceSpeed(); // cape looks large, so it is close
  }
}

void reduceSpeed() {
  if (speed > 90) {
    speed -= 3;
    if (speed < 90) {
      speed = 90;
    }
  }
  else if (speed >= 40) {
    speed -= 3;
  }
}

void increaseSpeed() {
  if (speed >= 90 && speed <= 94) {
    speed = 95;
  }
  else if (speed >= 115 && speed <= 120) { // MAX 140
    speed += 1;
  }
  else if (speed < 90) {
    speed = 90;
  }
  else if (speed < 115) { 
    speed += 2;
  }
}

void checkDir(int x) {
  if (x < 165) {
    dir = map(x, 20, 165, 0, 90);
    dir = constrain(dir, 0, 90);
  }
  else if (x > 235) {
    dir = map(x, 235, 320, 90, 180);
    dir = constrain(dir, 90, 180);
  }
  else {
    dir = 90; 
  }
}

John Choi – Ambassador Robot 001, Halley


Ever since I set foot at Carnegie Mellon, I’ve been playing around with the idea of building a mid-large humanoid robot, just for kicks.  Well, not actually just for kicks, but mostly because I thought the idea of building one was simply flat-out awesome.  So I began by learning all the hardware basics (wiring, soldering, perfboards, 3D printing and so forth) at the CMU Robotics Club during my freshman year.  During the Winter Break of my freshman year, I prototyped a concept of a singing robot head that uses an Android phone as a virtual face display system.  This worked out great, and I knew I wanted to implement the same phone-face for the large expressive humanoid robot I was thinking of building.  Then came Summer Break, when I truly began fleshing out my concept for building this robot.  The entire CAD model was created using Rhinoceros.  When the Fall 2014 semester began, I applied for an FRFAF Grant so I would be able to turn my concept into reality, and thankfully, the STUDIO accepted my request and awarded me precisely $500 to do just that.  I wasted no time and proceeded to get all the necessary parts laser cut from black, white and clear cast acrylic (looking back, I probably should have used ABS instead, since acrylic is relatively easy to shatter).  Over the course of the semester, I assembled the robot, beginning with the head, then the legs, then the arms, and finally the torso and backpack.  This took a VERY long time.  Even though I spent every scrap of free time I had on this project, coursework nevertheless got the better of me and slowed down assembly.  Just over Thanksgiving Break, I finished assembly, along with all the necessary electronics.  No code yet, except for some very rudimentary testing of the servo motion (see video below).  Ultimately, I see this project as a platform for a greater range of humanoid mechatronic animation.  
The first use:  turn Halley into a robotic student capable of interacting with professors in class.

Above:  Just testing some of the servos.  Nothing too interesting, yet.

Below:  Just a clean photograph of the robot sleeping.



Above:  Just an initial concept sketch.

Below:  A render of the CAD model using Rhinoceros.



Above:  A photograph showing the wiring.  Careful wire planning prevented the whole thing from becoming a giant rat’s nest.

Below:  A Fritzing diagram showing the wiring.  It’s actually simpler than it looks, as it really is nothing more than 21 servos hooked up through a perfboard to an Arduino Mega.



“praytopray” modern prayer, for a modern you. Luca Damasco Final Project.

Rather than create a “networked object,” I decided to attempt a project I have wanted to develop for quite a long time: a peer-to-peer praying network, “praytopray,” which allows users to send prayers into a network of other users who allow their devices (computer or cellphone), rather than their minds, to commit the act of praying.

I have had a very interesting relationship with prayer my entire life. Being raised Catholic, I was constantly reminded of my need to pray for others and others’ need to pray for me. Every day I would think of the relatives, friends and events that I would need to pray for and attempt to hit them all. As I grew older, my relationship with religion deteriorated but my underlying desire to continue praying, or in some way indirectly help myself or those around me remained.

Rather than simply rely on the human mind to achieve these goals, I decided to harness the power of a computer. Every time a user sends a prayer in “praytopray”, another user’s anonymous and encrypted prayer is sent to their device. At that point the device processes the prayer by hashing it (performing a series of mathematical operations on the contents of the prayer). Rather than create a new forum for people to pray in the same way prayer has been practiced for thousands of years, I hope this method can be seen as an entirely new way to pray.
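The post doesn’t specify which hash function the devices use, so as a minimal sketch (assuming SHA-256, and a function name of my own), “processing” a received prayer might look like:

```python
import hashlib

def process_prayer(prayer_text):
    """'Pray' by hashing: run the prayer's text through a fixed series of
    mathematical operations (here, SHA-256) and return the digest.
    The act of computing the digest stands in for the act of praying."""
    return hashlib.sha256(prayer_text.encode("utf-8")).hexdigest()

digest = process_prayer("a prayer for safe travels")
print(digest)  # 64 hex characters
```

Because the hash is deterministic, the same prayer always "prays" to the same digest, but the digest reveals nothing about the prayer's contents, which fits the anonymous-and-encrypted framing above.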

The entire project can be accessed online. I really like the way I’ve designed the system, in that it works; however, I truly think it needs one more level of user interaction (i.e., some way to track your prayers) to really be an effective piece. In the end, I am extremely happy with my results and will be continuing this project in the future.


Here’s a sketch of how the project works :


A number of relevant inspirations were:

“Temple OS” an operating system created to worship God.

“prayr” a modern forum for traditional prayer using a facebook-like system.

“” a site which allows users to have their prayers read aloud by a computer voice filling in for a human.

Written by Comments Off on “praytopray” modern prayer, for a modern you. Luca Damasco Final Project. Posted in FinalProject

Final Project: Panopticon

Screen Shot 2014-12-03 at 6.49.07 PM

The Panopticon is an installation that simulates a digital creature with a hundred-sided polygon of faces as a body. When this creature is looked at by the viewer, their face is stolen and added to the collective ball. The creature then uses the faces it has stolen to see in all directions, acting as the center of a massive cosmic panopticon. The database of faces is never purged: you cannot remove yourself from the sphere once you are a part of it.

With this project, I wanted to take the unseen and secret processes of modern surveillance and manifest them as a mythical digital object/creature.

This project is still a work in progress, as there are still some features I want to get working. I wasn’t able to finish the system for creating the 3D face models from the Kinect in real-time, but this is what the ball will look like once I can get that system running with multiple faces (it only works with a single face in real-time right now):

Screen Shot 2014-12-12 at 10.37.00 AM




Debugging the Face Stealer and more:

Screen Shot 2014-12-03 at 6.23.26 PM

Screen Shot 2014-12-03 at 11.39.48 AM
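One detail buried deep in the drawing code below: the sphere’s target radius grows with the square root of the face count (`panopRadiusGoal = sqrt(panopSize/(4*PI))*16.0`), which keeps the surface area allotted to each stolen face constant. A quick standalone check of that relationship (the function name is mine; the 16.0 factor is from the source):

```python
import math

def panop_radius(num_faces, face_size=16.0):
    # Matches panopRadiusGoal in the source: pick r so that
    # 4*pi*r^2 / num_faces == face_size^2 for any face count,
    # i.e. every face "owns" a 16 x 16 patch of sphere surface.
    return math.sqrt(num_faces / (4 * math.pi)) * face_size

for n in (1, 10, 100):
    area_per_face = 4 * math.pi * panop_radius(n) ** 2 / n
    print(n, area_per_face)  # area per face stays constant as n grows
```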

#pragma once

#include "ofMain.h"
#include "ofxFaceTracker.h"
#include "ofxAssimpModelLoader.h"
#include "ofxOpenCv.h"
#include "ofxKinect.h"

class PanopFace {
public:
    ofVec3f position;
    ofImage color, depth;
    ofMesh mesh;
    int age;
    PanopFace() {}
    void setup() {
        age = 0;
    }
    void fixToSphere() { /* body elided */ }
};

class testApp : public ofBaseApp {
	void setup();
    void updatePanopticon();
    void updateFacestealer();
	void update();
    void drawPanopticon();
    void drawFacestealer();
	void draw();
    void resetPanopticon();
    void captureFace();
	void keyPressed(int key);
    // facestealer stuff
    ofxKinect kinect;
	ofxFaceTracker tracker;
    int cropX;
    int cropY;
    int cropW;
    int cropH;
    bool faceCaptured;
    int captureFaceTimer;
    ofxCvColorImage kinectColor;
	ofxCvGrayscaleImage kinectDepth;
    ofImage capturedFaceColor;
    ofImage capturedFaceDepth;
    bool drawFacestealerDebug;
    const float HTW_RATIO = 1.4;
    const float SCALE_TO_PIXELS = 55.0;
    const float Y_FUDGE_FACTOR = 7.0;
    const float ORI_TOL_X = 0.2;
    const float ORI_TOL_Y = 0.05;
    const float ORI_TOL_Z = 0.15;
    // panopticon stuff
    int panopSize = 0;
    float panopRadius = 0;
    const float PANOP_RADIUS = 750.0;
    const bool USE_KINECT_DEPTH = false;
    ofEasyCam cam;
    ofShader facesShader;
    ofxAssimpModelLoader faceModel;
    ofMesh generatedMesh;
    ofxAssimpModelLoader starsMesh;
    ofImage starsTexture;
    ofMesh mesh;
    vector<PanopFace> faces;
};
#include "ofApp.h"

using namespace ofxCv;
using namespace cv;

void testApp::setup() {
    kinect.setDepthClipping(500.0f, 1000.0f);
    kinectColor.allocate(kinect.width, kinect.height);
	kinectDepth.allocate(kinect.width, kinect.height);
    faceCaptured = false;
    drawFacestealerDebug = false;
    captureFaceTimer = 0;
    } else {
    facesShader.load("shaders/faces.vert", "shaders/faces.frag");
    for(int i = 0; i < 10; i++) {
        faces[panopSize-1].mesh = faceModel.getMesh(0);

void testApp::update() {
	// there is a new frame and we are connected
	if(kinect.isFrameNew()) {
		kinectDepth.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);
        kinectColor.setFromPixels(kinect.getPixels(), kinect.width, kinect.height);

void testApp::draw() {
    if(drawFacestealerDebug) {

void testApp::updateFacestealer() {
    cropX = tracker.getPosition().x;
    cropY = tracker.getPosition().y;
    float faceSize = tracker.getScale();
    cropW = faceSize*SCALE_TO_PIXELS;
    cropH = faceSize*SCALE_TO_PIXELS*HTW_RATIO;
    cropY -= cropH/Y_FUDGE_FACTOR; // center of face was too low
    float orientationX = tracker.getOrientation().x;
    float orientationY = tracker.getOrientation().y;
    float orientationZ = tracker.getOrientation().z;
    if(!(orientationX==0&&orientationY==0&&orientationZ==0) &&
       abs(orientationX) < ORI_TOL_X &&
       abs(orientationY) < ORI_TOL_Y &&
       abs(orientationZ) < ORI_TOL_Z &&
       captureFaceTimer > 30) {
        captureFaceTimer = 0;

void testApp::drawFacestealer() {
    ofSetColor(255, 255, 255);
    kinectColor.draw(0, 0);
    if(faceCaptured) {
        capturedFaceColor.draw(500, 500);
        capturedFaceDepth.draw(700, 500);
        ofSetColor(0, 255, 0, 150);
        ofDrawBitmapString("extracted face texture", 0, 15);

        ofSetColor(255, 0, 0, 150);
        ofSetColor(0, 0, 0, 255);
        ofRect(0, 0, 230, 75);
        ofSetColor(255, 100, 0, 150);
        ofDrawBitmapString("looking for stealable face", 0, 15);
        float orientationX = tracker.getOrientation().x;
        float orientationY = tracker.getOrientation().y;
        float orientationZ = tracker.getOrientation().z;
        if(abs(orientationX) < ORI_TOL_X) {
            ofSetColor(0, 255, 0, 150);
        } else {
            ofSetColor(255, 0, 0, 150);
        }
        ofDrawBitmapString("orientationX: " + ofToString(orientationX), 0, 30);
        if(abs(orientationY) < ORI_TOL_Y) {
            ofSetColor(0, 255, 0, 150);
        } else {
            ofSetColor(255, 0, 0, 150);
        }
        ofDrawBitmapString("orientationY: " + ofToString(orientationY), 0, 45);
        if(abs(orientationZ) < ORI_TOL_Z) {
            ofSetColor(0, 255, 0, 150);
        } else {
            ofSetColor(255, 0, 0, 150);
        }
        ofDrawBitmapString("orientationZ: " + ofToString(orientationZ), 0, 60);

void testApp::updatePanopticon() {
    for(int i = 0; i < panopSize; i++) {
        for(int j = 0; j < panopSize; j++) {
            if(i != j) {
                ofVec3f d = faces[i].position - faces[j].position;
                if(d.length() < 40.0 / panopSize) {
        if(faces[i].age < 255) faces[i].age+=10;

void testApp::drawPanopticon() {
    ofRotate(ofGetElapsedTimeMillis()*0.004, 0, 1, 0);
    // 'heightmap' shader disabled because openFrameworks is a strange animal
    // about texture IDs...couldn't get the damn thing to pass the right texture
    // in...I know it's possible though because I actually got it to work
    // with only one texture (it's in the other project...go see it, it's glorious)
    for(int i = 0; i < panopSize; i++) {
        float panopRadiusGoal = sqrt(panopSize/(4*PI))*16.0;
        panopRadius += (panopRadiusGoal-panopRadius)*0.05;
        float yrot = ofRadToDeg(atan(faces[i].position.x / faces[i].position.z));
        if(faces[i].position.z < 0) yrot += 180;
        yrot += 90;
        ofRotate(yrot, 0, 1, 0);
        ofRotate(-faces[i].position.y*90.0, 0, 0, 1);
        ofScale(10, 10, 10);
            // yeah same disabled texture uniform stuff for that god forsaken shader
            facesShader.setUniformTexture("tex0", faces[i].depth, 1);
            ofTexture a = faces[i].color.getTextureReference();
            ofLogNotice() << a.getTextureData().textureID;
            ofSetColor(255,255,255, faces[i].age);

void testApp::resetPanopticon() {
    if(panopSize > 0) {

void testApp::captureFace() {
    // later this will save every face captured, so once your
    // face is in the system, it stays forever !!
    generatedMesh = faceModel.getMesh(0);
        for(int i = 0; i < generatedMesh.getVertices().size(); i++) {
            ofVec3f v = generatedMesh.getVertex(i);
            ofImage c = faces[panopSize-1].depth;
            float rx = (v.z+1.0)/2.0;
            float ry = (v.y+1.0)/2.0;
            int x = (int)(rx*c.getWidth());
            int y = (int)(ry*c.getHeight());
            v.x = (c.getColor(x,y).r/255.0)*10.0;
    faces[panopSize-1].mesh = generatedMesh;
    faceCaptured = true;

void testApp::keyPressed(int key) {
    // capture face
    if(key == 'c') {
    // set to draw panopticon
    if(key == 'p') {
        drawFacestealerDebug = false;
    // set to draw facestealer debug
    if(key == 's') {
        drawFacestealerDebug = true;
    // randomize face positions (if they get stuck)
    if(key == 'r') {
        for(int i = 0; i < panopSize; i++) {

Elizabeth Agyemang-FinalProject: SubTweets




As I discussed in my last post, seen here, the culture of ‘subtweeting’ on Twitter has been the source of many public arguments, rages and tensions between users–from everyday people, to celebrities, organizations and even politicians. The project SubTweets comments on this widespread practice by parodying the act of subtweeting through the voice of submarine sandwiches. In the project, submarine sandwiches are weighed on a scale, resulting in a tweet, or a ‘subtweet’, being sent to Twitter through a Processing program that utilizes Temboo.

SubTweets: The Project

The piece seeks to generate a conversation about how people communicate in the modern digital world. More and more, it seems, when social and personal tensions arise, people turn to forums like Twitter to vent their frustration and anger. However, rather than resolving such issues, subtweeting instead heightens them, resulting in rage wars and Twitter battles.  SubTweets looks to comment on this culture through humor and food.


How it works:

The sandwiches are weighed by a digital scale and, depending on the weight and a list of properties stored in the computer, the program will recognize the type of sub and will generate the subtweets based on other parameters.
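The classification step is simple enough to sketch on its own. The gram ranges below come from the comments in the Processing code (a six-inch sub weighs roughly 100–290 g, a footlong 291–500 g); the settling and timing logic is omitted, and the function name is mine:

```python
def classify_sub(weight_grams):
    """Map a settled scale reading to a sandwich type, using the weight
    ranges from the project's code: ~100-290 g reads as a six-inch,
    ~291-500 g as a footlong. Lighter readings are treated as noise
    or an empty scale."""
    if 100 <= weight_grams <= 290:
        return "six-inch"
    if 291 <= weight_grams <= 500:
        return "footlong"
    return None

print(classify_sub(240))  # six-inch
print(classify_sub(380))  # footlong
print(classify_sub(5))    # None
```

The actual sketch also smooths the reading over time (an exponential moving average of the change in weight) so it only tweets once the sandwich has stopped moving on the scale.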




Surprisingly, the process of creating this program was a lot more complicated than I initially anticipated.

The Scale(s)

When I first began preparing for this project, I knew I needed to purchase a scale that could communicate with a computer via a USB port and send and receive bytes of information. I browsed through varying catalog ads and looked at programmers who had also used a scale to interact with a computer. I purchased a DYMO 10 lb scale with a USB port, intending to get the HID (human interface device) to interact with my computer the way the other programmers’ sources had suggested: through an open-source library called libusb, which allows a HID or any simple device to be read by a computer. Well, as things turned out, getting the computer to read the scale wasn’t my only problem; getting the scale to communicate back to the computer consistently was another. Though I was able to garner some communication between the computer and the scale, and after a lot of consultation and help from my professor, in the end another scale needed to be purchased. In the meantime I went to work on the code.


The other scale

This scale, an older, discontinued model, connected to my computer as I had initially assumed any USB scale would: through serial communication rather than HID. After simply installing the driver, the scale was able to communicate reliably and completely with my computer.
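The scale’s 10-byte serial frame (documented in a comment inside the `getWeight()` function further down) can be parsed without any hardware attached. A minimal sketch, assuming a grams-mode frame; the function name is mine:

```python
def parse_scale_frame(frame):
    """Parse one 10-byte frame from the scale, per the protocol notes:
    byte 0 = STX (0x02), byte 1 = unit indicator (0xC0 grams, 0xB0 lb/oz),
    bytes 2-3 = 0x80 placeholders, bytes 4-8 = ASCII weight digits,
    byte 9 = carriage return (0x0D). Returns the weight in grams."""
    if len(frame) != 10 or frame[0] != 0x02:
        raise ValueError("not a valid scale frame")
    if frame[1] != 0xC0:
        raise ValueError("scale is reporting lb/oz, not grams")
    # keep only ASCII digit bytes, as the Processing code does
    digits = "".join(chr(b) for b in frame[4:9] if 0x30 <= b <= 0x39)
    return int(digits) if digits else 0

frame = bytes([0x02, 0xC0, 0x80, 0x80]) + b"00347" + b"\r"
print(parse_scale_frame(frame))  # 347
```

This mirrors the Processing logic, which checks `myData[1] == 192` (0xC0) and accumulates the digit characters at indices 4 through 8.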

Failures and Success

In the end, I remained slightly disappointed that I wasn’t able to get the DYMO scale working with my computer; however, the issue may in fact have stemmed from the scale itself rather than my computer.

Code-wise, when I imagined this program I envisioned something that would be better able to distinguish between different types of sandwiches. However, because it uses a scale, I quickly came to realize that the information I could gain from a sandwich was much more limited than I assumed. At the moment, the program only differentiates between six-inch subs and footlongs, and the following code and comments reflect this. Also, because I was writing the comments the subs could tweet, there wasn’t as much variety, at least in the hashtags, as I would have liked. Even so, I am still very proud of the way the program turned out.

During critiques I got a lot of really positive feedback on my program, and I was really enthused to see people reacting so well to the humorous nature of the piece. I really feel that this was one of my most successful projects, in that I was able to create a program that, in some way, had its own personality.

The project can be found on twitter here



//Importing the libraries

// Processing code for a 7010sb-scale

import processing.serial.*;
Serial myPort;  // Create object from Serial class
import com.temboo.core.*;
import com.temboo.Library.Twitter.Tweets.*;

int myData[];
int byteCount = 0; 

float oldsixtweets=0;
float oldfoottweets=0;

// Create a session using your Temboo account application details
TembooSession session = new TembooSession("marinesub", "myFirstApp", "8d605622f29f4b77b4e783db227d880a");

//checking sixinch tweets

float checktweetssix=0;
float checktweetsfoot=0;

//tracking changes
float minimumAcceptableSandwichWeight = 10; 
float timeAveragedAbsoluteChangeInWeight = 0; 
float prevWeight = 0;
float subWeight = 0; 
float weightAtLastTweet; 
int lastTimeITweeted = -10000; 
float currWeight = 0; 

//if the sub is a sixinch or a footlong
boolean isSixInch= false;
boolean isFootlong=false; 
float sizeSixInchSub;
float  footLongSUB;
//StringList insult; 
String [] insult=  new String [31];
String[] boasting = new String[10];
String[] hashtag = new String[7];
String[] people = new String[4];
String [] finaltweet= new String[2];
String[] both = new String[3];

//what to tweet
String [] maintweet = new String[3];

//generating tweets
int [] sixTweets = new int[5];
int [] footTweets = new int[5];

//making sure not the same tweet is said
int currentInsult;
int oldInsult;
int changeInsult; 
//changng if to tweet about foot longs or six inch
int update=0;

void setup() {
  size (1000, 500);
  // Run the StatusesUpdate Choreo function

  //looking at ports, choose 0
  String portName = Serial.list()[0];
  myPort = new Serial(this, portName, 2400);

  myData = new int[10]; 
  byteCount = 0;


  //  insult = new StringList();
  //six inch general 
  insult [0] = " ";
  insult [1] = "Every sub has to be okay with the fact that some subs taste better than others";
  insult [2] = "Man these wiches be delicious";
  insult [3] = "It's not a subtweet if you're not a sandwich";
  insult [4] = "You subtweet like a bro. And by bro, I mean burger";
  insult [5] = "nothing annoys me more than when people assume a sandwich is for them";
  insult [6] = "Eat your sub right or someone else will";
  insult [7] = "Don't assume a sub is just for you . They can literally be eaten by anyone, then you look HUNGRY.";
  insult [8] = "Did you know that you can actually execute a brutal subtweet through the ancient art of sharing?";
  insult [9] = "I never tasted something so imperfectly perfect could exist until I tried you.";
  insult [10] = "You're so hungry, you probably think this sub is for you...";
  insult [11] = "Big bites come in little wrappers";
  insult [12] = "Idk about you, but i think it's pretty stupid if you don't buy a sub just because it's smaller";
  insult [13] = "I love when people bite me or eat other things to try and make me mad. I'm an sub, but keep going because it's amusing.";
  insult [14] = "if you're really  going to eat  a sub for its size  then you need to get your priorities straight.";
  insult[15] ="I can't stand when people  share subs like why don't you just eat your own";
  //foot long general insults
  insult[16] = "Sometimes I type out a subtweet about how unheahlty u are then I delete it bc I'm like what's the point I'm gonna eat you anyway";
  insult[17] = "Bigger is always better";
  insult[18] =  "Keep munching, in the end it's always the bigger sub that comes out stronger";
  insult[19] =  "You wish you could make them full like I do";
  insult[20] = "Two of you can't measure up to me" ;
  insult[21] =  "Some Subs are bigger than others";
  insult[22] =  "If you're gonna subtweet me at least get your ingredients about me straight first ";
  insult[23] =  "Sandwiches only rain on your paddy because their jealous of your crust and tired of their crumbs";
  insult[24] =  "I'm a sandwich that never ends, you ";
  insult[25] =  "OMG get over it, they buy me cus they have stomachs that you can't fill" ;
  insult[26] =  "If you're gunna throw out your sixinch or footlong it better be sublovin. There's no time for subhaters in a relationship.";
  insult[27] =  "I am a tasty sub & don't need to subtweet I am a tasty sub & don't need to subtweet I am a tasty sub & don't need to subtweet";
  insult[28] =  "people will tell you how you taste in a subtweet before they take a bite or have a taste";
  insult[29] =  "No I don't wanna munch @you that's the point of a sandwich, to eat it";
  insult[30] = "";
  //String[] boasting = new String[10];

  boasting [0]="";
  boasting[1] ="Girls will subtweet and put little music emojis after it to make it look like a song "  ; 
  boasting[2] ="I can't stand when couples subtweet each other like why don't you just text them the whole world doesn't need to see your drama";

  //String[] hashtag = new String[6];

  hashtag[3]= "#subterfuge";
  hashtag[4]= "#mineisbiggerthanyours";
  hashtag[5]= "#itsmysandwichandiwantitnow";
  hashtag[6]= "#BiteMe";

  //String[] people = new String[4];

  people[0]= "";
  people[2]= "@quickcheck";
  people[3]= "@suboneelse";
  //String[] combinations = new String[30];

  //String both a hashtag or/and at people
  //using concat() function 
  //  Concatenates two arrays. For example, concatenating the array { 1, 2, 3 } 
  //and the array { 4, 5, 6 } yields { 1, 2, 3, 4, 5, 6 }. Both parameters must be arrays of the same datatype. 
  both = concat(hashtag, people) ;

  size(640, 360);



int testing(int rest) {
  if ((sixTweets[1])!=0) {
  } else {
  }
  return rest;
}

void draw() {
  //six INCH sub Tweets
  //a six inch sub weighs from 100 to 290 grams

  //say nothing
  sixTweets[0] = 0;

  sixTweets[1] = (int)random(1, 16);
  sixTweets[2] = (int)random(2);

  sixTweets[3] = (int)random(6);

  sixTweets[4] = (int)random(3);

  //a footlong is from 291 to 500grams

  //int [] footTweets = new int [4];
  footTweets [0] = 0;
  footTweets[1]= (int)random(16, 30);
  footTweets[2] = (int)random(2);
  footTweets[3] = (int)random(6);
  footTweets[4]= (int)random(3);


  // Read data from the scale until it stops reporting data. 
  while ( myPort.available () > 0) {  
    int val =; // next byte from the scale
    if (val == 2) {
      // Figure out when the message starts
      byteCount = 0;
    // Fill up the myData with the data bytes in the right order
    myData[byteCount] = val; 
    byteCount = (byteCount+1)%10;

  int myWeight = getWeight();//most recent value of the weight
  subWeight= myWeight;
  float changeInWeight = (subWeight - prevWeight);
  float absoluteChangeInWeight = abs(changeInWeight); 

  float A = 0.97;
  float B = 1.0-A; 
  timeAveragedAbsoluteChangeInWeight = A*timeAveragedAbsoluteChangeInWeight + B*absoluteChangeInWeight; 

  //saying that you can't boast and insult at the same time for sixinchtweets
  if ((sixTweets[1])!=0) {
  } else {
  if ((sixTweets[3]!=0)) {
  } else {

  //saying that you can't boast and insult at the same time for footlong
  if ((footTweets[1])!=0) {
  } else {
  if ((footTweets[3]!=0)) {
  } else {


  //checking if the same tweets are being said
  oldInsult= sixTweets[1];
  changeInsult = (sixTweets[1]-oldInsult);

  //telling it to tweet if it is in this area
  sizeSixInchSub= constrain(subWeight, 100, 290);
  footLongSUB = constrain(subWeight, 291, 600);
  //  } else {
  println(" ");
  // }


  //If the sandwich weighs more than some negligible amount, then bother tweeting;
  //  // Don't bother tweeting about an empty scale, even the weight has settled down (to nothing)
  if (subWeight > minimumAcceptableSandwichWeight) {
    //    // If the change in weight has settled down (the weight is no longer changing)
    if (timeAveragedAbsoluteChangeInWeight < .02) {//was.02

        oldsixtweets= oldsixtweets;
      oldfoottweets= oldfoottweets;
      //if the weight of the sub is in the range that a six inch is, tweet what a six inch sub is

      int now = millis(); 
      int elapsedMillisSinceILastTweeted = now - lastTimeITweeted; 
      //if weight change has past the oldtweets value changes

      //      // and if it has been at least 10 seconds since the time I tweeted
      if (elapsedMillisSinceILastTweeted > 10000) {//was 10000

          float weightDifference = abs(subWeight - weightAtLastTweet);
        if (weightDifference > 10) {
          //          // Then send the tweet
          if (isSixInch) {

            //if it is a six inch sub, check by making checktweet the value of the tweet
            if (sixTweets[1]!=oldsixtweets) {


            println("maintweet", maintweet, "sixinchtwee/", sixTweets, "foottwees:", footTweets);

          //if the sub is a footlong call the function for footlong tweets
          if (isFootlong) {
            //if the old footlong tweet is not new to the new footlong tweet, call the function
            if (oldfoottweets!=footTweets[1]) {

              println("They arent equal");
            } else {

              println("They are equal");
          isFootlong= false;

  prevWeight = subWeight;

  println("checktweetsix:", checktweetssix, "checktweetsfoot:", checktweetsfoot, maintweet[update] );

//function that returns the value of the string the scale is giving out in ints
int getWeight() {

  /* Frame format reported by the scale:
     STX character - hex 0x02, indicates "Start of TeXt"
     Scale indicator - hex 0xB0 for lb/oz and 0xC0 for grams
     Hex 0x80 (placeholder)
     Hex 0x80 (placeholder)
     First character of weight, ASCII format
     Second character of weight, ASCII format
     Third character of weight, ASCII format
     Fourth character of weight, ASCII format - single ounces in lb/oz weight
     Fifth character of weight, ASCII format - 1/10th ounces in lb/oz weight
     Finishing carriage return - hex 0x0D */

  String weightString = ""; 
  if (myData[1] == 192) { // grams yo
    for (int i=4; i<=8; i++) {
      if ((myData[i] >= 48) && (myData[i] <= 57)) { // only want characters between '0' and '9'
        weightString += (char)myData[i];
  } else {
    println ("Scale is using pounds/ounces, yuck");

  int aWeight = 0; 
  if (weightString.length() > 0) {
    aWeight = Integer.parseInt(weightString);
  return aWeight;

//function that tweets when the sixinch sub is called
void sixInch() {
  println("SubWeight:", subWeight, "sizeSixInchSub", abs(sizeSixInchSub));
  fill(255, 180);

  text((insult [sixTweets[1]]) + (boasting[sixTweets[2]])+(hashtag[sixTweets[3]])+(people[sixTweets[4]]), 50, 50);
  maintweet[1]= (insult [sixTweets[1]]) + (boasting[sixTweets[2]])+(hashtag[sixTweets[3]])+(people[sixTweets[4]]);
  //  println("changeInsult:", changeInsult, "oldInsult:", oldInsult,"subWeight:", subWeight, "mouseX:", mouseX);

  lastTimeITweeted = millis();
  weightAtLastTweet = subWeight;

  //making it so that a six inch is no longer true

void footlong() {
  println("SubWeight:", subWeight, "sizeSixInchSub", abs(sizeSixInchSub));

  fill(255, 180);
  text((insult [footTweets[1]]) + (boasting[footTweets[2]])+(hashtag[footTweets[3]])+(people[footTweets[4]]), 50, 50);

  maintweet[2]=(insult [footTweets[1]]) + (boasting[footTweets[2]])+(hashtag[footTweets[3]])+(people[footTweets[4]]);
  //  println("changeInsult:", changeInsult, "oldInsult:", oldInsult,"subWeight:", subWeight, "mouseX:", mouseX);

  lastTimeITweeted = millis();
  weightAtLastTweet = subWeight;

void checkconstraints() {

  if (subWeight==sizeSixInchSub) {

  } else if (subWeight==footLongSUB) {
    //if the weight of the sub is in the range that a footlong is, tweet what a footlong sub is
  } else if (subWeight==0) {
  if (subWeight==0) {
    println("it's zero");
void noWeight() {
  if (subWeight==0) {

void runStatusesUpdateChoreo() {
  //  // Create the Choreo object using your Temboo session
  StatusesUpdate statusesUpdateChoreo = new StatusesUpdate(session);
  //  // Set credential
  //  // Set inputs
  //  // Run the Choreo and store the results
  StatusesUpdateResultSet statusesUpdateResults =;
  // Print results
//  println(statusesUpdateResults.getResponse());
}



Sources of other programmers communicating with scales and other HID devices:

Reading a Dymo USB scale using Python

Final Project: Aurora

For my final project, I wanted to build a hologram after being shown a particular method used in Light Barrier, a project by the group Kimchi and Chips. I wanted to reproduce this effect at a smaller scale, using lenses rather than mirrors and with much less funding.

Light Box (8)

For hardware, I made a wooden box with a mirror inside it and a platform on which a projector can be placed. On the top of the box is a hole in which wide-angle (concave) Fresnel lenses are placed. I also created a clear acrylic box which is placed on top of the first box to hold fog in place over the lenses. I purchased a fog machine from Guitar Center for the production of the fog.

Aurora Setup

For software, I made a C++ program that minimally utilizes Cinder. In it, I implemented methods that calculate the scene from the perspective of each lens, using the thin lens formula.
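The per-lens camera placement boils down to two thin-lens relations, 1/f = 1/Di + 1/Do for the image distance and Hi/Ho = -Di/Do for the magnification. A standalone sketch of that arithmetic, using the same focal length (-40/2 mm) and projector distance (1000 mm) as the drawing code; the function name and example off-axis height are mine:

```python
def thin_lens_image(f, d_o, h_o):
    """Image distance and height for a thin lens.
    1/f = 1/d_i + 1/d_o  =>  d_i = 1 / ((1/f) - (1/d_o))
    magnification:  h_i / h_o = -d_i / d_o"""
    d_i = 1.0 / ((1.0 / f) - (1.0 / d_o))
    h_i = (-d_i * h_o) / d_o
    return d_i, h_i

# diverging Fresnel lens (f < 0) with the projector 1000 mm away;
# h_o = 50 mm is a hypothetical off-axis distance of the lens center
d_i, h_i = thin_lens_image(f=-40.0 / 2.0, d_o=1000.0, h_o=50.0)
print(round(d_i, 4), round(h_i, 4))
```

A negative d_i means the image is virtual and on the same side as the projector, which is why each lens gets its own shifted viewpoint and asymmetric frustum in the code below.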



Here is the relevant code for drawing the view.

void GhostApp::draw()
	// clear out the window with black	
	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

	float camToLensDist = 1000.0f;
	float lensWidth = 35.0f; //number in mm
	float lensHeight = 35.0f; //number in mm
	float lensFocal = -40.0f/2.0f; // number in mm
	int numXLenses = 10;
	int numYLenses = 5;
	for (int x = 0; x < numXLenses; x++){
		for (int y = 0; y < numYLenses; y++){
			// set the viewport
			glViewport(config.offsetX + x*(config.viewWidth / numXLenses), config.offsetY + y*(config.viewHeight / numYLenses), config.viewWidth / numXLenses, config.viewHeight / numYLenses);
			// lens, object and image positions
			float lensCenterX = lensWidth*0.5f + (lensWidth*x) - (lensWidth*numXLenses*0.5);
			float lensCenterY = lensHeight*0.5f + (lensHeight*y) - (lensHeight*numYLenses*0.5);
			//distance to projector on xy plane
			float Ho = sqrtf(lensCenterX*lensCenterX + lensCenterY*lensCenterY);
			//distance to projector on z axis
			float Do = camToLensDist;
			//focal point
			float f = lensFocal;
			// 1/f = 1/di + 1/do
			float Di = 1.0f / ((1.0f / f) - (1.0f / Do));
			// -Di/Do = Hi/Ho
			float Hi = (-Di*Ho) / Do;

			float camX = (Hi == 0.0f) ? 0.0f : lensCenterX*(1.0f - (Hi / Ho));
			float camY = (Hi == 0.0f) ? 0.0f : lensCenterY*(1.0f - (Hi / Ho));
			float nearZ = camToLensDist*(-Di / Do);
			float camZ = -(camToLensDist - nearZ);
			float farZ = 10000.0f;

			float left = (x*lensWidth - (numXLenses*lensWidth*0.5f)) - camX;
			float right = ((x + 1)*lensWidth - (numXLenses*lensWidth*0.5f)) - camX;
			float top = (y*lensHeight - (numYLenses*lensHeight*0.5)) - camY;
			float bottom = ((y + 1)*lensHeight - (numYLenses*lensHeight*0.5)) - camY;

			glFrustum(left, right, top, bottom, nearZ, farZ);
			gl::translate(-camX, -camY, -camZ - camToLensDist);
			gl::scale(1.0f, 1.0f, -1.0f);
			switch (scene){
			case 0:
			case 1:
			case 2:
				scene = 0;
	// draw guide grids to help lens placement
	if (grids){
		glViewport(config.offsetX, config.offsetY, config.viewWidth, config.viewHeight);
		glOrtho(0, config.viewWidth, 0, config.viewHeight, -10, 10);
		for (int x = 1; x < numXLenses; x++){
			gl::drawSolidRect(Rectf(x*(config.viewWidth / numXLenses) - 4.0f, 0, x*(config.viewWidth / numXLenses) + 4.0f, config.viewHeight));
		for (int y = 1; y < numYLenses; y++){
			gl::drawSolidRect(Rectf(0, y*(config.viewHeight / numYLenses) - 4.0f, config.viewWidth, y*(config.viewHeight / numYLenses) + 4.0f));

I’m proud of the software and the math that went into the system. I felt the build quality of the box could have been better had I planned the system more before building. The outcome and final visual is blurry and less defined than I had expected. I attribute this to a lack of clarity from the lenses and a lack of precision in the lens setup. All in all, I appreciate the project and feel that I have made something pleasing to look at.


Written by Comments Off on Final Project: Aurora Posted in FinalProject