chewie-arsculpture

A wet floor sign springs to life in new dimensions with this AR experience. The beloved icon from signage across the globe takes on the task of guarding this slippery hazard. Don't get too close or you'll make it nervous (tap the screen to watch it take a spill).

 

chewie-UnityEssentials

 

content creation checklist when making assets

  • low-poly, fast-rendering objects
  • objects snapped to the origin
  • properly laid-out UVs, with two channels:
    • texture
    • lightmaps
  • textures kept at power-of-two dimensions
  • clean directories
  • export as .fbx
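The power-of-two rule from the checklist is easy to verify programmatically. As a minimal sketch (not part of any Unity tooling), a positive integer is a power of two exactly when it has a single set bit, which the classic `n & (n - 1)` trick detects:

```javascript
// Sanity-check a texture dimension against the power-of-two rule.
// A positive integer n is a power of two exactly when n & (n - 1) === 0,
// because subtracting 1 flips the lone set bit and everything below it.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// e.g. a 1024x1024 texture passes, a 1000x1000 texture does not
function textureSizeOk(w, h) {
  return isPowerOfTwo(w) && isPowerOfTwo(h);
}
```

A check like this could run over an asset folder before export to catch textures that would otherwise be silently resized or padded at import time.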

chewie-rigatoni-justaline

For the test project we chose to use the lines to visualize energy. In the first attempt, rigatoni posed in a way that made it look like he was pushing a large object, with the lines illustrating force and motion. In the second attempt, I added lines in front of a fan to illustrate the movement of the air.

In the full implementation, the app would be able to detect moving objects in an otherwise static scene and draw in lines wherever there's drastic movement. It would have to distinguish camera movement from object movement; otherwise it would mistake the room moving relative to the camera for an object moving in the room.
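One hypothetical way to make that distinction (a sketch only, assuming a per-pixel optical-flow field is already available from some tracking library): treat the median flow vector as the global camera motion, and flag pixels whose residual flow after subtracting it is still large.

```javascript
// Median of a plain array of numbers.
function median(values) {
  var s = values.slice().sort(function (a, b) { return a - b; });
  var m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// flow: array of {x, y} motion vectors, one per pixel (assumed given).
// Returns the indices of pixels whose motion differs from the camera's.
function movingPixels(flow, threshold) {
  // The median flow approximates the global (camera-induced) motion,
  // since most of a static scene moves together.
  var camX = median(flow.map(function (v) { return v.x; }));
  var camY = median(flow.map(function (v) { return v.y; }));
  var moving = [];
  for (var i = 0; i < flow.length; i++) {
    var dx = flow[i].x - camX;
    var dy = flow[i].y - camY;
    // Residual motion beyond the threshold = a genuinely moving object.
    if (Math.sqrt(dx * dx + dy * dy) > threshold) moving.push(i);
  }
  return moving;
}
```

This ignores parallax and rotation, which a real implementation would need to handle, but it captures the core idea of subtracting ego-motion before looking for movers.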

chewie-Reading03

I would say my interests align more with first word art than last word art. I love experimenting with mediums and disrespecting established form, however I also love exploiting the invisibly standard characteristics of the medium I'm working in. In a lot of my work I have attempted to use the barrier between physical matter and conscious realization to allow the audience a very self-aware and honest experience.

The ways in which technology shapes culture are easy to think about conceptually, but it's very difficult to practically realize where this shaping takes place. Because we use technology as a tool of expression, the limits of that technology become the limits of expression which become the limits of the resulting ideas that manifest themselves as culture and reality. As we add dimensions of complexity to the tools we're working with, so do we add dimensions of complexity to what is expressed through those tools. Audiences accepted the first videos as partial fact, as if the filmmaker (often personally invisible from their perspective) was an omnipotent dictator of reality. With the introduction of role playing video games this illusion of authority becomes much more complicated with many more layers of oblivion to the form. Because the action is up to you, you are forced to operate under a specific set of unflinchingly rigid laws, tuning your brain to a specific set of functions through repeated action. The game controls your mindset completely because it is yourself who is being expressed in their fabricated reality, rather than you watching someone else express themselves.

Culture shapes new technology in the sense that it decides what we pay attention to. Our minds operate based on a defined set of existing ideas. If technology is the manifestation of those ideas in a practical, operational system, technology caters only to culture and the benefit of the human experience. It builds upon what came before and is in fact our culture. I believe technology is culture and therefore it is inherently shaped by the manipulation of culture.

When a new technology is developed, the new abilities of those working with that technology become the center of attention as substantial features. As technology ages and becomes more familiar to us, these new abilities become standard and insignificant. The flashing lights and spectacular explosions that caught our attention become nothing more than a quiet, blank slate to jot ideas on. As we become more capable, our efforts of the past become more frivolous, and we almost feel embarrassed to have put so much time into seemingly "nothing".

chewie-LookingOutwards03

Olars, Kinetic Toy

This project initially caught my attention because it resembled the artificial evolution simulations done by Karl Sims. Olars, Kinetic Toy is a parts-based tool kit that allows you to connect bodies that twist and manipulate at the user's discretion. In the video demo there seem to be two main driver blocks, each containing a motor and a knob to control how that motor behaves. The kit also includes ligament-type bodies that attach to these motors in different ways to give the creature its animation. These ligaments come in a variety of shapes that result in surprisingly intricate movements.

Since I was familiar with Karl Sims's evolution simulator, it was really fascinating to see a similar mechanism replicated with tangible objects, where the user has to manually provide what the simulation generated automatically. It's interesting how, if the user has a goal in mind for how the creature should move, they will most likely develop it with a particular understanding of how the parts interact, one that changes over multiple iterations as they experiment. Eventually that understanding becomes more efficient and accurate at predicting how different structures will behave. I could imagine a person stuck in a room forever with only this kit - working away, generating endless combinations of parts and movements, mindfully crafting a super-organism from these basic components - similar to how the evolution of these creatures was originally calculated in 1994.

I think the biggest drawback of this implementation is its limited set of arrangements given a small number of components. Only one shape contains a motor, which allows for a single axis of rotation, and the rest of the static bodies have only a few pre-defined attachment points. Multi-axis rotation would be much harder to implement physically, but if they were to use something like velcro, the shapes could be attached at any position or orientation. It would also be interesting to have different shapes with motors, or to include a motor on each ligament, which would allow for much more complicated motion.

As I mentioned before, the essence of Karl Sims's evolution simulator plays a major role in this piece. It takes a process meant to be carried out solely by computers and launches it into the tangible world, letting people do naturally what was coded into the simulator. You can see how the limited nature of the physical joints is balanced by the more intricate objects that attach to them, making it easier to get interesting movements and keep the user entertained.

chewie-LookingOutwards04

After Deep Blue (ADB)

After Deep Blue (referencing the first chess computer to beat a reigning world champion) focuses on the emotional connections we develop with computers. In these videos you can see how the snake-like robot seems to cuddle against the user in a convincingly organic way. Voice automation and the A.I. systems generating its words have come to operate in more socially acceptable ways, catering to our perception of what natural language sounds like. Similarly, the features and nuances that make up tactile affection are reproduced in ADB, aggregating into behavior that comes alive.

Because the piece is so simple and elegant, there's not much I think would improve the overall experience. Different modes of activity would be interesting to see: for example, the snake could get more or less hyperactive depending on how much it's being touched, or it could become fatigued. I would also like to see multiple snakes all snuggling together in a pile on the floor, to see if that would create any interesting feedback-loop behavior.

There's a great blog post written by the creators about the influence the Deep Blue project had on this piece. Deep Blue, in part, made audiences measure computational advancement against human function and ability. People wondered which other human behaviors and processes could be replicated by computers to the point of allowing for seamless interaction. With ADB, rather than replicating a function as logical as playing chess, a much more real sensation - intimacy and affection - is exploited to produce convincingly animalistic behavior.

 

chewie-parrish

My biggest takeaway from this lecture was that the greatest tool for understanding chaos is to frame it in as many different ways as possible, finding which frame is most compatible with our existing tools of perception. It was really interesting to see how organizing word relationships in 3D space gives you the ability to judge those relationships in a really tangible way. It takes information that exists in our minds (as reflected by the medium of recorded text) and represents the most significant aspects of that data in a way that uses our spatial reasoning to show us those relationships. I also appreciated the connection she made between these data-extrapolation fields and the autonomous recording of weather data from balloons. It's important to realize what these systems are at their lowest level: systems that process information on our behalf when we are unable or too impatient to do so.

chewie-book

High Stakes - A real account of what's happening in poker.
archive

When I was 6, every morning while I was eating cereal I would have a big Calvin and Hobbes book open in front of me. I never read it out of a newspaper, having to wait a whole day before the next little morsel; we had these collection books full of strips that I could blow through like binge-watching Netflix. There's no doubt a running narrative between the strips (some even continuing the same event), but each also exists as its own distinct story, like a joke in a comedy routine. This is why, in researching and developing ideas for this project, I was excited to find an online database containing plot descriptions of every published Calvin and Hobbes comic strip.

Thinking about the different uses of these texts, I wondered what Calvin and Hobbes was at its lowest level and what types of events transpired in the strip. Calvin is relentlessly true to himself and his beliefs despite the pressure he faces from his peers and superiors to act "normal": social rejection, being grounded, and getting scolded by his teachers, to name a few. There's something admirable about the willingness to believe in yourself to that extent, but it also causes a great deal of inefficiency when you refuse to think based on observation or even speculation and instead only perpetuate and expand on your existing ideas.

This is almost the complete opposite of some thoughts I've had about poker.

In this relentless game you are restricted to a finite set of actions, and (at least at the professional level) if you aren't able to make the most efficient set of actions based on the changing state of the game and the mechanisms of probability, you lose with no second chance. Because these two worlds are so contradictory I thought it would be interesting and amusing to combine references to both in these short, narrative descriptions.

I decided to use the texts by replacing the main characters with popular poker players, and by replacing some of the Calvin and Hobbes language with poker terms. These terms were collected from an article describing all of the rules for playing no-limit Texas hold 'em. The results were interesting and at times amusing, although they definitely weren't completely coherent.

For the background images, my program went through each text, added each instance of a player's name to a list, and then used those frequency counts to select which player's picture to show in the background. The front and back cover pages are illustrations from one of the gorgeous full-color strips released on Sundays.

 

Code in Processing (Java) for ripping summaries of every Calvin and Hobbes comic to a .txt file.

PrintWriter output;
 
 
void setup() {
  output = createWriter("positions.txt");
  String t;
  int max = 3150;
 
  output.println("{");
 
  for (int i=1; i<max+1; i++) {
    println(i);
    t = getJist(i);
    t = t.replace("\"","'");
 
    output.print("\""+str(i)+"\" : \"");
    output.print(t);
 
    output.print("\"");
    if (i<max) output.print(",");
    output.println("");
  }
  output.println("}");
  output.close();
}
 
// Fetch the plot summary for strip number n by scraping the search page.
String getJist(int n) {
  if ((n<1)||(n>3150)) return "invalid index";
  else {
    String[] t;
    t = loadStrings("http://www.reemst.com/calvin_and_hobbes/stripsearch?q=butt&search=normal&start=0&details="+str(n));
 
    // The summary happens to sit on a fixed line of the returned HTML.
    String line = t[56];
    t = line.split(" "); line = t[1]; 
    return line; 
  }
}

Code in JavaScript (p5.js with RiTa) for modifying the texts and outputting to .json:

var t,p,corp, rm;
 
var availPos = ["nns","nn","jj"];//,"vbg","vbn","vb","vbz","vbp"];
 
 
 
var corp2 = {
  "nns": [],
  "nn": [],
  "jj": [],
  "vbg": [],
  "vbn": [],
  "vb": [],
  "vbz": [],
  "vbp": [],
  "rb": []
}
 
var nns = [];
var nn = [];
var jj = [];
var vbg = [];
var vbn = [];
var vb = [];
var vbz = [];
var vbp = [];
var rb = [];
 
 
 
 
var reps = [
  ["Calvin","Negreanu"],
  ["Hobbes","Dwan"],
  ["Mom","Selbst"],
  ["Dad","Ivey"],
  ["Susie", "Tilly"],
  ["Christmas", "WSOP"],
  ["parent", "sponsor"],
  ["Parent", "Sponsor"],
  ["Tiger", "Dealer"],
  ["tiger", "dealer"],
  ];
 
function preload() {
  t = loadJSON("jists.json");
  p = loadStrings("poker.txt");
}
 
function setup() {
  createCanvas(400, 400);
  //print("okay");
  loadToks();
  var texts = [];
  var fake;
  var ttt;
  for (var i = 0; i < 3000; i++) {
    ttt = doer(int(random(3150)));
    fake = new RiString(ttt);
    fake.replaceAll("\"", "");
    fake.replaceAll(" ,", ",");
    ttt = fake._text;
    texts.push(ttt);
  }
  var ret = {"a": texts};
  saveJSON(ret, "all.json");
  //print(availPos);
}

function draw() {
  background(220);
}

// Replace character names with poker players, then swap vocabulary by part of speech.
function doer(n) {
  var j = RiString(t[n]);
  for (var i in reps) {
    j.replaceAll(reps[i][0], reps[i][1]);
  }
  //print(j);
  return advRep(j._text);
}

// Build the poker-term corpus from poker.txt, bucketed by part-of-speech tag.
function loadToks() {
  var movie = join(p, " ");
  rm = new RiMarkov(3);
  var toks = movie.split(" ");
  var om, rs;
  var tooks = [];
  for (var i in toks) {
    if (toks[i].length > 3) {
    	om = split(split(split(toks[i],".")[0],",")[0],"?")[0];
      //if (RiTa.isNoun(om)&& !(RiTa.isAdjective(om))) {
      rs = new RiString(om);
      rs = rs.replaceAll("(","");
      rs = rs.replaceAll(")","");
      rs = rs.replaceAll(":","");
      rs = rs.toLowerCase();
      rs = rs.trim();
    	var ppp = RiTa.getPosTags(rs._text)[0];
      if (availPos.indexOf(ppp)!=-1 ) {
        tooks.push(rs._text);
        corp2[ppp].push(rs._text);
      }
 
    }
    //print(toks[i]);
  }
  //print(corp2)
  //saveJSON(corp2,"corp2.json");
  rm.loadTokens(tooks);
}
 
function advRep(s) {
  var poss = RiTa.getPosTags(s);
  var toks = RiTa.tokenize(s);
  var stringy = "";
  var randInt;
  for (var i in toks) {
    if (availPos.indexOf(poss[i])!=-1 && int(random(3))==0) {
      randInt = int(random(corp2[poss[i]].length))
      // Replace every occurrence of this token with the chosen poker term.
      for (var j in poss) {
        if (toks[j] == toks[i]) toks[j] = corp2[poss[i]][randInt];
      }
 
    }
  }
  var stringgg = new RiString(join(toks, " "));
  stringgg.replaceAll(" .", ".");
  return str(stringgg._text);
 
}

Basil.js code:

#include "../../bundle/basil.js";
 
// Version for basil.js v.1.1.0
// Load a data file containing your book's content. This is expected
// to be located in the "data" folder adjacent to your .indd and .jsx.
// In this example (an alphabet book), our data file looks like:
// [
//    {
//      "title": "A",
//      "image": "a.jpg",
//      "caption": "Ant"
//    }
// ]
var jsonString;
var jsonData;
var text = ["*here is where I included the quotes*"];
 
//--------------------------------------------------------
function setup() {
  var randSeed = 2892;
  while (b.pageCount()>=2) b.removePage();
 
  // Load the jsonString.
  jsonString = b.loadString("lines.json");
 
  // Clear the document at the very start.
  b.clear (b.doc());
 
 
  var imageX = 72*1.5;
  var imageY = 72;
  var imageW = 72*4.5;
  var imageH = 72*4.5;
  var anImageFilename = "images/front.jpg";
 
  var anImage = b.image(anImageFilename, 35, 35, 432-35*2, 648-35*2);
  anImage.fit(FitOptions.FILL_PROPORTIONALLY);
 
  // Make a title page.
  b.fill(244, 215, 66);
  b.textSize(48);
  b.textFont("Calvin and Hobbes","Normal");
  b.textAlign(Justification.LEFT_ALIGN);
  b.text("CHEWIE", 60,540,360,100);
 
 
  // Parse the JSON file into the jsonData array
  jsonData = b.JSON.decode( jsonString );
  b.println(jsonData);
 
 
  // Initialize some variables for element placement positions.
  // Remember that the units are "points", 72 points = 1 inch.
  var titleX = 195;
  var titleY = 0;
  var titleW = 200;
  var titleH = 600;
 
  var captionX = 72;
  var captionY = b.height - 108;
  var captionW = b.width-144;
  var captionH = 36;
 
  var txt, tok, max;
 
  var just;
  var names = ["n"];
  // Loop over every element of the book content array
  // (Here assumed to be separate pages)
 
  for (var i = 0; i < 9; i++) {
    // Create the next page.
    b.addPage();
    txt = text[randSeed+i];
    tok = b.splitTokens(txt," ");
    for (var j = tok.length-1; j >= 0; j--) {
      if (tok[j] === "Dwan"){
        names.push("d");
      }
      if (tok[j] === "Hellmuth") {
        names.push("h");
      }
      if (tok[j] === "Selbst") {
        names.push("s");
      }
      if (tok[j] === "Tilly") {
        names.push("t");
      }
      if (tok[j] === "Negreanu") {
        names.push("n");
      }
    }
 
    var ic = b.floor(b.random(0,names.length));
 
    ic = names[ic];
    names = ["n"]; // reset, keeping a default so the list is never empty
    if (ic == "d") max=6;
    if (ic == "h") max=7;
    if (ic == "i") max=11;
    if (ic == "n") max=9;
    if (ic == "s") max=6;
    if (ic == "t") max=3;
 
    anImageFilename = "images/"+ic + (i%max+1)+".jpg";
 
    // Load an image from the "images" folder inside the data folder;
    // Display the image in a large frame, resize it as necessary.
 // no border around image, please
 
 
 
    anImage = b.image(anImageFilename, 0, 0, 432, 648);
    anImage.fit(FitOptions.FILL_PROPORTIONALLY);
    b.opacity(anImage,70);
    if (i%2==0) {
      titleX = 50;
      just = Justification.LEFT_ALIGN;
    } else {
      titleX = 190;
      just = Justification.RIGHT_ALIGN;
    }
 
    b.textSize(16);
    b.fill(0);
 
    b.textFont("DIN Alternate","Bold");
    b.textAlign(just, VerticalJustification.BOTTOM_ALIGN  );
    var ttp = b.text(txt, titleX,titleY,titleW,titleH);
 
    // Create textframes for the "caption" fields
    b.fill(0);
    b.textSize(36);
    b.textFont("Helvetica","Regular");
    b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN );
    //b.text(jsonData."first"[0].caption, captionX,captionY,captionW,captionH);
 
  };
  b.addPage();
  imageX = 72*1.5;
  imageY = 72;
  imageW = 72*4.5;
  imageH = 72*4.5;
  anImageFilename = "images/back.jpg";
 
  anImage = b.image(anImageFilename, 35, 35, 432-35*2, 648-35*2);
  anImage.fit(FitOptions.FILL_PROPORTIONALLY);
}
 
// This makes it all happen:
b.go();

chewie-Body

Fullscreen: https://editor.p5js.org/full/B1LIV9a9m

I was interested in the idea of using corn on the cob as a visual medium. The kernels align in a way that is staggered uniformly. I also have never worked using pixel art with a hexagonal base, which ended up being surprisingly challenging.

I used a stock photo from this website. In Photoshop I was able to create three different frames containing "on" kernels, spaced with enough room between them that I could apply loose masks to display a single kernel at a time.

By using a rough mask I could isolate each kernel with enough room for variation so that the same mask could be used for each kernel while still getting precise edges.

Having never worked in hexagonal pixel space before, I opted to store the pixel data as 2D arrays because I figured it would be easier. It actually ended up being more difficult, because I had to compensate for the offset columns of kernels. Especially with small, low-resolution sprites like these, even an offset of half a pixel is quite noticeable. I had to save two copies of each sprite so that when the 2D arrays are converted to the hexagonal pixels there isn't any inconsistency.

The following are a few sketches of the sprites. I thought about making the design rotate about a sort of sphere but I decided to keep the illusion 2-dimensional.


var kernels = [];
var activeKernels;
var colSize;
var rowSize;
var img;
var kern1;
var kern2;
var kern3;
var kernMask;
var randDelay;
 
var t = 0;
var blink = 0;
 
var faces = [
  [
    [
      [1, -1],
      [-1, -1],
      [-1, 1],
      [0, 1],
      [1, 1]
    ],
    [
      [1, -2],
      [-1, -2],
      [1, 0],
      [-1, 0],
      [0, 1]
    ]
  ],
  [
    [
      [2, -2],
      [-2, -2],
      [-2, 1],
      [-1, 2],
      [0, 2],
      [1, 2],
      [2, 1]
    ],
    [
      [2, -2],
      [-2, -2],
      [-2, 1],
      [-1, 1],
      [0, 2],
      [1, 1],
      [2, 1]
    ]
  ]
,[[[0,0],[0,-1]],[[0,0],[1,-1]]]];
 
var rat;
 
function preload() {
  img = loadImage("assets/corn_0.png");
  kern1 = loadImage("assets/corn_1.png");
  kern2 = loadImage("assets/corn_2.png");
  kern3 = loadImage("assets/corn_3.png");
  kernMask = loadImage("assets/corn_matte.png");
}
 
 
 
 
var ctracker;
 
function setup() {
  // setup camera capture
  randDelay = random(50,200);
  var videoInput = createCapture();
  videoInput.size(400, 300);
  videoInput.position(0, 0);
 
  // setup canvas
  cnv = createCanvas(800, 800);
  rat = 2000 / width;
  rowSize = height / 16;
  colSize = width / 11;
  loadKernels();
 
 
  cnv.position(0, 0);
  // setup tracker
  ctracker = new clm.tracker();
  ctracker.init(pModel);
  ctracker.start(videoInput.elt);
  noStroke();
}
 
function draw() {
  push();
 
  translate(width,0);
  scale(-1,1);
  clear();
 
  strokeWeight(5);
  image(img, 0, 0, width, height);
  //image(kern1,0,0,width, height);
  //image(kernels[ck[0]][ck[1]].img,0,0,width,height);
 
 
  // get array of face marker positions [x, y] format
  var positions = ctracker.getCurrentPosition();
  var p = positions[62];
  var q = positions[28];
  var r = positions[23];
 
 
 
  // Debug: draw an ellipse at each tracked face point
  // for (var i = 0; i < positions.length; i++) {
  //   stroke("black");
  //   fill(map(positions[i][0], width * 0.33, width * 0.66, 0, 255), map(positions[i][1], height * 0.33, height * 0.66, 0, 255), 255);
  //   ellipse(positions[i][0], positions[i][1], 8, 8);
  // }

  if (p != undefined) {
    var d = dist(q[0], q[1], r[0], r[1]);
    var size = 0;
    if (d > 110) size = 1;
    if (d < 40) size = 2;
    var ck = getCoors(map(p[0], 0, 255, 0, 800), map(p[1], 0, 255, 0, 800));
    drawFace(ck[0], ck[1], size);
  }
  pop();
  if (t >= randDelay) {
    t=0;
    blink=7;
  	randDelay = random(50,200);
  }
  t++;
  if (blink>0) blink--;
  print (p);
}
 
function Kernel(col, row) {
  this.col = col;
  this.row = row;
  this.x = colSize * (col + 0.5);
  this.y = rowSize * (row + 0.75);
  this.upCol = col % 2 == 0;
  if (this.upCol) this.y -= rowSize / 2;
 
  this.img = img;
  if (col % 2 == 0) {
    if (row % 3 == 0) this.img = kern2;
    if (row % 3 == 1) this.img = kern3;
    if (row % 3 == 2) this.img = kern1;
  } else {
    if (row % 3 == 0) this.img = kern1;
    if (row % 3 == 1) this.img = kern2;
    if (row % 3 == 2) this.img = kern3;
  }
 
  this.show = function() {
    var l = this.x - colSize;
    var gLeft = l * rat;
    var t = this.y - rowSize;
    var gTop = t * rat;
    var w = colSize * 2;
    var gWidth = w * rat;
    var h = rowSize * 2;
    var gHeight = h * rat;
    var dis = this.img.get(gLeft, gTop, gWidth, gHeight);
    dis.mask(kernMask);
    image(dis, l, t, w, h);
  };
}
 
function loadKernels() {
  var col = [];
  for (var c = 0; c < 11; c++) {
    for (var r = 0; r < 16; r++) {
      col.push(new Kernel(c, r));
    }
    kernels.push(col);
    col = [];
  }
}
 
function getCoors(x, y) {
  var col = x / colSize - 0.5;
  if (round(col) % 2 == 0) y += rowSize / 2;
  var row = y / rowSize - 0.75;
  if (col < 0) col = 0; if (col > 10) col = 10;
  if (row > 15) row = 15;
  if (row < 0) row = 0;
  return [int(round(col)), int(round(row))];
}
 
 
 
function drawFace(col, row, size) {
  var face = faces[size][0];
  if (col % 2 == 0) face = faces[size][1];
  var p;
  var c = 0;
  var r = 0;
  for (var i = 0; i < face.length; i++) {
    p = face[i];
    c = int(col + p[0]);
    r = int(row + p[1]);
    // hide the upper (eye) kernels while blinking
    if ((c >= 0 && r >= 0 && c < 11 && r < 16) &&
        !(blink != 0 && p[1] < 0)) {
      kernels[c][r].show();
    }
  }
}

chewie-telematic

This is a chatroom where the more words you try to use in a single message, the more deteriorated and confusing the message becomes. If you send short messages of a few words, the room will usually send the message as is. If you try to use too many words or letters, the message starts to disappear.

My original concept for this chatroom was a platform where the user has to collect the materials needed to construct their sentence. I planned on including some sort of pixel pile that you would have to collect from to get enough material to type out and send your message. In the working app above, each message is given a certain amount of length tolerance: the longer a message is, the more likely characters are to be missing. The result is a tool that forces the user to use as few words as possible, at the risk of having their message made unreadable.
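The length-tolerance mechanic could be sketched like this (a hypothetical illustration, not the app's actual code; the function name, the tolerance parameter, and the 0.9 ceiling are all assumptions): each character's chance of being dropped grows with how far the message runs past its tolerated length, so short messages usually survive intact.

```javascript
// Sketch of the deterioration mechanic: messages within the tolerance
// pass through untouched; longer ones lose characters with a
// probability that grows with how far past the tolerance they run.
function deteriorate(msg, tolerance) {
  if (msg.length === 0) return msg;
  // How many characters past the tolerated length this message runs.
  var overflow = Math.max(0, msg.length - tolerance);
  // Drop probability ramps up with overflow, capped at 0.9 so even
  // very long messages keep a few fragments.
  var pDrop = Math.min(0.9, overflow / msg.length);
  var out = "";
  for (var i = 0; i < msg.length; i++) {
    if (Math.random() >= pDrop) out += msg[i];
  }
  return out;
}
```

With a tolerance of, say, 40 characters, "meet at noon" always arrives whole, while a rambling paragraph comes out riddled with holes.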

The design aspect this project addresses most clearly is its criticality and self-awareness. Ordinarily, chatrooms like this place no limit on the length of the messages you can send. With most things we do in life, however, there is a trade-off between pleasure or action and the resources spent to make it happen. By imposing an implied limit on message length, this chatroom makes the user reconsider the "luxury" of digital tools and their theoretically limitless resources.