nerual-arproject

Shoe sweet shoe.

The video was made with XR Remote, so unfortunately it's very grainy. Suffice it to say, a lot of things didn't work. I really wanted to let the viewer explore the shoe in 3D: walk around it and get in close to see small details of the shoe. I also wanted you to be able to tap the screen and set the shoe on fire.

Linux stuff:
Build & Run Settings: Gradle build system.
Export to apk first, then run apk on phone.

Install:
Open a terminal, navigate to where the .apk file is located, and run adb install <your-app>.apk (substitute the exported filename).
Or drag the .apk into a phone directory and install it from the file manager on the phone.

nerual-UnityEssentials

General:

If the Game window isn't working or Unity is crashing, edit Player Settings, Project Settings, and/or Edit > Graphics Emulation and select the lowest/oldest shader model and shader hardware tier. This may need to be repeated upon reopening.

1. importing

2. navigating unity

QWERT hot keys for navigation tools. Hold right-click and use WASD to move. Alt or Alt-Shift + mouse buttons.

3. assets
4. materials
5. prefabs
6. level building

Hold V to snap to a corner (vertex).

7. animation

dope sheets vs curves

From this point on, I worked from the progress scene in the exercise files because I couldn't import the character correctly (maybe because of a different Unity version and the Linux editor?).

9. adding audio

Turn the audio volume WAY down.

10. lighting
11. baking lighting

Be careful with the real-time lighting option during editing too; it may put strain on the Unity editor.

12. particles and fx

Use for chimney smoke.

I didn't get around to the other tutorials. Since I did this late anyway, I focused on the ones immediately helpful to the part of the project I was about to work on.

nerual-book

Chapter Title:
"A-Z...or something like that"
Description:
A cautionary introduction to existential crisis for the timid teen, based on what the internet thinks Nietzsche and others have to say about it.

URL:
https://drive.google.com/open?id=14COW_URiZ-VbcQXKRBhedCc26pi5anut

Screenshots:

PDFS:
old1
sample1-nerual
sample2-nerual

GIF (will fix later):

Interactive Version(updated):

Process:
I initially wanted to create some fake government documents or conspiracies, or fanfiction based on fake books. I liked the idea of a machine generating something that sounded like it could plausibly come from a crazy person, or a tween with bad writing. I ended up doing an alphabet book of existentialism, but with the same kind of mocking/cringe-viewing lens. I originally planned on using actual works of Nietzsche and other existential philosophers, but I thought it would make more sense to grab quotes from questionable sources like "101 Existential Quotes that'll Make You Question Everything." These included curated quotes from actual philosophers, but whether they were accurate or taken out of context, I didn't know. And that was the point. Unfortunately I couldn't find an elegant way to collect them, so it was just Google Search and copy-paste.

In regards to certain choices made:

  • I liked the rhythm of two sentences, and the way it sounded more like a children's book caption.
  • My intention with the aesthetics was just to have something that looked like the images I used to find all over Facebook, of "deep" thoughts stylized in an "edgy" way--so dingy white typewriter text on black it was.
  • I capitalized the word because I wanted it to stand out more, and make the reader read the sentence with a sense of rhythm, where the word is EMPHASIZED because it's IMPORTANT, while feeling a little annoyed or cringing at the way it does so.

In general, most choices were guided by my experience/appreciation of the "uses-irony-incorrectly, edgy teenager" stereotype. I wanted it to read like something a 13-year-old would make in their free time back in 2012, because I was probably that 13-year-old.

Evaluation of Results:
In my opinion, the results are mixed. There are some gems that are nonsensical in just the right way ("Beware of letters. They are gay.") or that read like the seemingly "deep" statements often found and shared on the internet. The concept is really simple, but I think there was a lot more I could have done with it, had I had time to play around with it. It's fun to look through some pages, but maybe not a whole 26 pages.

Because I didn't fine-tune it, a lot of bugs make their way onto the page too; occasionally, for example, the program thinks "an" and "such" are adjectives. Also, I attempted to purposefully use every letter in order, but I think it just came off as laziness. There are also a lot of repeats because I had a rather small data set to draw from.

I made something that appealed to my sense of humor, and I'm not sure if the humor or intentions translate to other people, or if it seems mean-spirited in the way it mocks a certain type of person (or if it seems mocking at all).

Extra thoughts:
I had some observations while I was making the project. After I posted the first link, I added in a few full works by Nietzsche (because I assume that's most people's immediate association with existentialism). This kind of messes up the idea of using just the information cherry-picked by internet folk, but it actually produced more varied sentences and had the same effect. I forgot that, obviously, shoving complex and nuanced ideas into this simple and narrow template would produce comical contradictions and nonsense. I also thought of my program as a third "dumbness" filter of sorts: first the internet gets its hands on the original pieces, then they go through my program, and then we get this nonsense.

Code:
Rita, for generating text, with rhyming template provided by Char Stiles.

var rhymes, word, data, wordy, tit, letter;
var numS;
var pagesArr = [];
var t = 0;
var tmax = 7000;
 
class Page {
  constructor(title, text){
    this.title = title;
    this.text = text;
  }
}
 
function setup()
{
  createCanvas(300, 300);
  //fill(255);
  //background(20);
 
  textFont("Courier New");
 
  //lexicon = new RiLexicon();
  //findRhymes();
  data = loadStrings('such_deep.txt');
 // words = data.split(' ');
  console.log('data loaded');
  //setInterval(findRhymes, 2000);
 
}
 
function keyPressed(){
  //findRhymes();
  console.log("getting pages...");
  getPages();
}
 
 
function getPages(){
  console.log("getting pages...");
  for (var i=0; i<26; i++){
    console.log(i);
    findRhymes();
    console.log("got rhymes");
    var p = new Page(tit, rhymes);
    pagesArr[i] = p;
  }
  var jsonObj = {};
  jsonObj.pages = pagesArr;
  //saveJSON(jsonObj, 'angst_pages.json');
  createButton('SAVE THE ANGST')
    .position(width/2, height/2)
    .mousePressed(function() {
    saveJSON(jsonObj, 'angst_pages.json');
  });
 
}
 
function draw()
{
  background(20);
  fill(220);
  textAlign(LEFT);
  text("hit  to deprive\nyourself of some meaning.",10,height-30);
  if (data == undefined){
    return;
  }
  textAlign(LEFT);
  textSize(28);
  text(tit, 30, 40);
 
  textAlign(RIGHT);
  textSize(14);
  textLeading(17);
  text(rhymes, 280, 100);
 
 
}
 
function newWord(){
  console.log("new word...");
  var w = data[floor(random(data.length))];
  if (!w)
    return "";
  //lexicon.randomWord();
  w = w.split(' ');
    console.log("split word...");
 
  w = w[floor(random(w.length))];
  w = RiTa.stripPunctuation(w);
  return w;
}
 
function numSyllables(w){
  // RiTa's getSyllables returns syllables joined by hyphens ("hel-lo"),
  // so the syllable count is the number of hyphens plus one.
  var syll = RiTa.getSyllables(w);
  var count = 1;
  for (var x = 0; x < syll.length; x++){
    if (syll.charAt(x) == '-')
      count++;
  }
  console.log(count);
  return count;
}
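Since RiTa's getSyllables() returns syllables joined by hyphens (e.g. "hel-lo"), the syllable count is the hyphen count plus one. Here is a standalone sketch of that counting, with no RiTa dependency (countSyllables is just an illustrative name):

```javascript
// Count syllables in a RiTa-style hyphenated string, e.g. "ex-is-ten-tial".
// Splitting on '-' yields one entry per syllable (hyphens + 1).
function countSyllables(syllString) {
  if (!syllString) return 0;
  return syllString.split('-').length;
}

console.log(countSyllables("ex-is-ten-tial")); // 4
console.log(countSyllables("shoe"));           // 1
```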
 
function rightWord(w){
	return RiTa.isAdjective(w) && 
     //w.charAt(0) == word.charAt(0) &&
		!RiTa.isAdverb(w) && 
		!RiTa.isNoun(w) &&
    (w != 'an') &&
    (w != 'such') &&
		!RiTa.isVerb(w) ;
    //(numS == numSyllables(w)) && 
}
 
function findRhymes() { // called by timer
  if (data == undefined){
    console.log('undef data');
    return;
  }
  console.log("rhymming...");
  do{
  	word = newWord();
  } while(!RiTa.isNoun(word));
  word = RiTa.singularize(word).toLowerCase();
  numS = numSyllables(word);
  letter = word.charAt(0);
  console.log("got word");
  t=0;
  do {
  	wordy = newWord();
    t++;
  } while(!rightWord(wordy) && t<tmax);
  //wordy = RiTa.singularize(wordy).toLowerCase();
  wordy = wordy.toLowerCase();
 
    console.log("got wordy");
 
 
  tit = letter.toUpperCase() + " is for\n " + word.toUpperCase() + ".";
  rhymes = "Beware of " + RiTa.pluralize(word) + ".\nThey are " + wordy + ".";
}

Basil.js, for creating the pages in InDesign

#include "../../bundle/basil.js";
// Version for basil.js v.1.1.0
// Golan Levin, November 2018

//--------------------------------------------------------
function makePage(i) {
 
  // Load a JSON file containing your book's content. This is expected
  // to be located in the "data" folder adjacent to your .indd and .jsx. 
  var j = i + 1;
  var jsonString;
  if (j > 9)
    jsonString = b.loadString("angst_pages_" + j + ".json");
  else
    jsonString = b.loadString("angst_pages_0" + j + ".json");
  makePage_fromFile(jsonString);
}
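The if/else filename padding above could also be factored into a small helper; a quick plain-JavaScript sketch, not basil-specific (pageFileName is just an illustrative name):

```javascript
// Zero-pad the page number to two digits, equivalent to the if/else above.
function pageFileName(j) {
  var padded = (j > 9) ? String(j) : "0" + j;
  return "angst_pages_" + padded + ".json";
}

console.log(pageFileName(3));  // "angst_pages_03.json"
console.log(pageFileName(12)); // "angst_pages_12.json"
```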

function makeSamplePage(){
  jsonString = b.loadString("angst_pages_sample.json");
  makePage_fromFile(jsonString);
}

function makePage_fromFile(jsonString){
  // Clear the document at the very start. 
  b.clear (b.doc());

  // Parse the JSON file into the jsonData array
  var jsonData = b.JSON.decode( jsonString );
  //var poems = jsonData.poems; 
  var poems = jsonData.pages; 
  b.println("Number of poems: " + poems.length);

  // Initialize some variables for element placement positions.
  // Remember that the units are "points", 72 points = 1 inch.
  var inch = 72;

  var titleW = inch * 5.0;
  var titleH = inch * 5;
  var titleX = (b.width / 2) - 3*(titleW / 5) ; // centered 
  var titleY = inch * 1.0;

  var textX = inch * -2; 
  var textY = inch * 5.0;
  var textW = inch * 7.0;
  var textH = inch * 3.0;
  


  // Loop over every element of the book content array
  // (Here assumed to be separate pages)
  for (var i = 0; i < poems.length; i++) {
    //----------------------------------
      // draw the background first
    var r = 3/4;
    b.noStroke();
    b.fill(30,30,30);
    b.rectMode(b.CENTER);
    b.rect(b.width/2, b.height/2, r*b.width, r*b.height);
    //----------------------------
    // Format and display the "title" field.
    var poemTitle = poems[i].title;
    
    b.noStroke(); 
    b.fill(240,240,240);
    b.textSize(20);
    b.textFont("Special Elite","Regular"); 
    //b.textFont("","Regular"); 
    b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN );
    var thePoemTitleFrame = b.text(poemTitle, titleX,titleY,titleW,titleH);
    var titleWords = b.words(thePoemTitleFrame);
    b.typo(titleWords[0], 'pointSize', 150);
    b.typo(titleWords[titleWords.length-2], 'underline');

    //----------------------------
    // Format and display the poem's "text" field. 
    // This time, let's do some typographic tricks.
    //b.textFont("Special Elite","Regular"); 
    b.textAlign(Justification.RIGHT_ALIGN, VerticalJustification.CENTER_ALIGN );
    
    // Get the text of the i'th poem
    var poemText = poems[i].text;
    
    // Create an InDesign TextFrame from this text
    var thePoemTextFrame = b.text(poemText, textX,textY,textW,textH);
    
    // Fetch the individual words in this TextFrame; 
    // Underline the first (0'th) word. 
    var poemWords = b.words (thePoemTextFrame);
    //b.typo(poemWords[0],'pointSize', 100);

    //----------------------------
    // Create the next page. 
    if (i < (poems.length-1)){
      b.addPage();
    }
  }

  // Ensure that there are an even
  // number of pages in the document.
  if ((poems.length % 2) > 0){
    b.addPage();
  }
}

function remPages(){
  while(b.pageCount() > 1){
    b.removePage();
  }
}

function savePage(i){

  if (i > 9)
    b.savePDF(i + "-nerual.pdf");
  else
    b.savePDF('0' + i + "-nerual.pdf");
  remPages();  
}

function saveSamplePage(){
  b.savePDF("sample-nerual.pdf");
  return;
}

function setup(){
  b.clear (b.doc());
  remPages();
  makeSamplePage();
  saveSamplePage();
  // var numPages = 1;
  // for( var i=0; i < numPages; i++) {
    // makePage(i);
    // savePage(i);
  // }  
}


// This makes it all happen:
b.go(); 

nerual-lass-Automaton

Karen
Karen would like to speak to the manager about the quality of food in this establishment. Place a yummy snack into Karen's hand and watch her eat it because god damn it she's going to get her money's worth.

(Concept Development) This was very much a creation guided by material. Our original idea turned out to be unfocused and too hard to do with our limited Arduino skills, but we had this very realistic wig and some bugs. So we decided to have a pretty lady eating bugs. As we continued working, Alyssa pointed out that she looked like a suburban mom, and the aesthetics followed from there. We thought it'd be interesting to mock the uptight middle class soccer mom stereotype by making her do something unconventional.

(Technical) Golan suggested we look at Linkage for simulations of scooping motions, and that's where we got the design for the arm apparatus (laser cut). The linear motion of the jaw was thanks to some 3D-printed parts from the Physical Computing Lab. I ended up building and troubleshooting the arm mostly, while Alyssa worked on the jaw, and later the software to synchronize their motion. We worked together to assemble the whole thing and troubleshoot (so much... too much; my knees still quake at the memory).

The results are "jank," because we had intended to add some things to cover up the wiring and tape. There were probably things that could have been done to make the arm more a part of the character, but we didn't have enough time. However, I think the arm mechanism works well and looks impressive. Alyssa painted the face, and I think it ties the whole setup together very well.

The happy coincidence, however, is that one can appreciate the possible implications. During the critique, Caroline mentioned the possibility of interpreting the housewife as a soulless machine cog. In general, the messiness of the presentation and interaction (the latter was intentional) made for some fun comments throughout our work on how messy Karen is, as if she were a real character.

nerual-LookingOutwards04

I really like this "Tangible Scores" project by Enrique Tomas, a PhD student at the University of Art and Design of Linz (at the time, presumably). It is essentially a tool for making music: a wood board with scores on it in various shapes, such as a scribble or a series of parallel lines. A program senses the user's touch and gestures across the augmented surface and translates them to sound. The actual mechanics of the project are very, very complicated (there is a paper explaining the design and concept in more depth).

I love when the human mind sees meaning in abstract patterns (such as a series of lines looking like instrument strings), and art and technology let us experience those imaginative thoughts for real (strumming your finger across it actually creates a strumming sound). And I find the combination of tactile, visual, and sonic elements particularly interesting, since you can essentially see and touch the musical forms created from it (because the musical forms were created from these visual and tactile forms to begin with).

What is somewhat off-putting about the project is the contrast between the way it is demonstrated in the Vimeo video and the way it was displayed in a gallery. The gallery setup invites a very different kind of interaction, where it seems like you would feel more like you're making sounds than making music. Something about the gallery setting, despite being immersive, is rather empty and unwelcoming.

Vimeo video still of Enrique performing
Gallery showing

nerual-Body

It's clouded out there. Why go outside when you can vicariously experience the great outdoors as a cloud? Turn your head and blow yourself back and forth.

Video:

GIFs:

Image:

Process:

I really like clouds. One of the commonalities among some examples of face tracking I've seen is giving non-human objects human qualities, so I thought I'd try it with clouds. I was interested in a fun and cute interaction that lasted a few minutes.

These are some initial sketches and ideas for doing more than just blowing yourself around as a cloud. Unfortunately I didn't get to much beyond that.

I used the p5.js + Glitch + clmtrackr template provided. I used Matter.js for the physics, although in retrospect, the p5.js built-in physics would have sufficed.

clmtrackr returns an array of face-point coordinates from the video input. Here is a screenshot I used to learn which points went with which features, so I could isolate the eyes and mouth and calculate ratios to determine whether the mouth was open and which way the face was facing.
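A distilled sketch of those ratio checks (the point indices follow clmtrackr's 71-point model as used in my code below; my labels for which index is which feature are my reading of the point map, and the sample points here are fake, just to exercise the math):

```javascript
// positions[i] is an [x, y] pair. Indices 44/50 (mouth corners),
// 47/53 (lip points), and 1/13 (face edges) follow clmtrackr's point map.
function mouthRatio(positions) {
  var mouthWidth  = positions[44][0] - positions[50][0];
  var mouthHeight = positions[47][1] - positions[53][1];
  return mouthWidth / mouthHeight; // small ratio => tall open mouth => "blowing"
}

function turnRatio(positions) {
  // Width of the face on each side of the mouth; a lopsided ratio
  // means the head is turned (>1.5 one way, <1 the other).
  return (positions[44][0] - positions[1][0]) /
         (positions[13][0] - positions[50][0]);
}

// Fake, roughly symmetric points (only the indices above matter):
var pts = [];
for (var i = 0; i < 71; i++) pts.push([0, 0]);
pts[1]  = [0, 50];    // left face edge
pts[13] = [100, 50];  // right face edge
pts[50] = [40, 60];   // one mouth corner
pts[44] = [60, 60];   // other mouth corner
pts[53] = [50, 55];   // lip point
pts[47] = [50, 70];   // lip point

console.log(mouthRatio(pts)); // 20 / 15 ≈ 1.33
console.log(turnRatio(pts));  // 60 / 60 = 1
```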

It was very difficult to get the motion tracker to stabilize and reliably detect movements, so I didn't get around to implementing networking or my other ideas.

As I was transplanting the face points, I discovered some quirky movements that resulted, such as the interesting face created when the eye position, but not rotation, is tracked. It gave the cloud (I call him Claude) a wonky face that I liked. I originally left the mouth as a line/circle (like in the sketches), but I wanted to capture more of the user's orientation and movement, and let it dictate rather than just guide the motion. In general, I liked this aesthetic that arose from selectively misinterpreting and/or ignoring certain features.

Ultimately, however, I feel like this project was a disappointment. Part of it was me not taking the time to set up the right structures to handle a bunch of bodies interacting with physics. But another part might have been deciding to go with an idea that relied mostly on a smooth final result, rather than having other aspects to lean on. I think I made some decent efforts with adding in the other clouds, but their movement (they move way too fast when the video input isn't showing) and the color scheme in general are pretty lacking.

Code:

/* face-ctrl.js */
var ctracker;
//var mouth_pos;
var mouth_x = 300;
var mouth_y = 300;
var facingLeft, facingRight;
var blowRatio, turnRatio;
var m_width, m_height;
 
var ox,oy;
var prev;
var leye, reye;
var pos1 = [];
var pos2 = [];
var posMid = [];
 
function setup_clm(){
  //setup camera capture
  var videoInput = createCapture(VIDEO);
  videoInput.size(w/4,h/4); 
  //uncomment to view video
  //videoInput.position(w/4*3, h/4*3);
  //videoInput.hide();
 
  ctracker = new clm.tracker();
  ctracker.init(pModel);
  ctracker.start(videoInput.elt);
  mouth_x = 300;
  mouth_y = 300;
}
 
 
 
function updateVars(mx, my, scale){
  var positions = ctracker.getCurrentPosition();
 
  for (var i=positions.length-1; i<positions.length; i++) {
    ox = positions[60][0]*scale;
    oy = positions[60][1]*scale;
    m_width = (positions[44][0] - positions[50][0]);
    m_height = (positions[47][1] - positions[53][1]);
    turnRatio = ((positions[44][0]-positions[1][0])/(positions[13][0]-positions[50][0]));
    facingLeft = turnRatio > 1.5;
    facingRight = turnRatio < 1;
  }
}
 
function isBlowing(){
  return blowRatio < 2.5;
}
 
function drawBlowHole(mx, my, scale){
    var positions = ctracker.getCurrentPosition();
  blowRatio = m_width/m_height;
 
  push();
  noFill();
  strokeWeight(5);
  var mw = m_width*scale;
  var mh = m_height*scale;
 
  if (isBlowing()){
    //wind
    stroke(0);
    var bx1 = mx + (facingLeft ? -1 : 1)*mw*2;
    var bx2 = mx + (facingLeft ? -1 : 1)*mw*3;
    var by = my;
    var sep = 20;
    push();
    strokeWeight(3);
    line(bx1, by, bx2, by);
    line(bx1, by+sep, bx2, by+sep);
    line(bx1, by-sep, bx2, by-sep);
    pop();
  }
 
  //face
  for (var i=0; i<positions.length; i++) {
    var x = positions[i][0]*scale;
    var y = positions[i][1]*scale;
    push();
    stroke(0);
    strokeWeight(scale*1.2);
    strokeJoin(ROUND);
    strokeCap(ROUND);
    //EYES
    if (i == 27 || i==32){
      stroke(0);
      noFill();
      pos1[0]= mx + x-ox;
      pos1[1] = my + y-oy +50;
      ellipse(pos1[0],pos1[1],40,20);
      fill(0);
      noStroke();
      ellipse(pos1[0],pos1[1],20,20);
    }
 
    //REAL MOUTH
    if (i<44 || i>61) continue;
    if (i==58 && !isBlowing()){
      i=62;
      continue;
    }
    if (i == 55) pos2 = [positions[44][0]*scale,positions[44][1]*scale];
    else if (i == 61) pos2 = [positions[56][0]*scale,positions[56][1]*scale];
    else pos2 = [positions[i+1][0]*scale,positions[i+1][1]*scale,x,y];
 
    pos1[0]= mx + x-ox;
    pos1[1] = my + y-oy;
    pos2[0] = mx + pos2[0]-ox;
    pos2[1] = my +pos2[1]-oy;
    posMid[0] = (pos1[0] + pos2[0])/2;
    posMid[1] = (pos1[1] + pos2[1])/2;
 
    line(pos1[0],pos1[1],posMid[0],posMid[1]);
    line(pos2[0],pos2[1],posMid[0],posMid[1]);
  }
  pop();
}
 
/* sketch.js */
 
 
var socket = io();
var clients = {};
var data = {};
var w = 800;
var h = 600;
var size;
var videoInput;
var ctracker;
var claude;
var clouds = [{x:100,y:100,scalar:100, w:3, h:4},
              {x:200, y:h-100, scalar:-150, w:5, h:7},
              {x:w-300, y:h, scalar: 70, w:8, h:5},
              {x:w-100, y:200, scalar: -50, w:6, h:6}
             ]
var t = 0;
var Engine = Matter.Engine,
    Render = Matter.Render,
    World = Matter.World,
    Bodies = Matter.Bodies,
    Body = Matter.Body,
    Constraint = Matter.Constraint;
 
function setup_matterjs(){
  engine = Engine.create();
  world = engine.world;
  engine.world.gravity.y = 0;
  engine.timing.timeScale = 0.2;
  //makeBoundaries(width,height);
  Engine.run(engine);
}
 
function setup() {
  // setup canvas
  var cnv = createCanvas(w,h);
  cnv.position(0, 0);
  setup_clm(); //face-ctrl
  setup_matterjs();
  size = 50;
  claude = new Cloud(mouth_x, mouth_y, size);
  World.add(world, claude.body);
  started=0;
}
 
function drawCloudShape(x,y,h,w){  
  push();
  fill(250, 250, 250);
  noStroke();
  arc(x, y, 25 * h, 20 * h, PI + TWO_PI, TWO_PI);
  arc(x + 10*w, y, 25 * h, 45 * h, PI + TWO_PI, TWO_PI);
  arc(x + 25*w, y, 25 * h, 35 * h, PI + TWO_PI, TWO_PI);
  arc(x + 40*w, y, 30 * h, 20 * h, PI + TWO_PI, TWO_PI);
  pop();
}
 
function drawCloud(x,y,w,h){
  updateVars(x,y,w/2);
  drawCloudShape(x-w*10,y+h*7,w,h);
  drawBlowHole(x,y,w/2);
}
 
function drawSun(){
  push();
  noStroke();
  fill('#FFEC5E');
  var sun = {x: map(hour(), 0, 23, 150, w-150), y: 150, r:200}; // hour() ranges 0-23
  ellipse(sun.x, sun.y, sun.r);
  pop();
}
 
function draw() {
  background('#A6D7FF');  
  drawSun();
  sass();
 
  push();
  //uncomment to mirror
  //translate(w,0);
  //scale(-1.0,1.0);
  strokeWeight(1);
  claude.update();
  drawCloud(claude.pos.x, claude.pos.y, 7, 5);
  pop();
 
  var wind = 0.1 * (facingLeft ? -1 : 1);
  var force = {x:wind,y:0};
  if (blowRatio < 2) Body.applyForce(claude.body, claude.pos, force);
  for (var i=0; i < clouds.length; i++) otherClouds(clouds[i]);
}

function sass(){
  push();
  if (claude.pos.x > w+200 || claude.pos.x < -200){
      textFont('Georgia');
      textSize(30);
      textAlign(CENTER);
      strokeWeight(1);
      fill(0);
      text("the real world doesn't have wrap-arounds, chump\n go outside",w/2,h/2);
    }
  pop();
}
 
function makeBoundaries(w,h){
  console.log("width="+str(w)+" height="+str(h));
  var strength = 20;
  var options = {isStatic:true, restitution: 0.2};
  var bottom = Bodies.rectangle(w/2, h, w+50, strength, options);
  var top = Bodies.rectangle(w/2, 0, w+50, strength, options);
  var left = Bodies.rectangle(0, h/2, strength, h+50, options);
  var right = Bodies.rectangle(w, h/2, strength, h+50, options);
  World.add(world,top);
  World.add(world,bottom);
  World.add(world,left);
  World.add(world,right);
}
 
function otherClouds(cloud){
  var scalar = cloud.scalar;
  var x1 = cloud.x + (scalar * cos(t));
  drawCloudShape(x1, cloud.y, cloud.w, cloud.h);
  //fill(0); noStroke();
  stroke(0); noFill();
  var dir = cloud.scalar < 0 ? 1 : -1;
  var y = cloud.y-cloud.h*4;
  ellipse(x1 + cloud.w*15, y,20,10);
  ellipse(x1 + cloud.w*3, y, 20, 10);
  //ellipse(x1, cloud.y, 70, 70); //mouth?
  t+=0.005;
}

nerual-LookingOutwards03

I was really intrigued by Lauren McCarthy and Kyle McDonald's "How We Act Together," both for the interaction between individuals and the interaction between computer and individual. The project shows you a stream of pictures of other people who interacted with it before, for as long as you keep performing certain facial movements. It explores modern interaction, in both positive and negative ways.

I think it's interesting that it attempts to pose these questions about modern interaction, which involves lots of virtual interaction, through virtual interaction itself, with very distant people. The whole setup is already "awkward and intimate," and it's cool that they chose to accentuate that with a kind of interaction that induces a bodily response.

http://lauren-mccarthy.com/How-We-Act-Together

nerual-Telematic

Visual Echoes: Let your interactions leave a visual footprint. WASD to move.

Notes on bugs: a player isn't removed if they disconnect. If you refresh, you will start with a fresh screen, but on everyone else's screen you will appear as a new player, and your old particle will remain as a movable object.

Looks cooler with more people, but also potentially gets buggier.

GIF:

I wanted to explore equal collaboration/competition, creating an environment where either can manifest. In the process of working with a physics engine, I became interested in incorporating the ceding of control to external forces. In this case, you and the other players may be collaborating, but there is still chaos that hinders it, yet creates satisfying afterimages. The white line between players makes the canvas itself dynamic, as it erases past drawings.

This is getting into "it's a feature not a bug" territory, but I actually like the freedom you have with the thin lines, because now you have to negotiate the speed of your movements as well, in order to create or avoid creating smooth shapes.

I didn't get to try everything I wanted to do, but I think I touched on some ideas worth exploring further. It lacks a lot of polish in terms of the color choice and overall feel, as I definitely could have fiddled with the design elements more.

My original idea was to create a many-headed worm (inspired in part by the cartoon CatDog), but I ended up exploring the visuals that result from interactions rather than the gamified mechanics.

These are some progress screenshots of what it might have looked like with a chalkboard kind of aesthetic.

glitch
glitch
2 player interaction
one player

Some things to explore still:

  • using real colors
  • changing the aspect ratio
  • adding constraints
  • smoothing out
  • incorporating instructions
  • distinguishing features for the players
  • different shapes

Below are some sketches of the original idea. I discovered that you could record the path of the interaction and I thought it might be more interesting to deal with geometric relations instead.

Concept sketches
I successfully modeled a worm-like creature but I was unable to make one player the head and the other player the tail.

Code can be found on Glitch: https://glitch.com/edit/#!/visual-echoes

Future Work:

  • Fix bugs: make sure players disconnect properly
  • Fiddle with colors and transparency more
  • Fork project to explore having the midpoint between the two players be the drawing brush

nerual-viewing04

Spectacle describes work that uses technology in novel and spectacular ways, but does so at an inaccessible scale and without acknowledging the context and implications that surround it.

Speculation uses technology as a medium to explore society and technology itself, but it is inaccessible because of its lack of polish, and it misses out on what could be explored when the artist themself is working directly with the technology.

One project that comes to mind are the "satisfying" simulations done by Andrea Wannerstedt. It seems to fall more into the category of spectacle, engaging the viewer with its polish and using the precision of technology to hit the sweet spot with timing. It comes closer to acceleration than drag, as it seems to have be operating at the height of 3D modeling in a 2D canvas and has some aesthetics that make it universal. It is hard to judge that at this time though. It falls more along visibility because it reaches a large audience. It comes sits between surplus and waste, simply because its scale is so small, although maybe more on the surplus side given its association with quick, cheap social media entertainment. It is closer to commerce than art, in its current contexts at least, because it is so easily accessible, spreading as GIF loops through social media and second-rate news blogs, eventually without credit on many. And it is a style that has already be used in many advertisements. It is very squarely on the function side, scrubbing out all chaos as much as possible to leave us with the impossibly perfect graphic.