dinkolas-arsculpture

Stair Into Space - casher & dinkolas

Stair Into Space is a multi-floor augmented reality installation; walk up the CFA stairwell to explore sea, land, sky, and space.

Stair Into Space superimposes a world of many wonders on top of the CFA staircase, and thereby transforms the act of walking up stairs into the exploration of a space from a perspective that would otherwise be impossible. One cannot normally move freely up and down through a space without assisting structures. Our project uses an existing structure as a means of moving about in an imagined world: the viewer exploits the architectural reality in order to see an augmented one.

 

dinkolas-UnityTutorial

I watched a few of the scripting tutorials, and Unity is set up surprisingly similarly to the things we've done with p5, Processing, Glitch, etc. I've gotten pretty used to functions that are called every frame, or on every mouse click, or on whatever the event is, so learning to make cool things with Unity scripts should just be a matter of familiarizing myself with the built-in capabilities of the UnityEngine library.
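
To map it onto what I already know, here's roughly the p5-style skeleton I'm used to, with comments noting the closest Unity equivalents I know of (Start(), Update(), and OnMouseDown() on a MonoBehaviour). This is just a sketch for comparison, not code from the tutorial:

function setup() {        // like Unity's Start(): runs once at the beginning
  createCanvas(400, 400);
}

function draw() {         // like Unity's Update(): runs every frame
  background(220);
  ellipse(mouseX, mouseY, 20, 20);
}

function mousePressed() { // like Unity's OnMouseDown() on the clicked object
  print("clicked at " + mouseX + ", " + mouseY);
}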

Here's a screenshot of a little game I made where you can move around and click on cubes to make them shoot in a certain direction.

dinkolas-book

Bioinvasive Dingus

The U.S. judicial branch, bioinvasion, war, and sociology collide with the vocabularies of Lewis Carroll and Steve Brule in their first and last ever mash-up.
Here is a .zip containing 25 iterations of 10-page chapters:

https://drive.google.com/file/d/1PfSEv24RcGyA8eCPXGnYXw5h3YIFgONi/view?usp=sharing

The text portion of this project was generated using a combination of Markov chains. First, the body text was generated from a corpus of academic papers on serious subjects ranging from Supreme Court decisions to changing migration habits, selected from the MICUSP database of student papers. This Markov chain was word-based, with an n-gram length of 4.

Next, random nouns were selected from the text and replaced with other generated words. These replacement words were generated letter by letter, with an n-gram length of 2, from Lewis Carroll's Jabberwocky and transcripts of Check It Out! With Doctor Steve Brule. The words can be read and heard in isolation here (click to generate a new word): https://editor.p5js.org/dinkolas/full/ryIcv99aX

The resultant text is a mishmash of technical jargon, outright nonsense words, and serious problems with the world obscured by a dense dialect. Finally, images were generated by rotating and superimposing images from Yale's face database and overlaying selected words from the "glossary" of generated words. These images are strung through the text, breaking it up and making it even more difficult to read line to line. Visually and textually, the generated nonsense words make it almost impossible to parse the discussion of national and global crises.

Here's the code for the text:

var dingus, dingusGen, input, markBody;
var bigJSON = { versions: [] };
for (var v = 0; v < 25; v++) { bigJSON.versions.push({ pages: [] }); }
 
function preload()
{
  dingus = loadStrings('dingus.txt');
  input = loadStrings('input1.txt'); 
}
 
function setup() { 
  createCanvas(500,400);
  textSize(12);
  textAlign(LEFT);
  dingusGen = new Markov(2, dingus.join(" ").split(""), " ", 100);
  markBody = new RiMarkov(4);
  markBody.loadText(input.join(' '));
 
  for (var version = 0; version < 25; version++)
  {
    for (var page = 0; page < 10; page++)
    {
      bigJSON.versions[version].pages[page] = genText(version, page);
    }
  }
  saveJSON(bigJSON, "final1.json");
}
 
// genText: generate 20 sentences with the word-level chain, then replace about
// half of the nouns with letter-by-letter "dingus" words; some replacements get
// pluralized and some are added to the page's glossary.
function genText(version, page) {
  background(255);
  var final = RiTa.tokenize(markBody.generateSentences(20).join(' '));
  var glossary = [];
 
  for (var i = 0; i < final.length; i++)
  {
    if (RiTa.isNoun(final[i]) && random(1) < 0.5)
    {
      // Regenerate until the replacement word is roughly as long as the
      // original noun (between its length and its length + 5).
      var result = dingusGen.generate().join("");
      while (result.length < final[i].length || result.length > final[i].length + 5)
      {
        result = dingusGen.generate().join("");
      }
      if (final[i].charAt(0) == final[i].charAt(0).toUpperCase())
      {
        result = result.charAt(0).toUpperCase() + result.slice(1);
      }
      if (random(1) < 0.2)
      {
       	result = RiTa.pluralize(result); 
      }
      if (random(1) < 0.2)
      {
        glossary.push(result.charAt(0).toUpperCase() + result.slice(1)); 
      }
 
      final[i] = result;
    }
  }
 
  var thisJSON = {}; 
  thisJSON.glossary = glossary;
  thisJSON.text = RiTa.untokenize(final);
  //text(version + "_" + page, 25, 25);
  text(RiTa.untokenize(final), 50, 50, 400, 400);
  return(thisJSON);
}
 
function Markov(n, input, end, maxLen) {
	this.n = n;
  this.ngrams = [];
  this.tokens = [];
  this.end = end;
  this.maxLen = maxLen;
  this.N = 0;
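  // The n-gram counts live in one flat array; indexFromGram maps a gram to an
  // index by treating its tokens as digits of a base-(tokens.length) number,
  // and indexToGram inverts that mapping.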
  this.indexFromGram = function(gram) {
    var index = 0;
    for (var i = 0; i < n; i++)
    {
      index += this.tokens.indexOf(gram[i]) * pow(this.tokens.length, n - i - 1);
    }
    return index;
  }
  this.indexToGram = function(index) {
    var gram = [];
    for (var i = 0; i < n; i++)
    {
      gram.unshift(this.tokens[(Math.floor(index/(pow(this.tokens.length, i)))) % (this.tokens.length)]);
    }
    return gram;
  }
 
  for (var i = 0; i < input.length; i++)
  {
    if (!(this.tokens.includes(input[i])))
    {
      this.tokens.push(input[i]);
    }
  }
 
  for (i = 0; i < pow(this.tokens.length, n); i++)
  {
    this.ngrams.push(0);
  }
 
  var gram = [];
  for (i = 0; i < input.length - n + 1; i++)
  {
    gram = [];
    for (var j = 0; j < n; j++)
    {
      gram.push(input[i + j]);
    }
    this.ngrams[this.indexFromGram(gram)] ++;
  }
 
  for (i = 0; i < this.ngrams.length; i++)
  {
    this.N += this.ngrams[i];
  }
 
 
  // seed: pick a random starting n-gram, weighted by how often it occurred.
  this.seed = function() {
    var randInd = Math.floor(random(this.N));
    var n = 0;
    for (var i = 0; i < this.ngrams.length; i++)
    {
      n += this.ngrams[i];
      if (n > randInd)
      {
        return this.indexToGram(i);
      }
    }
    print("seed is fucked");
    return [];
  }
 
  // nextToken: given the previous n-1 tokens, sample the next token in
  // proportion to the counts of the n-grams that begin with that prefix.
  // Pushing tokens[0] (index 0) pads the gram so indexFromGram returns the
  // base index of that block of counts.
  this.nextToken = function(gram) {
    gram.push(this.tokens[0]);
    var index0 = this.indexFromGram(gram);
    var N = 0;
    for (var i = 0; i < this.tokens.length; i++)
    {
      N += this.ngrams[index0 + i];
    }
    var n = 0;
    var randInd = Math.floor(random(N));
    for (i = 0; i < this.tokens.length; i++)
    {
      n += this.ngrams[index0 + i];
      if (n > randInd) return this.tokens[i];
    }
    print("nextToken is fucked");
    print(gram);
    return 0;
  }
 
  // generate: re-seed until the starting gram contains no end token, then
  // append tokens until the end token (or maxLen) is reached, and return the
  // result without the trailing end token.
  this.generate = function() {
    var out = this.seed();
    var i = 0;
    while (out.includes(this.end) && i < this.maxLen)
    {
      out = this.seed();
      i++;
    }
    i = 0;
    while (out[out.length - 1] != this.end && i < this.maxLen)
    {
      out.push(this.nextToken(out.slice(out.length - n + 1, out.length)));
      i++;
    }
    return out.splice(0, out.length - 1);
  }
}

And the code for the images:

var book;
var images = [];
var img;
var offset;
var c;
var wordsSoFar;
 
function preload() {
  // Load face images 001.png, 011.png, ..., 221.png (zero-padded to 3 digits).
  for (var i = 1; i <= 230; i += 10)
  {
    var filename;
    if (i >= 100) { filename = "" + i + ".png"; }
    else if (i >= 10) { filename = "0" + i + ".png"; }
    else { filename = "00" + i + ".png"; }
    images.push(loadImage("images/" + filename));
  }
  book = loadJSON('bigBoyFinal1.json');
  wordsSoFar = loadStrings('wordsSoFar.txt');
}
 
function setup() {
  c = createCanvas(300, 300);
  //imageMode(CENTER);
  textAlign(CENTER);
  textSize(40);
  background(200);
  var count = 0;
  print(book.versions[0].pages[0].glossary);
  for (var v = 0; v < book.versions.length; v++)
  {
    for (var p = 0; p < book.versions[v].pages.length; p++)
    {
      for (var w = 0; w < book.versions[v].pages[p].glossary.length; w++)
      {
        genImage(book.versions[v].pages[p].glossary[w]);
      }
    }
  }
}
 
// genImage: stack several randomly rotated, half-transparent face images
// (roughly one layer per two letters of the word) with varying blend modes,
// then save the canvas under a random numeric filename.
function genImage(word) {
  background(255);
  blendMode(BLEND);
  var offset = 1;
  for (var i = 0; i < word.length/2; i++)
  {
    push();
    translate(150,150);
    rotate(random(360));
    translate(-150,-150);
    tint(255,127);
    image(images[Math.floor(random(images.length))], 0, 0, width, height*offset);
    blendMode(MULTIPLY);
    if (i % 2 == 0) blendMode(ADD);
    offset += 0.1;
    pop();
  }
  saveCanvas(c, Math.floor(random(50,100)), "jpg");
}

And the code for the PDF generation in basil.js:

#includepath "~/Documents/;%USERPROFILE%Documents";
 
#include "basiljs/bundle/basil.js";
 
 
 
function draw() 
{
    console.log("hello");
 
	var dpi = 72;
	b.textSize(9);
	var json = b.JSON.decode(b.loadString("bigBoyFinal1.json"));
	b.clear(b.doc());
	for (var v = 0; v < json.versions.length; v++)
	{
		b.page(1);
		var wSum = 0;
		b.clear(b.doc());
		for (var p = 0; p < json.versions[v].pages.length; p++)
		{
			b.noStroke();
			b.fill(0);
			b.textFont('Gadugi', 'Regular');
			var currentText = b.text(json.versions[v].pages[p].text, 1*dpi, 1*dpi, 4*dpi, 7*dpi);
			// Bold every word in the body that matches a glossary entry
			// (case-insensitive, since glossary entries are capitalized).
			var allWords = b.words(currentText);
			var glossary = json.versions[v].pages[p].glossary;
			for (var i = 0; i < allWords.length; i++)
			{
				for (var g = 0; g < glossary.length; g++)
				{
					if (allWords[i].contents.toLowerCase() == glossary[g].toLowerCase())
					{
						b.typo(allWords[i], 'appliedFont', 'Gadugi\tBold');
						break;
					}
				}
			}
 
			// Lay out this page's glossary words down the page: x sways sinusoidally
			// around the center, y is evenly spaced; a slightly larger invisible
			// ellipse makes the body text wrap around each face-filled circle.
			for (var w = 0; w < json.versions[v].pages[p].glossary.length; w++)
			{
 
				wSum++;
				var x = 3*dpi + 2*dpi*b.sin(wSum/3);
				var y = b.map(w, 0, json.versions[v].pages[p].glossary.length, 1.2*dpi, 8.2*dpi);
				b.noFill();
				var wrapper = b.ellipse(x, y, 0.9*dpi+0.2*dpi*b.sin(wSum/4), 0.9*dpi+0.2*dpi*b.sin(wSum/4));
				wrapper.textWrapPreferences.textWrapMode = TextWrapModes.CONTOUR;
				var circle = b.ellipse(x, y, 0.75*dpi+0.2*dpi*b.sin(wSum/4), 0.75*dpi+0.2*dpi*b.sin(wSum/4));
				try {
					var imgCircle = b.image("FaceImgBgs/" + Math.floor(b.random(73)) + ".jpg", circle);
				}
				catch(error) {
					b.fill(b.random(10,70));
					b.ellipse(x, y, 0.75*dpi+0.2*dpi*b.sin(wSum/4), 0.75*dpi+0.2*dpi*b.sin(wSum/4));
				}
				b.fill(255,255,255);
				b.textAlign(Justification.CENTER_ALIGN);
				var myText = b.text(json.versions[v].pages[p].glossary[w],x-1*dpi,y-0.04*dpi,2*dpi,0.5*dpi);
				myText.textFramePreferences.ignoreWrap = true;
				b.textAlign(Justification.LEFT_ALIGN);
			}
			if (v == 0) 
			{
				if (p < json.versions[v].pages.length - 1){b.addPage();}
			}
			else
			{
				if (p < json.versions[v].pages.length - 1){b.page(p+2);}
			}
		}
		//if (v < 10){ b.savePDF("0" + v + "_dinkolas.pdf", false);}
		//else { b.savePDF(v + "_dinkolas.pdf", false);}
 
	}
}
 
 
 
b.go();

 

dinkolas-parrish

Parrish's presentation Exploring (Semantic) Space With (Literal) Robots brought up several interesting ideas surrounding the ethics of creating generative text utilizing corpora from other people, and of finding unexplored literary spaces. It reminds me of the question of whether math is invented or discovered. Are sentences invented or discovered? Surely once a language exists (whether invented or discovered), a simple algorithm could generate every possible combination of words (like the Library of Babel). This might imply that once a language exists, all one can do is discover (not invent) sentences because they were already implicitly invented. I think this is the crux of one of Parrish's ethical concerns about generative poetry from a given body of text. This is made more complicated by her project that generates entirely new words. Sure, they're new words, but they are constructed from old letters and letter patterns. I don't have answers to these problems (not to mention that Parrish brought up other concerns about generated texts' meanings/content, not just their origin) but they're interesting and important to think about.

dinkolas-LookingOutwards04

This is Zimoun's untitled work from 2016 consisting of 317 DC motors, paper bags, and a shipping container. The interior of the container is covered with crinkling paper bags, and lit by a single light bulb hanging from the center of the ceiling. The walls are colored the same as the bags, and the crinkling is produced by simple rotating motors inside the bags. The work was displayed publicly, with the shipping container suspended above the ground with a hole in its bottom so that viewers could duck into the space.

The materials are simple, but the craft is very professional with no glaring seams between surfaces and a clear unified aesthetic. The mechanics of the project are hidden by the paper bags, but they are intentionally unambiguous, leaving the viewer with very little to ponder besides the exploration of how the space makes them feel. There is no artist statement, no explanation of "meaning," just the artwork. This simplicity forces the viewer to confront themselves more than anything. It's difficult to pull that effect off, because you must capture attention, but not occupy the entire consciousness of the viewer. That's why I think architectural spaces are particularly well equipped for such an effect: they surround the viewer, but are still read as a background.

Zimoun's body of work consists of many sound spaces, often taking up entire rooms and surrounding viewers. What makes this particular piece unique is the way the viewer enters the space. The sound is subtle enough, and the container well enough insulated, that the piece is a complete mystery from the outside. Usually Zimoun's pieces can be approached gradually, but this one gives no hints as to what's inside, and essentially transports passersby into an entirely new domain. That's why this is my favorite of Zimoun's works; I find the notion of unexpected interiors very appealing.

dinkolas-Body

This project takes the motion of an entire human body and maps it to a face. Hands control eyes, head controls brow, hips control jaw, and feet control lips. Technically, this was done by first recording the motion using the Brekel software for the Kinect, which outputs a BVH file. I then brought the MoCap data into Blender, which creates an animated armature. Using bpy, the Blender Python library, I took the locations of the armature bones as input and used them to animate the location, rotation, and shape keys of the face model.

While the bulk of the work for this project was just writing the code to hook up the input motion to the output motion, the most important work, the work that I think gives this project some personality, was everything else. The modelling of the face, the constructing of the extreme poses, and of course the performing of the MoCap "dance" ultimately have the most impact on the piece. Overall, I'm happy with the piece, but it would have been nice if I had implemented a real-time version. I would have done one if I could have found a real-time MoCap library for Python so I could connect it to Blender, but most of the real-time MoCap stuff is in JavaScript.

Here are some images/GIFs/sketches:

Here's the code I wrote in Blender's Python interpreter:

import bpy
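# Map full-body MoCap onto the face rig: defaultBoneLocs stores each bone's
# world-space rest location, and each frame's bone positions are measured as
# hip-relative offsets from it to drive shape keys, eye targets, and the head.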
 
defaultBoneLocs = {'Spine1': (0.7207083106040955, 9.648646354675293, 4.532780170440674), 'LeftUpLeg': (0.9253001809120178, 9.532548904418945, 2.795626401901245), 'LeftLeg': (1.0876638889312744, 9.551751136779785, 1.751688838005066), 'LeftHand': (1.816838026046753, 8.849924087524414, 3.9350945949554443), 'Head': (0.7248507738113403, 9.63467788696289, 4.774600028991699), 'Spine': (0.7061706185340881, 9.661049842834473, 3.7947590351104736), 'RightArm': (0.17774519324302673, 9.660733222961426, 4.388589382171631), 'LeftArm': (1.259391188621521, 9.625649452209473, 4.377967834472656), 'RightUpLeg': (0.42640626430511475, 9.538918495178223, 2.812265634536743), 'LeftForeArm': (1.7386583089828491, 9.596687316894531, 3.683629274368286), 'RightShoulder': (0.6663911938667297, 9.649639129638672, 4.518487453460693), 'Hips': (0.6839468479156494, 9.647804260253906, 2.792393445968628), 'LeftShoulder': (0.7744865417480469, 9.646313667297363, 4.5172953605651855), 'LeftFoot': (1.2495332956314087, 9.810073852539062, 0.5447696447372437), 'RightForeArm': (-0.3234933316707611, 9.588683128356934, 3.855139970779419), 'RightFoot': (0.08095724135637283, 9.851096153259277, 0.5348520874977112), 'RightLeg': (0.23801341652870178, 9.571942329406738, 1.788295030593872), 'RightHand': (-0.3675239682197571, 8.814794540405273, 4.040530681610107), 'Neck': (0.7212125062942505, 9.647202491760254, 4.556796550750732)}
 
boneLocs = {}
normBoneLocs = {}
 
# Step through the imported MoCap animation every 3rd frame.
for frame in range(30,1491,3):
    if frame%90 == 0:
        print(frame/1491*100)
    bpy.context.scene.frame_set(frame)
 
    # Read the world-space location of every pose bone on this frame.
    for arm in bpy.data.armatures[:]:
        obj = bpy.data.objects[arm.name]
        for poseBone in obj.pose.bones[:]:
            finalMatrix = obj.matrix_world * poseBone.matrix
            global_location = (finalMatrix[0][3],finalMatrix[1][3],finalMatrix[2][3])
            boneLocs[poseBone.name] = global_location
 
    # Convert each bone's location to an offset from its rest pose, measured
    # relative to the hips (the hips themselves keep their absolute height).
    for key in boneLocs:
        x = (boneLocs[key][0] - boneLocs['Hips'][0]) - (defaultBoneLocs[key][0] - defaultBoneLocs['Hips'][0])
        y = (boneLocs[key][1] - boneLocs['Hips'][1]) - (defaultBoneLocs[key][1] - defaultBoneLocs['Hips'][1])
        z = (boneLocs[key][2] - boneLocs['Hips'][2]) - (defaultBoneLocs[key][2] - defaultBoneLocs['Hips'][2])
        if key == 'Hips':
            z = boneLocs[key][2] - defaultBoneLocs[key][2]
        normBoneLocs[key] = (x,y,z)
 
    # Hips height drives the jaw: dropping the hips opens it.
    val = -0.6*(normBoneLocs['Hips'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawOpen'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawOpen'].keyframe_insert("value")
    val = 0.6*(normBoneLocs['Hips'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawUp'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawUp'].keyframe_insert("value")
 
    # Feet (left/right position) drive the mouth corners in and out.
    val = (normBoneLocs['LeftFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLOut'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLOut'].keyframe_insert("value")
    val = -(normBoneLocs['LeftFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLIn'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLIn'].keyframe_insert("value")
 
    val = -(normBoneLocs['RightFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthROut'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthROut'].keyframe_insert("value")
    val = (normBoneLocs['RightFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthRIn'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthRIn'].keyframe_insert("value")
 
    # Head height drives the brow up and down.
    val = -(normBoneLocs['Head'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowDown'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowDown'].keyframe_insert("value")
    val = (normBoneLocs['Head'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowUp'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowUp'].keyframe_insert("value")
 
    # Hands drive the eye target objects, so the eyes look toward the hands.
    bpy.data.objects['EyeRTrack'].location.z = 0.294833 + normBoneLocs['RightHand'][2]
    bpy.data.objects['EyeRTrack'].location.x = -0.314635 + normBoneLocs['RightHand'][0]
    bpy.data.objects["EyeRTrack"].keyframe_insert(data_path='location')
 
    bpy.data.objects['EyeLTrack'].location.z = 0.294833 + normBoneLocs['LeftHand'][2]
    bpy.data.objects['EyeLTrack'].location.x = 0.314635 + normBoneLocs['LeftHand'][0]
    bpy.data.objects["EyeLTrack"].keyframe_insert(data_path='location')
 
    # Head orientation follows the performer's Spine bone, with the X rotation
    # offset and scaled down.
    bpy.data.objects['Head'].rotation_euler = bpy.data.objects['Armature'].pose.bones['Spine'].matrix.to_euler()
    bpy.data.objects["Head"].rotation_euler.x -= 1.7
    bpy.data.objects["Head"].rotation_euler.x *= 0.3
    bpy.data.objects["Head"].keyframe_insert(data_path='rotation_euler')

 

Also, here's a bonus GIF I made using a walk cycle from CMU's Motion Capture Database:

dinkolas-LookingOutwards03

Aerobanquets RMX is an immersive VR/gastronomy experience by Mattia Casalegno. The participants sit at a table and are served food on small platters while wearing VR headsets. The headsets display forms generated from the flavor profiles of the food items. The recipes were created and cooked by chef Flavio Ghignoni Carestia, based on the Futurist Cookbook. The futurist cuisine fits perfectly with the rest of the concept: somewhat alien and pretty dang cool.

While I think the implementation of the project could have been pushed further, with a more seamless way of consuming the food, more refined graphics, and potentially more interpersonal interaction, the concept of involving food in a VR experience is a very good one. Taste and smell seem to be the most neglected senses in interactive art, so I really like the idea of integrating food into the experience. I also like the idea going the other way: integrating visual experience into eating a meal.

This piece suggests the possibility of a full sensory experience, and only with fairly recent technologies could the senses be carefully calibrated and coordinated. We could move beyond the visual/olfactory of scratch n' sniff markers or the auditory/gastronomic of a restaurant with live music, and into an artificial, hand-tailored world that incorporates every sense.

dinkolas-telematic

Control your line segment by moving your mouse in the bottom left, collaborate with other line segments, avoid obstacles and get the gold!

App
GIF

Each player is a single line segment, which changes size and rotation based on the location of the mouse or finger. Everyone shares the same goal, but plays a slightly different role in achieving it. As the end segment, your job is often to shrink so as to give the other players room to move, and then finish off the task to get the gold. As the base, you carry a lot of responsibility, and can mess everyone up at any moment. Thus, the roles are subtly complementary. The gameplay is highly synchronous, with time coordination required for many of the obstacles. Despite this, it really doesn't matter whether players share a physical space. It was fun, however, to have everyone looking at the same computer screen while playing with their fingers on their phones. There is no real critique of the medium; ideally, the medium is ignored.
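
Here's a minimal p5.js sketch of just the local control scheme, with placeholder numbers; the real app also syncs every player's segment over the network and handles the obstacles and gold:

// One player's segment: mouse position in the bottom-left control area sets
// the segment's length and angle. (Networking, obstacles, and gold omitted;
// the numbers here are placeholders, not the ones from the app.)
var baseX, baseY;

function setup() {
  createCanvas(600, 600);
  baseX = width / 2;
  baseY = height / 2;
}

function draw() {
  background(240);
  // Only the bottom-left quarter of the canvas is the control area.
  var mx = constrain(mouseX, 0, width / 2);
  var my = constrain(mouseY, height / 2, height);
  var angle = map(mx, 0, width / 2, 0, PI);        // left/right controls rotation
  var len = map(my, height / 2, height, 20, 200);  // up/down controls length
  stroke(0);
  strokeWeight(4);
  line(baseX, baseY, baseX + len * cos(angle), baseY - len * sin(angle));
}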