dinkolas-Body

This project takes the motion of an entire human body and maps it to a face. Hands control eyes, head controls brow, hips control jaw, and feet control lips. Technically, this was done by first recording the motion with Brekel, Kinect MoCap software that outputs a BVH file. I then brought the MoCap data into Blender, which creates an animated armature from it. Using bpy, Blender's Python library, I took the world-space locations of the armature bones as input and used them to animate the location, rotation, and shape keys of the face model.
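The core of the retargeting can be sketched without Blender: each bone's position is measured relative to the hips and to its rest pose, and the resulting offset drives a shape key value. A minimal, bpy-free Python sketch (function names are mine, not real API; the -0.6 gain matches the script below):

```python
# Schematic version of the retargeting math (bpy-free; names are illustrative).
# Positions are (x, y, z) tuples in world space.

def normalize(bone, hips, rest_bone, rest_hips):
    """Offset of a bone from its rest pose, measured relative to the hips,
    so whole-body translation doesn't leak into the face controls."""
    return tuple((b - h) - (rb - rh)
                 for b, h, rb, rh in zip(bone, hips, rest_bone, rest_hips))

def jaw_open_value(hips_z_offset, gain=-0.6):
    """Map vertical hip motion to a JawOpen-style shape key value."""
    return gain * hips_z_offset
```

In the real script the normalized offsets are then written into `shape_keys.key_blocks[...].value` and keyframed every few frames.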

While the bulk of the work for this project was just writing the code to hook the input motion up to the output motion, the most important work, the work that I think gives this project some personality, was everything else. The modelling of the face, the construction of the extreme poses, and of course the performance of the MoCap "dance" ultimately have the most impact on the piece. Overall, I'm happy with the piece, but it would have been nice to implement a real-time version. I would have done one if I could have found a Python real-time MoCap library to connect to Blender, but most real-time MoCap tools are in JavaScript.

Here are some images/GIFs/sketches:

Here's the code I wrote in Blender's Python interpreter:

import bpy
 
defaultBoneLocs = {'Spine1': (0.7207083106040955, 9.648646354675293, 4.532780170440674), 'LeftUpLeg': (0.9253001809120178, 9.532548904418945, 2.795626401901245), 'LeftLeg': (1.0876638889312744, 9.551751136779785, 1.751688838005066), 'LeftHand': (1.816838026046753, 8.849924087524414, 3.9350945949554443), 'Head': (0.7248507738113403, 9.63467788696289, 4.774600028991699), 'Spine': (0.7061706185340881, 9.661049842834473, 3.7947590351104736), 'RightArm': (0.17774519324302673, 9.660733222961426, 4.388589382171631), 'LeftArm': (1.259391188621521, 9.625649452209473, 4.377967834472656), 'RightUpLeg': (0.42640626430511475, 9.538918495178223, 2.812265634536743), 'LeftForeArm': (1.7386583089828491, 9.596687316894531, 3.683629274368286), 'RightShoulder': (0.6663911938667297, 9.649639129638672, 4.518487453460693), 'Hips': (0.6839468479156494, 9.647804260253906, 2.792393445968628), 'LeftShoulder': (0.7744865417480469, 9.646313667297363, 4.5172953605651855), 'LeftFoot': (1.2495332956314087, 9.810073852539062, 0.5447696447372437), 'RightForeArm': (-0.3234933316707611, 9.588683128356934, 3.855139970779419), 'RightFoot': (0.08095724135637283, 9.851096153259277, 0.5348520874977112), 'RightLeg': (0.23801341652870178, 9.571942329406738, 1.788295030593872), 'RightHand': (-0.3675239682197571, 8.814794540405273, 4.040530681610107), 'Neck': (0.7212125062942505, 9.647202491760254, 4.556796550750732)}
 
boneLocs = {}
normBoneLocs = {}
 
for frame in range(30,1491,3):  # sample every 3rd frame of the capture
    if frame%90 == 0:
        print(frame/1491*100)  # print progress as a percentage
    bpy.context.scene.frame_set(frame)
 
    for arm in bpy.data.armatures[:]:
        obj = bpy.data.objects[arm.name]
        for poseBone in obj.pose.bones[:]:
            finalMatrix = obj.matrix_world * poseBone.matrix  # Blender 2.7x; use `@` in 2.8+
            global_location = (finalMatrix[0][3],finalMatrix[1][3],finalMatrix[2][3])
            boneLocs[poseBone.name] = global_location
 
    for key in boneLocs:
        x = (boneLocs[key][0] - boneLocs['Hips'][0]) - (defaultBoneLocs[key][0] - defaultBoneLocs['Hips'][0])
        y = (boneLocs[key][1] - boneLocs['Hips'][1]) - (defaultBoneLocs[key][1] - defaultBoneLocs['Hips'][1])
        z = (boneLocs[key][2] - boneLocs['Hips'][2]) - (defaultBoneLocs[key][2] - defaultBoneLocs['Hips'][2])
        if key == 'Hips':
            z = boneLocs[key][2] - defaultBoneLocs[key][2]
        normBoneLocs[key] = (x,y,z)
 
    val = -0.6*(normBoneLocs['Hips'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawOpen'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawOpen'].keyframe_insert("value")
    val = 0.6*(normBoneLocs['Hips'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawUp'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['JawUp'].keyframe_insert("value")
 
    val = (normBoneLocs['LeftFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLOut'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLOut'].keyframe_insert("value")
    val = -(normBoneLocs['LeftFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLIn'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthLIn'].keyframe_insert("value")
 
    val = -(normBoneLocs['RightFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthROut'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthROut'].keyframe_insert("value")
    val = (normBoneLocs['RightFoot'][0])
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthRIn'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['MouthRIn'].keyframe_insert("value")
 
    val = -(normBoneLocs['Head'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowDown'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowDown'].keyframe_insert("value")
    val = (normBoneLocs['Head'][2])
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowUp'].value = val
    bpy.data.meshes['Head'].shape_keys.key_blocks['BrowUp'].keyframe_insert("value")
 
    bpy.data.objects['EyeRTrack'].location.z = 0.294833 + normBoneLocs['RightHand'][2]
    bpy.data.objects['EyeRTrack'].location.x = -0.314635 + normBoneLocs['RightHand'][0]
    bpy.data.objects["EyeRTrack"].keyframe_insert(data_path='location')
 
    bpy.data.objects['EyeLTrack'].location.z = 0.294833 + normBoneLocs['LeftHand'][2]
    bpy.data.objects['EyeLTrack'].location.x = 0.314635 + normBoneLocs['LeftHand'][0]
    bpy.data.objects["EyeLTrack"].keyframe_insert(data_path='location')
 
    bpy.data.objects['Head'].rotation_euler = bpy.data.objects['Armature'].pose.bones['Spine'].matrix.to_euler()
    bpy.data.objects["Head"].rotation_euler.x -= 1.7
    bpy.data.objects["Head"].rotation_euler.x *= 0.3
    bpy.data.objects["Head"].keyframe_insert(data_path='rotation_euler')

 

Also, here's a bonus GIF I made using a walk cycle from CMU's Motion Capture Database:

dinkolas-LookingOutwards03

Aerobanquets RMX is an immersive VR/gastronomy experience by Mattia Casalegno. The participants sit at a table and are served food on small platters while wearing a VR headset, which displays forms generated from the flavor profiles of the food. The recipes were created by chef Flavio Ghignoni Carestia, based on the Futurist Cookbook. The Futurist cuisine fits perfectly with the rest of the concept: somewhat alien and pretty dang cool.

While I think the implementation of the project could have been pushed further (a more seamless way of consuming the food, more refined graphics, and potentially more interpersonal interaction), the concept of involving food in a VR experience is a very good one. Taste and smell seem to be the most neglected senses in interactive art, so I really like the idea of integrating food into the experience. I also like the idea of going the other way: integrating visual experience into eating a meal.

This piece suggests the possibility of a full sensory experience; only with fairly recent technologies could the senses be so carefully calibrated and coordinated. We could move beyond the visual/olfactory pairing of scratch-n'-sniff markers or the auditory/gastronomic pairing of a restaurant with live music, and into an artificial, hand-tailored world that incorporates every sense.

dinkolas-telematic

Control your line segment by moving your mouse in the bottom left, collaborate with other line segments, avoid obstacles and get the gold!

App
GIF

Each player is a single line segment, which changes size and rotation based on the location of the mouse or finger. Everyone shares the same goal, but plays a slightly different role in achieving it. As the end segment, your job is often to shrink so as to give the other players room to move, and then finish off the task to get the gold. As the base, you carry a lot of responsibility and can mess everyone up at any moment. Thus, the roles are subtly complementary. The gameplay is highly synchronous, with time coordination required for many of the obstacles. Despite this, it really doesn't matter whether players share a physical space. It was fun, however, to have everyone looking at the same computer screen while playing with their fingers on their phones. There is no real critique of the medium; ideally, the medium is ignored.

dinkolas-viewing04

Spectacle is that which is designed for the explicit purpose of attracting attention through aesthetic means, often by making things bigger, faster, and in greater quantity.

Speculation is work that is less goal-oriented, and more about unpacking, understanding, and critiquing that which already exists.

Midge Sinnaeve's "VJ Loops" is a series of 60 seamlessly looping videos made in Blender. In some ways this project is very much about spectacle: the loops employ showy techniques, there is no primary moral or message, and of course there are 60 of them. However, the artist is not directly profiting from the loops and, in open-source fashion, even released the videos and source files as free downloads.

The project is definitely on the side of acceleration as opposed to drag. During the creation of the loops, new features were implemented in Blender, and Sinnaeve always had the latest daily build of the software, utilizing the latest tools. It is also on the side of visibility, because he often exposed the polygons and wireframes by distorting meshes and highlighting sharp edges. The project certainly tends towards waste and dysfunction rather than surplus and function, because it doesn't serve any commercial or practical goals. It seems trapped between commerce and art, without really embracing either one.

dinkolas-clock

This project was very frustrating for me. I had an initial idea of a monster that would grow parts every second, and by the end of each day would be a total mess of limbs. However, I couldn't figure out a good/efficient way to do that, so I simplified my idea to faces. I also wanted there to be a calendar mode in which you could see characters from previous days/hours/minutes. I spent a long time trying to get the calendar to work, but for some reason it kept breaking in different ways.

Anyway, the final result that I do have is not very satisfying to me. It doesn't really have any personality, and each minute is hardly unique. Maybe next time things will turn out better.

var seed;
var ear;
var eye;
var eyeIris;
var mouthLip;
var mouthJaw;
var mouthTongue;
var noseFront;
var noseBack;
var t = new Date();
var currentMin = -1;
var bgCol; // background tint, re-randomized each minute
var face = [];
function setup() {
  createCanvas(600, 360);
  colorMode(HSB, 1);
  ear = loadImage("https://i.imgur.com/il1frFT.png")
  eye = loadImage("https://i.imgur.com/8yvvzFW.png")
  eyeIris = loadImage("https://i.imgur.com/3ZpiPHQ.png")
  mouthLip = loadImage("https://i.imgur.com/YAltCcA.png")
  mouthJaw = loadImage("https://i.imgur.com/UZXYDA5.png")
  noseFront = loadImage("https://i.imgur.com/5QIAcvW.png")
  noseBack = loadImage("https://i.imgur.com/khomfzm.png")
}
// (unused helper) advance p5's random() sequence by `seed` steps
function realRand()
{
 	for (var i = 0; i < seed; i++)
  {
   	random(); 
  }
}
 
function draw() {
 
    t = new Date();
    if (currentMin != t.getMinutes())
    {
      bgCol = color(random(0,1),random(0.3,0.8),random(0.7,1));
      newFace();
      currentMin = t.getMinutes();
    }
    background(0,0,1);
 
    for (var s = 0; s <= t.getSeconds(); s++)
    {
      drawFeature(face[s]); 
    }
    blendMode(MULTIPLY);
    background(bgCol);
    blendMode(BLEND);
}
 
function newFace() {
  face = []; // start fresh each minute; otherwise indices 0-59 keep the first face forever
  var order = randomSixtyList();
  print(order);
 
  for (var i = 0; i < 60; i++)
  {
    var X = order[i] % 10;
    var Y = floor(order[i] / 10);
    //face format: [x,y,featureType,phaseShift,rotation, scale]
    face.push([X*60,Y*60,random(0,1),random(0,1),random(-PI,PI), random(0.5,1.5)]);
  }
}
 
function drawFeature(feature)
{
  push();
  translate(feature[0]+30,feature[1]+30);
  scale(feature[5]);
  if (feature[2] < 1/4){drawMouth(feature);}
  else if (feature[2] < 2/4){drawNose(feature);}
  else if (feature[2] < 3/4){drawEye(feature);}
  else {drawEar(feature);}                   
  pop();
}
 
function drawEar(feat)
{
  phase = feat[3]
  rot = feat[4]
  xOff = mouseX - feat[0];
  yOff = mouseY - feat[1];
  push()
  rotate(rot);
  scale(1 + 0.02*sin((millis()/1000+phase)*2*PI), 1)
  image(ear, -20, -30, 60, 60);
  pop()
}
 
function drawEye(feat)
{
  phase = feat[3]
  rot = feat[4]
  xOff = mouseX - feat[0] - 30;
  yOff = mouseY - feat[1] - 30;
 
  push()
  translate(xOff/50,yOff/50)
  rotate(rot);
  image(eyeIris, -30, -25, 60, 60);
  pop()
 
  push()
  rotate(rot);
  image(eye, -30, -30, 60, 60);
  pop()
}
 
function drawNose(feat)
{
  phase = feat[3]
  rot = feat[4]
  xOff = mouseX - feat[0];
  yOff = mouseY - feat[1];
 
  push()
  translate(0,0);
  rotate(rot);
  image(noseBack, -30, -30, 60, 60);
  pop()
 
  push()
  translate(xOff/100,yOff/100)
  rotate(rot);
  scale(1.2);
  image(noseFront, -30, -31, 60, 60);
  pop()
}
 
function drawMouth(feat)
{
  phase = feat[3]
  rot = feat[4]
  xOff = mouseX - feat[0];
  yOff = mouseY - feat[1];
 
  push()
  translate(-xOff/100,-yOff/100)
  rotate(rot);
  translate(0,0 * (1 + sin((millis()/1000+phase)*2*PI)))
  scale(0.8)
  image(mouthJaw, -30, -20, 60, 60);
  pop()
 
  push()
  translate(xOff/300,yOff/300)
  rotate(rot);
  image(mouthLip, -30, -30, 60, 60);
  pop()
}
 
function randomSixtyList()
{
  var out = [];
  for (var i = 0; i < 60; i++)
  {
    out.push(i);
  }
  for (var j = 0; j < 60; j++)
  {
   	var swap = floor(random(0,60));
    var J = out[j];
    out[j] = out[swap];
    out[swap] = J;
  }
  return out;
}
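A side note on randomSixtyList(): swapping each index with a uniformly random index is a slightly biased shuffle (it doesn't produce all orderings with equal probability). The standard unbiased version is the Fisher-Yates shuffle; a sketch in Python:

```python
import random

def fisher_yates(n):
    """Return the numbers 0..n-1 in uniformly random order (Fisher-Yates)."""
    out = list(range(n))
    # Walk backwards, swapping each slot with a random not-yet-fixed slot.
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)  # j in [0, i], inclusive
        out[i], out[j] = out[j], out[i]
    return out
```

The only change from the code above is that each position is swapped with a random index drawn from the not-yet-visited portion of the array instead of the whole array.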

 

dinkolas-LookingOutwards02

Michael Hansmeyer's Digital Grotesque II is an eleven-and-a-half-foot-tall, 3D-printed sandstone structure modeled using generative architectural algorithms. In topology, a cube and a sphere are considered the same object, but a torus (donut) is not. The algorithm that designed this piece can start with a topological sphere and add holes and loops indefinitely, so as to create any shape. It was incentivized to create stimulating structures through several cycles of crowd-sourced feedback, each more fine-tuned than the last.

I like this method because it puts most of the hard work at the beginning (designing the algorithm) and then allows the artist to become a curator. The sensibilities of the artist are thus directly present in the artwork, because the artist serves as the reward function. And in this case, hundreds of people gave feedback to the computer, which further complicates the notion of authorship for generative artwork. I would say the effective complexity of the piece sits a little past that of fractals: it shares a fractal aesthetic, but the rules of its creation are far more complex.

http://www.michael-hansmeyer.com/digital-grotesque-II

dinkolas-Reading03

It's pretty clear that ideally one would make art that is both "first word" and "last word," but that this is not a practical goal. My artwork tends to fall on the first word side of things, although I haven't been quick enough (yet) to consider myself among the very first to adopt any technique, and I don't have any strong allegiance to new techniques versus old. As a career goal, I definitely want to make something that stands the test of time, and if I can do that I don't care so much whether it's with new media or not. Granted, I find working with newer media more exciting, so I'm more likely to be making that sort of work.

The first word/last word classification is not as simple as Naimark makes it out to be. For example, Pixar movies are visually first word, but do not use particularly new storytelling strategies. In theory they are developing technology for the sake of making stories that last, which seems like an honorable goal. Of course, it is still difficult to consistently make quality work of any kind, and sticking with a formula can lead to stale art.

dinkolas-AnimatedLoop

About

I wrote a function with a bunch of parameters that makes a character, and then called the function several times to populate the GIF. I used DoubleExponentialSigmoid, CircularEaseInOut, AdjustableCenterEllipticWindow, and ExponentialSmoothedStaircase to change the pace of the legs and bodies of some characters. I think that the individual characters look good, but I could certainly improve the overall color selection and composition of the piece if I had more time. Now that the character-building function is working, a better application of it might be some sort of random walk-cycle generator, so I might implement that at some point. Maybe this concept doesn't fit the GIF format very well, and would be better if the viewer could see more of the possible walks.

Sketches

Code

I used Golan's Java Processing template; here's my code:

// This is a template for creating a looping animation in Processing/Java. 
// When you press the 'F' key, this program will export a series of images
// into a "frames" directory located in its sketch folder. 
// These can then be combined into an animated gif. 
// Known to work with Processing 3.3.6
// Prof. Golan Levin, January 2018
 
//===================================================
// Global variables. 
String  myNickname = "dinkolas"; 
int     nFramesInLoop = 120;
int     nElapsedFrames;
boolean bRecording; 
 
//===================================================
void setup() {
  size (640,640); 
  bRecording = true;
  nElapsedFrames = 0;
}
//===================================================
void keyPressed() {
  if ((key == 'f') || (key == 'F')) {
    bRecording = true;
    nElapsedFrames = 0;
  }
}
 
//===================================================
void draw() {
 
  // Compute a percentage (0...1) representing where we are in the loop.
  float percentCompleteFraction = 0; 
  if (bRecording) {
    percentCompleteFraction = (float) nElapsedFrames / (float)nFramesInLoop;
  } else {
    percentCompleteFraction = (float) (frameCount % nFramesInLoop) / (float)nFramesInLoop;
  }
 
  // Render the design, based on that percentage. 
  renderMyDesign (percentCompleteFraction);
 
  // If we're recording the output, save the frame to a file. 
  if (bRecording) {
    saveFrame("frames/" + myNickname + "_frame_" + nf(nElapsedFrames, 4) + ".png");
    nElapsedFrames++; 
    if (nElapsedFrames >= nFramesInLoop) {
      bRecording = false;
    }
  }
}
 
//===================================================
void renderMyDesign (float percent) {
  color c1 = color(59,58,133);
  color c2 = color(137,52,109);
  color c3 = color(199,96,88);
  color c4 = color(255,178,72);
  color c5 = color(255,225,150);
  background(c5);
 
  //1
  color bodyCol = c3;
  color legCol =  c2;
  float originX = 100;
  float originY = 200;
  float xShift = -20;
  float altitude = 100;
  float diameter = 70;
  float bounce = 10;
  float legLength = 50;
  float legWeight = 10;
  float footLength = 10;
  float cycles = 2;
  float footH = 40;
  float footV = 20;
  float phase = 0;
  float legT = function_DoubleExponentialSigmoid(percent,.5);
  float bodyT = function_DoubleExponentialSigmoid(percent,.5);
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //2
  bodyCol = c1;
  legCol = c2;
  originX = 320;
  originY = 190;
  xShift = 10;
  altitude = 90;
  diameter = 120;
  bounce = 5;
  legLength = 25;
  legWeight = 15;
  footLength = 10;
  cycles = 1;
  footH = 20;
  footV = 10;
  phase = .5;
  legT = function_ExponentialSmoothedStaircase(percent,.05,4);
  bodyT = legT;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //3
  bodyCol = c4;
  legCol = c3;
  originX = 520;
  originY = 250;
  xShift = 0;
  altitude = 200;
  diameter = 50;
  bounce = 15;
  legLength = 100;
  legWeight = 8;
  footLength = 10;
  cycles = 3;
  footH = 30;
  footV = 60;
  phase = .6;
  legT = 1-percent;
  bodyT = 1-percent;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //4
  for (int i = 0; i < 3; i++){
    bodyCol = c3;
    legCol = c1;
    originX = 50+55*i;
    originY = 350;
    xShift = 0;
    altitude = 40;
    diameter = 40;
    bounce = 3;
    legLength = 20;
    legWeight = 4;
    footLength = 3;
    cycles = 2;
    footH = 10;
    footV = 6;
    phase = .1+.3*i;
    legT = function_AdjustableCenterEllipticWindow (percent, .5);
    bodyT = percent;
    drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
  }
 
  //5
  bodyCol = c3;
  legCol = c1;
  originX = 490;
  originY = 430;
  xShift = 10;
  altitude = 90;
  diameter = 50;
  bounce = 15;
  legLength = 60;
  legWeight = 8;
  footLength = 10;
  cycles = 1;
  footH = 80;
  footV = 130;
  phase = .6;
  legT = percent;
  bodyT = percent;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //6
  bodyCol = c3;
  legCol = c4;
  originX = 300;
  originY = 380;
  xShift = -30;
  altitude = 100;
  diameter = 60;
  bounce = 25;
  legLength = 40;
  legWeight = 10;
  footLength = 10;
  cycles = 2;
  footH = 40;
  footV = 40;
  phase = .8;
  legT = 2*function_CircularEaseInOut(percent);
  bodyT = percent;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //7
  bodyCol = c1;
  legCol = c4;
  originX = 400;
  originY = 600;
  xShift = 0;
  altitude = 70;
  diameter = 30;
  bounce = 5;
  legLength = 60;
  legWeight = 10;
  footLength = 5;
  cycles = 1;
  footH = 120;
  footV = 35;
  phase = 0;
  legT = percent;
  bodyT = 1-percent;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
 
  //8
  bodyCol = c2;
  legCol = c4;
  originX = 100;
  originY = 580;
  xShift = 20;
  altitude = 130;
  diameter = 70;
  bounce = 40;
  legLength = 70;
  legWeight = 20;
  footLength = 30;
  cycles = 2;
  footH = 30;
  footV = 35;
  phase = .5;
  legT = percent;
  bodyT = 1-percent;
  drawCharacter(bodyCol, legCol, originX, originY, xShift, altitude, diameter, bounce, legLength, legWeight, footLength, cycles, footH, footV, phase, legT, bodyT);
}
 
void drawCharacter(color bodyCol, color legCol, float originX, float originY, float xShift, float altitude, float diameter, float bounce, float legLength, float legWeight, float footLength, float cycles, float footH, float footV, float phase, float legT, float bodyT) {
  //set origin and coordinate system so up is positive
  translate(originX,originY);
  scale(1,-1);
  legT = ((legT%(1/cycles))*cycles+phase)%1;
  bodyT = ((bodyT%(1/cycles/2))*2*cycles+phase)%1;
  stroke(bodyCol);
  strokeWeight(legWeight);
  line(-footH-20,-legWeight/2,footH+20+footLength,-legWeight/2); 
 
  //body positions
  float bodyX = xShift+map(-sin(TWO_PI*bodyT), -1,1, -bounce/5, +bounce/5);
  float bodyY = map(-cos(TWO_PI*bodyT), -1,1, altitude-bounce, altitude+bounce);
 
  //back leg
  //hip=(x1,y1), ankle=(x2,y2), knee = (a,b), toe = (x3,y3)
  translate(0,legWeight/2);
  strokeWeight(legWeight);
  stroke(legCol);
  noFill();
  float x1 = bodyX;
  float y1 = bodyY-diameter/3;
  float x2 = map(-cos(TWO_PI*legT), -1,1,-footH,footH);
  float y2 = max(0,map(sin(TWO_PI*legT), -1, 1, -footV, footV));
  float mult = -.5*sqrt(4*pow(legLength,2)/(pow(x2-x1,2)+pow(y2-y1,2))-1);
  float a = .5*(x1+x2)+mult*(y2-y1);
  float b = .5*(y1+y2)+mult*(x1-x2);
  float x3 = x2 + footLength; 
  float y3 = 0;
  if (2*legLength < sqrt(pow(x2-x1,2)+pow(y2-y1,2))){line(x1,y1,x2,y2);}
  else{line(x1,y1,a,b);line(a,b,x2,y2);}
  if (y2==0) {line(x2,y2,x3,y3);}
  else {
    float angle = atan((y2-y1)/(x2-x1));
    angle = PI/2-angle;
    if (x2<x1) {angle+=PI;}
    translate(x2,y2);
    rotate(-angle);
    line(0,0,-footLength,0);
    rotate(angle);
    translate(-x2,-y2);
  }
  translate(0,-legWeight/2);
 
  //body
  noStroke();
  fill(bodyCol);
  ellipse(bodyX,bodyY,diameter,diameter);
 
  //front leg
  //hip=(x1,y1), ankle=(x2,y2), knee = (a,b), toe = (x3,y3)
  legT = (legT+.5)%1;
  translate(0,legWeight/2);
  strokeWeight(legWeight);
  stroke(legCol);
  x1 = bodyX;
  y1 = bodyY-diameter/3;
  x2 = map(-cos(TWO_PI*legT), -1,1,-footH,footH);
  y2 = max(0,map(sin(TWO_PI*legT), -1, 1, -footV, footV));
  mult = -.5*sqrt(4*pow(legLength,2)/(pow(x2-x1,2)+pow(y2-y1,2))-1);
  a = .5*(x1+x2)+mult*(y2-y1);
  b = .5*(y1+y2)+mult*(x1-x2);
  x3 = x2 + footLength; 
  y3 = 0;
  if (2*legLength < sqrt(pow(x2-x1,2)+pow(y2-y1,2))){line(x1,y1,x2,y2);}
  else{line(x1,y1,a,b);line(a,b,x2,y2);}
  if (y2==0) {line(x2,y2,x3,y3);}
  else {
    float angle = atan((y2-y1)/(x2-x1));
    angle = PI/2-angle;
    if (x2<x1) {angle+=PI;}
    translate(x2,y2);
    rotate(-angle);
    line(0,0,-footLength,0);
    rotate(angle);
    translate(-x2,-y2);
  }
  translate(0,-legWeight/2);
 
  //reset origin
  scale(1,-1);
  translate(-originX,-originY);
}
 
 
 
//===================================================
// Taken from https://github.com/golanlevin/Pattern_Master
float function_DoubleExponentialSigmoid (float x, float a) {
  // functionName = "Double-Exponential Sigmoid";
 
  float min_param_a = 0.0 + EPSILON;
  float max_param_a = 1.0 - EPSILON;
  a = constrain(a, min_param_a, max_param_a); 
  a = 1-a;
 
  float y = 0;
  if (x<=0.5) {
    y = (pow(2.0*x, 1.0/a))/2.0;
  } else {
    y = 1.0 - (pow(2.0*(1.0-x), 1.0/a))/2.0;
  }
  return y;
}
 
float function_ExponentialSmoothedStaircase (float x, float a, int n) {
  //functionName = "Smoothed Exponential Staircase";
  // See http://web.mit.edu/fnl/volume/204/winston.html
 
  float fa = sq (map(a, 0,1, 5,30));
  float y = 0; 
  for (int i=0; i<n; i++){
    y += (1.0/(n-1.0))/ (1.0 + exp(fa*(((i+1.0)/n) - x)));
  }
  y = constrain(y, 0,1); 
  return y;
}
 
float function_AdjustableCenterEllipticWindow (float x, float a){
  //functionName = "Adjustable-Center Elliptic Window";
 
  float min_param_a = 0.0 + EPSILON;
  float max_param_a = 1.0 - EPSILON;
  a = constrain(a, min_param_a, max_param_a);
 
  float y = 0;
 
  if (x<=a){
    y = (1.0/a) * sqrt(sq(a) - sq(x-a));
  } 
  else {
    y = (1.0/(1-a)) * sqrt(sq(1.0-a) - sq(x-a));
  }
  return y;
}
 
float function_CircularEaseInOut (float x) {
  //functionName = "Penner's Circular Ease InOut";
 
  float y = 0; 
  x *= 2.0; 
 
  if (x < 1) {
    y =  -0.5 * (sqrt(1.0 - x*x) - 1.0);
  } else {
    x -= 2.0;
    y =   0.5 * (sqrt(1.0 - x*x) + 1.0);
  }
 
  return y;
}

dinkolas-Scope

Design PDF: dinkolas-praxinoscope-output

About

I wanted to create a caterpillar-like creature for my praxinoscope. Most of the motion was simple enough; sines and cosines with offset phases took care of it. However, it took some tricks that I'm pretty happy with to implement knees and a ground. The location of each knee is found by intersecting two circles, one centered at the hip and the other centered at the foot (thanks to johannesvalks on Stack Exchange for the circle-intersection equations). For the ground, the feet move in a circle except that the y coordinate is clamped to a minimum value. The code is pretty rough, but it works!
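The knee construction can be written as a circle-circle intersection: circles of radius limb_len (thigh = shin) around the hip and the foot meet at the two possible knee positions, on either side of the hip-foot segment. A Python sketch of the general construction (function name and signature are mine):

```python
from math import hypot, sqrt

def knee_position(hip, foot, limb_len, bend=-1):
    """Intersect circles of radius limb_len centered at hip and foot.

    bend = -1 or +1 picks which of the two intersections to use
    (i.e. which way the knee bends). Returns None when the foot is
    out of reach, in which case a straight leg is drawn instead.
    """
    (x1, y1), (x2, y2) = hip, foot
    d = hypot(x2 - x1, y2 - y1)
    if d > 2 * limb_len:
        return None  # leg can't reach the foot
    # Midpoint of hip-foot, plus a perpendicular offset to the knee.
    mult = bend * 0.5 * sqrt(4 * limb_len**2 / d**2 - 1)
    return (0.5 * (x1 + x2) + mult * (y2 - y1),
            0.5 * (y1 + y2) + mult * (x1 - x2))
```

Either intersection is a valid knee; picking a fixed sign keeps all the knees bending the same way across the loop.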

Code

I used Golan's Java Processing template. Here's my code in drawArtFrame():

void drawArtFrame (int whichFrame) { 
  // Draw the artwork for a generic frame of the Praxinoscope, 
  // given the framenumber (whichFrame) out of nFrames.
  // NOTE #1: The "origin" for the frame is in the center of the wedge.
  // NOTE #2: Remember that everything will appear upside-down!
 
  //Intersection of two circles from johannesvalks on Stack Exchange
  stroke(128);
  strokeWeight(1);
  line(50,-29,-50,-29);
  stroke(0);
  strokeWeight(2);
  fill(0);
  float t = map(whichFrame, 0, nFrames, 0, 1); 
  float segments = 6;
  for (float i = 0; i < segments; i++)
  {
    float x1 = map(i,0,segments-1,-35,35);
    x1 += map(-sin(t*TWO_PI-i+.75),-1,1,-1,1);
    float y1 = map(cos(t*TWO_PI-i),-1,1,-10,0);
    ellipse(x1,y1,15,15);
    float x2 = map(-cos(t*TWO_PI-i),-1,1,x1+5,x1-5);
    float y2 = map(sin(t*TWO_PI-i),-1,1,-30,-20);
    y1 -= 15/2;
    y2 = max(y2,-27);
    float mult = sqrt(2*20/(pow((x1-x2),2)+pow((y1-y2),2)));
    float kneexLoc = ((x1+x2)+mult*(y2-y1))/2;
    float kneeyLoc = ((y1+y2)+mult*(x1-x2))/2;
    line(x1,y1,kneexLoc,kneeyLoc);
    line(kneexLoc,kneeyLoc,x2,y2);
    rect(x2-4,y2,4,1);
 
    x2 = map(-cos(t*TWO_PI-i+2.5),-1,1,x1+5,x1-5);
    y2 = map(sin(t*TWO_PI-i+2.5),-1,1,-30,-20);
    y2 = max(y2,-27);
    kneexLoc = ((x1+x2)+mult*(y2-y1))/2;
    kneeyLoc = ((y1+y2)+mult*(x1-x2))/2;
    line(x1,y1,kneexLoc,kneeyLoc);
    line(kneexLoc,kneeyLoc,x2,y2);
    rect(x2-4,y2,4,1);
  }
 
}

 

dinkolas-Reading02

A) Slime mold is one of my favorite examples of a natural system that exhibits effective complexity. In one experiment, researchers placed food sources in locations imitating the stations of the Tokyo railway system and then allowed the slime mold to grow; it ultimately formed a network that matched the efficiency of the actual Tokyo-area railway. Each individual cell makes decisions on the highly ordered, simple side of the spectrum, but the emergent system of many cells vastly boosts the effective complexity.

(Images from the American Association for the Advancement of Science, GIF by Mark Fricker)

B) Galanter's The Problem of Meaning is the problem that I have struggled with the most in generative art. I would like my art to be relevant beyond just the intrigue of the medium, and I would argue that generative art is wholly capable of achieving meaning in just about every subject matter that non-generative art can. However, in artwork where the system that made it is so significant, it's difficult to make something interesting enough that the viewer sees past medium and into content.