lass-arsculpture

There is a frog on the toilet

For this project, I wanted to put a frog on the toilet. The bowl of a toilet is shaped a bit like a tiny pond, so my goal was to make a nice toilet pond. Tapping the screen makes the frog stick her tongue out. I think I could maybe use this mechanic to make a game.

I used Unity, Maya, and Vuforia!

lass-parrish

The thing that stuck with me from Allison Parrish's talk was that her goal as a poet is "not to imitate existing poetry but to find new ways for poetry to exist." This reminded me a bit of the talk given by Robbie Barrat, who also uses computers to generate art. The goal of AI-driven art isn't to replace or mimic artists, but to create new things that could never be conceived by a human. I really liked the example of the cute robot explorer going into the unknown, because robots are truly helping us to explore new areas and should be thought of as our helpers rather than competitors.

lass-Book

link to zipped file

Discusses ailments found in plants, animals, and computers.


For this project, I wanted to create a book that contained made-up diseases. My goal was to combine information about diseases with computer errors to create diseases that a robot might encounter. The actual project ended up straying from this quite a bit.

At first, I was really interested in using recurrent neural networks to generate text. I followed this tutorial for ml5's LSTMGenerator, but I started this way too late and didn't have the time/knowledge to train a model to my liking. This is the state I got to before giving up:

Even though I didn't follow through with this approach, I would like to learn more about LSTMs in the future. I really liked a lot of the examples I saw that used this method!
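For reference, the basic usage from the tutorial looks something like this. This is just a sketch: the "models/diseases/" folder and the seed text are placeholders, and the exact callback shape depends on the ml5 version.

    // rough shape of the ml5 LSTMGenerator usage from the tutorial
    // ("models/diseases/" is a placeholder for a trained model folder)
    var lstm = ml5.LSTMGenerator("models/diseases/", modelReady);

    function modelReady() {
        // seed text, output length, and temperature all shape the result
        lstm.generate({ seed: "the patient", length: 200, temperature: 0.5 }, function(err, result) {
            // result.sample holds the generated text (details vary by ml5 version)
            console.log(result.sample);
        });
    }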

Instead, I reused my training text with RiMarkov to generate the final text. The text included these three books and some system error codes. This was very entertaining; I spent a good amount of time clicking through and enjoying the sentences it created.
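The RiMarkov setup is only a few lines; here is a minimal sketch (RiTa 1.x API, with the combined source text in a placeholder variable):

    // build an n-gram model from the combined books + error codes,
    // then pull new sentences out of it (RiTa 1.x API)
    var markov = new RiMarkov(3);  // 3-gram model
    markov.loadText(sourceText);   // placeholder for the combined training text
    var sentences = markov.generateSentences(2);
    console.log(sentences.join(" "));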

For the illustrations, I used MakeHuman to generate some random models, and Blender to mess them up. This was my favorite part of the project. Here is my personal favorite illustration:

Overall, I would say I had a lot of fun with the process, but I'm not too sure if I like the final product. I think that there are a couple of pages that are very good, but a lot of it is just confusing!

lass-LookingOutwards04


Six-Forty by Four-Eighty is an interactive installation by Marcelo Coelho Studio in which physical square "pixels" can be rearranged on a "screen." The pixels respond to a user's touch, and although each pixel is an independent computer, they can send signals to each other through the user's body.
I like this project because I really like squares, and I think that it is very visually successful. I also like this project because it creates a new mechanic for something that we are used to seeing daily (pixels). The project describes itself as "physically immersing viewers into an interactive computing experience," which is a pretty accurate description. I think that the most successful aspect of this project is that it created a new language/medium that can be explored infinitely.
Marcelo Coelho Studio also made a sequel, Resolution, which has the same concept but with circular units. Personally, I prefer the effect of the square pixels, but I think it is interesting that they chose circles as well, since the two shapes evoke different forms of media (newspaper halftones and older television screens).

lass-Body

Each of the figures is created randomly using the names of people from this class:

You can play with the random generation tool here:

For this project, I wanted to randomly create humanoid figures that look like they are made of garbage. I am pretty satisfied with the final product, and I think a lot of the success is thanks to these nice color palettes. I got all of my BVH files from the CMU Graphics Lab, and I used seedrandom so that each random figure depends on its name (see the sketch below). I was very inspired by Generative Machines by Michael Chang, and other generative artworks shown in class.
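Seeding works like this: seedrandom replaces Math.random with a seeded generator, so typing the same name always reproduces the same sequence of random choices, and therefore the same figure.

    // with seedrandom loaded, Math.random becomes deterministic per seed
    Math.seedrandom("golan");  // seed with a name
    var a = Math.random();     // always the same first value for this seed
    Math.seedrandom("golan");  // reseed with the same name
    var b = Math.random();     // equal to a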

Here are my sketches:


    //starter code for loading BVH from https://github.com/mrdoob/three.js/blob/master/examples/webgl_loader_bvh.html
    var clock = new THREE.Clock();
    var camera, controls, scene, renderer;
    var mixer, skeletonHelper, boneContainer;
    var head, lhand, rhand, torso, rfoot, lfoot; 
    var miscParts = []; 
    // geometry options for the limbs (hands, feet, head) and torsos
    var limbs = [
        new THREE.BoxGeometry( 10, 10, 10 ),
        new THREE.SphereGeometry( 10, 20, 15 ),
        new THREE.ConeGeometry( 10, 10, 30 ),
        new THREE.CylinderGeometry( 10, 10, 10, 5 ),
        new THREE.TorusGeometry( 7, 3, 10, 30 ),
        new THREE.TorusKnotGeometry( 7, 3, 10, 30 ),
        new THREE.DodecahedronGeometry( 7 )
    ];
    var bodies = [
        new THREE.BoxGeometry( 30, 60, 30 ),
        new THREE.SphereGeometry( 30, 20, 15 ),
        new THREE.ConeGeometry( 30, 60, 30 ),
        new THREE.CylinderGeometry( 20, 30, 50, 5 ),
        new THREE.TorusGeometry( 20, 10, 10, 30 ),
        new THREE.TorusKnotGeometry( 20, 10, 10, 30 ),
        new THREE.DodecahedronGeometry( 20 )
    ];
    // placeholder palette; changeBody() overwrites these with a random five-color palette
    var colorArray = [
        new THREE.Color( 0xffaaff ),
        new THREE.Color( 0xffaaff ),
        new THREE.Color( 0xffaaff ),
        new THREE.Color( 0xffaaff ),
        new THREE.Color( 0xffaaff )
    ];
    var uniforms = {
        u_resolution: { type: "v2", value: new THREE.Vector2() },
        u_colors: { type: "v3v", value: colorArray }
    };

    // four shader materials sharing the same uniforms: stripes, gradient, plain, and wireframe
    var materials = [
        new THREE.ShaderMaterial( {
            uniforms: uniforms,
            vertexShader: document.getElementById( "vertex" ).textContent,
            fragmentShader: document.getElementById( "stripeFragment" ).textContent
        } ),
        new THREE.ShaderMaterial( {
            uniforms: uniforms,
            vertexShader: document.getElementById( "vertex" ).textContent,
            fragmentShader: document.getElementById( "gradientFragment" ).textContent
        } ),
        new THREE.ShaderMaterial( {
            uniforms: uniforms,
            vertexShader: document.getElementById( "vertex" ).textContent,
            fragmentShader: document.getElementById( "plainFragment" ).textContent
        } ),
        new THREE.ShaderMaterial( {
            uniforms: uniforms,
            vertexShader: document.getElementById( "vertex" ).textContent,
            fragmentShader: document.getElementById( "plainFragment" ).textContent,
            wireframe: true
        } )
    ];
 
    init();
    animate();
 
    // mocap clips from the CMU Graphics Lab database, picked at random below
    var bvhs = ["02_04", "02_05", "02_06", "02_07", "02_08", "02_09", "02_10", "pirouette"];
 
    uniforms.u_resolution.value.x = renderer.domElement.width;
    uniforms.u_resolution.value.y = renderer.domElement.height;
 
    var loader = new THREE.BVHLoader();
    loader.load( "models/" + random(bvhs) + ".bvh", createSkeleton);
 
    function createSkeleton(result){
        skeletonHelper = new THREE.SkeletonHelper( result.skeleton.bones[ 0 ] );
        skeletonHelper.skeleton = result.skeleton; // allow animation mixer to bind to SkeletonHelper directly
        boneContainer = new THREE.Group();
 
        boneContainer.add( result.skeleton.bones[ 0 ] );
        head = new THREE.Mesh( random(limbs), materials[0] );
        lhand = new THREE.Mesh( random(limbs), materials[0] );
        rhand = new THREE.Mesh( random(limbs), materials[0] );
        lfoot = new THREE.Mesh( random(limbs), materials[0] );
        rfoot = new THREE.Mesh( random(limbs), materials[0] );
        torso = new THREE.Mesh( random(bodies), materials[0] );
        // torso.scale.set(Math.random() * 1.5, Math.random() * 1.5, Math.random() * 1.5);
 
        // attach a mesh to each key joint (bone indices are specific to these skeletons)
        skeletonHelper.skeleton.bones[4].add(head); 
        skeletonHelper.skeleton.bones[12].add(rhand); 
        skeletonHelper.skeleton.bones[31].add(lhand); 
        skeletonHelper.skeleton.bones[50].add(rfoot); 
        skeletonHelper.skeleton.bones[55].add(lfoot); 
        skeletonHelper.skeleton.bones[1].add(torso); 
        // scatter random box shards along the remaining spine, arm, and leg bones
        for(var i=9; i<14; i++){
            var part = new THREE.Mesh(  new THREE.BoxGeometry( Math.random() * 10, Math.random() * 5, Math.random() * 5 ), materials[0] ); 
            miscParts.push(part); 
            skeletonHelper.skeleton.bones[i].add(part);
        }
        for(var i=28; i<31; i++) {
            var part = new THREE.Mesh(  new THREE.BoxGeometry( Math.random() * 10, Math.random() * 5, Math.random() * 5 ), materials[0] ); 
            miscParts.push(part); 
            skeletonHelper.skeleton.bones[i].add(part);
        }        
        for(var i=47; i<56; i++) {
            var part = new THREE.Mesh(  new THREE.BoxGeometry( Math.random() * 10, Math.random() * 5, Math.random() * 5 ), materials[0] ); 
            miscParts.push(part); 
            skeletonHelper.skeleton.bones[i].add(part);
        }
        scene.add( skeletonHelper );
        scene.add( boneContainer );
        // hide the skeleton lines so only the attached meshes are visible
        skeletonHelper.material = new THREE.MeshBasicMaterial({
            color: "white", 
            transparent: true, 
            opacity: 0.0}); 
        mixer = new THREE.AnimationMixer( skeletonHelper );
        mixer.clipAction( result.clip ).setEffectiveWeight( 1.0 ).play();
        changeName(); 
    }
    function init() {
        camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, .1, 1000 );
        camera.position.set( 0, 200, 400 );
        scene = new THREE.Scene();
        scene.add( new THREE.GridHelper( 400, 10 ) );
        scene.background = new THREE.Color(0xdddddd); 
        renderer = new THREE.WebGLRenderer( { antialias: true } );
        renderer.setPixelRatio( window.devicePixelRatio );
        renderer.setSize( window.innerWidth, window.innerHeight );
        document.body.appendChild( renderer.domElement );
        controls = new THREE.OrbitControls( camera, renderer.domElement );
        controls.minDistance = 300;
        controls.maxDistance = 700;
        window.addEventListener( 'resize', onWindowResize, false );
    }
    function onWindowResize() {
        camera.aspect = window.innerWidth / window.innerHeight;
        camera.updateProjectionMatrix();
        renderer.setSize( window.innerWidth, window.innerHeight );
    }
    function animate() {
        requestAnimationFrame( animate );
        var delta = clock.getDelta();
        if ( mixer ) mixer.update( delta );
        renderer.render( scene, camera );
    }
    function changeBody(seed) {
        // reseed Math.random so every choice below is determined by the name
        Math.seedrandom(seed); 
 
        // 'colors' is an array of five-color palettes defined elsewhere on the page
        var color = random(colors); 
        for(var i = 0; i < 5; i++){
            colorArray[i] = new THREE.Color(color[i]); 
        }
        for(var i = 0; i < materials.length; i++){
            materials[i].clone(); // idk why i do this but it's the only way to make randomness match with class.html
        }
        lhand.geometry = random(limbs); 
        lhand.material = random(materials); 
        rhand.geometry = random(limbs); 
        rhand.material = random(materials); 
        torso.geometry = random(bodies); 
        torso.material = random(materials); 
        lfoot.geometry = random(limbs); 
        lfoot.material = random(materials); 
        rfoot.geometry = random(limbs); 
        rfoot.material = random(materials);
        head.geometry = random(limbs); 
        head.material = random(materials); 
        head.scale.set(2, 2, 2); 
        for(var i=0; i<miscParts.length; i++){
            miscParts[i].geometry =  new THREE.BoxGeometry( Math.random() * 15, Math.random() * 15, Math.random() * 10 );
            miscParts[i].material = random(materials); 
        }
    }
    function changeName() {
        changeBody(document.getElementById("nameInput").value); 
    }
    // pick a random element from an array (uses the seeded Math.random)
    function random(arr) {
        return arr[Math.floor(Math.random() * arr.length)];
    }
    function changeBvh(){
        scene.remove(skeletonHelper); 
        scene.remove(boneContainer); 
        loader.load( "models/"+ random(bvhs) +".bvh", createSkeleton);
    }

lass-LookingOutwards03

Daniel Rozin has created many mechanical "mirrors" using video cameras, motion sensors, and motors to display people's reflections. I had seen the popular pompom mirror before, but I was interested to see the other mirrors he created. One mirror that I found interesting was the penguin mirror. Rather than facing the user directly, this mirror is flat on the ground and takes the shape of a projected shadow. As the user moves, the stuffed penguins turn so that their white bellies are showing. I really enjoy how Rozin uses his mirrors to take a simple shadow and turn it into a huge mechanized process.

I think that penguins were very fitting for this mirror, because their coloring allows a transition between black and white as they turn. A huge group of them together also resembles a penguin huddle. The sound of this mirror is also very pleasant: as you move more, the clicking of all the penguins turning increases. There is something very soothing about listening to an army of penguins follow your movements.


lass-telematic

(You will probably need to open the app in a new tab to allow webcam permissions, and as far as I know it only works in Google Chrome.)

My telematic environment shows the optical flow of up to nine users in a square grid. I used oflow.js to compute the optical flow, and I started from this template that Char made using p5.js and socket.io.
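The per-user capture loop is roughly the sketch below. It assumes oflow's WebCamFlow API; the videoElement variable and the "flow" socket event name are made up for illustration:

    // compute optical flow from the webcam and broadcast the zone data
    // instead of raw video frames ("flow" is a hypothetical event name)
    var webCamFlow = new oflow.WebCamFlow(videoElement, 16); // 16px zones

    webCamFlow.onCalculated(function(direction) {
        // direction.zones is a grid of {x, y, u, v} motion vectors
        socket.emit("flow", direction.zones);
    });

    webCamFlow.startCapture();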

Some things that I appreciate about optical flow after doing this project are that 1) it allows more anonymity than a video chat, and 2) it focuses on expression through movement (change), so nothing shows if you stay still. At times I was worried that users wouldn't be able to distinguish optical flow from a merely pixelated video, but I think that after watching for a bit it becomes apparent that your movements are being tracked.

(animated gif) 

Something to note with this project is the lag. The app can track the local user's optical flow at a fine rate, but transferring all of the flow data takes a while and makes the other squares choppy. They play about one second behind (in the example above you can see that the orange user moves much more fluidly than the others). Since the project was meant to be synchronous, ideally this wouldn't happen, but I think it has an interesting and slightly spooky effect.

Honestly, I struggled with ideas for this project, and I wish the final product involved more communication between users. My initial idea was to overlay the feeds on top of each other so people could collaboratively "draw" with their motions, but that was too messy and it was difficult to discern what was going on, which is why it is a grid now. I also tried having instructions appear on the screen for every user to follow (such as telling them to freeze, or to wave at each other), but I removed that since it felt disruptive. Although I like the appearance of the uniform squares, it is a bit of a letdown that they are just nine independent boxes.

Thank you to Char for the templates, and Golan for the project title!


lass-viewing04

I think that Akinori Goto's 3D Printed Zoetrope is a good example of spectacle. It takes something that has been around for many years (the zoetrope) and refines it using impressive modern technology. The project is very clean, polished, and precise. Although the project is definitely explorative, it was created by a very controlled and deliberate process, which is outlined in detail in the video. In addition to software, the piece requires lighting and a spinning mechanism to show anything of interest. The project is successful in showing off new and difficult technology, and the idea is very awe-inspiring. The combination of all these things makes this zoetrope fit in with the idea of spectacle.

In terms of the dichotomies presented by Warburton, I would say that this project is very visible, as its simple beauty can be enjoyed even without context. I don't see the project as having much waste, since it seems to have been crafted efficiently, with software used in moderation. I would also say that this project is more art than commerce, and more dysfunctional than functional, since it holds little commercial value. Finally, I think that it has more drag than acceleration, since there is not much else to be done with this technology in the future (at least that I can think of).