sapeck-parrish

I really like her point about how "nonsense is frustrating and scary." People resist nonsense, in this case odd or previously-unspoken orderings of words, because it makes them uncomfortable. This makes me think about where something stops being sensible and becomes nonsensible. By an extreme definition, anything original is nonsense, as what people see as sensible comes from what they know. It is the familiar parts of something original that make the whole thing sensible. Parrish's examples have so few familiar parts that the whole reads as nonsense.

But should I create nonsensible work? I don't think that I should push nonsense to the far extreme. If I am trying to convey a message, then I am probably trying to be relatable, and true nonsense cannot be relatable except in the fact that it is nonsense at all.

sapeck-arsculpture

Cars exploding away from the perspective of the AR viewer. Created with videography assistance from breep.

I created a surreal parking lot where the cars drift away. The viewer stands at the center of a lot that grows infinitely in size. Cars normally belong in a parking lot, but parked and stationary, not in motion.

sapeck-justaline

I contained one of my collaborators in a cage. Created with ocannoli and the 60-212F18 TA.

sapeck-08-justaline-screengrab

sapeck-08-justaline-over-the-shoulder

The piece can be experienced in two ways: 1) one or more things or people stand within the bounds of the cage and the viewer moves around it, or 2) the viewer stands within the cage.

sapeck-Book

Antisemitic Absurdities
A list of antisemitic generalizations applied to show absurdity
https://drive.google.com/file/d/1qWCuH2fUSKBwqD2ek9eJw3ZtDVWXcxbP/view

In response to the attack on the Tree of Life Synagogue in Pittsburgh, PA, I created a book to show the absurdity of antisemitic sentiments. When I attended religious school at my synagogue many years ago, the Anti-Defamation League (ADL) would visit and give talks on antisemitism and how to identify it. These never really resonated with me, as I had never experienced any antisemitism beyond bullying at school or playfully-intended stereotyping. This incident was the first time I had experienced someone who really did not like my people.
First, I searched for antisemitic data. This involved an email to the ADL (which maintains a giant database of antisemitic tweets), a post on 4chan, and lots of Twitter scraping. I settled on scraping Twitter for tweets containing the exact phrase "Jews are," which captures only generalizations about the Jewish people. The tweets consisted of antisemitic remarks and responses to antisemitic remarks. I then filtered out tweets pertaining to Israel or certain people (e.g. Soros); those issues can be polarizing and deviate from my goal of showing that making generalizations about an ethnic group is absurd. Very few tweets were about Judaism as a religious practice; all of them pertained to how the Jewish people fit into the world.
Next, I gathered a list of ethnic groups from Wikipedia and replaced each instance of "Jews are" in the tweets with a random ethnic group. I show the modified text on the adjacent page: the "Jews are" side is black with white text, and the opposite side is white with black text. I think this makes the statements more absurd and, in some cases, more relatable.
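The core of the text processing reduces to a filter and a replacement. Here is a minimal sketch of those two steps; the variable names and the excluded-topic list are illustrative, and the real NodeJS pipeline is in the ZIP at the end of this post:

// Sketch only: `tweets` is an array of scraped tweet strings and `groups`
// is the ethnic-group list from Wikipedia. Both inputs are assumed here.
const excludedTopics = ['israel', 'soros'] // polarizing topics to drop

function substituteGroups (tweets, groups) {
  return tweets
    .filter(t => !excludedTopics.some(topic => t.toLowerCase().includes(topic)))
    .map(t => {
      const group = groups[Math.floor(Math.random() * groups.length)]
      return t.replace(/Jews are/g, group + ' are')
    })
}
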
Lastly, I ordered the tweets by their first word, starting with "Jews are." The next few first words are ordered by increasingly narrow generality: "all," "American," "some," "only," "these." Then I try to create a logic with the order: "because" tries to answer the question posed in "how," and "but" tries to make an exception to "because." I finish with "You" and a colophon.

The code for this project consists of more than a dozen files (full NodeJS project with compilation, Python scraper, BasilJS jsx, etc.), so I have compressed it into a ZIP file:
sapeck-07-book-code.zip

sapeck-LookingOutwards04

While I was researching and exploring ideas and methods for a Concept Studio 1 project, my professor introduced me to a work by Lea Albaugh. Her work Clothing for Moderns is an emotionally-reactive dress that responds to a lack of human interaction. It presents the wearer's head as a flower that blooms when cared for.

The flower is made of fabric in a pattern that can compress and expand. Wires push and pull the structure open and closed. It is not clear how the system is technically controlled; it may listen for a drop in sound level or simply be remote-controlled. The wires are most likely actuated by hobby servos.

I am drawn to this work because of the way it represents an often hidden or private emotion externally. I explored this concept in my last Concept Studio project. Machines, and even external yet natural-appearing forms, that represent internal emotions are uncomfortable for both the wearer (the embarrassed one) and the viewer. Pieces that use this concept are expressive and meaningful because they establish their meaning from the wearer.

Clothing for Moderns by Lea Albaugh

sapeck-Body

My goal was to create an environment of balloons in which a motion-captured person interacts by popping them. I chose a fight BVH from mocapdata.com that has the figure throw a series of punches at the wall of balloons. I looked through a lot of different fight sequences, and this one seemed to fit the best. I think it would have worked better with a figure that endlessly walks and fights its way through a never-ending stream of balloons. Otherwise, it's just boring.

I wanted to play with the placement of the balloons and have them move out of the way or drift off, but three.js made this difficult. For example, I found a good way to create a matte finish on the balloons, but it would have prevented me from setting the opacity to hide the popped ones. I also found a good balloon 3D model, but I could not get three.js to display it. If I use three.js in the future, I need a much better understanding of it.

import colors from '../data/colorsHex'
// THREE itself (plus THREE.BVHLoader and THREE.OrbitControls) is expected to be
// available globally rather than imported here.
 
var clock = new THREE.Clock()
 
var camera, controls, scene, renderer
var mixer, skeletonHelper
 
init()
animate()
 
// Load the fight BVH and wrap its skeleton in a SkeletonHelper so the clip can be played
var loader = new THREE.BVHLoader()
loader.load('bvh/fighting-31-syotei-yokoyama.bvh', result => {
  skeletonHelper = new THREE.SkeletonHelper(result.skeleton.bones[0])
  skeletonHelper.material.linewidth = 10
  skeletonHelper.skeleton = result.skeleton // allow animation mixer to bind to SkeletonHelper directly
 
  var boneContainer = new THREE.Group()
  boneContainer.add(result.skeleton.bones[0])
 
  scene.add(skeletonHelper)
  scene.add(boneContainer)
 
  // play animation
  mixer = new THREE.AnimationMixer(skeletonHelper)
  mixer.clipAction(result.clip).setEffectiveWeight(1.0).play()
})
 
// create an AudioListener and add it to the camera
var listener = new THREE.AudioListener()
camera.add(listener)
 
// create a global audio source
var sound = new THREE.Audio(listener)
 
// load a sound and set it as the Audio object's buffer
var audioLoader = new THREE.AudioLoader()
audioLoader.load('audio/Balloon Popping-SoundBible.com-1247261379.wav', buffer => {
  sound.setBuffer(buffer)
  sound.setLoop(false)
  sound.setVolume(1)
})
 
var ambientLight = new THREE.AmbientLight(0x000000) // black ambient light contributes no illumination
scene.add(ambientLight)
 
var lights = []
lights[0] = new THREE.PointLight(0xffffff, 1, 0)
lights[1] = new THREE.PointLight(0xffffff, 1, 0)
lights[2] = new THREE.PointLight(0xffffff, 1, 0)
 
lights[0].position.set(0, 2000, 0)
lights[1].position.set(1000, 2000, 0)
lights[2].position.set(-1000, -2000, 0)
 
scene.add(lights[0])
scene.add(lights[1])
scene.add(lights[2])
 
// Build one balloon: a transparent sphere of radius r at (x, y, z) with opacity o
let newBalloon = (r, color, x, y, z, o) => {
  var geometry = new THREE.SphereGeometry(r, 32, 32)
  var material = new THREE.MeshStandardMaterial({
    color: color,
    wireframe: false,
    transparent: true,
    opacity: o
  })
  var sphere = new THREE.Mesh(geometry, material)
  sphere.position.set(x, y, z)
  return sphere
}
 
// Lay out a grid of randomly-colored balloons with radius r and spacing s
// (i sets the dimensions: i wide, i - 4 tall, i - 2 deep)
let newBalloonGrid = (r, i, s, o) => {
  let balloons = []
  let pad = (r * 2) + s
  let c = ((i - 1) * pad) / 2
  for (let x of Array(i).keys()) {
    for (let y of Array(i - 4).keys()) {
      for (let z of Array(i - 2).keys()) {
        let color = colors[Math.floor(Math.random() * colors.length)]
        let bx = x * pad - c + 100
        let by = y * pad + r
        let bz = z * pad - c + 250
        let balloon = newBalloon(r, color, bx, by, bz, o)
        scene.add(balloon)
        balloons.push({
          pos: {
            x: bx,
            y: by,
            z: bz
          },
          r: r,
          o: o,
          color: color,
          mesh: balloon
        })
      }
    }
  }
  return balloons
}
let balloons = newBalloonGrid(20, 10, 5, 1) // radius 20, grid size 10, spacing 5, opacity 1
 
function init () {
  camera = new THREE.PerspectiveCamera(90, window.innerWidth / window.innerHeight, 1, 1000)
  camera.position.set(0, 450, -400)
 
  controls = new THREE.OrbitControls(camera)
  controls.minDistance = 300
  controls.maxDistance = 700
 
  scene = new THREE.Scene()
 
  scene.add(new THREE.GridHelper(200, 10))
 
  // renderer
  renderer = new THREE.WebGLRenderer({ antialias: true })
  renderer.setClearColor(0xeeeeee)
  renderer.setPixelRatio(window.devicePixelRatio)
  renderer.setSize(window.innerWidth, window.innerHeight)
 
  document.body.appendChild(renderer.domElement)
 
  window.addEventListener('resize', onWindowResize, false)
}
 
function onWindowResize () {
  camera.aspect = window.innerWidth / window.innerHeight
  camera.updateProjectionMatrix()
 
  renderer.setSize(window.innerWidth, window.innerHeight)
}
 
var set = false // log the bone list only once, for debugging
function animate () {
  // if (!isPlay) return
  window.requestAnimationFrame(animate)
 
  var delta = clock.getDelta()
 
  if (mixer) mixer.update(delta)
  // if (skeletonHelper) skeletonHelper.update()
 
  renderer.render(scene, camera)
 
  // Pop (hide) any balloon that a moving skeleton bone comes close to
  if (skeletonHelper) {
    if (!set) {
      console.log(skeletonHelper.skeleton.bones)
      set = true
    }
    if (skeletonHelper.skeleton) {
      for (let bone of skeletonHelper.skeleton.bones) {
        if (bone.name !== 'ENDSITE') {
          for (let balloon of balloons) {
            // console.log(skeletonHelper.skeleton.bones)
            let ballPos = balloon.pos
            let bonePos = bone.position
            let dist = Math.sqrt(Math.pow(ballPos.x - bonePos.x, 2) + Math.pow(ballPos.y - bonePos.y, 2) + Math.pow(ballPos.z - bonePos.z, 2))
            // console.log({ dist, ballPos, bonePos, name: bone.name })
            if (dist <= balloon.r * 4 && balloon.mesh.material.opacity !== 0) {
              console.log('KILL BALLOON')
              // console.log({ dist, ballPos, bonePos, name: bone.name })
              // scene.remove(balloon.mesh)
              if (balloon.mesh.material.opacity !== 0) {
                if (sound.isPlaying) sound.stop()
                sound.play()
              }
              balloon.mesh.material.opacity = 0
              // balloons.splice(balloons.indexOf(balloon), 1)
              // scene.add(newBalloon(balloon.r, balloon.color, ballPos.x, ballPos.y, ballPos.z, balloon.o))
            }
          }
        }
      }
    }
  }
}

sapeck-LookingOutwards03

483 Lines Second Edition (2015) by Mimi Son explores how light and image can create a surreal digital environment. The interactivity lies in how the viewer views the piece. Today, I viewed one of Memo Akten's pieces, which explores presenting a different environment to each eye in virtual reality. Users explore the space by moving their head in the VR environment and by attempting to focus on different parts. Son's work attempts to create similarly surreal environments in reality through projection. Standing closer to or farther from the lines creates a sense of motion through the plane of lines; looking down the piece as if into a tunnel creates a sense of motion along that plane.

sapeck-Viewing04

Spectacle prioritizes the technical and aesthetic properties of a medium over its conceptual exploration. Speculation prioritizes the conceptual exploration of a medium over its technical and aesthetic properties.

Universal Everything's Walking City (2014) is mostly spectacle. The piece explores methods of animating a walking motion and the transitions between those methods. The character's motion is constant and continuous. The character exhibits no emotion and walks toward nothing. The background is blank. The purpose is to show off animation skill, not to convey meaning.

The piece leans toward acceleration, as it shows off a new technical boundary of animation. It is highly visible, as it clearly shows what it is demonstrating. It is surplus: useful for future work but useless from a conceptual standpoint. It was created commercially as a demonstration of the studio's technical ability. It shows only function.

sapeck-telematic

Unfortunately, this demo sometimes has issues when embedded because of camera access. If it doesn't work, run it here: https://face-game.github.io/

The Face Game is an attempt to create awkward, anonymized interactions by pairing two players' facial expressions.

Players move their face into various positions without knowing that their picture is being taken. After a few head-moving tasks, players are shown their face side-by-side with the faces of other players who have completed the same tasks. The intended effect is to make two players appear as if they may be kissing or licking each other. After the game completes, the player's images are uploaded to a database where (pending approval to filter out NSFW content) they can be randomly selected and shown when a new player plays the game. The game's interaction is one-to-many, where the many is infinitely growing. The anonymous yet intimate nature of the game makes players uncomfortable at seeing their intimate expressions next to a stranger's, but comfortable in that they don't know the stranger and the stranger never witnessed the interaction.
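Conceptually, the pairing step is just sampling a random approved submission for the same task. Here is a hypothetical sketch of that step; the index URL and JSON shape are stand-ins for illustration, not the real face-game API:

// Hypothetical sketch: pick a random approved player's frame for a given task.
// The index URL and entry shape below are assumptions, not the actual backend.
async function randomPartnerFrame (taskIndex) {
  const res = await window.fetch('https://face-game.github.io/content/index.json')
  const entries = await res.json() // assumed shape: [{ frames: [url, ...] }, ...]
  const partner = entries[Math.floor(Math.random() * entries.length)]
  return partner.frames[taskIndex] // frame captured during the same head-moving task
}
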

I think that my project is successful in creating an awkward interaction. When I tested the game on my peers, it took them a moment to figure out how to move the cursor, but they got the hang of it very quickly. One pitfall is that moving the head to the edge of the screen often moves it out of frame if the player is too close to the camera. Another pitfall is the varying quality of computer webcams. Still, the game works fine most of the time. My peers found it odd to see a picture of someone else at the end, but they always laughed. They would then want to play again to see if they got a different result.

My original idea was to show two face silhouettes side by side, one player on each side. The game would coax the two players into a kiss and then snap a picture. However, this was difficult to implement and hard to understand. I think the one-to-many approach with the hidden goal is much more successful.

Source code on GitHub: face-game/face-game
Image database on GitHub: face-game/face-game-content
Backend hosted on Glitch: face-game

sapeck-LookingOutwards02

Glenn Marshall combined GIF loops and generative neural styling to cover the loops in flowing, pixelating, continuous plasma. I like the combination of order and disorder in each GIF. In one way, the GIFs obviously show a human head, yet they are filled with noise. What is so satisfying is that the pixelation and graininess that usually come from such noise here create moving, flowing patterns. There is no write-up for these pieces other than that they are a neural style transfer. I assume that Marshall started with the flowing GIFs and a reference image with noise or scales and applied a neural style transfer, so the GIF loops were transformed to take on the style of the reference image. Marshall's touch is in how he trained the neural network and which GIF loop he chose to work with. Marshall used the computer to build off of his own work; the computer created a derivative, not an entirely new piece.
Neural GIF Loops by Glenn Marshall (2018)