Guodu-Final Process


p5* Calligraphy

An interactive experience where you can practice writing and calligraphy with different types of randomly selected font templates and brushes.


Enter your practice word below

Esc – Resets Canvas | Shift – Change Brush Style | Up or Down to Change Brush Thickness

sketch

Process 


Sketches

Next Time

There are a lot of interaction issues, like non-intuitive controls for the brushes’ characteristics and not knowing which brush you are on; one possible fix is sketched below. Also, I think it would be beneficial, in teaching calligraphy, to show which direction one’s stroke should go.
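
As a rough idea of how the brush feedback could work, here is a minimal p5.js fragment that cycles brushes with Shift and keeps an on-screen label of the active brush; the brush names and variables are illustrative, not the project’s actual code.

// hedged sketch of brush feedback (hypothetical names, not the project's code)
var brushes = ["round", "flat", "splatter"];
var brushIndex = 0;
var brushSize = 8;

function keyPressed() {
  if (keyCode === SHIFT)      brushIndex = (brushIndex + 1) % brushes.length; // cycle brush styles
  if (keyCode === UP_ARROW)   brushSize += 1;                                 // thicker
  if (keyCode === DOWN_ARROW) brushSize = max(1, brushSize - 1);              // thinner
  if (keyCode === ESCAPE)     background(255);                                // reset the canvas
}

function drawBrushLabel() {
  // an always-visible label so the user knows which brush they are on
  noStroke();
  fill(0);
  text(brushes[brushIndex] + " / " + brushSize + "px", 10, 20);
}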

Overall I had a lot of fun creating this, especially the limitless brush styles. When thinking about a concept for this project, I looked to my hobbies and interests, which always came back to drawing and typography. I found the idea of being able to use a tool (p5*) to make another tool and hopefully share it with others to be empowering.

Of the many programming artists I was exposed to in this course, Zach Lieberman left a deep impression on me with one of his Eyeo talks (here). He talked about his interests in:

  • Intersection of Drawing and Code
  • What does drawing on a computer feel like?
  • How do we describe drawings on the computer?
  • What is the sketchbook of today’s age?
  • Beginner (turn off background and you have a paintbrush) –> Advanced drawing in code (recording data)

Ultimately, this exploration of bridging digital and physical drawing makes me wonder how drawing in these different mediums affects and influences a person. Would someone get better at calligraphy by hand if they practiced on this template and used a tablet? And if someone is already good at calligraphy, how well do they transfer to a digital program?


Guodu-Interruptions

Still trying to get the interruptions to work….

Observations of Vera Molnar’s Interruptions:

1) Artwork is square
2) Artwork has a light grey background
3) Artwork has a uniform margin
4) Lines are black
5) There are about 50×50 lines
6) Lines have the same length and stroke
7) Lines are all rotated at different angles
8) Lines slightly overlap
9) There are “interruptions” where clusters of lines are not drawn
10) There seems to be a general direction, either horizontal or vertical, that the lines are facing despite each line’s rotation
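
A minimal p5.js sketch of how these observations might translate into code (the grid size, noise threshold, and rotation range are guesses, not Molnar’s actual values):

var GRID = 50;        // ~50x50 lines (observation 5)
var MARGIN = 40;      // uniform margin (observation 3)

function setup() {
  createCanvas(600, 600);           // square artwork (observation 1)
  noLoop();
}

function draw() {
  background(220);                  // light grey background (observation 2)
  stroke(0);                        // black lines (observation 4)
  var spacing = (width - 2 * MARGIN) / GRID;
  var len = spacing * 1.4;          // slight overlap (observation 8)
  for (var i = 0; i < GRID; i++) {
    for (var j = 0; j < GRID; j++) {
      // "interruptions": skip clusters where a noise field is high (observation 9)
      if (noise(i * 0.15, j * 0.15) > 0.72) continue;
      var x = MARGIN + (i + 0.5) * spacing;
      var y = MARGIN + (j + 0.5) * spacing;
      // mostly vertical, with a random rotation per line (observations 7 and 10)
      var angle = HALF_PI + random(-PI / 3, PI / 3);
      push();
      translate(x, y);
      rotate(angle);
      line(-len / 2, 0, len / 2, 0);  // same length and stroke for every line (observation 6)
      pop();
    }
  }
}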

sketch

 

Guodu-LookingOutwards09

screen-shot-2016-11-27-at-10-20-30-pm screen-shot-2016-11-27-at-10-20-03-pm

Kyuha Shim’s Generative Type Video

For my final project, I’d like to improve my alphabet book. As stated in my proposal, I’m working on making the fonts that appear actually alphabetical and making the book more generative.

As I was looking for inspiration, I found myself looking more at generative typography than generative books. I ended up looking at Kyuha Shim’s work (Q was one of the reviewers!). I really admire Q’s work because of how effectively and beautifully he’s been able to use data and software as a medium to create such dynamic type. I really aspire to develop such technical skills in programming so I can use them more fluidly in my design work. I would like to work towards making a generative book of generative typography one day. But first, improving my initial concept of a generative alphabet type book for babies. I’m not sure how generative typography would suit such a young audience, yet.

____________________

http://www.typeroom.eu/article/q-s-perpetual-and-amazing-quest-algorithmic-typography
http://code-type.com
http://printingcode.runemadsen.com/lecture-typography/
https://runemadsen.com/blog/on-meta-design-and-algorithmic-design-systems/

Guodu-Proposal

I want to create an interactive calligraphy experience where people can enter a word they want to practice writing, select different brushes, write on top of a template, and save their calligraphy. I’ve had a few opportunities to help out at drawing and calligraphy workshops. After a recent opportunity to help out at a digital drawing workshop, I was inspired by how excited people were to try digital drawing. But a small problem I see is that when people have just purchased a new iPad or a new sketching program, they do not know where to start on their blank white pixels. I’m also curious about the transfer of practicing drawing or calligraphy between digital and physical mediums.

I’d like to improve my Generative Alphabet Book by making it actually alphabetical and improving the graphic design while making it seem more generative. uh no time to print 


Guodu-ManifestoReading

7. The Critical Engineer observes the space between the production and consumption of technology. Acting rapidly to changes in this space, the Critical Engineer serves to expose moments of imbalance and deception.

In my own words — Critical engineers must be aware of, understand, and take responsibility for any new innovation that will be provided to the general public. Critical engineers must strike a balance between the idea of new innovation and natural human behavior when humans interact with new technologies.

I find this tenet interesting because it resonates with a lot of what we are taught in CMU’s School of Design. To this day, the main “manifesto” that most designers talk about is Dieter Rams’s Ten Principles for Good Design. I think there are a lot of similar themes, such as sustainability and the responsibility that comes with our ability to affect human behavior, especially for product designers who work with engineers.

CMU Design revamped their curriculum when I entered in 2014 to focus on Transition Design, or designing for sustainability. I came here thinking I’d learn how to make pretty, aesthetic things that people would buy because they looked pretty. NOW I realize the responsibility we have as makers to think about the magnitude of our decisions and how we can have a real influence in how people live their lives. While new technology and the “Internet of Things” sounds like cool stuff, are conversations and decisions being made about user needs, important intentions, and what type of future we want to live in?

dieter_rams_english_ten_principles__1600px

 

Guodu-Mocap

In collaboration with Lumar, we explored displaying the kinetic energy of the body’s movements. The spheres on the body grow and shrink depending on how much kinetic energy the chosen body part has at that moment.
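
As a rough sketch of the underlying calculation (the joint format, mass, and scaling are illustrative assumptions, not our actual mocap code): kinetic energy is estimated from how far a joint moves between frames, then mapped to a sphere radius.

// hedged sketch: per-joint kinetic energy from frame-to-frame positions
function kineticEnergy(joint, prevJoint, mass, dt) {
  var dx = joint.x - prevJoint.x;
  var dy = joint.y - prevJoint.y;
  var dz = joint.z - prevJoint.z;
  var v = Math.sqrt(dx * dx + dy * dy + dz * dz) / dt; // speed between frames
  return 0.5 * mass * v * v;                           // KE = 1/2 m v^2
}

// in p5.js, the energy could then drive the sphere size, e.g.:
// var r = map(constrain(ke, 0, MAX_KE), 0, MAX_KE, 5, 60);
// sphere(r);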

ballguychickendrumstickrainbowcube    rainbowopacoty

Guodu-LookingOutwards08

How Do You Design the Future?

“Transform Beyond Pixels, Towards Radical Atoms” by Hiroshi Ishii

Intro

  • The last time Hiroshi was in this room was for Randy Pausch’s Last Lecture, September 18, 2007
  • Ars Electronica
  • Students are the future, how do you inspire them?

Timeline

  • 1992: ClearBoard: Seamless Collaboration Media
  • 1995: TRANS-Disciplinary: Finding opportunity in conflict between disciplines & Breaking  down old paradigms to create new archetypes
  • Ideas Colliding, Opportunities Emerging, Disciplines Transcending, Arts + Sciences
  • Music Technology MirrorFugue III by Xiao Xiao – embodied interaction to artistic interaction
  • Lexus Design in Milan 2014 – Transform f1
  • 1. Visions >100 years 2. Needs ~10 years 3. Technologies ~1 year
  • Tangible Bits embody digital information to interact with directly with hands
  • Origin: Weather Bottle – the sound of weather coming out of a soy sauce bottle in her kitchen
  • I/O Brush by Kimiko Ryokai, Stefan Marti & Hiroshi Ishii 2004
    • It looks like a painting but goes beyond that
    • Capturing and weaving history
  • PingPongPlus
  • Audio pad by James Patten and Ben Recht (Physics & Media)
  • Urp: Urban Planning Workbench
  • Sandscape:
  • Two Materials:
    • 1. Frozen Atoms
    • 2. Intangible Pixels
  • Third Material
    • 3. Radical Atoms
  • Time Scape: based on relief, manipulate in real time
  • TRANSFORM
    • inFORM 2013: http://tangible.media.mit.edu/project/inform ART NOT UTILITY
    • Sean Follmer, Phillip Scholl, Amit Zoran,
    • Opposing Elements / Design vs Technology / Stillness vs Motion / Atoms vs Bits
    • Materiable is an interaction framework that builds a perspective
    • Flexibility, Elasticity, Viscosity
  • Biologic: “Bio is the new interface” http://tangible.media.mit.edu/project/biologic/
  • “Making material dance”
  • Why do you have to obey?
  • The Future is not to predict but to invent – Alan Kay 1971 “This is the century in which you can be proactive about the future; you don’t have to be reactive. The whole idea of having scientists and technology is that those things you can envision and describe can actually be built”
  • Envision — Art and philosophy,
  • Embody — Design and Technology,
  • Inspire — Art and Aesthetics
  • Eye –> Telescope –> Observatories –> Hubble Space Telescope –> Voyager 1
  • People could only see the world from their own perspective
    • Towards Holistic Worldview
    • Holistic Perspective –> Heuristic Focus –> (“Life is short”)
    • Inspiration: Douglas Engelbart, Mark Weiser, William Mitchell, Bill Buxton, Alan Kay, Nicholas Negroponte (Heroes and Gurus)
  • Who are friends? Bouncing ideas back, this tension is friendship
    • Golan Levin – Director of Studio for Creative Inquiry, CMU 🙂
    • Austin Lee
    • Lining Yao
  • Technology soon becomes obsolete
  • How do you focus on vision? What is the most exciting
    • Abacus – a physical embodiment of a digit
    • Abacus – sound of accounting
    • What do I care about?
    • Get more legs to your chair so people understand because art is abstract
  • Virtual Reality is completely opposite of Randy Pausch’s Dream and what I do, but I’m nice and I just say let them do it
  • Your one hour listening to me is beyond art, design, and technology
  • What do you want to communicate, and influence?
  • Reacting to Failure, sometimes the floor gets so low, the ceiling gets so high, but what’s the new potential?
  • Try not to think of Art, Design, Science, and Technology as boundaries

Guodu-Visualization

Data Visualization for Pittsburgh Healthy Ride

Initially, I was interested in how customers or subscribers choose their bikes. Do they examine the newness or wear and tear of the bikes? Of course, our data set did not have a rating of each bike’s physical condition, but maybe a trend in the bike ids would show something?

top10bike

In the end, I compared the top 10 bike trip durations and their bike ids between customers and subscribers. While I was able to achieve the simple interactive component of toggling between customers and subscribers, I could not manage to get the text working like in the following static d3 bl.ocks I was exploring.
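
For reference, a small sketch of the kind of aggregation behind the chart: total trip duration per bike id, split by user type, then the top 10. The field names follow the Healthy Ride CSV as I remember it; treat them as assumptions.

// hedged sketch: top 10 bikes by total trip duration for one user type
function topBikes(trips, usertype, n) {
  var totals = {};
  trips.forEach(function (t) {
    if (t.usertype !== usertype) return;            // "Customer" or "Subscriber"
    totals[t.bikeid] = (totals[t.bikeid] || 0) + Number(t.tripduration);
  });
  return Object.keys(totals)
    .map(function (id) { return [id, totals[id]]; })
    .sort(function (a, b) { return b[1] - a[1]; })  // longest total duration first
    .slice(0, n || 10);                             // e.g. [["70145", 123456], ...]
}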

screen-shot-2016-11-04-at-10-18-11-pm

screen-shot-2016-11-04-at-10-21-39-pm

Overall, a bar graph would have been more effective for understanding the information, since the trip duration would be visualized numerically instead of just as a proportional slice; or another bl.ock where I could include more information, like start and end stations. While it’s interesting to see that subscribers logged a lot of hours on bike #70145, if this bike were placed in the customer pie chart, it would have taken up more than half of the pie.

Hope I can one day ride the legendary #70145 bike.

__________

Just for some fun because I was thinking how I could get other people interested in data visualization about bikes…

bikeforfun

It’s impossible to read the text though… 😛

 

Guodu-LookingOutwards07

Nick Felton // Computational Information Visualization

far14_101415a-1-1-1024x640
2014 Annual Report

Nick Felton, or Feltron, is a data visualizer known for his personal annual reports mapping his self-tracked data (how much sleep he gets, how much he codes, where he’s been, how many pictures he’s taken, etc.).

I was almost going to write about his annual reports and just how much I appreciate the fact that he has not only visualized data but aesthetically visualized data, but I think this is plain obvious after exploring his website. Feltron has both a code and design sensibility. He’s really the middle man between designers who can’t code and coders who can’t design. Goals.

PhotoViz Book

screen-shot-2016-10-30-at-9-56-39-pm

I then became really interested in PhotoViz, another project that was just released in May 2016 that explores the intersection of photography, infographics, and data visualization. Trying to find out what’s inside the book, I think it’s more of a collection of other people’s work, and I’m not sure if Felton included any of his own projects related to photographic data viz. I’ll have to purchase the book to find out.

What’s fascinating is that with the proliferation of smartphones, and thus amateur photographers with selfie-taking abilities, people aren’t just taking photos at one decisive moment where they are limited to only 10 shots; they are taking many photos because they have the cloud or 128 GB of storage. There are even persistent cameras, meaning they can take a picture every few minutes for, let’s say, a month or a year. It’s almost like surveillance, and it’s meant to document rather than to artistically frame a shot. “In 2015, people around the world took a staggering 1 trillion photographs, according to research firm InfoTrends. By 2017, 4.9 trillion images will be stored online” (source). This opens up a lot of conversations about what to do with all this data and how to tell the story or identity that this data represents.

The overlap between photography and data visualization is blurry, and that’s what PhotoViz is.

photography <———–> data visualization <———–> code

Photography can successfully visualize data, like the cover of the book where all the planes within a 6-hour timeframe fly through LAX, but it isn’t necessarily grounded in code. In fact, the photographer painstakingly photoshopped each airplane. Photomontages of Olympic athletes also help visualize the unseen, the steps that go into a blink-of-the-eye routine. Yet Photoshop or photomontage software can also be used. Overall, I found a handful of examples in the book that have striking visuals, but I can’t tell which ones were created through code or by hand, except for Tega Brain’s project. Either way, I think PhotoViz is going to be a thing soon.

screen-shot-2016-10-31-at-1-30-30-am
Simone Biles Photomontage
photoviz_dylanmason_p032-033
365 Selfies, Dylan Mason

screen-shot-2016-03-24-at-2-40-00-pm-e1458854519898

Singapore Sunset, 2016
screen-shot-2016-10-31-at-1-49-59-am
“Since early 2008, roughly 40 million images have been uploaded to Flickr® every month making a rich, ever-growing digital collection that documents a vast range of human experience including our observations of other species. Keeping Time is made from over 5000 of these photos taken during the years 2002-2013.” (Tega Brain)

_____________

More on PhotoViz in this 30 min vid … Nick talks about his conception of Photoviz: here

There was a particular line that Felton said that struck me, “I think of data as the new wood, as a material. Historically designers worked with text and image, but now that’s not enough, you have to play with data. It’s like wood, you can use it in many different and precise ways.”

It made me think about what being a designer will be like in the next 50 years. It used to be the letterpress; now it’s data viz. What’s the next medium?

screen-shot-2016-10-31-at-12-45-45-am-copy

 

 

Guodu-LookingOutwards06

screen-shot-2016-10-30-at-6-05-30-pm

At first I looked at Every Color (@everycolorbot) and was astonished by the number of followers it had (90.7K!!!!!! that’s way more than Carnegie Mellon’s 39.6K followers @CarnegieMellon) for simply tweeting a few random colors throughout the day. Maybe I am not as excited about colors as these followers are. Continuing to investigate, I found what I think is a much more interesting color bot: ColorSchemer @colorschemez

screen-shot-2016-10-30-at-6-01-42-pm

“I’m trying to find colors that go well together. I’m probably not very good at it because I’m a robot with no sense of style.”

Having just a small introduction in the first person makes the bot more interesting and makes it seem like there’s more meaning behind these random color choices. Then the additional random adjectives that go with the colors produce some humorous content. Compared to the Every Color bot, I find this ColorSchemer bot more entertaining and something I’d subscribe to. The screenshots below are some of my favorite descriptions and/or color combinations. Avocado Sanstone… really? 😉

screen-shot-2016-10-30-at-6-04-44-pm screen-shot-2016-10-30-at-6-04-28-pm screen-shot-2016-10-30-at-6-04-01-pm screen-shot-2016-10-30-at-6-03-48-pm screen-shot-2016-10-30-at-6-03-12-pm screen-shot-2016-10-30-at-6-02-24-pm


Guodu-Book

Alphabet Fonts 

hold hands open

 

dsc02378_1

dsc02434

PDF Version

Final: AlphabetFonts.pdf

My book is about introducing fonts alphabetically. Start your baby early 🙂

Here’s a video of Golan flipping through my book.

On the right page there are 3 randomly generated letters in different fonts, and on the left are the names of those fonts. I thought it would be interesting not to make it extremely apparent which font matched which letter, so the letters are randomly placed with a slight change in opacity, rendered either white, transparent white, or transparent black. In this way, I hoped that the difficulty of matching the letters would prompt people to observe the letters more closely and see the subtle or extreme differences.

My own way of getting into typography, the nuanced differences between fonts, and their history was through examining fonts up close. I would print the same letter in different fonts at 600+ pt on paper because it was easier to trace and note the differences when the letters are so big.

example
Things to observe and learn when you start becoming a type nerd

Inspiration

I became really interested in how to better display the randomization of my font library after Marius Watz‘s awesome demo on Basil.js for scripting InDesign. I began searching for baby alphabet books and was inspired by the illustration style of Anna Kövecses, a Hungarian graphic designer. Anna also made an alphabet book of the 44 Hungarian letters (pictured below) for her 4-year-old daughter.

il_570xn-754715927_qqvq
Baby Alphabet Book
anna-kovecses_hungarian-alphabet
Anna Kovecses’s Hungarian Alphabet Book

Process + Sketches

20161029_145747
Initial Ideas for an ABC book
20161029_150208
Hand Binding Notes
draft
Experimenting with the appearance, opacity, and number of letters
process
Hand Binding in Progress

 

Self Evaluation/Future Iteration

  • I would like to make the fonts actually be presented alphabetically. For instance, the letter A should have 3 different fonts starting with A, like Avenir, Arial, and American Typewriter. I ran into a small problem: for certain letters (J, Q, U, V, X, Y) I did not have any fonts, or at least not 3. I’m going to need to download some fonts to make this work… but I’m going to make it happen! I regret that I didn’t figure out how to do this for this iteration.
  • Definitely going to have an outside printing service print my book (Espresso, Blurb, etc.). While it was fun learning how to hand bind and having control over the quality of paper and color, it was definitely laborious and, I think, even contradictory to the limitless/endless iterative quality of generative books.
  • I still want to explore the composition and placement of the letters on the right page. Right now I have random placement of letters and colors (white, transparent white, and transparent black). I’m not convinced this is the best way to represent the letters because there’s not a whole lot of meaning behind it other than it looks pretty. While the random placement produces some interesting compositions, sometimes it is really off.
  • For the left page, I am thinking of adding more text, like showing what the font looks like from A to Z instead of just naming it. Use RiTa.js or Temboo… what about a simple sentence or phrase where every word starts with the letter (An Ant, Big Bunny, Crazy Corn, etc.), or just #ahhhhh #bae #cool? (A quick sketch of this idea follows this list.) Overall, I’d like to strike a balance between a randomized/generated book and a well-composed one.
  • Do people feel like I just did this by hand in InDesign, with no scripting? Or is it obviously programmed? Or both?
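
A quick sketch of the alliterative-phrase idea in plain JavaScript (the word bank and function are hypothetical, not part of the book’s current code); RiTa.js could later replace the hand-made word bank:

// hedged sketch: pick an alliterative phrase for a letter from a hand-made word bank
var wordBank = {
  a: ["amazing", "ant", "avocado"],
  b: ["big", "bunny", "bear"],
  c: ["crazy", "corn", "cat"]
  // ...one entry per letter of the alphabet
};

function alliteration(letter, count) {
  var words = (wordBank[letter.toLowerCase()] || []).slice(); // copy so picks can be removed
  var phrase = [];
  for (var i = 0; i < (count || 2) && words.length > 0; i++) {
    var pick = Math.floor(Math.random() * words.length);
    phrase.push(words.splice(pick, 1)[0]);                    // pick without repeats
  }
  return phrase.join(" ");                                    // e.g. "big bunny"
}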

Code 
Having some trouble embedding syntax-colored code (WP-Syntax plugin). So here’s p5.

upload

 

#includepath "~/Documents/;%USERPROFILE%Documents";
#include "basiljs/bundle/basil.js";

//many thanks to Golan and Marius for the demos 
var jsonString = b.loadString("alphabet.json");
var jsonData;

function draw() {

    b.clear(b.doc()); // clears previous output

    var numOfLetters = 3; //numOfLetters generated on colored page
    var fonts = app.fonts; //object of all fonts
    var fontOpacity = 60;
    var fontInfoSize = 15;
    var fontSize = b.width-100;
    var margin = b.width*.1;
    b.println(fonts);
    b.println("font length"+app.fonts.length);

    //yay colors
    var fontColor1 = b.color(0,0,0); //black
    var fontColor2 = b.color(255,255,255); //white
    var backgroundColor1 = b.color(243, 173, 0); //yellow
    var backgroundColor2 = b.color(234, 158, 147); //pink
    var backgroundColor3 = b.color(50, 160, 255); //blue
    var backgroundColor4 = b.color(216, 66, 30); //red
    var backgroundColor5 = b.color(27,177,91); //green

    jsonData = b.JSON.decode(jsonString);

    //Cover Page
    //NOTE: much of the original script from here on was cut off when the blog
    //stripped the "<" characters; the loop headers and letter-drawing lines below
    //are a best-guess reconstruction so the surviving code reads correctly.
    for (var c = 0; c < numOfLetters; c++) {
        //cover composition went here in the original script
    }

    //one spread per letter of the alphabet (reconstructed)
    for (var i = 0; i < 26; i++) {
        b.addPage();
        var posY = margin;

        for (var j = 0; j < numOfLetters; j++) {
            //pick a random font and draw one large letter on the right page (reconstructed;
            //the real script also varied between white, transparent white, and transparent black)
            var fontName = fonts[b.floor(b.random(fonts.length))].fontFamily;
            b.textFont(fontName);
            b.textSize(fontSize);
            b.fill(fontColor2);
            var printLetter = b.text(String.fromCharCode(65 + i),
                                     b.random(0, b.width * 0.2),
                                     b.random(0, b.height * 0.2),
                                     fontSize, fontSize);
            if (fontOpacity > 0){
                b.opacity(printLetter, fontOpacity);
            }
            //textframe resize to content
            printLetter.fit(FitOptions.FRAME_TO_CONTENT);

            //left page fontInfo
            //what font is it?
            var posX = -b.width + 54;
            b.textAlign(Justification.LEFT_ALIGN);
            b.fill(fontColor1);
            b.textSize(12);
            var fontInfo = b.text(fontName, posX, posY, b.width*.5, 36);

            //leading
            posY = posY + 24;
        }
    }

    //Back Cover (the rest of the original script was also cut off)
    b.addPage();
    for (var i = 0; i < numOfLetters; i++) {
        //back-cover composition
    }
}

b.go(); //reconstructed: basil.js scripts are kicked off with b.go()

Guodu-FaceOSC

fsdf

Inspired by Text Rain and Typeface2 by Mary Huang. In the process, I became intrigued by painting with type, with one’s face movements.

gif1
Type as a path
gif2
Got into rotating type

gif4

gif5

// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//

import oscP5.*;
OscP5 oscP5;
// num faces found
int found;
float[] rawArray;
//which point is selected
int highlighted;
String word = "HelloWorld";
//int wordIndex = int(random(wordBank.length));
float textX;
float textY;
//float speed = 1;
float speed = .01;
//float posX = rawArray[0];

//float textSize = random(0 30);
float textSize = 30;
float rgbColor = random(255);

void setup() {
  size(640, 480);
  frameRate(30);
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawData", "/raw");

  textAlign(CENTER);
  textX = random(width);
}

void draw() {
  //background(255);
  //stroke(0);
  //noStroke();
  //background(255, 0, 0, 10);
  textSize(textSize);

  if (found > 0) {
    for (int val = 0; val < rawArray.length - 1; val += 2) {
      if (val == highlighted) {
        fill(255, 0, 0);
      } else {
        fill(100);
      }

      //ellipse(rawArray[val], rawArray[val+1],8,8);
      //text("Use Left and Right arrow keys to cycle through points",20,20);
      //text( "current index = [" + highlighted + "," + int(highlighted + 1) + "]", 20, 40);
      //println(rawArray[val]);
      fallingWords();
    }
  }
}

void fallingWords() {
  float rgbColor = random(255);
  textY = textY + speed;
  float x1 = map(textY, 0, height, 0, 255);
  fill(rgbColor + x1, 102, 153, 80);
  textSize = rawArray[128] / 5;
  rotate(rawArray[100]);
  text(word, rawArray[100], textY);
  if (textY > height) {
    //reset text's falling placement (textX, textY) and speed
    textX = random(width);
    textY = 0;
    //speed = 1;
    speed = random(.01, .1);
    background(204, 204, 204);
  }
}

void keyPressed() {
  if (keyCode == RIGHT) {
    highlighted = (highlighted + 2) % rawArray.length;
  }
  if (keyCode == LEFT) {
    highlighted = (highlighted - 2) % rawArray.length;
    if (highlighted < 0) {
      highlighted = rawArray.length - 1;
    }
  }
}

/////////////////////////////////// OSC CALLBACK FUNCTIONS //////////////////////////////////
public void found(int i) {
  println("found: " + i);
  found = i;
}

public void rawData(float[] raw) {
  println("raw data saved to rawArray");
  rawArray = raw;
}

 

Guodu-LookingOutwards05

daydream-labs-experiments

Lessons Learned from Prototyping VR Apps + Weird Reality Conference 

Stefan Welker (GoogleVR / Daydream Lab)

VR is something I’m not too knowledgeable about (yet), and I’m still skeptical of it. The Weird Reality conference was my first exposure to and experience with this technology. I’m mostly concerned about the potential motion sickness one can get from staying in a VR environment and the consequences of becoming disconnected from the physical world. But this conference changed my perspective: I now view VR as a medium to further understand our natural world, collaborate in interdisciplinary teams, and help people experience or see something they normally cannot.

I was really intrigued by Stefan’s talk because of the parallels I saw between the way Google Daydream Lab approaches designing for VR and the design process that I’ve been learning and applying in school. In design, we learn to feel comfortable with failure in order to improve; to iterate and test quickly to find the most appropriate solution to a problem. Stefan described their motto as Explore everything. Fail fast. Learn fast. It almost feels like they are in a rush to learn everything in order to have VR become a more widely accepted and helpful tool. In the past year they’ve built two new app prototypes each week, and the successes and failures show in just the few examples (out of many) that Stefan shared with us. Stefan even joked that their teams thought it wasn’t sustainable at first.

Lots of realizations, criteria, challenges, and discussions arose from their experiments, like:

  • users will test the limits of VR
  • without constraints in a multi-player setting, users may invade the privacy or personal space of other users
  • users can troll by not participating or responding in a timely manner
  • ice breakers are also important in a social VR setting because without an initiation of some sort, there is still social awkwardness
  • cloning and throwing objects is a lot of fun (I experienced the throwing aspect in the Institute for New Feeling’s Ditherer, in which it was possible to throw avocados on the ground)
  • adding play and whimsy into VR because you can and it’s fun

 

Even after listing some of these observations, I realize that with the seemingly limitless explorations that VR provides, understanding natural human behavior and psychology is integral in creating an environment and situation that encourages positive behavior from users.

Ultimately (as cliché as this sounds), Stefan’s talk and the Weird Reality conference opened up a new world for me in terms of the new possibilities and responsibilities that come with designing for VR or AR.

As Vi Hart says, VR is powerful; designers and developers have the ability to create anything in their imagination, and users will have newfound capabilities to experience the sublime and fly, or maybe flap.

Guodu-LookingOutwards04

Adrien M /  Claire B

I stumbled upon some of the recent works of Adrien M / Claire B, a French company headed by artists Adrien Mondot and Claire Bardainne. They create a range of digital arts for performances and exhibitions, combining the virtual and physical worlds. Their motto is “placing the human body at the heart of technological and artistic challenges and adapting today’s technological tools to create a timeless poetry through a visual language based on playing and enjoyment, which breeds imagination.”

I particularly enjoyed this performance, Coincidence (2011), where a juggler dances, juggles both a metal and digital sphere, and interacts with a background of living type. Adrien and Claire have been developing eMotion, a tool they implement in their projects to create objects (particles, text, drawing strokes, quartz compositions) that move and interact live with a performer.

Typography is always around us, every day, from the nutrition facts on Nutella spread to street-crossing signs to Facebook, etc. I thought the projection of large type surrounding, and even attacking, the performer was so poetic; it is no longer that the human controls and has influence over type (type designers, readers, writers), but the type equally influences us in good and bad ways (clarity, legibility, information, helpful, demanding). But what’s even more impressive is the ability of the type to seem alive and aware of the performers. Both are having a conversation with each other. I think it’s so much more natural and right that projections for performances are generated in real time instead of pre-recorded; it brings us into a more convincing new world. It’s just like a pit orchestra that responds accordingly to the actors and singers of a musical. Humans will always make mistakes, and algorithms are new lending hands.

More projects by Adrien M and Claire B:

Guodu-Plot

My interpretation of a gradient/transition from order to disorder was inspired by the design drawing warmups that I do every day. They mostly begin with drawing straight lines, then a few 2D shapes (squares and circles), then moving to 3D forms, the most important being the cube.

20160930_061611-copy

Here are some warmup pages from my sketchbook:

20160930_063520-copy

20160930_061354-copy

Deconstructing the cube, and ultimately drawing itself, is all about connecting points and drawing lines. I wanted a gradient that gave the sense of building upon itself, from dots, to lines, to squares, to cubes. I also started to think about machines or robots “warming up” at drawing, because by design they are really meant to draw complicated, generative, and parametric designs that humans typically can’t, or at least not in an efficient and perfectly precise manner. How do other people perceive machines drawing imperfectly? I’m intrigued by the idea of having a machine mimic my drawing style.
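
As a minimal p5.js sketch of that dots-to-lines-to-squares-to-cubes gradient (the spacing, sizes, and crude cube construction are illustrative; the plotted piece used different code):

function setup() {
  createCanvas(800, 200);
  noFill();
  strokeWeight(2);
  noLoop();
}

function draw() {
  background(255);
  var cols = 16, s = 20;
  for (var i = 0; i < cols; i++) {
    var x = 30 + i * 48;
    var y = height / 2;
    var t = i / (cols - 1);                               // 0 at the left, 1 at the right
    if (t < 0.25)       point(x, y);                      // dots
    else if (t < 0.5)   line(x, y - s / 2, x, y + s / 2); // lines
    else if (t < 0.75)  rect(x - s / 2, y - s / 2, s, s); // squares
    else                cube(x, y, s);                    // cubes
  }
}

// a crude "cube": a front square plus edges receding at 45 degrees
function cube(x, y, s) {
  var o = s * 0.4;
  rect(x - s / 2, y - s / 2, s, s);
  line(x - s / 2, y - s / 2, x - s / 2 + o, y - s / 2 - o);
  line(x + s / 2, y - s / 2, x + s / 2 + o, y - s / 2 - o);
  line(x + s / 2, y + s / 2, x + s / 2 + o, y + s / 2 - o);
  line(x - s / 2 + o, y - s / 2 - o, x + s / 2 + o, y - s / 2 - o);
  line(x + s / 2 + o, y - s / 2 - o, x + s / 2 + o, y + s / 2 - o);
}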

In the end, the concept that I had didn’t translate too well into code. I was too focused on the idea of a machine mimicking my drawing style, and I realized too late that it’s extremely easy for a machine to draw, with built-in drawing functions, the shapes that have taken me months to master. Secondly, the drawings and code are in sections; they aren’t really related or parametric. For next time, I’ll focus more on parametric design and on compositions that challenge the plotter more, or are appropriate for laser cutting.

While I was really excited to compare my hand drawings to the plotter drawings, the plotter had some technical difficulties. I apologize for not having the intended deliverable, but here it is on laser-cut 1/8″ plywood:

20161003_002940
Power was a little too high for my first laser cut, but it was interesting to see certain parts poking through.

20161003_003019 20161003_002952 20161003_002904 20161003_003007

Processing Screenshot:

screen-shot-2016-10-03-at-4-41-01-pm


Guodu-Clock-Feedback

My concept was sort of a humorous and light take on a clock: What TIME is it?

TIME represented:

T – Seconds / I – Minutes / M – Hours / E – Milliseconds

The feedback I got was overall very kind, positive, and constructive. I thought I had done much worse than some of the comments suggested, but I was glad that the concept felt “cute, simple, and effective.” I apologize if my clock caused anxiety because it was glitchy; that was not my intention.

Overall there is a LOT I need to work on. Right now the TIME does not actually represent the time, and I would like it to, so it is at least somewhat clear what time it is. The letters are also rotating around the bottom-left corner axis, whereas I’d like the pivot to be at the bottom center (a possible fix is sketched at the end of this post). Aside from these technical difficulties, I also need to work on:

  • having more intentional design (i.e. color palette choice, text choice)
  • more process and sketches as to how I arrived at my current design and typography interest
  • working on its potential, such as possibly an interactive mouse click that changes the color or font, enlarges the clock, or integrates more text so it doesn’t just say “what time is it?”

Overall, “This project has potential but seems a bit simple, just rotating numbers”.
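
On the rotation-pivot problem mentioned above, a minimal p5.js fragment showing one way to rotate a letter around its bottom center instead of the default bottom-left origin (the function and its arguments are illustrative, not my clock’s actual code):

// hedged sketch: rotate a letter about its bottom-center point
function drawRotatedLetter(letter, x, y, angle) {
  var w = textWidth(letter);
  push();
  translate(x + w / 2, y);       // move the origin to the letter's bottom center
  rotate(angle);
  textAlign(CENTER, BASELINE);
  text(letter, 0, 0);            // draw around the new origin
  pop();
}

// e.g. spin "T" once per minute: drawRotatedLetter("T", 100, 200, TWO_PI * second() / 60);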

 

Guodu-LookingOutwards03

Product Design and Parametric Forms

I got really into looking at generated parametric 2D designs on tangible materials (mostly lasercut) and 3D forms because I just never knew it was possible to create programs to generate 3D forms or on 3D mediums. My interest in this area of generative products comes from playing with the laser cutters in Ideate and the 3D printer in my products studio.

There were so many amazing programmer-artists and designers in the lectures that I wish I could write about them all (my top three being Marius Watz’s laser drawings, John Edmark’s Fibonacci bloom sculptures, and Wertel Oberfell’s Fractal Table). Something I find similar and extremely intriguing in most programmed 3D forms is a sort of meta-design, where nature-inspired patterns and forms are applied to natural materials (wood, plywood), or a nature-inspired pattern/form grows out of a 3D printer (stereolithography). It’s a new way of seeing nature and materials. The end results are beautifully nature-inspired and only possible through generative programs.

Marius Watz left the deepest impression on me because, before this summer started, I bumped into a designer in my studio who created forms similar to Watz’s below. He was designing what he called various organic, nature-inspired “knobs.” One of them looked like a sea urchin shell:

61fi275tcml

He told me he was designing for patients who have suffered a stroke and can no longer understand or realize what they are interacting with. That’s why he was exploring some organic forms that have more tactile grip and interaction compared with a door knob’s smooth cylinder.

Marius Watz’s forms (2011) make me think about how his design process could be applied to more fields, like helping medical patients improve their senses and ability to feel, acting as indicators on walls or products for changes in environmental settings, or simply enhancing our interaction with objects instead of just swiping or tapping on smartphones.

I’m not entirely sure what the steps would be, but it seems like he created an algorithm to generate these forms, entered the data into CAD software like Rhino, then exported it as STLs for the 3D printer, and had to adjust the design to get the best fidelity since the 3D printer can be janky. His CAD files look more chaotic, with complex layers and forms, than some of the final results, so there are definitely adjustments. His effective complexity looks to be more on the disorderly and complex side, with forms that look more inspired by natural organisms and ones that are less rotationally symmetrical. Overall I find Marius Watz’s forms beautiful to look at, potentially functional in their strong tactility, and just fun. It looks like he enjoyed his process and explorations, judging by the quantity, the varying designs, and his bright choice of color.

See Marius’s form studies and more of his work

7660-sf-kinotek-form-studies-makerbot-800-1 screen-shot-2016-09-25-at-11-20-07-am screen-shot-2016-09-25-at-11-19-02-am screen-shot-2016-09-25-at-11-17-03-am

Really inspiring work 🙂

Guodu-Reading03

tokyo-adventure-354-of-379-motion

1a. Effective Complexity

I think Tokyo’s Shibuya Crossing is an interesting example of effective complexity. This pedestrian crossing system is orderly yet chaotic. There are traffic laws and order when looking at the urban planning from above; we recognize the intersection, light signals, and an indicated pathway for people to walk from one street to another. Yet as many as 2,500 people navigate across every time the light changes, creating chaos yet somehow avoiding deadly collisions.

But if everyone crossed with their smartphones (as humans naturally do), then the system becomes even more complex:

https://www.youtube.com/watch?v=3NDuWV9UAvs

1b. The Problem of Uniqueness

Does it diminish the value of the art when unique objects can be mass-produced? (Galanter 2016)

Digital generative art introduces a completely new problem: rather than offering an endless supply of copies, it provides an endless supply of original and unique artifacts.

 

The arguments surrounding the idea of human touch and an artist’s uniqueness in products, digital or analog, really excite me. Studying product design, we’ve had discussions about designing products and forms that lie on the spectrum from emotive and personal to cold and machined. Basically, understanding when a product looks like there was a human behind it (personal touch, possibly imperfect, individualistic), or a machine (meant for the masses, too perfect, utilitarian).

So far in my design education, I’ve been making everything by hand, so all my products are uniquely mine, a signature if you will. As much as I can say with pride that I not only designed the product but made it with my own hands (yay for human touch), it is a tiring process. For me, the idea that an algorithm can create unlimited unique products is extremely helpful and efficient for the design process. As much as I enjoy thinking and designing originally, there have been so many instances where I get stuck and can’t iterate off a concept. I don’t want to sound like I’m lazy and want a robot to do my job for me. But I think as the systems and problems we design for become more complicated, being able to iterate to an unlimited degree is efficient in the design process. It’s almost scary to think about a future where robots can make creative decisions and may replace designers and artists, though that is still quite a ways off in autonomous technology. But ultimately, I think this is an emerging field where we can’t even visualize the potential and power of limitless iterations. When it comes to the value of unique products created algorithmically, I think that will be up to the user and audience. Whether something was created by a machine or a human, I find it extremely gratifying, and I appreciate that it is one of a kind. But who knows, if everyone started to say, “I have a one-of-a-kind iPhone,” it may become mainstream and less valuable really quickly.


Guodu-LookingOutwards02

Senior Year of High School Inspiration

The first time I learned about computational art was my senior year of high school, in my first graphic design class, where I sat next to a buddy of mine who was a “hacker.” At the time, I didn’t understand what that word meant or what programming really was, but he showed me some abstract “art” that “he told the computer to do.” At first I thought, why would you do this when you can just make it in Illustrator? But then he outputted something like 20 different compositions in a few seconds, and I thought, “woah.”

One Field Trip in 2014

The second time that computational and interactive art left a deep impression on me, which inspired me to take 15-110 and attempt 15-112 sophomore year, was when I went on a design field trip my first year at CMU. We toured a few places, but the two that I still remember to this day are two NYC design agencies with an appreciation for coding (I mean, one of them literally has the word code in its name): Code and Theory and Breakfast. These two places made me realize that designers can program and programmers can design, that these two fields are not mutually exclusive.

__________

Code and Theory

10295434_10152465507893716_2280347349063362912_o
Code and Theory presentation (2014)

Code and Theory showed us their website as we entered their studio; check it out below or, optimally, at their website:

code-and-theory

code-and-theory_1

As flashy and visually candy-like as this is, for me this was the first time I linked together design and programming as a way to visually communicate. Code and Theory could have easily decided not to include supplementary animations and interactions to go with their descriptions, but they didn’t, and I’m glad.

At the time I didn’t ask how or who created these animations for their website, but now I understand that these were definitely coded, mayhaps in JavaScript.

__________

Breakfast

10550174_10152465509323716_4255258074782758469_o
visiting Breakfast with some of my peers (2014)

Breakfast blew my mind with this project: The Electromagnetic Dot Screen 

And here’s more of the Dot Display in action:

Breakfast was commissioned by TNT to create this interactive billboard advertising a new crime-solving show, Perception. The protagonist can apparently see anagrams among large blocks of text, so Breakfast “revived a sign technology of yesteryear to create an anagram-finding experience on the streets of New York.” Zolty, Breakfast’s creative director and founder, explained that they had to write their own software in order to have the dots flip from black to white 15 times faster than the hardware was originally designed to, so that when anyone walked by, the interaction was in real time.

In some ways this project reminds me of Text Rain by Romy Achituv and Camille Utterback (1999). Both are interactive installations that track body movement, allowing the audience to play with letters and words. Breakfast’s installation and execution are definitely more complex in medium, code, and technology, but I find that both concepts, tracking people’s movements so they can interact with a screen (digital or physical), are simple and similar.

While I don’t imagine myself ever being the programmer on a team, I think Breakfast demonstrates what I believe can happen when a small team that has a diverse range of experiences and skill sets unifies. I really admire Breakfast’s ambition and philosophy of improving “how connected devices can look and act in the real world”. “Technology doesn’t need to stand out and look like technology. It can blend in and hide the complexity behind great design.”

“The future is not a touch-screen on a wall”

Some more pictures below from this memorable visit:

10694464_10152465509448716_8674539806132835402_o
We experienced it up close!
10687884_10152465509698716_6186259141845030745_o
40,000 flip dots
10699821_10152465509358716_4486998966672864596_o
Meeting Andrew Zolty, the creative genius, director, and founder of Breakfast
10549111_10152465509888716_8006350318872927708_o
Checking out another project, Points, a robotic sign system

Guodu-LookingOutwards01

Eyeo 2015 – Zach Lieberman, From Point A to Point B

zach-profile_original

Zach Lieberman is an American new media artist and programmer who has “a simple goal: he wants you surprised”. He describes the core of his work to be “augmenting the body’s ability to communicate”. He is the co-creator of openFrameworks and currently teaches at Parsons School of Design, where he received his MFA in Design and Technology.

Immediately upon speaking, Zach seems like such a down-to-earth, fun, and approachable guy, asking if the audience “felt like family.” We all went on a “journey” because his talk was so humorous, engaging, deep, and human, and this latter point is what I want to discuss about Lieberman.

Zach stated he’s been to Eyeo 4 times and it’s been an emotional ride: the first time he was really happy because it was his first time, the second time he was heartbroken for some reason, the third time he was happy again because he had just opened a new school (School for Poetic Computation), but at this fourth talk, his father had died just three weeks earlier. I was really moved that Zach still came to Eyeo to give his talk, making it a tribute to his father’s belief that “The world needs stories. We are drowning in data, and we need people to weave stories.”

What struck me was how genuinely passionate he was about his work; you can just hear it in his enthusiastic tone and the number of times he said he was “obsessed” with things like how a line (he talked A LOT about this, it was so cool) can be used to separate or disconnect people, how the “world is all around us, we just need a way to see or hear it,” and how everything is about connections. To me, his work doesn’t come off as computational or cold, which is what I associate with most programmers (but then again, I haven’t had much exposure to a lot of creative computing projects yet, so more to come in Looking Outwards :^) ).

I really admire this because I think some of Lieberman’s work can be considered human-centered design, which is what I’ve been learning in the School of Design. I think the way that he talks about his thought process for his Play the World piano radio is inspirational for me as a designer. I saw it as another way to think about creating products that are engaging, emotional, memorable, and understanding of natural human behavior. Basically, he created a program that makes the keys on a piano play the same pitch from a random song from around the world, so people can see and hear what African music might sound like, and then suddenly something from Rio de Janeiro (go to 27:19 in the video to see how excited kids and adults got when they played this piano). One guy who played it said to Zach, “I need this in my life.” The current, sad stereotype with technology is that we’d rather be spending time on our smartphone screens than with close family and friends. I think Zach’s attitude and work show that technology can touch upon topics that get people to see other people’s world views and stories, making it seem like the technology that we interact with now (mostly phones, tablets, and computers) is just the beginning and we will be having more intricate and thought-provoking experiences.

Other cool projects that I didn’t touch upon:

  • Last year, I designed a font with a friend, but Zach created a program to visualize a car’s movement into a font :O
  • Zach, my instructor Golan (!!!!!!!), and others worked together to create an abstract, playful speech performance


Guodu-FirstWordLastWord

The first thing I thought of as I was reading this was the quote, “Amateurs borrow, professionals steal.” If the world really judged work solely by whether it were “First Word Art,” then we’d all be screwed; not everyone is Haydn, or on the flip side Beethoven, in the music world, or Da Vinci in fine arts, or the Beatles in pop music, or Steve Jobs at Apple.

For me, I don’t consider myself even close to a creative genius, which I believe is what it takes to create either First Word Art or Last Word Art, one-offs basically. But I definitely consider myself creative because I’ve enjoyed drawing and playing music ever since I can remember. It began with learning classical piano, then taking art classes, then loving to draw anime (seriously), then switching to the trombone, then learning jazz trombone, then taking my first graphic design classes, and now today, where I express myself through design-style drawing and, most of all, photography. My interests are obviously not about mastery and delving deep into one area, but about learning across instruments and ways to draw. I find that whenever I start with a blank canvas, whether it’s the beginning of a jazz improvisation or a new Illustrator document, my first thought is about all the work that has already been produced, especially by the masters. Then I think, what can I offer? How can I still be novel? And the answer is usually through developing and honing a voice as well as understanding one’s worldview and values. The latter is still a work in progress, but now, being a third-year design student and having so much more experience seeing what others have already done, I find that I’m “borrowing” less and less from others.

Another thing that First Word Art and Last Word Art remind me of is the question of original work. Paul Rand, a famous graphic designer who designed the IBM logo, once said, “Don’t try to be original, just try to be good.” I think this quote is more relevant than ever now that the internet and social media provide a platform to share content and see each other’s lives. It’s becoming harder and harder to stand out when everyone is learning from each other and sometimes knocking off other people’s styles on platforms like Instagram and Tumblr. Aside from all the plagiarism and copyright problems the internet has caused, I personally think the internet has allowed people to find new outlets of expression and do innovative work. It does constantly feel like there is a lot of First Word Art being produced because people are getting so good at their hobbies and at using new technology.

Overall, what I’m trying to say is that I think the internet and technological advancements have shaped our culture to become more creative and collaborative. I think it’s such a wonderful thing to be able to google Beethoven and also hear about how a new kid like Joey Alexander is the “next big thing,” even a reincarnation of past legends. We can constantly get inspiration and feedback from each other to produce our best work. Some people are just better at thinking and designing and drawing and painting with more innovative techniques, and they naturally produce First or Last Word Art. I think a lot of people might get caught up in being the first or the last because they want to be acknowledged or remembered. In the end, the truly great artists are just focusing on their work, and it’s really how their work affects people on an emotional level that allows it to transcend eras.