lsh-telematic

Experience footage

The concept for this project was a multiplayer game of jump rope in the browser, in which each player has their own role (jumper or swinger).

In its original concept, the goal was a one-to-two connection, where two people swing the rope while a third jumps. Due to the difficulty of getting the physics to work across browsers, the scope was limited to synchronous one-to-one connections. The complementary roles of jumper and swinger would need to be fleshed out if I were to continue working on this sketch; there is a certain power dynamic between the one swinging the rope and the one jumping. Participants are anonymous, but anonymity does not significantly shape the experience. Location also matters less, though because of the timing involved, being in the same room probably makes the experience easier to manage. Ultimately, the project tries to explore a novel interaction over a wireless connection in the browser.

Unfortunately, the project is currently unfinished, as the logic turned out to be difficult to crack. To work around the physics issues, the rope swinger's client runs real physics, while the jumper's client simply has points mirroring the swinger's rope. If I were to take this further, I would implement a reward/interaction system for the jump itself.
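As a rough illustration of that workaround, here is a minimal sketch, assuming a socket.io connection and a hypothetical stepRopePhysics() standing in for the swinger's local simulation (not the project's actual code):

```js
// One sketch, two roles: the swinger simulates the rope and
// broadcasts its points; the jumper only mirrors what it receives.
let role = 'jumper';        // or 'swinger'
let ropePoints = [];

socket.on('ropeUpdate', (points) => {
  if (role === 'jumper') ropePoints = points; // no physics, just mirror
});

function draw() {
  background(255);
  if (role === 'swinger') {
    ropePoints = stepRopePhysics();           // real physics on one side only
    socket.emit('ropeUpdate', ropePoints);    // e.g. [{x, y}, {x, y}, ...]
  }
  noFill();
  beginShape();
  for (const p of ropePoints) vertex(p.x, p.y);
  endShape();
}
```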

MoMar-Telematic

Stop talking and get back to work: The video game

My project is an asynchronous one-to-many experience. Users can interact with each other only if they encounter other people at the water cooler. Every player is the same: no one has a special role, and everyone is in constant danger of being discharged. Participants can write their own nicknames, which are used on the water cooler message board. The project allows for remote communication; people can use this app from anywhere in the world.

My project was designed to emulate social distractions at the workplace by giving users a long time between tasks so they have time to talk. If someone gets distracted and misses a couple of deadlines, they get fired! Normally, the tab that the game is on would close after the user gets fired. The problem is, Glitch doesn't let me do that.

Some notes on graphics:

Brief Story

Imagine that you are working in an office building.
You are given a meaningless task with a long deadline.
You get bored, so you go to the water cooler to fill up your bottle. Your friends are there and you guys start talking.
But oh no! Look at the time, you should've submitted that project a little while ago...
While your friends shuffle away, your angry boss comes up to you.
"You're fired!" he screams.

Instructions:

Type in your name.
Press buttons on the computer display.
If you press the wrong buttons, you get strikes.
Three strikes and you're fired.
You need to keep an eye on your water level.
If you run out, you can collapse from dehydration!
Fill up your bottle at the cooler by clicking the water cooler icon.
Talk to your friends while your bottle fills up.
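The rules above amount to a small game loop; here is a toy sketch of it (all names invented, not the game's actual code):

```js
// Toy version of the office-game loop: wrong answers add strikes,
// water drains over time, and either three strikes or an empty
// bottle ends the game.
let strikes = 0;
let water = 100; // percent

function pressButton(correct) {
  if (!correct) {
    strikes++;
    if (strikes >= 3) endGame("You're fired!");
  }
}

function tick() { // call once per second
  water -= 1;
  if (water <= 0) endGame('You collapsed from dehydration!');
}

function refillAtCooler() {
  water = 100; // talking to friends happens while this "fills"
}

function endGame(message) {
  window.alert(message);
  // Closing the tab programmatically is restricted: window.close()
  // only works on tabs a script opened itself.
}
```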

I didn't embed my project because it uses window.alert(), so please press the button below.

Link to game

Link to code

sansal-Telematic

A Simple Music-Based Chatroom

 

https://glitch.com/edit/#!/harmonize?path=public/sketch.js:124:17

I edited the project after making the .gif, to make it easier to visualize what sound each client was making.

My main idea behind this project was to make a music-creating service that multiple people could edit at once, like a Google Doc. I first thought of using clicks on specific spaces to record a note, but I later realized it would be more stimulating if I mapped certain keys on the keyboard to piano keys. As such, the keys a through k map to corresponding notes from A3 to A4, and w, r, t, u, and i map to the sharps/flats.
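A minimal p5.js sketch of that mapping (the exact layout here is my reading of the description above, not the project's source):

```js
// Map the home row to the white keys A3..A4 and the row above to
// the sharps in between. MIDI numbers: A3 = 57, A4 = 69.
const KEY_TO_MIDI = {
  a: 57, s: 59, d: 60, f: 62, g: 64, h: 65, j: 67, k: 69, // A3..A4
  w: 58, r: 61, t: 63, u: 66, i: 68,                      // sharps/flats
};

let osc;

function setup() {
  createCanvas(400, 400);
  osc = new p5.Oscillator('sine');
}

function keyTyped() {
  const midi = KEY_TO_MIDI[key];
  if (midi !== undefined) {
    osc.freq(midiToFreq(midi)); // p5.sound helper: MIDI note -> Hz
    osc.start();
  }
}
```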

As per Golan's suggestion, and in response to general user confusion over the column location for each note, I've changed it so that each client's notes occur only within a specified column, and changing the note played changes only the row location. Now each user's "cursor" is constrained to a specific column. Also, the text in the left-most column tells you which key to press to play that row's note.

My network model is effectively many-to-one(-to-many). I'm taking multiple clients' music data and note location, and sharing it with all clients on the same canvas. The project is synchronous, so people will be "communicating" in real-time. Participants are unknown, as there is no recorded form of identification, so communication across the server is anonymous.

The biggest problem for my project was getting the oscillator values to be emitted from/to multiple clients. I couldn't directly emit an oscillator as an object with socket.emit, so with some help, I found that I needed to emit just the parameters for the functions each client's oscillator used, and create a new oscillator to play that frequency upon calling socket.on. This'll definitely lead to memory leaks, but it was the best hack to solve the problem. Another, earlier problem was that since music unfolds at a specific rate and draw() updates 60 times per second, I needed to decrease that rate so that note changes would actually be visible on each client's screen. This was an easy fix: frameRate(5) brought it down to a visually manageable number.
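The workaround looks roughly like this (a sketch assuming socket.io and p5.sound; the event and field names are mine):

```js
// Sender: emit only the oscillator's parameters, not the object itself.
function playNote(freq) {
  socket.emit('note', { freq: freq, type: 'sine' });
}

// Receiver: rebuild a fresh oscillator from the received parameters.
// (Each note allocates a new oscillator, hence the memory-leak worry.)
socket.on('note', (params) => {
  const osc = new p5.Oscillator(params.type);
  osc.freq(params.freq);
  osc.start();
  osc.amp(0.5, 0.05);        // quick fade-in
  setTimeout(() => {
    osc.stop(0.2);           // release after a short sustain
  }, 300);
});
```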

A problem that I found unsolvable was that Glitch kept saying that all p5.js objects and references were undefined, even though I included p5 in my HTML file. I later found out that this was because, although p5.js was defined on the browser side (so the code compiled and the p5 elements were visible), the Glitch server had no reference to p5 and would therefore throw an error.

 

vingu – telematic

Chaotic Garden Glitch

Enter Chaotic Garden 

This was inspired by Ken Goldberg and Joseph Santarromana's TeleGarden. I really liked the idea of maintaining a garden together, and the idea of community.

Users collaborate simultaneously, but plant their own seeds independently of each other (each user can only see their own plants). When they water their plants, they are watering everyone's plants (whoever is online). Users are anonymous, shown only by a cursor. This makes taking care of your garden somewhat chaotic: if someone else is watering their plants, water seems to appear out of nowhere and waters your plants as well. Each user's watering action affects all the other users' gardens.
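That shared watering could be wired up roughly like this (a sketch with made-up event names and a hypothetical waterPlantsNear() helper, not the project's actual code):

```js
// Client: plants stay local, but watering is broadcast to everyone.
function mousePressed() {
  socket.emit('water', { x: mouseX, y: mouseY });
}

socket.on('water', (drop) => {
  // Water falls at the same spot in every client's garden, which is
  // why it can seem to appear out of nowhere.
  waterPlantsNear(drop.x, drop.y); // hypothetical local helper
});

// Server (node.js + socket.io): relay watering to all connected users.
io.on('connection', (socket) => {
  socket.on('water', (drop) => {
    io.emit('water', drop); // everyone online, including the sender
  });
});
```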

Initially, I tried to make a virtual shared musical garden: the y position of each plant determines a musical note, like notes on a sheet of music. (In addition, the plants would die if not watered within 10 minutes.) I was not able to implement the shared plants and the music in time; the only thing shared is the watering action.

(first ideas of motion tracking and hand drawings)

zapra – telematic

Spaghett.io - A shared drawing space for noodley lines (among other things)

A shared masterpiece between me and some friends

View app

Using the template from the cmuems-drawing-game, I created a shared drawing space where lines have a maximum length and can "weave" through existing lines. I was interested in playing with the idea of a "shared" space by creating constraints that prevent canvas hogging. Since intersecting lines weave under existing ones, new drawings integrate into old ones rather than covering them up. Each line cannot exceed 1000 pixels, discouraging canvas-sized scribbles and thoughtless space consumption.
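The two constraints could look something like this (a sketch under my own assumptions about the data structures; the real app builds on the cmuems-drawing-game template):

```js
// Cap each stroke's total length at 1000 px while the user draws.
const MAX_LENGTH = 1000;
let currentLine = [];   // the in-progress line as {x, y} points
let lineLength = 0;

function mouseDragged() {
  const last = currentLine[currentLine.length - 1];
  if (last) lineLength += dist(last.x, last.y, mouseX, mouseY);
  if (lineLength < MAX_LENGTH) currentLine.push({ x: mouseX, y: mouseY });
}

// Standard segment-intersection test, used to find where a new
// segment crosses an existing one so it can be drawn weaving under.
function segmentsIntersect(p1, p2, p3, p4) {
  const d = (p2.x - p1.x) * (p4.y - p3.y) - (p2.y - p1.y) * (p4.x - p3.x);
  if (d === 0) return false; // parallel or collinear
  const t = ((p3.x - p1.x) * (p4.y - p3.y) - (p3.y - p1.y) * (p4.x - p3.x)) / d;
  const u = ((p3.x - p1.x) * (p2.y - p1.y) - (p3.y - p1.y) * (p2.x - p1.x)) / d;
  return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}
```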

I had an alternate concept for this app where touching lines would be deleted, encouraging people to add lines without intersecting the other clients' in a Blokus-like fill-up-the-space game. In the code you can see an area where I made an attempt (I plan on returning to it), but deleting the right lines proved to be more complex than I anticipated.

Process:

Initial sketches / concepts: some chat rooms, some shared "jukebox" apps using Spotify, and a game of motion-tracked foursquare
Some semi-coherent scribblings where I try to make sense of array structures

I think one of my problems with this assignment was that I could come up with ideas faster than I could figure out how to make them. For this project, I originally planned on making a "radio chat" where you could use a slider or dial to speak to people on the same "channel" as you. Landing on an unstable channel would produce a certain level of "static" from the other chats. I spent the first week of the assignment dissecting the chat template and trying to teach myself how to use HTML with p5. After a lot of frustration and confusion with the template, I switched over to the drawing app to see if I would have better luck. I had gotten about a third of the way through that when I had the idea of a drawing game that responded to intersections. Hesitant to abandon my first week of progress, I experimented with adding the intersection assignment's code to the drawing app and found myself much more interested in the new concept. Though I feel like I ended this project with a handful of semi-finished apps rather than one robust deliverable, I really enjoyed working through these problems and would like to revisit some of them in the future.

original "radio chat" concept, demonstrated through drawings instead of text

szh-Telematic

full process on vicky's page

Moood is a collaborative listening(ish) experience that connects multiple users to each other and to Spotify, using socket.io, node.js, p5.js, and the Spotify APIs.

In this "group chatroom" of sorts, users remotely input information (song tracks) asynchronously, and have equal abilities in leveraging the power (color, or mood) of the room.

This first iteration, as it currently stands, is a bare-minimum collaborative Spotify platform. Users type in a song track (it must be available on Spotify), which is sent to our server and passed along to Spotify. Spotify then analyzes the track based on six audio features:

1. Valence (a measure of the track's musical positiveness)
2. Danceability (how suitable a track is for dancing, based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity)
3. Energy (a perceptual measure of intensity and activity)
4. Acousticness (a confidence measure of whether the track is acoustic)
5. Instrumentalness (a prediction of whether a track contains no vocals)
6. Liveness (a detection of the presence of an audience in the recording)

These six audio features are mapped onto a color scale and dictate the color gradient shown on the screen, which is then broadcast to all clients, including the sender.
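A sketch of that pipeline, assuming a server-side fetch of Spotify's audio-features endpoint (the specific color mapping below is illustrative, not ours, and token handling is omitted):

```js
// Server side: look up a track's audio features, then reduce them
// to a single HSL color.
async function trackToColor(trackId, accessToken) {
  const res = await fetch(
    `https://api.spotify.com/v1/audio-features/${trackId}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const f = await res.json(); // valence, energy, etc., each in [0, 1]

  // One possible mapping: valence picks the hue (blue -> yellow),
  // energy drives saturation, acousticness softens the lightness.
  const hue = 240 - f.valence * 180;
  const sat = 40 + f.energy * 60;
  const lit = 40 + f.acousticness * 30;
  return `hsl(${hue}, ${sat}%, ${lit}%)`;
}

// Broadcast the resulting mood color to every connected client.
io.on('connection', (socket) => {
  socket.on('track', async (trackId) => {
    const color = await trackToColor(trackId, process.env.SPOTIFY_TOKEN);
    io.emit('mood', color);
  });
});
```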

At this stage, users can share the songs they are currently listening to and dictate how the "mood" of the room is represented by changing the room's color. Color is an extremely expressive and emotional visual cue, and it ties in beautifully with music.

Our initial idea was a lot more ambitious; however, we ran into several (an understatement, lol) issues. The original plan was to create a web-player environment consisting of the three RGB colors and their CMY overlaps, with white in the middle. Users would click different colors, and the combination/toggle of colors would trigger different songs based on our mapping of colors to the Spotify API endpoints used above (in our current iteration). Users could then dictate both the visual mood and the audio mood of the room by mixing colors and playing different songs. First, there was the issue of creating user authorization; there are several different types, some incompatible with certain code, and others with time limits. Next, there was the issue of handling playback with the Spotify Web API, versus the Spotify Playback SDK, versus Spotify Connect. The SDK did not allow for collaboration with node.js, and the other two created issues with overlapping sockets, listening ports, and so on. We were also unable to figure out how to pull apart certain songs from select playlists, but we could only dip into that because of the other, more pressing issues. Because the communication here is not just between server and clients, but also involves an entirely separate party (Spotify), there were often conflicts where that code intersected.

That being said, because we managed to overcome the main hill of getting all these parties to communicate with each other, we want to keep working on this project to incorporate music (duh). It is quite sad to have a project revolving around Spotify and music as a social experience without the actual audio part.

vikz-Telematic

 

Moood was a collaboration; the full writeup appears above under szh-Telematic.

by Sabrina Zhai and Vicky Zhou 

sovid – Telematic

Unfortunately, I was unable to get this project working completely with the server on Glitch. For this post, I'm sharing my trials and showing the things that did end up working.

Shown above is an example of the convex hull algorithm I implemented in the p5 editor with some mouse interaction. I was interested in creating a generative gem or some kind of abstract shape with the cursors of each client visiting the page, but as these things sometimes go, I could not get it to work in Glitch. I liked the idea of a shape growing and shrinking as a group of people collaborate on its formation, all the while leaving history of each move made with the trail of points each cursor would leave.
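For reference, here is a compact version of one standard convex-hull approach (Andrew's monotone chain; the sketch's actual implementation may differ) over a list of cursor points:

```js
// Andrew's monotone chain: returns the convex hull of a point list
// in counterclockwise order. Cursor positions from each client can
// be fed in as {x, y} objects.
function convexHull(points) {
  const pts = points.slice().sort((a, b) => a.x - b.x || a.y - b.y);
  if (pts.length < 3) return pts;

  const cross = (o, a, b) =>
    (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);

  const lower = [];
  for (const p of pts) {
    while (lower.length >= 2 &&
           cross(lower[lower.length - 2], lower[lower.length - 1], p) <= 0)
      lower.pop();
    lower.push(p);
  }

  const upper = [];
  for (let i = pts.length - 1; i >= 0; i--) {
    const p = pts[i];
    while (upper.length >= 2 &&
           cross(upper[upper.length - 2], upper[upper.length - 1], p) <= 0)
      upper.pop();
    upper.push(p);
  }

  lower.pop(); upper.pop(); // the endpoints are duplicated
  return lower.concat(upper);
}
```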

The sketch in the p5 editor can be found here, and the sketch I ended up on in Glitch can be found here.

lubar – telematic

The past message sender, the keeper-upper, the interrupter

An app that plays with the idea of trying to keep up with an ongoing conversation, and of coming up with something to add only once the conversation has already moved on. When you send a new comment, the chat app instead sends the previous message in the conversation under your name. This creates a new way of navigating a communication space: the user lacks control over the direction of the conversation as they press send, the messages intercept and disrupt the smooth flow of send and reply, and everyone collaborates simultaneously while always being one step behind.
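On the server, the mechanic can be captured in a few lines (a sketch with invented event names, not the app's actual code):

```js
// Each incoming message is held back; what actually gets broadcast
// is the *previous* message, attributed to the new sender.
let heldMessage = null;

io.on('connection', (socket) => {
  socket.on('chat', (msg) => {          // msg = { name, text }
    if (heldMessage !== null) {
      io.emit('chat', { name: msg.name, text: heldMessage });
    }
    heldMessage = msg.text;             // save it for the next sender
  });
});
```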

Link to webpage           Link to Glitch Program

- Process -

For the telematic piece, I wanted to create a translating chat app that takes your outgoing typed messages and shows them on screen in the languages of the other chatters (excluding your own), losing the original written text (in translation (heh!)). All messages coming in to you are translated into your chosen language, creating a possibility for dialogue and language untangling across boundaries.
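The relay could be structured like this (a sketch; translate() here is a stand-in for whichever translation API is used, and every name is hypothetical):

```js
// Server: remember each client's chosen language, and deliver every
// message translated into the *recipient's* language only.
const langs = new Map(); // socket.id -> language code

io.on('connection', (socket) => {
  socket.on('setLanguage', (code) => langs.set(socket.id, code));

  socket.on('chat', async (text) => {
    for (const [id, s] of io.of('/').sockets) {
      if (id === socket.id) continue;   // the sender never sees the original
      const translated = await translate(text, langs.get(id) || 'en');
      s.emit('chat', translated);
    }
  });

  socket.on('disconnect', () => langs.delete(socket.id));
});
```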

I thought that using the Google Translate API to change the text would be a great opportunity to learn more about APIs as part of this project. I wish I had not chosen to do so while also learning how to navigate Glitch and node.js, and while trying to untangle why working translation examples immediately failed when remixed. I ran into many obstacles trying to implement the Google Translate API in Glitch, then found an alternate resource and got the language detection and translation working! This was a glorious yet short-lived victory, as I later discovered that the alternate API limited the number of translations it would allow, thus stopping the program from working entirely at 10:40pm on Tuesday (yay!).

I'm incredibly frustrated that I was unable to get this to work; however, I feel that I learned a lot from the process (not necessarily the things I set out to learn, but useful all the same). Link to this project. I will be continuing to work on it.

So, setting that aside and working with some of the framework I had in place for the translation project, I switched gears in order to have a functioning program.