Category Archives: looking-outwards

Amy Friedman

14 Feb 2015

City Pulse

City Pulse – a biometric interface from Kalle Hübinger on Vimeo.

Installation created by Kalle Hübinger called City Pulse in January 2014 for Interior Design Week Köln 2014. This installation uses pulse rhythms to inform the light and sound of the space; the biometric data is analyzed, stored, and displayed. The piece’s description states, “The user is part of an experimental interfaces at the edge between action/reaction, inside/outside and part of the overall pulse of the city”. I don’t know if I truly understand how this connection is formed. The more people who participate, the greater the data set becomes, yet the installation only remembers the last three users and displays only the most recent, so I don’t understand the connection to the city as a whole. This installation uses MaxMSP, LEDs, and Arduinos to create the ambiance. It seems to create an immersive atmosphere, but I don’t know if it succeeds at encapsulating the pulse of the “city” itself. It reminds me of an installation I posted about in the past, “Qi Visualizer” by Yuan Yifan.
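The “remembers the last three users” behavior could be sketched roughly like this: pulse readings go into a short ring buffer, and the newest reading drives the output while older ones fade. All names, BPM values, and the brightness rule here are my own invention for illustration, not details from the actual installation.

```python
# Hypothetical sketch of a "last three users" pulse buffer.
from collections import deque

recent_pulses = deque(maxlen=3)  # the installation only remembers 3 users

def register_user(bpm):
    """Add a new user's pulse; return (bpm, brightness) layers for the LEDs."""
    recent_pulses.append(bpm)
    # Newest pulse at full brightness, older pulses progressively dimmer.
    return [(bpm_i, 1.0 / (len(recent_pulses) - i))
            for i, bpm_i in enumerate(recent_pulses)]

register_user(62)
register_user(75)
layers = register_user(90)
print(layers)  # the newest (90 BPM) user is brightest
```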

Humanthesizer

Calvin Harris – Humanthesizer by CInq7

Calvin Harris takes a new perspective on synthesizers by using human touch and foot contact to create music. Using MaxMSP, Arduinos, and body-safe conductive paint, Harris is able to program and perform his own music: human contact closes the circuits, which triggers the instruments. I think this installation succeeds in creating an interactive experience, but I don’t know if it is necessary to have half-naked girls for the music to succeed. It was an interesting take that lets people personalize their own music based on what each closed circuit is programmed to produce. This reminds me of the Piano Steps by Fun Theory, but here the music is created on people; it would be intriguing to build this into clothing so that a collective of people could orchestrate their own music and create their own rhythms through one another.
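The core trigger logic described above (contact closes a circuit, which fires a sound) can be sketched as simple edge detection on a set of switches. The pad names and sound files below are invented for illustration; the real piece runs through MaxMSP and Arduinos.

```python
# Hypothetical sketch of circuit-closure triggering: each painted pad acts
# as a switch, and a sound fires on the frame its circuit first closes.

PAD_SOUNDS = {
    "pad_kick": "kick.wav",
    "pad_snare": "snare.wav",
    "pad_synth": "synth_c3.wav",
}

def triggered_sounds(pad_states, previous_states):
    """Return sounds whose circuit just closed (a rising edge)."""
    return [
        PAD_SOUNDS[pad]
        for pad, closed in pad_states.items()
        if closed and not previous_states.get(pad, False)
    ]

# One polling frame: the snare circuit closes this frame.
prev = {"pad_kick": True, "pad_snare": False, "pad_synth": False}
now = {"pad_kick": True, "pad_snare": True, "pad_synth": False}
print(triggered_sounds(now, prev))  # -> ['snare.wav']
```

Detecting the rising edge (rather than the level) matters: a held touch should not retrigger the sample every frame.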


Zack Aman

13 Feb 2015

D.O.R.T.H.E. by Lasse Munk and Søren Andreasen

D.O.R.T.H.E. takes input from a typewriter, analyzes the words using both easily quantifiable aspects (such as the number of letters in a word) and more intangible ones (sentiment analysis), and then uses Max/MSP to route that input to noise-generating machines built from scrap electronics.

The part that I find most inspiring about the project is the output; the use of actual electronics and mechanics gives the sound a much richer, more expressive tone. My favorite part is when the dangling plastic gets moved onto the fan for a quick percussive roll. There’s as much art in finding great sounds as there is in composition (harkening back to found-sound pieces starting in the early 1900s), and these guys nail the sound selection. It reminds me of a project I wanted to do: recording the sounds of 3D printers, composing music from that sound output, and then making physical, printed representations of the music. It’s important to remember that generative form is both input and output, and both sides are solid in D.O.R.T.H.E.
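The input side described above (quantifiable features plus sentiment deciding where a word is routed) could look something like this. The word lists, machine names, and routing rules are all invented stand-ins; the real piece does its routing in Max/MSP.

```python
# Rough sketch of feature-based routing: word length picks which
# scrap-electronics "machine" plays, and a crude lexicon-based sentiment
# score picks the dynamics. Lexicon and machine names are hypothetical.

POSITIVE = {"love", "joy", "bright"}
NEGATIVE = {"dark", "broken", "lost"}

MACHINES = ["fan_percussion", "relay_clicker", "motor_drone"]

def route_word(word):
    word = word.lower()
    sentiment = (word in POSITIVE) - (word in NEGATIVE)  # -1, 0, or 1
    machine = MACHINES[len(word) % len(MACHINES)]        # length picks the machine
    intensity = "loud" if sentiment != 0 else "soft"     # sentiment picks dynamics
    return machine, intensity

print(route_word("love"))  # 4 letters, positive -> ('relay_clicker', 'loud')
```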


City Symphonies by Mark McKeague

This project uses a traffic simulation built in Processing and MaxMSP to create generative music based on the connections and placements of different cars.

The concept of this piece is great: cars are an interesting, dynamic spatial data set that is ripe for audio synthesis. Where I think this project falls short is on the output side: the sounds themselves are uninteresting, and the relation between input and output is unclear. The simple sine-wave form of the synthesized output is clean (too clean), a pure signal that harkens back to dial-up modems. It gives the impression of music by computers, for computers; if that is the intention, it would be great if the sound could hold enough information for the visual to be reconstructed from the sound alone.

Making music from transit has a long history, such as Pierre Schaeffer’s “Etude aux Chemins de Fer,” but in each case the musical output should be something interesting. Schaeffer was revolutionary for suggesting that the everyday sounds of trains could be art. McKeague is attempting to find beauty in traffic patterns, but he neither finds beautiful patterns nor translates the complexity into intriguing sounds.
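The kind of position-to-pitch sine mapping criticized above can be sketched in a few lines. The actual patch is Processing plus MaxMSP; the linear mapping, pitch range, and sample rate here are assumptions for illustration.

```python
# Minimal sketch of mapping car positions along a road to summed sine tones.
import math

SAMPLE_RATE = 44100

def car_to_frequency(x, road_length=1000.0, low=220.0, high=880.0):
    """Map a car's position along the road to a pitch between low and high."""
    return low + (high - low) * (x / road_length)

def render_chord(positions, duration=0.1):
    """Sum one sine wave per car into a mono sample buffer in [-1, 1]."""
    n = int(SAMPLE_RATE * duration)
    freqs = [car_to_frequency(x) for x in positions]
    return [
        sum(math.sin(2 * math.pi * f * t / SAMPLE_RATE) for f in freqs) / len(freqs)
        for t in range(n)
    ]

samples = render_chord([120.0, 480.0, 910.0])  # three cars -> a three-note chord
```

Pure sine tones like these carry very little information beyond pitch and loudness, which is one concrete reason the output reads as sterile: nearly all of the simulation’s structure is thrown away in the mapping.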

dantasse

13 Feb 2015

Machine Learning in the arts

Computers Watching Movies by Benjamin Grosser takes computer vision algorithms and applies them to movies. At first it’s just kind of cute, but upon a closer look, there’s something really interesting there. For example, compare the Matrix and Inception clips with Annie Hall and American Beauty: Matrix and Inception cover more of the screen; they look more “epic,” maybe. This suggests applications: maybe before watching the movie, you watch the computer watching the movie. Or maybe there’s something you can tell about people’s movie tastes from the CV outputs of what they watch. (I know Netflix would love to know about that.)

Genetic Algorithm Walkers by Rafael Matsunaga.

Genetic algorithms are a lot of fun, but it’s particularly fun when you’re genetic-algorithming something silly like humanoids walking. It shows the algorithm in action in a way that’s pretty easy to understand, and it maps the variables to easily remembered names. It’s a neat way to show evolution and to teach how genetic algorithms can come up with something that works pretty well for difficult tasks like walking with joints.
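A toy version of the algorithm behind the walkers might look like this: a genome is a list of joint parameters, fitness is "distance walked," and each generation keeps the fittest half and mutates it. Since the real demo runs a physics simulation in the browser, the fitness function below is a stand-in of my own; everything else follows the standard selection-plus-mutation loop.

```python
# Toy genetic algorithm: evolve a parameter vector toward higher fitness.
import random

random.seed(0)

GENOME_LEN = 6

def fitness(genome):
    # Stand-in for "distance walked": prefers parameters near 0.5.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.2):
    """Randomly nudge some genes with small Gaussian noise."""
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=30, generations=50):
    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]             # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # after 50 generations, all genes should sit near 0.5
```

Because the fittest half is always carried over unchanged, the best fitness in the population can never decrease from one generation to the next, which is why even this crude mutation-only scheme converges.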


Ron

13 Feb 2015

DJ Light Peru

This Max MSP-based sound and light installation allows users to serve as DJs, controlling the audio and visual performance over a large physical area. By using a thermal camera to detect movement, this project translates live arm movements and hand gestures into sound and light color. I think the concept is well done; the light is not only influenced by the user’s gestures but is also generated by the audio feeds. To me, the sound generated by this system seems too cacophonous and dissonant; it seems to clash with the lighting’s soft colors. This installation was part of a Christmas celebration in Lima, Peru, and I expected the effect to be more celebratory and harmonious.
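A gesture-to-output mapping of the kind described could be as simple as reading a tracked blob’s position in the thermal image and scaling it into a pitch and a hue. The frame size, pitch range, and mapping rules below are speculative; the actual patch is in Max MSP.

```python
# Speculative sketch: a tracked hand position picks a pitch and a light hue.

def gesture_to_output(x, y, width=320, height=240):
    """Map a hand position in the thermal frame to (midi_note, hue_degrees)."""
    midi_note = 48 + round(24 * (1 - y / height))  # higher hand -> higher pitch
    hue = 360.0 * x / width                        # left-to-right sweeps the hue wheel
    return midi_note, hue

print(gesture_to_output(160, 120))  # hand at the center of the frame -> (60, 180.0)
```

A mapping this direct would also address the critique above: if pitch and hue were derived from the same gesture coordinates, the sound and the light would be harder to perceive as clashing.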

LOVE – Interactive 3D Mapping

A monochromatic version of Robert Indiana’s LOVE sculpture is present at the Pratt Institute in Brooklyn, NY. At the time of the Max MSP-based installation, the sculpture wasn’t well lit at night, leading the artist to create an interactive element. Looped videos were projected three-dimensionally onto the sculpture, and colors would shift based on proximity sensors that detected a person’s position as she walked by. This implementation brings a certain liveliness to an existing monochromatic sculpture without physically altering it, and provides passers-by with an element of surprise. From the video, it was a little difficult to tell how a user’s position was mapped to the shifting of colors; I think presenting a more obvious mapping would make it more engaging. I also felt like the projection’s movements could have been slowed down to make it seem less busy/distracting.

rlciavar

12 Feb 2015

This week I’m going to focus on MAX MSP based projects.

AudioDome SoundBlox is an interactive sequencer installation. Different sound inputs are controlled by flipping and positioning large blocks, which are tracked from above; sounds are played back based on which image tracker is exposed.

I found this project most interesting because of its scale. There is a history of sound-generating projects that use image tracking or object positioning. However, these projects usually remain tabletop pieces and retain a lot of the characteristics of classic digital sound-manipulation tools. The SoundBlox are interesting because the blocks themselves begin to take on anthropomorphic qualities due to their sound and the positioning of the speaker inside each block. Their position relative to a person experiencing the installation becomes relevant, and the method of sound manipulation is more closely related to the sound itself.

I wish that the image trackers were more indicative of the type of sound they produced. It often became difficult to tell which block was responsible for which sounds or how they would change. It could also be interesting if the shape of the box itself reflected this. Maybe the boxes aren’t box-shaped at all.

NOISY JELLY from Raphaël Pluvinage on Vimeo.

Noisy Jelly is a fun, silly project that uses capacitive touch sensing in molded, colorful jello shapes to generate sounds through a MAX patch. I think the most interesting moments with this project are when the shapes create an unexpected sound in relation to their form, and when people begin to experiment with the shapes in new ways, particularly when the shapes are broken to create new shapes or stacked on top of each other to activate multiple sounds at once.
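The capacitive sensing side of a project like this typically boils down to comparing each shape’s reading against an untouched baseline and thresholding the difference into touch events. The baselines, threshold, and shape names below are invented for illustration; the real piece feeds its readings into a MAX patch.

```python
# Guess at a capacitive-touch signal chain: a reading per jelly shape is
# compared to its untouched baseline, and big enough deltas become touches.

BASELINE = {"dome": 100.0, "cube": 120.0}   # untouched capacitance per shape
THRESHOLD = 15.0                            # counts above baseline = a touch

def touch_events(readings):
    """Return (shape, pressure) for every jelly currently being touched."""
    events = []
    for shape, value in readings.items():
        delta = value - BASELINE[shape]
        if delta > THRESHOLD:
            events.append((shape, min(delta / 100.0, 1.0)))  # normalized pressure
    return events

print(touch_events({"dome": 160.0, "cube": 125.0}))  # only the dome fires
```

Stacking shapes, as described above, would raise several readings past their thresholds at once, which is exactly how multiple sounds could activate together.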