Ward Penney – Text Rain
Completed in Processing with OpenCV.
monome tonematrix by Andre Michelle
The tonematrix is a 16-step, 16-note, sequenced web-based music generator. Notes are highlighted when pressed and then triggered as the sequence steps through them horizontally. It is a tremendously expressive instrument for something that appears to be just an array of 256 buttons. It’s easy to use and understand, and the feedback, both visual and aural, is satisfying and informative. At the same time, the sound is simple and predictable, and variations in button combinations don’t create enough sounds that feel new.
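To illustrate the mechanic, here is a minimal Processing sketch of the same idea (my own sketch, not Michelle’s code): a 16×16 grid of toggles and a playhead that sweeps across the columns, printing each active note it passes. A real version would play a tone instead of printing.

int steps = 16;
int notes = 16;
boolean[][] grid = new boolean[steps][notes];
int cell = 25;
int playhead = 0;

void setup() {
  size(400, 400);
  frameRate(8);  // advance one step per frame, 8 steps per second
}

void draw() {
  background(0);
  playhead = (playhead + 1) % steps;
  for (int x = 0; x < steps; x++) {
    for (int y = 0; y < notes; y++) {
      // lit cells flash brighter when the playhead passes over them
      if (grid[x][y]) fill(x == playhead ? 255 : 150);
      else fill(x == playhead ? 60 : 30);
      rect(x * cell, y * cell, cell - 2, cell - 2);
      if (grid[x][y] && x == playhead) {
        println("trigger note " + y);  // stand-in for playing a tone
      }
    }
  }
}

void mousePressed() {
  // toggle the clicked cell on or off
  int x = mouseX / cell;
  int y = mouseY / cell;
  if (x < steps && y < notes) grid[x][y] = !grid[x][y];
}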
Reproducing Text Rain using openFrameworks:
The kinect piano is a larger-than-life way to play piano. Users can step on a keyboard projected on the ground, and the corresponding tone is played. The two fellas in the second half of the video demonstrate collaborative music making. This is a simplistic demo, but the idea behind it seems like so much fun. However, I’m sure the accuracy on this thing is pretty much atrocious, since the user does not have any markers to rely on. It is probably very easy to misstep, in every sense of the word, especially when finding footing between the black and white keys.
I think a large-scale collaborative music space would be something interesting to explore. The challenge with music-related applications is to strike a balance between creating dynamic yet pleasant music and building something that sounds too pre-canned. I’d entertain the notion that this could support a flash mob emerging from a crowd somewhere, which would be pretty awesome. The music could perhaps be derived from user movement, especially since the Kinect detects more than just locations: it produces a 3d vector that can be translated into various sound effects, as in the sketch below. It is my belief that music applications, if done well, hold much more potential to captivate people, since working with music is simply so much fun.
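As a sketch of that mapping (my own assumption of how it might work, with the mouse standing in for a tracked Kinect joint), here is a minimal Processing example that translates a position into pitch and volume using the Minim library:

import ddf.minim.*;
import ddf.minim.signals.*;

Minim minim;
AudioOutput out;
SineWave sine;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  out = minim.getLineOut(Minim.STEREO);
  sine = new SineWave(440, 0.3, out.sampleRate());
  out.addSignal(sine);
}

void draw() {
  background(0);
  // x position -> pitch, y position -> volume;
  // with a Kinect, these would come from a tracked joint's 3d vector instead
  sine.setFreq(map(mouseX, 0, width, 110, 880));
  sine.setAmp(map(mouseY, 0, height, 0.6, 0.0));
  ellipse(mouseX, mouseY, 20, 20);
}

void stop() {
  out.close();
  minim.stop();
  super.stop();
}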
Ryoji Ikeda’s live set is a music piece that experiments with black-and-white visual styles reminiscent of barcodes and digital tags. The piece uses high-speed visuals synchronized with music and sound manipulated by the artist on stage.
I think it is notable because music-based pieces are difficult to find. I have a personal interest in working with sound, and it seems that most audio pieces end up looking like a visualizer of some sort. This piece does not exhibit that property. In fact, it is highly peculiar, reminiscent of a dystopian future and a Japanese science-fiction aura. However, my critique is that the project doesn’t have much enjoyable sound, other than annoying beeps and sounds sampled from various electronic devices in the real world. If I wanted to listen to my computer beep at me, I’d do that myself on my own time. I’m looking forward to seeing an artist present a work that changes how sound is used or perceived.
However, it leads me to wonder: what if one made a soundscape that sounds like a sampling of real life? The audience would be surrounded by voices that are very familiar from daily life, while the visuals remain stark and hostile.
Ryoji Ikeda Live from Sheikh Ahmed on Vimeo.
Reproducing Schotter | 1965 | Georg Nees
using processing.js ::
as a processing applet ::
/*
 * Alex Wolfe - Spring 2011
 * Interactive Art and Computational Design
 * Reproducing Schotter (1965) by Georg Nees
 */
int rectSize = 20;
int rows = 23;
int col = 12;
int xOffset = 30;
int yOffset = 10;
int r = 8;

void setup() {
  size(300, 500);
  background(255);
  rectMode(CORNER);
  smooth();
  noFill();
  stroke(0);
  // the loop body was truncated in the original post; reconstructed here:
  // disorder (random offset and rotation) grows with each row down the grid
  for (int x = xOffset; x < xOffset + col * rectSize; x += rectSize) {
    for (int y = yOffset; y < yOffset + rows * rectSize; y += rectSize) {
      float disorder = (y - yOffset) / float(rows * rectSize);
      pushMatrix();
      translate(x + random(-r, r) * disorder, y + random(-r, r) * disorder);
      rotate(random(-QUARTER_PI, QUARTER_PI) * disorder);
      rect(0, 0, rectSize, rectSize);
      popMatrix();
    }
  }
}
openFrameworks:
This is an example of my openFrameworks “Text Rain” interpretation. I used the openCVExample, including the various ofxCv classes (particularly ofxCvContourFinder.cpp).
OF “Text Rain” Interpretation from eric brockmeyer on Vimeo.
My introduction to interactive art was through Robert Hodgin, and I’m a big fan of his work. My favorite piece of his, although not prominently featured, is his magnetic ink project.
Magnetic ink is a project that documents the process of making magnetic ink prints; what attracts me is the sheer beauty of the presentation. A few orbs spin on top of what appears to be a surface, and ink splatters off the orbs to form graceful traces beneath them. There is a zen-like quality to the entire video; I love it very much and would like to reproduce it one day.
My only critique of this, and of much of his work, is that it tends not to be real-time.
From this project, I see the possibility of generating ink- or paint-based artwork through a process driven by a larger system of forces. I think the idea is very enticing and has a lot of potential for organically generated forms.
Magnetic Ink, Process video from flight404 on Vimeo.
wu tang forever
a tribute to Text Rain (1999) by Camille Utterback in Processing
TextRain:
Created by running a threshold filter: pixels brighter than the average pixel brightness are changed to white, and the rest to black. When a letter hits a “black” pixel, it is moved upwards; otherwise it moves downwards.
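A minimal Processing sketch of that mechanic (my own reconstruction under those assumptions, not the original applet) might look like this:

import processing.video.*;

Capture cam;
String message = "wu tang forever";
float[] ys;  // vertical position of each letter

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  // newer Processing video versions also need cam.start();
  ys = new float[message.length()];
  textSize(24);
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();

  // threshold at the average brightness of the frame
  float sum = 0;
  for (int i = 0; i < cam.pixels.length; i++) sum += brightness(cam.pixels[i]);
  float threshold = sum / cam.pixels.length;

  // show the thresholded frame: brighter than average -> white, rest -> black
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    pixels[i] = brightness(cam.pixels[i]) > threshold ? color(255) : color(0);
  }
  updatePixels();

  // letters fall, and ride upward when they hit a "black" pixel
  fill(0, 0, 255);
  for (int i = 0; i < message.length(); i++) {
    float x = map(i, 0, message.length(), 40, width - 40);
    int px = constrain(int(x), 0, width - 1);
    int py = constrain(int(ys[i]), 0, height - 1);
    if (brightness(cam.pixels[py * width + px]) < threshold) ys[i] -= 2;
    else ys[i] += 2;
    if (ys[i] > height) ys[i] = 0;  // wrap back to the top
    text(message.charAt(i), x, ys[i]);
  }
}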
100.000.000 Stolen Pixels is a project by Kim Asendorf in which a web crawler, starting from 10 urls, searched pages for images and hyperlinks. Hyperlinks were added to its list of urls to search, and images had a 10×10 square cut from them, all of which were assembled into a massive, google-maps-viewable mosaic comprising 100,000,000 pixels from 100,000,000 images. It’s a beautiful and bewildering amount of data, and it’s almost like staring into a void of noise. When you begin to pick out patterns, such as the oft-repeated image of a pencil, perhaps from blog software of some kind or wikipedia, the mosaic becomes easier to view. The massive 6-degrees-of-kevin-bacon game with the internet is perhaps more intriguing than the visuals, but this is also one of the project’s flaws. I found the url log even more intriguing than the image mosaic, because I could pull understandable data from it; the mosaic, while astounding, gives few clues about what the images originally depicted or where they came from. If I could, say, click on a square and be told what webpage it was downloaded from (perhaps with a link to the original image?), that would push this much further, in terms of both the time that can be spent exploring this massive collection and how rewarding spending that time is.
This is a video of my Text Rain reproduction. I coded it in Processing, but had trouble embedding the applet (it couldn’t find the Capture class when it tried to run on the blog), so I made a video instead. It uses code from the background subtraction example on processing.org. There are a few small bugs, but they mostly stem from the lighting and the color of my walls. My screen capture software was acting up as well, which explains the stuttering in the video.
Nonmanifold Mandible is a video by Ben F. Carney in which a digital model of a human face has its controls linked to 4 channels of audio, distorting and warping it depending on the music, in order to “parallel unreasonable human behavior”. The music is fast-paced, and the face quickly warps between a recognizable human form, a jagged mass of swirls and spikes, and something in between. I’ve always found the distorted, glitchy aesthetic appealing, but applying it to a human, more recognizable form, and switching between different areas and magnitudes of distortion so rapidly, moves past a mere exploration of glitch and distortion into a disturbing and monstrous realm. While the visuals are grotesque and engaging, and the rapid music and pace add a lot, it moves so fast that it’s hard for me to draw a mental connection between sound and distortion beyond a rhythmic one. I wish I could unravel more of what’s going on behind the scenes just by watching and analyzing. Because it’s hard to establish that link, I almost begin to wonder if there is one.
This is my version of TextRain in openFrameworks:
The idea is very simple. A background image is used to estimate the regions of the video occupied by active observers. The difference image between the current frame captured by the camera and the background is smoothed, thresholded and dilated. These operations generate a foreground map that determines when to change the position of the text.
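As a rough illustration of that pipeline (sketched here in Processing with PImage’s built-in filters rather than the openFrameworks/OpenCV calls actually used; the threshold value is a guess):

import processing.video.*;

Capture cam;
PImage bg;  // background frame, captured on the first frame (press 'b' to re-capture)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  // newer Processing video versions also need cam.start();
}

void keyPressed() {
  if (key == 'b') bg = null;
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  if (bg == null) bg = cam.get();

  // difference image between the current frame and the background
  PImage fg = createImage(width, height, RGB);
  cam.loadPixels();
  bg.loadPixels();
  fg.loadPixels();
  for (int i = 0; i < fg.pixels.length; i++) {
    float d = abs(brightness(cam.pixels[i]) - brightness(bg.pixels[i]));
    fg.pixels[i] = color(d);
  }
  fg.updatePixels();

  // smooth, threshold, and dilate to get the foreground map
  fg.filter(BLUR, 2);
  fg.filter(THRESHOLD, 0.2);  // 0.2 is a guessed threshold
  fg.filter(DILATE);
  image(fg, 0, 0);            // white regions mark active observers
}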
“Light Butterflies” is a lighting piece executed by Chiara Lampugnani for the Milan International LED Light Festival. What caught my eye was the abundance of like shapes forming a field effect at the urban scale. Lighting projects are often interior installations that need little luminosity to create a big effect. These butterflies create a (seemingly) living canopy over a city street.
Their beauty lies in their simplicity, but the inanimate nature of these forms may dull their impact over time. There is a great opportunity to create subtle variations in lighting (changes imperceptible to the casual observer) that could allow this piece to take on an element of time.
eCLOUD from Dan Goods on Vimeo.
“eCloud” by Aaron Koblin, Nik Hafermaas and Dan Goods is an interactive sculpture at the San Jose airport which provides real-time weather information to air travellers. The project reminded me of Natalie Jeremijenko’s “Live Wire”, which visualized web traffic on a local router.
The eCloud is fascinating in its use of polycarbonate plates that somehow (it is not clearly explained) move between opaque and transparent states. The field effect here is, again, stunning.
The project seems to fall short in that it is beautiful to look at, but the sculpture itself provides little useful data to users. As it is intended to be a sculptural display, I view this as a shortcoming.
I am fascinated by the development of CNC food production. These types of machines have (subtly) inserted themselves into many facets of our lives, and food seems like a critical and final frontier. Food preparation has traditionally been an intimate act, one that modern food production has almost succeeded in destroying. I’m curious whether new tools like the “3d food printer” could scale down enough to have an impact in homes instead of factories or high-end restaurants.
I’m less interested in the effectiveness of this particular instantiation of CNC food; however, it does suggest that we will be confronted with questions similar to those of the CNC fabrication revolution. How does this change the perceived value of handcraft? Will mass production dissolve into mass customization? And who chooses the flavors in the tubes?
Now That’s What I Call MIDI! is a project by the group Internet Archaeology to take MIDI versions of songs that were popular during the earlier days of the internet (~1996-~2002) and release them on a physical vinyl record EP. I think the reframing and relocation of something outdated and purely digital (MIDIs of 10-year-old pop songs) onto a format that’s even more outdated, revival in popularity notwithstanding, and purely analog (vinyl records) is a rather pointless but somehow appropriate act, and it’s that paradox that makes this project interesting to me. It does, however, come off as almost not enough. 16 MIDIs aren’t much, and it’s only EP length. While money is certainly an issue, it could be interesting to play with and remix some of these artifacts instead of just drag-and-dropping them onto a record. The tracklist isn’t given, nor do users (who must contribute money to make the project happen at all) have any say in what will be included. The world of old MIDIs is gigantic, and this seems like a tiny and unprepared slice.