Mark Shuster – Text Rain
Text Rain
Far from perfect, but it does respond to visual data. This was created in openFrameworks.
Processing
Processing.js
Processing code
size(420, 724);   // size() must come first so the canvas exists before drawing
background(255);
smooth();
fill(0, 0);       // fully transparent fill, so only the square outlines show

float entropy = 0;
for (int i = 0; i < 20; i++) {
  for (int j = 0; j < 12; j++) {
    int x = 25 + 30*j + int(random(entropy*-1, entropy));
    int y = 15 + 30*i + int(random(entropy*-1, entropy));
    int angle = 0 + int(random(entropy*-1, entropy));
    pushMatrix();
    translate(x, y);
    rotate(radians(angle));
    rect(0, 0, 30, 30);
    popMatrix();
    entropy += 0.08;
  }
}
openFrameworks
Gives you a new version each time it runs.
1. Delicate Boundaries
This is an awesome project. Small bugs made of light crawl out of the computer screen and onto people's bodies. The most interesting thing about this project is that it changes people's idea of what a screen can do: it lets the screen and the interaction evolve into a more tangible space. The experience goes beyond people's expectations, which is why it catches their eyes.
If the bugs could respond to people's bodies with some purpose, rather than just crawling randomly, it would be even more interesting.
An interactive artwork that allows the spaces inside our digital devices to move into the physical world. Small bugs made of light crawl out of the computer screen onto the human bodies that make contact with them.
ENVISION : Step into the sensory box
Envision is an installation that maps a projected animation onto a physical space.
This project first caught my attention because of its impressive visuals, animation transitions, and synesthetic effects. I am fascinated by augmented reality projects and other projects that use limited and localized hardware to drastically enhance relatively large environments.
I think this project could benefit from some physical interactivity: perhaps interacting with the physical surfaces could influence the nature or outcome of the animation.
pCubee
pCubee is a handheld multifaceted display that lets users interact with virtual environments by tracking the user and the display's current orientation in space, resulting in the illusion of perspective and dimensionality from the user's point of view.
This is a novel project because it allows an individual to interact with a virtual 3D environment through tangible manipulation. It presents a way of interacting with 3D virtual environments that is less dissociative than conventional virtual reality and more physically interactive than most augmented reality applications I’ve seen.
My only criticism is that they don't suggest any practical applications besides mere amusement or gaming… after all, it's just a tech demo.
F L U X by Candas Sisman
This is a short digital animation that evokes some qualities of synesthesia through its synchronized audio and evolving forms, somewhat reminiscent of works by Universal Everything (creators of Advanced Beauty).
Given the quality of the animation and the mastery of the artist, I don't have much to critique; I would like to see more of her work.
Just Landed is a visualization of airline flights by Jer Thorp, made using Processing. Twitter is searched for passengers' tweets containing the phrase "just landed". Each tweet is parsed for a destination, and a home location is taken from that passenger's profile. Each data point becomes an arc in the animation.
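To make that pipeline concrete, here is a minimal Processing sketch of just the arc-drawing step, with a few hard-coded, hypothetical home/destination pairs standing in for parsed tweets (the cities, coordinates, and projection are my own assumptions, not Thorp's code):

// Hypothetical stand-in data: { homeLat, homeLon, destLat, destLon } per "passenger".
// In the real piece these values come from parsed tweets and profile locations.
float[][] trips = {
  { 40.7,  -74.0,  51.5,   -0.1 },   // New York  -> London
  { 35.7,  139.7,  37.8, -122.4 },   // Tokyo     -> San Francisco
  { 48.9,    2.4, -33.9,  151.2 }    // Paris     -> Sydney
};

void setup() {
  size(720, 360);
  smooth();
}

void draw() {
  background(20);
  noFill();
  stroke(255, 180);
  for (int i = 0; i < trips.length; i++) {
    float x1 = lonToX(trips[i][1]);
    float y1 = latToY(trips[i][0]);
    float x2 = lonToX(trips[i][3]);
    float y2 = latToY(trips[i][2]);
    // Lift the midpoint of the curve to suggest a flight arc.
    float mx = (x1 + x2) / 2;
    float my = min(y1, y2) - dist(x1, y1, x2, y2) * 0.25;
    bezier(x1, y1, mx, my, mx, my, x2, y2);
    ellipse(x2, y2, 4, 4);
  }
}

// Simple equirectangular projection onto the canvas.
float lonToX(float lon) { return map(lon, -180, 180, 0, width);  }
float latToY(float lat) { return map(lat,   90,  -90, 0, height); }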
It is a clever use of social media as a new data set. Information which ten years ago would have been available only to large airlines keeping careful records is now published freely online, and by no conscious decision of anyone involved. However, the animation is a bit misleading; many assumptions are made when interpreting the data. (For example, all passengers are assumed to be coming from their home.) The inaccuracy is a compromise made in order to create impressive visuals from informally-gathered data.
More info available on Thorp’s blog.
void setup() {
  size(300, 600);
  smooth();
  noFill();
  background(255);
  rectMode(CENTER);
  noLoop();
}

void draw() {
  int x, y;
  int offset = 50;
  for (y = 0; y < 24; y++) {
    for (x = 0; x < 12; x++) {
      resetMatrix();
      translate(offset + x*20, offset + y*20);
      float r = random(-y, y);
      rotate((PI * 2) / 360 * r * 2);
      rect(r/2, r/2, 20, 20);
      resetMatrix();
    }
    translate(-x*20, 0);
  }
}
Seidel uses a particle system to create vivid complex forms inspired by sound.
I love the gorgeous colors and surprising intricacy of the image at the peak of generation. The resulting composition and purity of color from something that is pseudo-random is enviable. A longer duration or multiple iterations would help, though; the moment where the system finally resolves itself is far too tantalizingly short. The great linear quality of that first black stroke is lost in the original mess of colors, and it would be great to capture and relish similar moments. (Though a mere 35-second clip makes it simple to watch repeatedly and pick out the details.)
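For context on the underlying technique, here is my own minimal sketch (not Seidel's code, and ignoring the audio side entirely): a particle system that never clears the background, so its noise-driven trails accumulate into a dense, slowly resolving form.

int nParticles = 400;
float[] px = new float[nParticles];
float[] py = new float[nParticles];

void setup() {
  size(500, 500);
  smooth();
  background(0);
  // All particles start at the center and wander outward.
  for (int i = 0; i < nParticles; i++) {
    px[i] = width/2;
    py[i] = height/2;
  }
}

void draw() {
  // The background is never cleared, so the trails build up over time.
  for (int i = 0; i < nParticles; i++) {
    float oldX = px[i];
    float oldY = py[i];
    // Noise-driven velocity field; the scale factors are arbitrary choices.
    float angle = noise(px[i]*0.005, py[i]*0.005, frameCount*0.002) * TWO_PI * 4;
    px[i] += cos(angle);
    py[i] += sin(angle);
    stroke(255 - i % 200, 150 + i % 100, 200, 20);
    line(oldX, oldY, px[i], py[i]);
  }
}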
MorphoLuminescence | PROJECTiONE
MorphoLuminescence is a digitally fabricated kinetic light sculpture that reacts to its viewers. Arduino-controlled infrared sensors pick up people standing or moving underneath the work, and servos shift the individual petals to create an ever-fluctuating surface. Between the constantly shifting form and the range of colors from the multiple RGB LEDs, virtually countless lighting effects can be achieved.
Flawless execution aside, "Morpho" offers a dynamic and customizable experience to the user. Embracing both form and function, it's gorgeous, interactive, and creates a beautiful atmosphere. I especially like the form of the original curve. It would be interesting to have a version that, instead of creating a choppy moving surface once people are detected, changed the arc of the surface to focus the most light where the person is standing.
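As a very rough sketch of that suggested variation (a simulation only, with made-up petal positions and the mouse standing in for the IR-sensed person), each petal could simply drop lower the closer it is to the person, so the surface dips toward them and concentrates the light there:

int nPetals = 12;
float[] petalX = new float[nPetals];

void setup() {
  size(600, 300);
  smooth();
  // Evenly spaced petals; in the real piece the layout comes from the fabricated form.
  for (int i = 0; i < nPetals; i++) {
    petalX[i] = map(i, 0, nPetals - 1, 50, width - 50);
  }
}

void draw() {
  background(30);
  // The mouse stands in for the position reported by the infrared sensors.
  float personX = mouseX;
  for (int i = 0; i < nPetals; i++) {
    // Petals nearer the person drop lower, focusing the surface (and its light) on them.
    float d = constrain(abs(petalX[i] - personX), 0, 300);
    float drop = map(d, 0, 300, 120, 0);
    stroke(120);
    line(petalX[i], 40, petalX[i], 60 + drop);   // the "wire" each petal hangs from
    noStroke();
    fill(255, 240, 200);
    ellipse(petalX[i], 60 + drop, 30, 12);       // the petal itself
  }
}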
Crystallization | Iris van Herpen and Daniel Widrig in collaboration with .MGX
Crystallization is a generative 3D-printed shirt created for Iris van Herpen's Spring 2010 collection, shown at Amsterdam International Fashion Week and London Fashion Week. Utterly impractical yet completely mesmerizing, Crystallization is one of the first pieces to use digital fabrication/rapid prototyping in couture, which by definition somewhat undermines the point of couture (you pay the ridiculous prices for a handmade one-of-a-kind), but it is still ridiculously cool. I love the way it just balances on the figure using two simple shoulder holds, and it's actually extremely visually interesting from the back as well. It of course begs the question of a future where you can buy clothes online and simply print them out at your desk. Although the complex form and seashell-like design work well on the runway (unfortunate shorts aside), it would be interesting to have a more accessible ready-to-wear version that could actually be mass-produced, taking full advantage of the technology.
Processing.js:
Processing:
Processing Code:
boolean toChange = true;

void setup() {
  size(400, 760);
  smooth();
  noFill();
  background(248);
}

void mouseClicked() {
  toChange = true;
}

void draw() {
  if (toChange) {
    fill(248);
    rect(-1, -1, 401, 801);
    noFill();
    for (int j = 0; j < 24; j++) {
      for (int i = 0; i < 12; i++) {
        pushMatrix();
        translate(20 + i*30 + random(j*.3 + i/30) - (j*.3 + i/30)/2,
                  20 + j*30 + random(j*.3 + i/30) - (j*.3 + i/30)/2);
        rotate(radians(random((6*j + i/2)) - (6*j + i/2)/2));
        rect(0, 0, 30, 30);
        popMatrix();
      }
    }
    toChange = false;
  }
}
Cinder:
Meischeid is a small animation by French director Matray.
3D primitives are displaced with various noise textures corresponding to different frequencies on the piano. The textures are animated by hand along to the music in After Effects. So, somewhat disappointingly, there is no music-driven procedural generation. But it is a beautiful interpretation of deconstructivist forms, and one which would lend itself naturally to automatic generation. I appreciate the use of simple polygons and particles as a material; ever since Pixar’s sub-pixel subdivision, digital animators have been trying to hide these basic building blocks of computer graphics. I also happen to be a freak about Blender, the 3D software used here.
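Since there is no procedural version to point to, here is a small Processing sketch of the general idea (my own approximation, not Matray's process): a polyline is displaced by animated noise whose amplitude stands in for the level of one frequency band, with the mouse substituting for a real FFT analysis.

void setup() {
  size(600, 400);
  smooth();
  noFill();
  stroke(0);
}

void draw() {
  background(255);
  // mouseY stands in for the level of one frequency band of the piano;
  // in a procedural version this would come from an FFT of the soundtrack.
  float bandLevel = map(mouseY, 0, height, 0, 60);

  // Displace each vertex of a simple polyline with animated noise,
  // scaled by the band level: a louder band means rougher geometry.
  beginShape();
  for (int x = 0; x <= width; x += 10) {
    float n = noise(x * 0.02, frameCount * 0.01);
    float y = height/2 + (n - 0.5) * 2 * bandLevel;
    vertex(x, y);
  }
  endShape();
}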
More info at BlenderNation
The Text Rain project is coded using Processing.
Part of the code is borrowed and revised from Frame Differencing by Golan Levin.
The main idea is to find the difference between two frames and match the differing points against the letters' positions.
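For reference, here is a stripped-down Processing sketch of just the frame-differencing step (a minimal version of my own, not the borrowed code): pixels whose brightness changes between frames become the "different points" that the falling letters are matched against.

import processing.video.*;

Capture video;
int[] previousFrame;
int differenceThreshold = 40;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  previousFrame = new int[width * height];
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    loadPixels();
    for (int i = 0; i < width * height; i++) {
      // Compare this pixel's brightness to its brightness in the previous frame.
      float current  = brightness(video.pixels[i]);
      float previous = brightness(previousFrame[i]);
      boolean changed = abs(current - previous) > differenceThreshold;
      // Changed pixels are the "different points" a falling letter would test against.
      pixels[i] = changed ? color(255) : color(0);
      previousFrame[i] = video.pixels[i];
    }
    updatePixels();
  }
}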
Here is the video:
[youtube clip_id="HY_5GxodZcQ"]
And the link to the video:
Video
Here is a Processing version of the classic TextRain (1999) by Camille Utterback and Romy Achituv:
// Text Rain (Processing cover version)
// Original by Camille Utterback and Romy Achituv (1999):
// http://camilleutterback.com/projects/text-rain/
// Implemented in Processing 1.2.1 by Golan Levin

import processing.video.*;
Capture video;

float brightnessThreshold = 50;
TextRainLetter poemLetters[];
int nLetters;

//-----------------------------------
void setup() {
  size(320,240);
  video = new Capture(this, width, height);

  String poemString = "A poem about bodies";
  nLetters = poemString.length();
  poemLetters = new TextRainLetter[nLetters];
  for (int i=0; i<nLetters; i++) {
    char c = poemString.charAt(i);
    float x = width * (float)(i+1)/(nLetters+1);
    float y = 10;
    poemLetters[i] = new TextRainLetter(c,x,y);
  }
}

//-----------------------------------
void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    image (video, 0, 0, width, height);
    for (int i=0; i<nLetters; i++) {
      poemLetters[i].update();
      poemLetters[i].draw();
    }
  }
}

//===================================================================
class TextRainLetter {
  float gravity = 1.5;
  char  c;
  float x;
  float y;

  TextRainLetter (char cc, float xx, float yy) {
    c = cc;
    x = xx;
    y = yy;
  }

  //-----------------------------------
  void update() {
    // Update the position of a TextRainLetter.
    // 1. Compute the pixel index corresponding to the (x,y) location of the particle.
    int index = width*(int)y + (int)x;
    index = constrain (index, 0, width*height-1);

    // 2. Fetch the color of the pixel there, and compute its brightness.
    float pixelBrightness = brightness(video.pixels[index]);

    // 3. If we're in a bright area, move downwards.
    //    If we're in a dark area, move up until we're in a light area.
    if (pixelBrightness > brightnessThreshold) {
      y += gravity; // move downward
    } else {
      while ((y > 0) && (pixelBrightness <= brightnessThreshold)) {
        y -= gravity; // travel up until it's bright
        index = width*(int)y + (int)x;
        index = constrain (index, 0, width*height-1);
        pixelBrightness = brightness(video.pixels[index]);
      }
    }

    if ((y >= height-1) || (y < 0)) {
      y = 0;
    }
  }

  //-----------------------------------
  void draw() {
    // Draw the letter. Use a simple black "drop shadow"
    // to achieve improved contrast for the typography.
    fill(0);
    text (""+c, x+1,y+1);
    text (""+c, x-1,y+1);
    text (""+c, x+1,y-1);
    text (""+c, x-1,y-1);
    fill(255);
    text (""+c, x,y);
  }
}
Here is a version for OpenFrameworks v.062, based on the MovieCapture example. First, the header (.h) file:
#ifndef _TEST_APP
#define _TEST_APP

#include "ofMain.h"

class TextRainLetter {
  public:
    char  c;
    float x;
    float y;

    TextRainLetter (char cc, float xx, float yy);
    float getPixelBrightness (unsigned char *rgbPixels, int baseIndex);
    void  update (unsigned char *rgbPixels, int width, int height, float threshold);
    void  draw();
};

class testApp : public ofBaseApp {
  public:
    void setup();
    void update();
    void draw();

    void keyPressed(int key);
    void keyReleased(int key);
    void mouseMoved(int x, int y );
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased(int x, int y, int button);
    void windowResized(int w, int h);

    ofVideoGrabber vidGrabber;
    int camWidth;
    int camHeight;

    float brightnessThreshold;
    vector <TextRainLetter> poemLetters;
    int nLetters;
};

#endif
And the C++ (.cpp) file:
#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup(){
  camWidth  = 320;
  camHeight = 240;
  vidGrabber.setVerbose(true);
  vidGrabber.initGrabber(camWidth,camHeight);

  brightnessThreshold = 50;
  string poemString = "A poem about bodies";
  nLetters = poemString.length();
  for (int i=0; i<nLetters; i++) {
    char c = poemString[i];
    float x = camWidth * (float)(i+1)/(nLetters+1);
    float y = 10;
    poemLetters.push_back( TextRainLetter(c,x,y) );
  }
}

//--------------------------------------------------------------
void testApp::update(){
  vidGrabber.grabFrame();
  if (vidGrabber.isFrameNew()){
    unsigned char *videoPixels = vidGrabber.getPixels();
    for (int i=0; i<nLetters; i++) {
      poemLetters[i].update(videoPixels, camWidth, camHeight, brightnessThreshold);
    }
  }
}

//--------------------------------------------------------------
void testApp::draw(){
  ofBackground(100,100,100);
  ofSetHexColor(0xffffff);
  vidGrabber.draw(0,0);
  for (int i=0; i<nLetters; i++) {
    poemLetters[i].draw();
  }
}

//--------------------------------------------------------------
void testApp::keyPressed (int key){}
void testApp::keyReleased(int key){}
void testApp::mouseMoved(int x, int y ){}
void testApp::mouseDragged(int x, int y, int button){}
void testApp::mousePressed(int x, int y, int button){}
void testApp::mouseReleased(int x, int y, int button){}
void testApp::windowResized(int w, int h){}

//=====================================================================
TextRainLetter::TextRainLetter (char cc, float xx, float yy) {
  // Constructor function
  c = cc;
  x = xx;
  y = yy;
}

//-----------------------------------
void TextRainLetter::draw(){
  char drawStr[2];  // room for the character plus a null terminator
  sprintf(drawStr, "%c", c);

  ofSetColor(0,0,0);
  ofDrawBitmapString(drawStr, x-1,y-1);
  ofDrawBitmapString(drawStr, x+1,y-1);
  ofDrawBitmapString(drawStr, x-1,y+1);
  ofDrawBitmapString(drawStr, x+1,y+1);
  ofSetColor(255,255,255);
  ofDrawBitmapString(drawStr, x,y);
}

//-----------------------------------
void TextRainLetter::update (unsigned char *rgbPixels, int width, int height, float threshold) {
  int index = 3* (width*(int)y + (int)x);
  index = (int) ofClamp (index, 0, 3*width*height-1);
  float pixelBrightness = getPixelBrightness(rgbPixels, index);

  float gravity = 2.0;
  if (pixelBrightness > threshold) {
    y += gravity;
  } else {
    while ((y > 0) && (pixelBrightness <= threshold)){
      y -= gravity;
      index = 3* (width*(int)y + (int)x);
      index = (int) ofClamp (index, 0, 3*width*height-1);
      pixelBrightness = getPixelBrightness(rgbPixels, index);
    }
  }

  if ((y >= height-1) || (y < 10)){
    y = 10;
  }
}

//-----------------------------------
float TextRainLetter::getPixelBrightness (unsigned char *rgbPixels, int baseIndex){
  // small utility function
  int r = rgbPixels[baseIndex + 0];
  int g = rgbPixels[baseIndex + 1];
  int b = rgbPixels[baseIndex + 2];
  float pixelBrightness = (r+g+b)/3.0;
  return pixelBrightness;
}
1. Schotter in js file
2. Schotter in applet
3. The Video for openFrameworks
4. The Code for Schotter
/**
 * Schotter clone in Processing.
 */
int length = 20;
float randomseed = 0.0;

void setup() {
  size(200, 400);
  background(255);
  smooth();
  display();
}

void display() {
  for (int y = 0; y < 20; y++) {
    for (int x = 10; x < width; x += 20) {
      pushMatrix();
      translate(x*(1+random(-randomseed, randomseed)), 0);
      rotate(random(-randomseed - y/500.0, randomseed + y/500.0));
      rect(0, 5, length, length);
      popMatrix();
    }
    translate(0, 20*(1+random(-randomseed, randomseed)));
  }
}
Version for Processing and Processing.js. Pressing a key pauses the animation.
// Schotter (1965) by Georg Nees.
// http://www.mediaartnet.org/works/schotter/
// Processing version by Golan Levin
//
// There are versions by other people at
// http://nr37.nl/software/georgnees/index.html
// http://www.openprocessing.org/visuals/?visualID=10429
// But IMHO these are inferior implementations;
// note e.g. the misplaced origins for square rotation.

//-------------------------------
boolean go = true;

void setup() {
  size(325,565);
}

//-------------------------------
void draw() {
  if (go) {
    background (255);
    smooth();
    noFill();

    int nRows = 24;
    int nCols = 12;
    float S = 20;
    float marginX = (width  - (nCols*S))/2.0;
    float marginY = (height - (nRows*S))/2.0;

    float maxRotation = (abs(mouseX)/(float)width)  * 0.06; // radians
    float maxOffset   = (abs(mouseY)/(float)height) * 0.6;

    for (int i=0; i<nRows; i++) {
      for (int j=0; j<nCols; j++) {
        float x = marginX + j*S;
        float y = marginY + i*S;

        float rotationAmount = (i+1)*random(0-maxRotation, maxRotation);
        float offsetX = i*random(0-maxOffset, maxOffset);
        float offsetY = i*random(0-maxOffset, maxOffset);

        pushMatrix();
        translate(S/2, S/2);
        translate(x+offsetX, y+offsetY);
        rotate(rotationAmount);
        translate(0-S/2, 0-S/2);
        rect(0,0,S,S);
        popMatrix();
      }
    }
  }
}

//-------------------------------
void keyPressed() {
  go = !go;
}
Version for OpenFrameworks v.062 (C++). This is based on the EmptyExample provided in the OF download. Note that WP-Syntax uses lang="cpp" as the shortcode for embedding C++ code.
#include "testApp.h"

bool go;

//--------------------------------------------------------------
void testApp::setup(){
  go = true;
}

//--------------------------------------------------------------
void testApp::draw(){
  ofBackground(255,255,255);
  glEnable(GL_LINE_SMOOTH);
  ofSetColor(0,0,0);
  ofNoFill();

  int nRows = 24;
  int nCols = 12;
  float S = 20;
  float marginX = (ofGetWidth()  - (nCols*S))/2.0;
  float marginY = (ofGetHeight() - (nRows*S))/2.0;

  float maxRotation = (abs(mouseX)/(float)ofGetWidth())  * 0.06; // radians
  float maxOffset   = (abs(mouseY)/(float)ofGetHeight()) * 0.6;

  for (int i=0; i < nRows; i++) {
    for (int j=0; j < nCols; j++) {
      float x = marginX + j*S;
      float y = marginY + i*S;

      float rotationAmount = (i+1)*ofRandom (0-maxRotation, maxRotation);
      float offsetX = i* ofRandom (0-maxOffset, maxOffset);
      float offsetY = i* ofRandom (0-maxOffset, maxOffset);

      ofPushMatrix();
      ofTranslate (S/2, S/2, 0);
      ofTranslate (x+offsetX, y+offsetY);
      ofRotateZ (RAD_TO_DEG * rotationAmount);
      ofTranslate (0-S/2, 0-S/2);
      ofRect(0,0,S,S);
      ofPopMatrix();
    }
  }
}

//--------------------------------------------------------------
void testApp::keyPressed(int key){
  go = !go;
  if (!go){
    ofSetFrameRate(1);
  } else {
    ofSetFrameRate(60);
  }
}

//--------------------------------------------------------------
void testApp::update(){}
void testApp::keyReleased(int key){}
void testApp::mouseMoved(int x, int y ){}
void testApp::mouseDragged(int x, int y, int button){}
void testApp::mousePressed(int x, int y, int button){}
void testApp::mouseReleased(int x, int y, int button){}
void testApp::windowResized(int w, int h){}
Screencapture of OpenFrameworks version (at https://www.youtube.com/watch?v=bW-aE3OmVzo):
Processing
Processing.js
OpenFrameworks
Electric Sheep is a distributed computing project by Scott Draves (PhD in Computer Science from Carnegie Mellon) generating a continuous fractal animation. Users anywhere can contribute their computers to the effort, view the results in real-time, and select their favorite patterns to be favored in the genetic evolution of future “sheep”.
The project amazes me aesthetically and technically–the way simple rules, when iterated, produce landscapes too vast and too volatile to ever be quite grasped by human senses.
The project succeeds in all it attempts. The functionality of audience-selected patterns connects users around the world in wordless visual play. By using viewers' personal computing devices, it reveals the power available to us in consumer technology. However, it stops short of extending into the real world. Once we step away from the computer, the ongoing parade becomes invisible to us; we can only experience it with the help of our digital prosthetics.
Text Rain clone à la openFrameworks:
The significant lines of code for Text Rain are:
void app::createLetter() {
  letter *l;
  // Index in textStr from which to take this letter
  int lPos;

  // Find first open position in text
  for(int i=0; i<textMax; i++) {
    if(!text[i]) {
      l = (letter *) malloc(sizeof(letter));
      l->x = ofRandom(0.0, (float) screenWidth);
      lPos = l->x / ((float) screenWidth) * ((float) textStrLen);
      if(lPos < 0) lPos = 0;
      if(lPos >= textStrLen) lPos = textStrLen - 1;
      l->c = textStr[lPos];
      l->y = 0;
      l->vel = 0.5;
      text[i] = l;
      break;
    }
  }
}

void app::destLetter(int i) {
  free(text[i]);
  text[i] = (letter *) 0;
}

void app::setup(){
  camWidth  = 320;
  camHeight = 240;
  pixelMax = 3*camWidth*camHeight;
  vidGrabber.setVerbose(true);
  vidGrabber.initGrabber(camWidth,camHeight);
  textStrLen = strlen(textStr);

  // Create text
  textMax = 1000;
  text = new letter* [textMax];
  for(int i=0; i<textMax; i++) {
    text[i] = (letter *) 0;
  }
}

void app::update(){
  letter *l;
  int pixel;

  vidGrabber.grabFrame();
  unsigned char *img = vidGrabber.getPixels();

  for(int i=0; i<textMax; i++) {
    if(text[i]) {
      l = text[i];
      // Remove letters that have fallen past the bottom of the screen
      if(l->y > yMax) {
        destLetter(i);
      } else {
        // Move this letter down at its own speed
        l->y += l->vel;
        // Move this letter up as long as it is on an obstacle
        for(;;) {
          pixel = 3 * ( ((int)l->y)*camWidth + ((int)l->x) );
          if(pixel >= 0 && pixel < pixelMax && img[pixel] + img[pixel+1] + img[pixel+2] < obShade) {
            l->y -= 1.0;
          } else {
            break;
          }
        }
      }
    }
  }

  // Create a new letter every 60 frames
  if( !(ofGetFrameNum()%60) ) {
    createLetter();
  }
}

void app::draw(){
  letter *l;
  char str[2] = "a";

  ofSetColor(0xffffff);
  vidGrabber.draw(0,0);
  ofSetColor(255,255,255);

  // Draw all letters
  for(int i=0; i<textMax; i++) {
    if(text[i]) {
      l = text[i];
      str[0] = l->c;
      ofDrawBitmapString(str, (int) l->x, (int) l->y);
    }
  }
}
The Reverse Geocache™ Puzzle is a locked box containing a GPS module and an LCD screen, powered by an Arduino, that is designed to unlock only at a certain destination. Pressing the button on the box makes the LCD display the distance to the secret destination. This project is delightful for many reasons: the mystery of the box's surprise; the catalyst it creates for a new and exciting adventure in the real world; and the contemplation it forces of blindly following an electronic device.
It would be interesting to see a modified version that could iterate through a list of coordinates, making for an adventure with many stops. This could be used in a scavenger hunt around a museum or zoo. Instead of taking a passive tour, students would become active learners, enticed by the unknown contents and driven to explore. The box could even require trivia questions to be answered at each stop before proceeding.
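A rough sketch of that multi-stop logic, written in Processing for consistency with the rest of this page rather than as Arduino firmware; the waypoints, unlock radius, and haversine helper are my own assumptions, not taken from the project:

// Hypothetical list of waypoints for a scavenger-hunt version: { latitude, longitude }.
float[][] waypoints = {
  { 40.4433, -79.9436 },
  { 40.4423, -79.9465 },
  { 40.4441, -79.9401 }
};
int currentStop = 0;
float unlockRadiusMeters = 25;

// Call this with each new GPS fix; returns true when the final stop is reached.
boolean checkPosition(float lat, float lon) {
  float d = distanceMeters(lat, lon, waypoints[currentStop][0], waypoints[currentStop][1]);
  println("Distance to stop " + (currentStop + 1) + ": " + nf(d, 1, 1) + " m");
  if (d < unlockRadiusMeters) {
    currentStop++;   // advance to the next stop instead of unlocking outright
    if (currentStop >= waypoints.length) {
      println("Final stop reached: unlock the box.");
      return true;
    }
  }
  return false;
}

// Haversine great-circle distance between two lat/lon points, in meters.
float distanceMeters(float lat1, float lon1, float lat2, float lon2) {
  float R = 6371000;
  float dLat = radians(lat2 - lat1);
  float dLon = radians(lon2 - lon1);
  float a = sq(sin(dLat/2)) + cos(radians(lat1)) * cos(radians(lat2)) * sq(sin(dLon/2));
  return R * 2 * atan2(sqrt(a), sqrt(1 - a));
}

void setup() {
  // Simulated GPS fix near the first hypothetical waypoint, just to exercise the logic.
  checkPosition(40.4434, -79.9437);
}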
[ Reverse GeoCache Puzzle Covered by Hack a Day ]
[ The Reverse Geocache™ Puzzle Project Site ]
The Real-time Cartoonifier takes a video feed and, using an FPGA, transforms and simplifies it before displaying the result. The prime benefit is the ability to see a cartooned version of something in effectively real time. Contrasting a real-life image of yourself with a cartooned or otherwise digitally altered version is fascinating: it reminds me of a funhouse mirror, but with infinitely more possibilities.
For the final project of a course on FPGAs it's impressive, but we can also consider the augmented-reality opportunities. Though outside the scope of their project (and likely to require more processing power than an FPGA can provide), a reasonable expansion would be interleaving the cartooned result with an actual cartoon adventure or cartoon-styled game. This addition would embrace the whimsy of the original project and give viewers something to do after the novelty of seeing themselves as a cartoon has worn off.
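For comparison, a rough approximation of the effect can be done in software; this Processing sketch (nothing like the FPGA implementation, just an illustration of the idea) posterizes the webcam feed and darkens strong edges for a cartoon-ish look:

import processing.video.*;

Capture video;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    loadPixels();
    for (int y = 1; y < height - 1; y++) {
      for (int x = 1; x < width - 1; x++) {
        int i = y * width + x;
        color c = video.pixels[i];
        // Posterize: quantize each channel to a few flat levels.
        float r = int(red(c)   / 64) * 64;
        float g = int(green(c) / 64) * 64;
        float b = int(blue(c)  / 64) * 64;
        // Crude edge detection: brightness difference with the neighbors to the right and below.
        float edge = abs(brightness(c) - brightness(video.pixels[i + 1]))
                   + abs(brightness(c) - brightness(video.pixels[i + width]));
        // Strong edges become dark "ink" lines; everything else keeps its flat color.
        pixels[i] = (edge > 30) ? color(0) : color(r, g, b);
      }
    }
    updatePixels();
  }
}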
[ Cornell ECE 5760 Final Projects Covered by Make: ]
[ Real-time Cartoonifier Project Site ]
Schotter clone à la Processing:
à la Processing.js:
and à la openFrameworks:
The openFrameworks version takes the random seed from the mouse position. Otherwise, the code is nearly identical for all three. The significant lines of code are:
float rot;
float tran;

translate(10, 10);
for(int col=0; col<12; col++) {
  rot = 0.0;
  tran = 0.0;
  translate(20, 0);
  pushMatrix();
  for(int row=0; row<24; row++) {
    translate(0, 20);
    pushMatrix();
    translate(random(-tran, tran), random(-tran, tran));
    rotate(random(-rot, rot));
    rect(-20, -20, 20, 20);
    popMatrix();
    rot  += 0.03;
    tran += 0.03;
  }
  popMatrix();
}
The Noun Project aims to collect universally recognized symbols and make them available in high-res for free.
This project is fairly new (launched in early December 2010) and is inspiring in its contribution to the greater good of the visual world. It saves us from having to re-invent the wheel. It also helps uncover design/visual history (see the author and year created in the detailed view).
As the database grows, the ability to browse will become necessary. If something like this were open-sourced, it would be fun to synthesize information such as the most popular icons vs. the most downloaded icons.
Source: The Noun Project