Category Archives: project-1

John

28 Jan 2013


In my implementation of Textrain I read a string into an array of letter objects. Each object knows its character, its xOffset, which column it's in, and where it is currently located vertically. Each object scans its column for the highest dark spot and then checks that against its current location. Thus, each object can make a determination about whether it should fall as normal or cling to the highest dark pixel.
A few details worth noting:

  1. I flipped the pixel array so that the image is mirrored, which is nicer for display.
  2. I added a meager easing function to ‘captured’ letters to help keep them from wiggling.
  3. I basically ignore the bottom 40 rows of pixels (sampleFloor in the code) due to vignetting.

Textrain from john gruen on Vimeo.

Code is available on GitHub: https://github.com/johngruen/Textrain

import processing.video.*;
Capture v;
int totalOffset = 0; //helper var for counting xOffset for each Letter obj
PImage flip; //buffer for horizontally flipped image
String fallingLetters = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam non aliquam ante. Nullam at ligula mi. Nam orci metus";
Letter[] letters; 
PFont font;
int fontSize = 32;
float thresh = 100; // brightness threshold; use UP and DOWN to alter
float sampleFloor = 40; // stop sampling pixels 40 rows from the bottom
color activeColor = color (255,0,255);
color fallColor = color(20,100);


void setup() {
  size(1280,720);
  v = new Capture(this,width,height,30);
  flip = createImage(width,height,RGB);
  font = loadFont("UniversLTStd-LightCn-32.vlw");
  textFont(font,fontSize); 
  initLetters();
}

void draw() {
  if (v.available()) v.read(); // only grab a frame when one is ready
  v.loadPixels();
  flip.loadPixels();
  flipImage(); // mirror the camera image into flip
  flip.updatePixels(); // push the flipped pixels before drawing the frame
  image(flip,0,0);
  for(int i = 0; i < letters.length; i++) {
    letters[i].draw();
  }
}

void initLetters() {
  letters = new Letter[fallingLetters.length()];
  for(int i = 0; i < letters.length; i++) {
        letters[i] = new Letter(i,totalOffset,fallingLetters.charAt(i));
        totalOffset+= textWidth(fallingLetters.charAt(i));
  }
}

void flipImage() {
  for(int y = 0; y < v.height; y++) {
    for(int x = 0; x < v.width; x++) {
      int i = y*v.width + x;
      int j = y*v.width + v.width-1-x;
      flip.pixels[j] = v.pixels[i];
    }
  }
}

void keyPressed() {
   if (keyCode == UP) thresh++;
   else if (keyCode == DOWN) thresh--;
   println(int(thresh));
}


class Letter {
 int index; //just a good thing to know, used for debugging
 int xOffset; //offset horizontally of letter registration
 float speed; //speed
 char c; //what letter am i
 float curYPos, prevYPos; //current and previous y position. used for easing.
 int state = 0; // we use a tiny state switcher to control the flow. either the letter is falling or it isn't
 float topBrightPixel; // row of the topmost pixel darker than thresh in this column
 
Letter(int index_,int xOffset_,char c_) {
  index = index_;
  xOffset = xOffset_;
  c = c_;
  curYPos = int(random(-200,-100));//set currentYPos somewhere above the video
  speed = int(random(5,12));
} 

void draw() {
   senseBrightness();
   compareToCurrent();
   update();
   text(c,xOffset,curYPos); 
}

void senseBrightness() {
  topBrightPixel = 0;
  // walk down this letter's column until we hit a pixel darker than thresh,
  // ignoring the bottom sampleFloor rows
  for(int i = xOffset; i < flip.pixels.length-flip.width*sampleFloor; i+=flip.width) {
    if(brightness(flip.pixels[i]) < thresh) {
      break;
    }
    topBrightPixel++;
  }
}

void compareToCurrent() {
   // keep falling if the topmost dark pixel is well below us, or if the
   // column is clear all the way down to the sample floor; otherwise cling
   if (topBrightPixel > curYPos + 2*speed || topBrightPixel >= flip.height - sampleFloor) {
     state = 0;
     //speed = random(5,12);
   } else {
     state = 1;
   }
}

void update() {      
    switch (state) {
      case 0:
        fill (fallColor);
        curYPos+=speed;
        speed = speed * 1.02;
        if (curYPos > height + 50)  {
          curYPos = random(-200,-50); // random(low, high) needs low < high
          speed = random(5,12);
        }
        break;
      case 1:
        fill(activeColor);
        curYPos += .6* (topBrightPixel - prevYPos);
        speed = random(5,12);
        break;
    }
    prevYPos = curYPos;
}

}

Anna

28 Jan 2013

My goal in life is to get the ofxFern addon to work. Until I achieve that, I (and you, I suppose) will have to settle for ‘shimmer shimmy’, a little monster that I created by taking the ofxDrawnetic and associated ofxGenerative addons by rezaali and smushing them into the openframeworks sound player example.

The original sound player has three sections: synths, beats, and vocals. The synths and vocals sections rely on mouse clicks, and the beats section is driven by click and drag. Most of the generative brushes available in ofxDrawnetic rely (as brushes tend to) on click-and-drag mechanics as well. So I got rid of the vocals and synths options in the sound player, expanded the beats section, and imported the brushes so you could draw while you created sick beats (yo).
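Schematically, the wiring looks something like this minimal sketch: one drag gesture feeding both the beat player and a brush. ofSoundPlayer is stock openFrameworks; the brush calls are left as commented placeholders because the exact ofxDrawnetic interface isn't reproduced here, and the asset path is invented.

#include "ofMain.h"

// Minimal sketch: one drag gesture drives both beat playback and a brush.
class testApp : public ofBaseApp {
public:
    ofSoundPlayer beats;
    // ofxFlockingBrush brush;   // hypothetical stand-in for the ofxDrawnetic brush

    void setup() {
        beats.loadSound("sounds/beat.mp3"); // asset path is an assumption
        beats.setLoop(true);
        beats.play();
    }

    void mouseDragged(int x, int y, int button) {
        // The sound player example's beats section maps drag position to
        // playback parameters; here x controls speed and y controls pan.
        beats.setSpeed(ofMap(x, 0, ofGetWidth(), 0.5f, 2.0f));
        beats.setPan(ofMap(y, 0, ofGetHeight(), -1.0f, 1.0f)); // pan range assumed -1..1

        // Feed the same gesture to the brush so drawing and beats share it.
        // brush.addPoint(ofVec2f(x, y));   // hypothetical brush call
    }
};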


I chose the ‘flocking brush’ example inside ofxDrawnetic because I liked the shimmery effect of the brush, and thought it looked a little bit like the polygons were dancing along to the beats as they shimmer, or fly, or whatever you’d like to call it.

It would be pretty awesome if there were a way to make the frequency of the ‘shimmering’ match the speed of the beats one generates. I’m sure there is, but that’s a bit beyond me at the moment. Something for next time, perhaps!

GitHub Repo here!

Anna

28 Jan 2013


My original idea for a sifteo app was a game where each cube would display a word belonging to a different part of speech (noun, verb, adjective, etc). You’d have some amount of time to piece the cubes together in a logical sentence, before they would all switch to some other word/part-of-speech, and you’d have to begin from scratch.

Turns out the Sifteo SDK was a lot harder to understand than I hoped it would be. Given that this is my first week ever looking at C++, much less an undocumented offshoot of it, I had to adjust my goals significantly. (Thanks, Golan, for your help in understanding what I was working with!)

This version takes the original sensors demo and modifies the “shake” detection (through acceleration sensing) so that a random word will be displayed on each cube when you meddle with it. You can arrange the words to form (hopefully) amusing sentences. The name of the app comes from some somewhat taboo ‘dice’ you might have seen at a party sometime in your existence. There is nothing very taboo about this app.
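The gist is something like the sketch below: a from-memory outline of the SDK's accelerometer API that polls each cube instead of using the demo's event listener, with an invented word list and shake threshold. The real code lives in the repo linked below.

#include <sifteo.h>
using namespace Sifteo;

static const unsigned kNumCubes = 3;
static const char *kWords[] = { "sticky", "robots", "lick", "quietly" }; // invented list
static const int kShakeThreshold = 64; // squared accel magnitude; a guess (gravity contributes at rest)

static VideoBuffer vid[kNumCubes];
static Random gRandom;
static bool wasShaking[kNumCubes];

static void showRandomWord(unsigned id) {
    vid[id].bg0rom.erase();
    vid[id].bg0rom.text(vec(2, 7), kWords[gRandom.randrange(arraysize(kWords))]);
}

void main() {
    for (unsigned i = 0; i < kNumCubes; i++) {
        vid[i].initMode(BG0_ROM);
        vid[i].attach(i);
        showRandomWord(i);
    }
    while (1) {
        for (unsigned i = 0; i < kNumCubes; i++) {
            // Poll the accelerometer; a big reading means the cube is being shaken.
            Byte3 a = CubeID(i).accel();
            int mag = a.x*a.x + a.y*a.y + a.z*a.z;
            bool shaking = mag > kShakeThreshold;
            if (shaking && !wasShaking[i]) // fire once per shake, not every frame
                showRandomWord(i);
            wasShaking[i] = shaking;
        }
        System::paint();
    }
}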

If I were to take this project forward, I would like to be able to load background images or background colors into each cube, depending on the part of speech displayed. I eventually realized I was having a hard time with that this time around because I’m using bg0_ROM mode to render the text, and .png background assets weren’t playing very nicely with that mode. Given more time, I would rewrite the whole thing in a different draw mode and render the displayed text from PNG files.

Visit the Github Repo

NoDice

Dev

28 Jan 2013

This was the part of this assignment I had the most trouble with! Although I found the Sifteo docs to be only minorly helpful here, I should have foreseen this and started earlier.


I began this assignment thinking I would do something like a word game for the blind. The blocks would each represent a letter. When tapped, a block would say what its letter was. If the letters were arranged in order from left to right, an affirmative sound would play, and the next word would be auto-scrambled.

I was able to get the word split into letters and divided among the cubes, but when it came time to check whether the user had spelled the word out correctly, I ran into a problem: I didn’t understand how to reference each cube individually. That gap sent me off to spend a bunch of time on the SDK website, which had few answers.
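In hindsight, the piece I was missing is that each cube is just a small integer ID you can wrap in CubeID and Neighborhood objects. A from-memory sketch of how the left-to-right spelling check might have worked (letterOn is a hypothetical per-cube letter table, and the Neighborhood calls are recalled from the SDK docs rather than taken from my repo):

#include <sifteo.h>
using namespace Sifteo;

// Hypothetical per-cube letter table, filled in when the word is scrambled.
static char letterOn[CUBE_ALLOCATION];

// Returns true if the cubes, read left to right along their neighbor
// chain, spell out `word`.
bool spellsWord(const char *word, unsigned numCubes) {
    for (unsigned c = 0; c < numCubes; c++) {
        // The leftmost cube is the one with nothing attached on its left.
        if (Neighborhood(CubeID(c)).hasNeighborAt(LEFT))
            continue;
        // Follow RIGHT neighbors from there, matching letters as we go.
        CubeID cur = CubeID(c);
        for (const char *p = word; *p; p++) {
            if (!cur.isDefined() || letterOn[cur] != *p)
                return false;
            cur = Neighborhood(cur).neighborAt(RIGHT);
        }
        return true;
    }
    return false;
}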

Ultimately, given the time, I decided to scrap the initial idea for a simpler one that was closer to an example I fully understood – the sensors example. I decided to make use of screen colors and assign them based on cube adjacencies. At first I had some problems with the background colors, but I managed to set them in the end.

For a bit of extra interaction, I decided to make use of touch. When a cube is touched, the entire color scheme changes, while the color coding remains based on adjacencies.
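Boiled down, the logic is roughly the sketch below, written from memory of the Sifteo SDK's polling APIs (VideoBuffer, Neighborhood, CubeID) rather than pulled from the repo, with invented palettes:

#include <sifteo.h>
using namespace Sifteo;

static const unsigned kNumCubes = 3;
static VideoBuffer vid[kNumCubes];
static unsigned scheme = 0;             // flipped on touch to swap palettes
static bool wasTouching[kNumCubes];

// Two invented palettes of 0xRRGGBB colors; adjacent cubes share an index.
static const uint32_t kSchemes[2][4] = {
    { 0xff6600, 0x00ccff, 0xcc00ff, 0xffff66 },
    { 0x222222, 0x99ff33, 0xff3399, 0x66ffff },
};

static void paint(unsigned id, unsigned colorIdx) {
    vid[id].colormap[0].set(RGB565::fromRGB(kSchemes[scheme][colorIdx % 4]));
}

void main() {
    for (unsigned i = 0; i < kNumCubes; i++) {
        vid[i].initMode(SOLID_MODE);    // the whole screen shows colormap[0]
        vid[i].attach(i);
        paint(i, i);
    }
    while (1) {
        for (unsigned i = 0; i < kNumCubes; i++) {
            // A touch flips the global scheme (edge-triggered: once per tap).
            bool touching = CubeID(i).isTouching();
            if (touching && !wasTouching[i]) scheme ^= 1;
            wasTouching[i] = touching;

            // Color each cube by who it currently sits next to, so the
            // coding stays adjacency-based even after the scheme swaps.
            Neighborhood nb(CubeID(i));
            unsigned idx = i;
            for (int s = 0; s < NUM_SIDES; s++)
                if (nb.hasNeighborAt(Side(s)))
                    idx += nb.neighborAt(Side(s));
            paint(i, idx);
        }
        System::paint();
    }
}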

Overall the effect looks like it would be fun for a baby or a cat.

GitHub: https://github.com/dgurjar/SifteoColorSwap

Keqin

28 Jan 2013

This is a small game prototype. I use three cubes to display different numbers. The player’s goal is to make the numbers on cubes 0 and 1 sum to the number on the third cube. When cubes 0 and 1 are combined, their sum appears on the right cube. For example, say cube 0’s number is 2 and cube 1’s is 3: touch cube 0’s right side to cube 1’s left side, and cube 1’s number becomes 5. Touch again and it becomes 7. But now that cube 1’s number is 7, you can’t get 5 again, so if the target is 5, you lose the game. If you do make 5 with cubes 0 and 1, touch the target cube and you win. I don’t yet have an algorithm to generate a batch of hard problems; that needs more time.
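The heart of the combine rule might look like the following, sketched from memory of the Sifteo SDK's Neighborhood API rather than taken from the repo, with a made-up drawNumber helper:

#include <sifteo.h>
using namespace Sifteo;

static int numberOn[3] = { 2, 3, 5 }; // cubes 0 and 1 are operands; the third holds the target
static VideoBuffer vid[3];
static bool wasJoined = false;

// Hypothetical helper that prints a cube's current number on its screen.
static void drawNumber(unsigned id) {
    String<8> s;
    s << numberOn[id];
    vid[id].bg0rom.erase();
    vid[id].bg0rom.text(vec(6, 7), s.c_str());
}

void main() {
    for (unsigned i = 0; i < 3; i++) {
        vid[i].initMode(BG0_ROM);
        vid[i].attach(i);
        drawNumber(i);
    }
    while (1) {
        // When cube 0's right side meets cube 1's left side, fold cube 0's
        // number into cube 1 (edge-detected so it fires once per joining).
        Neighborhood nb(CubeID(1));
        bool joined = nb.hasNeighborAt(LEFT) && nb.neighborAt(LEFT) == CubeID(0);
        if (joined && !wasJoined) {
            numberOn[1] += numberOn[0];
            drawNumber(1);
        }
        wasJoined = joined;

        // Touching the target cube when the sum matches would win the game.
        if (CubeID(2).isTouching() && numberOn[1] == numberOn[2]) {
            // victory feedback left unspecified in the post
        }
        System::paint();
    }
}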

Here’s the code link: https://github.com/doukooo/textrain


Dev

28 Jan 2013


I wanted to capture the essence of the passage to Nirvana in a clean, elegant way. There is nothing I find more annoying, absorbing, and distracting than a loading bar – a curse of modern technology. You know that feeling when you have that smidgen of a bar left, but you can’t get to the end? That is one of many emotions that you have to throw away on your passage to freedom.

I used ofxProgressBar to create the loading bar, and ofxTimer to trigger the “Give up” voice at intervals; the voice was synthesized using ofxSpeech.


void testApp::setup(){
    value = 0;
    max = 8000;
    progressBar = ofxProgressBar(10, 10, 500, 20, &value, &max);
    go = true;

    // a repeating 10-second timer drives the "Give up" voice below
    timer.setup(10000, true);
    timer.startTimer();

    synthesizer.listVoices();
    synthesizer.initSynthesizer("Ralph");
}

//--------------------------------------------------------------
void testApp::update(){

    // you can never really get to the end from this application:
    // max grows faster than value, so the bar never completes
    if(go && value <= max){
        value += 50;
        max += 51;
    }

    // in the last five seconds of each timer cycle, encourage the user to give up
    if(timer.getTimeLeftInMillis() < 5000){
        synthesizer.speakPhrase("Give up");
    }
}
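Presumably the bar is then rendered in draw(), along these lines, assuming ofxProgressBar exposes a draw() method (the snippet above stops at update()):

//--------------------------------------------------------------
void testApp::draw(){
    // draw() here is an assumption about ofxProgressBar's interface;
    // the bar inches along forever because max outruns value in update()
    progressBar.draw();
}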

GitHub: https://github.com/dgurjar/ToNirvana/

Keqin

28 Jan 2013

I use two addons in this project. One is FaceTracker, which tracks people’s faces, and the other is Box2d, a physics engine that simulates the real world. I use the face tracker to detect changes in people’s expressions: if the mouth opens wider, many circles are produced, and they fall as they would in the real world; if an eye moves, rectangles are produced and fall too. Eventually the window fills with shapes of different colors. Maybe it can make some beautiful pictures.
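A compressed sketch of that wiring, assuming ofxFaceTracker's gesture API and the old (2013-era) ofxBox2d interface, with invented thresholds and sizes; the eye-to-rectangle case would follow the same pattern with ofxBox2dRect:

#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"
#include "ofxBox2d.h"

class testApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofxBox2d box2d;
    vector<ofxBox2dCircle> circles;

    void setup() {
        cam.initGrabber(640, 480);
        tracker.setup();
        box2d.init();
        box2d.setGravity(0, 10);   // shapes fall as in the real world
        box2d.createBounds();      // keep shapes inside the window
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam));
            if (tracker.getFound()) {
                // A wide-open mouth spawns a falling circle (threshold is a guess).
                float mouth = tracker.getGesture(ofxFaceTracker::MOUTH_HEIGHT);
                if (mouth > 4.0) {
                    ofxBox2dCircle c;
                    c.setPhysics(3.0, 0.5, 0.1);
                    c.setup(box2d.getWorld(), ofGetWidth()/2, 0, ofRandom(5, 20));
                    circles.push_back(c);
                }
            }
        }
        box2d.update();
    }

    void draw() {
        for (unsigned i = 0; i < circles.size(); i++) circles[i].draw();
    }
};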

Here’s the code link: https://github.com/doukooo/textrain


Can

28 Jan 2013

FaceOSC connected to OSCulator, connected to Logic Pro. Generative Music.