Category Archives: project-1

Kyna

27 Jan 2013

For this implementation of Textrain I made a Letter class and kept track of an array of Letter objects. Each character from the string is stored in this array and updated via a for loop in the draw() function. Each letter samples the brightness of the pixel immediately beneath it and only drops if that pixel is bright. There is also a function that fades the letters from teal to navy blue over time. The string is the first line of the e.e. cummings poem ‘anyone lived in a pretty how town,’ which really doesn’t have any significance other than the fact that it’s great.

GitHub -> soon, having trouble with it currently…

Code!

import processing.video.*;
Capture cam;

// sketch and camera dimensions
int height = 480;
int width = 640;

// fill color channels, animated over time to fade the letters
int g = 0;
int b = 100;
boolean flipG = false;
boolean flipB = true;

class Letter {
  int y;
  int x;
  int speed;
  char c;

  Letter(char character, int xPos, int yPos) {
    c = character;
    x = xPos;
    y = yPos;
    speed = (int) random(1, 3);  // falls 1 or 2 pixels per frame
  }
}

ArrayList<Letter> currentLetters;

void setup() {
  size(width, height);
  background(255);
  frameRate(10);

  textSize(18);

  cam = new Capture(this, width, height);
  cam.start();

  smooth();

  currentLetters = new ArrayList<Letter>();
  String poem = "anyone lived in a pretty how town (with up so floating many bells down)";

  int xInit = 0;
  int yInit = 0;

  // scatter the letters across the top of the screen,
  // staggering the second half so they enter a bit later
  for (int i = 0; i < poem.length(); i++) {
    if (i < 33) yInit = (int) random(-10, 10);
    else yInit = (int) random(-25, -15);
    currentLetters.add(new Letter(poem.charAt(i), xInit, yInit));
    xInit += (9 + (int) random(-3, 3));
    if (xInit > width) xInit = 9;
  }
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();

  // mirror the camera image so movement reads naturally
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(cam, 0, 0, width, height);
  popMatrix();

  fill(0, g, b);  // animated green/blue channels fade the letters between teal and navy

  for (int i = 0; i < currentLetters.size(); i++) {
    Letter curr = currentLetters.get(i);

    // index of the camera pixel beneath the letter, mirrored to match the display
    int index = width * curr.y + (width - curr.x - 1);
    index = constrain(index, 0, width * height - 1);

    if (brightness(cam.pixels[index]) > 105) curr.y += curr.speed;
    else {
      // rise one pixel at a time until the letter sits on a bright pixel again
      // (the 95/105 gap gives a little hysteresis so letters don't jitter)
      while ((curr.y > 0) && (brightness(cam.pixels[index]) < 95)) {
        curr.y -= 1;
        index = width * curr.y + (width - curr.x - 1);
        index = constrain(index, 0, width * height - 1);
      }
    }
    text(curr.c, curr.x, curr.y);

    // recycle letters that fall off the bottom of the screen
    if (curr.y >= height) {
      curr.y = (int) random(-15, 15);
      curr.speed = (int) random(1, 3);
    }
  }

  // bounce g between 0 and 100 and b between 0 and 150,
  // so the fill color fades back and forth over time
  if (!flipG) {
    if (g < 100) g++;
    else flipG = true;
  } else {
    if (g > 0) g--;
    else flipG = false;
  }

  if (!flipB) {
    if (b < 150) b++;
    else flipB = true;
  } else {
    if (b > 0) b--;
    else flipB = false;
  }
}

Yvonne

27 Jan 2013

Originally I wanted to do a cat that you could have fall through the cubes, similar to an app in the Sifteo video where water pours from cube to cube. I quickly abandoned that once I realized I could barely get a cat to animate standing still, let alone fall from one cube to the next. With that said, I focused primarily on animating sprites. I used the Sensors example as my basis, with bits of the Connection and Stars examples tossed in. The music was from the Connection example and the kitten sprites were from the Sprites Database; I wish I could have made my own, but unfortunately time did not permit.
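
The core of sprite animation is just cycling through frames on a timer. Here is a minimal sketch of that idea, written in Processing rather than the Sifteo SDK; the kitten frame images and frame count are placeholders, not assets from the repository:

// Minimal frame-cycling sprite animation (Processing sketch).
// kitten0.png through kitten3.png are placeholder frame images.
PImage[] frames = new PImage[4];
int currentFrame = 0;

void setup() {
  size(128, 128);
  frameRate(8);  // 8 animation frames per second
  for (int i = 0; i < frames.length; i++) {
    frames[i] = loadImage("kitten" + i + ".png");
  }
}

void draw() {
  background(255);
  image(frames[currentFrame], 0, 0);
  currentFrame = (currentFrame + 1) % frames.length;  // loop forever
}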

Github Repository: https://github.com/yvonnehidle/Sifteo_myKittens
Original Blog Post @ Arealess: http://www.arealess.com/kittens-and-sifteo/
Link to Better Quality Video: http://www.arealess.com/wp-content/uploads/2013/01/myKittens.mp4

Sorry for the crappy video. Vimeo wouldn’t load it and YouTube made it illegible.

Michael

27 Jan 2013

Sifteo GigaViewer from Mike Taylor on Vimeo.

This is an application for viewing tiled images using Sifteo cubes.  I took inspiration for this app from the GigaPan project, which creates massive panoramas by stitching together multiple camera images.  The GigaViewer takes a different approach by splitting a single image into many tiles.  The Sifteo cubes can then be used to explore the minute details of the image tile by tile.  In a sense, this deliberately prohibits “seeing the forest for the trees.”  For images like the watch mechanism shown in the demo, this prevents the user from being overwhelmed by the complexity and instead invites them to explore each component separately.
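
The tile-splitting step itself is compact. A rough sketch of the idea in Processing (the actual app works through the Sifteo SDK's asset pipeline; the file name and the 4x3 grid here are invented for illustration):

// Split a source image into a grid of tiles, one per cube.
// "watch.jpg" and the 4x3 grid are placeholder assumptions.
int cols = 4;
int rows = 3;
PImage[] tiles = new PImage[cols * rows];

void setup() {
  PImage src = loadImage("watch.jpg");
  int tileW = src.width / cols;
  int tileH = src.height / rows;
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      // get() copies a tileW x tileH rectangle out of the source image
      tiles[row * cols + col] = src.get(col * tileW, row * tileH, tileW, tileH);
    }
  }
}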

The sample image by Michel Villeneuve can be found here: commons.wikimedia.org/wiki/File:Innards_of_an_AI-139a_mechanical_watch.jpg

The Sifteo code can be found here.

Robb

27 Jan 2013


Concept


Simple. Elegant. Shell game.
No reason to corrupt this old-time classic with additional dimensions of interactivity.
The backlit displays allow the game to be played in the darkest of alleys.
In the event of a gambling sting, the confiscated evidence will dissolve as the batteries die in the evidence locker, allowing the perp to walk.

Process


I originally imagined a version of the shell game where you could not lose. After carefully examining game theory and cultural paradigms, I realized this further perpetuated the societal issues often associated with the trophy generation. The shell game is ancient. It has always been challenging, and should remain so. It is an excellent lesson in humility, as it lives within the class of games that depend on the player falsely believing they are smarter than their opponent.

Michael

27 Jan 2013

This was my attempt at combining two openFrameworks addons: ofxEliza and ofxSpeech.  The goal was to create an implementation of the historic keyword-based Eliza chatbot that could use ofxSpeech both to recognize audible keywords and to respond with synthesized speech.  Both addons compiled together successfully, but the Eliza module seems to have some issues, as demonstrated in the video.  Namely, the chatbot is great at detecting edge cases like repetitions and short responses, but doesn’t actually pick up any keywords, even when they are typed into the console.  This doesn’t make for a great therapist.  I spent time trying to debug Eliza’s input parser, but didn’t make much progress, and as a result I didn’t dive deep into speech recognition.  An alternative to ofxSpeech is ofxGSTT, which uses Google’s speech-to-text engine but is more complicated and requires integrating additional addons.  Eliza’s keyword-based responses should match well with ofxSpeech’s dictionary-based recognition.
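
Eliza's core mechanism, matching keywords against canned response templates, is simple enough to sketch. A toy Processing version of the pattern (the keywords and responses here are invented for illustration and are not ofxEliza's actual script):

// Toy Eliza-style responder: scan the input for known keywords
// and reply with the matching canned template.
String[] keywords  = { "mother", "dream", "always" };
String[] responses = {
  "Tell me more about your mother.",
  "What does that dream suggest to you?",
  "Can you think of a specific example?"
};

String respond(String input) {
  String lower = input.toLowerCase();
  for (int i = 0; i < keywords.length; i++) {
    if (lower.indexOf(keywords[i]) >= 0) {
      return responses[i];
    }
  }
  return "Please go on.";  // fallback when nothing matches
}

void setup() {
  println(respond("I always argue with my mother"));
}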

The OF code can be found here.

Michael

27 Jan 2013

OSC Face Ball from Mike Taylor on Vimeo.

This simple Processing demo uses facial movements from Kyle McDonald’s FaceOSC application and Dan Wilcox’s OSC template.  At its core, this app is a very simple face-controlled physics simulator.  As the user looks up, down, left, and right, forces are exerted on the ball that accelerate it in the direction of the user’s gaze.  Each time the ball hits a wall, it loses a fraction of its kinetic energy to keep things from getting out of control.  There is certainly some room for additional functionality using facial features like mouth and eye height, but these were difficult to test because FaceOSC kept insisting that my mouth is my mustache.

The Processing code can be found here.

And here:

import oscP5.*;
OscP5 oscP5;

int found;
PVector poseOrientation;

// ball state: position and velocity
float ballX = 640;
float ballVX = 0;
float ballY = 360;
float ballVY = 0;

float ballM = 1;  // ball "mass": scales how strongly head tilt accelerates it

void setup() {
  size(1000, 720);
  oscP5 = new OscP5(this, 8338);  // FaceOSC sends on port 8338
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  poseOrientation = new PVector();
}

void draw() {
  background(255);
  if (found > 0) {
    // head orientation acts as a force on the ball
    ballVX -= poseOrientation.y / ballM;
    ballX += ballVX;

    ballVY += poseOrientation.x / ballM;
    ballY += ballVY;

    // earlier 3D experiment, left in for reference:
    // noStroke();
    // lights();
    // translate(ballX, 360, 0);
    // sphere(25);
    // println(ballX);

    // bounce off the walls, losing 30% of velocity on each hit
    if (ballX > 1000) {
      ballX = 1000;
      ballVX = -.7 * ballVX;
    }
    if (ballX < 0) {
      ballX = 0;
      ballVX = -.7 * ballVX;
    }
    if (ballY > 720) {
      ballY = 720;
      ballVY = -.7 * ballVY;
    }
    if (ballY < 0) {
      ballY = 0;
      ballVY = -.7 * ballVY;
    }
  }
  ellipseMode(CENTER);
  noStroke();
  fill(255, 0, 0);
  ellipse(int(ballX), int(ballY), 50, 50);
}

public void found(int i) {
  found = i;
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    // unplugged messages are ignored
  }
}

Michael

27 Jan 2013

Punctuation Rain from Mike Taylor on Vimeo.

This is a basic reimplementation of Text Rain by Camille Utterback and Romy Achituv.  The Processing code is relatively efficient as it extracts brightness values directly from the camera pixels that have already been written to the screen.  It also only examines pixels at the current location of the falling letters, rather than performing operations over the entire image.  As with most simple implementations, the app relies on a light-colored background to work properly.  The letters fall at a speed proportional to the brightness difference between the pixel and the light/dark threshold.  Below that threshold, the letters rise until finding a light region again, resulting in the apparent tendency of the letters to “ride” arms and other moving objects.  Additional techniques like background subtraction or true boundary detection could improve performance.

The Processing code can be found here.

And here:

import processing.video.*;

Capture cam;

char ch;
int spacing = 30;  // horizontal spacing between falling characters

//String str = "***************************************";
String str = "!@#$%^&*(^%$##$(#@@#$%&!@#$%^&^%$)%#^&*";
int[] chary = new int[str.length()];  // current y position of each character

void setup() {
  size(1280, 720);

  textSize(32);

  // start every character at the top of the screen
  for (int i = 0; i < chary.length; i++) {
    chary[i] = 0;
  }

  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an
    // element from the array returned by list():
    cam = new Capture(this, cameras[0]);
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // The following does the same, and is faster when just drawing the image
  // without any additional resizing, transformations, or tint.
  //set(0, 0, cam);

  for (int i = 0; i < str.length(); i = i + 1) {
    ch = str.charAt(i);
    // the step is proportional to how far the on-screen pixel's brightness
    // sits above (fall) or below (rise) the threshold of 80
    chary[i] = chary[i] + int((brightness(get((i + 1) * spacing, chary[i])) - 80) / 20);
    if ((chary[i] > 720) || (chary[i] < 0)) {
      chary[i] = 0;  // recycle characters that leave the screen
    }
    fill(0, 102, 153);
    text(ch, (i + 1) * spacing, chary[i]);
  }

  //println((brightness(get(mouseX,mouseY)))-100);
}

Yvonne

27 Jan 2013

This is my simple compilation of two ofxAddons:
ofxBeatTracking by zenwerk (http://ofxaddons.com/repos/63)
ofxOscilloscope by mazbox (http://ofxaddons.com/repos/295)

The whole thing basically consists of a background image and different sound graphs that I rotated and translated onto the screens of the TVs. I skinned the graphs to my taste… nothing particularly special. It kind of looks cool, though.
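
The rotate-and-translate step is the standard matrix-transform pattern. A rough sketch of the idea, written in Processing syntax for brevity (the project itself would use openFrameworks' ofPushMatrix/ofTranslate/ofRotate; the coordinates, angle, and amplitude scale below are made up):

// Draw a waveform rotated and translated onto one TV-screen region.
// Screen position, tilt angle, and sample data are placeholders.
void drawGraphOnScreen(float[] samples, float x, float y, float angleDeg) {
  pushMatrix();
  translate(x, y);           // move the origin to the TV screen
  rotate(radians(angleDeg)); // match the screen's tilt in the image
  stroke(0, 255, 0);
  noFill();
  beginShape();
  for (int i = 0; i < samples.length; i++) {
    vertex(i, samples[i] * 50);  // scale amplitude to pixels
  }
  endShape();
  popMatrix();
}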

I selected these two addons because I wanted to do something with sound and graphing. I don’t typically work with sound, so I figured something new would be good. In addition, I wanted something fairly easy code-wise because I’ve never worked with openFrameworks or C++ before.

Github Repository: https://github.com/yvonnehidle/beatTVs
Original Blog Post @ Arealess: http://www.arealess.com/compiling-ofxaddons/

Yvonne

27 Jan 2013

Simple game using FaceOSC. I was originally going to combine FaceOSC with a pacman game I made for Jim’s class last semester, but I ran into difficulty with FaceOSC and collision mapping. So, instead, I combined FaceOSC commands with a really simple “point-and-shoot” game.

Basically, I am using my face position to move the character around the screen. When I open my mouth, the character opens its mouth, enabling it to eat the flying cherry.
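
The two FaceOSC bindings described above fit in a few lines. A hedged sketch in Processing with oscP5 (the mouth-open threshold and the stand-in drawing code are placeholders, not the code from the repository):

import oscP5.*;
OscP5 oscP5;

float faceX, faceY;  // face position reported by FaceOSC
float mouthH;        // how far open the mouth is

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);  // FaceOSC's default port
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "mouthHeight", "/gesture/mouth/height");
}

void draw() {
  background(255);
  // stand-in for the pacKitty sprite: an arc whose mouth opens and closes
  float gape = (mouthH > 3) ? QUARTER_PI : 0.1;  // placeholder threshold
  fill(255, 200, 0);
  arc(faceX, faceY, 60, 60, gape, TWO_PI - gape, PIE);
}

public void posePosition(float x, float y) {
  faceX = x;
  faceY = y;
}

public void mouthHeight(float h) {
  mouthH = h;
}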

Github Repository: https://github.com/yvonnehidle/faceOSC_pacKitty
Original Blog Post @ Arealess: http://www.arealess.com/working-with-faceosc/

Yvonne

27 Jan 2013

The program is really simple. I took a string of words, split it into individual characters with charAt(), and put those characters in an array. Then I made a second array for the y position of each letter. The program checks the y position of each letter as it falls against the pixel array of the video. If a letter’s y position lands on a video pixel darker than a set threshold, the letter’s direction reverses (which basically results in the letter seeming to stay in place when it sits between a dark and a light pixel, i.e. between the background and an object).

Github Repository: https://github.com/yvonnehidle/textrain
Original Blog Post @ Arealess: http://www.arealess.com/text-rain-version-1/

//////////////////////////////////////////////////////////////////
// GLOBAL VARIABLES
//////////////////////////////////////////////////////////////////
import processing.video.*;
Capture video;

// string array for poem
String poem = "Fancy lines and dancing swirls have nothing on simplicity's curls";
char[] letters = new char[poem.length()];

// letter positioning
final int max = letters.length;
float[] letterY = new float[max];
//////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////
// BASIC SETUP
//////////////////////////////////////////////////////////////////
void setup()
{
  // general setup
  size(640, 480);
  noStroke();
  textSize(20);
  fill(255, 0, 0);

  // video
  video = new Capture(this, 640, 480, 30);
  video.start();

  // divide poem into individual characters
  for (int i = 0; i < poem.length(); i++)
  {
    letters[i] = poem.charAt(i);
  }
}
//////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////
// TABLE OF CONTENTS
//////////////////////////////////////////////////////////////////
void draw()
{
  background(255);

  // falling letters
  fallingLetters();
}
//////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////
// FALLING LETTERS
//////////////////////////////////////////////////////////////////
void fallingLetters()
{
  // if a new webcam frame is available, load it
  if (video.available())
  {
    video.read();
  }
  video.filter(GRAY);
  image(video, 0, 0);

  // variables
  float letterS = 1;                           // fall speed in pixels per frame
  float letterX = 0;
  float letterXSpace = width / letters.length; // horizontal spacing between letters
  int darknessThreshold = 180;

  // generate the letters and have them interact,
  // pixel by pixel, with the video
  video.loadPixels();

  for (int i = 0; i < letters.length; i++) {

    // what is the pixel number in the array?
    int loc = int(letterX + letterY[i] * video.width);
    loc = constrain(loc, 0, video.pixels.length - 1);

    // draw letters
    text(letters[i], letterX, letterY[i]);
    letterX = letterX + letterXSpace;

    // if the letters reach the bottom of the screen, start them at the top again
    if (letterY[i] >= video.height - 1)
    {
      letterY[i] = 0;
    }

    // if the brightness of the pixel is less than our darkness threshold,
    // nudge the letter back up instead of letting it fall
    else if (brightness(video.pixels[loc]) < darknessThreshold)
    {
      if (letterY[i] > 10)
      {
        letterY[i] -= letterS;
      }
    }

    // else always move the letter down
    else
    {
      letterY[i] += letterS;
    }
  }
}
//////////////////////////////////////////////////////////////////