Category Archives: project-1

Anna

26 Jan 2013

TL;DR: All the jagged lines in FaceOSC’s mask-thingy made me think of teeth. The fact that I could control something by chewing also made me think of teeth. I really like oceans, bizarre villainous things, and distractingly shiny things. All these facts put together resulted in a Processing approximation of an anglerfish.

Title credit to classmate Kim Harvey, who saw me meddling with this at my desk and promptly exclaimed “It’s like Nemo’s Nightmare…!”

If you don’t know what she’s talking about, here, have some Pixar:

The full implementation of this is in shambles right now. My intent was to create a set of schooling fish that would direct themselves toward the “light”, which is controlled by the coordinates of your nose in FaceOSC. Right now, that doesn’t quite work, but I did manage to create a set of random ‘fishies’ that don’t exactly school but will move toward the approximate location of your nose using the overall facial position coordinates (pos.x, pos.y) instead. The other thing I wanted to do was enable ‘eating’ of the little fishies, by writing some kind of collision detection between them and the (hidden) mouth ellipse.
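The eating part would boil down to a distance test between each fishie and a circle around the mouth. Here’s a minimal sketch of the idea — it is not working code from this project, and mouthX, mouthY, and mouthRadius are hypothetical stand-ins for values you’d derive from FaceOSC’s pose position and mouth gestures; nemos and Nemo refer to the full sketch below:

// hypothetical sketch: 'eat' any nemo whose center falls inside the mouth circle
// (mouthX/mouthY/mouthRadius would come from FaceOSC's pose + mouth gestures)
void eatNemos(float mouthX, float mouthY, float mouthRadius) {
  for (int i = nemos.size()-1; i >= 0; i--) { // iterate backwards so removing is index-safe
    Nemo n = (Nemo) nemos.get(i);
    if (dist(n.pos.x, n.pos.y, mouthX, mouthY) < mouthRadius) {
      nemos.remove(i); // gulp
    }
  }
}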

But like I said in my other post: I am so not that classy yet. If you don’t believe me, check out the scary anglerfish of a code below, or at the GitHub Repo.

[Still from Nemo’s Nightmare]

// Nemo's Nightmare - a processing app using FaceOSC
// created by Anna von Reden, 2013
// for the IACD Spring 2013 class at the CMU School of Art
//
// created using a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
//TEARDROP CURVE
//M. Kontopoulos (11.2010)
//Based on the parametric equation found at
//http://mathworld.wolfram.com/TeardropCurve.html
//
//STEERING BEHAVIOR
//Based on code examples by iainmaxwell, found here:
// http://www.supermanoeuvre.com/blog/?p=372

import oscP5.*;
OscP5 oscP5;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

// shape constants & variables
float r = 5;
float a = 5;

ArrayList nemos; // an arraylist to store all of our nemos!
PVector shiny; // The shiny will be the current position of the mouse!

int numnemos = 50;
int stageWidth = 640; // size of the environment in the X direction
int stageHeight = 480; // size of the environment in the Y direction

void setup() {
  size(stageWidth, stageHeight, P3D);
  frameRate(30);

  nemos = new ArrayList(); // make our arraylist to store our nemos
  shiny = new PVector(stageWidth/2, stageHeight/2, 0); // make a starting shiny

  // loop to make our nemos!
  for (int i = 0; i < numnemos; i++) {
    nemos.add( new Nemo() );
  }

  smooth();

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}

void draw() {
  background(0, 0, 30);
  noStroke();

  if (found > 0) {
    for (int i = 0; i < nemos.size(); i++) {
      Nemo A = (Nemo) nemos.get(i);
      A.run(); // Pass the population of nemos to the nemo!
    }

    translate(posePosition.x, posePosition.y);
    scale(poseScale);
    shiny = new PVector(posePosition.x, posePosition.y-50, 0); // move the shiny to hover above the face

    // fill(140, 180, 240);
    // stroke(0,0,30);
    // ellipse(-10, eyeLeft * -9, 5, 4);
    // ellipse(10, eyeRight * -9, 5, 4);
    // fill(0,0,30);
    // ellipse(0, 20, mouthWidth*5, mouthHeight * 5);

    // teeth! one triangle per line
    noStroke();
    fill(25, 45, 45);
    beginShape(TRIANGLES);
    vertex(mouthWidth-2, mouthHeight*5);   vertex(mouthWidth+2, mouthHeight*5);        vertex(mouthWidth+6, mouthHeight);
    vertex(mouthWidth-10, mouthHeight*6);  vertex(mouthWidth-6, mouthHeight*2);        vertex(mouthWidth-2, mouthHeight*5);
    vertex(mouthWidth-14, mouthHeight*6);  vertex(mouthWidth-12, mouthHeight*2);       vertex(mouthWidth-10, mouthHeight*6);
    vertex(mouthWidth-18, mouthHeight*6);  vertex(mouthWidth-16, mouthHeight*2);       vertex(mouthWidth-14, mouthHeight*6);
    vertex(mouthWidth-26, mouthHeight*5);  vertex(mouthWidth-22, mouthHeight*2);       vertex(mouthWidth-18, mouthHeight*6);
    vertex(mouthWidth-34, mouthHeight);    vertex(mouthWidth-30, mouthHeight*5);       vertex(mouthWidth-26, mouthHeight*5);
    vertex(mouthWidth-2, mouthHeight*-7);  vertex(mouthWidth+2, (mouthHeight*-5)+20);  vertex(mouthWidth+6, mouthHeight*-3);
    vertex(mouthWidth-10, mouthHeight*-9); vertex(mouthWidth-6, (mouthHeight*-5)+20);  vertex(mouthWidth-2, mouthHeight*-7);
    vertex(mouthWidth-14, mouthHeight*-9); vertex(mouthWidth-12, (mouthHeight*-5)+20); vertex(mouthWidth-10, mouthHeight*-9);
    vertex(mouthWidth-18, mouthHeight*-9); vertex(mouthWidth-16, (mouthHeight*-5)+20); vertex(mouthWidth-14, mouthHeight*-9);
    vertex(mouthWidth-26, mouthHeight*-7); vertex(mouthWidth-22, (mouthHeight*-5)+20); vertex(mouthWidth-18, mouthHeight*-9);
    vertex(mouthWidth-34, mouthHeight*-3); vertex(mouthWidth-30, (mouthHeight*-5)+20); vertex(mouthWidth-26, mouthHeight*-7);
    endShape();

    // the glowing lure, drawn with the teardrop curve
    fill(180, 200, 100);
    beginShape();
    for (int i=0; i<360; i++) {
      float x = (nostrils*-1)/4 + sin( radians(i) ) * pow(sin(radians(i)/2), 1.5) *r;
      float y = (nostrils*-1)/4 + cos( radians(i) ) *r;
      vertex(x+3, -y-15);
    }
    endShape();
    //ellipse(0, nostrils * -1, 10, 10);
  }
}

class Nemo {
  PVector pos, vel, acc;
  float maxVel, maxForce, nearTheShiny;
  int nemoSize;

  Nemo() {
    pos = new PVector( random(0, width), random(0, height), 0 );
    vel = new PVector( random(-1, 1), random(-1, 1), 0 );
    acc = new PVector(0, 0, 0);
    maxVel = random(.5, 1.0);
    maxForce = random(0.2, 1.5);
    nearTheShiny = 200;
    nemoSize = 20;
  }

  void run() {
    seek(shiny.get(), nearTheShiny, true);
    // update position
    vel.add(acc);      // add the acceleration to the velocity
    vel.limit(maxVel); // clip the velocity to a maximum allowable
    pos.add(vel);      // add velocity to position
    acc.set(0, 0, 0);  // make sure we set acceleration back to zero!
    toroidalBorders();
    render();
  }

  // Get to the Shiny!
  void seek(PVector shiny, float threshold, boolean slowDown) {
    acc.add( steer(shiny, threshold, slowDown) );
  }

  // Steering
  PVector steer(PVector shiny, float threshold, boolean slowDown) {
    PVector steerForce; // The steering vector
    shiny.sub(pos);
    float d2 = shiny.mag();
    if ( d2 > 0 && d2 < threshold) {
      shiny.normalize();
      if ( (slowDown) && d2 < threshold/2 ) shiny.mult( maxVel * (threshold/stageWidth) );
      else shiny.mult(maxVel);
      shiny.sub(vel);
      steerForce = shiny.get();
      steerForce.limit(maxForce);
    }
    else {
      steerForce = new PVector(0, 0, 0);
    }
    return steerForce;
  }

  //keep fishies on screen
  void toroidalBorders() {
    if (pos.x < 0) pos.x = stageWidth;
    if (pos.x > stageWidth) pos.x = 0;
    if (pos.y < 0) pos.y = stageHeight;
    if (pos.y > stageHeight) pos.y = 0;
  }

  void render() {
    stroke(120, 190, 200);
    fill(120, 190, 200);
    ellipse(pos.x, pos.y, nemoSize/(int(poseScale)+1), nemoSize/(int(poseScale)+1));
    line(pos.x, pos.y, pos.x-(vel.x*nemoSize/(int(poseScale)+1)), pos.y-(vel.y*nemoSize/(int(poseScale)+1)) );
  }
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
println("found: " + i);
found = i;
}

public void poseScale(float s) {
println("scale: " + s);
poseScale = s;
}

public void posePosition(float x, float y) {
println("pose position\tX: " + x + " Y: " + y );
posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
println("mouth Width: " + w);
mouthWidth = w;
}

public void mouthHeightReceived(float h) {
println("mouth height: " + h);
mouthHeight = h;
}

public void eyeLeftReceived(float f) {
println("eye left: " + f);
eyeLeft = f;
}

public void eyeRightReceived(float f) {
println("eye right: " + f);
eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
println("eyebrow left: " + f);
eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
println("eyebrow right: " + f);
eyebrowRight = f;
}

public void jawReceived(float f) {
println("jaw: " + f);
jaw = f;
}

public void nostrilsReceived(float f) {
println("nostrils: " + f);
nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
if (m.isPlugged() == false) {
println("UNPLUGGED: " + m);
}
}

Anna

26 Jan 2013

“Listen as the bonds fall off, which hold you, above and below….”

So… hey there: AvR here, with a bucket of gatorade, a pile of saltines, and a bunch of code to share with you. Delightful!

This is my solution for recreating TextRain in Processing. In short, it knocks out the background by making everything two-toned, and then searches for the darker pixel color. Nothing super fancy, but fancy by my standards, because I’m not super fast at this yet…

I wanted to say a bit about the text itself. The lines are from Guillaume Apollinaire’s famous French calligram “Il Pleut” (It Rains). Calligrams are poems where the typographic layout and shape of the words contribute to the meaning of the poem. The original looks like this:

[Image: Apollinaire’s “Il Pleut” calligram]

I’ve always loved the work of Apollinaire, and this assignment seemed like the perfect opportunity to breathe new life into his ideas. Obviously ‘Gui’ didn’t have a computer available to him, and he was forced to rely on static shapes to convey the idea of rain. I liked the idea of being able to add a little motion to the mix. The idea of ‘freeing’ the letters from their frozen position on the page, to me, fits nicely with the final line of the poem: “Listen as the bonds fall off, which hold you, above and below….”

Credit where credit is due: I listened to John‘s suggestion to redraw the webcam image as a set of larger rectangles — not because it looked cool, but because it helped Processing deal with the ridiculous output of my retinabook. Mike also nudged me in the right direction about locating individual pixels to gauge brightness, because on my first attempt I’d written myself into a processing abyss with a whole pile of letter classes, and couldn’t figure out if I was the dimmest bulb in the chandelier, much less if a pixel was the dimmest pixel in the window…….

Merci Beaucoup.

[Screenshots from my TextRain reimplementation]

Github Repo

import processing.video.*;
Capture retinacam;
PFont f;
final int pixeler = 4;
int[] time = { 0, 20000, 40000 };

String lineone = "Il pleut des voix de femmes comme si elles étaient mortes même dans le souvenir";
String linetwo = "c’est vous aussi qu’il pleut, merveilleuses rencontres de ma vie. ô gouttelettes";
// String linethree = "et ces nuages cabrés se prennent à hennir tout un univers de villes auriculaires";
// String linefour = "écoute s’il pleut tandis que le regret et le dédain pleurent une ancienne musique;"
// String linefive = "écoute tomber les liens qui te retiennent en haut et en bas":

int[] charx = new int[lineone.length()];
int[] chary = new int[lineone.length()];
int[] charx2 = new int[linetwo.length()];
int[] chary2 = new int[linetwo.length()];

void setup() {
  size(1280, 720);

  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  }
  else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
  }
  retinacam = new Capture(this, cameras[0]);
  retinacam.start();
  colorMode(HSB, 255);
  noStroke();
  f = createFont("Futura", 24, true);
  textFont(f);
//
//for (int i = 0; i < linetwo.length(); i++) {
// chary2[i] = -200;
//}

  charx[0] = 10;
  for (int i=1; i < lineone.length(); i++) {
    charx[i] = charx[i-1] + 15;
    println(textWidth(lineone.charAt(i-1)));
  }

  charx2[0] = 10;
  for (int i=1; i < linetwo.length(); i++) {
    charx2[i] = charx2[i-1] + 15;
    println(textWidth(linetwo.charAt(i-1)));
  }
}
}

void draw() {
  if (retinacam.available()==true) {
    retinacam.read();
  }

 // retinacam.loadPixels();
  int threshold = 60;

  for (int x = 0; x < retinacam.width; x+=pixeler) {
    for (int y = 0; y < retinacam.height; y+=pixeler) {
      int loc = x + y*retinacam.width;
      if (brightness(retinacam.pixels[loc]) > threshold) {
        fill(160, 100, 100);
      }
      else {
        fill(200, 100, 50);
      }
      rect(x, y, pixeler, pixeler);
    }
  }

  retinacam.updatePixels();

  //image(retinacam, 0, 0);

  if (millis() >= time[0]) {
    for (int i = 0; i < lineone.length(); i++) {
      if ((chary[i] > retinacam.height) || (chary[i] < 0)) {
        chary[i] = 0;
      }
      else {
        chary[i] = chary[i] + int((brightness(get(charx[i], chary[i]))-60)/random(10, 15));
      }
      fill(112, 250, 180);
      text(lineone.charAt(i), charx[i], chary[i]);
    }
  }

  if (millis() >= time[1]) {
    for (int i = 0; i < linetwo.length(); i++) {
      if ((chary2[i] > retinacam.height) || (chary2[i] < 0)) {
        chary2[i] = 0;
      }
      else {
        chary2[i] = chary2[i] + int((brightness(get(charx2[i], chary2[i]))-60)/random(5, 29));
      }
      fill(35, 250, 250);
      text(linetwo.charAt(i), charx2[i], chary2[i]);
    }
  }
}

Robb

26 Jan 2013

I combined the AudioOutput Example and the Billboard Example.
One thousand, one hundred eleven asses fill the screen as a horrible screech takes over your mind.
This is very conceptual. You might not understand.
I spent some serious time tweaking the numbers on the skin tone of those butts.
The artifacts of noise and blurriness are intentional. Easy enough to remove.
Enjoy.

Butt Image Credit: High-school Robb

Robb

25 Jan 2013

Using my face to control a pan/tilt mirror.

FaceOSC, MaxMSP, maxuino, two servos, a Teensy 2.0 running standard Firmata, and a shard of glass combine into a system for directing laser beams with your head.
Maxuino is useful for Max-to-Arduino communication. It is developed by CMU art professor Ali Momeni.
I used Max because it makes it really easy to smooth and visualize data. This project took very little time.
Maxuino actually expects OSC-formatted messages, meaning that my patch is simply translating OSC to OSC and then to Firmata.
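A Max patch doesn’t paste well into a blog post, but for the Processing-inclined, the relay described above has roughly this shape. This is a hedged sketch, not Robb’s patch: the incoming plug matches FaceOSC’s /pose/position message, while the outgoing /servo/pan address and port 8000 are made up, since the real message format is whatever the maxuino patch expects.

// rough Processing equivalent of the OSC-to-OSC relay (hypothetical addresses)
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress servoPatch; // wherever the servo-driving patch listens (made-up port)

void setup() {
  osc = new OscP5(this, 8338); // FaceOSC's default output port
  osc.plug(this, "posePosition", "/pose/position");
  servoPatch = new NetAddress("127.0.0.1", 8000);
}

// head moves -> servo angle
public void posePosition(float x, float y) {
  OscMessage m = new OscMessage("/servo/pan"); // hypothetical address
  m.add(map(x, 0, 640, 0, 180)); // camera x -> servo angle in degrees
  osc.send(m, servoPatch);
}

void draw() {
  // nothing to render; this sketch only relays messages
}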

GitHub Link

Robb

25 Jan 2013

I went with the threshold method. Ideally, I think it should use background subtraction, but that’s for another day; a rough sketch of the idea appears below.
The rainbow circles give the letters legibility, playfulness, and greater visual weight. I encountered several challenges with this project.
It took me about 4 hours.
I commented the heck out of my mediocre code if anyone is curious.
I have trouble with logic, so that section is likely verbose and roundabout. I’d love some tips.
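For reference, background subtraction in this setting can be as simple as snapshotting an empty frame and thresholding the per-pixel difference against it. Here is a minimal sketch of that idea — not code from Robb’s repo, and it assumes a static camera and a keypress to capture the empty scene:

// minimal background subtraction sketch (hypothetical, not part of this project)
import processing.video.*;

Capture video;
int[] bg; // snapshot of the empty scene

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void keyPressed() { // press any key while standing out of frame
  video.loadPixels();
  bg = video.pixels.clone();
}

void draw() {
  if (video.available()) video.read();
  video.loadPixels();
  if (bg == null || bg.length != video.pixels.length) {
    image(video, 0, 0); // no background stored yet
    return;
  }
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    // anything that differs enough from the stored frame counts as "person"
    float diff = abs(brightness(video.pixels[i]) - brightness(bg[i]));
    pixels[i] = (diff > 30) ? color(0) : color(255);
  }
  updatePixels();
}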
w/♡,
-Robb

GitHub Link

////RobbRain/////Textrain Spinoff, after Camille Utterback
////2013/////www.robb.cc//////////////////

import processing.video.*; ///add the library

Capture video;

int threshold = 122; //brightness value to count as dark
int letterSize = 20; //diameter of circles, real or imaginary that bound the falling shapes
int columnWidth = 20; //how wide you want the columns to be
int columnQty; //dependent on above
int numPix; //dependent, how many pixels are there?
int[] fallingY; //array of all the y values of our objects
String poem = "Trust not he who winces in rain.";

//char[] poem = { o,h, };  // Alternate syntax

void setup() {
  size(640, 480, P2D); // Change size to 320 x 240 if too slow at 640 x 480
  video = new Capture(this, width, height);
  video.start();  
  colorMode(HSB); //easy rainbows man
  noCursor(); //cursors are for sissys
  noStroke();//as are strokes
  smooth();//doesn't work.
println(poem.length());
  numPix = video.width*video.height; //calc the pixel qty
  columnQty = width/columnWidth; //calc the column qty

  fallingY = new int[columnQty]; //size that array
  for (int i = 0; i < columnQty; i++) {
    fallingY[i] = 10; //start everything near the top
  }
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, 0, 0);
    for (int i = 0; i < columnQty; i++) {
      int sampleX = i*columnWidth + columnWidth/2; //sample the middle of each column
      if (brightnessXY(sampleX, fallingY[i]) > threshold) {
        fallingY[i] += 3; //nothing dark below, so keep falling
      }
      if (fallingY[i] > height || fallingY[i] < 0) { //if it falls off the bottom or top, it should go again. no giving up
        fallingY[i] = 10;
      }
      fallingLetter(sampleX, fallingY[i], poem.charAt(i), i); ///draws the thing.
    }
    if (mousePressed) { setup(); } ////cute.. puts the letters at top if click.
  }
}
}


void fallingLetter(int letterX, int letterY, char letter, int i) {
  fill(map(i, 0, columnQty, 0, 255), 255, 255, 100);
  ellipse(letterX, letterY, letterSize, letterSize);
  fill(0);
  text(letter, letterX-letterSize/3, letterY+letterSize/5);
}


int brightnessXY(int xxx, int yyy) { ///this lil guy calc the brightness of a video pixel and returns it as an int.
  int  videoIndex = constrain(yyy * video.width + xxx, 0, numPix-1);   //index = y*videoWidth + x
  int briteSpot = int(brightness(video.pixels[videoIndex]));
  return briteSpot;
}

void mouseTrack() { //useful for finding the proper threshold.
  float briteMouse = brightnessXY(mouseX, mouseY);
  if (briteMouse > threshold) { 
    fill(0);
  } 
  else { 
    fill(100);
  }
  rect(mouseX, mouseY, 20, 20);
}

Ersatz

25 Jan 2013

For this assignment I combined two addons, ofxMacamPs3Eye and ofxFern, to create a simple augmented reality application.

ofxMacamPs3Eye

This addon is probably the only way to get the long-praised PlayStation 3 Eye camera working inside openFrameworks. Its ability to run at 60fps at a resolution of 640×480 makes it the best affordable option for AR or any other computer vision tracking application.

ofxFern

ofxFern is an implementation of the Fern tracker from EPFL CVLab. It was written by Theo Watson and used in an interactive application made for Boards Magazine back in 2010 – more about this project

Demo Video:

Concept and Process:

A book cover with 8-bit art is augmented with animation when shown to the camera.

I tried a couple of different images; colorful magazine and book covers work best with Fern tracking, while a two-color card won’t work. From there the process is pretty simple: ofxFern gives you the coordinates of the four corners of your book cover, which you can use to draw a rectangle textured with an image from a video (a sketch of that step is below). In my case, the video was created in After Effects using the original static image.
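Since the rest of this page is Processing rather than openFrameworks, here is a hedged Processing-style sketch of the corner-to-textured-quad step; the corner vectors and overlay image are hypothetical stand-ins for what the tracker and video would supply each frame:

// hedged sketch of drawing a video frame onto four tracked corners (not the oF code)
PVector tl, tr, br, bl; // hypothetical: the four corners the tracker reports
PImage overlayFrame;    // hypothetical: the current frame of the replacement video

void setup() {
  size(640, 480, P3D); // texture() needs a P2D or P3D renderer
  overlayFrame = createImage(320, 240, RGB); // stand-in for a video frame
  // fixed corners here; a tracker would update these every frame
  tl = new PVector(100, 80);
  tr = new PVector(420, 120);
  br = new PVector(400, 360);
  bl = new PVector(120, 330);
}

void draw() {
  background(0);
  noStroke();
  beginShape();
  texture(overlayFrame);
  vertex(tl.x, tl.y, 0, 0); // screen position + texture coordinate
  vertex(tr.x, tr.y, overlayFrame.width, 0);
  vertex(br.x, br.y, overlayFrame.width, overlayFrame.height);
  vertex(bl.x, bl.y, 0, overlayFrame.height);
  endShape(CLOSE);
}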

Github: https://github.com/kamend/IACD_ofxFern

Ersatz

24 Jan 2013


Lines of Emotion

Recently I read about how drawn lines can express emotion: more curved lines suggest a calm mood, while harsher, straighter lines can suggest excitement. This application takes the user’s facial expression values via Kyle McDonald’s FaceOSC and transforms them into animated lines. For example, your mouth controls how curved the lines are, your eyes the speed at which they move, and your eyebrows the distance between them (a rough sketch of this mapping follows).
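This is not the actual app, but a minimal Processing sketch of that kind of mapping; the variable names and gesture ranges are made up, standing in for values the FaceOSC callbacks would feed in:

// hedged sketch: map (made-up) face gesture values onto animated lines
float mouthHeight = 2;  // would be updated by a FaceOSC callback
float eyeOpenness = 3;  // ditto
float eyebrowRaise = 8; // ditto
float phase;

void setup() {
  size(640, 480);
  noFill();
  stroke(255);
}

void draw() {
  background(0);
  float curviness = map(mouthHeight, 1, 7, 2, 40);  // mouth -> how curved
  phase += map(eyeOpenness, 2, 4, 0.02, 0.2);       // eyes -> how fast
  float spacing = map(eyebrowRaise, 7, 9, 12, 40);  // eyebrows -> how far apart
  for (int l = 0; l < 8; l++) {
    beginShape();
    for (int x = -20; x <= width+20; x += 20) {
      curveVertex(x, 100 + l*spacing + sin(phase + x*0.02 + l) * curviness);
    }
    endShape();
  }
}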

Github: https://github.com/kamend/IACD_OSC

Project 1: TextRain


Here is my reimplementation of TextRain. I am offering more of a proof-of-concept solution, so I kept things really simple. The implementation process is as follows: read a text file with a famous quote on every line, and at start initialize an ArrayList of all the letters from every loaded text line, positioned according to their order in the sentence. Then on every frame I get a live video feed from my webcam and, going through all the “flying” letters, I check whether they collide with areas of the image that are darker than my collisionThreshold; if they collide, I keep the letter still, and if not, the letter continues down the road.

Code: https://github.com/kamend/IACD_TextRain

import processing.video.*;

// objects
class Letter {
  PVector vel;
  PVector pos; 
  char ch;
  color col;
};

// global vars
int videoWidth = 640;
int videoHeight = 480;
Capture cap; // video capture device

ArrayList<Letter> LetterList = new ArrayList<Letter>(); // the list that will hold the particles

float letterXOffset = 10; // how far we should draw the individual letters
float letterYOffset = 400; // how far every sentence should be
float collisionThreshold = 30; // how dark the areas of collision must be
PFont font;

void setup() {
  size(640, 480);

  // initalizes video capture
  cap = new Capture(this, videoWidth, videoHeight);
  cap.start();

  // read text and launch "particles"
  launchSentences();

  // setup display font
  font = createFont("Helvetica Bold", 16, true);
  textFont(font, 16);
}

void launchSentences() {

  // read sentences from a file
  String lines[] = loadStrings("text.txt");

  for (int lineNum = 0; lineNum < lines.length; lineNum++) {
    String sentence = lines[lineNum];
    int setanceLength = sentence.length();
    for (int i = 0; i < setanceLength; i++) {
      Letter l = new Letter();
      l.ch = sentence.charAt(i);
      l.col = color(255);
      l.pos = new PVector(i*letterXOffset, -lineNum*letterYOffset); // stagger sentences above the frame
      l.vel = new PVector(0, random(1, 3)); // each letter falls at its own pace
      LetterList.add(l);
    }
  }
}

void update() {
  if (cap.available()) {
    cap.read();
  }
  cap.loadPixels();

  for (int i = 0; i < LetterList.size(); i++) {
    Letter l = LetterList.get(i);
    if (!isColliding(l.pos)) {
      l.pos.add(l.vel); // nothing dark below: the letter continues down the road
    }
    if (l.pos.y > height) {
      l.pos.y -= height+letterYOffset; // wrap back above the frame
    }
  }
}

boolean isColliding(PVector pos) {
  float radius = 10.0;
  if (pos.y > radius && pos.y <= videoHeight-radius && pos.x >= 0 && pos.x < videoWidth) {
    PVector pixelPos = pos.get();
    pixelPos.add(0.0, radius, 0.0);
    int pixelIndex = floor(pixelPos.x) + floor(pixelPos.y)*videoWidth;
    color pixelColor = cap.pixels[pixelIndex];

    if (brightness(pixelColor) < collisionThreshold)
      return true;
  }
  return false;
}

void draw() {
  update();

  background(0);
  image(cap, 0, 0); 

  for (int i = 0; i < LetterList.size(); i++) {
    Letter l = LetterList.get(i);
    if (l.pos.y >= 0.0) {
      fill(l.col, 255 - abs(map(l.pos.y, 0, height, -200, 200)));
      text(l.ch, l.pos.x, l.pos.y);
    }
  }
}

Anna

21 Jan 2013

It’s probably worth pointing out at the get-go that I have never used openFrameworks, and as such I have very little knowledge of what half of these addons are even capable of. So here is my hand-wavy, blue-sky assessment of what I think looks interesting.

ofxPXCU 
by IntelPerceptual

openFrameworks addon for Intel’s Perceptual Computing SDK (PXCUPipeline) [view on Github]

The thing that drew me to this add-on was the fact that Intel’s Perceptual Computing SDK apparently supports Nuance Language Processing. Having worked with some of the Nuance people and seen their speech to text programs in action in high-pressure situations (like electronic medical record generation), I’d love the opportunity to experiment with it in my personal projects.

Consider: a program where two people talk into a microphone on separate occasions and tell a short story of their own making (or maybe two versions of the same ‘truth’). The program merges their tales into a new, third story by looking at the sentences and substituting clauses or appropriate parts of speech. Like mad-libs, only more fluid, and actually blending full sentences into each other rather than just filling in blanks.

ofxFern 
by ofTheo

An implementation of the Fern tracker from EPFL CVLab [view on Github]

This is sort of cheating, since we watched the ‘magic book’ demo in class on Monday, but I fell madly in love with that project, so here I am. I feel like this addon would help me take the next step in a project I began last semester in Sequential Visual Narrative. There, I created a set of photographs of objects which told a fictional story. An evidence form was attached to each photo, which had been filled out by one of the characters in the story: a police detective. The aim of the game was to piece together what was going on in the larger narrative based solely on the artifacts the detective had amassed, and his very biased opinions about them.

When I saw what the Fern tracker was capable of doing, I immediately wondered what other secrets I could hide within those photographs. Maybe in addition to the detective’s written narratives, the Fern tracker would reveal the history of the object —hidden poems from the owner, memories, daydreams, promises… Maybe tilting the photograph would reveal a fingerprint, or a monogram … Maybe tilting the camera in different directions would reveal the opinion of a different character…

ofxTesseract 
by kylemcdonald

tesseract-ocr wrapper for openFrameworks [view on Github]

My thoughts on this addon are similar to the previous two, but harnessing Tesseract could give me the ability to interact with printed text, and once I have the text, I could do any number of language processing activities with it, illustrate it, re-format it, translate it, or just use it as a data set.

Words, words, words!

Anna

20 Jan 2013

Audience – rAndom International from Chris O’Shea on Vimeo.

A few weeks back, Mike sent me a link to this webcomic about the process of coming up with new and off-the-wall ideas. It made me pretty happy — as did the protagonist’s almost manic enthusiasm about the possibility of letting the stars see us.

This project doesn’t quite make it to the stars, but it’s a powerful, whimsical and ‘reversive’ installation that makes us consider the purpose of objects and the purpose of ourselves.

The idea of having mirrors turn their faces to follow a person isn’t all that extreme — we see similar types of motion with solar panels following the sun. In my opinion the success of this installation is all in the details: the decision to give each mirror a set of ‘feet’ instead of a tripod or a stalk, or the fact that each mirror has an ‘ambient’ state as well as a reactive state (see video). They are capable of seeing you, but they weren’t made to see you — they seem to pay attention to you because they decide they want to, and so their purpose transcends their task, in a way.

There is a strange subtlety in their positioning, too: clumped, but random, like commuters in a train station or traders on Wall Street. Everything comes together to give the mirrors an eerie, humanlike quality, and makes the participant want to engage — because maybe something really is looking back.

The Treachery of Sanctuary by Chris Milk

I’m probably being really obvious about my tastes, posting about this installation right after gushing about how much I loved the spider dress. Even though at face value the idea of giving someone’s silhouette a pair of wings seems—I don’t know, adolescent and cliché, maybe?—there’s something elegant, bleak and haunting about this piece. Think Hitchcock, or Poe. I’m less drawn to the final panel (the one where the participant gets wings) than I am to the first two. I really enjoy Milk’s commentary (see video HERE) about how inspiration can feel like disintegrating and taking flight. And there’s something powerful about watching (what appears to be) your own shadow—something constant and predictable, if not immutable—fragment and disappear before your eyes. The fact that Milk has created the exhibit to fool the audience into thinking they are under bright light, rather than under scrutiny from digital imaging technology, lends the trick this power, I think.

All in all, the story Milk tells about the creative process works, and puts the ‘wing-granting’ in the final panel into a context where it makes poetic sense, instead of just turning people into archangels because ‘it looks cool’. (It does.)

Sentence Tree from Andy Wallace on Vimeo.

This is a quirky little experiment that organizes sentences you type into trees, based on punctuation and basic grammar structures. The creator, Andy Wallace, described the piece as ‘a grammar exercise gone wrong’, but I wonder if the opposite isn’t true. Even as a lover of words, it’s hard to think of something more boring than diagraming sentences the traditional way: teacher at a whiteboard drawing chicken-scratch while students sleep. I like the potential of this program to inject some life into language and linguistics. Think of the possibilities: color code subject, object, verb, participle, gerund. Make subordinate clauses into subordinate branches. Structure paragraphs by transitional phrases, evidence, quotations, counterarguments. Brainstorm entire novels or essays instead of single sentences! This feels like the tip of an iceberg.