Heather Knight – Face OSC

by heather @ 2:05 am 24 January 2012

An antagonistic photon-cannon whose main directive is to cause damage to your eyesight.

[vimeo=https://vimeo.com/35554377]

Please enjoy my audio description. In a department where 50% of our funding comes from the military and some of our most exciting innovations are in the war zones, I decided to use FaceOSC to attack our own citizens. Technology can be used for good or evil, for the suspicious or the silly. Several of our drones have recently been reassigned from the Middle East to domestic skies in the name of law and order. Open-source software could easily bootstrap the development of creatively nefarious systems. Vigilance.

At first, I thought I’d build this for real, using servo-motors to home in on human faces and laser pointers to threaten mild damage. Then I fell victim to the playfulness of particle systems. The logo for this project would be a bullseye with a winking orb at the center, complete with luscious lashes.

EvanSheehan-LookingOutwards-1

by Evan @ 1:56 am

Mandala (A Musical Palindrome)

[youtube=https://www.youtube.com/watch?&v=_4GbaK22jjw]

A friend of mine sent me this piece recently because of my FaceOSC project. The artist used the planets of our solar system to generate a piece of music by assigning each planet a pitch and then playing that pitch each time the planet completed an orbit about the sun. The periodicity of the planets’ orbits results in a piece of music that is the same backwards and forwards if you let it play out long enough.
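The construction is simple enough to sketch in a few lines of Python. The periods and pitches below are invented for illustration, not the piece's actual mapping:

```python
from functools import reduce
from math import lcm

# Hypothetical mapping: each planet gets an orbital period (in beats) and a pitch.
planets = {"Mercury": (2, "C"), "Venus": (3, "E"), "Earth": (5, "G")}

# One full cycle is the least common multiple of all the periods.
total = reduce(lcm, (period for period, _ in planets.values()))

# A planet sounds its pitch every time it completes an orbit.
events = sorted((t, pitch)
                for period, pitch in planets.values()
                for t in range(period, total, period))

# Reflecting every event time about the cycle's midpoint reproduces the same
# event list, which is why the piece is a palindrome.
mirrored = sorted((total - t, pitch) for t, pitch in events)
print(events == mirrored)  # True
```

Any multiple of a period reflected about the full cycle is still a multiple of that period, so the mirrored schedule is identical to the original.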

I love this piece for several reasons. One: SCIENCE. Two: I really like the notion of making art from physical phenomena. Three: as a musician, I enjoy it when people take data typically represented visually and present it aurally.

I really enjoyed the visualizations in this piece: the demonstration of a planet’s natural sine wave and the evidence that the piece is a palindrome. But I think the piece as a whole was too text heavy. A lot of information could have been conveyed visually and might have made the piece more fun to watch.

Meek•FM
[vimeo=https://vimeo.com/35146511]

Here’s another interesting project that converts visual data to sound. They built a board that allows you to control letterforms projected by the device. But the letterforms are parsed into sounds as well as light, so the board is also an instrument of sorts.

Again, I love the blending and blurring of audio and visual information. I also love the craftsmanship and interactivity of the board. It strikes me as a very engaging piece. It’s the kind of thing that has inspired me to make things ever since I was a kid visiting science and natural history museums.

Magnetic Movie
[youtube=https://www.youtube.com/watch?v=IT2AQC3X5bk]

Based, I believe, on the audio track, this movie is an artist’s conception of what magnetic fields look like. I love the lines; they remind me of many renderings of chaotic maps I’ve seen. I don’t know that this is really a form of computational art, but it is art inspired by science. It’s something of a shame that those are not real magnetic fields, but it’s still fun to watch.

Mahvish Nagda – Looking Outwards 1

by mahvish @ 1:52 am

Visualizing data non-visually

Using senses other than eyesight to visualize data is a pretty broad topic. There are a couple of interesting projects that come to mind.

A recent article in Interactions (link) highlights experiments that test if people can match sonifications of a data set (with a particular sampling rate) with their visual graphs. 70 listeners had about a 60% accuracy rate (higher than the 25% random rate).
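As a toy version of such a sonification, one can map each sample of a data series onto a frequency range the same way a graph maps it onto a y-axis. This is a hypothetical sketch, not the study's actual method:

```python
def sonify(series, low=220.0, high=880.0):
    """Linearly scale each sample of a series to a frequency in Hz."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0  # avoid dividing by zero on a flat series
    return [low + (v - lo) / span * (high - low) for v in series]

# Peaks in the data become high pitches, troughs become low ones.
freqs = sonify([1, 3, 2, 5, 4])
print([round(f) for f in freqs])  # [220, 550, 385, 880, 715]
```

A listener matching this to a line chart only has to compare pitch contours to curve shapes, which is presumably what pushed accuracy well above chance.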

Another project, called Olly (link & article), aka Smelly Bot, is a stackable “robot” that releases a smell/“aroma” when you receive a specific event from Facebook, Twitter, and the like.

My favorite, though, is not really a visualization, but I’d like to think it opens up possible avenues for exploration. It’s a TED talk by Homaro Cantu & Ben Roche about the work they are doing in their Chicago restaurant, Moto. I don’t know if the entire talk is applicable, but being able to print out the taste of your food on a sheet of paper is pretty cool.

I think all of these different senses could potentially open up richer ways to visualize data.

@0:44

Soundmachines

Soundmachines is a table-sized instrument for performing electronic music by DJing visual patterns on record-sized discs. The table has three units that look like unconventional record players. Each unit spins a disc with concentric geometric patterns that translate into control signals.

I’m pretty tone deaf, so I especially appreciate this project. In fact, making music or having to understand music scares me, and sometimes I’ll have to ask other people to count down beats for me. It’s pretty intimidating. What I loved about this project is that the mapping between beats, tunes, etc., and the geometric patterns on the discs was pretty easy to grasp, and it makes the act of making music easy and accessible. Although this project uses an Arduino and physical discs, I don’t see a reason why the player couldn’t be virtual, like on an iPad. You could also add your own discs for new patterns.
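My guess at the mechanism, assuming each concentric ring acts like a step pattern read by a fixed sensor head as the disc spins, can be modeled in a few lines of Python:

```python
# One ring of a disc as 8 angular steps of black (1) / white (0).
ring = [1, 0, 0, 1, 0, 1, 1, 0]

def read_step(angle_deg, pattern):
    """Value under a fixed read head once the disc has rotated angle_deg."""
    step_size = 360 / len(pattern)
    return pattern[int((angle_deg % 360) / step_size)]

# One full revolution sampled every 45 degrees replays the pattern in order,
# like a circular step sequencer.
triggers = [read_step(a, ring) for a in range(0, 360, 45)]
print(triggers)  # [1, 0, 0, 1, 0, 1, 1, 0]
```

Spinning the disc faster just compresses the same pattern in time, which is exactly the tempo knob a DJ would expect.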

lumiBots

lumiBots are autonomous UV-light-emitting robots that roam on top of a 1 x 2 meter phosphorescent surface. I didn’t particularly love this project. What I did like were the light traces the bots left behind; I think there’s definitely a lot of potential to make that look interesting. I didn’t find the final complex pattern that emerged interesting and felt it was pretty random. I do think there must be other algorithms that create interesting patterns with the traces that are left behind. This could also be because there were only 9 bots: maybe more would make the effect better.

Jonathan-13-9-65

by jonathan @ 1:49 am

I approached this part of the project in a different way. Instead of trying to come up with an algorithm to derive the art piece, I decided to turn it into one derived from the user through the use of sound. Though it’s obviously not perfect and is missing a few vital components, it more or less resembles the original 13/9/65.

import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
 
Minim minim;
AudioInput in;
 
int  y, barMic, micIn, count;
float x;
 
ArrayList underLine;
 
void setup() {
 
  size(400, 400);
  // underLine = new ArrayList();
  background(0);
  x = 0;
  y = 40;
  count = 0;
  minim = new Minim(this);
  in = minim.getLineIn(Minim.STEREO, 512);
}
 
void draw() {
 
  noFill();
  strokeWeight(1);
  stroke(255);
  rectMode(CENTER);
  smooth();
  followLine();
  create();
  println(micIn);
  frameRate(60);
}
 
void followLine() {

  // Read the mic input and advance the drawing position.
  // (Reconstructed: the "<" comparisons in this loop were eaten by the
  // blog's HTML, so the loop bounds and scaling below are a best guess.)
  for (int i = 0; i < in.bufferSize(); i++) {
    micIn = int(abs(in.left.get(i)) * 500);
    x = x + 1;

    if (x > width) {
      x = 0;
      y = y+40;

      if (y > height) {
        y = 0;
        x = 0;
      }
    }
  }
}
 
void create() {
 
  //float [] underLine = new float [500];
 
  stroke(0); 
 
  if (micIn > 200) {
 
    ellipse(x, y, micIn/5, micIn/5);
  } 
 
  // (Reconstructed: the original comparisons were eaten by the blog's HTML.)
  if (micIn > 15 && micIn < 50) {
    rect(x, y, micIn/2, micIn/2);
  }

  if (micIn > 50 && micIn < 90) {
    line(x+random(-20, 20), y-20, x+random(-20, 20), y+20);
  }
}

SankalpBhatnagar-LookingOutwards-1

by sankalp @ 1:49 am

Project 1:

font-face from Andy Clymer on Vimeo.

Essentially, this project uses face-tracking software like FaceOSC to tilt, recolor, and increase the stroke weight of a given letterform (particularly, the letter “a”). As a BSA student concentrating in Communication Design, I find this type of thing extraordinarily inspiring! The possibilities and implications of using a designer’s expressions to control the design elements of typography are immense and far-reaching. If I had to add anything, I would add the ability to resize the letter and place it in the background, so that a user could edit letters and layer them, effectively spelling out words in various alterations of the original typeface.

 

 

Project 2:

My other concentration through the BSA program is in Mathematical Sciences. As a math major, I’ve often been asked “so, Math, huh? What do you plan to do with that?” Well now, I can reference this awesome information design chart that depicts the annual salary of various forms of employment in relation to how much math each required. This type of thing is not only interesting, but really motivational to me. As this weird sort of hybrid of two fields, my major really urges me to use information design like this to explore some common perceptions of a person who majors in just math. If I had to change anything, I’d better explain the bottom axis. Right now, I can’t figure out the use of the units on the bottom axis. I mean, I get what it’s attempting to depict, but I’d at least include a blurb about what the axis does.

Project 3.

found over here at visualcomplexity.com

Outside of school, I’m a huge fan of Rap & Hip-Hop. This image is a collective visualization of the distribution of rapper names based on the ideas, objects, or titles from which each rapper derived his or her name. I could honestly look at this thing for hours on end. It’s really inspiring to see design and information visualization come together to form something that the public generally doesn’t know or care about. If I had to add anything, I’d see if I could tie in each rapper’s birth date to see if any interesting facts emerged from a timeline of rap-name choices.

Jonathan – Face OSC

by jonathan @ 1:48 am

I wanted to have a little fun with this part of Project 1. I took a quick picture of my friend Ethan and used his image as the little avatar in my game. My aim with this program was just to get a bit familiar with how OSC functions, as well as how the FaceOSC library works. The object of the game is to catch the little balls in Ethan’s mouth, controlled by the movements of your own mouth. Though this interaction is nothing special, my mental image of people’s opening and closing mouths while sitting at their computers, completely engrossed in the game, was especially comical, which in the end was a good enough reason for me to make it. Overall the face tracking was surprisingly accurate and held calibration pretty easily. I had expected many more hiccups along the way, but was pleasantly surprised not to run into anything I couldn’t easily troubleshoot myself. What I’m looking forward to is continuing to challenge myself and exploring various modes of interaction.

import oscP5.*;
import netP5.*;
 
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();
 
OscP5 oscP5;
 
int found, x, y, ellipseSize, score;
float mouthHeight, mouthWidth, totalMouthHeightB, totalMouthHeightT;
PImage eT, eB;
 
void setup() {
  size(640, 480);
  background(255);
 
  eT = loadImage("ethantop.jpg");
  eB = loadImage("ethanbottom.jpg");
  x = 0;
  y = int(random(1, 10));
  score = 0;
  ellipseSize = int(random(20, 50));
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}
 
void draw() {
  background(255);
  smooth();
  totalMouthHeightB = 22+mouthHeight;
  totalMouthHeightT = -mouthHeight*4;
 
  flyingObjects();
  eating();
}
 
void eating() {
  if (found > 0) {
    //draw image
    image(eB, 0+posePosition.x, posePosition.y-100+totalMouthHeightB, eB.width/12, eB.height/12);
    image(eT, 8+posePosition.x, posePosition.y-100+totalMouthHeightT, eT.width/12, eT.height/12);
  }
}
 
void flyingObjects() {
  x = x + int(random(1, 10));
  fill(random(255), random(255), random(255));
  noStroke();
  //println(y);
  ellipseMode(CENTER);
  //draw balls
  ellipse(x, y, ellipseSize, ellipseSize);
  //check for border
  if (x > width) {
    x = 0;
    y = int(random(height/2));
    ellipseSize = int(random(20, 30));
  }
  //check for hit (reconstructed: a "<" comparison here was eaten by the blog's HTML)
  if (y > posePosition.y-100+totalMouthHeightT && y < posePosition.y-100+totalMouthHeightB
      && x > posePosition.x - 1 && x < posePosition.x + 2 ) {
    x = 0;
    score = score+1;
    y = int(random(height/2));
    ellipseSize = int(random(20, 30));
    background (255, 0, 0);
    //println("hit");
  }
 
  //score
  textSize(100);
  fill(0);
  text(score, width-110, 100);
  println(score);
}
 
public void found(int i) {
  println("found: " + i);
  found = i;
}
 
public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}
 
public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}
 
public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

BillyKeyes-13-9-65

by Billy @ 1:23 am

Press SPACE to generate a new image.

/**
 * 13/9/65 on 23/1/12 
 *
 * Interactive Art and Computational Design
 * Billy Keyes, 2012
 * 
 * A generator of images similar to "13/9/65 Nr. 2" by Frieder Nake
 */
 
final int NUM_LINES = 9;
final int SEGMENTS = 6;
 
ArrayList<float[][]> lines;
 
void setup() {
  size(650, 650);
  smooth();
  noLoop();
 
  stroke(0);
  strokeWeight(1.25);
 
  lines = new ArrayList<float[][]>(NUM_LINES + 2);
}
 
void draw() {
  background(240);
 
  // Draw horizontal lines
  lines.add(drawSegmentedLine(SEGMENTS, 50, 0, 0));
  for (int i = 1; i < NUM_LINES; i++) {
    lines.add(drawSegmentedLine(SEGMENTS, 50, i * height / NUM_LINES, 40)); 
  }
  lines.add(drawSegmentedLine(SEGMENTS, 50, height, 0));
 
  // Draw vertical "connecting" lines
  for (int i = 0; i < lines.size() - 1; i++) {
    float[][] t = lines.get(i);
    float[][] b = lines.get(i+1);
    // Prefer higher numbers of connections between two given rows
    int connections = int(sqrt(random(1, 49)));
    for (int j = 0; j < connections; j++) {
      // Choose a segment to connect
      int seg = int(random(SEGMENTS));
      if (random(1) < 0.5) {
        drawStraightConnectingLines(new float[][]{t[seg], t[seg+1]}, 
                                    new float[][]{b[seg], b[seg+1]},
                                    int(random(3, 9)), random(0.2, 0.6), random(0.25, 1.0));
      } else {
        drawDiagConnectingLines(new float[][]{t[seg], t[seg+1]}, 
                                new float[][]{b[seg], b[seg+1]},
                                int(random(3, 9)), random(0.2, 0.6));
      }
    }
  }
 
  // Draw circles
  ellipseMode(CENTER);
  noFill();
  for (int i = 0; i < NUM_LINES; i++) {
    for (int j = 0; j < SEGMENTS; j++) {
      if (random(1) < 0.1) {
         float hf = (float) height;
         float wf = (float) width;
         float side = sqrt(random(16, sq(hf / (NUM_LINES - 3))));
         ellipse(wobble(wf / (SEGMENTS * 2) + j * wf / SEGMENTS, wf / SEGMENTS),
                 wobble(hf / (NUM_LINES * 2) + i * hf / NUM_LINES, hf / NUM_LINES),
                 side, side);
      }
    }
  }
}
 
 
/**
 * @param segs  The number of segments to draw
 * @param dx    The amount of x variation in segment endpoints
 * @param y     The base y coordinate of segment endpoints
 * @param dy    The amount of y variation in segment endpoints
 */
float[][] drawSegmentedLine(int segs, float dx, float y, float dy) {
  float[][] points = new float[segs + 1][2];
  points[0][0] = 0;
  points[0][1] = wobble(y, dy);
 
  for (int i = 1; i <= segs; i++) {
    points[i][0] = (i == segs) ? width : wobble(i * width / segs, dx);
    points[i][1] = wobble(y, dy);
    line(points[i - 1][0], points[i - 1][1], points[i][0], points[i][1]);
  }
  return points;
}
 
 
/**
 * @param top       The top set of segments
 * @param bottom    The bottom set of segments
 * @param density   Approximately the number of clusters to draw
 * @param cluster   Influences how many lines are in each cluster (greater than 0.25)
 * @param spread    How much of the segment is filled with lines (between 0.0 and 1.0)
 */
void drawStraightConnectingLines(float[][] top, float[][] bottom, int density, float cluster, float spread) {
  for (int i = 0; i < top.length; i += 2) {
    float xl = max(top[i][0], bottom[i][0]);
    float xr = min(top[i+1][0], bottom[i+1][0]);
    float diff = spread * (xr - xl);
    float base = random(xl, xr - diff);
 
    density = density + int(random(0, 2));
    for (int j = 1; j < density; j++) {
      for (int k = 0; k < cluster * 4; k++) {
        float x = base + wobble(j * diff / density, cluster * diff / density);
        float yt = ylerp(top, i, x);
        float yb = ylerp(bottom, i, x);
        line(x, yt, x, yb);
      }
    }
  }
}
 
 
/**
 * @param top       The top set of segments
 * @param bottom    The bottom set of segments
 * @param density   Approximately the number of clusters to draw
 * @param cluster   Influences how many lines are in each cluster (greater than 0.25)
 */
void drawDiagConnectingLines(float[][] top, float[][] bottom, int density, float cluster) {
  for (int i = 0; i < top.length; i += 2) {
    float[][] src, dest;
    if (random(1) < 0.5) {
      src = top;
      dest = bottom;
    } else {
      src = bottom;
      dest = top;
    }
 
    float diff = src[i+1][0] - src[i][0];
    density = density + int(random(0, 2));
    for (int j = 1; j < density; j++) {
      for (int k = 0; k < cluster * 4; k++) {
        float xs = src[i][0] + wobble(j * diff / density, cluster * diff / (density + 1));
        float xd = random(dest[i][0], dest[i+1][0]);
        float ys = ylerp(src, i, xs);
        float yd = ylerp(dest, i, xd);
        line(xs, ys, xd, yd);
      }
    }
  }
}
 
 
/**
 * Produces a random value at most dv/2 away from the given value.
 */
float wobble(float v, float dv) {
  return v + random(-dv / 2, dv / 2);
}
 
 
/**
 * Linear interpolation to find the y coordinate on the segment for the
 * given x coordinate.
*/ 
float ylerp(float[][] points, int i, float x) {
  return lerp(points[i][1], points[i+1][1], (x - points[i][0]) / (points[i+1][0] - points[i][0])); 
}
 
 
void keyPressed() {
  if (key == ' ') {
    lines.clear();
    redraw();
  }
}

Download (PDF, 17KB)

KaushalAgrawal – LookingOutwards – 1

by kaushal @ 1:12 am

Augmented Shadow

[youtube=https://www.youtube.com/watch?v=0arZMuPK58w]
The project was created by Joon Y. Moon; it uses light and boxes to create shadows and portrays life forms around them. The cube blocks cast distorted shadows on the tabletop surface that look like houses. Other elements, such as trees, birds, and people, are projected onto the surface. The design exhibits a form of life, where people move toward light sources and return to their houses to light them up. The project may be just an art form, but it could be used to do other interesting things. The idea of using shadows is very compelling and makes me wonder what more we can do with shadows and light forms.

Project Cascade

[youtube=https://www.youtube.com/watch?v=yQBOF7XeCE0]
Project Cascade was initiated by Mark Hansen at the NYTimes R&D lab. The project visualizes the spread of a message, such as a tweet, over social media: it captures how the spread of a tweet takes over the social space and who is involved. The project’s maps beautifully capture the complex spread, making it intuitive. The visualizations can be drilled down or rolled up to see key components, which makes it even more interesting. The idea could also be used to see how people interact in a more social setting like Facebook, to identify the key people in your network as opposed to just the friends on your list.

All Eyes on you

[vimeo=https://vimeo.com/33186969 width=”600″]
This is an installation known as the Britzpetermann shop installation. It uses a Kinect, openFrameworks, and an Arduino to project generated eyes of different radii onto a glass pane. The installation is programmed so that the eyes are always looking at the person walking outside. What is really interesting about the project is that it takes a simple concept, concentric circles, and programs it to attract attention. The rendering of the eyes is really strong, which adds to the overall effect. It would be interesting to see this project installed for advertising purposes, where the eyes look at you for a while and then divert back to the product displayed in the window.

Zack J-W Face_OSC

by zack @ 1:01 am

One of the first things I noticed about Face_OSC is that it’s awesome, and my wife thought you had to yell at it for it to work.  The next thing I noticed is that it doesn’t read facial hair well and, well, I have some.

So I thought it would be appropriate to use Face_OSC to deal with the one thing it hadn’t, err, faced at least in a metaphorical sense.  I came to find out that it was a known unknown.  Golan lamented that it dealt with beards in a maddening way and Dan Wilcox suggested with one frown that it was a computational nightmare.  So I hope programmers get a kick out of it.

One technical hurdle: using a camera for both the background image and face tracking meant two cameras (at least given my limited capacity as a coder).  Processing would not share the computer’s built-in camera with Face_OSC, and vice versa.  Having an external web-cam meant I was always going to have an offset between the two focal points, which is exacerbated by scale and movement.  Better code may be able to compensate for this.

Credit where credit is due:  Thanks to Dan Wilcox for bringing the Titty Tracker into the world.  Clearly an inspiration.

Video becomes worthless after 1:30…

[youtube https://www.youtube.com/watch?v=VToNr8IQTp4]

 

import processing.video.*;
Capture video;
 
import oscP5.*;
OscP5 oscP5;
 
PImage[] images = new PImage[6];
 
int b, found;
 
// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();
 
// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;
 
void setup()
{
  size(640,480);
  frameRate(2);
  imageMode(CENTER);
  b = 0;
 
  video = new Capture(this, width, height);
 
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
 
  for (int i = 0; i < images.length; i++) {
    images[i] = loadImage(i + ".png");   // make sure images "0.png" to "?.png" exist
  }
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, width/2, height/2);
    // background(0);

    if (found > 0)
  {
    scale(poseScale*.12);
    image(images[b], posePosition.x*poseScale - width*.75 ,posePosition.y*poseScale + height*.1);
  }
}
}
 
void keyPressed()
{
  if (keyPressed && key == ' ' )
  {
    b = (b + 1) % 6;
  }
  redraw();
}
 
// OSC CALLBACK FUNCTIONS
 
public void found(int i) {
  println("found: " + i);
  found = i;
}
 
public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}
 
public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}
 
public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}
 
public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}
 
public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}
 
public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}
 
public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}
 
public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}
 
public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}
 
public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}
 
public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}
 
// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if(m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}

CraigFahner-FaceOSC

by craig @ 1:00 am

[youtube=https://www.youtube.com/watch?v=iDYR3JWxDqo]

For the FaceOSC project, I decided to focus on one gesture in particular: blinking eyes. I was thinking about Walter Murch’s book “In the Blink of an Eye”. Murch was a film editor. He argues that the blink of an eye is a means for our minds to delineate experience, to punctuate our perception. The job of a film editor, too, is to delineate experience, and it’s no surprise that film audiences have been observed to blink in tandem, synchronizing with the editor’s cuts. I thought it would be interesting to force my perception of experience on a viewer by projecting my blink into the space around me. I used FaceOSC, Max/MSP and Arduino to wire up the lights in my studio to turn off each time I blink. The effect is that the entire space goes dark whenever I blink. While this is barely perceptible to me, the viewer gets to experience the same delineations of time that I do. Perhaps blinks are like sneezes – maybe they are contagious.
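The blink trigger itself can be sketched as edge detection over FaceOSC's eye-openness stream. This is a hypothetical Python model, not Craig's actual Max/MSP patch, and the threshold value is an assumption:

```python
def detect_blinks(openness, threshold=2.0):
    """Return indices where the eye transitions from open to closed.

    `openness` is a list of eye-height readings like those FaceOSC sends
    on /gesture/eye/left; only the open->closed edge fires, so a long
    closure counts as a single blink.
    """
    blinks, was_open = [], True
    for i, value in enumerate(openness):
        is_open = value >= threshold
        if was_open and not is_open:
            blinks.append(i)  # this edge is where the lights would cut out
        was_open = is_open
    return blinks

print(detect_blinks([3.1, 3.0, 1.2, 1.1, 3.2, 3.0, 0.9]))  # [2, 6]
```

Firing only on the closing edge keeps the lights from flickering while the eye stays shut, which matters when the output is a relay on real room lighting.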

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2023 Interactive Art and Computational Design, Spring 2012 | powered by WordPress with Barecity