Ghost egg avatar clair

I wanted to create a simple avatar while learning how to manipulate FaceOSC. I also wanted to control how long the image stays on the screen, and exaggerating the movement of the mouth and eyebrows was another main goal. The expressions of my avatar don’t range too drastically; they mostly show varying intensities of anger. This was really fun to make.
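The “how long the image stays on the screen” effect comes from never fully clearing the canvas: instead of calling background() every frame, a very transparent rectangle is drawn over the previous frame so earlier marks fade out gradually. Here is a minimal sketch of just that trick, assuming nothing beyond core Processing; the color and alpha of 12 mirror the values in the full program below, and the mouse-driven ellipse is only a stand-in for the face-driven drawing.

// Minimal fading-trails sketch: a low-alpha rect each frame
// lets earlier drawing linger and slowly dissolve.
void setup() {
  size(640, 480);
  background(167, 41, 41);          // solid starting color
}

void draw() {
  noStroke();
  fill(167, 41, 41, 12);            // very transparent veil over the last frame
  rect(0, 0, width, height);        // older shapes fade instead of vanishing

  stroke(255);
  noFill();
  ellipse(mouseX, mouseY, 40, 40);  // stand-in for the face-driven drawing
}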

 

 

//ghost egg gonna fuck you up!!!!!!

import oscP5.*;
OscP5 oscP5;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

void setup() {
  size(640, 480);          // size() has to be the first call in setup()
  frameRate(30);
  background(167, 41, 41);
  fill(0, 12);
  rect(0, 0, width, height);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}
//ghost egg is gonna fuck you up

void draw() {
  // translucent veil over the previous frame so earlier drawing fades out slowly
  fill(167, 41, 41, 12);
  rect(0, 0, width * 2, height * 2);
  stroke(255);
  if (found > 0) {
    translate(posePosition.x, posePosition.y);
    scale(poseScale);
    noFill();
    ellipse(-20, eyeLeft * -9, 10, 8);                  // left eye
    ellipse(20, eyeRight * -9, 10, 8);                  // right eye
    ellipse(0, 10, mouthWidth * 1.5, mouthHeight * 5);  // mouth, height exaggerated
    ellipse(-5, eyeLeft - 11, 80, 100);                 // egg-shaped head
    line(-30, eyebrowLeft * -6, 0, -40);                // left eyebrow
    line(30, eyebrowLeft * -6, 0, -40);                 // right eyebrow

    rectMode(CENTER);
    fill(0);
  }
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {

  /* print the address pattern and the typetag of the received OscMessage */
  println("#received an osc message");
  println("Complete message: " + m);
  println(" addrpattern: " + m.addrPattern());
  println(" typetag: " + m.typetag());
  println(" arguments: " + m.arguments()[0].toString());

  if(m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}

Peek-a-boo

Rather than just letting my face puppeteer a “solid” virtual object, I wanted the motions of my face to create my puppet as the program ran. I also didn’t want the puppet to feel like a single mass, but more like a swarm, so I opted to build it out of particles. To create these particles, the user must blink or close their eyes, which means the user effectively only sees themselves as a mirrored, particle-based reflection after looking away from it. The particles are based on the particle system I created for my Thousand Line project, modified for circular motion.
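The spawning rule is essentially a per-frame threshold test on a FaceOSC gesture value: when the tracked value crosses a cutoff, a new particle gets pushed into an ArrayList. Below is a stripped-down sketch of that pattern, assuming a /gesture/eye/left plug and an illustrative cutoff of 2.5; the full program further down keys off its own threshold on the eyebrow values, so treat these numbers as placeholders rather than calibrated values.

// Sketch of the spawn-on-threshold idea: each frame, if the tracked
// gesture value crosses a cutoff, push a new particle into the list.
ArrayList<PVector> particles = new ArrayList<PVector>();
float eyeLeft = 3.0;           // would be updated by an oscP5 callback
float blinkThreshold = 2.5;    // illustrative cutoff, not a calibrated value

void setup() {
  size(640, 480);
}

void draw() {
  background(255);
  if (eyeLeft < blinkThreshold) {                  // eye nearly closed
    particles.add(new PVector(random(width), random(height)));
  }
  for (PVector p : particles) {
    ellipse(p.x, p.y, 4, 4);                       // stand-in for a ZoomLine
  }
  // without FaceOSC running, eyeLeft stays above the cutoff and nothing spawns
}

// in the real sketch this is wired up with
// oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left")
public void eyeLeftReceived(float f) {
  eyeLeft = f;
}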

Here’s the program!

//
// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
import oscP5.*;
OscP5 oscP5;

ArrayList<ZoomLine> zoomers;
ZoomLine guy;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

void setup() {
  size(640, 480,OPENGL);
  frameRate(30);
  zoomers = new ArrayList<ZoomLine>();
  
  guy = new ZoomLine(50,50,200,200,10);
  
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
}

void draw() {  
  background(255);
  stroke(0);
  makeZoomer();
  
  
  
  if(found > 0) {
    translate(posePosition.x, posePosition.y);
    rotateY(poseOrientation.y);
    rotateX(poseOrientation.x);
    rotateZ(poseOrientation.z);
    scale(poseScale);

    drawLines();
  }
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

public void eyeLeftReceived(float f) {
  println("eye left: " + f);
  eyeLeft = f;
}

public void eyeRightReceived(float f) {
  println("eye right: " + f);
  eyeRight = f;
}

public void eyebrowLeftReceived(float f) {
  println("eyebrow left: " + f);
  eyebrowLeft = f;
}

public void eyebrowRightReceived(float f) {
  println("eyebrow right: " + f);
  eyebrowRight = f;
}

public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}

public void nostrilsReceived(float f) {
  println("nostrils: " + f);
  nostrils = f;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  
//  /* print the address pattern and the typetag of the received OscMessage */
//  println("#received an osc message");
//  println("Complete message: "+m);
//  println(" addrpattern: "+m.addrPattern());
//  println(" typetag: "+m.typetag());
//  println(" arguments: "+m.arguments()[0].toString());
  
  if(m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}


void makeZoomer() {
  if (found > 0) {
      // spawn a new particle whenever both eyebrow values pass the threshold
      if ((eyebrowLeft > 8.8) && (eyebrowRight > 8.8)) {
          zoomers.add(new ZoomLine(random(width) / poseScale, random(height) / poseScale ,
                        random(width) / poseScale, random(height) / poseScale, int(random(10,20))));
      }
  }
}
  
void drawLines() {
  for (int i = 0; i < zoomers.size(); i++) {
    zoomers.get(i).drawMe();
    zoomers.get(i).update();
  }
  for (int j = 0; j < zoomers.size(); j++) {
    if (zoomers.get(j).aliveFor >= zoomers.get(j).lifeSpan) { 
       zoomers.remove(j);
    }
  }
}

class ZoomLine {
  Position start;
  Position lineLoc;
  Position lineEnd;
  Position goal;
  
  Position body;
  Position velocity;
  
  float lineLength;
  float angle;
  
  boolean moving;
  
  float noiseStart = random(100);
  float noiseChange = .007;
  
  float speed;
  
  float offSetX;
  float offSetY;

  float startOffX;
  float startOffY;
  
  int faceIndex;
  int posIndex;
  
  int index;
  
  float radius;
  float points;
  
  float partWidth;
  float partHeight;
  
  float bodyPart;
  
  int aliveFor;
  int nextTime = millis() + 1000;
  
  int rot;
  
  float lifeSpan;
  
  ArrayList<Position> posList;
  
  ZoomLine(float startX, float startY, float goalX, float goalY, int points) {
    
    this.start = new Position(startX, startY);
    this.goal = new Position(goalX, goalY);
    
    this.speed = 5;
   
    this.lineLoc = new Position(startX, startY);
    
    this.index = 0;
    
    this.lifeSpan = random(4,7);
    
    if (random(1) < .5) {
      this.rot = -1;
    } else {
      this.rot = 1;
    }
   
    this.points = points;
    
    this.bodyPart = (int)random(4);
    
    findBody();
    
    makePosList();
    updateAngle();
    updateLength();
    updateLineEnd();
    
    float largest = max(width,height) / poseScale;
    this.offSetX = random(-largest * .01,largest * .01);
    this.offSetY = random(-largest * .01,largest * .01);
  }
  
  //Finds a body to float around
  void findBody() {
    //Left Eye
    if (this.bodyPart == 0) { 
      this.radius = 20;
      this.partWidth = 20;
      this.partHeight = 10;
      this.body = new Position(-25, eyeLeft * -9);
    }
    // Right Eye 
    else if (this.bodyPart == 1) {
      this.radius = 20;
      this.partWidth = 20;
      this.partHeight = 10;
      this.body = new Position(25, eyeRight * -9);
    } 
    //Nose
    else if (this.bodyPart == 2) {
      this.radius = 10;
      this.partWidth = 10;
      this.partHeight = 7;
      this.body = new Position(0,nostrils * -1);
    } 
    // Mouth
    else {
      this.radius = mouthWidth * 2;
      this.partWidth = mouthWidth * 2;
      this.partHeight = mouthHeight * 2;
      this.body = new Position(0, 20);
    }
  }
  
  //Creates a list of positions around a given point (goalX and Y in this case)
  void makePosList(){
    this.posList = new ArrayList<Position>();
    float w = this.partWidth;
    float h = this.partHeight;
    for (int i = 0; i < this.points; i++) {
        float theta = i * (6.28 / this.points );
        this.posList.add(new Position(this.body.x + (w * cos(theta)), this.body.y + (h * sin(theta)) ) );
    }
  }
  
  //Updates speed of the line based on distance to the target position.   
  void updateSpeed() {
    float distance = dist(this.goal.x, this.goal.y,
                          this.start.x, this.start.y);
    this.speed = distance / this.lineLength;
  }
      
  //Updates the angle between the goal and lineLoc.
  void updateAngle() {
    float dx = this.goal.x - this.lineLoc.x;
    float dy = this.goal.y - this.lineLoc.y;
    
    this.angle = atan2(dy,dx);
  }
  
  //Semi randomly determines length based on the start, goal and a noise value. 
  void updateLength() {
    float distance = dist(this.goal.x, this.goal.y,
                          this.start.x, this.start.y);
    
    this.noiseStart += this.noiseChange;
    
    this.lineLength = (distance * .4) * .5;
  }
  
  //Updates the end position of the line bases on the angle between the location and goal
  //as well as the length of the line. 
  void updateLineEnd() {
    float yChange = sin(this.angle) * this.lineLength;
    float xChange = cos(this.angle) * this.lineLength;
    this.lineEnd = new Position(this.lineLoc.x + xChange, this.lineLoc.y + yChange);
  }
  
  //Draws the line on the screen. 
  void drawMe() {
    strokeCap(ROUND);
    strokeWeight(4);
    line(this.lineLoc.x + offSetX, this.lineLoc.y + offSetY, 
         this.lineEnd.x + offSetX, this.lineEnd.y + offSetY);
  } 
  
  //Updates goal position.
  void updateGoal() {
    this.goal = this.posList.get(this.index);    
  }
  
  //Updates all variables for movement and drawing. 
  void update() {
    this.findBody();
    this.makePosList();
    this.updateGoal();
    this.updateAngle();
    this.updateLineEnd();
    this.move();
    if (millis() >= this.nextTime) {
      this.aliveFor += 1;
      this.nextTime = this.nextTime + 1000;
    }
  }
  
  //Moves line between lineLoc and goal. Resets Line if it hits goal.
  void move() {
    float distance = dist(this.lineLoc.x,this.lineLoc.y,this.goal.x, this.goal.y);
    this.lineLoc.x += this.speed*((this.goal.x - this.lineLoc.x)/distance);
    this.lineLoc.y += this.speed*((this.goal.y - this.lineLoc.y) / distance);
    
    if (dist(this.lineLoc.x, this.lineLoc.y,this.goal.x, this.goal.y) <= (width * .02)) {
          this.index = abs(this.index + 1) % (int)this.points;
          this.start.x = this.goal.x;
          this.start.y = this.goal.y;
          this.updateGoal();
          this.updateLength();
          this.updateSpeed();

    }
  }
}

//Returns a position vector with the input x, y.
class Position {
    float x;
    float y;
    Position(float x, float y) {
      this.x = x;
      this.y = y;
    }
}

//Returns a random Position in a rectangle drawn from startX, startY, to endX, endY.
class RandomPosition extends Position {
  RandomPosition(float startX, float endX, float startY, float endY) {
    super(random(startX, endX), random(startY, endY));
  }
}

Looking Outwards

Project that inspired me: “The Gary” by V Squared Labs

I’m very interested in audio-visualization, so I was really excited to stumble upon one of V Squared Labs’ new projects, made for the DJ Dillon Francis. The video shows VSL’s process of designing and producing a DJ booth that will be used in Dillon Francis’s shows. It is inspiring to see a group of people collaborate to create an immersive and overwhelming concert experience. The intensity and mind-blowing factor of visualizations at concerts continues to increase at every show I attend, and I want to be a person who keeps enhancing audio/visual experiences for concert-goers.

http://vsquaredlabs.com/project/dillon-francis-the-gary/

Project that surprised me: Infected Mushroom 3D experience – V Squared Labs

The immersive experience mentioned in the previous project is found wholeheartedly in V Squared Labs’ stage design for Infected Mushroom. The 3D visual experience is incredibly powerful: projection mapping is designed around two DJ ‘orbs’ on either side of the stage. The visuals are absolutely mind-blowing and put concert-goers in a completely foreign environment compared to a concert from 4-6 years ago. The video has some nice excerpts at (26:00), (1:30), and (7:00).

 

Project that has potential: Mixology by Rob Goodson

This project also explores audio/visual relationships, but in a different context than the last two projects because of its interactive component. The piece is projection-mapped onto a handmade surface, with controls programmed with an Arduino. The viewer can use the controls to change the colors and sounds of the installation. I can see a lot of different iterations of this project, with possible explorations of the controller component and the visualizations.

Face OSC

Since I was little, one of my favorite characters has been Totoro from Hayao Miyazaki’s movie My Neighbour Totoro. The way Totoro moved around in the movie was so funny to me that I tried to make him move with my face and change in size and shape when I moved my mouth and eyes. I think it was quite successful, since I could move him around and change his size, but I had trouble resizing him because he was too big when I first drew him. Another big problem was that the program wouldn’t recognise my face properly: every time I moved my mouth or eyes it would lose me and stop following me effectively. (That’s why I couldn’t embed a recording in this post; the program would lose my face every two seconds and it would be cut off.)
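One possible way to soften tracking glitches like this, which the sketch below does not do, would be to ease each incoming FaceOSC value toward its newest reading with lerp(), so a brief dropout nudges the drawing instead of snapping it. Here is a minimal, self-contained illustration; the 0.3 easing factor and the mouth-height mapping are assumptions for the example, not values from the Totoro sketch.

// Hedged sketch: smoothing a FaceOSC gesture value with lerp() so a
// brief tracking glitch nudges the drawing instead of snapping it.
float mouthHeightRaw = 0;        // latest value from the OSC callback
float mouthHeightSmoothed = 0;   // eased value actually used for drawing

void setup() {
  size(800, 650);
}

void draw() {
  background(255);
  // ease 30% of the way toward the newest reading each frame
  mouthHeightSmoothed = lerp(mouthHeightSmoothed, mouthHeightRaw, 0.3);
  ellipse(width/2, height/2, 100, 100 + mouthHeightSmoothed * 5);
}

// in the full sketch this would be plugged to "/gesture/mouth/height"
public void mouthHeightReceived(float h) {
  mouthHeightRaw = h;
}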

import oscP5.*;
OscP5 oscP5;

// num faces found
int found;
// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float leftEyebrowHeight;
float rightEyebrowHeight;
float eyeLeftHeight;
float eyeRightHeight;
float nostrilHeight;
float jaw;

void setup() {
  size(800, 650);
  frameRate(30);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "poseOrientation", "/pose/orientation");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
  oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
  oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
  oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
  oscP5.plug(this, "jawReceived", "/gesture/jaw");
}

void draw() {  
  background(255);
  stroke(0);
  
  if(found > 0) {
    translate(posePosition.x, posePosition.y);
    scale(poseScale*0.1);
    noFill();
    noStroke();
    fill(220);
    triangle(-50,30-leftEyebrowHeight*5, -75-leftEyebrowHeight*5,70, -30,140);
    triangle(50,30-rightEyebrowHeight*5, 75+rightEyebrowHeight*5,70, 30,140);
    
    beginShape();
    curveVertex(-130,160);
    curveVertex(0,80+mouthHeight*5);
    curveVertex(120,160);
    curveVertex(190+mouthWidth*5,500);
    curveVertex(0,620);
    curveVertex(-190-mouthWidth*5,500);
    curveVertex(-130,160);
    curveVertex(0,80+mouthHeight*5);
    curveVertex(120,160);
    endShape();
    
    fill(255);
    ellipse(-65,145,45+eyeLeftHeight*5,45+eyeLeftHeight*5);
    ellipse(65,145,45+eyeRightHeight*5,45+eyeRightHeight*5);
    ellipse(0,410, 300+mouthWidth*5,350+mouthHeight*5);
    fill(0);
    ellipse(-65,145,18+eyeLeftHeight*5,18+eyeLeftHeight*5);
    ellipse(65,145,18+eyeRightHeight*5,18+eyeRightHeight*5);
    
    triangle(0,175+nostrilHeight*5, 25,155+nostrilHeight*5, -25,155+nostrilHeight*5);
    stroke(0);
    strokeWeight(5);
    line(0,250, -15,260);
    line(0,250, 15,260);
    line(50,260, 35,270);
    line(50,260, 65,270);
    line(-50,260, -65,270);
    line(-50,260, -35,270);
    line(25,280, 10,290);
    line(25,280, 40,290);
    line(-25,280, -10,290);
    line(-25,280, -40,290);
    line(70,285, 85,295);
    line(70,285, 55,295);
    line(-70,285, -55,295);
    line(-70,285, -85,295);

  }
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}

public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y );
  posePosition.set(x, y, 0);
}

public void poseOrientation(float x, float y, float z) {
  println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
  poseOrientation.set(x, y, z);
}

public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}

public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}

public void jawReceived(float f) {
  println("jaw: " + f);
  jaw = f;
}

public void eyebrowLeftReceived(float h) {
  println("eyebrow left: " + h);
  leftEyebrowHeight = h;
}
 
public void eyebrowRightReceived(float h) {
  println("eyebrow right: " + h);
  rightEyebrowHeight = h;
}
 
public void eyeLeftReceived(float h) {
  println("eye left: " + h);
  eyeLeftHeight = h;
}
 
public void eyeRightReceived(float h) {
  println("eye right: " + h);
  eyeRightHeight = h;
}
 
public void nostrilsReceived(float h) {
  println("nostrils: " + h);
  nostrilHeight = h;
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  
  /* print the address pattern and the typetag of the received OscMessage */
  println("#received an osc message");
  println("Complete message: "+m);
  println(" addrpattern: "+m.addrPattern());
  println(" typetag: "+m.typetag());
  println(" arguments: "+m.arguments()[0].toString());
  
  if(m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}


Looking Outwards

What inspired ME

Smile TV from David Hedberg on Vimeo.

I really liked the concept and the execution of Smile TV, a piece by David Hedberg that is essentially a TV which only functions when the viewer is smiling. For one thing, I think that conceptually the simplicity of the piece really works to make it both endearing and interesting; even while watching the piece I found myself smiling. Personally, I really like artwork that is interactive in that sense, and I especially enjoyed watching people’s reactions as they used the piece. In addition, what I really admired is that it uses just one really simple function of FaceOSC: the one that tracks the movement of the lips. As a piece, I think it is really successful in the way it makes the viewer an essential part of it. Having a single chair also makes it a really intimate piece, just between the viewer and the TV. I’m of two minds about this: I do like how intimate it makes the experience, but because watching TV is such a communal activity for me (I always watch with my family or friends), I’d like to see a version that only works when everyone watching is smiling. Overall, though, I was just really drawn to the piece and I really like it as a whole.
https://www.creativeapplications.net/maxmsp/smile-tv-works-only-when-you-smile/

What surprised Me

SoundAffects: Behind the Scenes from Tellart on Vimeo.

The first thing one might come to know about me is that I’m not really musically inclined. When I say this I don’t mean I don’t enjoy listening to music, but that when it comes to notes or levels I’m essentially tone deaf, which makes me especially clueless about the idea of mixing sounds together. SoundAffects, a collaboration between Parsons The New School for Design, Tellart, and mono, is a piece I wouldn’t have thought of in my wildest dreams. It essentially translates the sounds of street activity into music that people passing by can listen to on their headphones. A data visualization online also accompanies the music, translating the street activity into visuals. It’s a really interesting idea, and I even liked some of the sounds compiled by it. The only thing I would really critique is the presentation of the piece. When you are listening to the sounds, you are looking at a white wall that says SoundAffects; you’re not looking at the city or the environment, which I feel is an essential part of the piece, seeing as it wishes to change your perception of your environment. I think if the wall were in a position where you could look around better, or if it were made out of glass so you could see through it, it would create visuals for the piece rather than having to rely on a computer to do so.
https://www.creativeapplications.net/maxmsp/soundaffects-maxmsp-sound-arduino/

What could do better

Voice Lessons from AudioCookbook.org on Vimeo.

John Keston’s piece Voice Lessons is a visual and digital piece that reacts to the touch of the user as they move up, down, left, and right on the screen. As a whole, I really like the way the voice changes seamlessly according to the touch. The visuals also, surprisingly, transition seamlessly along with the shift in tone. The sounds made as the user moves along the screen are really eerie and unsettling. With that said, I don’t know why I find this piece a bit disappointing. I suppose I expected it to interact more with the viewer than it does at this point, and I think that this surprisingly low amount of interactivity is what makes it disappointing. I don’t know if this is technologically viable, but if the viewer’s voice could play a role in the piece, I think that would be really interesting. I also find the position in which his father (the instructor) begins to be odd: he starts out looking away from you but turns to face you when you touch him. I think just having him continuously face the viewer might be a better approach.

Looking Outwards Max MSP

Something that inspired me

Mew is an interactive sound piece in which a mounted strip of fur responds to the viewer’s position and touch. If you approach the piece, it emits a purring noise, and if you touch it, distorted cat sounds are emitted depending on the pressure and direction of your hand. I have an immense affinity for animals, taxidermy, and our interactions with pets and creatures, and despite its visual and interactive simplicity, this piece very much feeds into that area of thought for me. I do wish, however, that the sounds emitted upon being touched were a bit more refined and less chaotic.

Something that disappointed me

Speaking Angle fell somewhat flat in my opinion. I enjoyed the idea of using a fixed architectural space to form an interaction, especially with the use of words, but the fact that the words did not change, and that their movement was accompanied by a very superfluous “wooshing” sound, left me wanting something more. I think this piece would have turned out much stronger if the viewer’s movement generated different phrases or words, instead of the same phrases moving up and down a plane.

Something that surprised me

Musical Skin is an interactive installation where two people interact with one another through touch, which directly creates a sound played by a water-filled glass. I found something very sweet in this concept, as well as in its execution. I enjoyed the fact that people were the medium as well as the creators of the sound. In addition, the installation creates a reason for people to touch, something that, as we grow older, we tend to forget is fundamental to our social natures, as well as to brain development.

Looking Outwards

Artwork that Inspired Me

I couldn’t choose just one artwork that inspired me, so I’ll introduce the studio behind the works that did: Kimchi and Chips, an interactive installation art studio located in Seoul. Most of their pieces are built using Max/MSP/Jitter. Their art combines code, form, material, concept, and mechanism, with the aim of creating an emulsion of imagined reality within the real world and developing natural interactions between people, nature, and digital networks. Their works focus on the smallest parts of nature, such as light, waves, and small movements.

Kimchi and Chips

One of the works I found especially interesting is Lunar Surface. The concept, the fact that they were drawing space in space, was really interesting to me. They used a computer and lights to project it into physical space, and our eyes read it as a surface because of afterimages. This is fully captured by the long-exposure photographs, but seeing it with the naked eye makes the work stand out even more to me. The piece was inspired by Murakami Haruki’s book 1Q84, and since I had liked the book a lot, the art was more appealing and interesting to me, and I could read what they were trying to say through it.

Project that surprised me

Brilliant Cube by Jin Yo Mok surprised me because the work is comparatively simple in concept and function but gives a lot to think about. It is a six-meter square cube of LED lights that expresses the brilliant moments in our lives through patterns. The theme “Live Brilliant” is simple but, to my mind, pure. It is installed in one of the busiest and most crowded places in Seoul, and most people might not recognise it as an artwork, but many take the time to appreciate it rather than go on their way. I think the placement of this work is really effective, since it would not work as well in a museum setting; by placing it in a busy district, it reaches its full potential.

Jin Yo Mok

Project that could have been great

Time of Doubles by Artificial Nature, a group made up of the two artists Haru Ji and Graham Wakefield, was inspiring, but I personally think it had the potential to be a better and more interesting project. They create an artificial nature, a fluid space that people can interact with, which actually sustains itself: it reproduces, grows, and keeps itself going. However, it simulates nature, the real world, inside something that is not the real world, inside a screen. I thought this made it similar to a zoo or an aquarium, more of a one-time viewing event than something that provides a chance to think about nature and its mechanisms. I think it might have been better if the viewer could interact with it and modify the lives behind the screen.

Artificial Nature

Looking Outwards (MaxMSP)

Tripwire (2011)

Tripwire by Jean-Michel Albert and Ashley Fure

Tripwire is a set of 18 or 24 strings (depending on the installation) attached to motors. By spinning the motors at certain frequencies and projecting light onto them, a captivating audiovisual display is created. I specifically like that the colors projected onto the strings create the appearance of volumetric light, similar to light projected into fog but with much more defined boundaries. I also find it interesting that the audience influences the piece by moving in front of it; this is done using infrared sensors that track the presence of viewers.

I wonder, though, what this project would have been like if the strings were moved back and forth rather than rotated. I think there would have been a possibility of giving the piece more tone, by vibrating the strings at appropriate frequencies rather than just whirring.

Tele-Present Water

 

This is a project in which the artist David Bowen simulates the movement of water in a grid sculpture controlled by Max/MSP, Arduinos, and some DC motors. It collects data on the actual topology of the ocean surface near a buoy station in Alaska, scales it down, and displays it in the grid.

I liked this installation because I have always been fascinated by the movement of water. I find it interesting that the artist chose to simulate the fluid movement of water in a rigid grid; the installation reminds me of a wireframe of video game terrain. However, the sculpture does appear slightly jittery, and I think I would enjoy seeing it at a larger scale, moving more “fluidly”.

Electric Stimulus

Daito Manabe has created a piece in which he associates facial expressions with sounds by forcing an expression onto another person’s face using electrodes, which are activated differently depending on which finger is used to touch the other person. I’m not sure, but this seems kind of dangerous; it is, however, very interesting. I noticed that each finger is associated with a sound and a facial expression, but the spot where the other person is touched is not. I think the project would be more interesting if the “instrument” reacted differently depending on where it was touched. It would also be interesting to see this paired with swept-frequency capacitive sensing rather than the single-frequency, binary capacitive sensing they are using. For an example of swept-frequency capacitive sensing, please see Touché.

Looking Outwards

Wow, ok, so when Golan brought up my synesthesia in class the other day, a couple of folks were asking me about it, which always happens, but I’m never super good at explaining it beyond “song x is colors a, b, and c” or something. Mainly the problem is that I can’t show people what it looks like inside my head when I’m listening to stuff.
But this.
This.
This is what music feels like.

I’m not exactly clear on what’s going on beyond the fact that the video is being distorted in response to the sound (I tried to dig deeper into the sources, but several of them are in Japanese, and as much of that as I can speak, I still can’t read it super well). From what I can tell from watching the video, the percussion seems to have the most visible effect on the distortion of the image, but I don’t know enough about music or programming to say whether that’s a music thing or a deliberate programming choice.

Either way it’s rad and fun to look at.

https://www.creativeapplications.net/sound/brdg004-hazcauch-%C3%97-vokoi-maxmsp-sound/

Looking Outwards MAX MSP

SOMETHING THAT INSPIRED ME
Fragile Territories by Robert Henke

In Fragile Territories, Robert Henke uses four fast moving lasers to create a network of flowing light on a 30 meter wide wall. Globules of light quickly flow through paths which themselves bend and twist according to some incomprehensible rules. The piece inspired me because of its new take on the familiar idea of particle systems. The particle systems work and move in a more geometric fashion, unconcerned with “flocking” or any sort of attraction behavior. Instead, the laser particles serve only to render the true focus of the piece: various interweaving networks of curves and lines which whip about the wall. The whole piece gives me a very distinct feeling that I am watching data move through execution threads within a program structure, but I get that feeling all the time so Fragile Territories is not unique in that sense. Nevertheless, it remains quite inspiring in its subtle use of particle systems.

SOMETHING THAT DISAPPOINTED ME
GPS Beatmap by Face Removal Services

The GPS Beatmap lays the framework for a beautifully conceptual way to explore musical combinations. Exploring a spatial map of music, or any other multi-faceted subject matter, engages the user and provides a new perspective on said subject matter. What disappoints me is how little the GPS Beatmap builds upon this framework. The music map is static. The user has no way to specify genre, nor do they have control over the shape of the map. A feature where one ‘breakbeat’ remains at medium volume while the user moves throughout the map would allow a more cohesive mash-up than attaching every loop to a geographic location. Overall, the GPS Beatmap is a wonderful concept ripe with opportunity which Face Removal Services takes little advantage of.

SOMETHING THAT SURPRISED ME
Chapter 1: The Discovery by Félix Luque

The Discovery surprised me on multiple fronts. Firstly, Luque takes control of the user experience before they even see his sculpture. An entire room exists solely to condition the user’s perception of the sculpture. Videos play of fictitious discovery scenarios in which the sculpture is stumbled upon within mysterious environments. This preps the user for their own ‘discovery’ of the sculpture. I never thought of dedicating an entire space to building up expectations for interaction with another space.
Luque’s sculpture’s behavior surprised me as well. The sculpture’s lighting patterns, a sort of pseudo-communication, change based on the number of people interacting with it at a given time. If one person stands before the sculpture, it emits distinct and directional patterns to that specific person. As more people surround it, the lighting patterns become more hectic and disorganized. When fully surrounded, the sculpture flashes all of its lights on and off in what I interpreted as a display of fear.