I call my FaceOSC project the Symphony of Nature. I use the FaceOSC app to capture the movement of a face in front of the computer and send it to a FaceOSC receiver built in openFrameworks. In the receiver, I use the mouth's height and width and the eyebrows' positions to trigger different animal sounds. For example, if the mouth's height exceeds a threshold, it plays a cow sound; there are also bird and pig sounds. So you can move your mouth and eyes to make an animal-sound symphony.
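Purely as a sketch (not the author's actual code), the threshold logic described above might reduce to a small decision function like this, where the mouth values FaceOSC sends are compared against hand-tuned cutoffs. All names and numbers here are illustrative assumptions.

```java
public class AnimalTrigger {
    // Hand-tuned cutoffs (illustrative values, not the author's calibration)
    static final float COW_MOUTH_HEIGHT = 4.0f;  // wide-open mouth -> cow
    static final float BIRD_MOUTH_WIDTH = 16.0f; // wide smile -> bird

    // Decide which animal sound to trigger from the current mouth metrics.
    static String pickSound(float mouthHeight, float mouthWidth) {
        if (mouthHeight > COW_MOUTH_HEIGHT) return "cow";
        if (mouthWidth > BIRD_MOUTH_WIDTH)  return "bird";
        return "none"; // mouth at rest: stay silent
    }

    public static void main(String[] args) {
        System.out.println(pickSound(5.0f, 10.0f)); // open mouth
        System.out.println(pickSound(1.0f, 20.0f)); // wide smile
        System.out.println(pickSound(1.0f, 10.0f)); // neutral face
    }
}
```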
Being a citizen of the Internet is no fun if you don’t look like one. BUT NOW YOU CAN!
Introducing TrollFaceOSC. TrollFaceOSC checks how wide you are smiling, how raised your eyebrows are, and even whether you are mad. Given these values, it maps your expression to the appropriate face (at the moment there are four, and counting).
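As a sketch of what such a mapping could look like (the thresholds, face indices, and function name are assumptions, not the actual TrollFaceOSC code):

```java
public class FaceMapper {
    // Map FaceOSC-style expression metrics to one of four face images:
    // 0 = neutral, 1 = big grin, 2 = raised brows, 3 = rage.
    // Indices and cutoff values are illustrative only.
    static int pickFace(float smileWidth, float browRaise, boolean isMad) {
        if (isMad) return 3;
        if (smileWidth > 16.0f) return 1;
        if (browRaise > 8.0f)   return 2;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(pickFace(18.0f, 0.0f, false)); // wide smile
        System.out.println(pickFace(10.0f, 9.0f, false)); // raised brows
        System.out.println(pickFace(10.0f, 0.0f, true));  // mad
    }
}
```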
So, fairly basic stuff as far as programming goes. A lot of the code was inspired by the Processing example given by Dan (FaceOSCSmiley, I believe). After I got the basics coded, I spent some time calibrating it to my face, and then voila.
In the future I hope to add more faces, and make the app more general. Imagine – you can take any youtube video and add a troll face to it! SUPER PRODUCTIVE!
I made this on my desktop Mac mini, which doesn't have a camera, so I used a Kinect instead. The idea is the same. You can adjust the tilt using the arrow keys, and the depth threshold using the "a", "s", "z" and "x" keys.
//TEXT RAIN KINECT//
//IACD ASSIGNMENT//
//CAN (JOHN) OZBAY//
//2013//
import org.openkinect.*;
import org.openkinect.processing.*;
Kinect kinect; //kinect settings
int kWidth = 640;
int kHeight = 480;
int kAngle = 0;
PImage depthImg; //depth settings
int minDepth = 60;
int maxDepth = 660;
String alphabet = "abcdefghijklmnopqrstuvwxyz";
String [] letters = new String [100];
float [] x = new float [100];
float [] y = new float [100];
boolean [] falling = new boolean[100];
int savedTime; //when the function was last called
int interval = 300; //how often we want to call the function(milis)
int bThreshold = 70;
void setup () {
smooth();
noStroke();
size(kWidth, kHeight); //640x480, matching the Kinect's depth image
fill(0);
savedTime = millis();
kinect = new Kinect(this);
kinect.start();
kinect.enableDepth(true);
kinect.tilt(kAngle);
depthImg = new PImage(kWidth, kHeight);
}
void draw () {
//depth threshold
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < kWidth*kHeight; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImg.pixels[i] = 0xFFFFFFFF;
}
else {
depthImg.pixels[i] = 0;
}
}
// draw threshold image
depthImg.updatePixels();
image(depthImg, 0, 0);
stroke(255,0,0);
line(10,434,160,434);
fill(255, 0, 0);
text("Tilt: " + kAngle, 10, 450);
text("Depth Thresh: [" + minDepth + ", " + maxDepth + "]", 10, 466);
loadPixels(); //refresh the sketch's pixel array before the collision checks
for (int i=0; i < letters.length; i++) {
if (falling[i]) {
if (!isPixelDark(int(x[i]), int(y[i]))) {
moveLetter(i); //keep falling while the pixel below is background
}
drawLetter(letters[i], x[i], y[i], i);
}
}
//spawn a new letter every "interval" milliseconds
if (millis() > savedTime+interval) {
makeLetter();
savedTime = millis();
}
}
//name speaks for itself
void moveLetter (int tempI) {
y[tempI]++;
if (y[tempI] > height) {
falling[tempI] = false;
}
}
//self explanatory
void drawLetter (String tempS, float tempX, float tempY, int tempI) {
fill(0,200,255);
text(tempS, tempX, tempY);
}
//Making of "The Letter"
void makeLetter () {
boolean madeLetter = false;
int randomNumber = int(random(alphabet.length()));
String tempChar = alphabet.substring(randomNumber, randomNumber+1);
//find a free slot and start the new letter at a random x along the top
for (int i=0; i < letters.length && !madeLetter; i++) {
if (!falling[i]) {
letters[i] = tempChar;
x[i] = random(width);
y[i] = 0;
falling[i] = true;
madeLetter = true;
}
}
}
//true when the sketch pixel at (tempX, tempY) lies on the bright silhouette
boolean isPixelDark (int tempX, int tempY) {
boolean isPixelDark = false;
int loc = tempX + tempY*width;
if (loc > 0 && loc < pixels.length) {
//get its brightness
float b = brightness(pixels[loc]);
if (b > bThreshold) {
isPixelDark = true;
}
}
return isPixelDark;
}
void keyPressed() {
if (key == CODED) {
//UP & DOWN ARROW KEYS, FOR KINECT TILT
if (keyCode == UP) {
kAngle++;
}
else if (keyCode == DOWN) {
kAngle--;
}
kAngle = constrain(kAngle, 0, 30);
kinect.tilt(kAngle);
}
//A, S, Z, X - SET min Depth and max Depth
else if (key == 'a') {
minDepth = constrain(minDepth+20, 0, maxDepth);
}
else if (key == 's') {
minDepth = constrain(minDepth-20, 0, maxDepth);
}
else if (key == 'z') {
maxDepth = constrain(maxDepth+20, minDepth, 2047);
}
else if (key =='x') {
maxDepth = constrain(maxDepth-20, minDepth, 2047);
}
}
//don't forget to stop
void stop() {
kinect.quit();
super.stop();
}
First, I use the camera to capture the user's video, and use the code to extract the person's outline and fill it in as a black silhouette.
Then, I wrote a generator for randomly falling letters. It spawns random letters at random positions along the top of the frame. Each letter falls until it meets the black silhouette.
I look pretty pissed in this video – it's probably due to inclement weather.
My implementation of Text Rain is fairly straightforward. The code is probably longer than needed for the simple effect, but I added some things, and wrote it because I thought this was something that could be made better in the future. One thing I really want to do is add awesome lightning effects that I can control with my hands.
Each droplet is an object with position variables and their derivatives. Every pass through the draw loop, these are updated according to preset numbers. Since my droplets have acceleration, it was important to limit droplet speed using a terminal velocity. I found these numbers by trial and error, and no doubt there is still room for improvement.
/* Droplets fall from the top of the screen and stop when they reach a certain threshold */
class Droplet{
//The character we are drawing
char c;
//The font size the droplet is being drawn
int size;
//The current position, velocity, and acceleration of the drop
float x, y, x_v, y_v, x_a, y_a;
//The starting position, velocity, and acceleration of the drop
int init_x, init_y, init_x_v, init_y_v, init_x_a, init_y_a;
//The width and height of the
int width, height;
//Terminal velocity for the droplet
//Change this value if you want it to snow instead
int terminal_v = 12;
//Whether this droplet is now snow
boolean isSnow;
//... (constructor and update/draw methods omitted from this excerpt)
}
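The per-frame update described above (integrate acceleration into velocity, but clamp at terminal_v before moving the drop) is not shown in the excerpt; a minimal stand-alone version of that step, with assumed names, might look like this:

```java
public class DropletPhysics {
    // One integration step for a droplet's vertical motion.
    // Returns {newY, newYVelocity}: velocity grows by the acceleration each
    // frame but is clamped so it never exceeds the terminal velocity.
    static float[] step(float y, float yVel, float yAcc, float terminalV) {
        yVel = Math.min(yVel + yAcc, terminalV); // clamp at terminal velocity
        return new float[] { y + yVel, yVel };
    }

    public static void main(String[] args) {
        float[] s = step(100.0f, 11.5f, 1.0f, 12.0f);
        System.out.println(s[0] + " " + s[1]); // velocity capped at 12
    }
}
```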
By default droplets have an alpha value, so drops get darker when they are stacked. This made sense to me, since I think of the droplets as combining like they would in real life.
Beyond what Camille Utterback and Romy Achituv made, my implementation lets you control the direction of the wind (using the left and right arrow keys) so that droplets move horizontally as well. Since it is winter, I also added a snow toggle (press Shift), which slows the droplets down. All of this is possible by simply setting the properties of all the Droplet objects.
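Those global controls could amount to looping over the droplets and overwriting shared physics fields (matching the x_a and terminal_v fields in the excerpt above; everything else here is an assumption, with plain arrays standing in for the Droplet objects):

```java
public class WindControl {
    // Left/right arrow keys: give every droplet the same horizontal
    // acceleration so the rain drifts sideways.
    static void applyWind(float[] xAcc, float wind) {
        for (int i = 0; i < xAcc.length; i++) xAcc[i] = wind;
    }

    // Shift key: toggle snow by lowering each droplet's terminal velocity,
    // since snow falls much more slowly than rain (values illustrative).
    static void setSnow(float[] terminalV, boolean snow) {
        for (int i = 0; i < terminalV.length; i++) terminalV[i] = snow ? 2 : 12;
    }

    public static void main(String[] args) {
        float[] xAcc = new float[3], termV = new float[3];
        applyWind(xAcc, -0.5f); // wind blowing left
        setSnow(termV, true);   // snow mode on
        System.out.println(xAcc[0] + " " + termV[0]);
    }
}
```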
Following the original by Camille Utterback and Romy Achituv, I was able to run "Text Rain" in Processing. It was my first time back in Processing in a while, but I was able to figure it out after some help and practice. I simply indexed the camera's pixels and went about it the brightness() way. Fairly reliable, and quite a lot of fun getting it going. I think a performance of me lip-syncing my favorite song is in order for a finished piece.
//Nathan Trevino 2013
//Text Rain re-do. Original by Camille Utterback and Romy Achituv 1999
//Processing 2.0b7 by Nathan Trevino
//Special thanks to the processing example codes (website) as well as Golan Levin
//=============================================
import processing.video.*;
Capture camera;
float fallGravity = 1;
float fallStart = 0;
int threshold = 100;
Rain WordLetters[];
int myLetters;
//==============================================
void setup() {
//going with a larger size but am giving up speed.
size(640, 480);
camera = new Capture(this, width, height);
camera.start();
String wordString = "For all the things he could lose he lost them all";
myLetters = wordString.length();
WordLetters = new Rain[myLetters];
for (int i = 0; i < myLetters; i++) {
char a = wordString.charAt(i);
float x = width * ((float)(i+1)/(myLetters+1));
float y = fallStart;
WordLetters[i] = new Rain(a, x, y);
}
}
//==============================================
void draw() {
if (camera.available() == true) {
camera.read();
camera.loadPixels();
//Puts the video where it should be... top left corner beginning.
image(camera, 0, 0);
for (int i = 0; i < myLetters; i++) {
WordLetters[i].update();
WordLetters[i].draw();
}
}
}
//===================================
//simple keyPressed function to restart the Rain
void keyPressed()
{
//the space bar is not a CODED key, so test key directly
if (key == ' ') {
for (int i=0; i < myLetters; i++) {
WordLetters[i].reset();
}
}
}
//=============================================
class Rain {
// This contains a single letter of the words of the entire string poem
// They fall as "individuals" and have their own position (x,y) and character (char)
char a;
float x;
float y;
Rain (char aa, float xx, float yy)
{
a = aa;
x = xx;
y = yy;
}
//=============================================
void update() {
//IMPORTANT NOTE!
// THE TEXT RAIN WORKS WITH A WHITE BACKGROUND AND THE DARK AREAS
// MOVE THE TEXT
// Updates the parameters of Rain
// had some problems here for the pixel index, but a peek at Golan's code helped
int index = width*(int)y + (int)x; //pixel directly under the letter
index = constrain (index, 0, width*height-1);
// Grayscale starts here. Range is defined here.
int thresholdGive = 4;
int thresholdUpper = threshold + thresholdGive;
int thresholdBottom = threshold - thresholdGive;
//find pixel colors and make it into brighness (much like alpha channeling video
// or images in Adobe photoshop or AE)
float pBright = brightness(camera.pixels[index]);
if (pBright > thresholdUpper) {
y += fallGravity;
}
else {
while ( (y > fallStart) && (pBright < thresholdBottom)) {
y -= fallGravity;
index = width*(int)y + (int)x;
index = constrain (index, 0, width*height-1);
pBright = brightness(camera.pixels[index]);
}
}
if ((y >= height) || (y < fallStart)) {
y = fallStart;
}
}
//============================
void reset() {
y = fallStart;
}
//=======================================
void draw() {
// Here I also couldn't really see my letters that well so I
// used Golan's "drop shadow" idea and some crazy random colors for funzies
fill (random(255), random(255), random(255));
text (""+a, x+1, y+1);
text (""+a, x-1, y+1);
text (""+a, x+1, y-1);
text (""+a, x-1, y-1);
fill(255, 255, 255);
text (""+a, x, y);
}
}
This is so freaking fun. I was able to use the FaceOSC application along with a simple sketchpad-style Processing sketch. I really believe in an interface that is intuitive, if not almost unnecessary. This project was the most beautiful of my Intensive, using colors coordinated in triples by mapping eyebrow and mouth width. Smiling and eyebrow-raising are natural things that humans do, and I wanted to make them the mainstay of the project. Watching yourself make a drawing is quite beautiful, and I think that makes the project very successful.
My sifteo app is an adaptation of Simon Says and a memory game. There are two players, each with their own cube. The first player starts, and performs one of six available actions (touching the screen, shaking their cube, or touching any of the four sides of the middleman cube with their own cube). The second player then must repeat this action, and add an action of their own. Then the first player repeats both actions and adds a new one, and so on, forming a chain of actions that must be remembered and repeated. The first player to mess up loses, unless the sequence reaches 20 steps, at which point the game ends in a draw. If I was cool I would have added audio cues to each action, so you could be sure that it was completed and to aid in memorization, but I’m not so I didn’t.
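The turn logic described above boils down to a growing sequence of actions that each player must reproduce and then extend. A language-agnostic sketch of that logic (names assumed; the actual Sifteo SDK is C++ and event-driven, so this only illustrates the rule, not the app's code):

```java
import java.util.ArrayList;
import java.util.List;

public class SimonChain {
    // The chain of action IDs built up so far (e.g. 0 = touch screen,
    // 1 = shake, 2-5 = touch a side of the middleman cube; IDs assumed).
    private final List<Integer> chain = new ArrayList<>();

    // A player submits their repetition of the chain plus one new action.
    // Returns "ok", "lose" (wrong repetition), or "draw" (20 steps reached).
    String playTurn(List<Integer> repeated, int newAction) {
        if (!repeated.equals(chain)) return "lose";
        chain.add(newAction);
        return chain.size() >= 20 ? "draw" : "ok";
    }

    public static void main(String[] args) {
        SimonChain game = new SimonChain();
        System.out.println(game.playTurn(new ArrayList<>(), 2)); // first move
        System.out.println(game.playTurn(List.of(2), 5));        // correct repeat
        System.out.println(game.playTurn(List.of(9, 9), 1));     // wrong repeat
    }
}
```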
ofxAddons! This was the hardest and most frustrating project for me. Since I was totally unfamiliar with the openFrameworks environment as well as both the Code::Blocks and Visual Studio 2010 IDEs, it took me a good long while to figure out how to do anything in these projects. As it turns out, most of the pre-built project examples for ofxAddons are for Xcode. Of the 15 or so addons I tried, I eventually got three to work, and this is the most interesting combination I found: underdoeg's openSteer flocking example combined with toruurakawa's FakeMotionBlur. While maybe not particularly interesting or 'lazy like a fox', I think the result is actually pretty graceful.
Git -> maybe someday, when github and I reconcile our differences
This project uses FaceOSC to track the position of your face, and maps it to a face in the Processing environment. It uses the orientation of your face to steer up, down, left and right on the screen. It also tracks your mouth, and whether it is open or closed. By looking in the direction you want the face to move and opening your mouth, you can eat the small glowing sprites that wander around the frame. They leave a splatter where they were eaten, which fades with time. The bugs use a modified version of Daniel Shiffman's Boid class.
Git –> Soon, github hates me
Code?
The simple Splatter class:
class Splatter {
PVector loc;
int life;
PImage splat;
Splatter(PVector l, PImage s) {
loc = l;
life = 255;
splat = s;
}
void run() {
pushMatrix();
translate(loc.x, loc.y);
tint(255, 255, 255, life);
image(splat, 0, 0);
noTint(); //reset tint so later drawing isn't affected
if (life > 0) life--;
popMatrix();
}
}
Function used to determine if the bug is within range of the mouth:
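The post's own function isn't shown here, but purely as an illustration (not the author's code), a range check of this kind usually reduces to a squared-distance comparison between the bug and the mouth:

```java
public class MouthRange {
    // True when the bug at (bx, by) is within `radius` of the mouth at
    // (mx, my). Comparing squared distances avoids a square root.
    // All names and the radius parameter are assumptions.
    static boolean inMouthRange(float bx, float by, float mx, float my, float radius) {
        float dx = bx - mx, dy = by - my;
        return dx * dx + dy * dy <= radius * radius;
    }

    public static void main(String[] args) {
        System.out.println(inMouthRange(10, 10, 12, 11, 5)); // close enough
        System.out.println(inMouthRange(0, 0, 50, 50, 5));   // far away
    }
}
```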