1. Overall
When talking about data visualization, most people think of computer graphics. From my point of view, however, that is only one of the possible approaches. Why not try visualizing data in physical ways? People can not only see the visualization result but also touch and manipulate the visualization device, which could be really interesting.
In this project, I explore a physical/tangible way of visualizing data. Using a paper globe as the data medium, people can learn what language is spoken in a certain area by spinning the globe and adjusting the probe.
a. Prepare the paper globe
Use Google Images to download one large world map. Download a Photoshop plugin called Flexify 2 to revise the map image. Here is the tutorial. Print the revised image, then cut and glue it into a globe.
b. Fix the variable resistor
Laser-cut four round pieces of wood to hold the shape of the paper globe; extra timber may be used for this. Install one of the variable resistors at the bottom of the globe. See below.
c. Install all the other parts
Install another variable resistor as the probe that points at the globe. Laser-cut a seat for the probe and the globe. Hook the two resistors up to two different analog input pins on the Arduino.
d. Calculate the position
Spin the globe and adjust the probe. Each position produces a different pair of resistor values, which can be mapped to a sound track. Calculate the position and map it to the corresponding track.
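As a stripped-down illustration of that mapping: read the two potentiometers on their analog pins and treat each window of (spin, probe) readings as one area of the globe. The two ranges below are taken from the full sketch in section 5; Serial prints stand in for the WaveShield playback.

// Minimal sketch of the position-to-sound mapping (ranges copied from the full code in section 5;
// Serial prints stand in for the actual WaveShield playback).
const int SPIN_PIN  = 5;  // potentiometer under the globe (spin)
const int PROBE_PIN = 2;  // potentiometer on the probe arm

void setup() {
  Serial.begin(9600);
}

void loop() {
  int spin  = analogRead(SPIN_PIN);   // 0-1023, globe rotation
  int probe = analogRead(PROBE_PIN);  // 0-1023, probe angle
  // Each (spin, probe) window corresponds to one area of the paper globe.
  if (spin <= 576 && probe >= 179 && probe <= 276) {
    Serial.println("area 1 -> play its language clip (r.wav)");
  } else if (spin >= 580 && spin <= 780 && probe <= 142) {
    Serial.println("area 2 -> play its language clip (a.wav)");
  }
  delay(1000);
}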
e. Prepare the sound
Download the language sounds from Google Translate and store them on the WaveShield's SD card.
4. Video
5. Code
#include <FatReader.h>     // headers from the standard WaveHC example sketch
#include <SdReader.h>
#include <avr/pgmspace.h>
#include "WaveUtil.h"
#include "WaveHC.h"

char decision = 0;

SdReader card;   // This object holds the information for the card
FatVolume vol;   // This holds the information for the partition on the card
FatReader root;  // This holds the information for the filesystem on the card
FatReader f;     // This holds the information for the file we're playing
WaveHC wave;     // This is the only wave (audio) object, since we will only play one at a time

#define DEBOUNCE 100  // button debouncer
int mySwitch = 7;

// this handy function will return the number of bytes currently free in RAM, great for debugging!
int freeRam(void) {
  extern int __bss_end;
  extern int *__brkval;
  int free_memory;
  if ((int)__brkval == 0) free_memory = ((int)&free_memory) - ((int)&__bss_end);
  else                    free_memory = ((int)&free_memory) - ((int)__brkval);
  return free_memory;
}

void sdErrorCheck(void) {
  if (!card.errorCode()) return;
  putstring("\n\rSD I/O error: ");
  Serial.print(card.errorCode(), HEX);
  putstring(", ");
  Serial.println(card.errorData(), HEX);
  while (1);
}

void setup() {
  Serial.begin(9600);  // set up serial port
  putstring_nl("WaveHC with 6 buttons");
  pinMode(mySwitch, INPUT);
  putstring("Free RAM: ");    // This can help with debugging, running out of RAM is bad
  Serial.println(freeRam());  // if this is under 150 bytes it may spell trouble!

  // if (!card.init(true)) {  // play with 4 MHz spi if 8 MHz isn't working for you
  if (!card.init()) {         // play with 8 MHz spi (default, faster!)
    putstring_nl("Card init. failed!");  // Something went wrong, lets print out why
    sdErrorCheck();
    while (1);  // then 'halt' - do nothing!
  }
  // enable optimize read - some cards may timeout. Disable if you're having problems
  card.partialBlockRead(true);

  // Now we will look for a FAT partition!
  uint8_t part;
  for (part = 0; part < 5; part++) {  // we have up to 5 slots to look in
    if (vol.init(card, part)) break;  // we found one, lets bail
  }
  if (part == 5) {  // if we ended up not finding one :(
    putstring_nl("No valid FAT partition!");
    sdErrorCheck();  // Something went wrong, lets print out why
    while (1);       // then 'halt' - do nothing!
  }

  // Lets tell the user about what we found
  putstring("Using partition ");
  Serial.print(part, DEC);
  putstring(", type is FAT");
  Serial.println(vol.fatType(), DEC);  // FAT16 or FAT32?

  if (!root.openRoot(vol)) {  // Try to open the root directory
    putstring_nl("Can't open root dir!");  // Something went wrong,
    while (1);                             // then 'halt' - do nothing!
  }
  putstring_nl("Ready!");  // Whew! We got past the tough parts.
}

void loop() {
  // putstring(".");  // uncomment this to see if the loop isn't running
  if (digitalRead(mySwitch) == HIGH) {
    Serial.println("switch is ok");
    int spin = analogRead(5);   // potentiometer under the globe
    int probe = analogRead(2);  // potentiometer on the probe
    Serial.println(spin);
    Serial.println(probe);
    // map each (spin, probe) window to the sound clip for that area of the globe
    if      (spin >= 0   && spin <= 576  && probe >= 179 && probe <= 276) playcomplete("r.wav");
    else if (spin >= 85  && spin <= 313  && probe >= 35  && probe <= 160) playcomplete("c.wav");
    else if (spin >= 580 && spin <= 780  && probe >= 0   && probe <= 142) playcomplete("a.wav");
    else if (spin >= 980 && spin <= 1023 && probe >= 7   && probe <= 22)  playcomplete("p.wav");
    else if (spin >= 980 && spin <= 1023 && probe >= 0   && probe <= 7)   playcomplete("s.wav");
    else if (spin >= 1023                && probe >= 47  && probe <= 288) playcomplete("e.wav");
    delay(1000);
  }
}

// Plays a full file from beginning to end with no pause.
void playcomplete(char *name) {
  playfile(name);  // call our helper to find and play this name
  while (wave.isplaying) {
    // do nothing while its playing
  }
  // now its done playing
}

void playfile(char *name) {
  if (wave.isplaying) wave.stop();  // already playing something, so stop it!
  // look in the root directory and open the file
  if (!f.open(root, name)) { putstring("Couldn't open file "); Serial.print(name); return; }
  // OK, read the file and turn it into a wave object
  if (!wave.create(f)) { putstring_nl("Not a valid WAV"); return; }
  wave.play();  // ok time to play! start playback
}
Word Lens is an augmented reality application designed to translate text in a foreign language to one that the viewer can understand by overlaying the translated text on the mobile device screen where the original text should be.
This project is nifty because it attempts to create a more authentic experience for the viewer by preserving the context of the words instead of spewing them out as simple text. It also attempts to translate any text in the camera’s field of vision on the fly, surpassing many other applications that require a still image to be processed. The app adheres to the original text color pretty well, but it would be nice to see better preservation of the type style.
BendDesk combines two projectors and three cameras to make a superior workspace, allowing you to handle digital documents similarly to physical documents. The project is awesome because it removes the limitations of previous implementations done only with common interface devices (e.g.: virtual desktops with stacks of documents that you interact with via a mouse and keyboard). The missing component was always the freedom to interact with the document using your body as the manipulator, and now users are able to drag the document around with their hands across two planes. It would be great to incorporate another camera that could project the user or another user on the vertical plane for a cool teleconferencing environment.
This project has a person controlling a humanoid robot through a Kinect. I think the implications of the project are cooler than this base implementation. For example, a person could exhibit fine motor control in a remote location, eliminating the need to actually travel there. This has excellent potential for something as important as specialized surgeons operating on patients in remote areas, or for a person attending a conference remotely while still taking part in subtly important human interactions like conversations in a hallway. While a nice basic implementation, the next steps would be to increase the size of the robot and improve the accuracy of its corresponding motion.
Andy Wilson is a senior researcher at Microsoft Research who was doing Kinect-like projects before the Kinect came out. This one in particular involves a digital RC car, controlled with an Xbox wireless controller, driving around in a digital world. The digital terrain you are navigating can be altered by placing physical objects on the projection surface: the physics of the car change as ramps, blocks, and mounds are introduced. This is a great example of the tabletop digital gaming that will be the future.
Much of his new research on using depth sensing as a touch sensor is also very interesting.
This dot matrix display is cool. I like how it is so high-tech and yet so low-tech at the same time. There’s something attractive about the abstraction of the form on the wall of OLED panels.
Posted this for no other reason than the fact that it is Street Fighter and Kinect. On a serious note, it is interesting how physical motions map very naturally to games that use physical motion (duh), but I wonder if the Kinect can be responsive enough to make playing a game like this even mildly competitive. This also points to a very obvious shortcoming of the Kinect: when used in a setup like this, the user can only move within perhaps a three-foot-by-three-foot square, which means that Kinect games might be a largely stationary affair.
Cheese is a set of recordings of actresses trying to hold a smile. A computer continuously “rates” the strength of their smile, and sounds an alarm if they falter. It’s technically impressive. I’m used to computer-detection of human expressions being jittery and fickle, but the ratings plunge immediately when the actresses begin to relax. We tend to consider smiles idiosyncratic and personal, an attitude which is threatened by objective measurement.
(Skip to ~50 seconds in.) This is a pretty minor tech demo of putting a person and a virtual character into the same space. It turns the Kinect’s field of view into a middle ground between reality and virtuality. The proximity between imagined and actual is exciting.
Mehmet Akten’s Webcam Piano is beautiful both visually and acoustically, but it unfortunately is not nearly as interesting as a real piano, because it gives the user no control. The notes are all pre-set to a pleasing scale so that there is no possibility of dissonance, and the compact arrangement of the notes prevents granular control. Instruments and art-making devices should ease, enable, and respond to the choices of the user, not try to make up for the user’s lack of talent.
A very simple presentation using the Kinect and openFrameworks. Not very interesting style-wise, I think, but there are a few things to note here. Firstly, it is kinda similar to classic CV projects such as Text Rain and even the works of Theodore Watson, but because it now uses the Kinect, these kinds of installations can potentially work for a much larger audience. As seen in the projection, the shapes of individual people can be made out, and therefore their interactions can be separated. I think that is something interesting that could be exploited in the future.
A minimalist Kinect visualization demo by Gregg Wygonik. In the midst of its simplicity, this demo manages to exhibit a range of visual variation through fairly small modifications to the code.
Kinect VR
A virtual reality demo using Kinect in which spatial depth is used to determine the view relative to the angle you are positioned at.
Kinect Piano
I don’t know how practical this is, since most people would probably prefer the tactile feedback of playing a real piano, but it’s an interesting demo that suggests some applications for hand/gesture recognition with virtual or augmented interfaces.
I am in love with this project, the virtual marionette, which uses the Kinect to do real-time puppeteering of a humanoid marionette; it attempts to stay true to traditional marionettes, while bringing tradition into the technological age. I would love to work on something similar, riffing off of finger puppets or shadow hand puppets; I’m an animator at heart, and I find real-time animations that mimic the user very endearing and compelling. I’ve seen similar projects by Theo Watson like this funky bird puppet, but the amount of articulation in the marionette project is particularly wonderful.
Audience is an amazing project that uses vision tracking to then mess with the user’s vision. It’s quite clever: once you approach, the mirrors orient along a normal from your head, and thus you see yourself. The conceptual play of watching and being watched makes for a unique experience. I very much appreciate the cleanliness of the hardware and the swarming behavior of the mirrors. The software used was openFrameworks. A beautiful, simple project.
Never mind the safety concerns (the police did). There is something absolutely exciting about exploring a city (especially one with height) through a lens that offers a perspective unlike any other. This project becomes very interesting when you begin to think about what Google Earth has done for digital urban exploration. Techniques like these offer new ways to think about presence, urban exploration, delivery, and location. Unlike Google Street View, you are witnessing a real-time camera feed and can react to the real-time environment. I like the fact that you wear the goggles and fly a plane; it reminds me of my youth…ahh, Microsoft Flight Simulator 98.
The pilot claims in an interview that he has a 120-mile range with control and camera feed. Awesome.
I have a couple of ideas, and I wanted to use my Looking Outwards to take a look at what’s been done in those areas. On a side note, I found a really cool site for generative computational art, Generator.x.
This one, by the guy who did the Ultraman hack earlier, is cool.
Idea 1 – Body Puzzle
I think it would be funny to rearrange somebody’s body parts on their body and make them put themselves back together. You could still move your arms, and you would have to grab the parts and place them. I think I could use some concepts from the Kinect Camouflage project.
Idea 2 – Something With Helicopters
I found this video of a Kinect mounted on a helicopter. Mounting the Kinect on the helicopter itself is probably more than I have time to complete, but I would still like to command a helicopter. Shawn did one last semester, but I really think it could be made more robust and really fun.
Idea 3 – Reading Body Language
I’ve always been interested in subtle social dynamics, and body language is king. I wonder if the Kinect could read it. Machine learning could be run on the data to learn the language. Another hard part is how to visualize it. Body language that goes well is such a success that I am thinking of a visualization that speaks to that excitement and happiness. I like this flowery thing, or this use of squares, or this curve thing.
This is a project that allows you to place a virtual keyboard onto any surface, such as a desk or the floor. It can be played by pressing or stepping on the surface. I think it’s a really interesting idea because it is so flexible in terms of size and location. You can create a huge keyboard for fun or a small keyboard for actually (pretending to) play the piano. A big issue with this project is that you need some sort of visual cue in the real world to know what keys you’re pressing and where the other keys are located, but this could be fixed with a projector or even just some paper taped to the ground.
Holographic TV
This project from the MIT Media Lab uses a Kinect to record someone and transfers that recording to another computer, which computes a hologram of that person. The idea of this is really cool, because it gives you the ability to send 3D messages which is a pretty futuristic concept. I’m not sure about the execution of this particular project, however. The video doesn’t give a really clear view of what the hologram looks like, and from what I can tell it seems pretty slow and choppy and the image is not very detailed.
Motion Emotions
This is a really simple project that aims to use the Kinect as a way to display emotions. The example above shows how the program plays a really triumphant sound while the user raises his arms (triumphantly). With the Kinect’s 3D capabilities, it is reasonable to suspect that it could be used to measure pretty sophisticated body language, and I think the person had great insight when they decided to take advantage of this. Many of the characters in Kinect games that are supposed to be mirrors of the player don’t show any emotion, and I think it could be interesting to have the characters not only mimic your actions, but also take into account what your actions say about your disposition.
So I’ve been thinking a lot about what I want to do with the Kinect. I got one for Christmas, and I still haven’t had time to do much with it. I’m a huge music person, and I’d like to create an interactive audio visualizer that takes input from body movement instead of perceivable audio qualities (volume, frequency waveforms, etc…). I think that using gestural input from a person dancing, conducting, or otherwise rocking out to music would provide a much more natural input, since it would accurately reflect the individual’s response to the audio. I can imagine pointing a Kinect at a club full of dancing people and using their movement to drive a wall-sized visualization. It’d be a beautifully human representation of the music.
I’ve been Googling to see if anyone is doing something like this already, and I haven’t been able to find anything really compelling. People have wired the Kinect through TUIO to drive fluid systems and particle emitters, but not for the specific purpose of representing a piece of music. I don’t find these very impressive, because they’re really dumbing down the rich input from the Kinect. They just treat the users’ hands as blobs, find their centers, and use those as multitouch points. It must be possible to do something more than that. But I haven’t tried yet, and I want everything to be real-time – so maybe not ;-)
Here are a few visual styles I’ve been thinking of trying to reproduce. The first is a bleeding long-exposure effect that was popularized by the iPod commercials a few years ago. Though it seems most people are doing this in After Effects, I think I can do it in OpenGL or maybe Processing:
This is possibly the coolest visualization I’ve seen in a while. However, it was done in 3D Studio Max with the Krakatoa plugin, and everything was painstakingly hand-scripted into a particle system. I love the way the light shoots through the particles (check out 0:16), though. I’d like to create something where the user’s hands are light sources… It’d be incredibly slick.
I’m not sure how to approach implementing something like this, and I’m still looking for existing platforms that can give me a leg-up. I have significant OpenGL experience and I’ve done fluid dynamics using Jos Stam’s Navier-Stokes equation solver, so I could fuse that to a custom renderer to get this done, but I’d like to focus on the art and input and let something else handle the graphics, so suggestions are welcome!
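For the bleeding long-exposure look mentioned above, one common real-time trick (sketched here in openFrameworks purely as an illustration, not a finished implementation) is to stop clearing the screen each frame and instead wash the previous frame with a nearly transparent black rectangle, so bright marks decay slowly into trails. The mouse stands in for a tracked hand here; with the Kinect the position would come from the skeleton or blob data.

// Minimal openFrameworks sketch of the "long exposure" trail effect (assumes a recent openFrameworks).
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        ofSetBackgroundAuto(false);   // stop clearing the screen every frame
        ofEnableAlphaBlending();
        ofBackground(0);              // start from black once
    }
    void draw() {
        // Instead of a full clear, fade the previous frame with a barely opaque
        // black rectangle so old marks dissolve slowly into trails.
        ofSetColor(0, 0, 0, 10);
        ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight());

        // Draw the current "hand" position as a bright dot; as it moves,
        // the slow fade above leaves a bleeding light trail behind it.
        ofSetColor(120, 200, 255, 200);
        ofDrawCircle(mouseX, mouseY, 12);
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);  // window size and mode
    ofRunApp(new ofApp());                // start the app loop
}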
Very psyched to start messing around with the Kinect! It’s amazing that such a huge amount of things can be done with a damned console peripheral.
Looking through the MediaArtTube playlist was really interesting. Daniel Rozin’s motorized mirrors are really amazing devices, and the wooden one is an easy favorite. The material is wonderfully natural in a Scrabble-tile kind of way, and the wood makes for a really interesting sound texture for the sea of clicking coming from the rig.
Our nerd compatriots at MIT fashioned this Minority Report style interface with the Kinect. This is a really awesome technology that I can see becoming an actual reality with some serious time and effort.
Another awesome Kinect interface in a similar style is the physics based drawing application DaVinci. The user makes gestures to draw shapes on the screen as well as to perform a variety of physics-based actions with the drawn objects.
Primer: For my own project, I am interested in the idea of augmenting a live feed of yourself into a different you. To start, I wanted to search for existing projects that already do this.
More on this later in the napkin-sketch post, but I am fond of this idea because it is a prevalent experience in games and because there is an obvious appeal to escaping through an alternative ego. (How many people did you see make themselves a Scott Pilgrim or Mad Men avatar?)
Update: From class today, it seems like the avatar notion is well explored. Shifting focus…
Also, this is just a photograph series, but has some applicable potential:
—
1 | Embody an Avatar (Second Life)
Exactly what it sounds like: this project uses the Kinect to take in people’s movements and control a 3D Second Life avatar. The avatar mimics what the user does. Additionally, some gestures also work as camera controls.
The movements are still limited, as the motions are mostly done with the upper body in this demo. Jumping is not actually done by jumping, but by raising the hands up as if you were jumping. Rigging up a 3D model isn’t easy and is probably a bit out of scope for a two-week project. However, the idea of controlling an alternative self has obvious appeal here.
Paint with a rainbow, drippy brush on a canvas via body gestures. The brush maps to one of the user’s hands and follows it around.
The kick to trigger a paint bomb is a nice bonus. This adds nicely to the idea of an “alternative universe” because it empowers the user to control the brush.
Most of the body is not involved in this project (only one hand, and the feet when kicking). The brush options are also limited to that one brush. The critique, ultimately, is that this project could have so many more capabilities.
A non-Kinect project. This installation leverages the mobile phone camera’s sensitivity to infrared light (which we cannot see with the naked eye) to reveal medieval beasts.
This is inspiring because it is a (relatively) simple hack, using a device most people have now, and it’s done well. The discovery process is very nice, but I find that the display is too static. If the display were more animated or had another level of interaction, I think it would make the imaginary animals more alluring. For example: maybe hide the animals in a larger canvas and then give the user the ability to interact with them as a reward.
Furin is an interactive light installation. When a user steps underneath it, each light chimes and lets off a corresponding glow in a rippling effect across the space. A simple interaction with beautiful results.
A plaster cast of a head is 3D scanned and translated into drawings to create a sort of head topography. More interesting would be the possibility of applying this to something alive rather than a plaster cast.
A Kinect project: the user can push and stretch simulated skin using the Kinect. My favorite moment is when you catch a glimpse of detail underneath, like lips or a finger.
This was one of the first clips that really made me consider the future of computer vision and virtual reality. The researcher in this video creates a 3D virtual model of an object just by showing it to the camera. A lot of predictive processing takes place on the back end, but ultimately the result looks pretty good. This approach doesn’t even make use of a depth camera like the Kinect, which would make the process a lot easier.