A quick idea for interacting with computer vision.
What if life were a comic book? Interactions between people would take place through drawn onomatopoeia balloons. Thought bubbles are generated above stationary people, while the microphone knows when to create speech bubbles. Contact generates “Pow, bang, and boom!” The bubbles and contact effects display with the correct size and placement according to depth.
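A minimal sketch of just the depth-scaling part, assuming the head position and depth would come from Kinect tracking; here the mouse stands in for the head, mouseY fakes the depth reading, and the scaling constants are guesses:

```java
// Sketch: scale a speech bubble inversely with depth.
// The head position and depth would come from Kinect tracking;
// here the mouse stands in for the head and mouseY fakes the
// depth reading (in millimeters, like the Kinect reports).
void setup() {
  size(640, 480);
  textAlign(CENTER, CENTER);
}

void draw() {
  background(255);
  float depthMm = map(mouseY, 0, height, 800, 4000); // faked depth
  // Nearer people get bigger bubbles: size falls off with depth.
  float bubbleW = constrain(200000 / depthMm, 40, 250);

  float x = mouseX, y = 100;
  stroke(0);
  fill(255);
  ellipse(x, y, bubbleW, bubbleW * 0.6);                   // bubble body
  triangle(x - 10, y + bubbleW * 0.25, x + 10, y + bubbleW * 0.25,
           x, y + bubbleW * 0.6);                          // bubble tail
  fill(0);
  textSize(constrain(bubbleW / 6, 8, 40));
  text("POW!", x, y);
}
```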
Mark Shuster – Interaction – Sketch
For my project, I would like to create a site-specific installation in which, when a person walks into a space, one spotlight begins to follow them while another runs away from them. I hope this would encourage the person to play with the two lights, trying to get one to touch the other.
After speaking to Golan, he suggested I look at Marie Sester’s project Access, which is very similar to what I would like to do. Access, however, involves a user on the internet controlling the spotlight, as well as playing audio at the person in the spotlight. Access is more of a commentary on surveillance. For my project, however, I would like to focus on facilitating unexpected interaction: the participant does not know that they have entered an altered space until the light starts to follow them. I would also like it to feel game-like.
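Here is a rough sketch of the follow/flee logic, assuming the person’s position would come from Kinect or camera blob tracking; the mouse stands in for the tracked person, and the easing and flee speeds are numbers to tune on site:

```java
// Sketch: one spotlight chases a tracked person, another flees.
// The person's floor position would come from Kinect blob tracking;
// the mouse stands in for it here.
PVector chaser, fleer;

void setup() {
  size(800, 600);
  chaser = new PVector(100, 100);
  fleer  = new PVector(700, 500);
  noStroke();
}

void draw() {
  background(0);
  PVector person = new PVector(mouseX, mouseY);

  // Chaser eases toward the person.
  chaser.lerp(person, 0.04);

  // Fleer accelerates away whenever the person gets close.
  PVector away = PVector.sub(fleer, person);
  if (away.mag() < 250) {
    away.setMag(3.5);
    fleer.add(away);
  }
  fleer.x = constrain(fleer.x, 0, width);
  fleer.y = constrain(fleer.y, 0, height);

  fill(255, 240, 180, 180);
  ellipse(chaser.x, chaser.y, 120, 120); // following light
  fill(180, 220, 255, 180);
  ellipse(fleer.x, fleer.y, 120, 120);   // fleeing light
  fill(255, 0, 0);
  ellipse(person.x, person.y, 10, 10);   // tracked person
}
```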
This idea allows users to fly upwards (and ideally, if time permits, downwards and then in all 360 degrees) using hand gestures detected by the Kinect. Given the right hand gesture, the user generates a balloon and floats up into the sky. The sky will be drawn and/or generated as the user floats. Scrolling will simulate the flight.
We aren’t 100% sure what the final product will be; it will really depend on the finesse of the gesture detection. Possibilities include a game where you dodge or collect items in the sky, or a fleshed-out virtual world to explore.
We do know that it isn’t impossible to detect fingers:
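Until the finger detection is worked out, here is a minimal sketch of the core flight loop, with the balloon gesture faked by holding the mouse button:

```java
// Sketch: core flight loop. While the "balloon gesture" is held,
// the sky scrolls down so the player appears to float upward.
// Real detection would use Kinect skeleton/finger data; here
// holding the mouse button fakes the gesture.
float altitude = 0;      // how high we have floated
float liftSpeed = 2.5;   // pixels per frame while the gesture is held

void setup() {
  size(640, 480);
}

void draw() {
  boolean gestureHeld = mousePressed;    // stand-in for the hand gesture
  if (gestureHeld) altitude += liftSpeed;
  else altitude = max(0, altitude - 1);  // sink slowly without the gesture

  // Sky darkens toward space the higher you float.
  background(lerpColor(color(135, 206, 235), color(10, 10, 40),
                       constrain(altitude / 5000.0, 0, 1)));

  // Clouds scroll downward as altitude increases.
  noStroke();
  fill(255, 230);
  for (int i = 0; i < 8; i++) {
    float y = ((i * 163 + altitude) % (height + 60)) - 30;
    ellipse(noise(i * 10) * width, y, 120, 50);
  }

  // The player stays put; the world moves.
  fill(255, 80, 80);
  ellipse(width / 2, 160, 40, 55);        // balloon
  stroke(0);
  line(width / 2, 188, width / 2, 230);   // string
  noStroke();
  fill(60);
  rect(width / 2 - 10, 230, 20, 40);      // the floating person
}
```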
This idea was inspired by Volll’s site. This European design firm came up with a hard-to-hate layout and navigation scheme.
—
Excited for this upcoming collaboration with Chong Han.
Susan Lin + Chong Han Chua — Interact, Sketches
Tim and I have two ideas. First, we have a small box containing sand, clay, blocks, or some other familiar, plastic medium. A Kinect watches the box from overhead and gets a heightmap of whatever the participant creates. A program then renders a natural-looking landscape from the terrain, possibly including flora and fauna, which is projected near the sandbox.
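A quick sketch of the heightmap-to-landscape step. With a Kinect library the heights would come from the overhead depth map; here Perlin noise stands in for the sandbox data so the sketch runs on its own:

```java
// Sketch: render a landscape mesh from a heightmap. With a Kinect
// library the heights would come from the depth camera looking
// down at the sandbox; Perlin noise fakes that data here.
int cols = 64, rows = 48;
float[][] h = new float[cols][rows];

void setup() {
  size(800, 600, P3D);
  for (int x = 0; x < cols; x++)
    for (int y = 0; y < rows; y++)
      h[x][y] = noise(x * 0.1, y * 0.1) * 120; // fake depth reading

void draw() is below; rebuild h[][] each frame once real depth arrives.
}

void draw() {
  background(20, 30, 60);
  lights();
  translate(width / 2, height / 2);
  rotateX(PI / 3);
  translate(-cols * 5, -rows * 5);

  for (int y = 0; y < rows - 1; y++) {
    beginShape(TRIANGLE_STRIP);
    for (int x = 0; x < cols; x++) {
      // Color by elevation: low = sand, high = grass.
      fill(lerpColor(color(194, 178, 128), color(60, 140, 60),
                     h[x][y] / 120.0));
      vertex(x * 10, y * 10,       -h[x][y]);
      vertex(x * 10, (y + 1) * 10, -h[x][y + 1]);
    }
    endShape();
  }
}
```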
Second, a Kinect watches a crowd of people and a program generates thought balloons above their heads. Thoughts would be selected based on observable qualities of each person, like height, gesture, and proximity to other people. Thoughts could be humorous or invasive.
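A toy sketch of the thought-selection rule, where the observable qualities and the thoughts themselves are placeholders; the real values would come from Kinect tracking:

```java
// Sketch: rule-based thought selection from observable qualities.
// Height, neighbor distance, and motion would come from Kinect
// tracking; the thresholds and thoughts here are hypothetical.
String pickThought(float heightCm, float nearestNeighborM, float motion) {
  if (nearestNeighborM < 0.5) return "personal space, please";
  if (motion > 2.0)           return "late for something";
  if (heightCm > 190)         return "how's the weather up there?";
  return "just people-watching";
}

void setup() {
  println(pickThought(185, 0.3, 0.1)); // crowded -> "personal space, please"
  println(pickThought(170, 2.0, 3.5)); // fast mover -> "late for something"
}
```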
So it seems really apparent at the moment how little technical kung-fu we actually know. We’ve gotten the Kinect up and running in Jitter, but we haven’t the slightest idea how to get it to do anything, other than play with the camera pitch control, which is surprisingly amusing. Scouring forums has proved only that although there is a lot of information out there on this topic, virtually none of it is explained. It’s as if they got past a certain complexity level and then went back and erased all evidence of the basics. Congrats, Max MSP community, this shit is Nintendo hard.
With the elephant in the room being our inherent noob-saucery in this field, we need to move forward and think about how we’re going to either learn some chops or deal with what we have. It looks like we’re going to have to learn (be taught?) a very large variety of things damned quickly.
The bitch of all this, at least in the planning department, is that we are working with a tool whose abilities and limits we don’t quite know. Feasibility of ideas cannot possibly be assessed without knowing what boundaries we’re operating within. All this notwithstanding… some basic ideas.
We want to make a puppet. We want to learn OSC so that we can get the Kinect working with Max, which this guy says can work with Animata. We thought about using Animata to create some sort of puppet, but then realized it would basically just be using the software for its intended purpose, and that’s nothing unique. I think the direction we want to go in is to have bodily actions control some sort of non-human puppet. Controlling a humanoid puppet via human input gives you a 1:1 mapping, with your arms controlling the puppet’s arms and your legs controlling the puppet’s legs, etc. If we make a non-humanoid puppet, we force the audience to figure out how to control it instead of automatically assuming a 1:1 input-output mapping (am I making any sense? This is rather tough to explain).
tl;dr:
TECHNICAL
Kinect to OSC
OSC to Max/Jitter
Max/Jitter to Animata
CONCEPT: A puppet that is not a direct human analog.
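As a first pass at the Kinect-to-OSC step in the pipeline above, here’s a minimal sketch using the oscP5 Processing library. The /joint message format and port 7110 are what Animata is commonly said to expect, but both are assumptions to verify, and the joint position is faked with the mouse until real skeleton data is wired in:

```java
// Minimal Kinect-to-OSC bridge sketch using the oscP5 library.
// ASSUMPTIONS to verify: Animata's default OSC port (7110) and
// its "/joint name x y" message format. The joint position is
// faked with the mouse until real Kinect skeleton data exists.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress animata;

void setup() {
  size(400, 400);
  osc = new OscP5(this, 12000);                // our listening port (unused here)
  animata = new NetAddress("127.0.0.1", 7110); // assumed Animata port

}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 10, 10);

  // Send one joint per frame; a real version would loop over joints.
  OscMessage m = new OscMessage("/joint");
  m.add("head");                   // joint name as defined in Animata
  m.add((float) mouseX / width);   // normalized x
  m.add((float) mouseY / height);  // normalized y
  osc.send(m, animata);
}
```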
…Gonna be a long couple weeks.
Project 3: Interact – Project Brainstorm
Honray and I were talking about simulating a constantly swirling force or wind that the user can manipulate with their movements, maybe featuring a three-dimensional Perlin noise effect, or a flocking simulation that would swarm around or avoid the user. It would recognize certain movements, such as making a sphere with your hands, and respond accordingly.
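A sketch of the noise-field half of the idea: particles ride a wind direction pulled from 3D Perlin noise (x, y, time) and are repelled from a tracked point, which the mouse stands in for here:

```java
// Sketch: particles ride a 3D-Perlin-noise wind field and flee a
// tracked point. With the Kinect the repeller would be the user's
// hand or centroid; the mouse fakes it here.
int N = 600;
PVector[] p = new PVector[N];

void setup() {
  size(800, 600);
  for (int i = 0; i < N; i++)
    p[i] = new PVector(random(width), random(height));
  background(0);
}

void draw() {
  noStroke();
  fill(0, 20);
  rect(0, 0, width, height);    // fade old trails
  stroke(255, 120);

  float t = frameCount * 0.005; // third noise dimension = time
  PVector user = new PVector(mouseX, mouseY);

  for (int i = 0; i < N; i++) {
    // Wind direction from 3D Perlin noise.
    float a = noise(p[i].x * 0.004, p[i].y * 0.004, t) * TWO_PI * 2;
    PVector v = new PVector(cos(a), sin(a));

    // Repulsion from the user, strongest up close.
    PVector away = PVector.sub(p[i], user);
    float d = max(away.mag(), 1);
    if (d < 150) v.add(away.mult(30.0 / (d * d)));

    p[i].add(v.mult(2));
    point(p[i].x, p[i].y);

    // Wrap around the edges.
    p[i].x = (p[i].x + width) % width;
    p[i].y = (p[i].y + height) % height;
  }
}
```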
I was interested in the fact that even though the Kinect is designed to represent three-dimensional space, it technically takes a two-dimensional capture, leaving a hollow area behind the faces you can see. I was thinking about creating a graphic that emerges from those hollow negative spaces, so that you’d only catch glimpses of it from a straight-on view, but rotating the scene would reveal it. Playing with the same concept, I also thought of an app where you could paint on your face or body, like the Invisible Man: only the portions of you touched by your hands would become visible on the screen, and you could observe your “hollow” body.
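A little sketch of the invisible-man idea, assuming the revealed pixels and the hand position would both come from the Kinect; here a gradient image and the mouse stand in for them:

```java
// Sketch: the "invisible man" idea. The body image is hidden behind
// a mask; only where the hand has passed does it show through.
// With the Kinect, bodyImg would be the user's depth/RGB silhouette
// and the brush point a tracked hand; the mouse fakes both.
PGraphics mask;
PImage bodyImg;

void setup() {
  size(640, 480);
  // Stand-in "body": a gradient image instead of Kinect pixels.
  bodyImg = createImage(width, height, RGB);
  bodyImg.loadPixels();
  for (int i = 0; i < bodyImg.pixels.length; i++) {
    bodyImg.pixels[i] = color(180, 140 + (i % width) / 6, 120);
  }
  bodyImg.updatePixels();

  mask = createGraphics(width, height);
  mask.beginDraw();
  mask.background(0);   // everything hidden to start
  mask.endDraw();
}

void draw() {
  background(0);

  // "Paint" visibility wherever the hand goes.
  mask.beginDraw();
  mask.noStroke();
  mask.fill(255);
  mask.ellipse(mouseX, mouseY, 60, 60);
  mask.endDraw();

  PImage revealed = bodyImg.get(); // copy, so the original stays intact
  revealed.mask(mask);             // white mask pixels = visible
  image(revealed, 0, 0);
}
```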
For Fall/Winter 2010, Chanel not only hosted one of the most expensive runway shows of the season, but also managed to create the most ridiculous garment ever to grace a runway. The Chanel “Wookie Suit” could be interestingly recreated using the Kinect. I was looking at the Esotera Processing example and thinking that you could extend every pixel in the group of pixels that comprise the person in order to create a simple “Wookifier,” so that you too can try on the world’s ugliest suit from home.
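A sketch of the Wookifier, assuming a userPixel() test that would come from the Kinect’s user map; a circle around the mouse fakes the silhouette here:

```java
// Sketch of the "Wookifier": draw fur strands outward from every
// pixel that belongs to the person. With the Kinect, userPixel()
// would test the user-ID/depth map; a circle centered on the
// mouse fakes the person's silhouette here.
boolean userPixel(int x, int y) {
  return dist(x, y, mouseX, mouseY) < 80; // fake silhouette test
}

void setup() {
  size(640, 480);
  stroke(90, 60, 30, 120); // Chanel-wookie brown
}

void draw() {
  background(255);
  // Sample a sparse grid; one fur strand per sampled user pixel.
  for (int y = 0; y < height; y += 6) {
    for (int x = 0; x < width; x += 6) {
      if (userPixel(x, y)) {
        float a = random(TWO_PI);
        float len = random(8, 22);
        line(x, y, x + cos(a) * len, y + sin(a) * len);
      }
    }
  }
}
```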
Alex Wolfe | Project 3 | Interaction Ideas
We want to explore projection mapping and the Kinect. We started working on the project by thinking about places where we could project something, the content of the projection and different interaction modalities.
We then moved on to do some testing! We brought a projector and some boxes and modeled how they looked in Illustrator. This was a completely ad-hoc approach, but it gave us some insight into how things could look…
We ended up being interested in a small-scale projection, where the projector is placed behind some sort of 3D canvas. We thought we could place the Kinect just in front of this canvas, so that it captures the viewers. The current idea we have consists of changing the content of the projection to show rainy, windy, sunny, or dark scenarios. We thought we could do some hand gesture recognition for the interaction (i.e., if the user moves their hands down, then the night comes…)
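A tiny sketch of the hands-down-brings-night mapping, with mouseY standing in for the tracked hand height:

```java
// Sketch: hands-down-brings-night. The hand height would come
// from Kinect hand tracking; mouseY stands in, with lower hands
// mapping to a darker sky.
void setup() {
  size(800, 450);
  noStroke();
}

void draw() {
  // 0 = hands high (day), 1 = hands low (night).
  float night = constrain(map(mouseY, 0, height, 0, 1), 0, 1);
  background(lerpColor(color(120, 190, 255), color(10, 10, 35), night));

  // Sun sets as night comes on.
  fill(lerpColor(color(255, 220, 80), color(230, 230, 255), night));
  ellipse(width * 0.75, map(night, 0, 1, 80, height - 40), 60, 60);

  // Ground silhouette for the 3D-canvas scene.
  fill(lerpColor(color(70, 140, 60), color(15, 25, 15), night));
  rect(0, height - 60, width, 60);
}
```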
Madeline Gannon & Marynel Vázquez – Project Sketch
This dot matrix display is cool. I like how it is so high-tech and yet so low-tech at the same time. There’s something attractive about the abstraction of the form on the wall of OLED panels.
Posted this for no other reason than the fact that it is Street Fighter and Kinect. On a serious note, it is interesting how physical motion maps very naturally to games that use physical motion (duh), but I wonder if the Kinect can be responsive enough to make playing a game like this even mildly competitive. This also points to a very obvious shortcoming of the Kinect: in a setup like this, the user can only move within perhaps a 3-foot-by-3-foot square, which means that Kinect games might be a largely stationary affair.
Cheese is a set of recordings of actresses trying to hold a smile. A computer continuously “rates” the strength of their smile, and sounds an alarm if they falter. It’s technically impressive. I’m used to computer-detection of human expressions being jittery and fickle, but the ratings plunge immediately when the actresses begin to relax. We tend to consider smiles idiosyncratic and personal, an attitude which is threatened by objective measurement.
(Skip to ~50 seconds in.) This is a pretty minor tech demo of putting a person and a virtual character into the same space. It turns the Kinect’s field of view into a middle ground between reality and virtuality. The proximity between imagined and actual is exciting.
Mehmet Akten’s Webcam Piano is beautiful both visually and acoustically, but it unfortunately is not nearly as interesting as a real piano, because it gives the user no control. The notes are all pre-set to a pleasing scale so that there is no possibility of dissonance, and the compact arrangement of notes prevents granular control. Instruments and art-making devices should ease, enable, and respond to the choices of the user, not try to make up for the user’s lack of talent.
A very simple presentation using the Kinect and openFrameworks. Not very interesting style-wise, I think, but there are a few things to note here. First, it is quite similar to classic CV projects such as Text Rain and even the works of Theodore Watson, but with the Kinect, these kinds of installations can now potentially work for a much larger audience. As seen in the projection, the shapes of individual people can be made out, so their interactions can be separated. I think that’s something interesting that can potentially be exploited in the future.
I have a couple of ideas, and I wanted to use my Looking Outwards to take a look at what’s been done in the area of those ideas. On a side note, I found a really cool site for generative computational art, Generator.x.
This one is a cool video from the guy who did the Ultraman hack earlier.
Idea 1 – Body Puzzle
I think it would be funny to re-arrange somebody’s body parts on their body and make them put themselves back together. You could still move your arms, and you would have to grab the parts and place them. I think I could use some concepts from the Kinect Camouflage project.
Idea 2 – Something With Helicopters
I found this video of a Kinect mounted on a helicopter. Building a rig like that is probably more than I have time for, but I would still like to command a helicopter. Shawn did one last semester, but I really think it could be made more robust and really fun.
Idea 3 – Reading Body Language
I’ve always been interested in subtle social dynamics, and body language is king. I wonder if the Kinect could read it. Machine learning could be run on the skeleton data to learn the language. Another hard part is how to visualize it. Body language that goes well is such a success that I am thinking of a visualization that speaks to that excitement and happiness. I like this flowery thing, or this use of squares, or this curve thing.
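As a starting point, a sketch of two hypothetical posture features that could feed the machine learning; the joint positions and the features themselves are my own guesses, not an established encoding:

```java
// Sketch: a first stab at body-language features from skeleton
// joints. The joint positions would come from the Kinect; the
// values here are hypothetical placeholders, and the features
// (openness, lean) are guesses at useful ML inputs.
float openness(PVector lWrist, PVector rWrist,
               PVector lShoulder, PVector rShoulder) {
  // Wrist spread normalized by shoulder width:
  // arms crossed ~0, arms thrown wide > 2.
  return PVector.dist(lWrist, rWrist) /
         max(PVector.dist(lShoulder, rShoulder), 1);
}

float lean(PVector head, PVector torso) {
  // Horizontal offset of head over torso: leaning in or away.
  return (head.x - torso.x) / 100.0;
}

void setup() {
  // Placeholder joints (would be Kinect skeleton data).
  PVector lW = new PVector(200, 300), rW = new PVector(450, 310);
  PVector lS = new PVector(280, 200), rS = new PVector(380, 200);
  PVector head = new PVector(335, 120), torso = new PVector(330, 260);

  // These numbers would form one row of a training feature vector.
  println("openness: " + openness(lW, rW, lS, rS));
  println("lean:     " + lean(head, torso));
}
```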
Ward_Penney – Project3, Looking Outwards
Very psyched to start messing around with the Kinect! It’s amazing how many things can be done with a damned console peripheral.
Looking through the MediaArtTube playlist was really interesting. Daniel Rozin’s motorized mirrors are really amazing devices, and the wooden one is an easy favorite. The material is wonderfully natural in a Scrabble-tile kind of way, and the wood makes for a really interesting sound texture for the sea of clicking coming from the rig.
Our nerd compatriots at MIT fashioned this Minority Report style interface with the Kinect. This is a really awesome technology that I can see becoming an actual reality with some serious time and effort.
Another awesome Kinect interface in a similar style is the physics based drawing application DaVinci. The user makes gestures to draw shapes on the screen as well as to perform a variety of physics-based actions with the drawn objects.
Project 3: Interact – Looking Outwards
Furin is an interactive light installation. When a user steps underneath it, each light chimes and lets off a corresponding glow in a rippling effect across the space. A simple interaction with beautiful results.
A plaster cast of a head is 3D-scanned and translated into drawings to create a sort of head-topography. More interesting would be the possibility of applying this to something alive rather than a plaster cast.
A Kinect project in which the user can push and stretch simulated skin. My favorite part is when you catch a glimpse of detail underneath, like lips or a finger.
Light Drive is an animation on a cell (and later a torus and other elliptical shapes) that attempts to create a dark, exciting, and “on-edge” mood. It comes with strong, moving, and dark music that syncs with the animations.
When I first saw Light Drive, I thought of light pulsating through a human cell. It looked very slick and futuristic. I think the audio synchronization and choice were excellent, as they added to the feeling of the video and kept me on edge.
However, I don’t quite understand how the lighting of the cell transitioned into a pulsating torus that contracted and warped itself. The animations done with the torus felt more rough compared to the cell lighting animations.
The video itself certainly created a fast, dark, and exciting mood, but I wonder if it could be more effective if something different were animated instead.