This is exactly what the title suggests: an iPhone app that can be used to control a robot, with all controls driven by multitouch input.
This is inspiring because it makes robotics accessible, particularly in my case since my background is art/design. The multitouch input is intuitive, which suggests that anyone could control a robot with ease. As a bonus, the creator explains how he did it in detail, with the source code available online: about 200 lines of JavaScript and Python.
In its current state, the controls are still basic. However, it is obvious that this can be a good baseline for some really interesting projects in the future.
Snow is an installation which uses white feathers and air pressure to simulate the flow of real snow.
This piece intrigues me because of its minimalistic presentation and its ability to provoke introspection about an everyday encounter. Yoshioka extracts what is beautiful about snow and gives it to the viewer on a silver platter. The way the artist executed this concept is inspiring: simple and effective.
At first, I thought that the flow of the feathers was affected by the viewer. That would be a worthwhile route for expanding a project like this: giving viewers the ability to control an element they would not usually be able to control. It would amplify the emotional high of this piece.
This project, called “Solar Rework”, is a really fantastic audio visualization that uses soft blobs, bright colors, and glassy “waves” to represent audio data. I think it’s cool because it visually conveys the idea that the sound is “washing over” the blobs in the scene. I really don’t have any complaints about this one, except that I wish there were source code I could download and try out myself.
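Since no source is posted, here is a rough sketch of the same idea in Processing using the Minim audio library: FFT bands driving soft, bright blobs. The file name, band count, and scaling constants are my own guesses at the effect, not anything from the actual project.

```processing
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(640, 480);
  colorMode(HSB, 255);
  noStroke();
  minim = new Minim(this);
  player = minim.loadFile("track.mp3");   // hypothetical file in the sketch's data folder
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  fill(0, 0, 0, 20);   // translucent wash so the blobs leave soft trails
  rect(0, 0, width, height);
  fft.forward(player.mix);
  // one blob per low-frequency band, sized by that band's energy
  for (int i = 0; i < 16; i++) {
    float amp = fft.getBand(i);
    float x = map(i, 0, 16, 50, width - 50);
    fill(map(i, 0, 16, 0, 255), 180, 255, 120);
    ellipse(x, height / 2, amp * 8, amp * 8);
  }
}
```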
Cosmogramma Fieldlines is an interactive music visualization created in openFrameworks by Aaron Meyers for the release of an album by Flying Lotus. I really like the project’s steampunk, ink-and-paper graphic design, and I like the way the lines radiating from the object in the center converge around the “planets.” I think it’d be cool to change the interaction so that the user could “strum” or otherwise manipulate the radial lines instead of the planets, though that might be harder to implement.
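To get a rough feel for the field-line technique, here is a small Processing sketch (2.0+ syntax) that marches rays outward from the center and bends each one toward a few “planets.” The attraction falloff and palette are eyeballed, not Meyers’s code.

```processing
PVector[] planets = new PVector[3];

void setup() {
  size(640, 640);
  stroke(40, 30, 20, 90);   // inky strokes on a paper background
  noFill();
  for (int i = 0; i < planets.length; i++) {
    planets[i] = new PVector(random(width), random(height));
  }
}

void draw() {
  background(235, 228, 210);
  // march each radial line outward, bending it toward nearby planets
  for (float a = 0; a < TWO_PI; a += TWO_PI / 90) {
    PVector p = new PVector(width / 2, height / 2);
    PVector dir = PVector.fromAngle(a);
    beginShape();
    for (int step = 0; step < 120; step++) {
      vertex(p.x, p.y);
      for (PVector pl : planets) {
        PVector pull = PVector.sub(pl, p);
        float d = max(pull.mag(), 20);
        pull.setMag(400 / (d * d));   // roughly inverse-square attraction
        dir.add(pull);
      }
      dir.normalize();
      p.add(PVector.mult(dir, 4));
    }
    endShape();
  }
}

void mousePressed() {
  // click to move one of the planets and watch the lines re-converge
  planets[int(random(planets.length))].set(mouseX, mouseY);
}
```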
The Graffiti Analysis project by Evan Roth makes an effort to capture the motion of graffiti in an artistic fashion. I’m interested in using the Kinect to capture hand gestures representative of audio, and I think this is a really cool visualization of gestural input. The way velocity information is presented as thin rays is visually appealing. I think it would be more interesting if the project incorporated color, though, since real graffiti communicates with the viewer through color as well as shape.
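A quick way to prototype that velocity-as-rays idea in Processing, with the mouse standing in for a captured tag (Roth’s actual GML-based pipeline is far more involved):

```processing
PVector prev = new PVector();

void setup() {
  size(800, 600);
  background(0);
  stroke(255, 200);
}

void draw() {
  if (mousePressed) {
    PVector cur = new PVector(mouseX, mouseY);
    PVector vel = PVector.sub(cur, prev);
    // a thin ray perpendicular to the stroke, scaled by the stroke's speed
    line(cur.x, cur.y, cur.x - vel.y * 2, cur.y + vel.x * 2);
    line(prev.x, prev.y, cur.x, cur.y);   // the stroke itself
    prev = cur;
  } else {
    prev.set(mouseX, mouseY);   // keep tracking so strokes don't jump
  }
}
```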
eCLOUD is an installation piece suspended from the ceiling of the San Jose International Airport that uses special tiles to display weather data for select cities. The tiles are opaque in their resting state but become clear when electricity is introduced; this technology is more commonly used for the window glass of variable-privacy rooms like bathrooms.
I love that this project presents mundane information in an interesting format: even the most preoccupied travelers can feel rewarded for sparing a glance. The piece presents data in a form that is qualitative and dynamic, which adds to its appeal. The medium is not limited to decorating the ceiling; it could serve as floor tiles or wall coverings. Introducing direct human interaction by letting people select cities from a station or mobile device would make the experience more engaging: a definite plus for travelers stranded for hours by weather.
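As a toy version of the idea, here is a Processing sketch that eases a grid of “tiles” between opaque and clear; the weather feed is faked with a random cloudiness value, and a click stands in for selecting another city:

```processing
int cols = 16, rows = 9;
float[][] target = new float[cols][rows];
float[][] state = new float[cols][rows];

void setup() {
  size(640, 360);
  noStroke();
  newCity();
}

void draw() {
  background(30);
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      // each tile eases toward its target, like glass slowly switching state
      state[i][j] = lerp(state[i][j], target[i][j], 0.05);
      fill(255, state[i][j] * 255);
      rect(i * width / cols + 2, j * height / rows + 2,
           width / cols - 4, height / rows - 4);
    }
  }
}

// stand-in for a live weather feed: pick a cloudiness value and
// switch a matching fraction of tiles opaque
void newCity() {
  float cloudiness = random(1);
  for (int i = 0; i < cols; i++)
    for (int j = 0; j < rows; j++)
      target[i][j] = random(1) < cloudiness ? 1 : 0;
}

void mousePressed() {
  newCity();   // a click stands in for selecting another city
}
```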
I implemented the Text Rain exercise in Processing and used code from the Background Subtraction sample at Processing.org to do the underlying detection of objects in the scene.
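The core of that detection step looks roughly like this, condensed from the Processing.org Background Subtraction sample (video library 2.0-style Capture API; the threshold value is my own tuning):

```processing
import processing.video.*;

Capture video;
int[] backgroundPixels;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  backgroundPixels = new int[width * height];
}

void draw() {
  if (!video.available()) return;
  video.read();
  video.loadPixels();
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color fg = video.pixels[i];
    color bg = backgroundPixels[i];
    // summed per-channel difference against the stored background frame
    int diff = abs((fg >> 16 & 0xFF) - (bg >> 16 & 0xFF))
             + abs((fg >> 8 & 0xFF) - (bg >> 8 & 0xFF))
             + abs((fg & 0xFF) - (bg & 0xFF));
    pixels[i] = diff > 96 ? color(255) : color(0);   // white = "object"
  }
  updatePixels();
}

// press any key to snapshot the current frame as the empty background
void keyPressed() {
  arrayCopy(video.pixels, backgroundPixels);
}
```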
This is a processing.js file uploaded to my lab’s site. The WordPress processing.js plug-in had a problem with my class; however, the code works fine when run with the processing.1.0.0.min.js file.
Press ENTER to pause.
This is the Processing code for the Schotter project.
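The sketch is essentially the classic Schotter reconstruction (after Georg Nees): a grid of squares whose rotation and displacement grow row by row. A minimal version, with the margins and grid size tuned by eye:

```processing
void setup() {
  size(460, 600);
  background(255);
  stroke(0);
  noFill();
  int cell = 20;
  for (int row = 0; row < 22; row++) {
    for (int col = 0; col < 12; col++) {
      float disorder = row / 22.0;   // 0 at the top, approaching 1 at the bottom
      pushMatrix();
      translate(110 + col * cell + cell / 2 + random(-disorder, disorder) * cell,
                80 + row * cell + cell / 2 + random(-disorder, disorder) * cell);
      rotate(random(-disorder, disorder) * HALF_PI);
      rect(-cell / 2, -cell / 2, cell, cell);
      popMatrix();
    }
  }
}
```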
Over at KinectHacks, DieTapete used the Kinect to spice up his Christmas party. The setup captures people dancing in the dark with the Kinect and projects their silhouettes back onto the wall, filled in with psychedelic colors.
Critique
It looks like he just projects the image on the wall, with the people replaced by colors. I would like to see this projected onto the dancers themselves, covering each person in light. You could also use different colors for different people, and combine their colors if they started dancing together! This confirms that the Kinect works fine at picking people out of a really dark environment. It suggests the Kinect could be used in many dark situations, and I really like the idea of using it at parties.
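You can prototype the color-replacement half without a Kinect at all; this Processing sketch uses webcam frame differencing as a crude stand-in for the Kinect’s person segmentation (the threshold and the cycling hue are my own choices):

```processing
import processing.video.*;

Capture video;
PImage prev;

void setup() {
  size(640, 480);
  colorMode(HSB, 255);
  video = new Capture(this, width, height);
  video.start();
  prev = createImage(width, height, RGB);
}

void draw() {
  if (!video.available()) return;
  video.read();
  video.loadPixels();
  prev.loadPixels();
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    float diff = abs(brightness(video.pixels[i]) - brightness(prev.pixels[i]));
    // moving pixels get a hue that drifts over time; still pixels stay black
    pixels[i] = diff > 30 ? color((frameCount * 2 + i / width) % 255, 255, 255) : color(0);
  }
  updatePixels();
  prev.copy(video, 0, 0, width, height, 0, 0, width, height);
}
```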
– Ward
Over at CreativeApplications.net, a team from AirCord has brought the iPad into the third dimension. By placing a glass prism with a special film on top of an iPad running a split-screen app written in Open Frameworks, the team creates a hologram viewable from all angles. I find it interesting how they calculated the angles and properly rendered the image three times on the iPad, so each face of the prism would reflect the appropriate view. I also think it’s really interesting to see someone improving so drastically on an already amazing and popular device.
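A crude mockup of that split-screen idea in Processing’s P3D (AirCord’s app is openFrameworks, and the geometry below is eyeballed rather than their calibrated angles): the same object is drawn three times, each copy rotated 120 degrees so it faces its own prism face.

```processing
void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(0);
  // draw the same spinning object three times, one copy per prism face
  for (int face = 0; face < 3; face++) {
    float a = face * TWO_PI / 3 - HALF_PI;
    pushMatrix();
    translate(width / 2 + cos(a) * 180, height / 2 + sin(a) * 180, 0);
    rotateZ(a + HALF_PI);            // point each copy's "up" away from the center
    rotateY(frameCount * 0.02);      // shared spin so the three views agree
    stroke(0, 255, 200);
    noFill();
    box(80);
    popMatrix();
  }
}
```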
Critique
Although the prism still allowed the team to use the iPad’s microphone, it totally obscured the touchscreen. If they could somehow make the faces of the prism touch-sensitive like the iPad screen, that would be something even more amazing.
This project suggests that the iPad could be used as an engaging 3D presentation tool, especially on a table with people gathered around it.
– Ward
Over at KinectHacks, Bener Suay used his Kinect to control the Nao robot! The robot mimics his moves, using his initial location and arm height as the origin of a coordinate system. I find this really interesting because it is amazing to see a little robot copying what a person is doing. This is probably as close as we currently are to Avatars.
Critique
His calibration moves are broad and clumsy. Perhaps there are subtler moves he could make that the Kinect would still reliably detect to switch on the robot and its motors. The robot could even mimic how he slumps and then rises, so there is complete visual continuity between the two of them. This suggests that the Kinect is pretty solid when it comes to positioning. It looks like more projects could be built on the assumption that the Kinect can establish an origin and not lose it.
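The calibration trick amounts to fixing an origin and a scale, then expressing later joint positions relative to them. A hypothetical Processing-style helper (not Suay’s code; the names and the [-1, 1] output range are my invention):

```processing
PVector origin = new PVector();   // captured at calibration time
float armSpan = 1.0;              // the user's measured reach, used as the unit scale

void setup() {
  // pretend calibration: torso at (0.2, 1.1, 2.4) meters, reach 0.7 m
  calibrate(new PVector(0.2, 1.1, 2.4), 0.7);
  // a tracked hand position later on...
  PVector cmd = toRobotFrame(new PVector(0.6, 1.3, 2.4));
  println(cmd);   // roughly (0.57, 0.29, 0.0)
}

void calibrate(PVector torso, float measuredSpan) {
  origin.set(torso);
  armSpan = measuredSpan;
}

// express a joint position relative to the stored origin, normalized to [-1, 1]
PVector toRobotFrame(PVector joint) {
  PVector rel = PVector.sub(joint, origin);
  rel.div(armSpan);
  rel.x = constrain(rel.x, -1, 1);
  rel.y = constrain(rel.y, -1, 1);
  rel.z = constrain(rel.z, -1, 1);
  return rel;
}
```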
– Ward
This Schotter is intentionally rendered a bit shorter to keep the page from scrolling too long.
Didn’t really get this one working the way I wanted: “raining WTF” in openFrameworks, built from the OpenCV and particle examples found on the OF forum. The next step is to use the Kinect’s depth camera for more accurate blob detection, in combination with the background subtraction.
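For what it’s worth, the falling-particles half is easy to sketch in plain Processing rather than openFrameworks; here the blob mask is faked as a fixed region, where the real project would substitute the OpenCV (and eventually Kinect) silhouette:

```processing
String word = "WTF";
float[] xs = new float[60];
float[] ys = new float[60];

void setup() {
  size(640, 480);
  textSize(18);
  fill(255);
  for (int i = 0; i < xs.length; i++) {
    xs[i] = random(width);
    ys[i] = random(-height, 0);
  }
}

// stand-in for the OpenCV blob mask: pretend the lower third is a person
boolean blocked(float x, float y) {
  return y > height * 0.66;
}

void draw() {
  background(0);
  for (int i = 0; i < xs.length; i++) {
    if (!blocked(xs[i], ys[i] + 2)) ys[i] += 2;   // fall unless resting on the mask
    if (ys[i] > height) {                          // recycle letters that slip off-screen
      ys[i] = random(-100, 0);
      xs[i] = random(width);
    }
    text(word.charAt(i % word.length()), xs[i], ys[i]);
  }
}
```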