I saw this presented at TEI recently, and though it sounded cool, I don’t really see the need for it. I agree that it is really great to combine new media technology and tangible objects like writing on a scroll, but inventing a new method seems a bit much, although some of the features, like marking up pictures, are very neat.
This is a really cool concept. I like the idea of virtually existing in a galaxy space while you are on a physical swing, and I think it is cool that they used a Kinect. However, I don’t really see why they used a Kinect instead of a regular motion sensor. It doesn’t seem to be doing anything more than zooming in and out of the star field.
These are really neat! I like the interaction with a virtual turntable to create a beat. It is like taking one of the DJ apps for an iPhone and making it into a single unit. This is cool, but it also makes you think about how simple it really is to make this with a touch screen on a device that can do so many other things, although this version uses an actual turntable arm and light patterns to create the beat in a way that is different than coding a touchscreen.
This is another Looking Outward for Interactivity where I wanted to focus on Augmented Reality.
Street Art + AR
Sweza is a German street artist who uses QR codes to link to his artwork. I couldn’t embed the video, but here’s a link to it. QR codes seem slightly clichéd now, but I thought AR fiducial markers serve the same purpose. At one point in the video, Sweza talks about a project where a QR code on a radio image links to a song, which was pretty cool.
This video led me to another project by Sweza called GraffYard which uses QR Codes to preserve graffiti after it is removed. He places a QR Code in the exact location which resolves to an image of the original.
Virtual Graffiti Apps
There are a bunch of apps that let you “virtually” place graffiti at a location using AR, like ARTags, ARStreet, Street Tag, AirPainter, Layar, and finally, in 3D, Scrawl.
So, I’m going to be posting a lot of interactive art I’ve found over the last few weeks. I would have uploaded this earlier, but I was really caught up in international travel issues.
…summary paragraphs coming soon….
#1: Concentricity, Interactive Light Sculpture by Joshua Kirsch:
So I really liked this project because it involves interactivity AND light. Based on the project video, the user is able to control the mechanism and the corresponding light simply by physically interacting with the device. By either pulling the inner doodad outwards or pushing it around the sides, the lights go through various patterns. I like this concept because I find it subtly immersive, just enough to get the user to use it for some optimal length of time. Light travels through its surrounding space, and with this piece, the user is capable of controlling a larger medium than they are most likely used to. A possible revision could include the machine “remembering” what movements the user made and then repeating the light changes and physical moves so that the user feels like they created an actual light presentation.
#2: Self-Portrait by rAndom and Incubator
Essentially, this is a subtly interactive art piece that continuously draws a snapshot of the audience looking at it on a blank canvas. Every time it snaps an image, it erases and recolors the canvas to make the new image appear. The passive interactivity is something I can appreciate. I imagine seeing this at an actual art show. At a certain point, the audience may feel a bit less involved, in that the procedure of the art dictates a very limited scope of change. I feel like this could be solved by changing the color of the ink to correlate with how close or far the person in the audience is. This would be a next step that I could see really helping this piece in an exhibit.
So this is a software art app that allows the user to interact with it to create a desired effect. You can change the movements of the on-screen particles based on gravity or direction, and you can make some very pretty patterns. I particularly like how the app is designed based on the original Gravilux, which was released in 1998 in galleries and museums around the world. The iPad itself is pretty great for several reasons, but the main one I find increasingly awesome is that the touch interactivity feels very natural. I just purchased an iPad, and while I don’t have the programming abilities to make a full-fledged app, I feel like if I could somehow involve my iPad, I’d be able to come up with some really creative solutions to my interactivity.
Light Drive is a stop-motion video by Kim Pimmel. Each frame is a long-exposure photograph of lights controlled by an Arduino, which is remotely controlled via Bluetooth and drives a stepper motor. The light sources include cold cathode case lights, EL wire, lasers, etc.
David Haylock’s Project (Dance)
This is still a work in progress by David Haylock, but it looks interesting. It uses a 10k Sanyo projector, a 10m by 7m cyc, Quartz Composer, Max/MSP/Jitter, and the Kinect to augment a dance performance by projecting the dancers’ previous movements on the screen behind them. I thought the result was really pretty for something that seems like a fairly simple idea.
Kinect Cat Sequencer
I’m always amazed at the lack of interactive pet software out there. Sometimes I’ll see a few projects here and there, but the Kinect Cat Sequencer seemed like an interesting idea. It uses the Kinect to trigger audio samples based on how full the cat’s plates are. Code & details are available here.
Kinetic Art – Dynamic Structure
The video is of a set of 32 independently moving lines that are controlled by an integrated computer system. The video shows some of the constantly changing ordered and random structures that appear and disappear.
The reacTogon is an interface that enables people to build arpeggios using hexagonal blocks. The blocks are placed on a board that represents all of the notes on the harmonic table. Some blocks have certain effects that alter the sequence or direction of the arpeggio. There are also controls for other aspects of the sound. This is similar to the augmented reality project Golan showed in class but allows the user to make more refined sequences and patterns.
Despite the not-so-subtle innuendo that is evident in the design of this remote, the idea of having objects that become alive only when you need to use them is intriguing. This remote in its sleep state turns into a floppy gel but becomes rigid when the user grabs it. I think there is probably a use for this type of interaction but Panasonic has not found it.
Last but not least, my favorite tangible interaction project ever.
This project was inspired by my recent experiences with learning and teaching origami. I had a 3-week residency at the Children’s Museum where I taught people of all ages how to make paper sculptures, from simple animals to more complex modular and tessellating patterns. I became really interested in the way people approached the folding: they were either eager to learn and confident in their ability to mold the paper into the desired form, or extremely skeptical that the result was going to work out. I still can’t see patterns beyond a certain complexity…I had no idea this:
becomes this:
I began thinking about how this mapping from 2D to 3D on a piece of paper occurs. There has been a lot of research on projecting line drawings onto different planes to see when we perceive a 3D object.
At first, I thought that an autostereogram would help demonstrate the potential for a 2D object to become 3D, but then I realized it was a bit limiting, since it cannot really be perceived and compared as rapidly as other 3D viewing techniques. So I decided to go with stereo vision. There is a Stereo library for Processing that is fun to play with, but in the end it seemed like it would be easiest to write my own version.
The first program allows you to draw lines or trails with the mouse and then create a stereo version of the image. It then shifts the image in place to pop out or pop in, generating a flip-book of sorts from the image you created.
The second program allows you to draw points on the canvas and then watch as lines randomly form between them to see what kind of 3D objects you can make with a given point range.
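Neither program’s source is posted here, but the core red/green trick behind the first one is small. Below is a minimal Processing sketch of the idea, assuming a simple anaglyph rendering with a fixed pixel offset; the variable names, the offset value, and the key-toggle are my own assumptions, not the original code.

// Minimal sketch of the anaglyph idea: drag the mouse to draw a trail,
// then press any key to render it twice, in red and green, with a small
// horizontal offset so it pops out (or in) through red/green glasses.
ArrayList<PVector> trail = new ArrayList<PVector>();
boolean stereo = false;
int offset = 6;  // horizontal shift in pixels; sign and size control pop-in vs. pop-out

void setup() {
  size(600, 600);
  strokeWeight(2);
}

void draw() {
  background(0);
  if (!stereo) {
    stroke(255);
    drawTrail(0);
  } else {
    blendMode(ADD);         // let the red and green layers mix where they overlap
    stroke(255, 0, 0);      // left-eye copy in red
    drawTrail(-offset);
    stroke(0, 255, 0);      // right-eye copy in green
    drawTrail(offset);
    blendMode(BLEND);
  }
}

void drawTrail(int dx) {
  for (int i = 1; i < trail.size(); i++) {
    line(trail.get(i - 1).x + dx, trail.get(i - 1).y,
         trail.get(i).x + dx, trail.get(i).y);
  }
}

void mouseDragged() {
  trail.add(new PVector(mouseX, mouseY));
}

void keyPressed() {
  stereo = !stereo;
}

The flip-book effect described above would presumably come from animating that offset over time rather than toggling it once.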
Here is a video illustrating what they can do (watch with red/green glasses if you can!)
The longer-term goal is to combine this with origami crease-finding algorithms such as Robert Lang’s TreeMaker: http://www.langorigami.com/science/computational/treemaker/treemaker.php#
and see if this could be a tool for designing 3D origami forms that seem like they would be impossible.
Harvesting your own bone… In this particular speculative project, the user can insert extra bone cells into their arm to grow their own artifacts, either to share with other people or just to keep for themselves. Though highly speculative, I think it is an interesting perspective to use our own flesh and blood to create artifacts or actual products.
[vimeo https://vimeo.com/6792724 w=500&h=400]
Another project by Mike Thompson, this lamp interestingly seeks to make people aware of the value of light. Like his other bone project, this lamp more or less seeks to create a discussion about the particular ethos of such an object. I think for the sake of discussion it’s really provocative, and the interaction is an appropriate one, but from my point of view, it doesn’t really go beyond the initial surface-level discussion. I guess I just don’t sense much depth beyond that.
http://labs.teague.com/?p=1451
Here’s an example of a very functional (albeit a bit difficult to wrap your head around at first) type of interactive design. What’s neat is that it’s also open source! You’re free to incorporate it into, or alter it for, your other projects.
feelSpace is actually a cognitive science research project at the University of Osnabrück, but I think it is a strong inspiration for artistic interaction. The project consists of a belt lined with vibration motors and an electronic compass. The section of the belt that points to magnetic north is always vibrating slightly, which allows the wearer to feel his or her heading. After wearing the belt for six weeks, users reported that they became accustomed to navigating with their sixth sense.
So while it was designed to research how the brain adapts to new sensory sources, there’s a lot of artistic potential in making intangible information always present to us as “new” senses. As an aside, this project inspired the Stalker Sensor, a belt I made that gives the wearer a crude approximation of the distance of objects behind them.
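Neither belt’s firmware is published in these posts, but the core mapping is simple enough to sketch. Assuming a ring of evenly spaced vibration motors and a compass heading in degrees (the motor count, indexing, and function name below are my guesses, not feelSpace’s actual design), picking the motor that points toward magnetic north could look like this in Processing/Java:

int motorCount = 12;  // assumed number of motors spaced evenly around the belt

// Given the wearer's heading in degrees (0 = facing magnetic north), return the
// index of the motor currently pointing north, with motor 0 assumed to sit at
// the wearer's front and indices increasing clockwise.
int motorPointingNorth(float headingDegrees) {
  float northAngle = (360 - headingDegrees) % 360;  // angle of north relative to the wearer's front
  float sector = 360.0f / motorCount;               // angular width each motor covers
  return round(northAngle / sector) % motorCount;
}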
These projects by Ludwig Zeller also take on psychological and cognitive science topics, but come from an art and design perspective without a focus on research. The three devices, Dromolux, Optocoupler, and Introspectre, explore how technology can be used to alter mental ability as a reflection on the intended and unintended consequences of living in an always-connected world. I particularly enjoy Optocoupler (pictured), which is designed as a “digital depressant” or relaxation chamber. I can’t speak to how well it actually works, but I think the form is interesting (sticking your head inside a TV) and the light patterns are beautiful. It’s very reminiscent of scenes from Kubrick’s 2001: A Space Odyssey.
Niklas Roy’s My Little Piece of Privacy is a great example of the fun to be had in overengineering. Roy’s studio has a large window facing the street, and to gain some “privacy”, he mechanized a tiny curtain so that it would move to block the view of passersby. This instantly creates a game where people on the street are distracted by something moving as they walk past and then try to outsmart the curtain and see into the studio. As a result, there’s probably even less privacy, but despite the title, I don’t think that was the point. I like this because it’s fun, conceptually simple, and the engineering is well done and well documented.
Last, we have Michael Cross’s Bridge 1. The interaction here is purely mechanical, but I find the result incredibly compelling. Inside an old church flooded with water, visitors walk across a bridge that appears in front of them and disappears behind them, leaving them apparently in the middle of the water with no way back. So while there’s nothing computational about this piece, I include it because it’s one of my favorite interactive pieces, and the feelings it evokes and explores can inspire many other projects.
The name… says it all? A MIDI drum kit is used to coordinate electrical shocks directed at the bisected bodies of preserved frogs. Morbid? Sure. An ingenious interaction? Undoubtedly.
A collaboration between the MIT Media Lab and the Harvard School of Design led to this little gem, a series of video games based on rope-related activities throughout the world. In the age of Kinect and Wii, it’s refreshing to see an input method so thoroughly grounded in a tactile medium.
You need to fast-forward quite a bit to get the point of this clever video game / real life mashup. The gimmick here is that you can buy physical action figures which are then “transported” into a video game, unlocking each figurine as a playable in-game avatar. Not only is it an ingenious marketing strategy, it redefines the concept of “downloadable content” and playfully blurs the line between game and reality. If I were ten years younger, I would be eating this stuff up.