Category Archives: looking-outwards

Dev

27 Feb 2013

Computer Vision

Reconstructing Rome

Paper for those interested: http://www.cs.cornell.edu/~snavely/publications/papers/ieee_computer_rome.pdf

This was recommended by a friend of mine who is really into CV. This project is amazing – it takes photos from social image-sharing sites like Flickr and uses the images, together with their geolocation, to reconstruct 3-D models of Rome’s monuments. Crowdsourced 3-D imaging. Amazing.
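The core technique behind this kind of reconstruction is structure from motion: match features across photos, estimate the cameras, then triangulate 3-D points. As a toy illustration of just the triangulation step (the cameras and point below are made up for the example; the real system estimates everything from the photos themselves), a numpy sketch:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its projections in two cameras via the
    direct linear transform (DLT). P1, P2 are 3x4 projection matrices;
    x1, x2 are the 2-D image points of the same feature."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null space of A = homogeneous 3-D point
    return X[:3] / X[3]

# Toy example: two cameras one unit apart both see the point (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
h1 = P1 @ np.append(X_true, 1.0); x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); x2 = h2[:2] / h2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0. 0. 5.]
```

Do this for millions of matched features across thousands of tourist photos and you get a point cloud of the Colosseum.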

Faceoff/FaceAPI

http://torbensko.com/faceoff/

FaceAPI is an API developed to track head motion using your computer’s webcam. This is not too far off from what we have seen with FaceOSC. The part that is cool is the fact that this API is integrated into the Source engine: games developed with the Half-Life 2 engine can utilize it to provide head-based gesture control, anything from creating realistic 3-D feedback to zooming in when the player leans closer to the screen. The video goes over this in detail. I simply like the idea of using head interaction in gaming since webcams are so commonplace now.

Battlefield Simulator

https://www.youtube.com/watch?v=eg8Bh5iI2WY

 

Immersive gaming to the MAX. This is by far the most impressive game controller/reality simulator I have seen. It really engages your entire body. I won’t be able to explain everything, but basically CV is used here to map pixels onto the dome based on the user’s body position. Body gestures are also detected via Kinect to trigger game events. On top of that, the game state is constantly scanned to detect when the player gets hurt; when the player is hurt in-game, paintball guns are triggered to fire, making the pain a reality. Check out the video – it’s long, but worth it.

 

Patt

27 Feb 2013

Computer Vision 

ALT CTRL by Matt Ruby

Excitebike is one of three projects in ALT CTRL, a series of works experimenting with how people interact with different, unfamiliar interfaces in digital systems. The game is controlled by the sound of the user’s voice, detected by a microphone embedded in the helmet. Instead of a normal hand controller, volume controls the gas and frequency is used to change lanes. The project pushes users out of their comfort zone, steering away from what is considered the norm.

Cubepix by Xavi’s Lab at Glassworks Barcelona

Cubepix is an interactive, real-time projection-mapping installation that combines a projector, a Kinect, 8 Arduino boards, openFrameworks, 64 servo motors and 64 cardboard boxes. Users can interact with it to control the motion and illumination of the boxes. This is a project I really like because it integrates software and hardware to create something that exemplifies simplicity. I also like it partly because I am interested in doing a project on projection mapping. I think it would be really fun to play with.

Fabricate Yourself by Karl D.D. Willis

My fascination with 3D-printed objects puts this project at the top of my list. The Kinect is used to capture different poses of people; the depth image is then processed, meshed, and displayed in real time. The meshes are saved as STL files and printed as 3×3 cm models, with dovetail joints added on the sides so pieces can snap together. Ah, it’d be cool to have people do the wave, print them out in 3D, and line a series of them up around the room.
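The depth-to-STL step is conceptually simple: treat each depth pixel as a height and stitch neighboring pixels into triangles. A rough sketch of the idea (not the project’s actual pipeline; real normals, scaling, and the dovetail joints are omitted):

```python
import numpy as np

def depth_to_stl(depth, name="scan"):
    """Turn a small depth image into an ASCII STL mesh: each 2x2 block
    of pixels becomes two triangles."""
    rows, cols = depth.shape
    lines = [f"solid {name}"]
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Corner vertices of this grid cell, z taken from the depth map
            v = [(c, r, depth[r, c]), (c + 1, r, depth[r, c + 1]),
                 (c, r + 1, depth[r + 1, c]), (c + 1, r + 1, depth[r + 1, c + 1])]
            for tri in [(v[0], v[1], v[2]), (v[1], v[3], v[2])]:
                lines.append("  facet normal 0 0 1")   # normals left trivial
                lines.append("    outer loop")
                for x, y, z in tri:
                    lines.append(f"      vertex {x} {y} {z}")
                lines.append("    endloop")
                lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A 3x3 depth map gives a 2x2 grid of cells = 8 triangles
stl = depth_to_stl(np.array([[0.0, 0.1, 0.2],
                             [0.1, 0.2, 0.3],
                             [0.2, 0.3, 0.4]]))
print(stl.count("facet normal"))  # 8
```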

Dev

27 Feb 2013

Interactivity

Myo

The video for this almost looks too good to be true. It’s a simple armband that measures muscle tension and arm movements using accelerometers and other sensors. From this data a variety of interactions can be imagined: users are seen controlling anything from computers to televisions to vehicles. I have always been interested in different forms of off-screen interaction. Something like this is particularly appealing for its broad usage spectrum and its non-intrusive nature.

Oculus Rift

http://www.oculusvr.com/

Commercialized VR! This is really cool. I had heard about Oculus from a friend during CES, where it became very popular. Basically this is a VR headset which claims to be low-latency and to provide a huge field of view. The low latency is a huge part of it, since it enables high-frame-rate activities like video games to be realized. John Carmack, creator of Doom, endorses this project and, if it pans out, will have Doom released for the platform. This is super exciting for gamers like me who are always looking at the future of gaming.

Real World In Video Games

http://arstechnica.com/tech-policy/2013/02/montreal-designer-remains-defiant-plans-to-release-new-counter-strike-map/

More gaming-related news. This article talks about a designer who is adamant about recreating real-world areas as video game maps for Counter-Strike. It’s very interesting and hacky, since he turns places people see every day, like the subway, into warzones. Areas that people might have neglected in real life become well known in-game for different reasons (like good cover). Unfortunately this guy is getting a ton of flak from authorities for doing this, something I think is perfectly legal. How can anyone own the rights to how a place in this world looks?

Keqin

26 Feb 2013

SeeStorm

SeeStorm can produce synthetic video with 3-D talking avatars using computer vision. Going beyond plain real video, users can choose their look for each call, and can generate content (UGC) from just a photo and their voice. It’s a new mode of fun, personalized visual communication: a voice-to-video transcoder that converts voice into video, and a platform for next-level revenue-generating services.

 

Obvious Engineering

Obvious Engineering is a computer vision research, development and content creation company with a focus on surface and object recognition and tracking.

Our first product is the Obvious Engine, a vision-based augmented reality engine for games companies, retailers, developers and brands. The engine can track the natural features of a surface, which means you no longer have to use traditional markers and glyphs to position content and interaction within physical space.

The engine now works with a selection of 3D objects. It’s perfect for creating engaging, interactive experiences that blur the line between real and unreal. And there’s no need to modify existing objects – the object is the trigger.

 

MultiTouch

MultiTouch technology identifies and responds to the movement of hands, while other multitouch techniques merely see points of contact. It’s a good way to put computer vision into the multitouch screen.

Yvonne

26 Feb 2013

Kinects! Kinects everywhere! Sorry! But they’re so cool :o

 

Kiss Controller
Okay… I don’t know if this counts as computer vision… It is interaction though! I think it’s awesome and different, especially since most games are controlled using hands, arms, legs, or bodies as a whole.

 

Virtual Dressing Room
This amused me and I thought the idea, though mentioned a lot in future scenarios and visions, is pretty fun and useful. Or maybe it’s the guy… and the music… and the skirts.

 

Make the Line Dance
I just thought this was really beautiful. It’s basically Kinect skeletal tracking with a series of lines projected onto the human body.

 

Other fun things
More for my personal reference than yours! :P

Kinect Titty Tracker

Fat Cat

Bueno

26 Feb 2013

Ah, computer vision. In retrospect I should have put the Kyle McDonald work I mentioned in my previous looking outwards here. No matter – just an excuse to go out and dig up more work.


Please Smile by Hye Yeon Nam is a fairly simple installation piece. To be honest, the tech side of it doesn’t seem that complex: it can detect the presence of humans and whether they are smiling or not, so it’s not exactly the most thrilling set of interactions. What I do like is the use of the skeletal hands, which seem to point accusingly at you as their default reaction to your presence. It’s like they are punishing you for failing to be more welcoming to them.

Link: http://www.hynam.org/HY/ple.html

Dancing to the Sound of Shadows comes to us from the design group Sembler in collaboration with the Feral Theatre. The project takes the movements from the latter collaborator’s shadow puppet production of The Sound Catcher and uses them to generate real-time music that reflects the live performance. The music itself is inspired by native Indonesian music. It’s a real treat.

Link: http://www.thecreatorsproject.com/blog/dancing-to-the-sound-of-shadows

Lastly is another work from our homeboy James George, in collaboration with Karolina Sobecka. It’s pretty amazing, I think. A dog is projected onto a storefront window and reacts aggressively, defensively, indifferently, or affectionately based on the viewer’s gestures. Unlike the previous skeleton-hand piece, I think here the choice of the dog as the central figure encourages more sustained interest and engagement with the piece. It was done using Unity3D in communication with openFrameworks.

Link: http://jamesgeorge.org/works/sniff.html

Andy

26 Feb 2013

1. Flutter

Flutter is a company that I interviewed with back in September. They use computer vision algorithms to let users control their music programs via gestures recognized by the webcam, so when iTunes is minimized to the tray you don’t need to open it up to pause or skip to the next song. And it’s free!

2. DepthJS

With many of the same goals in mind as Flutter, DepthJS is a software application which uses the Kinect to let users navigate web browsers with gestures. This project raises the question: just because we can use gestures to control something, does that mean we should? It seems to me that the point-and-click interface is far superior to the DepthJS interface in terms of convenience and usability. Gestures will only succeed when they demonstrate that they are better than the status quo, and all I see here is a swipey, touch-screen-like mentality that doesn’t utilize the depth of the Kinect sensor.

3. Kinect Lightsaber

I’m all about this project. Track a stick, overlay it with a lightsaber. I could see myself doing something like this to create an augmented reality game or something like that. Maybe fruit ninja except you have to actually slash with a sword to get the fruit. EDIT: Kinect fruit ninja definitely already exists. Dang.
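The “track a stick” part could plausibly be done by fitting a 3-D line through the stick’s depth points and extending it into a virtual blade. A sketch of that idea only; the function name, PCA approach, and blade length are my own illustration, not how the project actually works:

```python
import numpy as np

def fit_blade(stick_points, blade_length=1.2):
    """Fit a 3-D line through tracked stick points via PCA and extend it
    into a 'blade' segment of the given length (meters, say)."""
    pts = np.asarray(stick_points, dtype=float)
    center = pts.mean(axis=0)
    # Principal axis = first right singular vector of the centered points
    _, _, Vt = np.linalg.svd(pts - center)
    axis = Vt[0]
    tip = center + axis * blade_length
    return center, tip

# Noisy points along the z-axis, as a depth camera might report a stick
rng = np.random.default_rng(0)
pts = np.array([[0, 0, z] for z in np.linspace(0, 0.5, 20)])
pts += rng.normal(0, 1e-3, (20, 3))
hilt, tip = fit_blade(pts)
print(tip - hilt)  # a length-1.2 vector pointing along ±z
```

Render a glowing cylinder from `hilt` to `tip` over the video feed and you have your lightsaber.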

Kyna

26 Feb 2013

Blow-Up

LINK


Blow-Up is an interactive piece wherein a camera is aimed at the viewer, whose image is then broken up and displayed in a seemingly semi-random, fluid arrangement of smaller squares. The overall effect is that of an insect’s compound eye.
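The fractured-square effect can be approximated by tiling the camera image and redrawing each tile from a slightly displaced source location. This is only a guess at the idea; the piece’s actual sampling logic is unknown to me:

```python
import numpy as np

def compound_eye(image, tile=8, max_offset=2, seed=0):
    """Break an image into tile x tile squares and fill each square from a
    slightly displaced source position, fracturing the image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            # Sample this square from a randomly jittered location
            dy, dx = rng.integers(-max_offset, max_offset + 1, size=2)
            sy = int(np.clip(y + dy, 0, h - tile))
            sx = int(np.clip(x + dx, 0, w - tile))
            out[y:y + tile, x:x + tile] = image[sy:sy + tile, sx:sx + tile]
    return out

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in webcam frame
print(compound_eye(frame).shape)  # (64, 64)
```

Re-randomizing the offsets every frame gives the fluid, shimmering quality.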

The Telegarden

LINK


The Telegarden is a piece wherein the audience can participate in the management of a garden via a robot connected to the internet. Users can plant, water and monitor the remote garden through control of an industrial robotic arm. The cooperation of the collective audience is what manages and maintains the garden, which I find very interesting.

Close-Up

LINK


‘“Close-up” is the third piece of the ShadowBox series of interactive displays with a built-in computerized tracking system. This piece shows the viewer’s shadow revealing hundreds of tiny videos of other people who have recently looked at the work. When a viewer approaches the piece, the system automatically starts recording and makes a video of him or her. Simultaneously, inside the viewer’s silhouette, videos are triggered that show up to 800 recent recordings. This piece presents a schizoid experience where our presence triggers a massive array of surveillance videos.’
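The silhouette-reveal idea can be sketched with a binary mask and a mosaic of stored frames: where the viewer’s shadow falls, show tiles of past recordings. How the actual piece lays out its ~800 clips isn’t public, so this is purely illustrative:

```python
import numpy as np

def silhouette_reveal(mask, recordings, tile=16):
    """Fill the viewer's silhouette (mask == 1) with a tiled mosaic of
    stored recording frames; everything outside the mask stays black."""
    h, w = mask.shape
    out = np.zeros((h, w), dtype=float)
    for i, y in enumerate(range(0, h, tile)):
        for j, x in enumerate(range(0, w, tile)):
            # Each tile shows a patch of a different stored recording
            clip = recordings[(i * (w // tile) + j) % len(recordings)]
            out[y:y + tile, x:x + tile] = clip[y:y + tile, x:x + tile]
    return out * mask  # only visible inside the silhouette

mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1        # square "viewer"
recordings = [np.full((64, 64), v, dtype=float) for v in (0.2, 0.5, 0.8)]
mosaic = silhouette_reveal(mask, recordings)
print(mosaic[0, 0], mosaic[32, 32])  # black outside the mask, video inside
```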

Caroline

25 Feb 2013

Pygmies by Pors and Rao (2006-09)

In this playful piece, Rao and Pors create a multitude of personified little creatures. The creatures live around the periphery of the frame, then pop out when they sense that the environment is safe.


http://www.ted.com/talks/aparna_rao_high_tech_art_with_a_sense_of_humor.html

This piece creates a system of little creatures that are extremely simple in form, but are animated in their movement and interaction with their environment. They retreat whenever they are faced with noise, but they ignore background noise. I think this installation succeeds in creating an environment for play, but it might have been more compelling from a formal standpoint.

Scratch and Tickle by George Roland (1996)

In Scratch you are faced with an image of a woman’s back and a voice requesting that you scratch it with your mouse. She then instructs you on how she would like to be scratched, but as time goes on she becomes increasingly insistent and abusive.

SFCI Archive: SCRATCH and TICKLE (1996) from STUDIO for Creative Inquiry on Vimeo.

This is a classic piece, where a very simple interaction is used as a framework to create a relationship and tell a story. I think it is a good example of how the simplest interaction, like a mouse click-and-drag, can create a very compelling piece. I think it is also successful because it requires minimal effort on the part of the user; most of the piece happens in the application itself.

Street View Stereographic by Ryan Alexander

Alexander uses the Google APIs to manipulate Street View into a stereographic, or circular, view.


This isn’t really an art piece as presented here; I am more interested in it because I want to learn more about how he coded it. (All his code is on git!!) It is an interesting visual effect that creates quite a humorous form. I wish they could be globes I could circle around.
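The circular look comes from an inverse stereographic projection: each pixel of the output “little planet” image is mapped back to a longitude/latitude on the panorama sphere, which indexes into the equirectangular source image. A sketch of just the lookup math (this is the standard projection, not necessarily Alexander’s exact code; `fov` is my own scaling knob):

```python
import numpy as np

def stereographic_lookup(out_size=256, fov=2.0):
    """For each pixel of a 'little planet' output image, compute which
    (longitude, latitude) of an equirectangular panorama to sample.
    Projects from the sphere's pole onto a plane; fov scales the view."""
    half = out_size / 2
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    x = (xs - half) / half * fov          # plane coordinates
    y = (ys - half) / half * fov
    r = np.hypot(x, y)
    lon = np.arctan2(y, x)                # angle around the planet
    lat = np.pi / 2 - 2 * np.arctan(r)    # inverse stereographic projection
    return lon, lat

lon, lat = stereographic_lookup()
print(lat[128, 128])  # center of the output looks straight down the pole
```

Sampling the panorama at those angles (e.g. with bilinear interpolation) produces the tiny-globe image.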

Alan

25 Feb 2013

#Google Glass

Google Glass is eyewear that extends human sensation. It integrates Internet services and many sensors into a small device. For the first time, it will make strong augmented reality possible for ordinary people at large scale.

 

#Johnny Cash Project

Again, this is the most impressive crowdsourced art project, made in memory of Johnny Cash. The project divides the music video for the song “Ain’t No Grave” into individual frames and presents them on the Internet. Anyone interested in a certain frame can redraw it however they like. The music video is finally regenerated from contributions by people all over the world.

The Johnny Cash Project from Chris Milk on Vimeo.

 

#Bicycle Built for Two Thousand by Aaron Koblin

Bicycle Built For 2,000 is composed of 2,088 voice recordings collected via Amazon’s Mechanical Turk web service. Workers were prompted to listen to a short sound clip, then record themselves imitating what they heard.


#Swarm Robots

Swarm robots individually have limited abilities, but through coordination they can collectively achieve things that a single powerful robot cannot. This is interesting since it provides us with a different view of intelligence.