Keqin

26 Feb 2013

SeeStorm

SeeStorm produces synthetic video with 3D talking avatars using computer vision. Instead of plain real video, users can choose a different look for each conversation, and they can generate their own content (UGC) from just a photo and a voice recording. It's a new mode of fun, personalization, and visual communication: a voice-to-video transcoder that converts speech into animated video, and a platform for next-level revenue-generating services.

 

Obvious Engineering

Obvious Engineering is a computer vision research, development and content creation company with a focus on surface and object recognition and tracking.

Our first product is the Obvious Engine, a vision-based augmented reality engine for games companies, retailers, developers and brands. The engine can track the natural features of a surface, which means you no longer have to use traditional markers and glyphs to position content and interaction within physical space.

The engine now works with a selection of 3D objects. It’s perfect for creating engaging, interactive experiences that blur the line between real and unreal. And there’s no need to modify existing objects – the object is the trigger.
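
The engine itself is proprietary, but the core idea of markerless, natural-feature tracking can be sketched with off-the-shelf tools. Below is a minimal OpenCV sketch (not the Obvious Engine, just the same general technique): detect ORB keypoints in a reference photo of the surface, match them against each webcam frame, and estimate a homography that outlines where the surface sits. The file name reference.jpg and all parameters are placeholders.

```python
# Minimal natural-feature tracking sketch (not the Obvious Engine itself):
# locate a known surface in a webcam frame via ORB features + homography.
import cv2
import numpy as np

reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)  # photo of the surface to track
orb = cv2.ORB_create(1000)
ref_kp, ref_des = orb.detectAndCompute(reference, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is not None and len(kp) > 10:
        matches = sorted(matcher.match(ref_des, des), key=lambda m: m.distance)[:50]
        if len(matches) >= 4:
            src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                h, w = reference.shape
                corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
                outline = cv2.perspectiveTransform(corners, H)
                cv2.polylines(frame, [np.int32(outline)], True, (0, 255, 0), 3)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```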

 

MultiTouch

MultiTouch technology identifies and responds to the movement of whole hands, while other multitouch techniques merely see points of contact. It's a good example of putting computer vision into the multitouch screen.

Yvonne

26 Feb 2013

Kinects! Kinects everywhere! Sorry! But they’re so cool :o

 

Kiss Controller
Okay… I don’t know if this counts as computer vision… It is interaction though! I think it’s awesome and different, especially since most games are controlled using hands, arms, legs, or bodies as a whole.

 

Virtual Dressing Room
This amused me and I thought the idea, though mentioned a lot in future scenarios and visions, is pretty fun and useful. Or maybe it’s the guy… and the music… and the skirts.

 

Make the Line Dance
I just thought this was really beautiful. It's basically Kinect skeletal tracking with a series of lines projected onto the human body.

 

Other fun things
More for my personal reference than yours! :P

Kinect Titty Tracker

Fat Cat

Bueno

26 Feb 2013

Ah, computer vision. In retrospect I should have put the Kyle McDonald work I mentioned in my previous looking outwards here. No matter – just an excuse to go out and dig up more work.


Please Smile by Hye Yeon Nam is a fairly simple installation piece. To be honest, the tech side of it doesn't seem that complex. It can detect the presence of humans and whether or not they are smiling, so not exactly the most thrilling set of interactions. What I do like is the use of the skeletal hands, which seem to point accusingly at you as their default reaction to your presence. It's like they are punishing you for failing to be more welcoming to them.

Link: http://www.hynam.org/HY/ple.html
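
For reference, the two judgments the piece needs (is someone there, and are they smiling?) are within reach of stock tools. Here is a rough sketch using OpenCV's bundled Haar cascades; this is almost certainly not how Nam built it, just the same general idea.

```python
# Rough sketch of "presence + smile" detection with OpenCV's stock Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)  # strict params to cut false positives
        label = "smiling" if len(smiles) > 0 else "not smiling"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("please smile", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```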

Dancing to the Sound of Shadows comes to us from the design group Sembler in collaboration with the Feral Theatre. The project takes the movements from the latter collaborator's shadow puppet production of The Sound Catcher and uses them to generate real-time music that reflects the live performance. The music itself is inspired by native Indonesian music. It's a real treat.

Link: http://www.thecreatorsproject.com/blog/dancing-to-the-sound-of-shadows

Lastly is another work from our homeboy James George, in collaboration with Karolina Sobecka. It's pretty amazing, I think. A dog is projected into a storefront window and reacts aggressively, defensively, indifferently, or affectionately based on the viewer's gestures. Unlike the previous skeleton hand piece, I think here the choice of the dog as the central figure encourages more sustained interest and engagement with the piece. It was done using Unity3D in communication with openFrameworks.

Link: http://jamesgeorge.org/works/sniff.html

Andy

26 Feb 2013

1. Flutter

Flutter is a company that I interviewed with back in September. They use computer vision algorithms to let users control their music programs via gestures recognized by the webcam, so when iTunes is minimized to the tray you don't need to open it to pause or skip to the next song. And it's free!

2. DepthJS

With many of the same goals in mind as Flutter, DepthJS is a software application which uses the Kinect to allow users to navigate web browsers with gestures. This project raises the question: just because we can use gestures to control something, does that mean we should? It seems to me that the point-and-click interface is far superior to the DepthJS interface in terms of convenience and usability. Gestures will only succeed when they demonstrate that they are better than the status quo, and all I see here is a swipey, touch-screen-like mentality that doesn't utilize the depth information the Kinect sensor provides.

3. Kinect Lightsaber

I'm all about this project. Track a stick, overlay it with a lightsaber. I could see myself doing something like this to create an augmented reality game or something like that. Maybe Fruit Ninja, except you have to actually slash with a sword to get the fruit. EDIT: Kinect Fruit Ninja definitely already exists. Dang.
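
For what it's worth, the basic "track a stick, overlay a lightsaber" idea can be faked without a Kinect at all. A toy OpenCV sketch, assuming a brightly colored (here green) stick and a plain webcam; the HSV color range is a made-up parameter you would tune for your own prop, and the real project of course uses the Kinect's depth data instead.

```python
# Toy "lightsaber" overlay: color-threshold a stick, fit a line, draw a glowing blade.
import cv2
import numpy as np

LOWER = np.array([40, 80, 80])     # assumed HSV range for a green stick; tune for your prop
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    pts = cv2.findNonZero(mask)
    if pts is not None and len(pts) > 100:
        # Fit a line to the stick pixels and size the blade to the stick's extent.
        x, y, w, h = cv2.boundingRect(pts)
        half = 0.6 * np.hypot(w, h)  # blade slightly longer than the stick
        vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        p1 = (int(x0 - half * vx), int(y0 - half * vy))
        p2 = (int(x0 + half * vx), int(y0 + half * vy))
        glow = np.zeros_like(frame)
        cv2.line(glow, p1, p2, (255, 80, 80), 25)    # soft outer glow
        cv2.line(glow, p1, p2, (255, 255, 255), 8)   # bright core
        glow = cv2.GaussianBlur(glow, (31, 31), 0)
        frame = cv2.addWeighted(frame, 1.0, glow, 0.9, 0)
    cv2.imshow("lightsaber", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```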

Kyna

26 Feb 2013

Blow-Up

LINK


Blow-Up is an interactive piece in which a camera is aimed at the viewer, whose image is then broken up and displayed in a seemingly semi-random, fluid arrangement of smaller squares. The overall effect is that of an insect's compound eye.
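
The underlying effect is easy to approximate: cut each camera frame into small tiles and redraw every tile with a little random jitter, so the image reads like a compound eye. A quick sketch; the tile size and jitter amount are arbitrary choices, not values taken from the piece.

```python
# Approximation of a "compound eye" effect: tile the camera image and jitter each tile.
import cv2
import numpy as np

TILE = 32      # tile size in pixels (arbitrary)
JITTER = 6     # max per-tile offset in pixels (arbitrary)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    for y in range(0, h - TILE, TILE):
        for x in range(0, w - TILE, TILE):
            dx, dy = np.random.randint(-JITTER, JITTER + 1, size=2)
            sx = int(np.clip(x + dx, 0, w - TILE))
            sy = int(np.clip(y + dy, 0, h - TILE))
            out[y:y + TILE, x:x + TILE] = frame[sy:sy + TILE, sx:sx + TILE]
    cv2.imshow("compound eye", out)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```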

The Telegarden

LINK


The Telegarden is a piece wherein the audience can participate in the management of a garden via a robot connected to the internet. Users can plant, water and monitor the remote garden through control of an industrial robotic arm. The cooperation of the collective audience is what manages and maintains the garden, which I find very interesting.

Close-Up

LINK


‘”Close-up” is the third piece of the ShadowBox series of interactive displays with a built-in computerized tracking system. This piece shows the viewer’s shadow revealing hundreds of tiny videos of other people who have recently looked at the work. When a viewer approaches the piece, the system automatically starts recording and makes a video of him or her. Simultaneously, inside the viewer’s silhouette videos are triggered that show up to 800 recent recordings. This piece presents a schizoid experience where our presence triggers a massive array of surveillance videos.’

Ziyun

26 Feb 2013

{It’s you – Karolina Sobecka}

I like the way it's shown in a mirror, which is much more intuitive than showing it on a regular screen, even though the result, seeing yourself in it, is the same.

The “non-verbal communication” concept is another aspect that makes this project interesting. If you look carefully, when you're “morphed” into an animal, your ears tend to be more expressive!

 

{Image Cloning Library – Kevin Atkinson}

This is an openFrameworks addon that allows you to turn one face into another.. it runs in real time and the result is, I would say, quite seamless.
ahh.. technology.. I want to do this with voices!

hey..sad lady..
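
For the curious: the addon does real-time image cloning, and a related (if offline and simpler) operation ships with OpenCV as cv2.seamlessClone, which does Poisson blending. So a crude approximation of pasting one face into another image might look like the sketch below; the file names and the elliptical mask are placeholders, not anything from Atkinson's code.

```python
# Rough, offline approximation of face cloning using OpenCV's gradient-domain blend.
# "source_face.jpg", "target.jpg", and the ellipse mask are placeholder assumptions.
import cv2
import numpy as np

src = cv2.imread("source_face.jpg")   # face to paste (should fit inside the target)
dst = cv2.imread("target.jpg")        # image to paste it into

# Simple elliptical mask around the source face region.
mask = np.zeros(src.shape[:2], dtype=np.uint8)
h, w = mask.shape
cv2.ellipse(mask, (w // 2, h // 2), (w // 3, h // 2 - 10), 0, 0, 360, 255, -1)

center = (dst.shape[1] // 2, dst.shape[0] // 2)  # where to place the face in the target
blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.jpg", blended)
```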

 

{Squeal – Henry Chu}

A cute app; I love the face-changing swap..

 

Caroline

25 Feb 2013

Pygmies by Pors and Rao (2006-09)

In this playful piece, Rao and Pors create a multitude of personified little creatures. The creatures live around the periphery of the frame, then pop out when they sense that the environment is safe.


http://www.ted.com/talks/aparna_rao_high_tech_art_with_a_sense_of_humor.html

This piece creates a system of little creatures that are extremely simple in form, but are animated in their movement and interaction with their environment. They retreat whenever they are faced with noise, but they ignore background noise. I think this installation succeeds in creating an environment for play, but it might have been more compelling from a formal standpoint.

Scratch and Tickle by George Roland (1996)

In Scratch you are faced with an image of a woman's back and a voice requesting that you scratch it with your mouse. She then instructs you on how she would like to be scratched, but as time goes on she becomes increasingly insistent and abusive.

SFCI Archive: SCRATCH and TICKLE (1996) from STUDIO for Creative Inquiry on Vimeo.

This is a classic piece, where a very simple interaction is used as a framework to create a relationship and tell a story. I think it is a good example of how the simplest interaction, like a mouse click and drag, can create a very compelling piece. I think it is also successful because it requires minimal effort on the part of the user; most of the piece happens in the application itself.

 Street View Stereographic by Ryan Alexander 

Alexander uses the Google APIs to manipulate Street View imagery into a stereographic, or circular, view.


This isn't really an art piece as presented here; I am more interested in it because I want to learn more about how he coded it. (All his code is on git!!) It is an interesting visual effect, and it creates quite a humorous form. I wish they could be globes I could circle around.
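
For anyone else curious about the code, the "little planet" look is a stereographic projection of a full panorama. Below is a minimal numpy/OpenCV sketch of that remapping, assuming an equirectangular panorama saved as pano.jpg; Alexander's version pulls imagery straight from the Street View API, so this is only the projection step, not his pipeline.

```python
# Minimal "little planet" remap: stereographic projection of an equirectangular panorama.
# "pano.jpg" and the output size/zoom are placeholder assumptions.
import cv2
import numpy as np

pano = cv2.imread("pano.jpg")           # equirectangular panorama (width = 2 x height)
H, W = pano.shape[:2]
SIZE = 800                              # output image is SIZE x SIZE
ZOOM = 2.0                              # larger = more of the panorama pulled toward the center

# Plane coordinates for every output pixel, centered on the image.
ys, xs = np.mgrid[0:SIZE, 0:SIZE].astype(np.float32)
x = (xs - SIZE / 2) / (SIZE / 2) * ZOOM
y = (ys - SIZE / 2) / (SIZE / 2) * ZOOM

rho = np.sqrt(x * x + y * y)
lon = np.arctan2(y, x)                  # azimuth around the "planet"
polar = 2.0 * np.arctan(rho)            # angle from the projection pole (inverse stereographic)

# Sample the panorama: columns = longitude, rows = angle down from the top of the sphere.
# (This puts the sky at the center; flip map_y for a ground-at-center planet.)
map_x = ((lon + np.pi) / (2 * np.pi) * (W - 1)).astype(np.float32)
map_y = (polar / np.pi * (H - 1)).astype(np.float32)
planet = cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
cv2.imwrite("little_planet.jpg", planet)
```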

Nathan

25 Feb 2013

I'm using this as a kind of sounding board for all of the projects that I consider decent and at least a little elegant, interesting conceptually, and beautiful. I will slowly fill this in with more text, but for now it is a culmination of my search for inspiration.

.fluid – A reactive surface from Hannes Kalk on Vimeo.

Rain Room at the Barbican from rAndom International on Vimeo.

YCAM InterLab + Yoko Ando “Reactor for Awareness in Motion” promotional video from YCAM on Vimeo.

WOODS from Nocte on Vimeo.

Kentucky Route Zero trailer from Cardboard Computer on Vimeo.

One Hundred and Eight – Interactive Installation from Nils Völker on Vimeo.

Alan

25 Feb 2013

#Google Glass

Google Glass is a pair of glasses that extends human perception. It integrates Internet services and many sensors into a small device. For the first time, strong augmented reality will be possible for ordinary people at a large scale.

 

#Johnny Cash Project

Again, this is the most impressive crowdsourced art project, made to memorialize Johnny Cash. The project divides the music video for the song Ain't No Grave into individual frames and presents them on the Internet. Anyone who is interested in a certain frame can redraw it however they like. The music video is finally regenerated by people from all over the world.

The Johnny Cash Project from Chris Milk on Vimeo.

 

#Bicycle Built for Two Thousand by Aaron Koblin

Bicycle Built For 2,000 comprises 2,088 voice recordings collected via Amazon's Mechanical Turk web service. Workers were prompted to listen to a short sound clip, then record themselves imitating what they heard.


#Swarm Robots

Swarm robots are robots that individually have limited ability but, through coordination, can collectively achieve things that a single powerful robot cannot. This is interesting since it provides us with a different view of intelligence.


Bueno

25 Feb 2013

Greetings, fellows. I offer as intellectual tribute the following code-based works that incorporate interactivity in various ways.


Work number one is from our very own James George along with Chris Milk and Ben Tricklebank, titled The Treachery of Sanctuary. Essentially, the work displays your shadow on one of three white monolithic screens. The work follows a bird motif – depending on the screen, your silhouette may grow wings, may be devoured by birds, or may dissipate into a flock of them. This installation is an interesting convergence of various technologies – Kinect data gleaned through openFrameworks is funneled into Unity. The interaction that occurs is passive – your body is simply the canvas on which the work is produced.

Link: http://jamesgeorge.org/works/treachery.html
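
The write-up doesn't say how the Kinect/openFrameworks side actually talks to Unity; a common way to bridge two apps like this is to stream joint positions over OSC, so here is a purely hypothetical sketch of that pattern using the python-osc package. The address scheme and port are made up.

```python
# Hypothetical sketch of streaming skeleton data to another app (e.g. Unity) over OSC.
# The actual piece may use a different transport; addresses and port are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed OSC listener on the Unity side

def send_skeleton(joints):
    """joints: dict like {"head": (x, y, z), "hand_left": (x, y, z), ...}"""
    for name, (x, y, z) in joints.items():
        client.send_message(f"/skeleton/{name}", [float(x), float(y), float(z)])

# One frame of fake data; a real app would read this from the Kinect every frame.
send_skeleton({"head": (0.0, 1.6, 2.1), "hand_left": (-0.4, 1.1, 2.0)})
```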


 

Work two is from Kyle McDonald. This interactive piece asks for a certain amount of trust from the user. While you keep your eyes closed and hold a pen, the platform on which you have placed your hand moves, resulting in a self-portrait created using computer vision software. It's a blind portrait you would never be capable of drawing, the result of probably the most direct collaboration with a machine I have ever seen.

More here: http://thecreatorsproject.com/blog/you-can-finally-be-an-artist-with-this-self-portrait-machine


 

In my research of figures we haven't discussed much in class (and talking to Golan), I discovered that Intel has its own research division that delves into creative coding. As part of a live event, Doug Carmean and his team created an interactive work in collaboration with Social Print Studio. It took a real-time stream of Instagram photos taken during the event and allowed visitors to sift through them using a Kinect. Not anything particularly revolutionary, but I like the impermanence of the whole thing – it can only exist for the event, and it seems to use that to its advantage.

Link: http://thecreatorsproject.com/blog/artists-and-engineers-create-the-future-of-art-and-technology