
Caroline

06 Mar 2013

Inspiration/ References:

  • Here is an awesome article about how virtual reality does and does not create presence.
  • Virtual presence through a map interface. Click.
  • Making sense of maps. Click.
  • Obsessively documenting where you are and what you are doing. Surveillance. Click.
  • Gandhi in Second Life. Click.
  • Camille Utterback’s Liquid Time.
    • (Bueno) Ah, could you imagine what it would be like to have a video recording spanning years?
  • Love of listening to other people’s stories. Click.
  • (Bueno) Archiving virtual worlds


Thoughts:

  • Carnegie Mellon and “making your mark”. People really just pass through places, and I think there is a kind of nostalgia in that.
    • (Bueno) I think an addendum to that thought is that what really “scrubs” places of our presence is just other people. Sure, nature reclaims everything eventually, but there is a fantastic efficiency in the way human beings repurpose/re-experience space.


  • artistic photos of old places


Technically Helpful:


Sound-scrubbing library for Processing.
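
The library itself isn’t linked here, but the core interaction is easy to sketch in Processing with the Minim library: map the mouse to a position in the file and cue the player there. A minimal sketch, assuming a placeholder file “loop.mp3” in the sketch’s data folder:

```java
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(400, 100);
  minim = new Minim(this);
  player = minim.loadFile("loop.mp3");  // placeholder filename
  player.loop();
}

void draw() {
  background(0);
  // Drag to scrub: map mouse x to a playback position in milliseconds.
  if (mousePressed) {
    player.cue(int(map(mouseX, 0, width, 0, player.length())));
  }
  // Draw the current playhead.
  float x = map(player.position(), 0, player.length(), 0, width);
  stroke(255);
  line(x, 0, x, height);
}
```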

Andy

06 Mar 2013

So, for better or for worse, I think one of my greatest sources of inspiration is to learn how to use a new tool and then demonstrate proficiency with it. For my current idea for the interaction project, perhaps two tools. I guess my basic idea is pretty simple: I want to take real objects and put them into video games. Below is my first attempt to do so: a depth map of my body sitting in a chair, which I was able to import into Unity via the RGBD system.

bodyinunity

Here is the image at the intermediate steps; perhaps you can see it better this way:

mesh0

So we are a long way away from a good-looking representation, much less the possibility of recombining depth maps to get true 3D objects in Unity (an idea), but I really want to become more proficient in Unity as well as spend some time with RGBD, so I like the idea of playing around and seeing what I can make possible in a space which is (to my knowledge) very unexplored.
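
For anyone curious about the underlying trick, the conversion from a depth image to importable geometry is simple enough to sketch. This is not the RGBD toolkit’s actual pipeline, just a minimal Processing version of the idea: triangulate a grayscale depth frame into an OBJ mesh that Unity can import. The filename “depth.png” and the scale factors are placeholders.

```java
// Turn a grayscale depth image into an OBJ mesh.
PImage depth;

void setup() {
  depth = loadImage("depth.png");  // placeholder: an exported depth frame
  depth.loadPixels();
  int w = depth.width, h = depth.height;
  PrintWriter obj = createWriter("body.obj");

  // One vertex per pixel; brightness stands in for distance.
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      float z = brightness(depth.pixels[y * w + x]) * 0.01;
      obj.println("v " + x * 0.01 + " " + (h - y) * 0.01 + " " + z);
    }
  }
  // Two triangles per pixel quad (OBJ vertex indices are 1-based).
  for (int y = 0; y < h - 1; y++) {
    for (int x = 0; x < w - 1; x++) {
      int i = y * w + x + 1;
      obj.println("f " + i + " " + (i + 1) + " " + (i + w));
      obj.println("f " + (i + 1) + " " + (i + w + 1) + " " + (i + w));
    }
  }
  obj.flush();
  obj.close();
  exit();
}
```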

WHERE IS THE ART?

Definitely a question on my mind. The super-cool idea I have is to use RGBD and augmented reality to let me create levels for a video game from the objects around my house: recording the 3D surfaces, and assigning spawn points, enemy locations, and other features based on AR markers which I can place in the scene. The result could be a hopefully cool and creative hybridization of tabletop and video games, allowing users to create their own worlds and then play them.

I’m also curious to see who shoots the first sex tape in RGBD, but I don’t think I want to be that guy.

Patt

06 Mar 2013

I have two ideas for this coming project, both of which have to do with making sound/music.

The first one is to combine the Kinect with Ableton Live to create an interactive, real-time performance.

I talked about this video in my LookingOutwards post. I think it’s a great work that combines different tools and brings together different groups of people to create something quite extraordinary. Since I am new to both the Kinect and Ableton Live, this will be a chance for me to explore what is possible. For this project, my goal is to learn how to combine the two pieces of software to create great music, and just something fun to play with.
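
The plumbing for that kind of setup is worth sketching, even though this is not the code from the video. Processing’s oscP5 library can stream values derived from Kinect tracking as OSC messages; Ableton Live doesn’t listen for OSC natively, so a bridge (LiveOSC, Max for Live, or an OSC-to-MIDI utility) is assumed on the receiving end. The port and the address pattern here are made up:

```java
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress bridge;

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 12000);              // local listening port
  bridge = new NetAddress("127.0.0.1", 9000);  // assumed OSC bridge
}

void draw() {
  // Stand-in for a Kinect joint position: the mouse.
  float handX = map(mouseX, 0, width, 0.0, 1.0);
  OscMessage msg = new OscMessage("/live/filter/cutoff");  // made-up address
  msg.add(handX);
  oscP5.send(msg, bridge);
}
```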

My second idea is more hands-on. I want to use an Arduino, conductive materials, and objects you can find around the house, such as paper, fabric, plastic, etc., to make something that can produce interesting sound. I have seen tutorials that teach the basics of how this can be done, but I am trying to come up with a new and interesting way to implement it. One common arrangement is sketched after the link below.

http://hlt.media.mit.edu/?p=1372
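
One way this often gets wired up (an assumption on my part, not the tutorial’s own code): the Arduino reads the conductive material and prints one reading per line over serial, and a Processing sketch turns the readings into pitch with Minim’s oscillator ugens. The serial port index and the 0..1023 reading range are guesses:

```java
import processing.serial.*;
import ddf.minim.*;
import ddf.minim.ugens.*;

Serial port;
Minim minim;
AudioOutput out;
Oscil osc;

void setup() {
  size(200, 200);
  port = new Serial(this, Serial.list()[0], 9600);  // assumed: first port
  port.bufferUntil('\n');
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(220, 0.5, Waves.SINE);
  osc.patch(out);
}

void serialEvent(Serial p) {
  // One numeric reading per line from the Arduino.
  float reading = float(trim(p.readString()));
  if (!Float.isNaN(reading)) {
    // Assume readings in 0..1023, mapped onto a couple of octaves.
    osc.setFrequency(map(reading, 0, 1023, 110, 440));
  }
}

void draw() {
  background(0);
}
```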

Caroline

04 Mar 2013

FFT = Fast Fourier transform

Engineering Terminology for artists

Will be focusing on continuous digital data: 1D sensors and 2D signals (images).

Even buttons have noise. Media artists must deal with noise:


Signals:

amplitude, frequency, period

Timbre: the shape of the wave (ex: square, ragged, curved)

Phase: only meaningful for two waves in relation to each other. Depending on their relative phase, waves can cancel (subtract) or reinforce (add).

Pulse Width Modulation: the duty cycle is the fraction of time the signal is on.

Spatial Frequency: visual signals have it all too (amplitude, frequency, period, and orientation).

Different spatial frequencies convey different things about an image:

high = detail, low = blur

Digital Signals: two numbers characterize the sampling resolution:

Bit Depth

Sampling Rate

Nyquist Rate & Aliasing: nyquist rate is 1/2 the sampling rate. Any frequency higher than  two times the sampling rate will be aliased ( distorted and represented as a lower frequency)

Line fitting: least-squares line fitting (see OpenCV).

Fourier: a way of representing a complex sound as a combination of simple waves. This allows you to re-create a sound, and to see it visually in a spectrogram.

You can also take the FFT of an image (which has orientation, unlike a sound spectrum), and you can reconstruct an image from its FFT.
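
For sound, Minim ships an FFT class that makes the decomposition easy to see. A minimal spectrum display, with a placeholder filename:

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("groove.mp3", 1024);  // placeholder filename
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);  // decompose the current buffer into bands
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}
```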

Noise:

Gaussian noise is most common when observing natural processes

shot noise: bad individual samples (sporadic pops)

Drift noise: linked to time; the sensor gradually degrades.

Filtering:

Local averaging: replace each value with the average of the surrounding values (use a copy buffer).

Median filtering: gets rid of shot noise really quickly.

Winsorized averaging: a combination of median and mean filtering. It cuts off the extreme values and then averages the rest. All three are sketched below.
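
A rough Processing sketch of all three strategies, written for a 1D signal such as a buffer of sensor readings:

```java
void setup() {
  // Quick demo on a noisy ramp with one big pop in the middle.
  float[] sig = new float[20];
  for (int i = 0; i < sig.length; i++) sig[i] = i + random(-1, 1);
  sig[10] = 500;  // shot noise
  println(meanFilter(sig, 2));       // the pop smears into its neighbors
  println(medianFilter(sig, 2));     // the pop vanishes
  println(winsorizedFilter(sig, 2)); // the pop is cut off before averaging
}

// Local averaging: each sample becomes the mean of its neighborhood.
float[] meanFilter(float[] sig, int radius) {
  float[] out = new float[sig.length];  // copy buffer
  for (int i = 0; i < sig.length; i++) {
    float sum = 0; int n = 0;
    for (int j = max(0, i - radius); j <= min(sig.length - 1, i + radius); j++) {
      sum += sig[j]; n++;
    }
    out[i] = sum / n;
  }
  return out;
}

// Median: sort the neighborhood, keep the middle value.
float[] medianFilter(float[] sig, int radius) {
  float[] out = new float[sig.length];
  for (int i = 0; i < sig.length; i++) {
    int lo = max(0, i - radius), hi = min(sig.length - 1, i + radius);
    float[] win = sort(subset(sig, lo, hi - lo + 1));
    out[i] = win[win.length / 2];
  }
  return out;
}

// Winsorized mean: sort, drop the extremes, average what remains.
float[] winsorizedFilter(float[] sig, int radius) {
  float[] out = new float[sig.length];
  for (int i = 0; i < sig.length; i++) {
    int lo = max(0, i - radius), hi = min(sig.length - 1, i + radius);
    float[] win = sort(subset(sig, lo, hi - lo + 1));
    float sum = 0; int n = 0;
    for (int j = 1; j < win.length - 1; j++) { sum += win[j]; n++; }  // skip min and max
    out[i] = (n > 0) ? sum / n : sig[i];
  }
  return out;
}
```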

Convolution kernel filtering (2D): replace each pixel’s value with a weighted combination of its neighbors’. Different weights can be given to different pixels.

Kernel: e.g. 3×3 with equal weights; other kernels can detect edges, etc. (Use ImageJ to write your own filters.)

Gaussian: e.g. 7×7, with weights that fall off toward the corners.
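
A minimal 2D convolution in Processing, using the 3×3 equal-weight kernel described above (swap in other kernels for edge detection and so on; the filename is a placeholder, and this version operates on brightness only):

```java
// Equal weights summing to 1: a box blur.
float[][] kernel = {
  { 1/9.0, 1/9.0, 1/9.0 },
  { 1/9.0, 1/9.0, 1/9.0 },
  { 1/9.0, 1/9.0, 1/9.0 }
};

PImage img;

void setup() {
  size(512, 512);
  img = loadImage("photo.jpg");  // placeholder filename
  img.resize(width, height);
  img.loadPixels();
  noLoop();                      // process a single frame
}

void draw() {
  image(img, 0, 0);
  loadPixels();
  // Skip the 1-pixel border so every pixel has a full neighborhood.
  for (int y = 1; y < img.height - 1; y++) {
    for (int x = 1; x < img.width - 1; x++) {
      float sum = 0;
      for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
          int idx = (y + ky) * img.width + (x + kx);
          sum += kernel[ky + 1][kx + 1] * brightness(img.pixels[idx]);
        }
      }
      pixels[y * width + x] = color(sum);
    }
  }
  updatePixels();
}
```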

Histograms: thresholding – determining foreground and background.

Finding the best threshold: the triangle method usually works. Iso-data thresholding places the threshold at the intersection between the foreground and background curves (see the sketch below).
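
A minimal Processing version of the iso-data idea, in its standard iterative form (which may differ in detail from the method shown in class; the filename is a placeholder): keep moving the threshold to the midpoint between the mean brightnesses of the two classes it separates, until it settles.

```java
PImage img;

void setup() {
  size(512, 512);
  img = loadImage("scene.jpg");  // placeholder filename
  img.resize(width, height);
  img.loadPixels();

  float t = 128, prev = -1;
  while (abs(t - prev) > 0.5) {
    prev = t;
    float loSum = 0, hiSum = 0;
    int loN = 0, hiN = 0;
    for (int i = 0; i < img.pixels.length; i++) {
      float b = brightness(img.pixels[i]);
      if (b < t) { loSum += b; loN++; } else { hiSum += b; hiN++; }
    }
    // Midpoint between the two class means.
    t = (loSum / max(loN, 1) + hiSum / max(hiN, 1)) / 2;
  }

  // Paint foreground white, background black.
  loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    pixels[i] = brightness(img.pixels[i]) > t ? color(255) : color(0);
  }
  updatePixels();
}
```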

Anna

03 Mar 2013

The Kinect presentation last week by James George and Jonathan Minard made me start thinking about all the old-school sci-fi novels I’ve read, so the idea for my interactivity project is unsurprisingly inspired by one of my favorite books, The Demolished Man by Alfred Bester (1951). I’ve always found the book extremely clever, both in its ideas and in its execution, particularly when it comes to Bester trying to depict what communication via telepathy would look like in textual representation. Take this passage, for example.

demolishedman

I completely loved the idea that people talking with their minds would somehow translate differently in space and time compared to normal speech. It not only made the book more engaging, since every page felt like a puzzle, but it also made me wonder about different ways you could represent normal party conversation in a way that better captured its overlapping chaos, serendipitous convergences, and trending topics.

So, with that said, enter my idea: The Esper Room. (‘Esper’ is the term for a telepath in the novel…)

esper_room

I’d like to create a room where everybody entering is given a pin-on microphone adjusted to pick up their voice only. All the microphones would feed into a computer, where openFrameworks or Processing would convert the speech to text and visualize the words according to some pre-selected pattern (“Basket-weave? Math? Curves? Music? Architectural Design?”). Recurring words and phrases would be used as the backbone of the pattern, and the whole visualization could be projected in real-time onto the walls or ceiling of the room.
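
Skipping the hard speech-to-text part, the last step (surfacing recurring words) might look something like this Processing sketch. The transcript is faked as a string here, where the real version would append words arriving from the recognizer:

```java
import java.util.HashMap;

HashMap<String, Integer> counts = new HashMap<String, Integer>();

void setup() {
  size(800, 400);
  textAlign(CENTER, CENTER);
  // Placeholder transcript; a recognizer would feed this incrementally.
  String transcript = "the party the music the music loud voices music";
  for (String w : splitTokens(transcript.toLowerCase())) {
    counts.put(w, counts.containsKey(w) ? counts.get(w) + 1 : 1);
  }
}

void draw() {
  background(0);
  fill(255);
  int i = 0;
  for (String w : counts.keySet()) {
    // Recurring words grow; one-offs stay small.
    textSize(12 + counts.get(w) * 14);
    text(w, (i % 4) * 200 + 100, (i / 4) * 120 + 80);
    i++;
  }
}
```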

Aside from being a nifty homage to 1950s sci-fi, I think this could be an interesting way to realize that people on opposite sides of a party are actually talking about the same topic. Maybe it could bring together the wallflowers of the world. Maybe it could cause political arguments, or deter people from gossiping. In a way, the installation would be like pseudo-telepathy, because you could read the thoughts of people whom you normally wouldn’t be able to hear. I’m interested in seeing whether that fact would have a substantial impact on people’s behavior.

Joshua

28 Feb 2013

Firefly Vision tools + Kangaroo Physics

This uses Grasshopper, a physics engine (Kangaroo), and Firefly (which lets you get data from, and control, connected devices like a camera or an Arduino). It is a sort of awesome experiment that lets computer vision be used directly in the manipulation of CAD objects (spheres, in this case). This basic structure could be used to make some pretty cool stuff. I really like the idea of creating a more humane interface for CAD modeling. Modeling right now works well but is rather dehumanizing in comparison to clay, wire, or pencil and paper. Computers have a tendency to make one feel like one’s brain is turning into jello. Maybe if one could use hand gestures and computer vision to make 3D models, it would be a little less alienating.


https://www.creativeapplications.net/openframeworks/hand-tracking-gesture-experiment-with-iisu-middleware-and-of/

Anna

27 Feb 2013

Good evening. I bring you a brief and cursory ramble (how can it be both brief and a ramble? magic!) about computer-vision-related dazzle-ry. A disclaimer for anyone splitting hairs: most of this post is focused on augmented reality, but that tends to involve a good bit of computer vision.

I’d also like to point out that two of the three things below were discovered via this thoughtful commentary on the influence of filmmaker Georges Méliès on the ideas of augmented reality. You may be familiar with Méliès if you saw the film ‘Hugo’, or read the book it was based upon.

Ring-ring-ring-ring-ring-ring….

It’s cellular, modular, interactivodular…

Raffi song aside, this is an awesome demo of everyday objects turned into complex gadgetry via the recognition of gestures— say, opening a laptop, or picking up a phone. It reminds me a lot of the student project about hand-drawn audio controls that graces this website’s homepage, but what I really like about it is the fact that the system relies not on shape recognition, but instead upon such small and seemingly inconsequential human behaviors. Honestly, who even thinks about the way we open our laptops? It’s just something we do, habitually and subconsciously. To be able to harness that strange subliminal action and use it to transform objects into devices is fascinating to me. I’m also interested in the work that went into projecting the sound so that it seems to originate at the banana.

And now, a word on demonic chairs….

KinÊtre

This demo of a man using a Kinect to make a chair jump around isn’t particularly compelling to me as a stand-alone piece, but I thought the article was worth including because of the implications for animation that it proposes. It really makes a whole lot of sense to use the Kinect to create realistic animations. Sure, you can spend a lot of time in Maya making every incremental movement perfect, or you could capture a motion fluidly and organically by actually doing it yourself. It’s a no-brainer, really, and I bet it’s a lot cheaper than the motion-capture suits they put on Andy Serkis in LotR… or Mark Ruffalo in Avengers…

Did somebody say Avengers? Oh that reminds me…

Jarvis is the new watercolors….

Last post, I made the bold claim that watercolors make everything better, a statement that I could be quickly talked out of believing. When it comes to Jarvis (or really anything Marvel-related), however, I’m much more likely to stand my ground, for better or worse. This is part of the promotional material for the Iron Man 2 movie from a few years ago: an interface that lets you put Iron Man’s helmet on, and also control bits of Tony’s heads-up display via head gestures. Honestly, this looks like something that could be pounded out with very few issues just using FaceOSC and some extra-sparkly sci-fi Stark Industries graphics. But even with a technical background, I am still a sucker for sparkly pseudo-science graphics. Sue me! The Marvel marketing machine tends to be pretty clever, if admittedly gimmicky.
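
For what it’s worth, the FaceOSC half really does seem straightforward: FaceOSC broadcasts face-tracking data over OSC (port 8338 by default), and a Processing sketch just listens with oscP5. The address pattern below follows the widely circulated FaceOSC templates; the glowing circle is a stand-in for the actual HUD graphics, and the scaling factors are guesses:

```java
import oscP5.*;

OscP5 oscP5;
float pitch, yaw, roll;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);  // FaceOSC's default output port
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/orientation")) {
    pitch = m.get(0).floatValue();
    yaw   = m.get(1).floatValue();
    roll  = m.get(2).floatValue();
  }
}

void draw() {
  background(0);
  // Steer a crude HUD element with the head orientation.
  translate(width / 2, height / 2);
  rotate(roll);
  noFill();
  stroke(0, 200, 255);
  ellipse(yaw * 200, pitch * 200, 150, 150);
}
```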

Marlena

26 Feb 2013

I love the idea of computer vision: it’s an excellent system for sensing that lets people interact with machines in a much more natural and intuitive way than typing or other standard mechanical inputs.

Of course, the most ubiquitous form of consumer computer vision has been made possible by cheap hardware, namely the Kinect:

Of course there are plenty of games, both those approved by Microsoft and those made by developers and enthusiasts all over the web [see http://www.xbox.com/en-US/kinect/games for some examples], but there are also plenty of cool applications for tools, robotics, and augmented reality.

Here’s a great example of an augmented reality application that uses the Kinect–it tracks the placement of different sized blocks on a table to build a city. It’s a neat project in its ability to translate real objects into a continuous digital model.

Similarly, there is a Kinect hack that allows the user to manipulate Grasshopper files using gestures [see http://www.grasshopper3d.com/video/kinect-grasshopper ]. It is a great prototype for what is probably the next level of interaction: direct tactile feedback between user and device. This particular example lacks a little polish: its feedback isn’t immediate, and there are other minor experience details that could be improved. For an early tactile interface, though, it does a pretty good job. There are plenty of other good projects at http://www.kinecthacks.com/top-10-best-kinect-hacks/

Computer vision is also incredibly important to many forms of semi- or completely autonomous navigation. For example, the CoBot project at CMU uses a combination of mapping and computer vision to navigate the Gates-Hillman Center. [See http://www.cs.cmu.edu/~coral/projects/cobot/ ]. There are a lot of cool things that can be done with autonomous motion, but the implementation is difficult due to the amount of prediction necessary for navigating a busy area.

Another great application of computer vision is augmented reality. The projects at http://www.t-immersion.com/projects give a good idea of how much augmented-reality work exists: everything from face manipulation to driving tiny virtual cars to applying an interface to a blank wall has been implemented in some form. Unfortunately, it is difficult to make augmented reality feel like a completely immersive experience, because there is always a disconnect between the screen and the surrounding environment. A good challenge to undertake, perhaps, is how to design the experience so that the flow from screen to environment doesn’t break the illusion for the user. Food for thought.

Ziyun

26 Feb 2013

{It’s you – Karolina Sobecka}

I like the way it’s shown in a mirror, which is a much more intuitive way than showing it on a regular screen, even though the result, seeing yourself in it, is the same.

The “non-verbal communication” concept is another aspect that makes this project interesting. If you look carefully, when you’re “morphed” into an animal, your ears tend to be more expressive!


{Image Cloning Library – Kevin Atkinson}

This is an openFrameworks addon which allows you to turn one face into another. It runs in real time, and the result is, I would say, quite seamless.
Ahh… technology… I want to do this with voices!

hey..sad lady..


{Squeal – Henry Chu}

A cute app; I love the face-changing swap…


Nathan

25 Feb 2013

I’m using this as a kind of sounding board for all of the projects that I consider decent: at least a little elegant, interesting conceptually, and beautiful. I will slowly fill this in with more text, but for now it is the culmination of my searching for inspiration.

.fluid – A reactive surface from Hannes Kalk on Vimeo.

Rain Room at the Barbican from rAndom International on Vimeo.

YCAM InterLab + Yoko Ando “Reactor for Awareness in Motion” promotional video from YCAM on Vimeo.

WOODS from Nocte on Vimeo.

Kentucky Route Zero trailer from Cardboard Computer on Vimeo.

One Hundred and Eight – Interactive Installation from Nils Völker on Vimeo.