Hereafter by UVA

by davidyen @ 8:24 pm 28 February 2010

hereafter by UVA
http://www.uva.co.uk/archives/57

Hereafter is a piece by UVA, a group whose physical computing work is really good and has super high production value. It is one of a few of their pieces based on “mirror” ideas that involve the theme of interacting with video history. As with all their work, the high quality of the finished product and the thought given to context and aesthetics lend a magical quality to the technology that is more bang and less whiz.

Check out “Battles” too.
http://www.uva.co.uk/archives/62

Looking Outwards: Inflatables

by Max Hawkins @ 3:32 pm

I came across this guy’s website a number of years ago. He builds awesome glowing plastic inflatables for parties and concerts.

Best of all he tells you how to make them and encourages you to build your own. I totally want to build one of these.

[AKAirways]

Project 3 Sketch–“Dandelion”

by aburridg @ 7:12 am 24 February 2010

Here is an image of a prototype of the piece:

In terms of interface, this is how I’m planning for it to look.

Right now I’m in the process of figuring out how to have my figure respond to “electromagnetic fields.” I’m going to have the stem of the dandelion and the little seedlings move depending on an electric field, as shown in this Processing example here.
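To make that concrete, here’s a minimal toy version of the kind of field behavior I mean (my own sketch, assuming a single “charge” at the mouse pushing seeds away with an inverse-square falloff; the real piece will use the linked example’s field instead):

```processing
// Toy version of the field idea: seeds pushed away from a single
// "charge" at the mouse, with an inverse-square falloff.
// (Assumption: the final piece will use the linked example's field.)
int n = 40;
float[] px = new float[n];
float[] py = new float[n];

void setup() {
  size(400, 400);
  for (int i = 0; i < n; i++) {
    px[i] = random(width);
    py[i] = random(height);
  }
  fill(0);
}

void draw() {
  background(255);
  for (int i = 0; i < n; i++) {
    float dx = px[i] - mouseX;
    float dy = py[i] - mouseY;
    float d2 = max(dx * dx + dy * dy, 25);  // clamp to avoid blow-ups
    // push along the unit vector, scaled by 1/distance^2
    px[i] += 2000 * dx / (d2 * sqrt(d2));
    py[i] += 2000 * dy / (d2 * sqrt(d2));
    ellipse(px[i], py[i], 4, 4);
  }
}
```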

I’m going to have the electric flow depend on real-time audio input from the user. Right now I’m trying to figure out the Ess library in Processing in order to use audio input.
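While I sort out Ess, here’s the shape of what I’m after, sketched with the Minim library instead (note: this is Minim’s API standing in for Ess, whose API is different; the microphone’s level would be the number driving the field strength):

```processing
// Mic-level sketch using the Minim library (a stand-in while I
// figure out Ess; Ess's API differs from this).
import ddf.minim.*;

Minim minim;
AudioInput in;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  background(255);
  // RMS level of the current buffer, roughly 0..1;
  // this is the value that would drive the electric field
  float level = in.mix.level();
  fill(0);
  float h = min(level * height * 4, height);
  rect(0, height - h, width, h);
}
```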

I’m having a little trouble, but right now I’m focusing on the creation of the dandelion and the movement of its seeds.

Looking Outwards–Augmentation

by aburridg @ 7:04 am

Sorry, I didn’t see that this Looking Outwards was due until I looked at the schedule this morning.

Here’s a piece that has roughly the same premise as what I wanted to do for my second project: using human input to create something visual.

This piece is by Camille Utterback and it’s called Aurora Organ:

The above link contains a video on the piece, but here’s a picture:

The piece is set up in an atrium, and the human input is how people grasp the railing of a staircase in the atrium. Depending on how they grasp it, columns of light hanging from the ceiling change color.

I also like how the piece fits into its environment. It plays with the simple gesture of grasping a railing, something a person would normally not pay attention to.

trying to reproduce Georg Nees

by jsinclai @ 5:43 pm 23 February 2010

So at the beginning of project 2 I wanted to emulate some stuff that the early generative art guys did, just to test my abilities and learn something new. The Georg Nees piece Schotter looked like it was really simple to recreate:

It really just looked like a nested for loop, with displacement and rotation increasing in proportion to the row. Well, I certainly had some learning to do, but I really like the process more than the product! My “broken Schotters” are much more interesting to me than my success.
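For reference, the structure I was aiming for looks roughly like this (a minimal sketch of the nested loop described above, not my actual code; the usual reading of Schotter is a 12-by-22 grid):

```processing
// A minimal Schotter-style sketch: a nested loop where jitter and
// rotation grow with the row index. (Assumption: 12x22 grid, as in
// common readings of Nees' original.)
int cols = 12;
int rows = 22;
int cell = 30;

void setup() {
  size(440, 740);
  background(255);
  noFill();
  stroke(0);
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      float disorder = row / float(rows);  // 0 at top, ~1 at bottom
      float jx = random(-1, 1) * disorder * cell * 0.5;
      float jy = random(-1, 1) * disorder * cell * 0.5;
      float angle = random(-1, 1) * disorder * HALF_PI;
      pushMatrix();
      // move to the cell's center, rotate around it, draw the square
      translate(40 + col * cell + jx, 40 + row * cell + jy);
      rotate(angle);
      rect(-cell / 2, -cell / 2, cell, cell);
      popMatrix();
    }
  }
}
```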

First attempt

Here’s a PDF that contains all 7. The last 3 are really cool next to each other.

Another issue I had was printing these out. Every printer I tried jammed; in particular, each printer reported a “Fuser Jam.” After a week or so I realized this is because the PDFs actually contain more data than they show… well, I think that’s what the problem was. In order to get them to print, I actually had to take a screenshot of the PDF (just a heads up!). I thought it was funny that this digital error was causing physical errors as well (the printout would be stuck in the “fuser,” all crumpled up).

I finally read the “translate” tutorial and figured it out.  Here are 6 different generations that are fairly close:

I also made a slightly interactive version that just uses mouseX and mouseY to affect the squares. Press any key to freeze the animation (any key again to resume):


https://ems.andrew.cmu.edu/2010spring/wp-content/uploads/2010/02/jsinclai_nees_sketch.html
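For the curious, the freeze is just a noLoop()/loop() toggle:

```processing
// The freeze toggle: any key stops draw() from being called,
// any key again resumes it.
boolean frozen = false;

void keyPressed() {
  frozen = !frozen;
  if (frozen) {
    noLoop();  // stop the animation
  } else {
    loop();    // resume it
  }
}
```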

I still need a paper cutter for the 11″x17″s, but I printed them out and put them on my walls 🙂

-Jordan

Looking Outwards #5: Augmentation

by areuter @ 9:41 am 22 February 2010

I thought this implementation of augmented reality was interesting since it doesn’t use fiducial markers or any other standard technique for defining the surface on which to render. Instead, the makers have developed a way to read the architecture of the environment, so that augmentations can be projected onto any surface within the frame.

Also, I thought the AR game of “Mario” on the street was really interesting. It reminded me of an article I read a few days ago about how “social games” may soon integrate with every aspect of our lives, even though many traditional gamers and developers feel left out of this movement. Perhaps AR games like “Mario” can bridge this gap by using a social game mechanic in which you improve your score through interactions with your environment, which AR transforms into an immersive, extraordinary experience.

Looking outwards: Augmentation

by xiaoyuan @ 8:21 am

http://www.we-make-money-not-art.com/archives/2010/02/acoustic-botany.php

Acoustic Botany seeks to come up with ideas for plants that perform music. Starting from the premise that in the future many organisms will be genetically engineered to suit our needs, the designers draw inspiration from existing plants to design music-playing plants that could eventually be engineered. Some of their plants would create music through symbiotic interaction with bugs and bacteria that reside within their resonating membranes. Their ideal result would be a “genetically engineered sound garden.”

I think this is a cool idea, and plants that emulate loudspeakers are something to look forward to. Unfortunately, the project is currently only in its design phase.

Project 2: Flower Generation

by areuter @ 2:47 am

In this piece, my aim was to explore the generation of both form and movement by designing a soothing simulation of randomly generated flowers guided by fluid dynamics.

Note: You’ll need FlashPlayer 10.0 to view this…

To create the flowers, I designed an algorithm consisting of three steps: petal arrangement, petal shape, and petal color. A flower can have either one or two layers of petals, and the number of petals on each layer is a Fibonacci number (as in real life). Once the number of petals is determined, a Bezier curve is created from the base of each petal to its top, with a control point biased by the number of petals to prevent overcrowding. Finally, petal colors are generated by defining a color first in HSV (which is more intuitive for defining a natural-looking color space) and then converting it to RGB.
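A rough Processing transliteration of the petal-arrangement step might look like this (the actual piece is in Flash/ActionScript; this sketch draws a single layer only, and Processing’s built-in HSB color mode stands in for the HSV-to-RGB conversion):

```processing
// A rough Processing transliteration of the petal-arrangement step.
// (The actual project is in Flash; this sketch draws one layer only,
// and HSB mode stands in for the manual HSV-to-RGB conversion.)
int[] fib = {3, 5, 8, 13, 21};  // Fibonacci petal counts

void setup() {
  size(400, 400);
  colorMode(HSB, 360, 100, 100);  // pick colors in an HSV-like space
  background(0, 0, 100);
  noStroke();

  int petals = fib[int(random(fib.length))];
  fill(random(360), random(40, 80), random(70, 100));

  translate(width / 2, height / 2);
  float spread = PI / petals;  // narrower petals when there are more
  for (int i = 0; i < petals; i++) {
    beginShape();
    vertex(0, 0);
    // one Bezier edge out to the tip and one back to the base;
    // the control points are pulled in as the petal count grows
    bezierVertex(60 * cos(-spread), 60 * sin(-spread),
                 120 * cos(-spread / 2), 120 * sin(-spread / 2), 140, 0);
    bezierVertex(120 * cos(spread / 2), 120 * sin(spread / 2),
                 60 * cos(spread), 60 * sin(spread), 0, 0);
    endShape(CLOSE);
    rotate(TWO_PI / petals);
  }
}
```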

The second part of this project is a fluid simulation that makes the flowers appear as though they are floating on the surface of water. This effect was created by implementing Jos Stam’s GDC paper “Real-Time Fluid Dynamics for Games,” with some optimizations made to improve its performance in Flash.
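The heart of Stam’s method is a handful of small relaxation loops. The diffusion step, for instance, is about this much code (sketched here in Processing syntax with 2D arrays; the paper, and my Flash version, use flat arrays instead):

```processing
// The diffusion step from Stam's paper, adapted to 2D arrays
// (the paper uses flat arrays with an IX(i,j) macro). x is the
// field being diffused, x0 its previous state; the grid is N x N
// plus a one-cell boundary.
void diffuse(int N, float[][] x, float[][] x0, float diff, float dt) {
  float a = dt * diff * N * N;
  for (int k = 0; k < 20; k++) {  // 20 Gauss-Seidel iterations
    for (int i = 1; i <= N; i++) {
      for (int j = 1; j <= N; j++) {
        x[i][j] = (x0[i][j] + a * (x[i-1][j] + x[i+1][j]
                                 + x[i][j-1] + x[i][j+1])) / (1 + 4 * a);
      }
    }
  }
}
```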

There are several improvements I could still make to this project. The color-picking algorithm could be refined to choose more interesting color combinations, perhaps with more than two colors. Some of the petals still overlap unnaturally, and it would be great to include more variation in the petal shapes, even for individual petals. The fluid simulation could also be improved: flowers will occasionally jitter when they approach the edge of a cell or when the cursor moves over them. I’d also like to make the fluid a little more responsive to the cursor’s movement, although so far I’ve been having trouble getting this to work in Flash.

Looking Outwards: More AR

by Michael Hill @ 1:07 am

StumbleUpon was kind enough to show this to me tonight:

levelHead v1.0, 3 cube speed-run (spoiler!) from Julian Oliver on Vimeo.

I thought it would be worth sharing.  For more information, visit the game’s website: levelHead

Augmented Reality

by Max Hawkins @ 11:41 pm 21 February 2010

AR

After I saw Julian Bleecker speak at Art&&Code last fall I started subscribing to his design collective’s blog. Last December he had a post about augmented reality that made me think more critically about what augmented reality is good for. It’s good to remember that AR hasn’t left the buzzword stage and most projects are more gee-whiz than practical.

[Near Future Laboratory]

Looking Outwards (Augmentation): Flow of Qi

by kuanjuw @ 11:40 pm

Flow of Qi utilizes UWB technology to give installation visitors a direct impression of the ancient Chinese philosophy of Qi. This work lets participating observers orient their breathing on the spirit of famous works of calligraphy from the National Palace Museum in Taiwan and thereby establish personal contact with timeless cultural treasures.

I have seen this installation once, and I think it is a delicate artwork.
Sitting on the chair and relaxing, you watch the words show up according to your breath.
The speed of the writing relates to the user’s breathing rate, and the density of the ink depends on the breathing depth.
The interaction in this artwork beautifully matches the spirit of Qi.

The interaction demo starts at 3:45 in the video.

AR with common objects – Looking Outwards

by caudenri @ 10:27 pm

While browsing through a variety of augmented reality projects, I found myself most attracted to the projects that did not rely on specially designed codes the computer can recognize, but rather to the ones that could use everyday objects as triggers for animation and effects.

Apologies if everyone has seen this project already: it was a collaboration between magician Marco Tempest and Zach Lieberman and Theo Watson, implemented in openFrameworks. It’s a nice example of using playing cards as an augmented reality trigger. The video is a bit long, but it is entertaining and really well done.

Here’s another project, this one dealing with pool. At around the 2:13 mark the video gets into the AR/projection part of the demonstration. It seems like a good application of the technology: it looks like they can calculate the trajectory of a pool ball quite accurately. I also like the pool-ball-positioning robotic arm you see earlier in the video.

Looking Outwards – Night Lights

by ryun @ 10:23 pm

Visit their website

This is an art installation that happened in New Zealand, I believe. It is beautifully made public interactive art, and what was quite impressive was the people’s excitement, which you can observe here. I think the beauty of this piece is that the public gets involved in creating the art, dancing and having fun together, and showing it to others in the open space. Here are some more pictures. I like it a lot.

Looking Outwards – Augmentation

by Karl DD @ 9:29 pm

While brainstorming with Cheng and Golan about the final project, several ideas for using different types of input for fabricating physical objects came up. Here are a couple of interesting projects I found out about from Cheng and Golan that use the silhouette for input and/or output.

Kumi Yamashita, Dialogue, 1999

Nadeem Haidary, Hand-Made


I really like the idea of using body shape and body movement as sculptural input for digital fabrication. I have explored this general idea with another project called ‘Spatial Sketch’.

Looking Outward (Augmentation): Hacking your Camera

by sbisker @ 6:49 pm 20 February 2010

Anyone who knows me from outside of this class likely knows that my research involves playing around with time-lapse photography in public spaces, particularly using cheap keychain digital cameras. In particular, I wire up microcontrollers to these cameras so that they can take pictures every few minutes, with a cost per camera small enough that they can simply be left in public spaces in large numbers to learn about the world around us. I’m interested in “personal public computing,” the idea that individuals will be able to use cheap, ubiquitous hardware in public places that acts on their behalf, in the same way that we put up paper posters today to find a missing dog, or hold up signs to protest abuses of power. If you’re curious, you can learn about it here or here, or drop me a line.

While poking around in preparation for a possible final project related to this work, I came across this cool resource for others who might want to make more sophisticated interactions that turn traditionally non-programmable hardware like digital cameras into “input devices.”

“CHDK [Canon Hack Development Kit] is a firmware enhancement that operates on a number of Canon cameras. CHDK gets loaded into your camera’s memory upon bootup (either manually or automatically). It provides additional functionality beyond that currently provided by the native camera firmware.
CHDK is not a permanent firmware upgrade: you decide how it is loaded (manually or automatically) and you can always easily remove it.”

Essentially, the Canon Hack Development Kit is an open-source firmware upgrade that you can stick on a Canon digital camera in order to do more things with its hardware than Canon ever intended when they sold it to you. New camera features the open-source community has created with this firmware include motion detection and motion-triggered photography, time-lapse photography, and scripting of the camera’s operations, so the camera can prepare, take, and analyze photos automagically. What’s more, the scripting language is generic enough that you can write scripts to program your camera’s actions and share those scripts with others who own Canon cameras (even different models of Canons).
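To give a flavor of what these scripts look like, here is a minimal time-lapse script in CHDK’s uBASIC scripting language (a sketch I haven’t run on a camera; parameter handling and timing vary by model):

```
rem Minimal CHDK uBASIC time-lapse sketch (untested; behavior
rem varies by camera model)
@title Time-lapse
@param a Interval (seconds)
@default a 60
let t = a*1000
:shot
  shoot
  sleep t
  goto "shot"
```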

Canon has gone on record as saying that CHDK does NOT void your camera’s warranty, since they deem firmware upgrades “a reversible operation.” What this probably really means is that Canon trusts an open-source community as organized as CHDK to create firmware versions that don’t literally brick people’s cameras, and that they’re asking CHDK to help them push the limits of their own hardware. This is quite exciting: more generally, I think we’re entering an era when companies are letting us “hack” the electronics in things we don’t normally consider programmable. This helps both us and product manufacturers explore possible new interactions with their hardware. A particularly geeky friend of mine is writing new firmware for his big-screen TV so he can programmatically do things like volume control and input selection, and ideally even more ambitious tasks like saving the raw video shown on his TV from any input source to his desktop. What’s next? There’s some sort of “firmware” in everything these days, from our refrigerator’s temperature control to our car radio. How can we augment our day-to-day interactions by simply re-programming the hardware that exists all around us?

Music Mountains

by jedmund @ 6:27 am

The concept behind this was to use music to generate terrain, and consequently, topographic maps. It didn’t go very well, but when I have some time I plan to maybe dive into MaxMSP and really make it real, since I don’t think Processing is gonna cut it.
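The core of the concept can be sketched in a few lines of Processing with the Minim library (my assumption for this sketch, not how the downloadable experiments below work): draw one “contour” line per frame, displaced by the live waveform, like ridges on a topographic map.

```processing
// A minimal sketch of the concept: one "contour" line per frame,
// displaced by the live waveform. (Assumes the Minim library; the
// experiments linked below were built differently.)
import ddf.minim.*;

Minim minim;
AudioInput in;
int row = 20;

void setup() {
  size(512, 400);
  background(255);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
  stroke(0, 60);
  noFill();
}

void draw() {
  // each audio sample displaces the line vertically, like a ridge
  beginShape();
  for (int i = 0; i < in.bufferSize(); i++) {
    vertex(i, row - in.mix.get(i) * 40);
  }
  endShape();
  row += 4;
  if (row > height) {  // start a fresh "map" when the frame is full
    background(255);
    row = 20;
  }
}
```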

You can download my experiments here.

visually

by jedmund @ 6:20 am

Hey guys,

I just launched visually (http://vi.sual.ly/), a media bookmarking tool I’ve been working on since last summer. It is very humble right now, and I do consider it alpha software, but please take a look! It’s currently closed to everyone except for CMU students, so you have to use an @andrew.cmu.edu or @cmu.edu email account to register.

If you want to know more or have anything to say about it, let me know! (or if something breaks, cause that could happen too)

Cheers!

Looking Outwards (Augmenting): Interactive Dance

by mghods @ 7:28 pm 19 February 2010

In a dance performance, scenes or music that interact with the dancers can add another dimension to the performance. Here are some examples:

1- Mortal Engine

“Mortal Engine” is a full-length dance piece featuring six dancers on a steep stage, mostly kept in darkness by a computer-interactive projection system. In this production by German software artist Frieder Weiss and Australian choreographer Gideon Obarzanek, projections react to the dancers’ moving bodies, graphically illuminating and extending them.

See video here: http://www.frieder-weiss.de/works/all/Mortal-Engine.php

Also, visit other works of Frieder Weiss here: http://www.frieder-weiss.de/

2- Conductive Paint

Pop-star Calvin Harris performed his new single on a “Humanthesizer,” a group of dancers painted with body-safe conductive ink used to trigger sounds. Students at the Royal College of Art developed the material, called Bare Conductive.

Chromafish, project 2 by david yen

by davidyen @ 6:11 pm

This is somewhat updated from class. I added some more features to the UI, which I think make the interaction experience a bit richer, but it still only scratches the surface of what I should probably tackle as far as access to the knobs and such.

Applet here: http://halfconscious.com/chromafish
Project info here: http://halfconscious.com/index.php?page=chromafish

Description:
Chromafish is an interactive simulation of fish behavior and genetic evolution that I programmed in Processing. The Chromafish are a peculiar (and fictional!) species of fish whose salient genetic feature is color. Dictate which color is dominant by using the menu at the top, and sit back and watch as they naturally select towards that color.
After experimenting with parametrically designed fish, I moved on to using flocking behaviors and Perlin noise for organically changing vector fields. I used some open-source code snippets from Dan Shiffman and Robert Hodgin in this program.
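The vector-field part is simple to sketch on its own: sample Perlin noise at each grid point, read it as an angle, and let it drift over time (a minimal stand-alone version, separate from the fish and flocking code):

```processing
// A stand-alone sketch of the organically changing vector field:
// Perlin noise sampled on a grid, read as an angle, drifting with t.
float fieldScale = 0.01;
float t = 0;

void setup() {
  size(600, 400);
  stroke(0, 80);
}

void draw() {
  background(255);
  for (int x = 0; x < width; x += 20) {
    for (int y = 0; y < height; y += 20) {
      // noise() returns 0..1; spread it over two full turns
      float angle = noise(x * fieldScale, y * fieldScale, t) * TWO_PI * 2;
      pushMatrix();
      translate(x, y);
      rotate(angle);
      line(0, 0, 8, 0);  // one field vector
      popMatrix();
    }
  }
  t += 0.005;  // evolve the field over time
}
```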

I’m planning on bringing this to the iPhone soon, whenever I have a chunk of time.

David Yen

Looking Outwards: Freestyle (Paul)

by paulshen @ 11:04 am

http://www.we-make-money-not-art.com/archives/2010/01/the-best-way-to-kick.php

http://www.streetwithaview.com/

Community Performance in Google Street View

This piece is not technically challenging; rather, its concept is fun and touches on a real issue of concern. Google Maps Street View has been immensely useful for many people, but it also bothers many because of its privacy implications. So a group of artists “staged collective performances and actions that took place just as the Google Car was driving through the neighbourhood: a 17th-century sword fight, a lady escaping through the window using bed linen, a gigantic chicken, a parade with a brass band and majorettes, the lab of the inventor of a laser that makes people fall in love, etc. The images that document the events have become an integral part of the Google image archive.”

Now these images are stored in Google’s databases permanently and serve as documentation of what exists at a specific point in the world. “With their series of collective performances and actions, Kinsley and Hewlett create an analogy between their carefully planned and coordinated artistic events and the equally fictitious reality presented by Google.”

Oh, and this project came from the CMU CFA. Coincidence.
