Camille Utterback’s External Measures series began in 2001 and presents an interesting example of interactive art in a museum setting. The project has existed in multiple iterations, but in each, a digital work of art is augmented by physical movement around the room it occupies. The projected image is a representation of an aesthetic system that responds to input from an overhead video camera. Custom tracking software allows parts of the digital work to respond entirely to the movement and placement of the people in the room. I am particularly interested in this work because it excels both as an aesthetic work of art and as a creative use of interactive technology. Utterback does not rely on technology as an aesthetic in itself, but as a tool for pushing what could have been a still image further. She does not compromise her aesthetic interests for technology, and thus produces unique and beautiful works of art that benefit from both her technical skill and her aesthetic systems.
I looked at Design I/O’s (Theo Watson and Emily Gobeille) interactive puppet installation “Puppet Parade.” This project uses openFrameworks and a Kinect to track users’ hand and arm movements to control a projected bird.
I love the whimsy with which the birds and their environment are created in this project. The colors and shapes are lovely to look at, and, considering that this project is meant for children, I think they hit the nail on the stylistic head. However, the flip side to this artwork interacting with kids’ gesticulations is that the movement of the birds can often be quite jerky and uncomfortable (watch the video below, and you’ll see that nearly every kid is jumping up and down and waving their arms like Tigger after 3 bottles of 5-hr Energy). If they could have somehow found a way to smooth out that jerky movement, it would have improved the project. Additionally, I would love to see more interaction possible between the two birds.
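A fix like the one I’m imagining could be as simple as low-pass filtering the tracked positions before they drive the puppet. This is just a sketch of the general idea in Python (the class and parameter names are my own, not anything from Puppet Parade’s actual openFrameworks code):

```python
class PositionSmoother:
    """Exponential-moving-average filter for noisy 2D tracking data."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # 0 < alpha <= 1; smaller = smoother but laggier
        self.x = None
        self.y = None

    def update(self, raw_x, raw_y):
        if self.x is None:  # first sample: no history to smooth against
            self.x, self.y = raw_x, raw_y
        else:
            # move only a fraction of the way toward the new raw reading
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y


# A sudden jump in the raw data moves the smoothed point only partway.
s = PositionSmoother(alpha=0.5)
s.update(0.0, 0.0)
print(s.update(10.0, 0.0))  # (5.0, 0.0)
```

The trade-off is lag: the heavier the smoothing, the more the bird trails behind the kid’s arm, so the alpha would need tuning by feel.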
I think kids are an easy target for interactive art of this nature. Not that I’m saying there’s anything wrong with that — I applaud Design I/O for recognizing that they have the perfect audience. But I also think that interactive art has great potential to make a statement, as it incorporates the users into the artwork, and I’m not really seeing that in this piece.
Project Page (I love the last image on this page!)
I also found a video of Theo and Emily describing a prototype of their project. It has a bit more explication on how it works:
The project I want to write about this week is part of the Future Forward event in New York City: Drift, a thermally responsive chandelier that interacts with the lighting system in the gallery space. The reason this project stuck with me is that, while we are trying to achieve interaction with complicated electronic and digital tools, Doris Sung chose a totally different approach, drawing on her experience as an architect experimenting with materials to make Drift free of electronics and digital controls; she calls the installation “something natural and seemingly unlikely.” How it works: when the light beams change their path across the structure, they change the heat distribution in that region, and the pieces of the chandelier, made of heat-sensitive metal, change their curvature and tension accordingly, changing the overall appearance of the whole structure. The way the metal pieces move one by one and slowly settle back toward balance is really mesmerizing, and I imagine it would be calming to watch even for hours. It also demonstrates how smart buildings might one day move with the trajectory of the sun.
Aside from the movement of the pieces of the structure, the material itself creates a very soothing dynamic in the space: the shimmering reflections of the metal pieces change when people walk by, and the tilted, slightly swinging metal lines catch the light when everyone stands still. In the very short video about the making of this installation, Doris Sung also discusses the idea of balance and pivoting: by designing around it, each piece has a position it naturally wants to return to. That reminds me of the waterwheel, a traditional water transportation system that turns according to the accumulation of water in each slot, an elegant integration of nature and human activity.
Béatrice Lartigue is a designer and artist who works in the area of interactivity and the relationship between space and time. Her interest in this area stemmed from her childhood love of comic books, where she first began to see how each panel was a visual representation of space and time.
She is also a member of the Lab 212 collective, a group of friends who graduated from the visual arts school les Gobelins in Paris, where they studied Interactive Design. The interdisciplinary art collective works on pushing the boundaries of what can be defined as visualization in our daily lives.
I am attracted to Lartigue’s elegant interactions and sophisticated visualizations, especially in her work related to light and sound. Additionally, I believe her style of dark surroundings filled with crisp blue light is very similar to the aesthetic that I have been working towards for some time.
Lartigue is also passionate about the realtime visualization of sound and music. In her work Portée/ she worked with her colleagues from Lab 212 to create a minimalist music interaction. When the audience plucks a string, it plays the corresponding note on the connected piano. This work reminds me of previous projects I’ve discovered this semester, particularly along the theme of the necessity of collaboration. Much like 21 Balançoires, while one could simply swing (or, in the instance of Portée/, pluck) alone and create a beautiful note, the true magic occurs when many come together to participate. In order to create the art you need others around you. Whether they are strangers or friends is irrelevant, because in that moment you are all simply a note, coming together to create a melody.
I also adore her work as VR art director on Notes on Blindness: Into Darkness. In this interpretation of the audio-diary cassettes of John Hull, the user can only see what the user can hear. Nothing is visible until sound touches it, which is exactly how Hull describes the world around him. His description of rain is breathtaking, as he explains that only when it rains can he truly see an environment as a whole, rather than in pieces here and there. He wishes that it could rain indoors, so that he could see his home the way he can see trees, pavement, and gutters. In Lartigue’s work the user truly feels deep empathy for Hull and his world that is entirely dependent on sound. After watching the original film Notes on Blindness, I felt that the film fell short by comparison, and that the VR expression is much more elegant in its method of visualization and storytelling.
While reading All the Light We Cannot See by Anthony Doerr (read: my favourite book), I was completely invested in his description of how a little blind girl “saw” the world around her in the 1940s. I feel that Doerr and Lartigue both do an exceptional job of describing the world of someone who once had vision but has now lost it. In Doerr’s work the girl’s story is told in parallel with that of a little boy (with normal vision). The two children’s paths cross very briefly, but they have a significant impact on each other. I think the VR experience that Lartigue created would be a fascinating way to tell Doerr’s story; in particular, I would be interested to see how the little boy’s world would differ from the girl’s, and what would happen when they meet.
Adrien M / Claire B
I stumbled upon some of the recent works of Adrien M / Claire B, a French company headed by artists Adrien Mondot and Claire Bardainne. They create a range of digital arts for performances and exhibitions, combining the virtual and physical worlds. Their motto is “placing the human body at the heart of technological and artistic challenges and adapting today’s technological tools to create a timeless poetry through a visual language based on playing and enjoyment, which breeds imagination.”
I particularly enjoyed this performance, Coincidence (2011), where a juggler dances, juggles both a metal and digital sphere, and interacts with a background of living type. Adrien and Claire have been developing eMotion, a tool they implement in their projects to create objects (particles, text, drawing strokes, quartz compositions) that move and interact live with a performer.
Typography is around us every day, from the nutrition facts on a jar of Nutella to street-crossing signs to Facebook. I thought the projection of large type surrounding, and even attacking, the performer was so poetic; it is no longer only that humans control and influence type (as type designers, readers, writers), but that type equally influences us in good and bad ways (clarity, legibility, information; helpful, demanding). What’s even more impressive is the type’s ability to seem alive and aware of the performers. Both are having a conversation with each other. I think it’s so much more natural and right for performance projections to be generated in real time instead of pre-recorded; it brings us into a more convincing new world. It’s just like an orchestra pit that responds to the actors and singers of a musical. Humans will always make mistakes, and algorithms are new lending hands.
More projects by Adrien M and Claire B:
When I first saw this a few years ago (a friend had sent me a link), I couldn’t stop laughing. Why? Because in all the times I played Minecraft, I’d forgotten just how ridiculous the idea of hitting (or punching) a block to gather resources was. To do anything in survival mode, the first thing you need is wood, which you get by punching trees. Need dirt to build temporary walls? Punch dirt (or hit it with anything). Seeing this in “real life” made those silly actions become a reality, and for me this was just an emphasis of that feeling. The artist, Ben Purdy, made 3 of these videos (I originally only saw one), but it’s a shame he hasn’t done anything more with it. This has great potential to be a public or education-based interactive artwork or exhibition. I can see this being used with very young age groups, such as elementary kids (perhaps so young they haven’t even played Minecraft), as a sort of digital/interactive-art building block or learning method.
Unfortunately, I don’t find this particularly inspiring for my own work; however, it is an enjoyable piece of interactive work that has obviously required some special thinking and meticulous effort (such as managing to perfectly project onto the sides of the box using one(?) projector; I believe in the third video he uses multiple projectors to get an even projection on every side).
Okay, I know this isn’t technically an art project, but it’s still a form of interactivity that I find interesting, so it counts, right? Tilt Brush combines my unboundedly increasing obsession with Google and my long-term love for virtual reality. Basically, it’s an environment in which you can “paint” in three-dimensional space using their special Tilt Brush tool. You can choose color, stroke width, and all that good stuff, and then… just draw. In air. Technically, you need a VR headset to be able to see what you’re drawing, but that’s a small price to pay for the ability to instantly create 3D objects around you (either for fun or to plan out a future project). There have been 3D modeling tools out there for a while now, but Tilt Brush is different. It’s not as good if you want to run physics simulations on your creations, but it’s so much better for abstract brainstorming of ideas. You can create all sorts of shapes fairly quickly, and then actually walk around them and see how they would look in the real world. This is an idea I’ve dreamed about since I was a kid, and something I think could be really useful to all kinds of artists in the future.
This video illustrates how Tilt Brush works, and while it looks a little simplistic now, the possibilities are endless. I bet that, in the not-so-distant future, it could be possible for users to smooth out the surfaces of the shapes they draw, because right now, that’s the main thing that bothers me about the drawings shown in the video below: they’re rough, and look a bit like they were put together with colorful strips of papier-mâché. Anyway, this whole project really excites me, and I’m looking forward to seeing where it goes from here.
My Looking Outwards for this week scratches at the confluence of two ideas I’ve been thinking about for the past few days:
- Interaction as Challenge
- Blurring the Physical and Digital (sparked by James & Josh’s talk on Wednesday)
With regard to the first idea, it’s very common within the School of Design to talk about what makes something hard/bad, not human-friendly, etc. This isn’t surprising, because almost always the goal of ‘design’ is to get out of the way and reduce the friction between the human and the built-world/designed-artifact/etc. But during How People Work (51-271) on September 28th, the idea of making things ‘difficult on purpose’ came up within the context of learning and video game design. To the second point, after listening to James Tichenor and Joshua Walton speak on the need to create ‘richer blurs’ between digital and physical spaces, I’ve been on the lookout for good examples of this in the status quo.
When I first saw Mylène Dreyer’s interactive drawing game on Creative Applications, I felt like it was really hard to understand and would probably confuse users. But I also tried to think about how that could benefit her within the context of ‘Interaction as Challenge.’ It also reminded me of some discussions [1, 2] within the UX community a while back about how Snapchat’s bad user experience is actually to its benefit. Also: double points for cute music and simple graphics — it really makes the game pop!
The Manual Input Workstation (2004-2006: Golan Levin and Zachary Lieberman)
Yes, I realize that writing about Golan’s work is maybe not the most productive thing in this class but this piece was super cool!
Basically, this is a system that uses two kinds of projection (digital and analog) and computer vision to recognize hands. The shadows produced by hands in the projection are identified by the computer vision software, and shapes are created using both the negative space and the actual form of the hand. The user can interact and play with forms made from light. The light takes on a material property: you can see it bounce, and you can control its movements.
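For anyone curious what “identifying the shadow” might involve at its simplest, here is a toy, pure-Python sketch of the first step such a system needs: thresholding a grayscale camera frame so dark shadow pixels separate from the lit projection, then taking the shadow region’s bounding box. The function names and pixel values are entirely my own illustration, not the actual Manual Input Workstation code:

```python
def shadow_mask(frame, threshold=64):
    """Binary mask: True wherever a pixel is dark enough to be shadow."""
    return [[px < threshold for px in row] for row in frame]


def bounding_box(mask):
    """Bounding box (min_row, min_col, max_row, max_col) of True pixels."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None  # no shadow found in this frame
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))


# 4x4 "frame": bright projection background (200) with a dark hand shadow (10).
frame = [
    [200, 200, 200, 200],
    [200,  10,  10, 200],
    [200,  10, 200, 200],
    [200, 200, 200, 200],
]
print(bounding_box(shadow_mask(frame)))  # (1, 1, 2, 2)
```

A real system would go further, extracting the silhouette’s contour so shapes can be fitted to both the hand and the negative space between fingers, but the threshold-then-locate step is the foundation.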
I loved this project because it’s so tangible in a borderline-sculptural way. Many digital interactions are abstracted beyond the point of intuition or too simple to be entertaining for very long. This seems like it would be endlessly amusing because there are seemingly infinite shapes that hands are capable of making, and the animations are so physical. The only thing that seemed a little off was how bouncy the shapes were; it could have been intentional or just a limitation of the technology.
(I was so tempted to do one of Theo/Emily’s works again like Connected Worlds because I loved them so much but I wrote about their lecture in LookingOutwards01 so…)
The work I chose is Lit Tree by Mimi Son, an interactive artwork consisting of a real tree in conjunction with digital emulsion: the tree is augmented with video projection, resulting in patterns created by light hitting the leaves, which act as voxels. The project is described as allowing the tree to have a “visceral conversation with human visitors,” thereby becoming a sort of aesthetic object. This project coincides with my usual preferences in interactive artwork, which is why it appeals to and interests me: nature-based, with a calm, serene, and intimate atmosphere, as the audience experiences a subtle, private, albeit dynamic relationship with the single tree. As someone who usually admires digital work on a screen, immediate and mediated through an interface, I found it impressively novel that this work combines technical features with a real-life, physical object to create an immersive conversation between human and nature. The work also feels very well-resolved because light was the specific attribute chosen to play against the tree’s leaves, relying on their varying placement, space, surface, and texture to output the beautiful patterns.
Interactive installations. There are too many. It’s hard to choose one to focus on. Do you go with the commercial / advertising projects? Artwork in galleries? Performance? I could pick Hakanai by the French artists Adrien M and Claire B… or the complete advertising coup of the decade, the Museum of Feelings in New York City (created by Ogilvy / Radical Media), or Kyle McDonald’s Sharing Faces… or Social Galaxy by Black Egg (and Kyle McDonald and Lauren McCarthy, with some code by our own Dan Moore), which pulls in the user’s Instagram feed and takes you inside your own images and hashtags, floating around with the feeds of other participants inside an infinite mirror tunnel. It’s installed inside the Samsung store in Chelsea in NYC. Having participated in this, I can say it is moderately uncomfortable, a little embarrassing, a little thrilling, a little ego-trip, and a little 2001: A Space Odyssey.
One of the most well-known interactive installations is Chris Milk’s The Treachery of Sanctuary, which I have seen many people lay claim to and spread around the internet with abandon.
I like all these installations. I wonder how to grow beyond the “interactive installation pose” – aka spreading your arms and waving them around in front of a projection that responds to your (graceful movement) (flailing). Gesture-based interaction is very compelling to me, but it is also a little repetitive. How can we push this method further? What new technology can we use to allow our natural body language to come through?
I have to also shout out Golan’s list of installations that include a large majority of work done by women. I clearly have more research to do.
Image above: Museum of Feelings
Nova Jiang’s Ideogenetic Machine is an interactive piece that allows participants to become characters in a generative comic book based on current events. I think it’s pretty cool because I’m interested in storytelling and narrative art. With audience members as the characters, they get to choose how they would react to certain scenarios. The participants can also add their own dialogue to the comic afterwards. It would be cool if, in addition to the interactivity, the dialogue were generative. I guess that makes it a little less interactive, but I think acting out a generative story that responds to the participant it’s photographing would be cool.
I really wanted to go – but I fell asleeeeeeeeeeep.
Anyway, on another note: Daniel Rozin!!!!
“Wooden Mirror – 1999
830 square pieces of wood, 830 servo motors, control electronics, video camera, computer, wood frame.
Size – W 67” x H 80” x D 10” (170cm , 203cm, 25cm).
Built in 1999, this is the first mechanical mirror I built. This piece explores the line between digital and physical, using a warm and natural material such as wood to portray the abstract notion of digital pixels.”
I think his work making wooden mirrors has a particularly good point in relation to the plotter project: something that I wish I had considered more (I should’ve restocked on my different pens) was the physical medium used for computational output. His phrase “explores the line between digital and physical” is, I think, a key consideration in making a successful plotter rendition, and it makes me wish I had thought further than simply bringing in different pens (pens that I should have bought replacement nibs for before 2 am on Wednesday night… Thursday morning…).
The wooden mirror wouldn’t have been quite as successful if it were simply a digital rendition of video input; the fact that the data affects something physical in the world (and depends on the physical environment, like lighting and the angles of the wood blocks, to create value!) makes the piece all the more enthralling. It affects, responds to, and depends on both digital and physical environments. Wow. Mind twist. It doesn’t work out of context: it needs the light at that particular angle for the tilt of the squares to create shadow.
What a striking method of data representation! The higher the temperature climbs, the louder and more apparent the “heartbeat” of New York becomes. It even creates a frightening sound! When I saw that this work had an underlying concept of making a statement about climate change, it made me happy to see a form of activism through the use of computing. It is inspiring, and I am happy to see what is possible with computing in artistic, social, and political practices. I am interested in looking at more of Andrea Polli’s work; I feel like her work could have a huge influence on me.
Kimchi and Chips’ piece, Light Barrier, concerns a form of real-time processing involving the reflection of light.
The piece involves sound and the reflection of light through mirrors. Not only is this an interaction between the user who experiences the reflections, it’s an interaction between space and time. (“The light installation creates floating graphic objects which animate through space as they do through time.”)
I enjoy this interaction mainly because it allows new forms of visuals to be experienced in open space. As a filmmaker, I know light plays a huge role in filming and projecting movies. I also believe this project leaves open the discussion of using light, and open space as seen in the video, as a medium for having films/images “appear out of thin air.” In general, Kimchi and Chips’ work sets a new standard for interactive visual experiences and creates the opportunity for artists to develop new forms of media through light.
Rafael Lozano-Hemmer’s “Please Empty Your Pockets”
link to the project
This installation consists of “a conveyor belt with a computerized scanner that records and accumulates everything that passes under it”.
I think it is a really clever combination of scanner and screen, and the moment when people remove their object and its image is still there feels really magical; interactive art is often wonderful precisely because of these magical moments.
Moreover, the interaction looks really great. People not only interact with the piece, but also in some way interact with those who interacted with it before them, since they can see all the things that have ever been placed on the belt. Then they add their own contribution to this collection of things. It’s like leaving one’s own mark on a monument.
Visually the piece has a very special style that reminds me of pop art, since it consists of many small, colorful everyday objects.
It also makes me contemplate what it means for an object to be in a pocket. It needs to be small enough to fit, and it needs to be quickly accessible. Seeing all these contents of people’s pockets almost makes me believe objects kept in pockets belong to a category of their own. Like dogs, cats, and pocket objects.
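The accumulation mechanic that makes the piece feel like a shared monument could be sketched very roughly like this in Python. All the names here are hypothetical, purely to illustrate the idea of mixing the live scan with earlier visitors’ objects:

```python
import random


class PocketArchive:
    """Grows forever; every display frame mixes live and archived scans."""

    def __init__(self, display_slots=5):
        self.scans = []  # every object image ever scanned on the belt
        self.display_slots = display_slots

    def add_scan(self, image_id):
        self.scans.append(image_id)

    def compose_display(self, live_image_id):
        """The live object, plus up to display_slots past objects."""
        past = random.sample(self.scans,
                             min(len(self.scans), self.display_slots))
        return [live_image_id] + past


archive = PocketArchive()
for obj in ["keys", "coin", "lipstick"]:
    archive.add_scan(obj)

frame = archive.compose_display("phone")
print(frame[0])    # "phone" — the live object always leads
print(len(frame))  # 4
```

The real installation presumably stores actual scanned images rather than labels, but the core gesture is the same: your object joins a permanent collection that keeps reappearing for everyone after you.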
Superfeel 2014 at Cinekid by Molmol
I was drawn to Superfeel by Molmol because of its fun nature. Superfeel is an interactive stage where people wear devices embedded with sensors that take information from muscle movement and body gesture. These devices then send that information to the mechanical elements of the stage to give the users an interactive experience. By moving and flexing, a user can cause gusts of air, wind, fog, and vibration that let them feel and understand their body’s movement in a new way. With these devices, the users are given new power. This project was commissioned by the Cinekid Festival in Amsterdam in October 2014. What I admire most about this is the exciting and unique way it gets kids interested in the electronic and interactive arts. It shows children that the electronic arts aren’t limited to the screen, or to games and videos. By giving them superpowers they must be amazed to have, the project inspires them to think about what is possible in the realm of new media and interactive and computer art. The project is perfectly designed to capture the energy and excitement of kids and use that to its advantage, and I really appreciate the thought put into that.
Nike+ Collab: City Runs
YesYesNo Team: Zach Lieberman, Emily Gobeille and Theo Watson.
This project is fascinating to me because it takes real-time data from Nike+ sensors in people’s shoes to light up a map. The thing that makes the project so fascinating, though, is the scale at which this data is produced. Since Nike has thousands of people wearing their shoes, the data generates all kinds of variation, but also a consistency around their Nike training sessions. One thing that makes the interactivity so captivating is how the lines evoke the energy they are representing. It is one of the few monochromatic projects I’ve seen that actually works, and a lot of that comes from the glowing style of the lines, but also from layering them over street life.
If I were to change something about the project, I’d probably layer current runs over previous runs or do something to highlight the change over time. I can imagine that at certain times of the day the project “dims out” because no one is running. An even cooler idea would be to highlight runners running the same path so you could look at it like a race.
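The “dims out” idea above could be modeled with a decaying heatmap: each run deposits energy along its path, and every display frame the whole grid fades, so paths with no current runners go dark. This is my own illustrative sketch, not anything from YesYesNo’s implementation:

```python
class RunHeatmap:
    """Grid of glow intensities: runs add energy, each frame decays it."""

    def __init__(self, width, height, decay=0.9):
        self.grid = [[0.0] * width for _ in range(height)]
        self.decay = decay  # per-frame fade factor, 0..1

    def add_trace(self, points, energy=1.0):
        """Deposit energy along a runner's path (list of (row, col) cells)."""
        for r, c in points:
            self.grid[r][c] += energy

    def step(self):
        """One display frame: everything fades toward darkness."""
        for row in self.grid:
            for c in range(len(row)):
                row[c] *= self.decay


m = RunHeatmap(4, 4, decay=0.5)
m.add_trace([(0, 0), (0, 1)])
m.step()
print(m.grid[0][0])  # 0.5 — fading, and it keeps dimming with no new runs
```

Layering “current runs over previous runs,” as suggested above, would just mean rendering two such grids with different decay rates: a slow one for history and a fast one for live activity.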
This week, I’ve chosen to write about an interactive art piece created as a collaboration between Janet Echelman and Aaron Koblin called Unnumbered Sparks. Created as an installation for TED’s 30th anniversary, it allows the audience to interact with an enormous suspended fiber sculpture in real time by painting on its surface with light using a Chrome web app. It’s a networked experience that allows multiple users to interact at once, seeing their brushstrokes interact with those of other audience members.
I’m personally a huge fan of public art, and I particularly enjoy the dynamic and ethereal nature of Echelman’s fiber work. All of her non-interactive pieces are beautiful, but I think this collaboration adds a new exciting layer to the project. Giving people a sense of power as their tiny touch-screen gestures are translated into enormous strokes of light is exciting and unusual and allows for kinds of collaborative dance and interaction to occur between strangers as they play and mingle with each other’s patterns.
That being said, I think the interaction method was perhaps a bit too simple, and it afforded button-mashy swiping a bit too easily, which makes the way people interact with it often chaotic and unrefined. Perhaps introducing more subtle interactions, or somehow throttling the effect, would have created a more elegant output in the hands of the audience.
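The throttling I’m suggesting could be as simple as enforcing a minimum interval between accepted strokes, so frantic swiping is calmed into a steadier stream. A hypothetical sketch, not Unnumbered Sparks’ actual code:

```python
class StrokeThrottle:
    """Drop strokes that arrive faster than a minimum interval."""

    def __init__(self, min_interval=0.25):
        self.min_interval = min_interval  # seconds between accepted strokes
        self.last_accepted = None

    def accept(self, timestamp):
        """Return True if a stroke at this time should be drawn."""
        if (self.last_accepted is None
                or timestamp - self.last_accepted >= self.min_interval):
            self.last_accepted = timestamp
            return True
        return False


t = StrokeThrottle(min_interval=0.25)
print([t.accept(ts) for ts in [0.0, 0.1, 0.3, 0.4, 0.6]])
# [True, False, True, False, True]
```

A gentler variant would let the extra strokes through but scale their brightness down, so enthusiasm still registers without overwhelming the sculpture.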
The project should also be applauded for its huge logistical complexity, with the projection mapping and mounting of the sculpture alone being an amazing feat, not to mention running the interaction entirely through the Chrome browser.
I’m not very familiar with varying sound with code, which maybe makes me even more intrigued by this project. I have no idea how complex implementing their algorithm was, but it’s very engaging to see dance, and movement in general, change sound. It seems to me that there is a little bit of lag between their movement and the actual sound it creates, but nonetheless two methods of expression, movement and sound, collide here. It’s definitely the sort of thing I’d like to see more of.
Toshio Iwai is known as the Peter Pan of digital culture. His interests range from video and film, to animation (zoetrope), into what he is now considered: an interactive and computer artist. Iwai has brought to fruition a variety of projects yet maintains a distinct style and theme in his work. Although his pieces fall on a broad spectrum of topics, much of them deal with audiovision and interactivity.
Two of Iwai’s works I wanted to touch on, Resonance of 4 (1994) and Piano – As Image Media (1995), fall into this category of interactive audiovisual art.
As cool as these sorts of interactive public pieces are, I personally am not interested in them. I don’t feel compelled to make interactive walls/projections (ex. top right pic) or real-world games (ex. Heather Kelly). If I become more accomplished as a coder in the future, I see myself making interactive art on the screen or in a VR headset (not that I think anything else isn’t worthwhile; it’s just that they don’t compel me).
I’m sure we all know the wonders of art on the computer screen, so let me talk about my interest in VR instead. Outside of games, I don’t know many VR projects that I follow (maybe because Google is algorithmically feeding me VR games instead of VR experiences because it knows I like games) but I still remember some of the projects Golan showed in class, like Poop VR. Being able to be transported to a completely different world is amazing. It’s the future. VR can provide social interactions, narrative devices, and personal investment that other mediums can’t accomplish as well.