This project transforms body movement into abstract digital visual forms. As its description puts it, it explores techniques of extrapolation to sculpt abstract forms, visualizing unseen relationships – power, balance, grace and conflict – between the body and its surroundings. What I feel is the best part, though, is that they chose sports videos to study and generate the forms, rather than capturing the random movements of random passers-by on the street in real time.
Hand from Above (2009) – Chris O'Shea
This is a relatively "old" but very classic installation realized with openFrameworks and OpenCV. I like the playful way he designed the interactions, which gives people a very strong sense of involvement. I remember being kind of mind-blown the first time I watched it, seeing a passer-by get picked up and removed entirely from the screen.
Electricity Comes From Other Planets – Marek Bereza, Fred Deakin, Nat Hunter, James Bulley
This is an interactive light-and-sound project, well done if not hugely impressive, but one of the few sound-related projects made with openFrameworks.
A great contribution to the field: iQ font, a typeface developed by motion-tracking the four corners of a car with an overhead camera and documenting it as it drifts and swerves through the shapes of every letter of the font. I love everything about this project. The idea is far-reaching and interesting, the execution is brilliant, and the documentation is engaging and well done throughout. I wholeheartedly admire this project; the only downside is that the link to download the font seems to be broken.
Quick and dirty project: Subdivide a Tetrahedron to a Sphere. I chose this project just because it was super random. How many people do you know who have had the problem of being unable to create a sphere when given the four corners of a tetrahedron? Never fear! This guy solved it! His documentation is pretty, which also helps his project seem interesting despite the fact that it doesn't seem to accomplish any major milestones in art or math. A well-done quickie.
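For the curious, here is roughly how such a subdivision works (a minimal sketch of my own, not the author's code, with names I made up): start from a tetrahedron inscribed in the unit sphere, split each triangle into four, and push every new vertex back out onto the sphere.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalized(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

static Vec3 midpoint(Vec3 a, Vec3 b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

struct Tri { Vec3 a, b, c; };

// Each pass replaces one triangle with four smaller ones whose new
// vertices are re-projected onto the unit sphere.
static std::vector<Tri> subdivide(const std::vector<Tri>& in) {
    std::vector<Tri> out;
    for (const Tri& t : in) {
        Vec3 ab = normalized(midpoint(t.a, t.b));
        Vec3 bc = normalized(midpoint(t.b, t.c));
        Vec3 ca = normalized(midpoint(t.c, t.a));
        out.push_back({ t.a, ab, ca });
        out.push_back({ ab, t.b, bc });
        out.push_back({ ca, bc, t.c });
        out.push_back({ ab, bc, ca });
    }
    return out;
}

int main() {
    // Regular tetrahedron inscribed in the unit sphere.
    Vec3 v0 = normalized({  1,  1,  1 });
    Vec3 v1 = normalized({  1, -1, -1 });
    Vec3 v2 = normalized({ -1,  1, -1 });
    Vec3 v3 = normalized({ -1, -1,  1 });
    std::vector<Tri> mesh = { {v0,v1,v2}, {v0,v3,v1}, {v0,v2,v3}, {v1,v3,v2} };

    for (int pass = 0; pass < 4; ++pass) mesh = subdivide(mesh);
    std::printf("%zu triangles approximating the sphere\n", mesh.size());
    return 0;
}
```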
Three: European Researcher's Night. This project impressed me primarily because I'm not entirely sure how openFrameworks was involved. European Researcher's Night is a tabletop surface with small semi-transparent rocks on top (quartz?). Based on where the rocks sit on the circular surface, the lights beneath them glow in different colors, creating a beautiful and useless tactile experience of moving the rocks around the table and watching the colors change. I would like to see an evolution of this project, maybe incorporating water or a larger surface, but the accomplishments here are worth noting.
1. Second Surface by the Tangible Media Group at MIT
Second Surface is a project that creates an interactive virtual environment layered over top of the everyday environment. It allows the user to generate content and add it to the environment in real time; this content is shared across devices so that other users can view and add to content created before them, and can even collaborate on the artwork in real time. What is really cool and innovative about this project is that AR recognition technology is used to determine the pose of the user's device, so the content one adds is embedded more smoothly into the virtual environment. Personally, I find this project really interesting for a number of reasons. Firstly, I envision users treating it as a kind of graffiti that could create really interesting dialogues across users. Secondly, and this stems from my background in architecture, I find that the project responds to an open-ended question I've had since switching into BCSA: how to bridge or intersect physical and virtual space. In one of my studios, we spent a good deal of time dealing with distinctions along the range between private and public space, and it seems to me that this project works to overturn these distinctions, allowing any space to become a new sort of public space.
2. Playmodes by Eloi Maduell, Santi Vilanova, and Jordi Teixidor
Playmodes is a cool tool/installation that acts as a sequencer for your body movements. It allows the user to specify certain parameters to capture their movements and piece them together in different ways. So far it has been used as an installation for multiple users to play with and enjoy, and for a performance piece, as shown in the video. I think this project could be extended for use in conjunction with music sequencing, letting the user create music with their body in a way that I have not seen before.
3. Optical Flow by Denis Perevalov
This project is an experiment that has not yet been applied to a finished piece, but I think it is really cool and has a lot of potential to be used in interesting projects. It basically allows the user to manipulate a grid with their body movements, but the grid has an elasticity that causes it to bounce somewhat as it responds. I see a lot of potential in overlaying the grid with images or environments that could be instilled with an immersive, dreamlike quality. The creator provides a link to the source code, so I may try using it for a future project in the class; a sketch of the basic idea follows.
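Here is a minimal sketch of such an elastic grid (my own reconstruction, not Perevalov's source): each point springs back toward its rest position with damping, while the mouse stands in for the tracked body and pushes points away.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    static const int COLS = 40, ROWS = 30;
    glm::vec2 rest[COLS][ROWS], pos[COLS][ROWS], vel[COLS][ROWS];

    void setup() override {
        for (int i = 0; i < COLS; i++)
            for (int j = 0; j < ROWS; j++) {
                rest[i][j] = { ofMap(i, 0, COLS - 1, 40, ofGetWidth() - 40),
                               ofMap(j, 0, ROWS - 1, 40, ofGetHeight() - 40) };
                pos[i][j] = rest[i][j];
                vel[i][j] = { 0, 0 };
            }
    }
    void update() override {
        glm::vec2 hand(mouseX, mouseY); // stand-in for body tracking
        for (int i = 0; i < COLS; i++)
            for (int j = 0; j < ROWS; j++) {
                glm::vec2 away = pos[i][j] - hand;
                float d = glm::length(away);
                if (d > 0 && d < 120)
                    vel[i][j] += glm::normalize(away) * (120 - d) * 0.05f;
                vel[i][j] += (rest[i][j] - pos[i][j]) * 0.08f; // spring to rest
                vel[i][j] *= 0.9f;                            // damping => bounce
                pos[i][j] += vel[i][j];
            }
    }
    void draw() override {
        ofBackground(0);
        ofSetColor(255);
        for (int i = 0; i < COLS; i++)
            for (int j = 0; j < ROWS; j++) {
                if (i + 1 < COLS)
                    ofDrawLine(pos[i][j].x, pos[i][j].y, pos[i+1][j].x, pos[i+1][j].y);
                if (j + 1 < ROWS)
                    ofDrawLine(pos[i][j].x, pos[i][j].y, pos[i][j+1].x, pos[i][j+1].y);
            }
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```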
An audio-visual installation that visualises the oscillations of sunspot size and density on a seven-part modular screen, made with Arduino, openFrameworks and Max/MSP.
I like the overall execution of the installation, especially how each hexagon has sub-sections of light rather than lighting up as a whole. The sound generated from the measured color and intensity really creates a sci-fi ambiance, as if you were aboard a spaceship. I think this piece strikes a nice balance between the visualization of light and of sound; adding a second layer of visualization makes the installation very interesting. According to one of their videos, the artists were inspired by science fiction, and the installation is a homage to movies such as Solaris (1972) and 2001: A Space Odyssey (1968).
An abstract visualization of wandering in a superstructure: six logs with dates and hours, made in openFrameworks.
The black-and-white contrast and the motion in this visualization are a wonderful depiction of wandering in a superstructure. I like how cluttered the black lines are, and yet you're still able to distinguish the silhouettes of buildings. I wish the artist had provided more information on his intentions; there's literally no description of this research project anywhere on his website.
Toccatatouch, Fabrica, 2012
The illusion of playing with the forces of a physical environment: a digital textile reacts to touch, changing its form and generating synthesized tones and noises.
I like how dynamic the generative art is and how it lets a user modify and experiment with different settings and dynamics in the app. Additionally, the generated sounds really complement the motion and form of the digital texture, although it's not clear whether the sound changes based on the visuals. I think there's an opportunity here to create outputs for fabrication: users could define their own form and aesthetic, then materialize it through methods such as 3D printing.
Important Contribution: ofxTimeline by James George
This is an add-on that lets you modify code parameters with a timeline- and keyframe-based system. It takes inspiration from big, expensive packages like After Effects, Premiere, and Final Cut, but it is reusable and lightweight.
I have a feeling there is far more to this add-on than what is demoed in the video, but if I am right in thinking you can code it into whatever project you are working on, it sounds like it could save a lot of time and possibly give Adobe a run for its money. I would really like to see an altered timeline that allows you to stack layers and access them through an input variable.
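For context, here is a hedged sketch of how ofxTimeline is typically wired into a project, based on its bundled examples; this assumes the add-on is installed, and exact method names may vary between versions.

```cpp
#include "ofMain.h"
#include "ofxTimeline.h"

class ofApp : public ofBaseApp {
public:
    ofxTimeline timeline;

    void setup() override {
        timeline.setup();
        timeline.setDurationInSeconds(10);
        // A keyframeable parameter, editable in the timeline UI.
        timeline.addCurves("radius", ofRange(10, 200));
        timeline.play();
    }
    void draw() override {
        ofBackground(0);
        // Read the current keyframed value and feed it to the sketch.
        float r = timeline.getValue("radius");
        ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, r);
        timeline.draw(); // the editable timeline UI itself
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```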
Quick Sketch: Screen Capture to Sound by Satoru Higa
This app captures the appearance of the window behind it and turns it into a texture. The color of the texture then determines the pitch and volume of the sound.
This is a bit of a non sequitur of a project. What is the relationship between the appearance of the desktop and the act of making sound? I think this is an example of a project that makes correlations for no conceptual reason. It might be more interesting if the window functioned as a lens that somehow revealed or transformed the space beneath it.
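The mapping itself is simple; here is my guess at the mechanism. Grabbing the window behind the app is platform-specific, so a webcam stands in for the capture in this sketch: average brightness sets the pitch and volume of a sine tone.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;  // stand-in for the behind-the-window capture
    ofSoundStream soundStream;
    float freq = 440, vol = 0, phase = 0;

    void setup() override {
        grabber.setup(320, 240);
        ofSoundStreamSettings s;     // OF 0.10+ style audio setup
        s.setOutListener(this);
        s.numOutputChannels = 2;
        s.sampleRate = 44100;
        soundStream.setup(s);
    }
    void update() override {
        grabber.update();
        if (!grabber.isFrameNew()) return;
        ofPixels& px = grabber.getPixels();
        float sum = 0;
        for (size_t i = 0; i < px.size(); i++) sum += px[i];
        float brightness = sum / px.size() / 255.0f; // 0..1
        freq = ofMap(brightness, 0, 1, 110, 880);    // pitch from color
        vol  = brightness * 0.5f;                    // louder when brighter
    }
    void audioOut(ofSoundBuffer& buffer) override {
        for (size_t i = 0; i < buffer.getNumFrames(); i++) {
            phase += freq * TWO_PI / buffer.getSampleRate();
            if (phase > TWO_PI) phase -= TWO_PI;
            float s = std::sin(phase) * vol;
            buffer[i * 2] = s;
            buffer[i * 2 + 1] = s;
        }
    }
    void draw() override { grabber.draw(0, 0); }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}
```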
Forth is a public installation that simulates an endless ocean and groups of people traversing that ocean in lifeboats. The weather, sound, and amplitude of the waves are influenced by the current weather at the installation site.
This piece was commissioned for an academic space that works with simulation. I think Sobecka succeeded in creating that allegorical aesthetic in this piece, and it is particularly successful as a commissioned piece where the patrons had very specific criteria. I found the process of modifying a game engine interesting. This piece relates to Brody Conrad's Elvis piece.
TocattaTouch allows the user to manipulate a simulated sheet of fabric, applying forces to drag it through the space in which it resides, producing audio feedback to the user’s actions at the same time. I find this project interesting because it enables interaction in an imprecise manner rather than the typical precision involved in computing. The use of audio feedback continues in this vein, providing only a sense of what is happening to the cloth rather than precise measures. In its present form, TocattaTouch seems to operate in a specific range of parameters on a single cloth object. As a result, this project feels more like a conceptual demo than an idea-generating tool. I could see a more fleshed-out iteration gaining traction as a conceptual design tool.
Photon explores the transformation of 2D information into an ephemeral 3D object that is nonetheless responsive to real objects in its vicinity. The project doubtless drew inspiration from Anthony McCall's 1973 Line Describing a Cone, in which a film projector was used to project a cone into a foggy room. Unfortunately, Photon does not make a more significant departure from the earlier piece, which is disappointing given the strides made in the underlying technology since then. The cone's only response to objects intersecting its surface is to split away from them, reinforcing its untouchable nature. It would be interesting to see Haebich explore more complex responses to intrusion, which could add personality to a light object that at present seems rather rigid.
Painting with a Digital Brush is an extension of a longstanding field in computer art: text-mode. A painter working in white paint on a black canvas is replicated in real time by the software, which produces an ASCII-art rendering of the painting that is then overlaid onto the original by a projector. This blurring of the distinction between working in the real world with traditional materials and producing a work digitally is intriguing, as is the notion that only through the (comparatively) vast computing resources of today have we become able to live-generate works in a simple art medium that hasn't been in widespread use for decades.
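The live ASCII-rendering step could be reconstructed along these lines (my own sketch, not the project's code): sample brightness on a coarse grid of the camera image and pick a glyph from a dark-to-light ramp.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;                      // stand-in for the camera watching the canvas
    const std::string ramp = " .:-=+*#%@";   // darkest to brightest

    void setup() override { cam.setup(320, 240); }
    void update() override { cam.update(); }
    void draw() override {
        ofBackground(0);
        ofSetColor(255);
        ofPixels& px = cam.getPixels();
        int step = 8; // one character per 8x8 pixel cell
        for (int y = 0; y < (int)px.getHeight(); y += step) {
            for (int x = 0; x < (int)px.getWidth(); x += step) {
                float b = px.getColor(x, y).getBrightness() / 255.0f;
                int idx = (int)(b * (ramp.size() - 1));
                ofDrawBitmapString(std::string(1, ramp[idx]),
                                   x * 3, y * 3); // scaled up for readability
            }
        }
    }
};

int main() {
    ofSetupOpenGL(960, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```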
While not written in openFrameworks, Lick Weed does give a good introduction to the modern text-mode demo. All of the objects and effects in the video are generated in real time as the demo is running, and then rendered into terminal-printable characters for display. Text-mode demos began on the earliest personal computers, some of which had no graphic capabilities except for writing text to the screen. However, due to the requirements of converting to text-mode rendering and lower resolution, these demos lack the complexity typical of modern demos. Nonetheless, the complex reflections and distortions seen in Lick Weed represent a significant step forward in this field.
This servo-articulated mobile tracks users with its four Android tablets using two Kinects. As with The Stranger, which I discussed in my last post, this installation is meant to raise questions about what it means to be watched by our devices. It makes me think about the idea that any window one can gaze out of may also allow others to gaze in. I consider this installation more compelling than The Stranger, in part because its kinetic nature makes it seem much more like an organic creature. I wish I knew more about what the Processing app on each tablet accomplishes. It may simply handle animation, with openFrameworks managing everything else off-board. If so, I would be somewhat surprised that simple screens weren't used instead, except that the use of consumer devices seems to be a major theme in the other projects displayed on Mendoza's website.
This openFrameworks demo takes advantage of the accelerometer built into MacBooks and many other laptops, whose original purpose is to halt spinning hard drives in case of a sudden drop or impact. The demo is fairly simple and just lets spheres roll about the screen as the laptop is tilted. Still, it is fascinating to me because I didn't realize the accelerometer could be accessed so easily. I feel a little silly after finding this, because my faceOSC project essentially accomplishes the same thing, but by detecting faces and assuming (possibly incorrectly) that the user is upright and has good posture. I wish there were a bit more documentation on this project. I think there's a lot of potential to do some rather cool tablet-style projects on ordinary laptops, and I also wonder what other sensors can be tapped.
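The rolling-sphere mapping itself might look something like this. Note that readAccelerometer() is a hypothetical stand-in of mine: actual motion-sensor access is platform-specific and not part of stock openFrameworks, so here the mouse fakes the tilt.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    glm::vec2 pos, vel;

    // Hypothetical stand-in: a real build would read the laptop's
    // motion sensor here (platform-specific, not stock openFrameworks).
    glm::vec2 readAccelerometer() {
        return { (ofGetMouseX() - ofGetWidth() / 2.0f) / 500.0f,
                 (ofGetMouseY() - ofGetHeight() / 2.0f) / 500.0f };
    }

    void setup() override {
        pos = { ofGetWidth() / 2.0f, ofGetHeight() / 2.0f };
        vel = { 0, 0 };
    }
    void update() override {
        glm::vec2 tilt = readAccelerometer(); // gravity in screen space
        vel += tilt * 0.5f;
        vel *= 0.99f; // rolling friction
        pos += vel;
        // Bounce off the window edges.
        if (pos.x < 20 || pos.x > ofGetWidth() - 20)  vel.x *= -0.8f;
        if (pos.y < 20 || pos.y > ofGetHeight() - 20) vel.y *= -0.8f;
        pos.x = ofClamp(pos.x, 20, ofGetWidth() - 20);
        pos.y = ofClamp(pos.y, 20, ofGetHeight() - 20);
    }
    void draw() override {
        ofBackground(0);
        ofDrawCircle(pos.x, pos.y, 20);
    }
};

int main() {
    ofSetupOpenGL(800, 600, OF_WINDOW);
    ofRunApp(new ofApp());
}
```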
This demo uses openFrameworks to render a section of a 3D Minecraft world in augmented reality, using the Bukkit server plugin to export map chunks. As an avid Minecraft player and server curator, I find this one of the more creative uses of Bukkit that I've seen, and certainly the first time I've seen anything relating Minecraft to augmented reality. It has a few shortcomings, but it's a good start. It would be good to see it incorporate texture files instead of just drawing large colored cubes, and, as the poster mentions, the render does not currently update. If these improvements were made, however, this could be a very useful tool for demonstrating structures and mechanisms in tutorials and webcasts.
Receipt Racer takes a simple "dodge the obstacles" game and adds a twist: the entire game level is printed out on a long sheet of paper, and the rate at which the game progresses is limited by the speed at which it can be printed. The player controls a light (projected onto the paper) that is sensed by a camera; if the camera sees the light intersecting an obstacle, the game is over. A guess at how that check might work appears below.
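This is an assumption about the mechanics, not undef's actual code: treat the brightest camera pixel as the player's projected light and test it against the obstacle rectangles known to be on the paper.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    std::vector<ofRectangle> obstacles; // tracked as they are printed
    bool gameOver = false;

    void setup() override {
        cam.setup(640, 480);
        obstacles.push_back(ofRectangle(200, 100, 80, 40)); // example obstacle
    }
    void update() override {
        cam.update();
        if (!cam.isFrameNew() || gameOver) return;
        ofPixels& px = cam.getPixels();
        // The projected player light should be the brightest thing in frame.
        glm::vec2 player(0, 0);
        float best = 0;
        for (int y = 0; y < (int)px.getHeight(); y++)
            for (int x = 0; x < (int)px.getWidth(); x++) {
                float b = px.getColor(x, y).getBrightness();
                if (b > best) { best = b; player = { (float)x, (float)y }; }
            }
        for (auto& r : obstacles)
            if (r.inside(player.x, player.y)) gameOver = true;
    }
    void draw() override {
        cam.draw(0, 0);
        if (gameOver) ofDrawBitmapString("GAME OVER", 20, 20);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```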
I think this project is awesome because the printing makes the game more physical. Also, the entire journey the player took is printed out. I find this record very profound, since people never remember the past when playing games like this one; it's always about the present. That said, if the author wanted to show the destruction of the past, he could make the paper drop into a fire, which might add more anxiety for the player.
The author (http://www.undef.ch/cat/projects) seems to have done a number of print-related projects in the past. His passion for this medium probably inspired him the most.
2. Particles
Processing is neat, and there are a lot of cool effects you can make within the confines of your computer screen. This project was particularly neat because it took something I would expect to see on a screen and put it in real life. Seeing a display like this physically makes the experience really magical.
The artist essentially created a large "roller coaster" for glowing orbs. With the technology inside them, the orbs can transmit their position and glow information, and from this an elaborate effect can be constructed. I kind of wish they had added speakers to the orbs, though; this would open up so much more potential, something like a marching-band effect.
I was not surprised to find that the creator of this was an architect. When I first saw the image of the project, I thought it was another light-up building. I think the construction of the project mirrors what one might see in modern architecture.
3. Precious
A bike, outfitted with sensors to measure speed, direction, location, temperature, humidity, and more, can communicate with the world through a website. The goal here was to create something informative with a human-like touch.
Humanizing objects is always cool, but with the bike it feels even more appropriate. Since the bike was to go on a journey across the US, this made the process of presenting the journey a little more fun. One step further the creator could take would be adding sensors to detect whether the bike was damaged or lying on its side; this might indicate that the rider is hurt and could be used to notify 911.
This little application is a beautiful visualization of noise/PPG data. PPG stands for photoplethysmograph, a device that measures the volume of blood pulsing into tissue or an organ. The data is used to distort various vertices of a sphere. The whole thing is smoothed and rendered beautifully, apparently with some help from material on this site: http://machinesdontcare.wordpress.com/, which is super awesome. In addition, the user interface allows for manipulation of the shaders and of how quickly and at what magnitude the sphere distorts. I find this project interesting for two reasons: 1) it is a visualization of data (or noise, as a test) that is not necessarily informative in a quantitative sense but is still interesting aesthetically, and perhaps could be useful in a more qualitative fashion; 2) it's making some beautiful blobs. I don't know anything about this method of rendering, but it looks interesting and I want to learn.
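A rough sketch of the vertex-displacement idea (my reconstruction; the original also layers shaders, reportedly drawing on the machinesdontcare material): push each sphere vertex along its normal by a signal value, with ofSignedNoise standing in for the PPG input.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofSpherePrimitive sphere;
    ofMesh base;          // undistorted copy of the sphere mesh
    ofEasyCam cam;
    float magnitude = 60; // how far the signal pushes the surface

    void setup() override {
        sphere.set(150, 48);
        base = sphere.getMesh();
    }
    void update() override {
        ofMesh& m = sphere.getMesh();
        float t = ofGetElapsedTimef();
        for (size_t i = 0; i < base.getNumVertices(); i++) {
            glm::vec3 v = base.getVertex(i);
            glm::vec3 n = glm::normalize(v);
            // ofSignedNoise stands in for the incoming PPG value.
            float d = ofSignedNoise(n.x + t, n.y + t, n.z) * magnitude;
            m.setVertex(i, v + n * d);
        }
    }
    void draw() override {
        ofBackground(0);
        cam.begin();
        sphere.getMesh().drawWireframe();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```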
This is an experiment using the Projector Camera Toolkit, an add-on for openFrameworks that allows precise projector calibration. Shadow Play involves two projectors, precisely aligned, displaying inverse images. The result is a white screen that only shows something interesting in the shadow cast by blocking the light from one of the projectors; an animation, image, or whatever is, in a sense, encased in the shadow. I can imagine kids enjoying this, especially if the setup detected what sort of shadow was being cast and updated the image accordingly.
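The inverse-image trick can be sketched as follows (my own illustration, not the toolkit's code; "hidden.png" is an assumed asset name): one projector shows the image, the other its complement, so together they sum to flat white everywhere except inside a shadow.

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofImage original, inverse;

    void setup() override {
        original.load("hidden.png");             // assumed asset name
        original.setImageType(OF_IMAGE_COLOR);   // drop alpha before inverting
        inverse = original;
        ofPixels& px = inverse.getPixels();
        for (size_t i = 0; i < px.size(); i++) px[i] = 255 - px[i];
        inverse.update();
    }
    void draw() override {
        // In the installation each image goes to its own, precisely
        // calibrated projector; they are drawn side by side here only
        // for illustration.
        original.draw(0, 0);
        inverse.draw(original.getWidth(), 0);
    }
};

int main() {
    ofSetupOpenGL(1280, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```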
All the Universe is Full of the Lives of Perfect Creatures is a screen-based piece in which a 3D model of an animal head is overlaid on the reflection of a person's face; I suspect this is done using an LCD screen and a two-way mirror. Using faceOSC, the animal head mirrors the motion and expression of the person, and apparently the animal sometimes makes its own expressions. The animals are meant to range from the highly domesticated, like a dog, to the extremely feral, like a wolf. I am not entirely sure what the universe being full of the lives of perfect creatures has to do with overlaying those creatures on a person's face, but I think this is an interesting project because the interaction seems entertaining and well-mapped, and it is an elegant way of fiddling with the ideas of identity and species. It would be awesome to do this with bizarre monsters, or animals like nematodes and naked mole rats, or even other people's faces.
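For anyone wanting to try something similar, here is a hedged sketch of the faceOSC-receiving side (not the artist's code; the port and address follow FaceOSC's documented defaults, but check your version): listen for tracked mouth height and drive a stand-in "jaw" parameter with it.

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float mouthHeight = 0;

    void setup() override {
        receiver.setup(8338); // FaceOSC's default port
    }
    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/gesture/mouth/height")
                mouthHeight = m.getArgAsFloat(0);
        }
    }
    void draw() override {
        ofBackground(0);
        // Stand-in for posing the 3D animal head: open a "jaw"
        // proportionally to the tracked mouth height.
        ofDrawRectangle(200, 200, 200, 20 + mouthHeight * 10);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```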
Second Surface – a multi-user spatial collaboration system.
The system provides an alternate-reality space that generates playful and natural interaction in an everyday setting for multiple users. The goal is to create a second surface on top of reality, invisible to the naked eye, that serves as a real-time spatial canvas on which everyone can express themselves. The system creates an interesting new collaborative user experience, encourages playful content generation, and can provide new ways of communicating within everyday environments such as cities, schools and households.
I think this one makes a great contribution to the field. It combines the virtual world and the real world through a screen. Traditionally, researchers widen the virtual world by projection, or build a purely virtual world; this project goes the opposite way. It is a really good way to help people make things in the real world collaboratively.
ALS is a disease that leaves people almost completely physically paralyzed, except for their eyes. This international team is working together to create a low-cost, open-source eye-tracking system that will allow ALS patients to draw using just their eyes. The long-term goal is to create a professional and social network of software developers, hardware hackers, urban projection artists and ALS patients from around the world who use local materials and open-source research to creatively connect and make eye art.
I think this is not just a tech project based on openFrameworks; it's a very meaningful activity for the patients. It helps them write and draw again even though they no longer have the ability to use their arms. It also has the long-term goal of gathering a community to help ALS patients. It makes me think that this is what tech should be like: it should help us make the world more beautiful and improve people's lives, letting them live more easily on this planet.
I think these two are very interesting little tests. Though they are small, they are both important in helping us build more complicated projects. We can do many things once we can tell whether the person sitting in front of the computer is smiling, or where their hands are. So I think they are very helpful and interesting.