kelseyLee – Mobile in Motion

by kelsey @ 6:41 am 1 March 2012
I started out wanting to generate a visual piece using a song's data. I wanted the appearance to be abstracted and simplified, and I really liked the idea of using motion to convey different aspects of a song. A source of inspiration was:
[vimeo=https://vimeo.com/31179423]
While brainstorming for the project, I happened across an Alexander Calder sculpture at the Pittsburgh airport. Calder is a sculptor whose works I've admired for a long time, and it struck me as strange that while these hanging sculptures seem so lively and free, suspended in space, they never actually move.

At this point I was inspired to generate a hanging mobile that would dance to the music.

I began by looking at a bunch of Calder mobiles, examining how the different tiers fit together.

I then examined Processing's OpenGL 3D rendering to figure out how to generate the shapes in space. After starting from a Processing sketch that drew cubes, I needed to work out how to generate motion. It was at first difficult to think in terms of 3D coordinates, and then to have each tier connect to the tier above it and move about in space while still staying attached. In my piece I store the tiers in an array and calculate the top tier's position first, so that the tier below it can be placed, and so on down the mobile.
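As a rough illustration of that top-down calculation, here is a minimal Processing sketch of the idea (the Tier class and all the numbers are mine, not the project's actual code): each tier is drawn relative to the hanging point of the tier above it, so the transformations cascade down the array.

    // Illustrative sketch only: each tier hangs from the one above it,
    // and positions are recomputed top-down every frame.
    Tier[] tiers = new Tier[5];

    class Tier {
      float armLength;   // length of the horizontal arm
      float dropLength;  // length of the wire down to the next tier
      float angle;       // current rotation about the vertical axis
      float spin;        // rotation speed, driven later by the pitch data

      Tier(float armLength, float dropLength) {
        this.armLength = armLength;
        this.dropLength = dropLength;
      }
    }

    void setup() {
      size(600, 600, P3D);
      for (int i = 0; i < tiers.length; i++) {
        tiers[i] = new Tier(80 - i * 10, 40);
      }
    }

    void draw() {
      background(255);
      translate(width / 2, 100, 0);
      // Walk down the hierarchy: each tier is drawn relative to its parent's hanging point.
      for (int i = 0; i < tiers.length; i++) {
        Tier t = tiers[i];
        t.angle += t.spin;
        rotateY(t.angle);
        line(0, 0, 0, t.armLength, 0, 0);    // horizontal arm
        translate(t.armLength, 0, 0);
        line(0, 0, 0, 0, t.dropLength, 0);   // wire down to the next tier
        translate(0, t.dropLength, 0);
      }
    }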

As for the music, I used Echonest to analyze Phoenix’s Love Like a Sunset (Turzi RMX).

It was particularly difficult to get data because the Processing library that uses Echonest had version incompatibilities and wasn't really working, so I want to thank Kaushal for helping me work around it to analyze the song I chose.
When I got the data, there was some interesting stuff related to popularity, among other things, but with a deeper search I was also able to access more granular data with second-by-second analysis of pitches, timbre, and so on. Since I wanted to show motion, I focused on the pitch data, which included over 3,500 segments of analysis for my song. I planned to time the motion of the mobile to the song data.
Then I encountered difficulties, because the pitch data actually consisted of 12 pitch values, each on a scale of 0-1.0, reported roughly every 3-5 milliseconds. I couldn't find any documentation about what the pitches were or why 12 of them were associated with each segment. At that point it had taken me so long to prepare the data that I decided to make do with it. I would correlate a range of pitch values with a specific tier of the sculpture, and whenever that pitch was played, that tier of the mobile would move. With so much pitch data, I just took the first pitch in each sequence of 12 and used it to determine which tier would move.
Originally I wanted only one tier to rotate at a time, so I simplified the data to update the tier movements only about once a second. That seemed too choppy, however, so instead I used the appearance of a pitch to start the tier's movement. Ideally I'd like a tier's movement to stop after some time, which would make for more interesting movement patterns, but this works as well. Watching how the sculpture moves as the song progresses, as more tiers become involved and the asynchronous nature of the tiers ebbs and flows, is interesting.
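The trigger itself is tiny. Building on the Tier array sketched earlier, something like the following maps the first of the 12 pitch values to a tier index and sets that tier spinning (again, illustrative names and values, not the original code):

    // firstPitch is the first of the 12 pitch values for the current segment, in the range 0-1.0.
    void triggerTier(float firstPitch) {
      int tierIndex = int(constrain(firstPitch, 0, 0.999) * tiers.length);
      tiers[tierIndex].spin = 0.02;   // once triggered, the tier keeps rotating
    }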
I wanted to show music doing a dance, something it is unable to do on its own. I wanted to move away from those electric, space-nebula-filled music visualizers and do something a bit more relatable. Yes, I would have liked more relevant data with more meaning: the actual note patterns of the song, discretized into pieces that were human-understandable or at least supported by documentation. There is definitely room for improvement with this project, but I am happy that I was able to generate motion from music in an inanimate object. I could easily plug in another song's Echonest analysis file and a completely different dance would arise, and this visual fingerprinting of songs was what I foresaw for my project in its original inception.

[youtube=https://www.youtube.com/watch?v=WOpIPqEFJcg]

Evan Sheehan | Project 3 | Science!

by Evan @ 6:28 am

On the Origin of Egg Drops

I’m not entirely sure where the idea for this project came from. I was exploring several ideas for using flocking algorithms when I suddenly thought of evolving solutions to the egg drop problem using genetic algorithms.

[vimeo=https://vimeo.com/37727843]

I recall performing this “experiment” more than once during my childhood, but I don’t think I ever constructed a container that would preserve an egg from a one story fall. There was something very appealing about revisiting this problem in graduate school and finally conquering it.

Grab the code.

Physics in Processing

[vimeo=https://vimeo.com/37725342]

I began this project working with toxiclibs. Its springs and mesh structures seemed like good tools for constructing an egg drop simulation. Its lack of collision detection, however, made it difficult to coordinate the interactions of the egg with the other objects in the simulation. On to Box2D…

Box2D made it pretty easy to detect when the egg had collided with the ground. Determining whether or not the egg had broken was simply a matter of looking at its acceleration, and if that was above some threshold (determined experimentally), it broke.
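As a hedged sketch of that break test (assuming a JBox2D body for the egg and a fixed 60 fps timestep; the threshold and names are illustrative, not the project's actual values), the acceleration can be approximated from the change in the egg's velocity between frames:

    import org.jbox2d.common.Vec2;
    import org.jbox2d.dynamics.Body;

    float BREAK_THRESHOLD = 50.0f;                // found experimentally in the real project
    Vec2 previousVelocity = new Vec2(0, 0);
    boolean broken = false;

    void checkEgg(Body eggBody) {
      Vec2 v = eggBody.getLinearVelocity();
      // Approximate acceleration as the per-frame velocity change divided by the timestep.
      float accel = v.sub(previousVelocity).length() / (1.0f / 60.0f);
      if (accel > BREAK_THRESHOLD) {
        broken = true;                            // the impact exceeded what the shell can take
      }
      previousVelocity.set(v);
    }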

Genetic Structure

What interventions can you make to preserve an egg during a fall? The two obvious solutions are 1) make the egg fall slower so it lands softly and 2) pack it in bubble wrap to absorb the force. These were two common solutions I recalled from my childhood. I used a balloon in the simulation to slow the contraption's fall, and packing peanuts inside the box to absorb the impact when it hits the ground. This gave me several parameters I could vary to breed different solutions: the buoyancy of the balloon to slow the fall, the density of the packing peanuts to absorb shock, and the packing density of the peanuts in the box. Additionally, I varied the box size and the peanut size, both of which affect the number of peanuts that will fit in the box.
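To make the parameter set concrete, here is an illustrative genome class in Processing (the field names and ranges are mine; the actual project may encode things differently):

    // Illustrative genome for one contraption: one gene per varied parameter.
    class ContraptionGenome {
      float balloonBuoyancy;   // how strongly the balloon slows the fall
      float peanutDensity;     // density of the packing peanuts
      float packingDensity;    // how tightly the peanuts fill the box
      float boxSize;
      float peanutSize;
      color boxColor;          // purely cosmetic, to tell contraptions apart

      ContraptionGenome() {
        balloonBuoyancy = random(0, 1);
        peanutDensity   = random(0.1, 1);
        packingDensity  = random(0.1, 1);
        boxSize         = random(50, 150);
        peanutSize      = random(2, 10);
        boxColor        = color(random(255), random(255), random(255));
      }
    }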

I also varied the color of the container, just to make it slightly more obvious that these were different contraptions.

Evaluating Fitness

My baseline for evaluating a contraption's fitness was how much force beyond the minimum required to break the egg was applied to the egg on impact. A contraption that allowed the egg to be smashed to pieces was less fit than one that barely cracked it. To avoid evolving solutions that were merely gigantic balloons attached to the box, or nothing more than an egg encased in bubble wrap, I associated a cost with each of the contraption's parameters. Rather than fixing these costs in the application, I made them modifiable through sliders in the user interface so that I could experiment with different values.
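A hedged sketch of that scoring, using the illustrative genome above: excess force beyond the egg's breaking point is penalized, and each parameter contributes a slider-weighted cost so that "just attach a gigantic balloon" doesn't dominate. The exact formula is my guess, not the project's.

    float fitness(ContraptionGenome g, float impactForce, float breakingForce,
                  float buoyancyCost, float peanutCost, float sizeCost) {
      float excess = max(0, impactForce - breakingForce);       // how far past breaking the impact went
      float materialCost = g.balloonBuoyancy * buoyancyCost
                         + g.packingDensity * g.peanutDensity * peanutCost
                         + g.boxSize * sizeCost;
      return 1.0f / (1.0f + excess + materialCost);             // higher is fitter
    }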

Nir Rachmel | Project 3 + Generative Art

by nir @ 5:24 am

Evolution of ideas.

My initial idea for this project was to simulate a line or a crowd by implementing a combination of flocking and steering algorithms. I planned to set some ground rules and test how the crowd behaves when interesting, extreme characters enter it and behave completely differently. Would the crowd learn to adapt? Would it submit to the new character?

After playing around with ToxicLibs to generate the queue, I thought about an idea that interested me more, so I only got as far as modeling one of the queues:

William Paley

William Paley was an English clergyman who lived in the 18th century. He coined one of the most famous arguments for God's existence and for the belief that everything we see is designed for a purpose (by God, of course). As a rhetorical device, he talked about a watch and how complicated and synchronized all its parts need to be for it to function well; in a similar fashion, he argued, so are all the other things we see around us on earth and beyond.

I wanted to use this lab assignment to play with evolution and see if I could create a set of rules that would generate a meaningful image out of a pool of randomly generated images. Following the explanation of genetic algorithms in Daniel Shiffman's book, I designed a program that tries to evolve a well-known image from a pool of randomly generated ones. The only "cheat" here is that the target image is actually used to calculate the fitness of the images throughout the runtime of the program. Even with that small "cheat", this is not an easy task! There are many parameters that can be used to fine-tune the algorithm:
1. Pool size – how many images?
2. Mating pool size – how many images are in the mating pool. This parameter is especially important in a round where some images have little representation: if the mating pool is small, they will be eliminated, and vice versa.
3. Mutation rate – a double-edged sword, as I have learned. Too much mutation and you never get to a relatively stable optimum; too little and you get nowhere.
4. Fitness function – this is the hardest part of the algorithm: coming up with a criterion that measures what makes one image "better" than another. As explained above, for this lab I knew my target, so I could calculate the fitness much more easily. I used color distance: for each pixel in the image, I compared the three color channels with the corresponding pixel of the other image. The closer you are, the higher your fitness and the bigger your chances of mating in the next round (see the sketch after this list).
5. Last, but not least – performance. These algorithms are time-consuming and CPU-consuming. When I tried a 200×200 image, the computer reacted really slowly. I ended up with an optimum of 100-pixel-square images for this assignment.
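Here is a minimal Processing sketch of the color-distance fitness from point 4 (assuming both images have the same dimensions; the normalization is mine, not necessarily the program's):

    float fitness(PImage candidate, PImage target) {
      candidate.loadPixels();
      target.loadPixels();
      float totalDistance = 0;
      for (int i = 0; i < candidate.pixels.length; i++) {
        color a = candidate.pixels[i];
        color b = target.pixels[i];
        // Sum the per-channel differences against the target image.
        totalDistance += abs(red(a) - red(b)) + abs(green(a) - green(b)) + abs(blue(a) - blue(b));
      }
      // Closer images get higher fitness, which translates into more mating-pool slots.
      return 1.0f / (1.0f + totalDistance);
    }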

Following is a short movie that shows one run of the algorithm. The image never comes out exactly right; there are always fuzzy colors, and in general it looks "alive".

The End.

EliRosen – Project3 – Excitable Creatures

by eli @ 2:37 am

Not a Prayer
I started this project with the idea of generating a group of creatures that would worship the mouse as their creator. I knew the sound was going to be critical. I discovered a pack of phoneme-like sounds by batchku on the Freesound project and used the Ess library for Processing to string them together into an excited murmur. It didn't sound much like praying, but I liked the aesthetic. To create the visuals I took a cue from Karsten Schmidt's wonderful project "Nokia Friends": I put the creatures together as a series of springs using the toxiclibs physics library.
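As a rough sketch of that spring construction (using the toxiclibs VerletPhysics2D classes; the ring layout and spring strengths are mine, not the actual creature geometry): neighbouring particles are connected with stiffer springs and opposite particles with softer ones, so the blob holds its shape but still wobbles.

    import toxi.physics2d.*;
    import toxi.geom.*;

    VerletPhysics2D physics = new VerletPhysics2D();
    VerletParticle2D[] ring = new VerletParticle2D[8];

    void setup() {
      size(480, 360);
      // Place particles in a circle.
      for (int i = 0; i < ring.length; i++) {
        float a = TWO_PI * i / ring.length;
        ring[i] = new VerletParticle2D(width / 2 + 50 * cos(a), height / 2 + 50 * sin(a));
        physics.addParticle(ring[i]);
      }
      // Connect neighbours (stiff) and opposites (soft) with springs.
      for (int i = 0; i < ring.length; i++) {
        VerletParticle2D a = ring[i];
        VerletParticle2D b = ring[(i + 1) % ring.length];
        VerletParticle2D c = ring[(i + ring.length / 2) % ring.length];
        physics.addSpring(new VerletSpring2D(a, b, a.distanceTo(b), 0.01));
        physics.addSpring(new VerletSpring2D(a, c, a.distanceTo(c), 0.005));
      }
    }

    void draw() {
      background(255);
      physics.update();
      beginShape();
      for (VerletParticle2D p : ring) vertex(p.x, p.y);
      endShape(CLOSE);
    }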

Here are some of my early character tests:

Getting the characters together took a lot of tweaking. I wanted as much variation in the forms as possible but I also wanted the bodies to be structurally sound. I found that the shapes had a tendency to invert, sending the springs into a wild twirling motion. This was sort of a fun accident but I wanted to minimize it as much as possible.

Here is the structural design of my creature along with some finished characters:

At this point I focused on the behavior of the creatures. I used attraction forces from toxiclibs to keep the creatures moving nervously around the screen. I also added some interactivity: if the user places the mouse over the eye of a creature, it leaps into the air and lets out an excited gasp. Clicking the mouse applies an upward force to all of the creatures so that you can see them leap or float around the screen.
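Building on the spring sketch above, here is a hedged illustration of those behaviours (the class names follow the toxiclibs examples I know; the radii, force values, and the eyes list are illustrative, not the project's):

    import toxi.physics2d.behaviors.*;

    ArrayList<VerletParticle2D> eyes = new ArrayList<VerletParticle2D>();  // one "eye" particle per creature

    void setupBehaviors() {
      // A weak attraction toward the centre of the screen keeps the blobs milling about nervously.
      physics.addBehavior(new AttractionBehavior(new Vec2D(width / 2, height / 2), width, 0.01f));
    }

    void mouseMoved() {
      // Hovering over an eye makes that creature leap (and, in the project, play an excited gasp).
      for (VerletParticle2D eye : eyes) {
        if (dist(mouseX, mouseY, eye.x, eye.y) < 10) {
          eye.addForce(new Vec2D(0, -80));
        }
      }
    }

    void mousePressed() {
      // Clicking applies an upward force to every particle, so all the creatures hop or drift upward.
      for (VerletParticle2D p : physics.particles) {
        p.addForce(new Vec2D(0, -40));
      }
    }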

Here is a screen capture of me playing with the project:
[youtube https://www.youtube.com/watch?v=sMfVkc9OhU0&w=480&h=360]
