Looking Outwards: Synthesis, Simulation, Morphogenesis

by blase @ 12:25 pm 21 February 2012

Tim Blackwell’s Swarm Music (2001-2002):
Swarm Music

In a few different series of compositions, Tim Blackwell uses swarming and flocking algorithms to generate music. He explains that an improvising ensemble is essentially performing a sort of flocking behavior, in which the group as a whole moves in some direction, led by the musicians responding to one another. Based on these principles, he synthesizes MIDI compositions in which the instruments swarm and flock to create music.

Overall, what attracted me to this piece was the idea of creating a composition in the same way jazz improvisers create music, with each voice in an ensemble responding to the others. I think this work is quite successful, although I wish these pieces were updated with modern synthesizers. It’s very clear from listening to these decade-old compositions that MIDI and other synthesized audio technologies have come a long, long way in the last ten years.


Jon McCormack’s Morphogenesis Series (2011):
Link to series

This series of digital prints synthesizes flora using genetic algorithms, with Australian flora as a starting point. He notes that the flora produced by this series have familiar characteristics, yet would probably be impossible in nature. Judging from a detailed image on the site linked above, he seems to use a nice renderer/shader to go from the generated models to the final versions.

The idea of synthesizing biological creatures that are familiar yet impossible is attractive, and I think McCormack executes it well. As someone who’s not familiar with Australian flora, I perhaps can’t appreciate all of the subtlety of what the genetic algorithm produced, as opposed to what actually is characteristic of Australia. Regardless, I think the images are very pretty, and I would appreciate seeing them printed out rather than on a computer monitor. (Prints are available.) Mediating this sort of art with a screen, even though it was created on a device whose interaction with us is mediated by such a screen, doesn’t do the images as much justice as printing them on high-quality paper might.



Deborah Kelly, Beastlines (2011)
Link to Beastlines videos

Beastlines is a series of short animations that take biological mashups, in a cut-and-paste style, and animate them. In essence, it’s a commentary on new lifeforms and genetically mutated life, and it takes the opposite approach from the two other projects I’ve looked outwards towards today. From the artist’s description, the most striking phrase used is “biology is no longer destiny.” That phrase seems to be a major driver of this work, which animates biological deviations.

On their own, I don’t think the individual figures are very striking. In animation, however, taking on bizarrely human, dance-like characteristics, the piece becomes more compelling. There are a few moments when the characters seem to be dancing, juxtaposing their distinctly mashed-up, non-human forms with what I perceive as an inherently human movement.

Project 2 Proposal: Autostereogram Constructor

by deren @ 11:40 am

I would like to create a program that constructs and deconstructs an autostereogram. I am very interested in the mechanism by which people perceive them, and I think it would be interesting to explore how long you can see the object pop out once you know that it is there. The user will be able to adjust the speed and amount of “random” generation of the pixels to see how long they can see the object clearly. I would like this to help people who have trouble seeing the objects, and also to offer an interesting perspective for people who can see them but don’t quite understand how or why.
I have not yet decided what the object will be; I think I will be able to pick one once I have tested the program some more, but I will choose something that is easily recognizable.


http://en.wikipedia.org/wiki/Autostereogram#Simulated_3D_perception
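For reference, the construction itself is simple: each pixel is constrained to match the pixel one “eye separation” to its left, with that separation reduced by the hidden object’s depth at that point. A minimal Processing sketch of the idea (the raised rectangle is just a placeholder depth map, not my final object):

```processing
int strip = 80;  // base pattern width / eye separation, in pixels

void setup() {
  size(600, 400);
  noLoop();  // render the stereogram once
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      // each pixel must match the one (strip - depth) pixels to its left
      int link = x - (strip - depth(x, y));
      pixels[y * width + x] = (link < 0)
        ? color(random(255))          // seed the leftmost strip randomly
        : pixels[y * width + link];   // copy the constrained pixel
    }
  }
  updatePixels();
}

// placeholder depth map: a raised rectangle in the middle of the image
int depth(int x, int y) {
  return (x > 200 && x < 400 && y > 120 && y < 280) ? 20 : 0;
}
```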

Billy Keyes – Project 3 Proposal

by Billy @ 10:58 am

Initial Ideas

Inspired by some of the projects in my last Looking Outwards, I wanted to do something that would produce nice-looking images with an emphasis on color. Both ideas I had for achieving this involved things that traced out colored paths in the image. The first involved little creatures whose behavior combined Craig Reynolds’ steering behaviors with color-based reproduction and grouping rules. As the creatures moved around, they would leave color trails. The second idea involved growing colored paths for dots wandering around the space, as if steps were appearing out of nothing. As the dots moved farther away from a path segment, the segment would fade out.

In talking about these ideas with other people in the class, I decided that it would be very difficult to tune the rules to produce results that looked better than scribbling on paper with crayons. Both of the projects I was inspired by had a well-defined structure, which my ideas lacked.

Around the same time, a completely different idea I had discarded early on started to seem possible and interesting.

Growing Light-Responsive Buildings

I know nothing about architecture, but I’ve always enjoyed structures that make excellent use of natural light. So I thought it would be interesting to try growing buildings, like plants, in response to light sources. At minimum, I hope to create some interesting shapes that are inspired by nature without being natural. At best, I’d really like to create actual buildings (even if they could never be built), with windows, and show how light exists in the generated spaces. At best-est, I’d like to 3D print some of the resulting structures, but I don’t think there will be time for this.
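A rough 2D sketch of the kind of phototropic growth rule I have in mind (growth tips step toward a light source with some jitter and occasionally branch; the real project would be 3D, with meshes rather than lines):

```processing
ArrayList<PVector> tips = new ArrayList<PVector>();
PVector light = new PVector(300, 50);  // a single point light source

void setup() {
  size(400, 400);
  background(255);
  tips.add(new PVector(width / 2, height));  // growth starts at the ground
}

void draw() {
  stroke(0, 40);
  for (int i = tips.size() - 1; i >= 0; i--) {
    PVector tip = tips.get(i);
    // step toward the light, jittered so growth isn't a straight line
    PVector dir = PVector.sub(light, tip);
    dir.normalize();
    dir.rotate(random(-0.6, 0.6));
    PVector next = PVector.add(tip, PVector.mult(dir, 3));
    line(tip.x, tip.y, next.x, next.y);
    tips.set(i, next);
    if (random(1) < 0.02 && tips.size() < 64) {
      tips.add(new PVector(next.x, next.y));  // occasional branching
    }
  }
}
```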

The project will likely be built in Processing, using Toxiclibs to handle the meshes and computational geometry, although I’m also considering scripting Blender. It mostly depends on whether my approach involves generating all geometry or automating traditional modelling tools (extrusion and subdivision). Regardless of where the models are produced, I will likely use Blender to create final renderings.

Currently, I’m trying to determine the best way to approach growing the models. I hope to have at least one method working by Thursday, and hopefully have a second started so I can compare them. Included below are some (mostly unhelpful) sketches of the methods I’ve thought of so far.

Project 3: Flocking Lights

by blase @ 10:35 am

For the generative project, I’m going to use a flocking algorithm to generatively change the colors on a strand of lights. Imagine a strand of 25 LED lights, where each light’s color is individually controllable. Now, to decide the color of each light, imagine a flock of 25 birds in which each bird is mapped to a light. This flock flies over either a color rectangle or a color wheel, as below, and the color under each bird specifies the color of its corresponding light:

[image: color field (rectangle or wheel)]
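A minimal Processing sketch of the mapping (random-walking dots stand in for the boids here; Shiffman’s flocking library would supply the real movement, and the hue field is just a simple left-to-right gradient):

```processing
int NUM_LIGHTS = 25;
PVector[] birds = new PVector[NUM_LIGHTS];
color[] lightColors = new color[NUM_LIGHTS];

void setup() {
  size(400, 400);
  colorMode(HSB, width, 100, 100);  // hue range matches the x axis
  for (int i = 0; i < NUM_LIGHTS; i++) {
    birds[i] = new PVector(random(width), random(height));
  }
}

void draw() {
  // the color field: hue varies from left to right
  for (int x = 0; x < width; x++) {
    stroke(x, 100, 100);
    line(x, 0, x, height);
  }
  for (int i = 0; i < NUM_LIGHTS; i++) {
    // random walk as a stand-in for the flocking update
    birds[i].add(new PVector(random(-2, 2), random(-2, 2)));
    birds[i].x = constrain(birds[i].x, 0, width - 1);
    birds[i].y = constrain(birds[i].y, 0, height - 1);
    // the color under each bird becomes its light's color
    lightColors[i] = get((int) birds[i].x, (int) birds[i].y);
    fill(0);
    ellipse(birds[i].x, birds[i].y, 6, 6);
  }
  // lightColors[] would then be pushed to the LED strand (e.g., over serial)
}
```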

I’m planning to use these lights, which I already have but haven’t had the chance to mess around with:
Adafruit Lights

[image: the light strand]

I’m deciding whether to give Processing one more chance to prove itself, or to use JavaScript/HTML5. If I use Processing, I will use:

* This flocking library by Daniel Shiffman, implementing Craig Reynolds’ Boids: Flocking Library

* Perhaps this HSB color wheel: Processing HSB Color Wheel

If I use HTML5/JavaScript, I’m debating using WebSockets for communication, implementing my own flocking algorithm (using HTML5 Canvas to draw), and cannibalizing this color picker code: JS Color Picker

Sankalp Bhatnagar – Project 3 Proposal

by sankalp @ 10:08 am

So, I decided to scrap the idea of self-generating origami crease patterns for a few reasons: the coding behind it is incredibly beyond me, and even if I had help, I wouldn’t be able to implement it in a truly beneficial way. I also have quite a lot on my plate in terms of time commitments (internship interviews, class midterms, and group presentations the week this is due), so I really need to scale back. But I honestly don’t find anything wrong with scaling back, especially on this sort of complex assignment.

Regardless, I have now started to develop a new idea that still involves both Mathematics and Design. In Mathematics, I am interested in developing recursive fractals. In Design, I am interested in exploring typography.

Thus, for this assignment, I will explore Generative Typography as it relates to fractal creation. I’m not 100% sure how I’m going to do it, but I do have quite a strong source to help me with the project. I recently found a book called Type + Code by Yeohyun Ahn that gives thorough tutorials for basic, intermediate, and advanced coding in Processing (the main language I code in). I’m super excited about it, because I plan to use this book along with the sections of The Nature of Code by Dan Shiffman that deal with fractals and cellular automata to really create something great.
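To give a sense of the recursion involved, here is the classic branching fractal in a few lines of Processing (a generic tree; letterforms in the spirit of Type + Code would eventually replace the plain line segments):

```processing
void setup() {
  size(400, 400);
  background(255);
  stroke(0);
  translate(width / 2, height);  // grow upward from the bottom center
  branch(90);
}

void branch(float len) {
  if (len < 4) return;  // base case ends the recursion
  line(0, 0, 0, -len);  // draw this segment
  translate(0, -len);   // move to its tip
  pushMatrix();
  rotate(radians(25));
  branch(len * 0.67);   // right sub-branch
  popMatrix();
  pushMatrix();
  rotate(radians(-25));
  branch(len * 0.67);   // left sub-branch
  popMatrix();
}
```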

Essentially, I plan to generate something Beautiful that I will then, time permitting, laser-etch into a block of wood, thus creating a physical artifact for myself. Should be fun, dealing with lasers that is, because I’ve never used laser cutters before. So I’m pumped for that. But first, generative art.

See you on the other side of this project

Sankalp

Kelsey Lee – Project 3 Idea

by kelsey @ 6:37 am
My original concept was about using music to generate some type of visual representation of itself. I like the idea of creating something that appeals to more than just the eyes, and the addition of audio seemed to have a lot of potential.
I remember this one visualization of Bach’s Cello Suite No. 1 – Prelude

[vimeo=https://vimeo.com/31179423]

And while it doesn’t appear to be a generative piece of art, the appeal of using what is seen to reinterpret what is heard became the focus of my project idea.

Recently I saw a mobile by Alexander Calder. Every time I see one of his works, I’m struck by how interesting it is to both view and contemplate: the balancing of the weights and the floating in mid-air, always seeming to want to be more lively.

My idea now is to generate mobile-like sculptures based on some musical signature and have the sculpture rotate and move about according to the rhythm of the song. Thus far, Shapes 3D seems like it could help generate the mass and structure of the mobile, with examples such as Example 1 and Example 2. The 3D physics-based movement/spinning would then need a library, which is TBD.
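As a rough 2D sketch of the structure I have in mind (a recursive mobile whose sway is driven by a stand-in “beat” value; Shapes 3D and a physics library would replace both the flat drawing and the fake rhythm):

```processing
void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  translate(width / 2, 40);
  float beat = sin(frameCount * 0.05);  // placeholder for the song's rhythm
  mobile(160, 3, beat);
}

// a bar with a weight or a smaller mobile hanging from each end
void mobile(float span, int depth, float beat) {
  if (depth == 0) {
    ellipse(0, 0, 12, 12);  // leaf weight
    return;
  }
  rotate(beat * 0.3);  // the whole level sways with the rhythm
  line(-span / 2, 0, span / 2, 0);
  for (int side = -1; side <= 1; side += 2) {
    pushMatrix();
    translate(side * span / 2, 0);
    line(0, 0, 0, 30);  // hanging string
    translate(0, 30);
    mobile(span * 0.5, depth - 1, beat);
    popMatrix();
  }
}
```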

Looking Outwards 4

by sarah @ 9:24 pm 20 February 2012

Matthias Pliesnig
http://matthias-studio.com/sit/sit.html

I am thinking of doing something with generative furniture design for my project, and Matthias Pliesnig’s work came to mind while I was doing some research. He doesn’t give too much information about his design process, but some of the works seem like they could lend themselves to generative design. (And even if they’re not generative works, I thought he was an interesting artist to share.) I think the way he treats space and a person’s interaction with his pieces adds interest to his work, and his craftsmanship is very impressive! Some quick background information: his studio is in Philadelphia, PA, and he has taught at Anderson Ranch in Colorado (where Golan taught too).

I recently saw this video from a friend who is taking Ali Momeni’s Digital Fabrication course this semester. Phil Cuttance creates molds and objects through a process contained on a mobile cart. The forms produced by it are interesting and mix digital media with hands-on craft.

Pipe Cleaners by Lars Berg was made with openFrameworks and ofxMarchingCubes. I thought Berg was able to create really interesting and tangible movement and texture for these creature-like things. However, the image appears blown out at times, and I wish the contrast and maybe the color of the piece were altered so that it would be easier to see the movement and details. It would be great to see these put in some kind of context.

Coplanar

http://coplanar.org/work/achilles.html

Achilles (2008) by Coplanar is visually interesting. It’s odd to see what appear to be steel-like bars melting and folding like fabric. This project reminds me of the “Curtain” on OpenProcessing by BlueThen.

Evan Sheehan | Generative Art Proposal | Darwinian Egg Drop

by Evan @ 4:08 pm

For my generative project I am planning to do an egg drop simulation. The egg drop, for those who may not know, is a staple activity in most public elementary school science curricula. The challenge is to design and build a container that will protect a raw egg when dropped from a specific height.

I recall doing this several times throughout my education as a child. Each time I zealously over-designed my solutions, and I don’t believe I ever successfully protected the eggs from harm. I intend to conquer this challenge once and for all by having a computer design my solutions for me.

Initial Concepts

Predator-prey simulation using flocking

I explored several different simulation ideas initially, all largely based on flocking algorithms. My original idea (bottom-right) was a simulation that used flocking algorithms to mimic an environment of predators and their prey. The idea was to track the populations of each species and compare them to the behavior predicted by the Lotka-Volterra equations.
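For reference, the standard Lotka-Volterra equations couple a prey population $x$ and a predator population $y$ through positive rate constants $\alpha$, $\beta$, $\gamma$, $\delta$:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$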

Evolve flocking creatures within an ecosystem

My second idea (bottom-left) was to create an evolutionary ecosystem in which the animals (the triangles) compete for food (the dots). Every X clock cycles, the fitness of each animal is evaluated and they breed, evolving new animals. Parameters that vary across creatures might include their maximum speed, propensity to wander, and sensitivity to food.
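A sketch of what the breeding step might look like (the genome fields and the 50/50 uniform crossover are placeholders, not a settled design):

```processing
class Genome {
  float maxSpeed, wanderWeight, foodSense;

  // uniform crossover: each trait comes from one parent at random
  Genome cross(Genome other) {
    Genome child = new Genome();
    child.maxSpeed     = random(1) < 0.5 ? maxSpeed     : other.maxSpeed;
    child.wanderWeight = random(1) < 0.5 ? wanderWeight : other.wanderWeight;
    child.foodSense    = random(1) < 0.5 ? foodSense    : other.foodSense;
    child.maxSpeed += random(-0.1, 0.1);  // small mutation keeps the population exploring
    return child;
  }
}
```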

Simulated dog fight

A third idea (top-right) was to create a dogfight simulation. Possibly, once an entire side of the battle had been defeated, you could evaluate the fitness of each ship and recombine them to create a new generation of ships, watching the battle evolve that way. Otherwise, you might just watch the ships chase each other around the screen and tune the parameters manually to see what different effects they have.

Evolutionary Egg Drop

At some point during all of this, I had the idea for the egg drop simulation. The idea of revisiting this design challenge from my childhood was so appealing that I immediately abandoned my desire to play with flocking algorithms in order to pursue it.

Evolving Egg Cartons

Containers can vary by rigidity, size, and wind resistance

One of the first things I began to consider is which qualities of a potential egg container could be made variable, such that a variety of containers can be generated and bred together. One common real-world solution is a parachute, so wind resistance is an obvious quality that can probably be mimicked easily. Another common solution is to wrap the egg in a lot of packing material to dissipate the force of impact. I can probably create a similar effect by suspending the egg inside the container with springs of varying length and rigidity.

Evaluate fitness to evolve not just a solution, but an efficient solution

It’s not enough to consider a solution fit if the egg survives. That could easily lead to boring solutions that just increase the wind resistance or springiness of the container until the egg survives. I want a variety of spring lengths, rigidities, and wind resistances among the solutions, so I think I’ll need to develop some measure of efficiency in addition to ensuring the survival of the egg. My hope is that the process will converge on local maxima contingent on the random initial conditions, rather than always evolving the same solution.

I can penalize high wind resistance by trying to minimize the amount of time it takes the egg to reach the ground. I will also attempt to assign some cost to overly long springs. Spring rigidity may not need special consideration in the fitness function: if the springs are too loose, the egg will bang into the ground; if they are too rigid, they will transmit the impact force directly to the egg and break it.
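A sketch of what such a fitness function might look like (the weights are arbitrary placeholders to be tuned):

```processing
// higher is fitter; a broken egg scores zero outright
float fitness(boolean eggSurvived, float fallTime, float totalSpringLength) {
  if (!eggSurvived) return 0;
  float timeCost   = 0.5 * fallTime;           // penalizes parachute-only designs
  float springCost = 0.1 * totalSpringLength;  // penalizes overly long springs
  return 1.0 / (1.0 + timeCost + springCost);
}
```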

Toxiclibs Implementation

I’m looking at Toxiclibs to implement this simulation. Toxiclibs gives me springs, gravity, and 2D meshes. My hope is that by subclassing some of these tools, I can access the data I need to evaluate a container’s fitness and breed new containers. By Thursday I hope to have at least a 2D mesh egg that breaks when dropped. If I can implement that much, I should have a sense of how much trouble I’m in on this project.
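A minimal Toxiclibs setup along these lines (assuming, for the sketch, that the container reduces to four corner particles with the egg hung between them on springs; real containers would be generated meshes):

```processing
import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

VerletPhysics2D physics;
VerletParticle2D egg;

void setup() {
  size(400, 400);
  physics = new VerletPhysics2D();
  physics.addBehavior(new GravityBehavior(new Vec2D(0, 0.2)));
  physics.setWorldBounds(new Rect(0, 0, width, height));
  egg = new VerletParticle2D(200, 100);
  physics.addParticle(egg);
  // four container corners, each tied to the egg with a spring
  float[][] corners = { {150, 50}, {250, 50}, {150, 150}, {250, 150} };
  for (float[] c : corners) {
    VerletParticle2D p = new VerletParticle2D(c[0], c[1]);
    physics.addParticle(p);
    physics.addSpring(new VerletSpring2D(p, egg, 70, 0.01));
  }
  // (the corner particles would also be connected to each other to form the shell)
}

void draw() {
  background(255);
  physics.update();  // step the simulation
  fill(0);
  ellipse(egg.x, egg.y, 12, 12);
}
```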

Looking Outwards 3

by sarah @ 9:38 pm 16 February 2012

Simon Katan created an interface that allows the user to alter and play with an environment that visualizes sound in real time. It’s a simple sketch, but I thought he achieved some interesting results. I really liked how he incorporated the way the objects affect and play off of one another in the later part of the video. I think this could be a fun installation piece for a children’s museum (or maybe a regular one), projected at a large scale on a wall that visitors could interact with.

http://www.puntoyrayafestival.com/pyrformances11_eng.php?pyrformerID=31

(Sorry, having trouble embedding this video; visit the link above to view it. I will keep working on getting it to work in the meantime.)

On the same theme of visualizing sound, Rikkert Brok and Maarten Halmans combined analog and digital techniques to produce a live performance for this year’s Punto y Raya, an abstract line-and-dot film/animation festival that takes place in Madrid. I actually like some of the animations that come out of the festival a bit more, but for the sake of generative/simulative work I thought I should post one of the performances.

Joel Lewis (who was one of the presenters at Art && Code last year), along with his partner Pete Hellicar, was commissioned by the Tate Britain to create the Turner Prize Twitter wall. Their project allows visitors to comment on the work within the show. These comments are then projected on a wall within the museum and stream in real time, so that other visitors can see each other’s opinions. This project borders strongly on datavis, but I thought it was worth including because of its potential to generate a more critical conversation about the work in a museum, with a larger audience than typically occurs in the moment while viewing the exhibition.

http://www.hellicarandlewis.com/2010/10/15/turner-prize-twitter-wall/

Looking outwards 2

by deren @ 11:59 am

http://www.openprocessing.org/visuals/?visualID=28848

This is neat, and I like the concept behind it, but I’m not sure it works best with logos. Maybe something more personal, like a name tag…

The Ascent

http://www.futurefeeder.com/2011/05/the-ascent/

This is really crazy; I’m not quite sure how it’s working or what is happening, though. I like the idea of using mind control to generate actions. I would maybe do more of an augmented-reality thing rather than a physical harness mechanism.

http://www.shanecooper.com/Feed/index.html
This is a really neat idea: “a garden that grows from the light of a television.” I think it would be really neat if people could have some sort of input and control over the amount of light on the screen.
