Duncan Boehle – Project 3 Proposal

by duncan @ 5:12 am 22 February 2012

For my generation project, I plan to create a simulation for interactively growing, manipulating, and destroying plant-like organisms.

Throughout my gaming history, I’ve played countless games based around the element tetrad – the balance between air, water, earth, and fire. What I haven’t seen, however, is an organic, emergent simulation of these elements and how they react with each other in a way that still affects the game. The work of Ron Fedkiw and other graphics researchers has been very inspiring, and I could learn from some of their techniques for combining the mesh and fluid simulations that I’ve already programmed. Here’s one paper in particular that’s relevant, along with a couple of videos.

[scribd id=82414078 key=key-2ksb57goml7o9moscuee mode=list height=100px]

Those exact techniques seem a bit too advanced for the scope of this version of the project, unfortunately. As a first stab at the simulation, I want to stick to plant life. Besides the art in the games from my recent Looking Outwards post, I was also inspired by the mathematics of plant growth taught in a video series by Vi Hart:

[youtube=https://www.youtube.com/watch?v=ahXIMUkSXX0&w=600]
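
As a concrete starting point for the plant math, here is a minimal sketch of golden-angle phyllotaxis – the spiral pattern the video discusses – written in Processing for brevity, whatever framework I end up using. The 137.5° angle is the standard value; the spacing constant is just tuned by eye:

```
// Each new seed is placed at the golden angle (~137.5 degrees) from
// the previous one, at a radius proportional to sqrt(n). This
// reproduces the sunflower-style spiral from the video.
int n = 0;

void setup() {
  size(600, 600);
  background(255);
  noStroke();
  fill(30, 120, 40);
}

void draw() {
  float goldenAngle = radians(137.5);
  float angle = n * goldenAngle;
  float radius = 4 * sqrt(n);  // spacing constant chosen by eye
  float x = width / 2 + radius * cos(angle);
  float y = height / 2 + radius * sin(angle);
  ellipse(x, y, 6, 6);
  n++;
}
```
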
The ideal interface I’m imagining for this project would use a Kinect sensor and have the player’s hands directly guide the growth of the plants. At first, I was hoping to play with the duality between earth and fire – perhaps one hand could grow plants while the other manipulated fire – but I’m not sure this is either feasible or conceptually cohesive. I think it would make more sense to have only the ability to generate new life and accelerate the death of old life, and to experiment more with the phenomenon of aging and life cycles rather than just outright destruction. Perhaps I could add more elements later; for example, to see how water can promote growth, drown life, or douse fire, which destroys plants beyond any chance of rejuvenation.

In order to save time for polishing aesthetics and to make the interface more accessible, I plan to make everything in two dimensions, so I wouldn’t use something like Unity for this project. Either OpenFrameworks or Cinder seems appropriate; OFX already has some decent Box2D support along with Kinect support, and Cinder seems to link up nicely with existing C++ libraries. But since I’ve never used them before, I’m very tempted to stick with what I know and use something like XNA with Microsoft’s Kinect SDK. Theoretically I could use Processing, since 2D drawing is dead simple, it has plenty of Kinect and physics support, and it’s nice to be able to share things online. But if I ever wanted to extend the demo with more elements, any grid-based fluid simulation or advanced GPU rendering wouldn’t be possible.

Nir Rachmel | Project 3 + Queue Simulation (Proposal)

by nir @ 2:01 am

Standing in line – What if…?

Following the lecture we had on Feb 14th, I had in mind one of the flocking algorithms, which simulates a crowd entering through a small crack in a wall. It inspired me to think about simulating a crowd standing in line for a ticket booth, or even better, at the grocery store.

Here’s the thing: each time you get to the cashiers, you choose which line to stand in. My simulation will emulate several lines, and the user can select the one he thinks will get him out fastest. The simulation will then run all lines in parallel, and the user can watch his chosen spot as well as all the alternatives he didn’t choose.

Each person in line will be represented by a colorful circle and will have several properties, such as the number of items he is carrying and the “complexity” of each item, which affects the time it takes to process that item at the register (loose fruit, for example).

I want to add some more parameters that will make the simulation interesting and closer to real life, such as the following (a rough sketch of the core timing model appears after the list):

1. A random event that delays the line (a product without a barcode or price, or the need for manager approval to correct a mistake).
2. A customer lets another customer cut ahead of him in line (when the other is running late, for example).
3. A customer forgot to pick up an item, and thus loses his spot.
4. The cashiers’ processing speeds will also vary.
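
Here is that rough Processing-style sketch of the core timing model – every class name and constant is a placeholder, just to illustrate the idea:

```
// Rough timing model: each shopper has an item count and an average
// "complexity" per item; each line has a cashier speed. All names
// and constants are placeholders.
class Shopper {
  int numItems;
  float avgComplexity;  // e.g. loose produce > barcoded goods

  Shopper(int n, float c) {
    numItems = n;
    avgComplexity = c;
  }

  float checkoutTime(float cashierSpeed) {
    return numItems * avgComplexity / cashierSpeed;
  }
}

class Line {
  ArrayList<Shopper> queue = new ArrayList<Shopper>();
  float cashierSpeed;  // items per second, varies per cashier

  Line(float speed) {
    cashierSpeed = speed;
  }

  float totalWait() {
    float t = 0;
    for (Shopper s : queue) t += s.checkoutTime(cashierSpeed);
    // occasional random delay: missing barcode, manager approval...
    if (random(1) < 0.05) t += random(20, 60);
    return t;
  }
}

void setup() {
  Line line = new Line(0.8);
  line.queue.add(new Shopper(12, 1.5));
  line.queue.add(new Shopper(3, 1.0));
  println("expected wait: " + line.totalWait() + " seconds");
}
```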

A big timer on the screen will show the user the elapsed time, and the whole thing can be thought of as a game in which the user does his best to always choose the fastest line. He can earn a score according to his choices. The game aspect of this simulation needs further thought.

As for libraries, I still need to explore more and decide exactly how the app will look. In the meantime, here are some sketches:

Joe Medwid – Project 3 Proposal

by Joe @ 10:12 pm 21 February 2012

Project Proposal – EVOLUTION!


pokemans

Although not quite as grand as a computer-generated cityscape or as geometrically bizarre as an algorithmically modified Doric column, the various creatures, monsters, and critters we looked at in our exploration really resonated with me. It’s impossible to mention “genetic algorithms” or “evolving forms” without my mind immediately jumping to those little imps that have permeated popular culture over the last 15 years: Pokemon.

Thumbs

When viewing the generative Nokia blobs, though, a second childhood memory surfaced. It’s a common practice in the artistic community to do a number of quick thumbnail silhouettes in an attempt to get as many ideas on the page as quickly as possible before choosing which to develop. Although it would be bordering on sacrilege to take this extremely loose creative process and relegate it to the cold mechanical guts of a machine, there are some interesting considerations within the realm of evolution, inheritance, genetic algorithms and morphology.

evolution

During my Looking Outwards, I stumbled upon these little guys – creatures generated by randomly combining body parts, each with associated attributes. Tentacles, for example, make a creature more aggressive, while a larger body makes it more durable but sluggish. They can then perform an approximation of a battle, pursuing each other according to their morphological programming.

What I’m proposing is, ideally, some combination of the three preceding images: a program that can, at the very least, build a creature either from a predetermined kit of parts or from abstract geometry. The next step would be to enable that creature to “evolve,” enhancing various features of its physiology, like a Pokemon. Time permitting, there would also be some sort of genetic inheritance tied to the data associated with their ultimate morphology. I’m really hoping to at least get that first one done!
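
To gauge feasibility, here is a minimal Processing sketch of the kit-of-parts genome idea – every part name and stat value below is invented for illustration, not taken from the creatures pictured:

```
// Hedged sketch of the kit-of-parts genome: a creature is a list of
// part indices, and each part contributes to a couple of stats.
String[] PARTS      = { "tentacle", "bigBody", "wing", "claw" };
float[]  AGGRESSION = { 0.8, 0.1, 0.3, 0.6 };
float[]  DURABILITY = { 0.2, 0.9, 0.1, 0.4 };

class Creature {
  int[] genes = new int[4];  // four body-part slots

  Creature() {
    for (int i = 0; i < genes.length; i++) {
      genes[i] = int(random(PARTS.length));
    }
  }

  float aggression() {
    float a = 0;
    for (int g : genes) a += AGGRESSION[g];
    return a;
  }

  // "Evolve": copy the parent and randomly swap one part
  Creature mutate() {
    Creature child = new Creature();
    child.genes = genes.clone();
    child.genes[int(random(genes.length))] = int(random(PARTS.length));
    return child;
  }
}

void setup() {
  Creature parent = new Creature();
  Creature child = parent.mutate();
  println("parent aggression: " + parent.aggression());
  println("child aggression:  " + child.aggression());
}
```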

To this end, here are a few of the resources I’ve scoped out…
Genetic Algorithm Overview
Genetic Algorithm Java Library
Geomerative, a basic geometry library for Processing
Toxiclibs, specifically Toxi.geom
Aaaand some more discussion of genetic modeling.

Project 3: Generative designs for tables and chairs and hybrid combinations of the two

by sarah @ 10:01 pm

My idea for this project comes from a desire to do some woodworking. I want to “cross breed” prominent modern chair and table designs to create a new hybrid/mutated design, which I would then like to build at a later date. I am planning on using the interactive-selection variation of the genetic algorithm from Dan Shiffman’s book in order to manually determine the “fitness” of the designs by my own interest.
One problem I am still considering is whether or not these designs should be modeled in 3D for the final product. The source imagery I am pulling from is 2D, and I’m trying to weigh the options against what’s possible in the time given. Ideally I would really like to have 3D designs to genetically evolve and mutate.

Code example from Dan Shiffman’s book, The Nature of Code, Chapter 9, page 36.
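
Since that page isn’t reproduced here, below is a minimal Processing sketch of the general interactive-selection loop in the spirit of that chapter – my own simplified paraphrase, not Shiffman’s code. Clicking a design raises its fitness; pressing any key breeds the next generation. The two “design parameters” (leg height, top width) are stand-ins for a real chair/table genome:

```
// Interactive selection in miniature. All parameter ranges and
// mutation amounts are arbitrary.
int N = 8;
float[][] population = new float[N][2];  // [legHeight, topWidth]
float[] fitness = new float[N];

void setup() {
  size(800, 200);
  for (int i = 0; i < N; i++) {
    population[i][0] = random(20, 80);
    population[i][1] = random(40, 90);
  }
}

void draw() {
  background(255);
  stroke(0);
  for (int i = 0; i < N; i++) {
    float cx = i * 100 + 50;
    float h = population[i][0];
    float w = population[i][1];
    // a crude "table": a top and two legs
    line(cx - w / 2, 150 - h, cx + w / 2, 150 - h);
    line(cx - w / 2, 150 - h, cx - w / 2, 150);
    line(cx + w / 2, 150 - h, cx + w / 2, 150);
  }
}

void mousePressed() {
  fitness[constrain(mouseX / 100, 0, N - 1)] += 1;  // "like" a design
}

void keyPressed() {
  // breed: pick parents weighted by fitness, average and mutate
  float[][] next = new float[N][2];
  for (int i = 0; i < N; i++) {
    float[] a = population[pickParent()];
    float[] b = population[pickParent()];
    next[i][0] = (a[0] + b[0]) / 2 + random(-5, 5);
    next[i][1] = (a[1] + b[1]) / 2 + random(-5, 5);
  }
  population = next;
  fitness = new float[N];
}

int pickParent() {
  float total = 0;
  for (float f : fitness) total += f;
  if (total == 0) return int(random(N));  // no clicks yet: uniform
  float r = random(total);
  for (int i = 0; i < N; i++) {
    r -= fitness[i];
    if (r <= 0) return i;
  }
  return N - 1;
}
```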

Some images I am planning to use are:

Sam Lavery – Project 3 Proposal

by sam @ 8:16 pm

For this project I really want to produce something that is both beautiful and interesting. My interest in urban planning and design has exposed me a little to the fairly novel field of parametric urban design. I’m definitely not sold on the idea that a computer (or a person for that matter) can centrally plan a successful city, but the technology available creates some interesting opportunities for experimentation.

My current plan is to use ESRI CityEngine to model several different versions of Pittsburgh, changing the appearance of the city by applying rules from the most famous and infamous urban design theories. Right now I imagine there will be a dense, low-rise, small-block, a-la-Jane-Jacobs city; a city composed of superblocks, towering modern buildings, and vast expanses of grass and parking lots; and perhaps some kind of futuristic or alien-looking city.

I have a shapefile of Pittsburgh’s topography that I will use as a base for my 3D models. From there I will write logic that will dictate how the streets are laid out and how the resulting lots are filled. Unfortunately, CityEngine is a VERY expensive program and the trial version won’t export the model to any file type that I could use to make nice renderings. Hopefully I can find someone with a full version of the program or some other method…

Varvara Toulkeridou – Generate – Proposal

by varvara @ 6:52 pm

In this project, I would like to experiment with form generation via a Braitenberg vehicles simulation.

The concept of Braitenberg vehicles was developed by the neuroanatomist Valentino Braitenberg in his book “Vehicles: Experiments in Synthetic Psychology” (full reference: Braitenberg, Valentino. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA, 1984).

What excites me about this concept is how simple behaviors on the micro level can result in the emergence of more complex behaviors on the macro level.

—————————————————————————————————————————————

Below is some precedent generative artwork using the concept of Braitenberg vehicles:
Reas, Tissue Software, 2002
In Vehicles, Braitenberg defines a series of 13 conceptual constructions by gradually building more complex behavior with the addition of more machinery. In the Tissue software, Reas uses machines analogous to Braitenberg’s Vehicle 4. Each machine has two software sensors to detect stimuli in the environment and two software actuators to move; the relationships between the sensors and actuators determine the specific behavior for each machine.
Each line represents the path of each machine’s movement as it responds to stimuli in its environment. People interact with the software by positioning the stimuli on the screen. Through exploring different positions of the stimuli, an understanding of the total system emerges from the subtle relations between the simple input and the resulting fluid visual output.

Yanni Loukissas, Shadow constructors, 2004
In this project, Braitenberg vehicles move over a 2D image map, collecting information about light and dark spots (brightness levels). This information is used to construct forms in 3D, either trails or surfaces.
What I find interesting about this project is that information from the 3D form is projected back onto the source image map; for example, the constructed surfaces cast shadows on it. This results in a feedback loop which augments the behavior of the vehicles.

—————————————————————————————————————————————

I would like to implement a Braitenberg vehicles simulation in which the vehicles move in 3D space and their positions correspond to the control vertices of a surface. This way, while moving through space and reacting to various stimuli, the vehicles will generate surfaces. By linking together groups of vehicles, each group with a different set of behaviors, I expect different surfaces to be generated. I have not yet decided what the stimulus will be, but I will try to have the evolving surfaces contribute to the stimulus, so that a feedback loop augments the behavior of the vehicles. I am also thinking of constraining how far each vehicle can move from the rest of its group by linking them with springs.
As far as libraries are concerned, I will start with Toxiclibs for the geometry and PeasyCam to navigate in space.
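
To ground the idea, here is a minimal 2D Processing sketch of a single Braitenberg-style vehicle (roughly Vehicle 2b: crossed sensor–motor wiring, so it steers toward the stimulus – here, the mouse). The constants are illustrative; the 3D version would apply the same sensor-to-actuator mapping to vehicles whose positions drive the control vertices of a surface:

```
// One stimulus-seeking Braitenberg-style vehicle: two sensors offset
// left and right of the heading, wired crosswise to the opposite
// "wheels", so the vehicle turns toward the stimulus (the mouse).
float x, y, heading;

void setup() {
  size(600, 600);
  x = width / 2;
  y = height / 2;
}

void draw() {
  background(255);
  // sensor positions, offset from the heading
  float sensorAngle = 0.5;
  float lx = x + 15 * cos(heading - sensorAngle);
  float ly = y + 15 * sin(heading - sensorAngle);
  float rx = x + 15 * cos(heading + sensorAngle);
  float ry = y + 15 * sin(heading + sensorAngle);
  // stimulus intensity falls off with distance
  float leftIn  = 1.0 / (1 + dist(lx, ly, mouseX, mouseY));
  float rightIn = 1.0 / (1 + dist(rx, ry, mouseX, mouseY));
  // crossed wiring: each sensor drives the opposite wheel, so the
  // side nearer the stimulus pushes the vehicle toward it
  float leftWheel  = 200 * rightIn;
  float rightWheel = 200 * leftIn;
  heading += constrain((leftWheel - rightWheel) * 0.05, -0.3, 0.3);
  float speed = constrain(1 + (leftWheel + rightWheel) * 0.5, 0, 5);
  x += speed * cos(heading);
  y += speed * sin(heading);
  // draw body and heading
  fill(0);
  ellipse(x, y, 10, 10);
  line(x, y, x + 15 * cos(heading), y + 15 * sin(heading));
}
```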


John Brieger – Project 3 Update

by John Brieger @ 3:57 pm

Since Project 3 asks us to come up with a way to generate form, I began my concepting by looking at how I generate form myself. I do a lot of woodworking and a lot of cooking, so I started looking at how I could generate form in those contexts. Without some complicated Rhino scripting to drive a CNC router, algorithmic woodworking seemed like a no-go in our timeframe, so I focused on food. Initially, I wanted to build some sort of robotic cooking tool, but again: time issues. Golan encouraged me to do something with Markov chain text synthesis and recipes, so I began to look at ways to generate recipes algorithmically.

The difficult part of recipe generation is that the associations created by ingredient lists and titles REALLY mess up Markov synthesis. I started with a Belgian folk recipe book from Project Gutenberg, edited out the intro and ending, then wrote a quick script to strip out the titles. Running that through the Markov synthesizer gave me a very unique recipe for a cod stew with raspberries, so I knew I was on the right track.
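
For the curious, the core of the synthesis is tiny. Here is a minimal order-2, word-level Markov sketch in Processing – a simplified stand-in for my actual script, with “corpus.txt” as a placeholder for the stripped plaintext:

```
import java.util.HashMap;

// Order-2, word-level Markov synthesis: map each pair of consecutive
// words to the words that followed that pair in the source text,
// then walk the chain from a random starting pair.
void setup() {
  String[] words = split(join(loadStrings("corpus.txt"), " "), " ");
  HashMap<String, ArrayList<String>> chain = new HashMap<String, ArrayList<String>>();
  for (int i = 0; i < words.length - 2; i++) {
    String key = words[i] + " " + words[i + 1];
    if (!chain.containsKey(key)) chain.put(key, new ArrayList<String>());
    chain.get(key).add(words[i + 2]);
  }
  int start = int(random(words.length - 2));
  String w1 = words[start], w2 = words[start + 1];
  StringBuilder out = new StringBuilder(w1 + " " + w2);
  for (int i = 0; i < 150; i++) {
    ArrayList<String> next = chain.get(w1 + " " + w2);
    if (next == null || next.isEmpty()) break;  // dead end
    String w3 = next.get(int(random(next.size())));
    out.append(" ").append(w3);
    w1 = w2;
    w2 = w3;
  }
  println(out.toString());
}
```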

I’ve decided for this project to create a cookbook of 20 or so algorithmically generated recipes, tentatively entitled “Edible Algorithms”. I felt that it really wasn’t enough to just generate the text of the recipes (the work for which essentially involves editing a plaintext and running it through a very simple algorithm). To really get at the heart of the project, you have to cook them.

This weekend, I’ll be cooking 8-10 of the recipes I’ve generated and documenting them with photos (which will of course be in the cookbook). I’m finishing my plaintext editing today, and should hopefully have all my recipes generated by tonight. Then I’ll have to reverse-engineer a list of ingredients out of the recipes and go shopping. I’m also still working on a way to generate titles for each recipe (I’m thinking a word-frequency count: most common adjective + most common noun for each recipe).

I’m pretty excited (and you should be terrified given that I’m probably bringing in some food Thursday).

-John

A note about my plaintext:
I pirated some scans of “Mastering the Art of French Cooking”, “Joy of Cooking”, and “The Silver Palate”, which I consider to be seminal works in American cuisine. I ran them through OCR, and have been editing them by writing simple regex-based Perl scripts to strip out things like page numbers, recipe titles, chapter names, etc.

Sample Recipe I generated last night:
Beat a tablespoon of sugar is whipped into them near the end of which time the meat should be 3 to 3 1/2 FILET STEAKS 297 inches in diameter and buttered on one side of the bird from the neck to the tail, to expose the tender, moist flesh. Gradually make the cut shallower w1til you come up to the rim all around. Set in middle level of preheated oven. Turn heat down to 375· Do not open oven door for 20 minutes. Drain thoroughly, and proceed with the recipe. Blanquette d Agneau Delicious lamburgers may be made like beef stew, and suggestions are listed after it. Savarin Chantilly Savarin with Whipped Cream The preceding savarin is a model for other stews. You may, for instance, omit the green beans, peas, Brussels sprouts, baked tomatoes, or a garniture of sauteed mushrooms, braised onions, and carrots, or with buttered green peas and beans into the boiling salted water. Bring the water to the thread stage 230 degrees. Measure out all the sauteing fat. Pour the sauce over the steaks and serve. rated, washed, drained, and dried A shallow roasting pan containing a rack Preheat oven to 400 degrees. Spread the frangipane in the pastry shell. Arrange a design of wedges to fit the bottom of the pan, forming an omelette shape. A simpleminded but perfect way to master the movement is to practice outdoors with half

UPDATE TO MY UPDATE: Started cooking this first recipe. Since I can’t make a 24-foot steak, I decided I would take a bit of creative license and use 2.97in medallions instead.

Photo 1: “3 to 3 1/2 FILET STEAKS 2.97in in diameter” with some meat typography from trimming “the cut shallower until you come up to the rim all around”

Photo 2: Completed dish (in a frangipane bed, with onions, carrots, peas, and Brussels sprouts, garnished with sauteed mushrooms).

It terrifies me that this looks good. As for taste, the frangipane sauce was actually delicious with the meat (which cooked to a medium-rare at 20 minutes and 375 degrees). It did NOT mesh so well with the vegetables, which were less than impressive.

Billy Keyes – Project 3 Proposal

by Billy @ 10:58 am

Initial Ideas

Inspired by some of the projects in my last Looking Outwards, I wanted to do something that would produce nice-looking images with an emphasis on color. Both ideas I had for achieving this involved things that traced out colored paths in the image. The first involved little creatures that behaved using some combination of Craig Reynolds’ steering behaviors and color-based reproduction and grouping rules; as they moved around, they would leave color trails. The second idea involved growing colored paths for dots wandering around the space, as if steps were appearing out of nothing. As the dots moved farther away from a path segment, the segment would fade out.

In talking about these ideas with other people in the class, I decided that it would be very difficult to get the rules tuned to produce results that looked better than scribbling on paper with crayons. Both of the projects I was inspired by had a well-defined structure, which I was lacking here.

Around the same time, a completely different idea I had discarded early on started to seem possible and interesting.

Growing Light-Responsive Buildings

I know nothing about architecture, but I’ve always enjoyed structures that make excellent use of natural light. So I thought it would be interesting to try growing buildings, like plants, in response to light sources. At minimum, I hope to create some interesting shapes that are inspired by nature without being natural. At best, I’d really like to create actual buildings (even if they could never be built), with windows, and show how light exists in the generated spaces. At best-est, I’d like to 3D print some of the resulting structures, but I don’t think there will be time for this.

The project will likely be built in Processing, using Toxiclibs to handle the meshes and computational geometry, although I’m also considering scripting Blender. It mostly depends on whether my approach involves generating all the geometry or automating traditional modelling tools (extrusion and subdivision). Regardless of where the models are produced, I will likely use Blender to create the final renderings.

Currently, I’m trying to determine the best way to approach growing the models. I hope to have at least one method working by Thursday, and hopefully have a second started so I can compare them. Included below are some (mostly unhelpful) sketches of the methods I’ve thought of so far.
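
In the same spirit, here is a very rough Processing sketch of one possible growth rule: treat the building as a chain of cells and bias each new cell’s direction toward a fixed light position. The bias, cell size, and light position are all arbitrary:

```
// Grow a "building" as a chain of cells: every few frames, pick an
// existing cell and add a neighbor in a direction biased toward a
// fixed light position.
ArrayList<PVector> cells = new ArrayList<PVector>();
PVector light = new PVector(150, -250, 100);

void setup() {
  size(600, 600, P3D);
  cells.add(new PVector(0, 0, 0));  // seed cell at the origin
}

void draw() {
  background(255);
  translate(width / 2, height * 0.8, -100);

  if (frameCount % 5 == 0 && cells.size() < 300) {
    PVector base = cells.get(int(random(cells.size())));
    PVector toLight = PVector.sub(light, base);
    toLight.normalize();
    PVector dir = new PVector(random(-1, 1), random(-1, 1), random(-1, 1));
    dir.normalize();
    // blend 40% random wander with 60% pull toward the light
    dir.mult(0.4);
    toLight.mult(0.6);
    dir.add(toLight);
    dir.normalize();
    dir.mult(12);  // cell size
    cells.add(PVector.add(base, dir));
  }

  stroke(0);
  fill(200, 220, 255);
  for (PVector c : cells) {
    pushMatrix();
    translate(c.x, c.y, c.z);
    box(12);
    popMatrix();
  }
}
```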

Project 3 – Flocking Lights

by blase @ 10:35 am

For the generative project, I’m going to use a flocking algorithm to generatively change the colors on a strand of lights. Imagine a strand of 25 LED lights, where each light’s color is individually controllable. Now, to decide the color of each light, imagine a flock of 25 birds in which each bird is mapped to a light. This flock of birds flies over either a color rectangle or a color wheel, and the color under each bird specifies the color of its corresponding light.
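
The mapping itself is easy to sketch. In the hypothetical Processing version below, 25 random walkers stand in for the flock (Shiffman’s flocking library would supply real boid positions the same way), and colors[] holds what would be pushed out to the LED strand:

```
// 25 agents wander over an HSB color field; the field color under
// each agent becomes the color of one light. colors[] is what would
// go to the LEDs instead of the preview row at the bottom.
int N = 25;
PVector[] birds = new PVector[N];
color[] colors = new color[N];

void setup() {
  size(500, 400);
  colorMode(HSB, width, 100, 100);
  for (int i = 0; i < N; i++) {
    birds[i] = new PVector(random(width), random(height - 50));
  }
}

void draw() {
  // the color field: hue varies from left to right
  for (int x = 0; x < width; x++) {
    stroke(x, 80, 90);
    line(x, 0, x, height - 50);
  }
  for (int i = 0; i < N; i++) {
    // wander, staying inside the field
    birds[i].x = constrain(birds[i].x + random(-3, 3), 0, width - 1);
    birds[i].y = constrain(birds[i].y + random(-3, 3), 0, height - 51);
    colors[i] = color(birds[i].x, 80, 90);  // sample under the bird
    noStroke();
    fill(0);
    ellipse(birds[i].x, birds[i].y, 6, 6);
    // preview the "strand" along the bottom
    stroke(0);
    fill(colors[i]);
    rect(i * (width / N), height - 40, width / N, 30);
  }
}
```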


I’m planning to use these lights, which I have already but haven’t had the chance to mess around with:
Adafruit Lights

lights

I’m deciding whether to give Processing one more chance to prove itself, or to use Javascript/HTML5. If I use Processing, I will use:

* This flocking library by Daniel Shiffman, implementing Craig Reynolds’ Boids: Flocking Library

* Perhaps this HSB color wheel: Processing HSB Color Wheel

If I use HTML5/Javascript, I’m debating using WebSockets for communication, implementing my own flocking algorithm (using HTML5 Canvas to draw), and cannibalizing this color picker code: JS Color Picker

Kelsey Lee – Project 3 Idea

by kelsey @ 6:37 am

My original concept was about using music to generate a visual representation of itself. I like the idea of creating something that appeals to more than just the eyes, and the addition of audio seemed like it had a lot of potential.

I remember this one visualization of Bach’s Cello Suite No. 1 – Prelude:

[vimeo=https://vimeo.com/31179423]

And while it doesn’t appear to be a generative piece of art, the appeal of using what is seen to reinterpret what is heard became the focus of my project idea.

Recently I saw a mobile by Alexander Calder. Every time I see one of his works, I’m struck by how interesting it is to both view and contemplate: the balancing of the weights, the floating in mid-air, always seeming to want to be more lively.

My idea now is to generate mobile-like sculptures based on some musical signature and have the sculpture rotate and move about according to the rhythm of the song. Thus far, Shapes 3D seems like it could help generate the mass and structure of the mobile, with examples such as Example 1 and Example 2. The library for the 3D physics-based movement/spinning is TBD.
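
As one possible starting point for the audio side, Processing’s Minim library can supply a live amplitude signal to drive the mobile’s motion. A minimal sketch, with “song.mp3” as a placeholder file and an arbitrary mapping constant:

```
import ddf.minim.*;

// Drive a single mobile joint with the music's amplitude: louder
// passages spin the balance arm faster.
Minim minim;
AudioPlayer player;
float angle = 0;

void setup() {
  size(600, 400);
  minim = new Minim(this);
  player = minim.loadFile("song.mp3");
  player.play();
}

void draw() {
  background(255);
  // mix.level() is the current output amplitude, roughly 0..1
  angle += player.mix.level() * 0.3;
  translate(width / 2, 50);
  stroke(0);
  line(0, 0, 0, 60);    // hanging wire
  translate(0, 60);
  rotate(angle);
  line(-80, 0, 80, 0);  // balance arm
  fill(200, 60, 60);
  ellipse(-80, 0, 20, 20);
  fill(60, 60, 200);
  ellipse(80, 0, 30, 30);
}
```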