I dig the Star Wars reference, but using the Kinect to create a holographic image is something new. It seems like every other day a new use for this device appears. The video isn't a great representation of what this technology can do, though.
I plan on continuing to work with the ABB4400 robot in dFab. My final goal is live, interactive control of the machine. This may take the form of interactive fabrication, dancing with the robot, or some type of camera rig.
Inspiration
There have been a few projects and areas of research that have given me inspiration. Robotic surgery tools are an especially apt example of live, interactive control of robots that still maintains the precision and repeatability they are designed for. The ultimate goal of my project is to leverage these same properties of the robot through a gestural interface.
There is a very interesting design space here: the ability for these robots to become mobile and perform these tasks in different environments. My vision of the future of architecture is these robots running around, building and 3D printing spaces for us. Something like this…
Design Goal
The project will explore the relationship between the user's movement and gesture and the fabrication output of the robot. That is, the interpretation of the input will be used to work a material in a way that offers a unique and efficient relationship to the user; e.g., the user bends a flex sensor and the robot bends steel, or the user makes a "surface" by gesturing hands through the air and the robot mills an interpretation of that surface. Another idea is an additive process, like gluing things together based on user input, as in this example…
Technical Hurdles
TCP/IP open-socket communication is proving to be a bit tricky with ABB's RobotStudio software. I believe we can solve this problem, but there are some worries about making sure we don't let the super-powerful robot bang into a wall or something, because that would be costly and bad.
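For reference, here's a minimal sketch of what the PC side of that link could look like, assuming the controller runs a RAPID socket-server loop. The IP address, port, "x,y,z" message format, and clamp limits are all made-up placeholders, not ABB's API; the clamping is the cheap software guard against the wall-banging scenario.

```python
# Sketch of the PC side of the open-socket link. ROBOT_IP, ROBOT_PORT,
# and the "x,y,z" line format are hypothetical placeholders.
import socket

ROBOT_IP = "192.168.125.1"   # hypothetical controller address
ROBOT_PORT = 5000            # hypothetical port opened by the RAPID program

def send_target(sock, x, y, z):
    """Send one Cartesian target, clamped to a safe box so the arm can
    never be commanded outside its sandbox (made-up limits, in mm)."""
    x = max(200, min(x, 800))
    y = max(-400, min(y, 400))
    z = max(100, min(z, 900))
    sock.sendall(f"{x:.1f},{y:.1f},{z:.1f}\n".encode())

with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=5) as s:
    send_target(s, 500, 0, 400)
```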
Question
What are some interesting interfaces or interactions you can imagine with the robot? input // output?
There are a few constraints like speed, room size, safety, etc…
This is an interesting anti-pattern for my transit visualization. It’s a somewhat arbitrary mapping between sitemap information and a London-Tube-style map.
How transit-oriented is the Portland region? The mapping here is pretty straightforward (transit-friendliness to height) but compelling.
Cool project out of Columbia's graduate school of architecture. It maps the homes of people in New York prisons on a block-by-block basis. I want my project to have this sort of granularity.
I'm still tossing around ideas for my final project, but I'd like to do more experimentation with the Kinect. Specifically, I think it'd be fun to do some high-quality background subtraction and separate the user from the rest of the scene. I'd like to create a hack in which the user's body is distorted by a funhouse mirror, while the background of the scene remains entirely unaffected. Other tricks, such as pixelating the user's body or blurring it while keeping everything else intact, could also be fun. The basic idea seems manageable, and I think I'd have some time left over to polish it and add a number of features. I'd like to draw on the auto-calibration code I wrote for my previous Kinect hack so that it's easy to walk up and interact with the "circus mirror."
I've been searching for about an hour, and it doesn't look like anyone has done selective distortion of the RGB camera image from the Kinect. I'm thinking something like this:
Imagine how much fun those Koreans would be having if the entire scene looked normal except for their stretched friend. It’s crazy mirror 2.0.
I think background subtraction (and then subsequent filling) would be important for this sort of hack, and it looks like progress has been made on this in OpenFrameworks. The video below shows someone cutting themselves out of the Kinect depth image and then hiding everything else in the scene.
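The core of that cut-out step is just a depth threshold. Here's a rough sketch, assuming the libfreenect Python bindings (`sync_get_depth` / `sync_get_video`); the threshold value is arbitrary and would need calibrating for the actual space.

```python
# Depth-keyed background subtraction: keep RGB pixels whose depth reads
# nearer than a threshold, fill the rest from a stored background plate.
import numpy as np
import freenect

NEAR = 750  # raw 11-bit depth units; ~750 is roughly a metre (calibrate!)

depth, _ = freenect.sync_get_depth()   # (480, 640) raw depth
rgb, _ = freenect.sync_get_video()     # (480, 640, 3) uint8 RGB

background = rgb.copy()    # in practice, grab this once with the scene empty
mask = depth < NEAR        # per-pixel "user is here" mask
                           # (invalid pixels saturate high, so they
                           #  fall into the background)

composite = np.where(mask[..., None], rgb, background)
```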
To achieve the distortion of the user’s body, I’m hoping to do some low-level work in OpenGL. I’ve done some research in this area and it looks like using a framebuffer and some bump mapping might be a good approach. This article suggests using the camera image as a texture and then mapping it onto a bump mapped “mirror” plane:
Circus mirror and lens effects. Using a texture surface as the rendering target, render a scene (or a subset thereof) from the point of view of a mirror in your scene. Then use this rendered scene as the mirror’s texture, and use bump mapping to perturb the reflection/refraction according to the values in your bump map. This way the mirror could be bent and warped, like a funhouse mirror, to distort the view of the scene.
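Just to sanity-check the idea before diving into GL, here's roughly what the selective warp could look like on the CPU with OpenCV's `remap`; the sinusoidal "bump" and its parameters are arbitrary stand-ins for a real bump map.

```python
# CPU sketch of the funhouse warp: displace sampling coordinates by a
# sinusoidal wobble, but only inside the user mask, so the background
# stays undistorted.
import numpy as np
import cv2

def funhouse(rgb, mask, amp=25.0, freq=0.05):
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # horizontal wobble that varies with y, like a bent mirror
    wobble = amp * np.sin(freq * ys)
    map_x = np.where(mask, xs + wobble, xs).astype(np.float32)
    map_y = ys
    return cv2.remap(rgb, map_x, map_y, cv2.INTER_LINEAR)
```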
At any rate, we’ll see how it goes! I’d love to get some feedback on the idea. It seems like something I could get going pretty quickly, so I’m definitely looking for possible extensions / features that might make it more interesting!
A few things I’m thinking about, mainly about sound.
In the clip above, the image is dissected into its individual colours and rotated to show a distorted perspective. I think breaking the image down into individual balls and orbs might be a good idea, then giving them some sort of physics, producing a sound as the user moves around and the balls collide with each other in a spring model.
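As a sketch of the sound side of that (the spring model aside), collision impact speed could drive pitch and loudness. The MIDI-style ranges below are arbitrary choices, not from any particular library.

```python
# Collision-to-sound mapping: turn the relative speed of two colliding
# balls into a pitch and a loudness.
def collision_note(rel_speed, max_speed=10.0):
    t = min(rel_speed / max_speed, 1.0)
    pitch = 48 + int(t * 36)       # C3 for grazes, up to C6 for hard hits
    loudness = int(40 + t * 87)    # MIDI-velocity-style 40..127
    return pitch, loudness

print(collision_note(2.5))   # gentle bump -> low-ish, quiet note
```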
The video here shows a dancer dancing with a virtual actor. I'm thinking of using the Kinect to track a dancer's body and produce music along with the dance. In a sense, it's a juxtaposition of dancing to music and generating music with dance.
The last inspiration I had was the project where the artist tracks the movement of grass blades blown by the wind and produces sound. I'm thinking of creating a purely sound-based project: a soundscape that users can wander into and interact with. The idea is that the user is wading through blades of grass, and as the user pushes the blades around, they collide and create sound. It would be a project of pure sound, with no visuals.
For my final project, I'm currently thinking I want to adapt the dynamic landscape with the Kinect that I made with Paul Miller for the 2nd project, and probably create some sort of game on top of it.
Recompose is a Kinect project developed by the MIT Media Lab. It uses the depth camera, mounted above a table, to do gesture recognition on the user's hands in order to control a pin-based surface on the table. I think this is interesting because it's almost a reversal of the work Paul and I did, which I'd like to expand: modifying something physical rather than something virtual. These types of gestures might also be good to incorporate into a game to give the user more control.
Not quite a project, but something I’ll be looking over is this Guide to Meshes in Cinder. OpenFrameworks has no built-in mesh tools, so if Cinder has something to make the process easier, I may consider porting the code over in order to save myself some trouble.
This project, Dynamic Terrain, is another interesting reversal of our project. This work totally reverses things, modifying the physical through the virtual rather than the virtual through the physical.
These aren't new, but I'm trying to find more info on Reactables, as one of the directions I could go in would be incorporating special objects into the terrain generator that represent certain features or can make other modifications. A project like this can help guide me in thinking about how these objects relate to each other, and what variety and diversity of functions they might have.
Finally, I found this article on terrain reasoning for game AI. I'm thinking of a game where the player must guide or interact with groups of tiny creatures or people by modifying their landscape, so the information and ideas here could be extremely useful.
For my final project, one possible direction is to keep working on the algorhythm project and make a set of physical drum bots. Ideally I'd like to create about ten drum bots with different drum sticks and built-in algorithms. These drums might pile up, or form a chain, circle, tree, or whatever, and we'd see what kind of music we can get from them.
Some inspiration:
1. Yellow Drum Machine
This is a cute project. The drum machine has an IR sensor which, instead of making the robot avoid obstacles like most other robots do, leads the robot to objects so it can beat on them.
2. ABSOLUTE MACHINES
I saw this video at last week's What's On talk. Jeff Lieberman showed his project Absolut Machines. Triggered by a piece of impromptu music, this set of robotic machines replays and revises the pattern. By combining different types of bots, the final work turns out to be a piece of art.
3. muchosucko
This project is from Georgia Tech. The drum robot learns percussion beats from a human, and by applying a generative algorithm it makes more complicated but beautiful beats and plays them together with the drum performer.
I found this really weird video of people walking on a street, but colorized and background-subtracted into the scene.
I have been thinking about doing some “soul” visualizations of the observers in a live installation. Some possible scenarios:
A user walks up to a seemingly normal mirrored display of themselves. Then, a moment later, a "soul" of the same person, rendered more like a light outline, walks right to where they are and joins with them.
Other users' previous souls walk in, stand around, and approach the installation.
Here is the background subtraction camo example:
Presentations
My friend has an idea to use the Kinect to direct a live presentation. That gave me an idea of using the Kinect to speak to a fictitious large audience and try to get them riled up. The user would stand behind a podium and talk. Perhaps like the State of the Union? The user walks up to the podium and half the audience stands and claps. Say something with cadence and a partisan block stands? See this video of the interaction in Kinect Sports:
So there hasn't been much precedent for this, since contemporary knitting machines are ungodly expensive, and the older ones people have at home, generally the Brother models, are so unwieldy that changing stitches is more of a pain this way than by hand. But if I can figure out some way to make it work, I think knitting has ridiculous potential for generative/algorithmic garment making, since it's possible to create intense volume and pattern in one seamless piece of fabric simply through the mathematical output of a pattern. It would be excellent just to be able to "print" these creations on the spot, and to do more than just fair isle.
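For a taste of what "pattern from pure math" could mean here, a one-dimensional cellular automaton makes a serviceable knit/purl chart. The rule choice below is arbitrary; any row-to-row function would do.

```python
# Rule-90 cellular automaton rendered as a knit/purl chart
# ('-' = knit, 'x' = purl). Width, row count, and seed are arbitrary.
width, rows = 41, 16
row = [0] * width
row[width // 2] = 1            # seed a single purl in the middle
for _ in range(rows):
    print("".join("x" if c else "-" for c in row))
    # each new stitch is the XOR of its left and right neighbors (wrapping)
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```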
I sent off a couple of emails to hackpgh, but I'll try to stop by their actual office today or tomorrow and just ask them in person if I can borrow/use their machine.
Here's a pretty well-known knitting machine hack for printing out images in fair isle. This is awesome, but I was hoping to play more with volume and texture than color.
Computational Crochet
Sonya Baumel crocheted these crazy gloves based on bacteria placement on the skin.
User Interactive Particles
I also really enjoyed the work we did for the Kinect project, and would be interested in pursuing more complicated user-generated forms. These two short films by FIELD design are particularly lovely.
Generative Jewelry
I would also be interested in continuing my work from Project 4. I guess not really continuing, since I want to abandon flocking entirely and focus on getting the snakes or a different generative system up and running to create the meshes, to make some more aesthetically pleasing forms. Aside from snakes, I want to look into Voronoi patterns, like the butterflies on the barbarian blog.
When I was in grade school, I had a minor obsession with Pascal’s Triangle.
First 9 rows of Pascal's Triangle
Just to refresh your memory, Pascal's triangle is a pyramid of numbers in which each row is generated by summing adjacent pairs from the row above. It contains many patterns and fascinating attributes that would be very useful for a generative art project, such as binary row sums, number locating, hockey stick patterns, prime occurrences, magic 11s, polygonal numbers, and points on a circle.
I could use several of these attributes to do some visual effects in the design. Here are a few ideas:
Hockey Stick Patterns
Pascal Hockey Stick Patterns
By adding numbers along a diagonal, the sum appears at the number just below the diagonal's end, where the "stick" bends. I could do something where lightning bolts run down the hockey stick paths.
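A quick sketch verifying the hockey stick identity, which is what makes those paths easy to place programmatically: summing binomial coefficients down a diagonal, sum of C(i, r) for i = r..n equals C(n+1, r+1).

```python
# Hockey-stick identity check on Pascal's triangle.
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle as a list of binomial coefficients."""
    return [comb(n, k) for k in range(n + 1)]

n, r = 8, 2
stick = sum(comb(i, r) for i in range(r, n + 1))
assert stick == comb(n + 1, r + 1)   # 1+3+6+10+15+21+28 == 84
print(pascal_row(4))                  # [1, 4, 6, 4, 1]
```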
Polygonal Numbers
The occurrence of polygonal numbers could allow me to display 2D or quasi-3D polygons at varying intervals.
When I was thinking about the triangle, I always wondered whether it was possible to extend it into 3D space.
BB Gun
Carnival Star
I also have another idea: re-create a classic carnival game where the user shoots out a paper star with a BB gun and a fixed amount of ammo. I think I can model the paper star the way Igor Barinov did the Open Virtual Curtain, and let it fall apart from the BBs.
It looks like I could use the MSA Physics environment to simulate the BB gun.
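MSA Physics is an openFrameworks addon, so the real thing would live in C++. As a language-independent stand-in, though, the core "paper falls apart" logic might be as simple as knocking out grid cells within a pellet's radius; all the numbers here are made up.

```python
# Toy version of the BB-vs-paper idea: the star is a grid of "paper"
# cells, and each shot destroys every cell within the pellet's radius.
import math

PELLET_RADIUS = 3.0
paper = {(x, y) for x in range(40) for y in range(40)}   # intact cells

def shoot(hit_x, hit_y):
    """Remove all paper cells within PELLET_RADIUS of the hit point."""
    destroyed = {c for c in paper
                 if math.hypot(c[0] - hit_x, c[1] - hit_y) <= PELLET_RADIUS}
    paper.difference_update(destroyed)
    return len(destroyed)

print(shoot(20, 20), "cells blown out;", len(paper), "remain")
```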
I plan on continuing some of the work that began in the Marius Watz/MakerBot workshop a couple of weeks ago. This potentially means that the project will include Reaction/Diffusion+Camera interactions and/or a digital output for rapid prototyping or milling.
Surface detail is an incredible, elaborate, 3D generative fractal art piece set to music. The surface of the sphere constantly deforms into new and exotic textures.
Stanza is an artist who makes a lot of generative art from a variety of sources. This one is from his automaton series. I like it the most out of his work because a lot of generative art looks messy and unorganized despite all the rigorous math and computation that goes into it, but this series seems very crisp, with a clear structure behind it. In general, I like the use of color in his art, though this piece is not a prime example.
Quasimodo
I can’t get a picture to upload, but here is a project created by this guy that just generates a bunch of bezier curves that look really elegant and smooth. Kind of mesmerizing to watch, although I wish there were some color variation.
Klaus Sutner
The Farey sequence and Ford circles.
Iterating riffle shuffle.
One of my professors really enjoys visualizations of complex theoretical computer science concepts. Admittedly, they aren't the prettiest, because his goal is more to show the pattern than to make a work of art. But they are very technical, and it's pretty impressive to see these abstract concepts described in images. He has a gallery of these images here.
I'm a huge fan of Dave Bollinger's work "Density" (http://www.davebollinger.com/works/density/). He does a mix of generative and traditional art, blending computer programming with traditional mediums. He's done some generative works in a woodblock style, and I think they look pretty cool. Unfortunately, he doesn't document his process very much.
There’s a service online called DNA11 (www.dna11.com) that produces generative art from DNA. You submit a small DNA sample, and they run a PCR of it, colorize it, and enlarge it onto a large canvas. I think it’s a really cool form of generative art because it’s completely personalized.
I think it’d be fun to use this assignment to create an art piece I can hang in my apartment (my walls are looking pretty bare right now…) so I’ve been focusing on generative art that creates static images. I found the work of Marius Watz pretty interesting because he uses code to produce large wall-sized artworks that are visually intriguing and have a lot of originality from piece to piece, while retaining a sense of unity among the set. You can browse the collection of final images here: http://systemc.unlekker.net/showall.php?id=SystemC_050114_150004_04.
The Snail on the Slope is a generative animation inspired by a book by the same title.
The animation deals with a set of "humans" trying to conquer a "forest" that fights back against them. It was made in Processing, and it's actually quite beautiful; I love the faded aesthetic it begins to take on.
An iPhone app that allows the user to generate a kaleidoscope image. The images do not move, which is a pity; I kind of wish it gave you an animation to watch after you pick your 10 shapes and 6 colors, but the stills are rewarding in their own way. It's interesting to note that Davis creates images just like these as part of his artistic practice, and in giving others the tools to follow in his footsteps, he both invites people into his creative process and stirs up all sorts of issues about ownership. Is every image generated with his tool his, even if others use it? Is the creator of the tool always the original creator? Questions, questions.
I know this was shown in class, but I absolutely adore the collective Nervous System. I would kill to be able to do something similar: turn algorithms from nature and everyday life into wearable, touchable objects. The jewelry especially does a wonderful job of bringing the fragility of the original form along for the ride, but turning it into a precious metal transforms the intent of the object, and its function, and its meaning.
On the topic of big robots and generative art… Federico Diaz's Geometric Death Frequency-141 is an amazing use of technology for the production of generative form. Read the FastCo article here.
Last week I took part in a workshop called Interactive Parametrics. The premise was to look at strategies for parametric modeling using Processing, outputting MakerBot-ready models as STL files and printing them on site. The workshop was led by Marius Watz, StudioMode, and Bre Pettis of MakerBot. It was a great workshop and got me very excited about the potential for outputting 3D-printable objects right from code, skipping the typical workflow of 3D modeling in Rhino, Maya, or 3ds Max.
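It's surprising how little machinery "STL straight from code" actually needs: an ASCII STL file is just a list of triangular facets. Here's a minimal writer as a sketch (not the workshop's actual pipeline); the normals are left at zero, which slicers typically recompute from the vertex winding anyway.

```python
# Minimal ASCII STL writer: each triangle is a "facet" of three vertices.
def write_stl(path, triangles):
    with open(path, "w") as f:
        f.write("solid generated\n")
        for (a, b, c) in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for v in (a, b, c):
                f.write("      vertex %f %f %f\n" % v)
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid generated\n")

# One triangle in the XY plane; a real mesh is just many more facets.
write_stl("tri.stl", [((0, 0, 0), (10, 0, 0), (0, 10, 0))])
```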