Alex + Mahvish | Thumbnail and Summary

by a.wolfe @ 6:52 am 3 May 2012

For our final project, we completed a series of studies experimenting with soft origami in the context of creating a wearable. We developed several methods of creating tessellations that cut normal folding time in half and were simple to produce in bulk, including scoring with the laser cutter, creating stencils to use the fabric itself as a flexible hinge, and heat-setting synthetics between origami molds. We also examined the folds themselves, writing scripts to generate crease patterns that focused either on kinetic properties or on controlling the curve and shape of the final form.
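
Our crease-pattern scripts were written in Processing; purely as an illustration of the idea, a minimal C++ sketch along the same lines might write a waterbomb-style crease grid straight to SVG (the grid size, cell size, and mountain/valley assignments below are all assumptions, not our actual generator):

    // Minimal crease-pattern generator sketch: grid lines drawn as mountain
    // folds (solid), cell diagonals as valley folds (dashed).
    #include <fstream>

    int main() {
        const int cols = 8, rows = 6;  // number of waterbomb cells (assumed)
        const double cell = 60.0;      // cell size in SVG units (assumed)
        std::ofstream svg("waterbomb.svg");
        svg << "<svg xmlns='http://www.w3.org/2000/svg' width='" << cols * cell
            << "' height='" << rows * cell << "'>\n";
        for (int r = 0; r < rows; ++r) {
            for (int c = 0; c < cols; ++c) {
                double x = c * cell, y = r * cell;
                // Mountain folds: the cell border (solid red).
                svg << "<rect x='" << x << "' y='" << y << "' width='" << cell
                    << "' height='" << cell << "' fill='none' stroke='red'/>\n";
                // Valley folds: both diagonals (dashed blue).
                svg << "<line x1='" << x << "' y1='" << y << "' x2='" << x + cell
                    << "' y2='" << y + cell << "' stroke='blue' stroke-dasharray='4 4'/>\n";
                svg << "<line x1='" << x + cell << "' y1='" << y << "' x2='" << x
                    << "' y2='" << y + cell << "' stroke='blue' stroke-dasharray='4 4'/>\n";
            }
        }
        svg << "</svg>\n";
    }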

These studies culminated in a dress that took advantage of the innate kinetic properties of the waterbomb fold to display global movement over the entire skirt structure with a relatively lightweight mechanical system. The dress moves in tandem with a breath sensor, mimicking the expanding/contracting movements of the wearer.
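
The breath-to-servo mapping is simple; an Arduino-style sketch of it might look like the following (pin numbers and calibration values are assumptions, not the dress's actual firmware):

    // Read a breath/stretch sensor as an analog value, smooth it, and map it
    // onto the servo angle that expands or contracts the skirt.
    #include <Servo.h>

    const int SENSOR_PIN = A0;  // breath sensor (assumed wiring)
    const int SERVO_PIN = 9;    // skirt servo (assumed wiring)
    Servo skirt;
    int smoothed = 0;

    void setup() {
      skirt.attach(SERVO_PIN);
    }

    void loop() {
      int raw = analogRead(SENSOR_PIN);             // 0..1023
      smoothed = (smoothed * 3 + raw) / 4;          // light smoothing
      int angle = map(smoothed, 200, 800, 0, 180);  // calibration assumed
      skirt.write(constrain(angle, 0, 180));
      delay(20);
    }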

Alex Wolfe + Mahvish Nagda | Final Project

by a.wolfe @ 8:00 am 1 May 2012

For our final, Mahvish and I collaborated on a kinetic wearable. We wanted to focus on creating a lightweight system that allowed for garment-wide kinetic movement, a feature lacking from most wearables today. To accomplish this we used the LilyPad Arduino and Processing to develop generative origami.

KelseyLee – Project 5

by kelsey @ 7:40 am

Using an Arduino Nano, LEDs, and an accelerometer, this wearable bracelet lights up based on the wearer’s movement. Originally envisioned as an anklet, this piece can be worn on the arm as well. The faster the movement detected, the more quickly the cyclic LED pattern blinks. The wearer can then cycle through the patterns using a designated pose. The piece is meant to express motion through light, whether the wearer is walking, dancing, or anything in between.
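
A minimal sketch of the idea, assuming an analog three-axis accelerometer on A0–A2 and four LEDs on pins 2–5 (the wiring, thresholds, and patterns are assumptions, not the actual build):

    const int LED_PINS[4] = {2, 3, 4, 5};
    int lastX, lastY, lastZ;
    int pattern = 0, step = 0;
    unsigned long lastBlink = 0, lastSwitch = 0;

    void setup() {
      for (int i = 0; i < 4; ++i) pinMode(LED_PINS[i], OUTPUT);
      lastX = analogRead(A0); lastY = analogRead(A1); lastZ = analogRead(A2);
    }

    void loop() {
      int x = analogRead(A0), y = analogRead(A1), z = analogRead(A2);
      // Rough motion estimate: total change on all three axes since last loop.
      int motion = abs(x - lastX) + abs(y - lastY) + abs(z - lastZ);
      lastX = x; lastY = y; lastZ = z;

      // "Pose" switch: x axis pinned high while nearly still, with a cooldown.
      if (x > 900 && motion < 5 && millis() - lastSwitch > 1000) {
        pattern = (pattern + 1) % 2;
        lastSwitch = millis();
      }

      // More motion -> shorter blink interval.
      unsigned long interval = constrain(500 - motion * 4, 50, 500);
      if (millis() - lastBlink > interval) {
        lastBlink = millis();
        step = (step + 1) % 4;
        for (int i = 0; i < 4; ++i) {
          bool on = (pattern == 0) ? (i == step)          // chase pattern
                                   : (i % 2 == step % 2); // alternating pairs
          digitalWrite(LED_PINS[i], on ? HIGH : LOW);
        }
      }
      delay(20);
    }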

Alex Wolfe + Mahvish Nagda | Final Project Update

by a.wolfe @ 1:26 am 17 April 2012

Concept

For our final project, Mahvish and I are developing a dress that shields the wearer from unwanted attention. If verbal communication fails to convey your disinterest, now it can have a physical manifestation, saving you from further measures of slightly harsher words, flight, or a long night of painful grimaces. The dress achieves this through a large kinetic collar paired with a webcam, which can be hidden in a simple and ergonomically efficient topknot. By subtly placing a hand on one’s hip, the wearer signals the camera to take a picture of the perpetrator. Using a face-recognition algorithm, the camera, which is mounted on a servo, will track the newly stored face while it remains in your field of view. The corresponding part of the collar is raised to shield your face from whatever direction the camera is facing, sparing the wearer both eye contact and yet another incredibly awkward social situation.

Mechanical/Electronic Systems

The first thing we attempted was a prototype of the collar design. We were inspired by the wing movement of Theo Jansen’s Strandbeest and wanted to experiment with the range of motion we could achieve, as well as with materials. This initial form is built out of bamboo and laminated rice paper; for the final design we want to use a much more delicate spine material.

[youtube=https://www.youtube.com/watch?v=Kaw7lA5TfYM]

The collar is currently moved by servos which oscillate in separate directions. However, powering multiple servos from the LilyPad does not work well at all, so we built (with much help from Colin Haas) a controller with an external power source to help us direct the four/five servos that will manipulate the collar, as well as the one hidden in the model’s hair.
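
A rough sketch of the oscillation logic, assuming an Arduino-compatible board with five PWM pins and servos powered from a shared external 5 V supply, grounds tied together (this is an illustration, not the controller Colin helped us build):

    #include <Servo.h>

    const int N = 5;
    const int PINS[N] = {3, 5, 6, 9, 10};  // assumed PWM pins
    Servo servos[N];

    void setup() {
      // Servos draw from the external supply; only signal pins come from here.
      for (int i = 0; i < N; ++i) servos[i].attach(PINS[i]);
    }

    void loop() {
      // Sweep on a 2-second period; odd-numbered servos move opposite to
      // even-numbered ones so the collar ripples rather than lifting as one.
      float phase = (millis() % 2000) / 2000.0;
      int angle = (int)(90 + 90 * sin(phase * 2 * PI));
      for (int i = 0; i < N; ++i)
        servos[i].write(i % 2 == 0 ? angle : 180 - angle);
    }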

The facial recognition code does require a laptop to run, so rather than trying to hide a large, flat, inflexible object in the dress, we’re going to construct a bag to go with it and run the wire up the shoulder strap. If you are the kind of lady who would wear a dress like this, it is very likely you’d like to have your laptop with you anyway. The rest of the wires will be hidden in piping within the seams, with the LilyPad exposed at the small of the back.

Facial Recognition + Tracking

For the facial recognition portion we’re currently using OpenCV + openFrameworks. When the image is taken, the centermost face is chosen as the target, and the dress will do its best to track it and avoid it until the soft “shutter” button on the dress is pressed again.
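
Stripped down to plain OpenCV C++ (the project itself runs inside openFrameworks), the centermost-face rule might look roughly like this sketch; the cascade file and the pan mapping are assumptions:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <cstdio>

    int main() {
        cv::CascadeClassifier faces("haarcascade_frontalface_default.xml");
        cv::VideoCapture cam(0);
        cv::Mat frame, gray;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> found;
            faces.detectMultiScale(gray, found);
            if (!found.empty()) {
                // Choose the face whose center is nearest the image center.
                cv::Point mid(frame.cols / 2, frame.rows / 2);
                cv::Rect target = found[0];
                double best = 1e9;
                for (const cv::Rect& f : found) {
                    double d = std::hypot(f.x + f.width / 2.0 - mid.x,
                                          f.y + f.height / 2.0 - mid.y);
                    if (d < best) { best = d; target = f; }
                }
                // Horizontal offset (-1..1) a servo could steer against.
                double pan = (target.x + target.width / 2.0 - mid.x) / mid.x;
                std::printf("pan %+.2f\n", pan);
                cv::rectangle(frame, target, cv::Scalar(0, 255, 0), 2);
            }
            cv::imshow("tracker", frame);
            if (cv::waitKey(1) == 27) break;  // Esc quits
        }
    }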

Other Concepts/Ideas

Depending on how quickly we can get this dress off the ground, other designs we’d love to pull together are a deforming dress that incorporates origami tessellations and nitinol, and a thermochromic dress with a constantly shifting surface.

Final Resources

by a.wolfe @ 4:36 pm 15 April 2012

Nitinol Dress || Water Bomb || Magic Ball

http://alymai.wordpress.com/2011/10/03/laser-cutting-folded-textiles/

http://www.thisiscolossal.com/tags/paper/page/2/

http://www.papermosaics.co.uk/diy.html

happyfolding.com

For laser cutting

http://bryantyee.wordpress.com/2011/01/22/repeating-waterbomb-bases/

http://cedison.wordpress.com/category/origami-tessellation/page/2/

http://pleatedstructures.com/herringbone_pleating/

http://www.barthalpern.com/Bart_Halpern/Pleats_Available_on_Sheer_Fabric.html

http://couturecarrie.blogspot.com/2009/01/laser-lattice.html

http://drawnassociation.net/2011/08/sybil-connolly-couturier/

Dresses

http://origamiblog.com/origami-tessellation-romina-goransky/2011/11/01

http://www.amazon.com/Stitch-Magic-Compendium-Techniques-Sculpting/dp/1584799110/ref=pd_sxp_grid_i_1_0

Kaushal Agrawal | Final Project Idea

by kaushal @ 8:06 am 3 April 2012

The idea of the project is to enable a person to take a photograph anywhere, anytime, without requiring them to pull out their mobile phone or camera.

[youtube https://www.youtube.com/watch?v=YrtANPtnhyg&w=600]

The Inspiration
I saw this idea a while back in the concept video called “Sixth Sense”. As it was publicized to work, a person has a camera + projector hanging around the neck and colored markers tied to their fingers. They make a rectangle gesture, framing the shot the way a camera would. The device around the neck senses the gesture and takes a photo.

Improvements
1. The camera/projector that senses the gesture hangs around the neck; even if we assume it readjusts itself to get the proper perspective, it is still unclear to the user what is actually being framed.
2. It requires strapping markers to the fingers, which works as a concept, but realistically no one wants to do that.

Proposal
1. Use glasses with a camera instead of the camera + projector assembly hanging around the neck.
2. Leverage the mobile phone for computation and storage of photos.
3. Using OpenCV, create a classifier for the gesture that triggers the photo capture (a rough sketch follows below).
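
A sketch of where item 3 could start, assuming a Haar cascade has already been trained on images of the framing gesture; hand_frame_cascade.xml is a hypothetical file that OpenCV’s opencv_traincascade tool would produce from labeled crops:

    #include <opencv2/opencv.hpp>
    #include <string>

    int main() {
        cv::CascadeClassifier gesture("hand_frame_cascade.xml");  // hypothetical
        cv::VideoCapture cam(0);  // stand-in for the glasses camera
        cv::Mat frame, gray;
        int shot = 0;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> hits;
            gesture.detectMultiScale(gray, hits, 1.1, 5);
            if (!hits.empty()) {
                // Gesture seen: save the current frame as the "photo".
                cv::imwrite("shot_" + std::to_string(shot++) + ".jpg", frame);
            }
            cv::imshow("camera", frame);
            // Crude debounce: pause after a capture so one pose = one photo.
            if (cv::waitKey(hits.empty() ? 1 : 1000) == 27) break;  // Esc quits
        }
    }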

Nick Inzucchi – Final Project Ideas + Inspiration

by nick @ 11:22 pm 2 April 2012

1. Kinect body/motion visualizer for large-screen projection and performance settings. Take the point cloud literally; make each voxel a physics particle which reacts to the motion of the dancer.

Three things I love about this project: the 360-degree depth map, depth points becoming physical particles, and the sweet industrial soundscape tying it all together.
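
A purely illustrative sketch of the voxel-as-particle idea from item 1 (the spring constant, damping, and resolution are assumptions):

    #include <vector>

    struct Particle { float z, vz; };  // depth and depth-velocity per point

    // Pull each particle toward the newest depth sample at its pixel; fast
    // body movement means big depth changes, which become big forces.
    void update(std::vector<Particle>& cloud, const std::vector<float>& depth,
                float dt) {
        for (size_t i = 0; i < cloud.size(); ++i) {
            float force = (depth[i] - cloud[i].z) * 8.0f;   // spring (assumed)
            cloud[i].vz = cloud[i].vz * 0.9f + force * dt;  // damped integration
            cloud[i].z += cloud[i].vz * dt;
        }
    }

    int main() {
        std::vector<Particle> cloud(640 * 480, Particle{1.0f, 0.0f});
        std::vector<float> depth(640 * 480, 1.2f);  // stand-in Kinect frame
        update(cloud, depth, 1.0f / 30.0f);
    }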



2. Extend my project 3 into an even more performance-centered system. Bring projections into the mix so the audience can visualize my activity behind the tables on a large scale. Play up the conducting metaphor even more: push and pull musical elements into or out of the mix and onto the projection screen, and brainstorm creative ways of combining spatial actions and visualized audio.

3. Something with a glitch art aesthetic. I recently visited the American Art Museum in DC and saw some images from Thomas Ruff’s jpegs series. I love how these images bring the medium into relief, and I think it would be cool to do something similar for computer art. Ruff’s art works because we have a common language of pixels. What is the common language of interactive art?

http://chicagoartmagazine.com/2011/09/an-unknown-error-has-occurred-new-media-and-glitch-art/

4. Binaural soundscape explorer. Strap on headphones and be completely transported to another audio realm. Seriously dark shit: an ambient industrial factory horror set.

Emergency Telecommunications by Smoke

by luke @ 7:50 am 29 March 2012

In a remote data center, advanced computer vision software watches a live video feed looking for smoke signals emanating from a distant emergency signal box and is called upon to order rations for two feckless explorers, lost in the woods and famished.

The premise was to build a hypothetical communications link between an area of poor phone reception and a cell tower, using smoke signals and a telephoto lens.

Although the outcome is speculative, everything in this video is real, except that we loaded prerecorded video into Max rather than setting up a live video stream to monitor. A Max patch performing basic color analysis on the footage looked for particular colors of smoke: red, red/blue, or blue. When a particular signal was detected, it used Sikuli, a really super screen-automation scripting environment, to dial the number of Papa John’s in Skype, complete with the cool animation. The call recording is a real conversation (although it was a Papa John’s in Mountain View, CA, the only one open at the time we were creating this; perhaps his patience to stay on the line was due to regular robotic pizza calls from Silicon Valley…).
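
The detector itself was a Max patch; a rough OpenCV C++ analogue of the same basic color analysis might look like this (the clip name, HSV thresholds, and the 1%-of-frame cutoff are all assumptions):

    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
        cv::VideoCapture video("smoke_footage.mov");  // assumed prerecorded clip
        cv::Mat frame, hsv, redMask1, redMask2, blueMask;
        while (video.read(frame)) {
            cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
            // Red hue wraps around 0 in OpenCV's 0-179 hue range.
            cv::inRange(hsv, cv::Scalar(0, 120, 80),   cv::Scalar(10, 255, 255),  redMask1);
            cv::inRange(hsv, cv::Scalar(170, 120, 80), cv::Scalar(179, 255, 255), redMask2);
            cv::inRange(hsv, cv::Scalar(100, 120, 80), cv::Scalar(130, 255, 255), blueMask);
            int red  = cv::countNonZero(redMask1) + cv::countNonZero(redMask2);
            int blue = cv::countNonZero(blueMask);
            int minPixels = (int)(frame.total() / 100);  // 1% of frame, assumed
            if (red > minPixels && blue > minPixels) std::puts("signal: red/blue");
            else if (red > minPixels)                std::puts("signal: red");
            else if (blue > minPixels)               std::puts("signal: blue");
        }
    }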

By Craig and Luke

Nick Inzucchi – Project 4 – Kinectrak

by nick @ 6:58 am

[vimeo https://vimeo.com/39389553 w=600&h=450]

My fourth project is Kinectrak, a natural user interface for enhancing Traktor DJ sets. Kinect allows DJs to use their body as a MIDI controller in a lightweight and unobtrusive way. The user can control effect parameters, apply low- and high-pass filters, crossfade, beatjump, or select track focus. This limited selection keeps the interface simple and easy to approach.

[slideshare id=12200721&doc=p4pres-120328211310-phpapp02]

I used Synapse to get skeleton data from the Kinect (after several failed bouts with ofxOpenNI), sent over OSC to a custom Max patch. Max parses the data and uses it to drive a simple state machine. If the user’s hands are touching, the system routes its data to the Traktor filter panel rather than the FX panel. This gesture lets DJs push and pull frequencies around the spectrum; it feels really cool. Max maps the data to an appropriate range and forwards it to Traktor as MIDI commands. The presentation above describes the system in greater detail.
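
The heart of that state machine is just a distance test plus a range mapping. Re-expressed as a small C++ sketch (the real logic lives in the Max patch; the normalized 0..1 coordinate space and touch threshold here are assumptions):

    #include <cmath>
    #include <cstdio>

    struct Hand { float x, y, z; };  // normalized 0..1 coordinates (assumed)

    // Hands "touching" = closer together than a small threshold.
    bool handsTouching(const Hand& l, const Hand& r, float thresh = 0.08f) {
        float dx = l.x - r.x, dy = l.y - r.y, dz = l.z - r.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz) < thresh;
    }

    // Map a normalized 0..1 value onto a 7-bit MIDI controller value.
    int toMidi(float v) {
        if (v < 0) v = 0;
        if (v > 1) v = 1;
        return (int)(v * 127.0f + 0.5f);
    }

    int main() {
        Hand left{0.48f, 0.6f, 0.5f}, right{0.52f, 0.6f, 0.5f};  // sample frame
        if (handsTouching(left, right))
            std::printf("-> filter panel, cutoff CC = %d\n", toMidi(left.y));
        else
            std::printf("-> FX panel, param CC = %d\n", toMidi(left.y));
    }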

I’d like to end with a video of Skrillex (I know) and his new live setup by Vello Virkhaus.

Rothera | Interactive Project | Recreating “Tango” by ZBIGNIEW RYBCZYNSKI

by alex @ 6:08 am

I have always been interested in performative artwork, especially work that is playful or technologically engaging but also speaks to something greater than the individual performance.

McLaren’s work is great. He has a mastery of animation timing. In my favorite piece, “Canon”, McLaren first confuses the audience before revealing the piece’s composition. And just before the viewers fully grasp the rhythm and situation, he ends the performance.

I am very attached to recreating “Tango”, in hopes of collaborating with Zbigniew. At the time, he was working at the edge of technology to produce this piece through compositing and meticulous planning. But I feel the true strength of “Tango” is its idea of transient space: each ‘performer’ is unaware of the others and continues their life without care or contact with another person.

Why this re-performance is important:

My version is important because I will be able to computationally incorporate involuntary viewers into the piece. I imagine it installed in a room or common space. My work differs in that the software will look for people and walking paths that can be incorporated into a looping pattern such that they never collide with each other, as sketched below. Like Zbigniew, my goal is to fill an entire space in which participants neither see nor collide with one another.
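
The non-collision constraint reduces to a simple admission test, sketched here under assumed units (paths resampled to one position per frame of a shared loop, with a fixed clearance radius):

    #include <cmath>
    #include <vector>

    struct P { float x, y; };
    using Path = std::vector<P>;  // one position per frame of the shared loop

    // True if two looped paths keep a minimum clearance at every frame.
    bool clearOf(const Path& a, const Path& b, float radius = 40.0f) {
        for (size_t t = 0; t < a.size() && t < b.size(); ++t)
            if (std::hypot(a[t].x - b[t].x, a[t].y - b[t].y) < radius)
                return false;
        return true;
    }

    // A candidate walk joins the composite only if it clears every accepted path.
    bool admit(const Path& candidate, std::vector<Path>& accepted) {
        for (const Path& p : accepted)
            if (!clearOf(candidate, p)) return false;
        accepted.push_back(candidate);
        return true;
    }

    int main() {
        std::vector<Path> loop;
        admit(Path(100, P{10, 10}), loop);               // first walker
        return admit(Path(100, P{200, 200}), loop) ? 0   // second: no collision
                                                   : 1;
    }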

Tech/learned skills:

-Potential and importance of Kinect imaging, as opposed to standard camera vision, for tracking.
-Kinect’s ability to tag/track people based on body gestures.
-How to organize 3D space in 2D visualization.

Norman McLaren

“Tango” by ZBIGNIEW RYBCZYNSKI

http://vodpod.com/watch/3791700-zbigniew-rybczynski-tango-1983

[youtube https://www.youtube.com/watch?v=pTD6sQa5Nec&w=960&h=720]

[youtube https://www.youtube.com/watch?v=rBZrdO3fU8Y&w=640&h=360]

[youtube https://www.youtube.com/watch?v=Ib_X7PDfx4E&w=640&h=480]

[youtube https://www.youtube.com/watch?v=kZQ11VhXCP8&w=640&h=360]
