Alex Wolfe + Mahvish Nagda | Final Project Update

by a.wolfe @ 1:26 am 17 April 2012

Concept

For our final project, Mahvish and I are developing a dress that shields the wearer from unwanted attention. If verbal communication fails to convey your disinterest, it can now have a physical manifestation, saving you from the further measures of slightly harsher words, flight, or a long night of painful grimaces. The dress achieves this with a large kinetic collar paired with a webcam, which can be hidden in a simple and ergonomically efficient topknot. Subtly placing a hand on one’s hip tells the camera to take a picture of the perpetrator. Using a face recognition algorithm, the camera, which is mounted on a servo, will track the newly stored face while it remains in your field of view. The corresponding part of the collar will be raised to shield your face from whatever direction the camera is facing, sparing the wearer from both eye contact and yet another incredibly awkward social situation.

Mechanical/Electronic Systems

The first thing we attempted was a prototype of the collar design. We were inspired by the wing movement of Theo Jansen’s Strandbeest and wanted to experiment with the range of motion we could achieve, as well as with materials. This initial form is built from bamboo and laminated rice paper; for the final design we want to use a much more delicate spine material.

[youtube=https://www.youtube.com/watch?v=Kaw7lA5TfYM&feature=youtu.be-A]

The collar is currently moved by servos which oscillate in separate directions. However, powering multiple servos from the LilyPad does not work well at all, so we built (with much help from Colin Haas) a controller with an external power source to drive the four or five servos that will manipulate the collar, as well as the one hidden in the model’s hair.
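
On the code side, driving several hobby servos is simple once the power problem is solved externally. Here is a minimal sketch, assuming an Arduino-style board with the standard Servo library, placeholder pin numbers, and an external 5V supply that shares ground with the board; it is an illustration, not our actual controller firmware.

    // Oscillate the collar servos in alternating directions.
    // Pins, angles, and timing are placeholder assumptions.
    #include <Servo.h>

    const int NUM_SERVOS = 5;
    const int SERVO_PINS[NUM_SERVOS] = {3, 5, 6, 9, 10};
    Servo servos[NUM_SERVOS];

    void setup() {
      for (int i = 0; i < NUM_SERVOS; i++) {
        servos[i].attach(SERVO_PINS[i]);
      }
    }

    void sweep(int from, int to, int step) {
      for (int a = from; a != to; a += step) {
        for (int i = 0; i < NUM_SERVOS; i++) {
          // Even-indexed servos lead; odd-indexed servos mirror them.
          servos[i].write(i % 2 == 0 ? 45 + a : 135 - a);
        }
        delay(15);
      }
    }

    void loop() {
      sweep(0, 90, 1);   // open the collar
      sweep(90, 0, -1);  // close it again
    }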

The facial recognition code requires a laptop to run, so rather than trying to hide a large, flat, inflexible object in the dress, we’re going to construct a bag to go with it and run the wire up the shoulder strap. If you are the kind of lady who would wear a dress like this, it is very likely you’d want your laptop with you anyway. The rest of the wires will be hidden in piping within the seams, with the LilyPad exposed at the small of the back.

Facial Recognition + Tracking

For the facial recognition portion we’re currently using OpenCV + openFrameworks. When the image is taken, the centermost face is chosen as the target, and the dress will do its best to track it and avoid it until the soft “shutter” button on the dress is pressed again.
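
As a rough illustration of the “centermost face” step, here is a minimal sketch using OpenCV’s stock Haar cascade detector in plain C++. The cascade path, camera index, and the mapping from face position to servo angle are assumptions; the actual recognition and tracking pipeline in our openFrameworks app is more involved.

    // Pick the detected face closest to the image center and map its
    // horizontal position to a 0-180 degree servo angle.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    int main() {
        cv::VideoCapture cam(0);  // camera index is an assumption
        cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");

        cv::Mat frame, gray;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> faces;
            detector.detectMultiScale(gray, faces);

            // Choose the face whose center is nearest the frame center.
            cv::Point2f mid(frame.cols / 2.0f, frame.rows / 2.0f);
            int best = -1;
            float bestDist = 1e9f;
            for (size_t i = 0; i < faces.size(); i++) {
                cv::Point2f c(faces[i].x + faces[i].width / 2.0f,
                              faces[i].y + faces[i].height / 2.0f);
                float d = std::hypot(c.x - mid.x, c.y - mid.y);
                if (d < bestDist) { bestDist = d; best = (int)i; }
            }

            if (best >= 0) {
                float cx = faces[best].x + faces[best].width / 2.0f;
                float angle = 180.0f * cx / frame.cols;
                // Send `angle` to the servo controller (e.g. over serial).
                (void)angle;
            }
        }
        return 0;
    }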

Other Concepts/Ideas

Depending on how quickly we can get this dress off the ground, some other dress designs we’d love to pull together are a deforming dress that incorporates origami tessellations and nitinol, and a thermochromic dress with a constantly shifting surface.

Final Resources

by a.wolfe @ 4:36 pm 15 April 2012

Nitinol Dress || Water Bomb || Magic Ball

http://alymai.wordpress.com/2011/10/03/laser-cutting-folded-textiles/

http://www.thisiscolossal.com/tags/paper/page/2/

http://www.papermosaics.co.uk/diy.html

happyfolding.com

For lasercutting

http://bryantyee.wordpress.com/2011/01/22/repeating-waterbomb-bases/

http://cedison.wordpress.com/category/origami-tessellation/page/2/

http://pleatedstructures.com/herringbone_pleating/

http://www.barthalpern.com/Bart_Halpern/Pleats_Available_on_Sheer_Fabric.html

http://couturecarrie.blogspot.com/2009/01/laser-lattice.html

http://drawnassociation.net/2011/08/sybil-connolly-couturier/

Dresses

http://origamiblog.com/origami-tessellation-romina-goransky/2011/11/01

http://www.amazon.com/Stitch-Magic-Compendium-Techniques-Sculpting/dp/1584799110/ref=pd_sxp_grid_i_1_0

SankalpBhatnagar-LookingOutwards-6

by sankalp @ 10:38 pm 9 April 2012

[youtube https://www.youtube.com/watch?v=Kp-kZcImV70&w=560&h=315]

Jesse Chorng’s “Sneaker Mirror,” as discussed in my final project ideas post.

SankalpBhatnagar-FinalProject-Ideas

by sankalp @ 10:38 pm

So I’m an instructor for Carnegie Mellon’s famed student-taught course, Sneakerology, the nation’s first college-accredited course devoted to sneaker culture. Every year we have a final event, called KICKSBURGH, which is a celebration of sneakers! One of our course’s first KICKSBURGHs in 2008 hosted a really awesome interaction project called the Sneaker Mirror by Jesse Chorng (UPDATE: apparently Jesse was a student of Golan’s. Wow, what a small world!) that displayed a projected image captured from a foot-level camera, but instead of pixels, it used a catalog of famous sneakers from throughout history! This is what inspires me to make an interactive data visualization. I’m not quite sure how I’d do it, but I’ll ask for people’s thoughts in class.

Okay, so I got back from class earlier this week after meeting with a few of the stellar people in my segmented group. I brought up the sneaker visualization idea, and people really liked it. They recommended I do something with the chronology of sneakers, and I agree, because I like showing how time affects objects. Then Golan recommended I do something involving the soles of sneakers (see sketch below), which I think would be cool, but I’m not quite confident about actually building it; I don’t exactly have the skills to do something based in industrial design, but we’ll see…

So I’m starting to think this whole sneaker visualization thing might not be the best thing for me. Right now I have a lot on my plate, what with actually planning this year’s Kicksburgh event, and I’m not sure I can round up everything I need, including the knowledge of how to build something like this, by the proposed deadlines. I like the idea of getting a user to stand on a device, but I’m not sure focusing on soles would be a good idea, since there are a lot of little things that could get in the way (how do I implement it? how exact can it be? do I make it voluntary or involuntary?). I’m hoping to involve a user, or at the very least myself, in a voluntary interactive data visualization.

Final Project

by mahvish @ 3:18 pm 8 April 2012

https://www.creativeapplications.net/tutorials/arduino-servo-opencv-tutorial-openframeworks/

Input:

Sensor List: http://itp.nyu.edu/physcomp/sensors/Reports/Reports
http://affect.media.mit.edu/areas.php?id=sensing

Stroke Sensor: http://www.kobakant.at/DIY/?p=792
Conductive Fabric Print: http://www.kobakant.at/DIY/?p=1836

Conductive Organza:
http://www.bodyinterface.com/2010/08/21/soft-circuit-stroke-sensors/
Organza as conductive fabric
http://www.123seminarsonly.com/Seminar-Reports/017/65041442-Smart-Fabric.doc

GSR:
http://www.extremenxt.com/gsr.htm

EEG:
http://en.wikipedia.org/wiki/Alpha_wave
http://neurosky.com/

Output:

Actuators:
Flexinol Nitinol Wire: http://www.kobakant.at/DIY/?cat=28 & http://www.kobakant.at/DIY/?p=2884
http://fab.cba.mit.edu/classes/MIT/863.10/people/jie.qi/flexinol_intro.html
http://www.robotshop.com/search/search.aspx?locale=en_us&keywords=flexinol
http://letsmakerobots.com/node/23086

EL Wire: http://www.kobakant.at/DIY/?p=2992

Stuff with Magnets:
http://www.kobakant.at/DIY/?p=2936

Inspirations:
http://www.fastcodesign.com/1664515/a-prius-inspired-bike-has-mind-controlled-gear-shifting
ITP Wearable: http://itp.nyu.edu/sigs/wearables/
http://sansumbrella.com/works/2009/forest-coat/
http://hackaday.com/2012/03/16/fashion-leads-to-mind-controlled-skirt-lifting-contraption/
http://www.design.philips.com/about/design/designportfolio/design_futures/design_probes/
Affective Computing: http://en.wikipedia.org/wiki/Affective_computing

Research:

V2 has a wiki with specifics on each project: https://trac.v2.nl/wiki/tweet-bubble-series/technical-documentation#ClassicTweets

Thermochromic Fabric:
Readily available as leuco dyes. American Apparel sells thermochromic t-shirts. Also available by the yard on Inventables:

Here’s some background: http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/fabric-display2.htm

https://www.inventables.com/technologies/temperature-sensitive-color-change-fabric–3
http://prettysmarttextiles.com/video/

MOSFET diagram
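
Since the fabric changes color with heat, the usual approach is to switch a sewn-in heating element with a transistor, per the MOSFET diagram above. Here is a minimal Arduino-style sketch assuming a logic-level N-channel MOSFET on a PWM pin driving a conductive-thread heater; the pin number and duty cycles are placeholder assumptions, not tested values.

    // PWM the MOSFET gate to heat the thermochromic patch, then let
    // it cool so the color returns. Values are illustrative only.
    const int HEATER_PIN = 9;  // PWM pin wired to the MOSFET gate

    void setup() {
      pinMode(HEATER_PIN, OUTPUT);
    }

    void loop() {
      analogWrite(HEATER_PIN, 180);  // heat: leuco dye clears
      delay(8000);
      analogWrite(HEATER_PIN, 0);    // cool: color comes back
      delay(12000);
    }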

Bare Conductive Paint (Skin):
http://www.v2.nl/archive/organizations/bareconductive

Organza Fabrics:
http://www.wired.com/gadgetlab/2010/06/gallery-smart-textiles/2/
http://www.josos.org/?p=176

Shareware/Modular Fashion:
http://www.dance-tech.net/video/di-mainstone-interview-and

Contact Dress:
http://www.josos.org/?p=315

Body Speaker:
http://www.wired.com/gadgetlab/2010/06/gallery-smart-textiles/4/

John Brieger — Final Project Concepting

by John Brieger @ 5:26 am 5 April 2012

For my final project, I’m teaming up with Jonathan Ota to expand on his earlier Kinect and Virtual Reality project. We have three major tasks:

  • The design and manufacture of a new carrying rig
  • Porting all the code to openframeworks
  • Programming of algorithmic space distortions

We’re planning on building a real rig that is self-contained, has battery power, and lets us take it into the street. We’re also going to build some sort of real helmet (sorry, fans of the cardboard box). Jonathan and I were thinking we might do some sort of vacuum-formed plywood backpack and maybe insert the VR goggles into a motorcycle helmet or something similar (I might make something out of carbon fiber).

The key expansion to Jonathan’s earlier project is the addition of algorithmic distortions of the Kinect space, as well as color data and better gamma calibration.

By subtly distorting the 3D models, we can play with users’ perception of space, leaving them unsure whether their perception of reality is accurate. This, combined with the third-person view from the crane rig on the backpack, lets us explore concepts in human perception of space.

Distortions we are looking to include (a sketch of one follows this list):

  • Object stretch and scale
  • Space distortion through field stretch and scale
  • Duplication of objects
  • Removing objects
  • Moving objects
  • Transposing objects
  • Transposing space (left to right)
  • Inserting new objects (which we might not do)
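
As an illustration of what one of these distortions could look like in code, here is a radial field stretch applied to a point cloud. The Point3 type and the parameters are assumptions for the sketch, not Jonathan’s actual implementation; in the real app this would run per frame inside the openFrameworks update loop.

    // Scale every point's offset from `center` by `factor`: factor > 1
    // stretches the space outward, factor < 1 compresses it.
    #include <vector>

    struct Point3 { float x, y, z; };

    void radialStretch(std::vector<Point3>& cloud, const Point3& center, float factor) {
        for (Point3& p : cloud) {
            p.x = center.x + (p.x - center.x) * factor;
            p.y = center.y + (p.y - center.y) * factor;
            p.z = center.z + (p.z - center.z) * factor;
        }
    }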

Below you can see an interaction inspiration for some cool helmet stuff, which also incorporates a cool little Arduino to do some panning.

Kaushal Agrawal | Final Project Idea

by kaushal @ 8:06 am 3 April 2012

The idea of the project is to enable a person to take a photograph anywhere, anytime, without requiring them to pull out a mobile phone or camera.

[youtube https://www.youtube.com/watch?v=YrtANPtnhyg; width=600; height=auto]

The Inspiration
I saw this idea a while back in the concept video called “Sixth Sense.” Essentially, the way it was publicized to work is that a person has a camera+projector hanging around the neck and radio-reflective material tied to their fingers. They make a rectangle gesture, casting a camera frame; the device around the neck senses the gesture and takes a photo.

Improvements
1. The camera/projector that senses the gesture hangs around the neck; even if we assume it readjusts itself to get the proper perspective, the framing is still unclear to the user.
2. It requires strapping radio-reflective material to the fingers, which is a nice concept, but realistically no one wants to do that.

Proposal
1. Use glasses with a camera instead of the camera+projector assembly hanging around the neck.
2. Leverage mobile phones for computation and storage of photos.
3. Using OpenCV, create a classifier for the gesture that triggers the photo capture (sketched below).
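
A hedged sketch of what #3 might look like with OpenCV, assuming a cascade classifier trained on the framing gesture; “frame_gesture.xml” is a hypothetical cascade that would still have to be trained, and the phone link is left out.

    // When the trained gesture classifier fires, save the current view
    // from the glasses camera as the photograph.
    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    int main() {
        cv::VideoCapture glassesCam(0);  // camera index is an assumption
        cv::CascadeClassifier gesture("frame_gesture.xml");  // hypothetical

        cv::Mat frame, gray;
        int shot = 0;
        while (glassesCam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> hits;
            gesture.detectMultiScale(gray, hits);
            if (!hits.empty()) {
                // A real version would debounce so a held gesture
                // doesn't fire on every frame.
                cv::imwrite("photo_" + std::to_string(shot++) + ".jpg", frame);
            }
        }
        return 0;
    }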

Varvara Toukeridou – Final project ideas

by varvara @ 7:35 am

The work I did for the interact project reinforced my interest in the idea that crowd behavior (either the crowd’s movement or its sound) may assist a designer in the generation of form. I can think of two ways this could be approached:

– either by designing an interactive geometry that will change and adjust to various inputs, with the objective not just of providing aesthetic results but also of creating different user experiences in that space;

– or by using the crowd input to digitally generate different fixed geometries, each providing a specific user experience.

Looking for precedent projects, focusing on the field of acoustic surfaces, I came across the following project, which I find inspiring:

Virtual Anechoic Chamber

The objective of this project is to see how the acoustic performance of a surface can be modified through geometry or material.

A couple of ideas for the final project:

– Develop a small interactive physical model able to accommodate a small number of sound conditions; a parallel sound–geometry simulation will demonstrate how different geometries affect sound.

– Develop a tool for experimenting with a given geometry system, where, based on sound or movement input, you can see how different geometries interact with that input. For example, what kind of geometry would be ideal for a specific crowd behavior?

Final Project ideas- Zack J-W

by zack @ 7:29 am

Getting off the ground seems to be a theme stuck in my head right now.

The two toys below essentially constitute the subject matter of my first and most developed idea. I would like to suspend a bubble-blowing robot on a wire in the arch hallway of the CFA building. Using motion tracking and simple computer vision, the robot (perhaps in the form of a squirrel) would be triggered to come out of its home when someone walks by. It would locate them and stop overhead, or follow them as they move. If they stop, the robot will blow bubbles at them from overhead.
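
The computer vision trigger could be as simple as frame differencing. A minimal OpenCV sketch, where the blur size, threshold, and pixel-count trigger are placeholder assumptions:

    // Flag motion under the camera by differencing consecutive frames.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cam(0);
        cv::Mat frame, gray, prev, diff;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(21, 21), 0);
            if (!prev.empty()) {
                cv::absdiff(gray, prev, diff);
                cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);
                if (cv::countNonZero(diff) > 5000) {
                    // Someone walked by: send the robot out of its home,
                    // e.g. a command over serial to the motor controller.
                }
            }
            prev = gray.clone();
        }
        return 0;
    }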

The main idea is to engage viewers with that space in an interactive way. While I particularly appreciate the reproduction statues in the hall, the fact that they are static would become interesting again when contrasted with an art installation that is dynamic.

[youtube https://www.youtube.com/watch?v=bOf43yb4V0U]


[youtube https://www.youtube.com/watch?v=9cTmJkCaUI4]

————————-

Another, more literal ‘leaving the ground’ inspiration is the great work people are doing with high-altitude balloon video. There are an increasing number of amateur and professional projects using GoPro and other HD cameras. This idea is much less developed, but I’m wondering what may come of recording and tracking a balloon, then gathering and editing the video as the final project.


[youtube https://www.youtube.com/watch?v=HWp4suB60fg]


[youtube https://www.youtube.com/watch?v=ZCAnLxRvNNc]

————————-

The third idea feeds my UFOlogy monkey. I stay as current as I can on the latest and best UFO/alien encounter/abduction news. To me there is nothing greater than the possibility of making contact with extraterrestrial or extra-dimensional beings. Just as interesting is the effect that possibility has on humanity. Ronald Reagan once said he often thought about how an official alien encounter would serve to unite humanity, whether in the knowledge that we are collectively one of potentially hundreds of races, or under the suspicion that they could wipe us out if they wanted. What have you?

I am inspired to do one of two things: a really fun UFO hoax in Pittsburgh, or a conceptual piece where something very human is sent into space as an alien visit from us to another civilization. Imagine, like the videos above, sending a terrestrial object (a plant, a toy car, etc.) into space as a way for our civilization to pay a visit on the aliens’ home turf.

Theo Jansen's UFO

I also found it hilarious that Theo Jansen, of Strandbeest fame, did a similar project as a young man.

What becomes interesting is our ability to use simple computation to up the ante on jokes that have been played out since the ’50s and ’60s. We can easily introduce RC, amazing light arrays, cameras, and so on.

Four “UFO” pictures taken in 1967 by Michigan teenagers Dan and Grant Jaroslaw were reprinted all over the world. The two eventually admitted it was a hoax.

A brief history of UFO hoaxes.

Luci Laffitte- Final Project Ideas

by luci @ 7:24 am

IDEA ONE- Campus Pokemon

Expanding upon my location-aware text adventure game, I think it would be really awesome to code a multiplayer, campus-based Pokémon app. By that I mean people could run around with a Pokémon-style map of campus on their phones, finding Pokémon and fighting battles with other players they run into who are currently playing.

The challenge: not really knowing how to make this happen.

FYI, I came up with this before Google Quest. (Beetches stole half my idea!)

IDEA TWO- Worldly Sounds

I am also interested in creating an installation using sound. I would want visitors to explore a dark space, listening and moving towards the sounds they are most attracted to; once they have “selected” a sound (AKA moved closer to it for a while), more details about the origin of the sound will become clear (AKA lights will increase around that point and they will see where, or in what country, the sound is local to). I am interested in doing something like this because I think it could be a beautiful exploratory interaction.

I would plan to determine location using a Kinect or a series of distance sensors and an Arduino. The experience would be made up of a large-scale map on the floor, paired with lights controlled by the Arduino, and speakers.
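
A minimal sketch of the sensing side, assuming an HC-SR04-style ultrasonic distance sensor and one PWM-dimmed light per map region; the pins, distance range, and update rate are placeholder assumptions.

    // The closer a visitor lingers to a region, the brighter its light.
    const int TRIG_PIN = 7;
    const int ECHO_PIN = 8;
    const int LIGHT_PIN = 9;  // PWM

    void setup() {
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
      pinMode(LIGHT_PIN, OUTPUT);
    }

    void loop() {
      // Ping the sensor and time the echo (about 58 us per cm, round trip).
      digitalWrite(TRIG_PIN, LOW);
      delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH);
      delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      long cm = pulseIn(ECHO_PIN, HIGH) / 58;

      // Map 200 cm (far, dark) down to 20 cm (close, full brightness).
      int brightness = constrain(map(cm, 200, 20, 0, 255), 0, 255);
      analogWrite(LIGHT_PIN, brightness);
      delay(100);
    }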
