ChainFORM from the MIT Media Lab offers some interesting insight into the present and future of technology. The bit where they use the chain to detect and correct a person’s posture was, to me, an unexpected way to use it. There’s such a broad range of uses that it’s fascinating to think about where it could wind up in the future. I can see it being used in children’s toys, clothing in performance art, robotic arms, future styluses… the list goes on.
I personally don’t have a very broad or thorough concept of what physical computing entails, so I decided to look at three projects that are very different but relate to different sides of physical computing.
The first piece I really liked was similar to Design IO’s Connected Worlds piece. Curious Displays by Julia Tsao simulates what may eventually become a real physical project through a connected display on two screens in a living room setting, plus some sort of sensor that detects the placement of objects around the room.
I found this to be a really interesting project by Lauren McCarthy: one in which individuals have physical followers as opposed to followers in the digital world. I find it interesting how it shifts the idea of a “follower” from something that creates a system of meritocracy to something that might mean you are being stalked, and how being followed is no longer a means of validation in the latter case. As charming an idea as this is, I have doubts about who would actually get the app and use it. It’s really funny, but I wonder if there’s a way to trick people into using it. I suppose I could also see people intentionally using it. When you are a follower, you are doing the stalking, but maybe to the follower it’s just “people watching.” It’s interesting to see how this is perceived from both sides of the experience.
and the wind was like the regret for what is no more by João Costa
What it is: “This work consists of a set of sixteen bottles – with air blowers attached to each one of them – and a wind vane. The vane is fixed on the outside of a window and detects the direction the wind is blowing. Inside of the room, the motor starts blowing air into the bottle that corresponds to that particular direction. This event generates a smooth sound, and each direction has its own pitch. The bottles are arranged in a circle, similar to the shape of the compass rose, depicting the eight principal winds and the eight half-winds.” – Costa
To be honest, I thought this was referencing some important historical monument, but I did some research and realized I was actually just thinking of a SpongeBob episode. The episode “SpongeHenge.” Not the monument Stonehenge. Honest mistake.
I think what makes the project so effective is that it requires your full attention to really be aware of what’s going on. The artist is capturing wind direction with sound, which is something you probably wouldn’t notice if you weren’t fully present in the moment. Wind direction isn’t something that people are generally attuned to, so for us it is something like the invisible.
The capturing of the invisible, which is what the artist claims to get across, isn’t quite there for me. The sound is obvious, but at the same time I don’t think it’d be immediately clear to me that the sound is linked to the wind direction (at least from the documentation). I think the winds would have to be more forceful and controlled than what is given in that environment.
However, I think the project is technically sound.
Adam Ben-Dror seems to have a PIXAR theme going with his physical computing projects. The Abovemarine reminds me of the fish in the bags at the end of Finding Nemo, and you can’t not think about Luxo the lamp when watching the video of Pinokio the lamp. Despite being very PIXAR, they show a fair amount of creativity and originality, especially in the documentation.
They both use pretty simple motion tracking to make the objects move around, but those movements give them a great deal of personality. I would very much like to go on a walk down the street with Jose once he is not bound by the wires.
I was also told Adam was an exchange student here from New Zealand so I feel like I gotta keep upping my game because of how simple but fun these concepts and executions are.
After attending Hiroshi Ishii’s lecture in McConomy Auditorium last week, I was totally blown away by the breadth of work that, despite being varied, all seemed to share a similar spirit about working with computational and digital ideas in a physical manner.
What I found missing from his work in its current form was practicality, or a way to imagine how more complex interfaces and experiences would be enabled within his worldview. On the other hand, the above project by the Fluid Interfaces Group uses a series of existing technologies to make their prototypes completely possible today.
That’s not to say I think the interaction of holding a lens to everything and manipulating the physical world with little to no non-visual feedback is necessarily a good idea, though. Having used and made software and interfaces like this, I know they can be exceedingly frustrating and not overly enjoyable.
Andante by the Tangible Media Group at MIT visualizes animated characters walking along a piano keyboard, as if they are playing the physical keys with each step. It was by chance that this project attracted me, because of the aesthetic of its lit-up figures, reminiscent of my recent Mocap project where I modeled the human figure as a pedestrian walking signal. The luminescent representations of the bodies are similar, and this project as a whole feels well-considered and complete; the attributes of the visualization successfully complement each other, and seamlessly integrate the virtual visuals with the physicality of the piano keys being pressed at the right times. I found the motive of the work very appealing as well: it is based on the expressive, full-body, communicative character of learning music, thus promoting an understanding of how music is instinctively rooted in the human body, something that any audience can find relatable and introspective. The approach also takes advantage of walking as one of the most fundamental human rhythms, indicating the careful and admirable consideration that went into deciding how to represent the movement. I find the color palette of the visuals charming, especially the brightness in conjunction with how the video was recorded, against a darker background that strengthens the design of the figures; I can almost see the clips being from a dream or bedtime story, of little lit fantasy characters bringing music to life in some sort of tale. Another neat characteristic is the variety of forms and physiques represented: different body types, walking postures, and even some animals! All things considered, I think this would make practicing the piano less lonely, and playing the piano more fun; it is indeed like a little other world.
I thought that LineFORM – by Hiroshi Ishii, Ken Nakagaki, and Sean Follmer – was particularly compelling due not only to its simplicity, but to its range of applicability. LineFORM is a “shape changing interface” in the form of a ‘line’: a kind of robotic string that can change itself to fit multiple purposes. It can act as a telephone, transmit and receive data as a touchpad, represent digital actions through movement, record your motions and make you repeat them, serve as a stencil, function as a lamp when a light bulb is plugged into it, or become whatever is required when anything else is attached. Just the number of things you can do with a 3D line is really impressive to me, and is nicely straightforward. Overall I think the possibilities that LineFORM represents are very interesting, but I would like to see it applied to more practical/everyday tasks and actions.
I was first exposed to Hiroshi Ishii’s work last year thanks to Austin Lee, my studio professor at the time. As a professor in the environments track of the School of Design, Austin showed us Hiroshi’s work as a way to help communicate what environments design means. His work helps create harmony between digital and physical interactions and environments.
Being able to see and meet Hiroshi Ishii after studying his work was a wonderful experience. After discovering his passionate, inspirational, and whimsical attitude toward education and design, his work took on a new life. Hearing him speak helped me not only understand his work better, but also look at new technology, art, and design differently. I think we often get so caught up in the technical power of a piece that we dismiss it as a tech demo rather than a simple art piece. For instance, when I first saw his levitation piece a year ago, I was in awe of the technology, but now, after hearing Hiroshi talk, I see it in a new light. His work gives off the impression that it is magic, and I think this shows that we often take technology for granted.
Perhaps what resonated with me most during his talk was his argument about the boundaries between art, design, philosophy, and computer science. He told us not to label these disciplines, or ourselves, because labels tell the world what you are not just as much as they say what you are. These fields live together and survive because of one another. I enjoyed how he used verbs to identify when to optimize each field: envision (art and philosophy), embody (design and technology), and inspire (art and aesthetics).
Additionally, I really appreciated his comments on friendship and collaboration. I think this is one of the greatest skills I have acquired from the School of Design. My closest friends are the ones who critique the hardest, push me the furthest, and challenge me the most. I also respect that, as successful as he is, he is still humble and takes a significant amount of time to recognize those who have helped him along the way. As the world, particularly America at the moment, feels more divided than ever, I appreciate that Hiroshi emphasizes the importance of friendship and collaboration.
Tangible Media Group – Daniel Leithinger, Sean Follmer, Alex Olwal, Akimitsu Hogge, Hiroshi Ishii / 2013
inFORM project link:
inFORM is a project by the Tangible Media Group that I really enjoyed. The project aims to create a relationship between a user’s digital information and tangible space. I found this project to be amazingly playful and fun. The idea of transferring digital data to physical form is very interesting to me. In a digital world, we often don’t make much of an effort to be physical anymore, even though the physical world is so important and integral to living. While we stare at our screens, we’re almost living in a virtual world that is constantly evolving to fit us better, to be more addicting, and to not let us go. When everyone is making such an effort to turn everything digital, it’s refreshing to see a way physicality can factor back into digital space and possibly make digital space better. In the end, I think the thought processes and the technology behind this project have great potential going into a future of digital takeover, and that they will help us develop combined digital and physical spaces: interactions and interfaces that satisfy our invisible and tangible needs, which, truly, I think are the best kind.
Also, they really made it look damn good in the video.
A project from Hiroshi Ishii and the Tangible Media Group at the MIT Media Lab that I became very interested in is Materiable. The project is based on Hiroshi Ishii’s concept of tangible works called Radical Atoms. The idea behind Radical Atoms is a combination of computational screen work and actual physical work: using technology to make previously intangible data tangible. Materiable exemplifies this idea very well. In this project, interactive prisms/pins come together to create a larger malleable surface that is responsive to touch and able to replicate dynamic materials (i.e., sponge, elastic surface, etc.). This work, among many other Hiroshi Ishii works, is on the forefront of new technology and dynamic works. I’m excited to see how this concept of Radical Atoms can expand further from its roots and affect other previously intangible media like film.
After going to Ishii’s wonderful lecture, I was looking through the collection of projects from the Tangible Media Group and was particularly struck by Cilllia. Over the last year or so, I’ve become increasingly interested in biomimicry and the design insights that can be gleaned from studying natural systems. Simultaneously, I’ve become increasingly skeptical of 3D printing as a medium for genuine innovation, as so much of the hype surrounding it boils down to little more than overpriced on-demand desk decorations.
However, this project thoroughly impressed me. By framing 3D printing not as the end medium, but as a method of synthesizing a unique material that itself has new properties, the TMG explores many compelling use-cases for these furry plastic doodads. Additionally, the output is astoundingly low-tech: aside from the complex production method, it doesn’t require electricity or hardware to function, but instead reveals new possibilities when combined with tech.
Some of the scenarios presented are more rooted in aesthetics and unique textures, and are even a little goofy, but others, such as the directional touch recognition, are beautifully functional. Overall, this project and its documentation are a phenomenal example of what exploratory and experimental design should do: open doors for new ideas and provoke the audience into questioning possibilities in a way that invites response and collaboration.
I chose to write about “Conditional Lover” because I thought it was absolutely charming. It’s a robot that uses data it gathers from the pictures on your phone to figure out what sort of facial features you would find attractive. Then, it uses its camera and “fingers” to use Tinder for you, deciding which users you would like and swiping left or right accordingly.
I love this idea (as an art piece more than a practical tool), because it makes an objective, impersonal process out of dating, which should be very personal. However, when you think about it, Tinder has already done that, replacing meaningful connections with “Do I find him/her attractive at first glance?” If Tinder is going to take most of the humanity out of dating, why not just hand the whole thing over to a robot? This piece really made me think about our superficial culture surrounding relationships, if only for a little while, so I think it has succeeded not only as a work of technology, but as a work of art.
For this Looking Outwards assignment, I chose to focus on the bioLogic project from the MIT Tangible Media Group. Hiroshi Ishii discussed it in his lecture, but I didn’t understand it the first time around, and I wanted to research it because I am very interested in the idea of biological entities that play an active role in art, and especially in new media, which is often associated with tech more than anything else.
Essentially, bioLogic was an investigation into programming living organisms to “invent responsive and transformable interfaces of the future.” The group focused on Bacillus subtilis natto, which expands and contracts in response to atmospheric moisture. The group used this property of the bacteria to make several products, including little synthetic flowers and, most notably, a garment that ventilates based on the sweat coming off the body.
Their process documentation is very helpful in understanding how their project actually works:
Finally, their project webpage contains a couple of interviews, including one with a representative of New Balance, who talks about the importance of bioLogic to the athletic industry, highlighting the team’s success in making an interesting and useful product.
Hiroshi Ishii – Materiable
This series of creations/projects by Hiroshi Ishii and the other members of the Tangible Media Group is perhaps the group’s most famous, and possibly therefore the most cliché Looking Outwards pick. I personally connect to this project because I had seen the documentation long before this, possibly even before college. I had never heard of physical computing, interactive art, or anything of the sort at the time. I just knew, when I saw it, that this was the future!
The Materiable “tables” are complex yet simple designs that allow you to “form” shapes using blocks on motors. As seen in the documentation, you can use programmatic designs (such as 3D graphs), interactive reactions (such as motion sensing), and “moldable” forms, which react to direct physical actions like pushing down on the blocks. I really loved the visualizations of the 3D graphs, and the “real life” example of the phone that moves into view.
Some critiques would, I suppose, be that it is still a bit too static (limited to a square of area on a table). I would love to see a room-sized version that you can walk on and interact with that way, perhaps with a 2-3 foot height when each block is fully extended. And, of course, the “resolution” could also be increased, although with each additional “pixel”/block the complexity would only increase. But maybe in time.
How Do You Design the Future?
“Transform Beyond Pixels, Towards Radical Atoms” by Hiroshi Ishii
- The last time Hiroshi was in this room was for Randy Pausch’s Last Lecture, September 18, 2007
- Ars Electronica
- Students are the future, how do you inspire them?
- 1992: ClearBoard: Seamless Collaboration Media
- 1995: TRANS-Disciplinary: Finding opportunity in conflict between disciplines & Breaking down old paradigms to create new archetypes
- Ideas Colliding, Opportunities Emerging, Disciplines Transcending, Arts + Sciences
- Music Technology MirrorFugue III by Xiao Xiao – embodied interaction to artistic interaction
- Lexus Design in Milan 2014 – Transform
- 1. Visions >100 years 2. Needs ~10 years 3. Technologies ~1 year
- Tangible Bits: embody digital information so it can be interacted with directly with the hands
- Origin: Weather Bottle – the sound of weather coming out of a soy sauce bottle in her kitchen
- I/O Brush by Kimiko Ryokai, Stefan Marti & Hiroshi Ishii 2004
- It looks like a painting but goes beyond that
- Capturing and weaving history
- Audiopad by James Patten and Ben Recht (Physics & Media)
- Urp: Urban Planning Workbench
- Two Materials:
- 1. Frozen Atoms
- 2. Intangible Pixels
- Third Material
- 3. Radical Atoms
- Time Scape: based on relief, manipulate in real time
- inFORM 2013: http://tangible.media.mit.edu/project/inform ART NOT UTILITY
- Sean Follmer, Phillip Scholl, Amit Zoran
- Opposing Elements / Design vs Technology / Stillness vs Motion / Atoms vs Bits
- Materiable is an interaction framework that builds a perspective
- Flexibility, Elasticity, Viscosity
- bioLogic: “Bio is the new interface” http://tangible.media.mit.edu/project/biologic/
- “Making material Dance“
- Why do you have to obey?
- The Future is not to predict but to invent – Alan Kay 1971 “This is the century in which you can be proactive about the future; you don’t have to be reactive. The whole idea of having scientists and technology is that those things you can envision and describe can actually be built”
- Envision — Art and philosophy,
- Embody — Design and Technology,
- Inspire — Art and Aesthetics
- Eye –> Telescope –> Observatories –> Hubble Space Telescope –> Voyager 1
- People could only see the world from their own perspective
- Towards Holistic Worldview
- Holistic Perspective –> Heuristic Focus –> (“Life is short”)
- Inspiration: Douglas Engelbart, Mark Weiser, William Mitchell, Bill Buxton, Alan Kay, Nicholas Negroponte (Heroes and Gurus)
- Who are friends? Bouncing ideas back, this tension is friendship
- Golan Levin – Director of Studio for Creative Inquiry, CMU 🙂
- Austin Lee
- Lining Yao
- Technology soon becomes obsolete
- How do you focus on vision? What is the most exciting
- Abacus – a physical embodiment of a digit
- Abacus – sound of accounting
- What do I care about?
- Get more legs to your chair so people understand because art is abstract
- Virtual Reality is completely opposite of Randy Pausch’s Dream and what I do, but I’m nice and I just say let them do it
- Your one hour listening to me is beyond art, design, and technology
- What do you want to communicate, and influence?
- Reacting to Failure, sometimes the floor gets so low, the ceiling gets so high, but what’s the new potential?
- Try not to think of Art, Design, Science, and Technology as boundaries
Hiroshi Ishii mentioned materials that translate the intangible yet versatile digital ‘pixel’, or atom, to the physical. He proposes radical, interactive, auto-adaptive materials.
I am very excited about moving toward tangible media! A particular sentiment that Ishii expressed: “digital pixel, you can’t touch it… it sucks”. The direction he proposes for technology is one wherein more technology will feel like less. Technology will be used to make technology ‘invisible’, in the sense that most of it will be translatable to the physical world.
The little motorized tile ‘pixels’ presented at Lexus Design Conference in Milan is a perfect example.
Here’s a little bits/tangible media inspired collaboration project of mine from earlier:
This is an Arduino project that I really enjoyed. The sound generated from motion, when combined with body motion capture, could give a whole new depth to the phrase “percussive dance.”
“1. The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.”
“5. The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”
I am looking forward to the design lecture tomorrow. I am both excited and incredibly wary of the booming rise of the Internet of Things. I can imagine it could very easily be another subject that would go perfectly as an expansion of the documentary “Death by Design”, which explores the question –
“What is the cost of our digital dependency?”
It uncovers a global story of damaged lives, environmental destruction, and devices that are designed to die.
With the engineering principles, I find #1 to be something that’s emphasized in my design studios. When designers include ‘fancy’ futuristic tech as solutions in their concept pitches, the biggest warning is always that the designer has to consider carefully how to make the technology work for people, finding the gaps within the system that the tech could fill rather than making gaps in the existing system to necessitate a convoluted solution. I like to liken using technology that one doesn’t fully understand as a design solution to the federal government pouring money into a flopping program; it doesn’t really help. Well… I mean, it does. Money will inherently make the wheels spin faster, but the amount of money certainly isn’t proportional to the net benefit. The money simply isn’t being used effectively. Technology, if not understood well, is much the same.
For number 5, I agree! Tools shape you, you shape tools!
That’s a quote directly from Graphic #37, a graphic design magazine that introduces computation as a medium for graphic design.
An analog example of this sentiment is the evolution of symbols. When a designer first makes a symbol for an object, it tends to be more literal and representative. But as the public gets used to the association, the next generation of designers redesigns the symbol as a simplified version of the previous one. If this second symbol had been used at the very beginning, it might not have been nearly as effective. (Think of the symbols for a phone.)
I chose Rachel Binx to look at because she had worked for NASA, and that seemed pretty cool, but that was a while ago and she’s moved on to different things since then. Those things are still pretty cool, though. The work that reeled me in was her visualizations of viral Facebook posts. They weren’t very readable without an explanation, but watching them was still mesmerizing. The structures formed have a lot of energy as they explode every which way, and have a very organic form to them. I also think the data she chose to base this project on was funny: three of George Takei’s Facebook posts. I see people sharing his posts all the time, so I totally believe the explosive virality shown in the time-lapse video, but at the same time, I’ve always wondered why George Takei is so active on social media. I get that he’s big into social activism and all that, but he posts a lot of memes for an old man. I’ve read that he has other people posting for him sometimes, but I still find the whole thing odd. Unfortunately, all the links to the original posts are broken now.
All in all, these visualizations don’t really resolve any confusion about George Takei’s social media activity, but they’re beautifully done and fascinating to watch.
LLAP Mr. Sulu
AI is a big topic nowadays, but sometimes I like to take a step back from super-intelligence and instead look at the progress being made in endearing robots. I love the idea of people having empathy for robots. I always hear the arguments “but it doesn’t have feelings” or “it’s not like it actually cares,” but at what point does this no longer hold true? It’s practically the same discussion as an AI’s capacity for emotion. Pinokio, by Adam Ben-Dror, is one of those projects that pushes this discussion just a little bit further. It isn’t groundbreaking, but I find it endearing nonetheless.
This assignment has three parts: Some readings, a Looking Outwards, and a software project. Please note that these deliverables have different due dates:
- Part A. Reading-Response #08: Two Readings about Things, due Monday 11/14
- Part B. Looking Outwards #08: On Physical Computing, due Monday 11/14
- Part C. Software for a Skeleton (Computation + Mocap), due Friday 11/11
- Ten Creative Opportunities
- Technical Options & Links
- Summary of Deliverables
Part A. Reading-Response #08: Two Readings about Things
This is intended as a very brief reading/response assignment, whose purpose is to introduce some vocabulary and perspective on “critical making” and the “internet of things”. You are asked to read two very brief statements.
Due Monday, November 14.
Please read the following one-page excerpt from Bruce Sterling’s “Epic Struggle for the Internet of Things”:
- http://www.strelka.com/en/press/books/the-epic-struggle-for-the-internet-of-things (this is an alternate link)
Please (also) read the one-page “Critical Engineering Manifesto” (2011) by Julian Oliver, Gordan Savičić, and Danja Vasiliev. Now,
- Select one of the tenets of the manifesto that you find interesting.
- In a brief blog post of 100-150 words, re-explain it in your own words, and explain what you found interesting about it. If possible, provide an example, real or hypothetical, which illustrates the proposition.
- Label your blog post with the Category, ManifestoReading, and title it nickname-manifesto.
Part B. Looking Outwards #08: Physical Computing
This LookingOutwards assignment is concerned with physical computing and tangible interaction design. As part of this Looking Outwards, you are strongly encouraged to attend the public lecture of Hiroshi Ishii on Thursday, November 10 at 5pm in McConomy Auditorium. (Chinese food will be served afterwards in the STUDIO.)
Due Monday, November 14.
Here are some links you are welcome to explore for your Looking Outwards assignment:
Physical computing projects:
- By Hiroshi Ishii’s Tangible Media Group at MIT Media Lab
- At Steve Wilson’s links
- On Creative Applications (tagged ‘physical’)
- On Vimeo (tagged ‘physical computing’)
Arduino (specific) projects:
- The Arduino Playground Exhibition
- Instructables Arduino Projects
- ArduinoArt Vimeo Group
Please categorize your Looking Outwards with the WordPress Category, LookingOutwards08, and title your blog post nickname-lookingoutwards08.
Part C. Software for a Skeleton
For this project, you are asked to write software which
creatively interprets, or responds to, the actions of the body.
You will develop a computational treatment for motion-capture data. Ideally, both your treatment, and your motion-capture data, will be ‘tightly coupled’ to each other: The treatment will be designed for specific motion-capture data, and the motion-capture data will be intentionally selected or performed for your specific treatment.
Code templates for Processing, three.js and openFrameworks are here.
Due Friday, November 11.
Ten Creative Opportunities
It’s important to emphasize that you have a multitude of creative options — well beyond, or alternative to, the initial concept of a “decorated skeleton”. The following ten suggestions, which are by no means comprehensive, are intended to prompt you to appreciate the breadth of the conceptual space you may explore. In all cases, be prepared to justify your decisions.
- You may work in real-time (interactive), or off-line (animation). You may choose to develop a piece of interactive real-time software, which treats the mocap file as a proxy for data from a live user (as in Setsuyakurotaki, by Zach Lieberman + Rhizomatiks, shown above in use by live DJs). Or you may choose to develop a piece of custom animation software, which interprets the mocap file as an input to a lengthy rendering process (as in Universal Everything’s Walking City, or Method Studios’ AICP Sponsor Reel).
- You may use more than one body. Your software doesn’t have to be limited to just one body. Instead, it could visualize the relationship (or create a relationship) between two or more bodies (as in Scott Snibbe’s Boundary Functions). It could visualize or respond to a duet, trio, or crowd of people.
- You may focus on just part of the body. Your software doesn’t need to respond to the entire body; it could focus on interpreting just a single part of the body (as in Theo Watson & Emily Gobeille’s prototype for Puppet Parade, which responds to a single arm).
- You may focus on how an environment is affected by the body. Your software doesn’t have to re-skin or visualize the body. Instead, you can develop an environment that is affected by the movements of the body (as in Theo & Emily’s Weather Worlds).
- You may position your ‘camera’ anywhere — including a first-person POV, or with a (user-driven) VR POV. Just because your performance was recorded from a sensor “in front” of you, this does not mean your mocap data must be viewed from the same point of view. Consider displaying your figure in the round, from above, below, or even from the POV of the body itself. (Check out the camera() function in Processing, or the PerspectiveCamera object in three.js, for more ideas. If you’re using three.js, you could also try a WebVR build for Google cardboard.)
- You may work in 3D or 2D. Although your mocap data represents three-dimensional coordinates, you don’t have to make a 3D scene; for example, you could use your mocap to control an assemblage of 2D shapes. You could even use your body to control two-dimensional typography. (Helpful Processing commands like screenX() and screenY(), or unprojectVector() in three.js, allow you to easily compute the 2D coordinates of a perspectivally-projected 3D point.)
- You may control the behavior of something non-human. Just because your data was captured from a human, doesn’t mean you must control a human. Consider using your mocap data to puppeteer an animal, monster, plant, or even a non-living object (as in this research on “animating non-humanoid characters with human motion data” from Disney Research).
- You may record mocap data yourself, or you can use data from an online source. If you’re recording the data yourself, feel free to record a friend who is a performer — perhaps a musician, actor, or athlete. Alternatively, feel free to use data from an online archive or commercial vendor. You may also combine these different sources; for example, you could combine your own awkward performance, with a group of professional backup dancers.
- You can make software which is analytic or expressive. You are asked to make a piece of software which interprets the actions of the human body. While some of your peers may choose to develop a character animation or interactive software mirror, you might instead elect to create “information visualization” software that presents an analysis of the body’s joints over time. Your software could present comparisons of different people making similar movements, or could track the accelerations of a violinist’s movements.
- You may use sound. Feel free to play back sound which is synchronized with your motion capture files. This might be the performer’s speech, or music to which they are dancing, etc. (Check out the Processing Sound Library to play simple sounds, or the PositionalAudio class in three.js, which has the ability to play sounds using 3D-spatialization.)
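To illustrate the 2D option above: the perspective projection that functions like Processing’s screenX()/screenY() perform can be sketched in a few lines. The following Python sketch is a simplified model (a pinhole camera at the origin looking down the negative z-axis, with a symmetric frustum); the real Processing and three.js functions additionally apply the full model-view and projection matrices.

```python
import math

def project(x, y, z, width=640, height=480, fov_deg=60.0):
    """Project a 3D camera-space point to 2D screen coordinates.

    Simplified pinhole model: camera at the origin looking down -z,
    symmetric frustum with the given vertical field of view.
    """
    if z >= 0:
        raise ValueError("point must be in front of the camera (z < 0)")
    # Distance from the eye to the image plane for this field of view.
    f = (height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Perspective divide, then shift so (0, 0) is the top-left corner.
    sx = width / 2.0 + f * x / -z
    sy = height / 2.0 - f * y / -z
    return sx, sy

# A point straight ahead of the camera lands at the screen center.
print(project(0.0, 0.0, -100.0))   # (320.0, 240.0)
```

The divide-by-depth is the whole trick: the same joint moved twice as far from the camera lands twice as close to the screen center, which is what makes 2D treatments of 3D mocap feel dimensional.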
Technical Options & Resources
As an alternative to the above, you are permitted to use Maya (with its internal Python scripting language), or Unity3D for this project. Kindly note, however, that the professor and TA cannot support these alternative environments. If you use them, you should be prepared to work independently. For Python in Maya, please see this tutorial, this tutorial, and this video.
For this project, it is assumed that you will record or reuse a motion capture file in the BVH format. (If you are working in Maya or Unity, you may prefer to use the FBX format.) We have purchased a copy of Brekel Pro Body v2 for you to use to record motion capture files, and we have installed it on a PC in the STUDIO; it can record Kinect v2 data into these various mocap formats.
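For orientation, BVH is a plain-text format: a HIERARCHY section describing the skeleton’s joints, followed by a MOTION section giving a frame count, a frame time, and one line of channel values per frame. The Python sketch below (the helper name and tiny sample file are illustrative, and real use requires parsing the full joint hierarchy) shows how little is needed to get at the raw frame data:

```python
def read_bvh_motion(text):
    """Extract frame count, frame time, and per-frame channel values
    from the MOTION section of a BVH file (hierarchy parsing omitted)."""
    lines = text.strip().splitlines()
    start = lines.index("MOTION")
    frames = int(lines[start + 1].split()[1])          # "Frames: N"
    frame_time = float(lines[start + 2].split()[-1])   # "Frame Time: t"
    data = [[float(v) for v in line.split()]
            for line in lines[start + 3:start + 3 + frames]]
    return frames, frame_time, data

# A minimal (illustrative) BVH file: one root joint, two frames.
sample = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 3 Xposition Yposition Zposition
  End Site
  {
    OFFSET 0.0 10.0 0.0
  }
}
MOTION
Frames: 2
Frame Time: 0.0333333
0.0 90.0 0.0
1.0 91.0 0.5
"""

frames, dt, data = read_bvh_motion(sample)
print(frames, dt, data[1])   # 2 0.0333333 [1.0, 91.0, 0.5]
```

The frame time (here roughly 1/30 s) is what you would use to keep playback, sound, and rendering in sync with the original performance.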
Our Three.js demo (included in BVH example code):
Our Processing demo (included in BVH example code):
Summary of Deliverables
Here’s what’s expected for this assignment.
- Review some of the treatments of motion-capture data which people have developed, whether for realtime interactions or for offline animations, in our lecture notes from Friday 11/4.
- Sketch first! Draw some ideas.
- Make or find a motion capture recording. Be sure to record a couple takes. Keep in mind that you may wish to re-record your performance later, once your software is finished.
- Develop a program that creatively interprets, or responds to, the changing performance of a body as recorded in your motion-capture data. (If you feel like trying three.js, check out their demos and examples.)
- Create a blog post on this site to hold the media below.
- Title your blog post, nickname-mocap, and give your blog post the WordPress Category, Mocap.
- Write a narrative of 150-200 words describing your development process, and evaluating your results. Include some information about your inspirations, if any.
- Embed a screengrabbed video of your software running (if it is designed to run in realtime). If your software runs “offline” (non-realtime), as in an animation, render out a video and embed that.
- Upload an animated GIF of your software. It can be brief (3-5 seconds).
- Upload a still image of your software.
- Upload some photos or scans of your notebook sketches.
- Test your blog post to make sure that all of the above embedded media appear correctly. If you’re having a problem, ask for help.