John Brieger — Looking Outwards 4: Interaction

by John Brieger @ 3:50 am 6 March 2012

Siftables and Cubelets

One of my favorite pieces of interaction design, Siftables comes from the MIT Media Lab. It’s an interaction platform unlike any other: a series of customizable, rearrangeable blocks that can be used for a variety of computing applications:

For a more robotics-oriented approach, CMU’s CoDe Lab produced a project called Cubelets.

Wooden Mirror

For my first Looking Outwards I covered a non-electronic Daniel Rozin project, so now I thought I would put up his Wooden Mirror (1999), part of a series of four mechanical, electronic mirrors he produced. This video is from a 2005 commission to recreate the work.

Treachery of Sanctuary:

Strobe Flower

Cirque Calder:

Poultry (Paltry) Internet

Alex Wolfe | Looking Outwards | Interaction

by a.wolfe @ 9:30 pm 5 March 2012

Fabric/Paper Speakers on Craft Magazine

Electroplated Textiles/Thread and an Open-Source Spinning Machine by Dreaming Robots

Playtime – Ying Gao

Light sensitive dresses that blur the silhouette when a camera flashes.

Looking Outwards 5

by sarah @ 4:43 pm 2 March 2012

Pranav Mistry: The thrilling potential of SixthSense technology

[ted id=685 lang=en]

I thought the TED talk by Pranav Mistry from MIT’s Media Lab about the “Sixth Sense” project was interesting to think about while brainstorming for the next project. I think he brings up some good points about possibilities in human interaction with digital media. His ideas span from useful tools for everyday life to games to occupy idle time on the subway. He is invested in moving the digital world beyond the screen and making it a more tactile experience. If you have the time and haven’t seen it yet, I’d recommend it!

Interactive City Map of Berlin

I mentioned earlier in the year that I am interested in suburban and city planning, and I wanted to see what kind of interactive art/projects were out there on this topic. From this search I found ART + COM’s work. They made a media table for the Red Town Hall in Berlin, Germany in 2009, which allows visitors to interactively view the city’s history and cultural attractions. While this project seems very fitting for its purpose in the town hall, it’s not quite what I am personally interested in. I think they succeeded in making an engaging informational tool, but I don’t really view it as an art piece.

Drawing Machine, by Lab212

Lab212 created a drawing machine for the public to interact with, installed on the front facade of the French Institute of Morocco to celebrate the international festival of short animated films. There are many electronic drawing machines out there, but I thought this one had an interesting take in its scale and simple interface. It seems like it would be a fun experience to be part of a drawing that big and in a public space. (I don’t get how it relates to short animated films, but it’s fun anyway.)

Duncan Boehle – Simulation Project

by duncan @ 10:08 am 1 March 2012

“Real Environment”

There’s a live demo!

Abstract Robot Expression

by heather @ 9:12 am

I’ve always been a big fan of abstract art, letting the stories twirl in my head around the shapes or along the brushstrokes. In this project, I ask: can robots walk through the space of emotional expression to achieve the same effect? The human mind is wonderful at making abstract connections, creating narratives, and attributing intent.

My clay was the Gamebot robot, a three-axis head and screen. Eventually intended to play board games with us on a touchscreen surface in the Gates Cafe, the project is led by my advisor, Reid Simmons. In the first video below, I show a handcrafted robot expression.

Gamebot: “My Heart Hurts”
[vimeo=https://vimeo.com/37718056]

The software used with this robot, which I had never worked with before, was originally developed for the Robo-Receptionist in Newell-Simon Hall. The Robo-Receptionist has a screen but no head motors, which required some adaptation of the code, as the mapping from ‘simulation’ to ‘robot’ sometimes produced motion that was abrupt, too fast, too slow, or did not communicate the desired state/emotion.

Another reason to rework the code was to create more variable-based expressive states, in which amplitude and timing are controllable characteristics. The previous software mostly uses a categorical approach to emotions, by which I mean that emotional and expressive robot behaviors are scripted and discrete. The video below shows four possible states of waiting: happy, pensive, sad, mad. This is an example of an emotion model with categorical states.
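To make the distinction concrete, here is a minimal sketch of a categorical model (hypothetical names in Processing-style Java, not the actual Robo-Receptionist software): each emotion is a discrete, pre-scripted behavior selected from a fixed set, with nothing in between.

```java
// Categorical model: a fixed set of scripted states with nothing in
// between. (Hypothetical sketch, not the actual Robo-Receptionist code.)
String[] waitingStates = { "happy", "pensive", "sad", "mad" };
String current = "happy";

void setup() {
  size(200, 200);
}

void draw() {
  // transitions are discrete jumps from one whole script to another
  if (random(1) < 0.01) {
    current = waitingStates[int(random(waitingStates.length))];
    playExpression(current);
  }
}

void playExpression(String state) {
  // each state would trigger one pre-authored animation; amplitude
  // and timing are baked into the script and cannot vary continuously
  println("playing scripted expression: " + state);
}
```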

Gamebot: Waiting State Machine
[vimeo=https://vimeo.com/37718158]

In contrast to this state machine, I show the robot running through a continuously varying range of mood valence (this sample is face only, but it will be used in conjunction with motion in the following step). Rather than use a state machine for emotion representation, I borrow a function called mood from the Robo-Receptionist to explore the space between happy and sad. By representing the continuous variable, I can next use shifting sequentials and generative algorithms to explore the space of expression.
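A minimal Processing sketch of the continuous idea (my own illustration with an assumed face parameterization, not the borrowed mood function itself): a single valence value in [-1, 1] drives the mouth curvature, so amplitude and timing become free variables rather than baked-in script properties.

```java
float mood = 0;  // valence: -1 = sad, +1 = happy

void setup() {
  size(400, 400);
}

void draw() {
  // sweep slowly from sad to happy and back again
  mood = sin(frameCount * 0.01);
  background(240);
  noFill();
  ellipse(width/2, height/2, 200, 200);         // head
  float curve = lerp(-25, 25, (mood + 1) / 2);  // frown .. smile
  bezier(width/2 - 40, height/2 + 40,
         width/2 - 15, height/2 + 40 + curve,
         width/2 + 15, height/2 + 40 + curve,
         width/2 + 40, height/2 + 40);          // mouth
}
```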

Gamebot: Gradient of Happy to Sad
[vimeo=https://vimeo.com/37718557]

Next, I scripted a branching and looping function to combine mood state with emotion, in the hope that the unplanned creations would evoke stories in us as we imbue the robot with intent. Throughout evolution, we have used our ability to “read” people and make snap decisions to safely and happily navigate the world. The same unconscious behavior occurs when we see a robot face or a robot in motion.

The code: parameters constrain the lips to transitions between smile, neutral, and frown (or staying the same), while the head motion explores the space of an outer and an inner square.
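A hedged Processing sketch of that rule, reconstructed from the description above (not the actual robot code):

```java
int lip = 1;         // 0 = frown, 1 = neutral, 2 = smile
float headX, headY;  // head pan/tilt target

void step() {
  // lips: step one unit up or down the smile scale, or stay the same
  lip = constrain(lip + (int(random(3)) - 1), 0, 2);

  // head: pick a corner of either the inner or the outer square
  float r = (random(1) < 0.5) ? 50 : 120;
  headX = (random(1) < 0.5) ? -r : r;
  headY = (random(1) < 0.5) ? -r : r;
}
```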

The story: the next step is interpretation; each generation will be unique. I share my stories for the three videos below, but I invite any of you to craft your own. Watch for the lips and head motion!

In I, I see a tale of beauty and intimacy. The camera angles interplay with expression.

Generative Gamebot I: Beauty & Intimacy
[vimeo=https://vimeo.com/37718644]

In II, I see a robot interrupted by an unwanted video camera. The desire to look presentable conflicts with her frustration at the imposition. Benign bickering is likely to follow.

Generative Gamebot II: A Camera, Really?!
[vimeo=https://vimeo.com/37718902]

In III, the robot seems to be gazing into the mirror and thinking, “It’s hard to hide my unhappiness.” It practices smiling, occasionally despairing at the farce and letting it fade again.

Generative Gamebot III: Hard to Hide
[vimeo=https://vimeo.com/37719220]

The Big Ideas:

  1. Humans are good at making up stories
  2. Motion is uniquely expressive
  3. Explore variable rather than categorical expression
  4. Generative Algorithms can help us explore the space

Excited to get a simple model up and running! Thoughts about more complex algorithms I could use to explore this space (especially motion) would be awesome!

Joe Medwid – Project 3 – Artificial Evolution

by Joe @ 9:08 am

Almost a year ago, an inconspicuous Tumblr blog called the PortraitDex caught my attention. It challenged artists, primarily webcomic artists, to create “Pokemon Self Portraits.” Translating the immensely introspective task of portraiture into the evolutionary design language of Nintendo’s smash hit game proved to be an incredibly enjoyable and rewarding experience.

Evolution Example

PortraitDex Submission

Creating my piece really got me thinking about the design language of these evolutions, the very deliberate ways in which forms grow and transform. When you approach these little critters as a legitimate design space, it quickly becomes obvious that each stage changes both the morphology and the personality of the pokemon, resulting in a dramatic change from the first stage to the final one. I even managed to find a scholarly article on the topic. Inspired by concept artists who use simple silhouettes to quickly generate starting points for their creature designs, I decided to explore the evolutionary process of pokemon design as my generative assignment.

Exploratory Sketches

I began by really digging into the basic geometrical changes of the 16 original pokemon with three stages of evolution. I chose to stick to the original 151 pokemon, as they were the work of a single illustrator and have served as the template for the 500-some additions that followed over the course of a dozen years of games. Reducing the creatures to their basic forms revealed some commonalities: rounded bodies and large, friendly eyes in the first stage, slender bodies and elongated limbs in the middle, and powerful, confident forms as the monster achieves its final stage.

The Beasties

Identifying a number of basic elements (head, torso, arms, and legs), I created a simple Processing applet that would, as much as possible, generate forms similar to those displayed in the designs of actual pokemon. I initially explored using Box2D or Toxiclibs to create these forms, but I was unable to wrangle them into the simple parts I required, ultimately ending up with an elementary series of ellipses.
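A minimal sketch of the ellipse approach (an illustrative reconstruction, not the actual applet code): random head, torso, and limb ellipses stacked into rough creature silhouettes.

```java
// Generate a row of random ellipse-creature silhouettes;
// click to produce a new batch.
void setup() {
  size(600, 300);
  noStroke();
  fill(40);
  noLoop();
}

void draw() {
  background(255);
  for (int i = 0; i < 4; i++) {
    pushMatrix();
    translate(90 + i * 140, height * 0.75);
    float torsoW = random(40, 90), torsoH = random(50, 110);
    ellipse(0, -torsoH / 2, torsoW, torsoH);              // torso
    float headR = random(25, 60);
    ellipse(0, -torsoH - headR / 2 + 10, headR, headR);   // head
    for (int s = -1; s <= 1; s += 2) {
      ellipse(s * torsoW * 0.45, -torsoH * 0.6,
              random(10, 25), random(30, 70));            // arms
      ellipse(s * torsoW * 0.25, -5,
              random(12, 28), random(20, 50));            // legs
    }
    popMatrix();
  }
}

void mousePressed() {
  redraw();
}
```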

Riffing on Silhouettes

Once many silhouettes were generated, I decided to incorporate the concept artists’ method and actually create a few rough sketches of potential pokemon based on the program’s results.

Nick Inzucchi – Project 3 – Disconnected

by nick @ 8:41 am

[slideshare id=11808588&doc=slides-120229225632-phpapp02]

90% of my day is spent staring at screens. It’s been months since I’ve spent time outside, or even taken a quiet moment to collect my thoughts. I designed Disconnected to explore this tension. I was interested both in how digital technology can distance one from nature, and in how these digital experiences eventually begin to blend into one’s reality. I wanted to create an experience that would make the viewer explicitly aware of this conflict.

The system works by sensing the user’s emotional stability and using it to dynamically deconstruct a placid natural scene. The concept is that stress, anxiety, and a general lack of calm will manifest as digital interruptions in nature. As static and digital patterns obscure the scene, they directly reflect the mental instability produced by overuse of technology.

The background was shot from one of my favorite spots in Schenley Park. It’s a place I used to go to think, relax and meditate, back when there was time. This view represents a kind of pure connection to nature, something I feel I’ve lost.

The system uses biometric input (skin conductance and heart-rate variance) from a WildDivine IOM USB sensor. It abstracts these values to judge whether the user’s level of arousal is above or below normal, modifying the visualization in turn. Each time the system is used, it records all readings in an external XML file. This archive comes to represent the user’s ‘baseline’ arousal, allowing the system to judge whether on any particular use their state is above or below normal.
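A sketch of how that baseline logic might look in code (a Processing analog with an assumed XML format; the actual piece is written in openFrameworks):

```java
XML archive;     // assumed format: <readings><r v="0.42"/><r v="0.57"/>...</readings>
float baseline;

void setup() {
  // assumes data/readings.xml already exists from previous sessions
  archive = loadXML("readings.xml");
  XML[] rs = archive.getChildren("r");
  float sum = 0;
  for (XML r : rs) sum += r.getFloat("v");
  // the mean of all past readings stands in for "normal"
  baseline = (rs.length > 0) ? sum / rs.length : 0.5;
}

boolean aboveNormal(float arousal) {
  // archive the new reading, then compare it against the baseline
  XML r = archive.addChild("r");
  r.setFloat("v", arousal);
  saveXML(archive, "data/readings.xml");
  return arousal > baseline;
}
```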

Based on this judgement, the system takes several steps. An ofxCameraFilter blurs the screen, modifies contrast, and adds noise. An ofxDelaunay mesh constructed from randomly positioned points also fades into view. Each vertex uses Perlin noise to shift randomly about the screen, and the intensity of this movement is also modified by biofeedback. I love that they came out looking like gnats over the lake. Lastly, the system crossfades between two soundscapes, one clean and the other heavily distorted, depending on the user’s state.
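A rough Processing analog of that noise-driven vertex jitter (the actual piece triangulates the points with ofxDelaunay in openFrameworks; arousal here is a stand-in constant):

```java
int n = 60;
float[] seedX = new float[n], seedY = new float[n];
float arousal = 0.5;  // 0..1, stand-in for the biometric signal

void setup() {
  size(640, 480);
  stroke(255);
  strokeWeight(3);
  for (int i = 0; i < n; i++) {
    seedX[i] = random(1000);
    seedY[i] = random(1000);
  }
}

void draw() {
  background(20);
  // higher arousal = faster, more agitated Perlin-noise wandering
  float speed = map(arousal, 0, 1, 0.001, 0.02);
  for (int i = 0; i < n; i++) {
    float x = width  * noise(seedX[i] + frameCount * speed);
    float y = height * noise(seedY[i] + frameCount * speed);
    point(x, y);
  }
}
```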

VarvaraToulkeridou-Generate with Braitenberg Vehicles

by varvara @ 8:32 am

The objective of this project is form generation via a Braitenberg vehicle simulation.

A Braitenberg vehicle is a concept developed by the neuroanatomist Valentino Braitenberg in his book Vehicles: Experiments in Synthetic Psychology (MIT Press, Cambridge, MA, 1984). Braitenberg’s objective was to illustrate that intelligent behavior can emerge from simple sensorimotor interaction between an agent and its environment, without representation of the environment or any kind of inference.

What excites me about this concept is how simple behaviors at the micro level can result in the emergence of more complex behaviors at the macro level.

—————————————————————————————————————————————

>> inspiration and precedent work
Below is some precedent generative artwork using the concept of Braitenberg vehicles:

Reas, Tissue Software, 2002

In Vehicles, Braitenberg defines a series of 13 conceptual constructions by gradually building more complex behavior with the addition of more machinery. In the Tissue software, Reas uses machines analogous to Braitenberg’s Vehicle 4. Each machine has two software sensors to detect stimuli in the environment and two software actuators to move; the relationships between the sensors and actuators determine the specific behavior for each machine.

Each line represents the path of each machine’s movement as it responds to stimuli in its environment. People interact with the software by positioning the stimuli on the screen. Through exploring different positions of the stimuli, an understanding of the total system emerges from the subtle relations between the simple input and the resulting fluid visual output.

Yanni Loukissas, Shadow constructors, 2004

In this project, Braitenberg vehicles move over a 2D image map collecting information about light and dark spots (brightness levels). This information is used to construct forms in 3D, either trails or surfaces.
What I find interesting about this project is that information from the 3D form is projected back onto the source image map. For example, the constructed surfaces cast shadows on the image map. This results in a feedback loop that augments the behavior of the vehicles.

—————————————————————————————————————————————

>> the background story

There have been attempts in the field of dance performance to explicitly bring movement together with geometry. Two examples are described below:

“Might not the dancers be real puppets, moved by strings, or better still, self-propelled by means of a precise mechanism, almost free of human intervention, at most directed by remote control?”
Oscar Schlemmer

At the Bauhaus, Schlemmer organized dance performances in which the dancer was regarded as an agent in a spatial configuration; through the dancer’s interaction with the spatial container, the performance proceeded in an evolutionary mode. “Dance in Space” and “Figure in Space with Plane Geometry and Spatial Delineations” were performances intended to transform the body into a “mechanised object” operating in a geometrically divided space that pre-existed the performance. Hence, movement is precisely determined by information coming from the environment.

Slat Dance, Oscar Schlemmer, Bauhaus, 1926

William Forsythe imagines virtual lines and shapes in space that can be bent, tossed, or distorted. By moving from a point to a line to a plane to a volume, geometric space can be visualized as composed of vastly interconnected points. These points are all contained within the dancer’s body; an infinite number of movements and positions are produced by a series of “foldings” and “unfoldings”. Dancers can perceive relationships between any of the points on the curves and any other parts of their bodies. What makes it a performance is the dancer illustrating the presence of these imagined relationships by moving.

Improvisation Technologies – Dance Geometry, OpenEnded Group, 1999

>> the computational tool

A Braitenberg vehicle is an agent that can autonomously move around. It has primitive sensors reacting to a predefined stimulus and wheels (each driven by its own motor) that function as actuators. In its simplest form, the sensor is directly connected to an effector, so that a sensed signal immediately produces a movement of the wheel. Depending on how sensors and wheels are connected, the vehicle exhibits different behaviors.
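A minimal Processing sketch of one such wiring, the crossed excitatory connection Braitenberg calls “aggression” (an illustration, not this project’s code; the mouse stands in for the light source):

```java
float x = 320, y = 240, heading = 0;

void setup() {
  size(640, 480);
}

void draw() {
  background(30);
  fill(255, 220, 0);
  ellipse(mouseX, mouseY, 12, 12);  // the light stimulus

  // two sensors, offset to either side of the heading
  float sL = sense(x + cos(heading - 0.5) * 15, y + sin(heading - 0.5) * 15);
  float sR = sense(x + cos(heading + 0.5) * 15, y + sin(heading + 0.5) * 15);

  // crossed wiring: each sensor drives the opposite wheel, so the
  // wheel on the brighter side runs slower, the vehicle turns toward
  // the light, and it speeds up as it approaches
  float wheelL = sR * 2, wheelR = sL * 2;
  heading += (wheelL - wheelR) * 2.0;
  float speed = (wheelL + wheelR) / 2;
  x += cos(heading) * speed;
  y += sin(heading) * speed;

  fill(200);
  ellipse(x, y, 16, 16);            // the vehicle
}

float sense(float px, float py) {
  // stimulus intensity falls off with distance to the light
  return 1.0 / (1 + dist(px, py, mouseX, mouseY) * 0.02);
}
```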

In the diagram below we can observe some of the resulting behaviors according to how the sensors and actuators are connected.

source: http://www.it.bton.ac.uk/Research/CIG/Believable%20Agents/

>> the environment 

In the current project, the environment selected to provide stimuli for the vehicles is a 3D stage where a number of spotlights are placed interactively by the user. The light patterns and colors can vary to elicit varying behaviors.

The vehicles can move in 3D space, reacting to the light stimuli. The vehicles will be regarded as constituting the vertices of lines or the control points of surfaces, which will be transformed and distorted over time. The intention is to constrain the freedom of movement of the vehicles by placing springs at selected points. The Toxiclibs VerletPhysics and PeasyCam libraries are being used.
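A small sketch of how such a spring constraint might be set up with those libraries (an assumed minimal configuration, not the project’s actual code): the vehicle position becomes a Verlet particle tethered to a locked anchor.

```java
import toxi.physics.*;
import toxi.geom.*;
import peasy.*;

VerletPhysics physics;
VerletParticle vehicle, anchor;
PeasyCam cam;

void setup() {
  size(640, 480, P3D);
  cam = new PeasyCam(this, 400);
  physics = new VerletPhysics();
  anchor = new VerletParticle(new Vec3D(0, 0, 0));
  anchor.lock();                                   // fixed constraint point
  vehicle = new VerletParticle(new Vec3D(100, 0, 0));
  physics.addParticle(anchor);
  physics.addParticle(vehicle);
  // the spring limits how far the vehicle can stray from its anchor
  physics.addSpring(new VerletSpring(anchor, vehicle, 80, 0.01));
}

void draw() {
  background(20);
  // stand-in for the light-seeking force from the Braitenberg update
  vehicle.addForce(new Vec3D(random(-1, 1), random(-1, 1), random(-1, 1)));
  physics.update();
  stroke(255);
  line(anchor.x, anchor.y, anchor.z, vehicle.x, vehicle.y, vehicle.z);
  strokeWeight(5);
  point(vehicle.x, vehicle.y, vehicle.z);
  strokeWeight(1);
}
```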

A.Rothera | project 3 | Simulation

by alex @ 7:20 am

My inspiration originally came from a recent moment when someone asked me, “What did you love when you were little?” For some reason I immediately remembered k’nex. I remember, day after day, coming home from school to spend hours alone building and creating. I’ve been thinking about this with respect to my current, long-lasting desire to build new ‘things.’ A lot of my work involves this act of the build and the unbuild. Most specifically, I remember a game my father and I used to play with k’nex, a game I think I’ve subconsciously held onto.

I’m also thinking about the current rise of 3D printing: how printers of every form and size are being made, each with its own concerns of resolution. I think about necessity and material, about how the world will change if 3D printers do become a common tool for every family. Where will all the plastic come from… and go?

With this, I think about the materials we already have that exist for form: both forms we understand and forms we can fabricate or refabricate.


This simulation can work well because the “fitness” of the different stages of the layouts is very computable: there is a definite percentage that can be calculated from the distance between the edges of the shapes and the contour.
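One possible reading of that fitness measure as code (an illustrative sketch with assumed data structures, not the project’s actual implementation): sample points along the target contour and score how closely the piece edges approach them.

```java
// Score a layout: 100 means the piece edges lie exactly on the
// contour, falling off as the average gap grows.
float fitness(ArrayList<PVector> contour, ArrayList<PVector> pieceEdges) {
  float total = 0;
  for (PVector c : contour) {
    float best = Float.MAX_VALUE;
    for (PVector e : pieceEdges) {
      best = min(best, PVector.dist(c, e));  // nearest edge point
    }
    total += best;
  }
  float avg = total / contour.size();        // mean gap to the contour
  return 100.0 / (1 + avg);                  // normalize to a percentage
}
```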

Kaushal Agrawal – Project 3 – Battlefield Simulation

by kaushal @ 6:48 am

This project is an effort to simulate a battle between two armies. I had planned to simulate an army comprising infantry, cavalry, archers, and catapults, but I eventually ended up doing a battle simulation with just the infantry. My idea was driven by one of my Looking Outwards picks, “Node Garden,” where a set of curves would twirl in space to create nodes. Based on the feedback I got, I focused on simulating the behavior of the infantry.

Initial Designs

Bow-Tie Problem
I decided to move each infantry unit toward its nearest enemy. This biased the results: the infantry all turned toward the fastest enemy.
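A minimal sketch of that targeting rule (a reconstruction in Processing with assumed data structures):

```java
// Find the closest opposing soldier; this is the rule that
// produced the "bow-tie" clustering described above.
PVector nearestEnemy(PVector me, ArrayList<PVector> enemies) {
  PVector target = null;
  float best = Float.MAX_VALUE;
  for (PVector e : enemies) {
    float d = PVector.dist(me, e);
    if (d < best) {
      best = d;
      target = e;
    }
  }
  return target;
}

// Advance every soldier one constant-speed step toward its target.
void step(ArrayList<PVector> army, ArrayList<PVector> enemies, float speed) {
  for (PVector s : army) {
    PVector t = nearestEnemy(s, enemies);
    if (t != null) {
      PVector dir = PVector.sub(t, s);
      dir.setMag(speed);
      s.add(dir);
    }
  }
}
```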

Revised Concept

Simulation
