New Nature by Marpi is primarily an interactive exhibition, featuring display panels and surround sound that immerse guests in a world filled with virtual creatures – abstract-looking trees, plants, and flowers that react to guests’ presence and hand movements via Kinect and Leap Motion sensors. While the full experience is on exhibit at Artechouse in Washington, DC, accompanying experiences are available as mobile apps on iOS and Android. The project was a collaboration with Kevin Colorado (technical direction), Bent Stamnes (sound design), and Will Atwood (3D art), with documentation by Daniel Garcia and Jeremy Shanahan. New Nature was made using Unity, in conjunction with external software for Kinect and Leap Motion.

I admire the procedural nature of the generative plants and creatures, as well as the physical, tactile nature of “touching” the creatures with your hands. As a person working within Augmented and Virtual Reality, I am also interested in exploring this physicality of virtual objects. Having virtual objects react to your movement through sensors adds a level of involvement and connection to the virtual work that would not otherwise be possible.

Marpi, New Nature

dorsek – Looking Outwards 1

Born from a curiosity about our physicality in such a heavily digitized world, the piece wavers between being human and merely representational, and as such is intended to confuse the viewer. From afar and in person, the piece seems to err toward the genre of hyperrealist painting, and it’s not until the viewer lingers with it for a while that they begin to notice the “painting” slowly breathe, twitch, and blink.


When you boil it down, the piece is a video of a shaved, nude, monochromatically painted individual, shown on an LCD monitor painted the same shade. Funny enough, nothing about this “new media” artwork is actually new… process-wise, there’s a poetic simplicity in how it combines technology & “fine art” (which I think is one of the main reasons I gravitate toward it so much as a point of inspiration), and it reflects on a history of art that questions and explores the presence of the individual/human in an environment saturated with technology.

If I had the opportunity to work on this piece, I would try to play a little more with this space between human & representation; engage with the audience in a way that further intensifies this feeling of confusion through the interface and feedforward, perhaps based on various monitoring technologies (but maybe that’s the easy answer?). On that note, presently it seems to me to be noteworthy for being as interactive as it is without any programming to stimulate/simulate engagement.


Link to the Exonemo website



Graffiti Nature – teamLab

Graffiti Nature is an exhibition in teamLab’s Borderless Museum. After you color in either an animal or flower outline with crayons, your drawing is scanned and projected onto the floor. Your creature comes alive and can walk around, but people can also step on it and eventually kill it.
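teamLab hasn’t published its code, but the creature lifecycle described above (scanned in, wandering around the floor, killed when stepped on) can be illustrated with a toy sketch. All names and behaviors here are my own assumptions, not teamLab’s implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class Creature:
    """A scanned drawing, projected onto the floor as a wandering creature."""
    x: float
    y: float
    alive: bool = True

    def wander(self, step: float = 0.1) -> None:
        # Live creatures take a small random walk; dead ones stop moving.
        if self.alive:
            self.x += random.uniform(-step, step)
            self.y += random.uniform(-step, step)

    def stepped_on(self) -> None:
        # A visitor stepping on the creature kills it.
        self.alive = False
```

In the real installation the projection and floor tracking do the heavy lifting; this only captures the state machine a visitor experiences.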

teamLab is a Tokyo-based company of ~600 people (though the technical department is much smaller). One interactive team member takes on the bulk of the work per project (and coordinates with other departments). The projects are made predominantly in Unity, and are created over the course of a few months.

^ My creature!

^ Strangers, both adults and kids, intentionally stepped on my creature ;-;

I really enjoy how the project transforms the common coloring book experience, giving both kids and adults the chance to bring their drawings to life (with no shame to those bad at drawing). I was, however, quite surprised by the darker side of this project and a few other teamLab works (you can burn little people in another kids’ project, and the animals covered in flowers in the hallways will die if you remove all the flowers from them). Creating death complicates an otherwise utopian world and pushes it more toward reality, but part of me wishes there was more consequence to the virtual murder of my baby creature.


“Think about an interactive artwork…which you find inspirational”

Wow. Choice paradox much? How does one choose? And what does it mean to be inspirational for me? Because there are many different types, and I feel the need to break that down.

Some interactive work I find inspirational because there’s an aspect that I want to be able to grow towards — to aspire to (whether it be high level of craft, narrative, execution), but that doesn’t mean that I find the project as a whole to be inspired.

There are pieces I admire for being outstanding and brilliant as “last word” art, but I don’t know if they have as much power moving the collective intellect or emotional landscape — which I might argue is more important to “inspired” work. (Why might I say that? Well, I’m personally very bored by modern photorealistic paintings. My first impression of them is just that they showcase the artists’ “outstanding” technical skill at observation and representative replication, but most of the initial feeling of ‘awe’ from the audience feels cheaply won because it’s not unique to the piece….simply a byproduct of admiration for that general skillset…and it’s skill in a technique that isn’t novel or niche anymore.)

And then I wonder, how does my choice here reflect on what I prioritize?

Lol. I hope no one reads too deeply into it – so here is me showing that I’ve reflected on the assignment… and then discarded my musings, because it’s just the first Looking Outwards assignment, thus I should take a chill pill and save my words for the actual juice. So here goes:

I love their craft. This is the level of work I aspire to.

INORI (Prayer) from nobumichi asai on Vimeo.

I find it inspirational because, well, yes, a lot of the initial value might be the wow factor of the super-fast tracking and projection (latency in the milliseconds!), but it takes advantage of the technology to create a dance with great rhythm (visual and otherwise) and surrealism. The mixing of the dancer’s physical features with the virtual projections as a design choice does well to augment the ‘surrealism’ that comes with an emerging or fairly novel technology.

This is how it was made:

INORI (prayer) / Making from nobumichi asai on Vimeo.

Does it move me emotionally? Sure, as much as any well-executed dance or film piece might. But intellectually? Perhaps, but it’d be subtle, and I’d have to really take the time to reflect on why this piece was appealing to figure that out. It certainly isn’t nearly as obvious an answer as, say…



..this “AI artwork” that sold for half a million at Christie’s Auction:


Instead of a painter’s signature, in the corner is an innocuous math formula.

This piece raises a lot of questions that I don’t have immediate answers to. The French artist collective “Obvious” reaped the monetary acknowledgement of ‘authorship,’ but they didn’t write the code that made the AI. A lot of it is built on Google’s research and an American developer’s GitHub code (which they also didn’t credit).

Not to mention, they most definitely aren’t the first of their kind as they seem to claim.

OK, so what is the time and effort they put into this considered, if not authorship? I wouldn’t give the artistic or creative credit to the AI, because there was no initiative behind it without a person driving its efforts. Someone put thought behind the data set this was trained on (old master paintings), and while the exact results weren’t things the artists might have thought of explicitly, they aren’t exactly unexpected either, are they? But then that also raises an interesting point… were the artists then simply curators? But by that logic, they didn’t author this; they censored a generator, and were awarded for having some degree of taste. For training the AI, are they simply the biological bootloaders for the actual creator, the AI? I wouldn’t necessarily say that either… because regardless of authorship or creative drive, I feel comfortable at least saying I’d agree that they are responsible for this instance of an image modeled like an old master’s painting and ‘signed’ by a formula in Christie’s gallery.

Perhaps it’s the context that makes it art? That one would take this instance, with its unique process, outside the niche corners of Twitter (with plenty of creators in this medium who have arguably better technique and more novel work, e.g. Mario Klingemann) and frame it as if it were last-word art in one of the art world’s most prestigious auctions?

From that perspective…there are, at least on the surface, strong parallels with the Ready-Made pieces of Dada artists.

hmm… welp. Things to think about. Either way, I’m glad this piece did what it did, even if the artists felt more like hype marketers, because it’s interesting to think about, and sometimes it really is just about asking the interesting questions and making others do that too.

…certainly it’s done more to move people than any of my pieces, so.


yeah so please enjoy this and come talk to me about your thoughts on this topic! Maybe this whole thing is just semantics but those are important to finesse too!







Strandbeest by Theo Jansen (1990-Present)

Theo Jansen and a Strandbeest. (Photo: Loek van der Klis, https://www.flickr.com/photos/50964344@N08/10594194254/in/photostream/)

Jansen’s Strandbeesten grew out of an interest in designing living and autonomous organisms with software. The original ratios for the legs were calculated on an Atari running a month-long evolutionary algorithm. I chose this project because it accomplishes the rare feat of connecting technology to the physical world in a beautiful and functional way.
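The evolutionary search Jansen ran on his Atari has not been published in detail, but its spirit can be sketched in a few lines of Python. Everything below is an assumption: the target values only loosely echo Jansen’s published “holy numbers,” and the toy fitness function (distance to a fixed target) stands in for his real criterion, which scored the flatness of the simulated foot path.

```python
import random

# Hypothetical target rod lengths; the real search had no known answer
# and instead simulated each linkage's foot path.
TARGET = [38.0, 41.5, 39.3, 40.1, 55.8]

def fitness(genome):
    # Toy stand-in: negative squared distance from the target lengths.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.5, scale=1.0):
    # Perturb each rod length with some probability (Gaussian noise).
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(20, 70) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 5]  # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Even this toy version conveys why the original run took a month: every fitness evaluation in the real system meant simulating a walking leg, not computing a one-line distance.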

Illustration of Strandbeest leg design. Ratios were calculated with an evolutionary algorithm.
Strandbeest in action on the beach. There are many different varieties.

Over the past few decades, the artist has created (“evolved”) contraptions capable of propelling themselves with wind, storing energy, detecting and avoiding water, and briefly surviving harsh conditions with simple and robust materials. He employs volunteers or assistants to transport, document, restore, and exhibit his creatures, but generally he works by himself.

“Is this science or garbage?” Homer Simpson. Season 28, Episode 10.

Jansen was inspired to begin this project to investigate the fundamentals of life after reading “The Blind Watchmaker” by Richard Dawkins, a book that explains how the complexity of life emerged from random mutation. The artist sees himself as a pretend God that is evolving creatures over a short period of time or as someone infected by a virus that reproduces through him (and others via 3D printing plans available online). However, the evolution of this work is limited to advances concocted by Jansen or his computational techniques. It would be interesting to see a version of this system that invited participation from other creators and utilized easily reconfigurable materials (think Legos). A combination of network effects and speed could lead to emergent qualities and facilitate Jansen’s goal of artificial creatures that could survive on their own.  

Family tree of Strandbeest evolution. (Source: https://www.exploratorium.edu/strandbeest/meet-the-beests)

Official site: https://www.strandbeest.com/

3D printing plans: https://www.shapeways.com/shops/theojansen


Engelen, John. “Strandbeests by Theo Jansen.” De De Ce Blog. April 13, 2015. Accessed January 21, 2019. http://www.dedeceblog.com/2015/04/13/strandbeests-by-theo-jansen/.

Vicente, J. L. de. “Theo Jansen.” ArtFutura. 2005. Accessed January 21, 2019. https://www.artfutura.org/v3/en/theo-jansen/.

Weschler, Lawrence. “Theo Jansen’s Lumbering Life-Forms Arrive in America.” The New York Times Magazine. November 26, 2014. Accessed January 21, 2019. https://www.nytimes.com/2014/11/30/magazine/theo-jansens-lumbering-life-forms-arrive-in-america.html.

jaqaur – Looking Outwards 1

When I went to SFMOMA in 2017, my favorite floor was the one full of sound-based artworks (a temporary installation). I have always had a strong reaction to sounds (some positive, some negative), and so all of these were especially moving. However, one stands out in my memory as particularly interactive: Cloud by Christina Kubisch.

The artwork is a large tangle of red wires suspended in midair. Different prerecorded sounds play in different parts of the sculpture. These sounds are mostly recordings of electromagnetic fields from locations around the world, with some generated sounds mixed in. By wearing special headphones, guests can pick up the sounds and hear the magnetic fields themselves, creating their own soundscape as they move around the sculpture.

I really like how accessible this piece is. The visual of this massive net of wires fits perfectly with the audio experience it delivers: chaotic, dense with detailed but unintelligible information. It also really makes me feel the presence of all the data–public, mundane, or extremely intimate–that is being transmitted through the air.

Here is a video of Kubisch discussing the piece (note that Clouds is actually a series, and she is talking about a different but very similar Cloud than the one I experienced):



ANIMA II by Nick Verstand

ANIMA II GIF animation
ANIMA II GIF animation

ANIMA II (2017) by Nick Verstand is the second version of a previous work, ANIMA (2014). ANIMA II is inspired by the four-thousand-year-old Chinese philosophy of “Wu Xing,” the “Five Elements” of the universe – metal, wood, water, fire, and earth – which also denote the ever-evolving “Five Stages” the universe passes through. The system of “Wu Xing” describes the interactions and relationships between phenomena, whether natural phenomena or the interaction between the internal and external self. By balancing the five qualities, one is able to actualize their inner self.

Image of ANIMA II by Nick Verstand
ANIMA II by Nick Verstand

I was at the premiere exhibition of this piece after I read about it. I admire how this audio-visual piece strikes me as extremely organic, peaceful, and engaging. The globe has an internal hemispherical projector that projects algorithmically generated fluid visuals, which transition between the five stages. The visuals are accompanied by a spatial sound composition constructed from recordings of the corresponding five elements in nature. The globe also responds to approaching humans: it uses three Kinect sensors to speed up or slow down the diffusion of the fluid based on visitors’ locations.
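The piece’s materials don’t describe the sensor mapping beyond “faster or slower diffusion,” but one plausible minimal sketch is a clamped linear map from the nearest visitor’s distance to a diffusion-speed multiplier. All ranges, defaults, and names here are hypothetical:

```python
def diffusion_rate(distances_m, near=0.5, far=4.0, fast=1.0, slow=0.2):
    """Map the closest visitor distance (meters) to a diffusion multiplier.

    Visitors closer than `near` get the fastest diffusion; anyone beyond
    `far` (or no visitors at all) gets the slowest. Linear in between.
    """
    d = min(distances_m) if distances_m else far
    # Clamp into [near, far], then normalize to t in [0, 1].
    t = (min(max(d, near), far) - near) / (far - near)
    # Interpolate: closer visitors (small t) -> faster diffusion.
    return fast + t * (slow - fast)
```

In the installation, three Kinects would each contribute distance estimates to `distances_m`, and the resulting rate would feed the fluid simulation each frame.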

The work was created by a group of people and studios, took years to complete, and used a projector, hemispherical lens, 8.1 speaker system, 4DSOUND software, and openFrameworks.


My choice for this week’s example of interactive art is not exactly art, per se, but adjacent. Detroit: Become Human is a high-budget, triple-A, branching-narrative adventure game centered on a futuristic Detroit and its impending android revolution. Detroit is the latest entry in a collection of graphically sophisticated but lukewarm “interactive movies,” such as Until Dawn, Heavy Rain, and Beyond: Two Souls, but unlike its predecessors, Detroit‘s attempts to tear down ludonarrative dissonance while incorporating meaningful gameplay are innovative and fascinating, if not fully successful.

Where most video games attempt to depict a narrative in spite of the form’s limitations, Detroit attempts to make its tropes a conscious move. For example, UI elements such as missions, objective markers and invisible boundaries are presented as part of the in-world HUD through which the three android protagonists perceive the world. This HUD distorts and even breaks as the protagonists gain sentience, and so surpasses being a disconnected layer of information to become narratively essential. In terms of narrative, Detroit surpasses its predecessors by offering a much stronger illusion of meaningful and irreversible decisions, as evidenced by the sophisticated flowcharts that display at the end of each level. On the other hand, Detroit is still obviously an “interactive movie”; levels are still movies interspersed with button pressing or thematically appropriate minigames, alternate scenarios are often just the same lines delivered by different actors, and the need to display the flowchart at the end of each level belies a deliberate appeal to completionist video game players. Perhaps these moves seem so innovative to me because I’ve become accustomed to the standards of gaming and accepting of ludonarrative dissonance, as we discussed in class. Nonetheless, seeing these small “tricks” in a highly produced, highly commercial video game that seems to earnestly try to push the boundaries of its form excites me.

Perhaps more interesting is the game’s laborious production and the implications of what that effort attempts to achieve. Detroit: Become Human is the result of €30 million and six years of development—over two years to research and write 3,000 pages, 250 3D-scanned and mocapped actors for 513 roles, and even a new game engine to support graphical advancements. Detroit betrays the video game industry’s desperation to imitate life. In addition to actors who portray the characters, there are actors who take every possible step in every scene. There are stunt doubles. All these actors are scanned and mocapped. Then the modelers and animators adjust the resulting 3D models and animations side by side with reference videos of the original performance. Eye and eyelid animation are manually added because they are excluded from the mocap data. Facial animations are also animated from scratch in action scenes. All this extended effort to reproduce a live performance is reduced to an awkward moment when a few hundred android figures appear untextured and unanimated in what is supposed to be a triumphant crowd due to hardware limitations.

It’s fascinating to examine how Detroit struggles with its conflicting desires to be interactive, but also narrative, but also commercial.