Category Archives: looking-outwards

LValley

29 Jan 2015

CSIS Global Data Chandelier from Sosolimited on Vimeo.

As a fan of 3D art and clever data visualization, I quickly found “Global Data Chandelier” by Sosolimited.

The chandelier itself is a series of hanging colored dots that change to display different worldly statistics.

Quite honestly, this piece bores me. I find parts of it subtle, and the concept seems a little overdone, but I can appreciate its seamless integration into its surroundings and its ability to get a point across without rubbing it in the viewer’s face.

Looking at a Horse from Evan Boehm on Vimeo.

I’m not a horse fanatic, but I find horses majestic. In fact, when I think of the word “majestic,” I automatically think of a horse running in slow motion; that’s why I was drawn to the piece “Looking at a Horse.”

“Looking at a Horse” is a work that shifts based on the way a person looks at it.

In art school, I’ve been told to think about context many, many times, so this piece instantly struck a chord with me. The horse keeps moving forward no matter what, but the surroundings change constantly.

My favorite thing about this piece, other than the horse, is that it constantly challenges the viewer’s relationship to the art. A painting hung in a white room can read completely differently from the same painting hung in a black room.

Setting is everything even if the subject remains static.

mileshiroo

28 Jan 2015

“Corpus-Based Visual Synthesis: An Approach for Artistic Stylization” by Parag K. Mital, Mick Grierson, and Tim J. Smith recreates the styles associated with Impressionism, Cubism, and Abstract Expressionism using algorithmic means. The process matches geometric representations of images to corpora of representative images in a database. The researchers also created an augmented reality “hallucination” that applies the stylization process to the feed from a camera mounted on augmented reality goggles. The project page includes a video that synthesizes Akira Kurosawa’s “Dreams” using an image database built from Van Gogh’s “Langlois Bridge at Arles.” The result is convincing and beautiful; I’d like to watch an entire movie this way. An accompanying paper, presented at the ACM Symposium on Applied Perception 2013, lays out the technical details of the research. It would have been nice to interact with a working demo, but the PowerPoint and paper are thorough enough for one to recreate the process independently.
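
Out of curiosity, here’s roughly how I picture the corpus-matching idea working, as a heavily simplified Python sketch: chop a style image into a corpus of patches, then rebuild each patch of a target frame from its nearest neighbor in that corpus. The actual paper matches richer geometric representations and runs in real time; the patch size, raw-pixel L2 matching, and function names below are my own assumptions, not details from the paper.

```python
# Toy corpus-based stylization: replace every patch of a target frame
# with the nearest-looking patch harvested from a style image.
import numpy as np

PATCH = 16  # patch size in pixels (my assumption, not from the paper)

def harvest_patches(style_img):
    """Chop a style image (H x W x 3 uint8 array) into a corpus of patches."""
    h, w, _ = style_img.shape
    return np.array([
        style_img[y:y + PATCH, x:x + PATCH]
        for y in range(0, h - PATCH + 1, PATCH)
        for x in range(0, w - PATCH + 1, PATCH)
    ])

def stylize(target_img, corpus):
    """Rebuild the target image out of nearest-neighbor corpus patches."""
    out = np.zeros_like(target_img)
    flat = corpus.reshape(len(corpus), -1).astype(float)
    h, w, _ = target_img.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            query = target_img[y:y + PATCH, x:x + PATCH].reshape(-1).astype(float)
            best = np.argmin(((flat - query) ** 2).sum(axis=1))  # L2 match
            out[y:y + PATCH, x:x + PATCH] = corpus[best]
    return out
```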

More here.

mileshiroo

28 Jan 2015

“85 CE 86 EE 4B B1 72 9B 0A AD 15 46 47 33 2C 30” is an eighteen-minute sonic reenactment of the Boids program, developed by artificial life researcher Craig Reynolds in 1986. TFC is the alias of Norwegian artist Lars Holdus, whose technology-engaged practice deals with rhythm, seriality, and melody. 85 CE 86… features what sound like synthetic bird calls over layers of synthesizer textures. Holdus doesn’t explain how the piece reenacts Boids, and it’s unclear whether the piece depends on Boids poetically/conceptually or structurally/technically. The work is compositionally dynamic: it oscillates between modes of ambience and noise. If 85 CE 86… is procedural, it’s difficult to detect an algorithm or set of rules governing the sound. Holdus writes on the project page that “labelling computer generated species after preexisting ones complicates our relation to the former.” We project our understanding of birds onto Boids, and see the artificial creatures as lesser imitations.
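
For context, Boids itself boils down to three steering rules: separation, alignment, and cohesion. Below is a minimal Python sketch of those rules. Since Holdus doesn’t explain his reenactment, nothing here should be read as his method; any mapping from this simulation to sound would be pure speculation on my part.

```python
# Minimal sketch of Reynolds' three Boids rules (1986).
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.vx = random.uniform(-0.01, 0.01)
        self.vy = random.uniform(-0.01, 0.01)

def step(boids, sep=0.03, align=0.05, coh=0.01, radius=0.2):
    updates = []
    for b in boids:
        near = [o for o in boids
                if o is not b and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        dvx = dvy = 0.0
        if near:
            n = len(near)
            cx = sum(o.x for o in near) / n    # cohesion: steer toward flock center
            cy = sum(o.y for o in near) / n
            avx = sum(o.vx for o in near) / n  # alignment: match neighbors' mean velocity
            avy = sum(o.vy for o in near) / n
            dvx += coh * (cx - b.x) + align * (avx - b.vx)
            dvy += coh * (cy - b.y) + align * (avy - b.vy)
            for o in near:                     # separation: steer away from neighbors
                dvx += sep * (b.x - o.x)
                dvy += sep * (b.y - o.y)
        updates.append((dvx, dvy))
    for b, (dvx, dvy) in zip(boids, updates):  # apply all updates after computing them
        b.vx += dvx
        b.vy += dvy
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(50)]
for _ in range(100):
    step(flock)
```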

More here.

Amy Friedman

28 Jan 2015

Theme: OpenFrameworks & OF Addons

EyeWriter


EyeWriter is an open-source, low-cost eye-tracking system that allows people with ALS, paralysis, or other motor impairments to draw on a computer using their eyes. The project was created by Free Art and Technology (FAT), OpenFrameworks, and the Graffiti Research Lab. The software has two components: one tracks the movement and location of the eye using blob detection, and the other is a drawing application that turns that eye movement into marks; both are programmed in OpenFrameworks. This project lets people exercise their creativity when physical constraints would otherwise prevent it, although I am unsure of the accuracy of the drawing tool. I find this work inspiring because we are constantly focused on the medical needs of the physically disabled rather than on opening up the experiences they are shut out of. The tracking system’s ability to follow the eye is impressive, but I wonder how long someone can use it without straining their eyes or getting tired.
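
To give a sense of what the blob-detection component is doing, here is a toy Python sketch: threshold a grayscale camera frame, then take the centroid of the dark pixels as the pupil position. EyeWriter’s real openFrameworks pipeline adds IR illumination, calibration, and smoothing; the threshold value and function name below are placeholders of my own.

```python
# Rough sketch of blob-detection eye tracking: find the dark pupil blob
# in a grayscale frame and return its centroid as the gaze point.
import numpy as np

def pupil_centroid(gray_frame, threshold=40):
    """gray_frame: 2-D uint8 array. Returns (x, y) of the dark blob, or None."""
    ys, xs = np.nonzero(gray_frame < threshold)  # pixels darker than the cutoff
    if len(xs) == 0:
        return None  # no pupil candidate found in this frame
    return xs.mean(), ys.mean()  # centroid of the candidate pupil blob
```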

City Runs

A year’s worth of runners’ workout routes in New York, London, and Tokyo, aggregated from Nike+ to create a mapped visualization using OpenFrameworks and OpenStreetMap. This installation was created by YesYesNo and Dual Forces for Nike retail stores. The visualization lets viewers understand these cities from a runner’s perspective: the denser the white, the more times that path was travelled. It would be interesting to see a week-by-week visualization of the routes to understand patterns in correlation to weather and time of year.
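
Under the hood, this kind of visualization is essentially a 2D histogram of GPS points: every run deposits counts into a grid, and the most-travelled cells render brightest. Here’s a small Python sketch of that idea; the grid size, map bounds, and log scaling are my own assumptions, not details from the installation.

```python
# Sketch of the density idea behind City Runs: bin every GPS point from
# every run into a grid so cells crossed by many runs grow brighter.
import numpy as np

def route_density(runs, bounds=(40.70, 40.80, -74.02, -73.93), size=512):
    """runs: iterable of [(lat, lon), ...] traces. Returns a size x size grid.
    Default bounds are a placeholder box around lower Manhattan."""
    lat0, lat1, lon0, lon1 = bounds
    grid = np.zeros((size, size))
    for trace in runs:
        for lat, lon in trace:
            row = int((lat - lat0) / (lat1 - lat0) * (size - 1))
            col = int((lon - lon0) / (lon1 - lon0) * (size - 1))
            if 0 <= row < size and 0 <= col < size:
                grid[row, col] += 1  # one more pass through this cell
    return np.log1p(grid)  # log scale so busy paths don't wash everything out
```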

rlciavar

28 Jan 2015

Computational fashion by Iris Van Herpen + Neri Oxman + Julia Koerner

http://www.formakers.eu/project-763-iris-van-herpen-neri-oxman-julia-koerner-voltage

An eleven-piece collection of 3D-printed wearables. These pieces were printed on the Objet Connex multi-material 3D printer; unlike most printers, the Objet can print a variety of material properties in a single build. This allows both hard and soft materials to be incorporated within the design, permitting movement and varied texture. “The incredible possibilities afforded by these new technologies allowed us to reinterpret the tradition of couture as ‘tech-couture,’ where delicate hand-made embroidery and needlework is replaced by code.”

I’ve been exploring wearables for a while. I like the challenge of making something that works closely with the human form. Unlike a handheld product, a wearable requires a greater degree of accuracy; it must account for more subtle movements. I found it inspiring that they were able to build everything computationally in 3D modeling software without trying it on the model first. Multi-material prints on an Objet printer aren’t cheap, and ones this size certainly aren’t quick. I typically make one or two mock-ups of a garment before I commit to the final material. Seeing the accuracy they achieved through computational methods makes me want to apply this to my own designs.

Most of these designs are fairly loose-fitting. I wonder how these materials would function as form-fitting clothing. In one part of the article they mention the clothing working as an “armor-in-motion.” How could these methods be applied to making form-fitting protective clothing? The multi-material techniques would allow certain areas (around the joints) to be flexible and areas prone to injury to be rigid. How could this be integrated with smart materials such as D3O?


http://faceresearch.org/demos/average

Faceresearch.org is a participatory online psychology experiment that explores the traits people find attractive in faces and voices. Visitors can take part in online experiments or play with averaging faces together for fun.
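
The averaging demo itself comes down to blending aligned images. A deliberately naive Python sketch is below: a per-pixel mean of same-sized face portraits. The real site first warps each face to shared facial landmarks before blending, which is what keeps averaged features sharp; this sketch simply assumes the inputs are already aligned.

```python
# Naive face averaging: per-pixel mean of pre-aligned, same-sized faces.
import numpy as np

def average_faces(faces):
    """faces: list of H x W x 3 uint8 arrays, roughly aligned to each other."""
    stack = np.stack([f.astype(float) for f in faces])  # avoid uint8 overflow
    return stack.mean(axis=0).astype(np.uint8)          # the "average" face
```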

Appearance, personality, scent, voice: these are all evaluated alongside one another when you experience attraction. Attraction is a complex part of the human experience, and with the rise of communication over the internet it becomes even more complex. Apps like Tinder reduce the experience down to a photo and a swipe, left or right. Thinking about this study in the context of a world with Tinder is interesting: can you determine attraction from appearance alone? I noticed that the study allowed the women in the pictures to wear make-up. This seems like it would skew the data, since it’s not a true representation of their appearance. It might also bias the study, because I didn’t notice any men wearing make-up.