Stippling is a way of drawing by means of small dots of varying sizes, usually in a single color applied by pen or brush. If the dots are drawn close to each other, the apparent shade is darker; if the dots are placed farther apart, the apparent shade is lighter. Stippling is also found in nature, for example in flower petals.
Robert Hodgin created a stippling algorithm that converts images into dot patterns. The particles emerge from the center of the image and push each other outward to create the final image. What I like about this piece is that the particles magically resolve into the image once enough of them have been generated. There is a moment where the viewer ‘sees’ the image in hundreds of particles and goes ‘wow’.
Reference: http://roberthodgin.com/stippling/
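Hodgin hasn’t published the code, so the following is only a guess at the mechanic he describes: particles spawn at the center, take a repulsion radius from the local image tone, and push one another outward until the dot spacing reproduces the picture. The brightness function below is a synthetic stand-in for real pixel data, and all the constants are my own choices.

```python
import numpy as np

# Toy brightness field standing in for a real source image:
# a dark ring on a light background (0.0 = dark, 1.0 = light).
def brightness(p):
    r = np.linalg.norm(p, axis=-1)
    return np.clip(np.abs(r - 0.5) * 4.0, 0.0, 1.0)

rng = np.random.default_rng(0)
points = np.zeros((0, 2))

for step in range(200):
    # New particles emerge near the center of the image.
    points = np.vstack([points, rng.normal(scale=0.01, size=(5, 2))])

    # Repulsion radius follows local brightness: small radii in dark
    # regions pack dots densely (darker shade); large radii in light
    # regions spread them out (lighter shade).
    radii = 0.005 + 0.03 * brightness(points)

    # Pairwise repulsion pushes overlapping particles apart, so the
    # cloud expands outward from the center until it fills the frame.
    diff = points[:, None, :] - points[None, :, :]        # (N, N, 2)
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    overlap = np.clip(radii[:, None] + radii[None, :] - dist, 0.0, None)
    np.fill_diagonal(overlap, 0.0)
    points += 0.5 * (diff / dist[..., None] * overlap[..., None]).sum(axis=1)

print(len(points), "particles settled")
```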
StippleGen:
StippleGen is software developed in Processing. It takes in images and converts them into stippled images. The software also has a mode that displays the Voronoi cells. The number of stipples (points) in the image is controllable: the more stipples, the more intricate the resulting image.
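The Voronoi display is a hint at how StippleGen works: it is built on weighted Voronoi stippling (Secord’s technique), in which each stipple is repeatedly moved to the darkness-weighted centroid of its Voronoi cell, so points crowd into dark regions and thin out in light ones. A minimal sketch of that loop, assuming a synthetic darkness gradient in place of a loaded image:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy "image": a horizontal darkness gradient on a 200x200 grid
# (1.0 = black). A real implementation would load pixel data instead.
h = w = 200
ys, xs = np.mgrid[0:h, 0:w]
darkness = xs / (w - 1.0)

pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
weights = darkness.ravel()

rng = np.random.default_rng(1)
n_stipples = 500  # more stipples -> more intricate result
sites = rng.uniform(0, w, size=(n_stipples, 2))

for _ in range(30):  # Lloyd's relaxation, weighted by darkness
    # Each pixel joins the Voronoi cell of its nearest stipple.
    owner = cKDTree(sites).query(pixels)[1]
    # Move each stipple to the darkness-weighted centroid of its cell.
    wsum = np.bincount(owner, weights=weights, minlength=n_stipples)
    cx = np.bincount(owner, weights=weights * pixels[:, 0], minlength=n_stipples)
    cy = np.bincount(owner, weights=weights * pixels[:, 1], minlength=n_stipples)
    moved = wsum > 0
    sites[moved] = np.column_stack([cx[moved], cy[moved]]) / wsum[moved, None]

print("final stipple positions:", sites[:3])
```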
The House of Walker is an interactive Johnnie Walker whiskey tasting experience created by Nelson Ramon. This interactive exhibit took place in my hometown of Austin, TX; guests were given RFID cards that they placed in a slot at their tasting mats. Two 25-foot tables, each embedded with 6 glass panels, were illuminated with visualizations and information corresponding to each type of whiskey sampled, and guests were able to share their experiences on social networks directly through a button on the table. The video shows only testing, and just still images are included for the actual event, so it’s difficult to see what other information was displayed. This exhibit is an interesting way to visually enhance and personalize alcohol sampling experiences, and it could be applied to other types of events in the service industry to add an educational element to the consumption experience.
Curio Cabinet
The Curio Aquarium is an interactive exhibit created for the 2014 Grace Hopper Celebration of Women in Computing, developed by Specular with support from Microsoft. In this installation, users build bug-like creatures from tangible wooden pieces, place them inside a cabinet, and then watch a digital version come to life inside the virtual aquarium, often interacting with other digital creatures made by other visitors. I like how users can participate in the creation with their own hands-on physical constructions and then observe their digital counterparts interacting with other creatures. The exhibit uses several Kinects to scan the shape of the physical creation and translate it into a virtual creature. The cabinet also listens, so that when the viewer speaks the creature’s name, it comes to life with its own personality and interacts with the other creatures. Perhaps the Kinects could also detect movement outside the aquarium so that the digital creatures react not only to each other but to the viewers outside? I would imagine this could also have educational applications for young students.
“Corpus-Based Visual Synthesis: An Approach for Artistic Stylization” by Parag K. Mital, Mick Grierson, and Tim J Smith recreates the styles associated with Impressionism, Cubism, and Abstract Expressionism by algorithmic means. The process matches geometric representations of images to corpora of representative images in a database. The researchers also created an augmented reality “hallucination” which applies the stylization process to the feed from a camera mounted on augmented reality goggles. The project page includes a video that synthesizes Akira Kurosawa’s “Dreams” using an image database built from Van Gogh’s “Langlois Bridge at Arles.” The result is convincing and beautiful; I’d like to watch an entire movie this way. An accompanying paper, presented at the ACM Symposium on Applied Perception 2013, lays out the technical details of the research. It would have been nice to interact with a working demo, but the PowerPoint and paper are thorough enough for one to recreate the process independently.
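The paper describes a richer pipeline than I can reproduce here, but the core matching idea can be caricatured as nearest-neighbor patch replacement: carve the target frame into fragments and substitute each with its best match from a corpus of style images. Everything below (the patch size, plain Euclidean matching, and the random stand-in data) is my own simplification, not the authors’ method.

```python
import numpy as np

def stylize(target, corpus_patches, patch=8):
    """Rebuild `target` from its best-matching corpus patches.

    A drastically simplified, non-overlapping block matcher; the
    published system matches richer geometric representations.
    """
    h, w = target.shape[:2]
    flat = corpus_patches.reshape(len(corpus_patches), -1)  # (N, patch*patch*3)
    out = np.zeros_like(target)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = target[y:y + patch, x:x + patch].reshape(1, -1)
            # Nearest corpus patch by plain Euclidean distance.
            best = np.argmin(((flat - block) ** 2).sum(axis=1))
            out[y:y + patch, x:x + patch] = corpus_patches[best]
    return out

# Stand-in data: random "style corpus" and random target frame.
rng = np.random.default_rng(2)
corpus = rng.random((500, 8, 8, 3))   # e.g. patches cut from Van Gogh scans
frame = rng.random((64, 64, 3))       # e.g. a frame of "Dreams"
print(stylize(frame, corpus).shape)
```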
“85 CE 86 EE 4B B1 72 9B 0A AD 15 46 47 33 2C 30” is an eighteen-minute sonic reenactment of the Boids program, developed by artificial life researcher Craig Reynolds in 1986. TFC is Lars Holdus, a Norwegian artist whose technology-engaged practice deals with rhythm, seriality, and melody. “85 CE 86…” features what sound like synthetic bird calls over layers of synthesizer textures. Holdus doesn’t explain how the piece reenacts Boids, and it’s unclear whether the piece depends on Boids poetically/conceptually or structurally/technically. The work is compositionally dynamic: it oscillates between modes of ambience and noise. If “85 CE 86…” is procedural, it’s difficult to detect an algorithm or set of rules governing the sound. Holdus writes on the project page that “labelling computer generated species after preexisting ones complicates our relation to the former.” We project our understanding of birds onto Boids, and see the artificial creatures as lesser imitations.
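Whatever the sonic mapping is, the Boids program being reenacted is itself compact: each synthetic bird steers by three local rules, separation, alignment, and cohesion. A minimal sketch of those rules (the neighborhood radius and rule weights are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(-1, 1, (30, 2))     # 30 boids in a 2D plane
vel = rng.normal(0, 0.01, (30, 2))

def boids_step(pos, vel, radius=0.3, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One update of Reynolds's three steering rules."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)
        if not near.any():
            continue
        # Separation: steer away from crowded neighbors.
        sep = (pos[i] - pos[near]).mean(axis=0)
        # Alignment: steer toward the neighbors' average heading.
        ali = vel[near].mean(axis=0) - vel[i]
        # Cohesion: steer toward the neighbors' center of mass.
        coh = pos[near].mean(axis=0) - pos[i]
        new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = boids_step(pos, vel)
```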
Power Vocab Tweet is a Twitter bot written by Allison Parrish, which posts randomly generated words and their Markov’d definitions. She calls this an exploration in “speculative lexicography”. I found this work interesting because it takes a generative approach to text (many high-profile examples of generative media are based in art and sound), and because it attempts to assault readers with randomness; by compelling an audience to process this random, generated text, it forces them to think about something truly novel and reflect, much the way Dada often does.
Parrish explains that the project was inspired by the many existing “word of the day” Twitter bots that send users new vocabulary words daily. She also draws philosophical inspiration from the author Suzette Haden Elgin. Parrish explains:
“Elgin’s contention is that the manner in which a language “chunks” the universe of human perception into words reflects and reinforces structures of power; therefore, to break the world up into words differently is a means of counteracting the status quo.”
In her own work (albeit less rigorously), Parrish explores similar themes.
I like the artist’s idea of making her audience think by having them read plausible but ultimately meaningless text. However, I’m not sure this project manages to do that. Plenty of content compels us to reflect and emote, but vocabulary words and their definitions are not part of it, if only because there are so many real words we do not know or never use; reading them is neither a novel nor a particularly gripping experience. I would rather she generate “missing child” signs or history books, something where reading the text triggers an emotion or forces us to buy into the nonsense on the page.
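Mechanically, the “Markov’d” generation is easy to sketch. Parrish’s actual corpus and chain order aren’t documented here, so this toy version coins new words from a ten-word stand-in list using an order-2 character-level Markov chain:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; a real run would train on a full dictionary.
WORDS = ["luminous", "lexicon", "graphite", "granular", "nominal",
         "nocturne", "filament", "filigree", "tessellate", "tangible"]

ORDER = 2
chains = defaultdict(list)
for word in WORDS:
    padded = "^" * ORDER + word + "$"   # start pad and end marker
    for i in range(len(padded) - ORDER):
        chains[padded[i:i + ORDER]].append(padded[i + ORDER])

def coin_word(rng):
    """Walk the character-level chain from the start pad to the end marker."""
    state, letters = "^" * ORDER, []
    while True:
        nxt = rng.choice(chains[state])
        if nxt == "$":
            return "".join(letters)
        letters.append(nxt)
        state = state[1:] + nxt

rng = random.Random(4)
print([coin_word(rng) for _ in range(5)])
```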
Energy Flow, a joint effort of FIELD and The Creators Project, is an interactive, generative film that links together 4-10 storylines about the forces that shape the modern world; the narrative changes each time the film is played. It employs non-linear narrative and abstract representation, inviting the viewer to bring their own interpretation to what they see on the screen.
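FIELD hasn’t published its sequencing logic, but the structure described above (pick 4-10 storylines and re-order them on every playback) can be sketched in a few lines; the storyline names below are placeholders of mine:

```python
import random

# Placeholder storyline pool; FIELD's real themes and logic aren't public.
STORYLINES = ["finance", "migration", "energy", "protest", "networks",
              "climate", "media", "faith", "trade", "surveillance"]

def generate_cut(rng):
    """Choose 4-10 storylines in a random order, one sequence per playback."""
    return rng.sample(STORYLINES, rng.randint(4, 10))

# Each run of the "film" gets a different narrative sequence.
print(generate_cut(random.Random(5)))
print(generate_cut(random.Random(6)))
```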
This project excites me because it uses generative narrative to try to make compelling statements about the world at large. I haven’t seen the film, but the medium of generative filmmaking is very interesting to me, and it appears that its creators built complex algorithms to power the presentation.
The claim that the responsibility of constructing a message from the content lies with the viewer is a little shaky; an argument could be made that even with randomness, the creator should have some intent or meaning in mind, and that if it isn’t communicated, the nonlinear medium is somewhat ineffective. I don’t know how much I buy into that, though.
The piece is inspired by “current” events such as the Arab Spring and Fukushima; it tries to address the chaos of the modern day. Many of FIELD’s previous projects also explored generative form and nonlinear narrative.