Category Archives: looking-outwards

sejalpopat

12 Feb 2015

For this Looking Outwards I mainly looked at papers related to extracting patterns from 2D visuals.

Pattern Recognition Using Genetic Algorithms
In this paper the author recounts his approach to designing “creatures” in a genetic algorithm and how they perform at recognizing patterns in 2D visuals. I thought this was interesting because the author emphasizes drawing from existing visual systems in animals and refers to that in his design. One problem with this paper is that it reads like a journal entry about ideas that may be more fully explained later but are not quite fleshed out yet; because of this, it was hard to follow some of the paragraphs, which trail off into different possible explanations for the observed results.

A Language for Representing and Extracting 3D Semantics from Paper-Based Sketches
I liked this paper a lot more because the application of the research was much clearer; I think it’s really interesting to think of pattern recognition in terms of recognizing parts of a 3D geometry and not just the repetition of 2D patterns like the previous paper. This paper also appealed to me because I find the idea of paper-based programming and languages that are not linear (but spatially organized) super fascinating. The goal of the paper is to allow sketching, in conjunction with annotations that define operations (i.e. “extrude”, “sweep”, “revolve”), to result in 3D forms.

mileshiroo

12 Feb 2015

Caffe / ofxCaffe

Caffe is an open source deep learning framework Yangqing Jia developed during his PhD at UC Berkeley. The framework can be used for image classification, and a demo on the site lets you submit images to the system and get words back in return. I submitted an illustration of a man to the service and got back the words “consumer goods, commodity, clothing, covering, garment.” Since I don’t know a lot about machine learning or neural networks, it’s difficult for me to understand exactly what this framework is, but I just have to read more at this point. The site is comprehensive and includes links to tutorials, examples, and other documentation. Parag Mital made a wrapper for this library called ofxCaffe, which he describes as follows on the GitHub page: “openFrameworks addon for visualizing and interfacing with pre-trained models in Caffe.” I’d like to try to use this library in a future project, but I have to read up first.
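I haven’t tried it myself yet, but from the documentation it looks like classifying an image comes down to a few lines of Python. Here’s a minimal sketch based on Caffe’s classification example; all of the filenames below are placeholders for a model and mean file you’d have to download first:

```python
import numpy as np
import caffe

caffe.set_mode_cpu()

# Placeholder paths -- substitute a real deploy prototxt, trained weights,
# and the ImageNet mean file that ship with Caffe's examples.
net = caffe.Classifier(
    'deploy.prototxt', 'bvlc_reference_caffenet.caffemodel',
    mean=np.load('ilsvrc_2012_mean.npy').mean(1).mean(1),  # per-channel mean
    channel_swap=(2, 1, 0),   # Caffe's reference models expect BGR input
    raw_scale=255,            # load_image returns [0, 1]; the model wants [0, 255]
    image_dims=(256, 256))

image = caffe.io.load_image('man_illustration.jpg')  # hypothetical input image
probabilities = net.predict([image])[0]              # one probability per class
top_five = probabilities.argsort()[::-1][:5]         # indices of the 5 likeliest labels
print(top_five)  # map these indices to words using the labels file
```

The words the demo returned (“clothing, covering, garment”) would correspond to those top label indices looked up in the model’s label list.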

“NSA-Tapped Fiber Optic Cable Landing Site, Mastic Beach, New York, United States” by Trevor Paglen

“NSA-Tapped Fiber Optic Cable Landing Site, Mastic Beach, New York, United States” is an interactive diptych by artist and geographer Trevor Paglen, included in the Data Issue of Dis Magazine. One interacts with the diptych using the Google Maps interface, which is a smart UI choice and an ironic gesture in light of the subject matter. The left side of the diptych features an image of Mastic Beach, one of several NSA-tapped fiber-optic cable landing sites in the US. On the right side is a collage of images and documents relating to the site — gathered from the Snowden archive and other sources — with annotations that appear when you mouse over them. The base document is a map used for marine navigation, which indicates the location of undersea cables. Paglen’s diptych avoids the abstract metaphors of mass surveillance and instead draws from the methodologies of experimental geography. I appreciate this work’s emphasis on the physical sites and infrastructure of surveillance, and its clear presentation of multiple layers of a complex subject.

Yeliz Karadayi

12 Feb 2015

Twitter Bot: “The Sorting Hat Bot” by Darius Kazemi. 2015


The clever thing about this bot is that it takes a popular character that everyone wishes they could interact with, and lets them interact with it. Everyone wants to know which Hogwarts house they belong in, and that’s what makes this bot so engaging. Throw in the rhyming and it’s a home run. The only problem I have with it is that after a while of looking at posts I start to see some bad or repeated rhymes. It could have been smarter, but who is going to put in the effort to do that, honestly? This was good enough to make it a huge hit.


“EMERGING FACADE – swarm-designed structure in Grasshopper” by Jan Pernecky. 2015

EMERGING FACADE – swarm-designed structure in Grasshopper from Novedge on Vimeo.

You know what’s insane? I posted my swarm jewelry … February 10th? And this video was posted around the same time. Great minds think alike, I suppose… Jump to exactly 1:02:00 to see what I’m talking about. It’s EXACTLY the same as what I made, except he rendered it better. I have no words. Well, I do have words. Mine was a necklace, and his is a ring, I think. Not that that makes a difference. Yeah, no, I really have no words.

ST

12 Feb 2015

My Looking Outwards this week is about narrative and the unique timeline techniques that computationally delivered stories can employ.

The first is Taboo, created in 2008 by Carmen Olmo-Terrasa. The work consists of web pages of ASCII art. The imagery is drawn from religion and sexual fetish.


Each image has several hyperlinks embedded that take the viewer to a new page and a new image. It reminds me of interactive fiction, in that there is an ending, and a point at which the narrative must start over. This point is denoted by this awesome page:

[image]

The project is mostly in Spanish, so I wasn’t able to get the whole sense of the narrative. However, I did enjoy the relationship between the text that I could understand and the imagery. That relationship was even more interesting because the images were themselves made of text.

 

The next project is Short Story by Jon Thomson and Alison Craighead.

This story was arranged into 7 steps, and each step had two distinct options. Clicking on the image changed the option; clicking on the text transported you to the next step. So this story could be anywhere from 7 to 14 steps long! It also looped, so besides the enumeration there was no clear beginning or end.


The story was fairly interesting. I found some steps more intriguing than others, especially enjoying the ones that featured dialogue transcripts or described the image they were paired with.

John Choi

12 Feb 2015

As you might know by now, I’m a really big fan of robots. So I’m going to do a Looking Outwards on more cool robots:

Sepios by ETH Zurich, 2015

Sepios isn’t really an art project in the traditional sense; it’s more of an exploration of underwater robotic actuation. But it looks and moves in a really cool, unconventional way, so I’m going to call it art. Basically, it emulates the motion of a cuttlefish, and has 4 fins that undulate smoothly to create omnidirectional movement underwater. And when I say omnidirectional, the robot truly is omnidirectional: it can move in 6 different directions and rotate on 3 different axes, giving it unsurpassed maneuverability. It moves its fins using 36 servomotors, 9 for each fin, swiveling them in coordination to create propulsion. While the robot is still experimental and in development, I don’t think it will be that practical, especially when compared to underwater robots that use simple rotors. But then again, it moves in a really fluid, expressive way. Note that this is a student project at the Swiss Federal Institute of Technology (ETH) in Zurich.
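The coordinated swiveling is the part that fascinates me most. As a thought experiment (this is purely my own hypothetical sketch, not anything from the ETH team), nine servos along a fin could be driven as a traveling sine wave, with each servo phase-shifted from its neighbor:

```python
import math

NUM_SERVOS = 9          # servos along one fin, per the project description
AMPLITUDE_DEG = 30.0    # hypothetical maximum deflection of each servo
WAVELENGTH = 9.0        # assume one full wave spans the length of the fin
FREQUENCY_HZ = 1.0      # hypothetical wave frequency

def fin_angles(t):
    """Servo angles (degrees) forming a traveling wave along the fin at time t."""
    angles = []
    for i in range(NUM_SERVOS):
        phase = 2 * math.pi * (FREQUENCY_HZ * t - i / WAVELENGTH)
        angles.append(AMPLITUDE_DEG * math.sin(phase))
    return angles

# Example: print snapshots of the wave every 0.1 s for one second
for step in range(10):
    print(["%.1f" % a for a in fin_angles(step * 0.1)])
```

Changing the wave’s direction, amplitude, or phase offset between the four fins is presumably how the real robot steers, but that’s my guess, not their documentation.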

BeachBot by Disney Research, 2015

BeachBot is a robot that is actually designed to make art. Basically, what it does is draw very large pictures in the sand. Somewhat reminiscent of turtles trudging across beaches during mating season, this robot rolls around on balloon wheels so that it leaves no trace on the sand except where intended, dragging a large precision rake behind it. The rake, when applied to the sand, leaves a dark mark, and BeachBot uses its location and angle sensors to get accurate coordinates for the lines and curves it etches on the beach. It’s a simple but brilliant concept, and it is executed very well by the Disney researchers. I think this project could be improved by allowing “shades” in between lines for a gradient effect. One way of doing this would be to have a tank of water and spray areas with varying concentrations of water to darken the sand. But that’s just an idea, and I would love to see one of these in action while strolling along a beach.
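What I find neat is the geometry: since the rake trails behind the robot, BeachBot has to turn its own position and heading into the point where the rake actually touches the sand. Here’s a hypothetical little sketch of that offset calculation (the names and the offset value are my own assumptions, not anything from Disney Research):

```python
import math

RAKE_OFFSET_M = 0.5   # hypothetical distance from the robot's center to the rake

def rake_position(x, y, heading_rad):
    """Estimate where the trailing rake touches the sand.

    (x, y) is the robot's position in meters and heading_rad its heading
    in radians; the rake is assumed to trail directly behind the robot.
    """
    rx = x - RAKE_OFFSET_M * math.cos(heading_rad)
    ry = y - RAKE_OFFSET_M * math.sin(heading_rad)
    return rx, ry

# Example: robot at (2.0, 3.0) facing 45 degrees
print(rake_position(2.0, 3.0, math.radians(45)))
```

The real system surely does something more sophisticated with its sensors, but even this simple offset shows why accurate heading matters: a small angle error moves every etched line.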