Avatar Apparel is a conceptual lifestyle brand that shifts your online #brand into meatspace. Clothing ranges from a Google Maps suit (a concept realized beautifully in Rachel Binx’s monochōme) to wearing your Facebook wall as a t-shirt. My favorite realized line is Searchwear, which pulls the top images from a Google image search of your name and tastefully places them on a suit (the highest-ranked photos go on the chest, the middle-ranked on the arms, and the lower-ranked on the pants). Why choose what patterns my clothes should have when Google can algorithmically decide what images are most “me”?
Random Darknet Shopper is an automated shopping bot. Each week it’s given a budget of $100 in Bitcoin, and it randomly chooses and purchases one item on the deep web, which it then sends to the Kunst Halle St. Gallen. The items are then displayed at the exhibition “The Darknet. From Memes to Onionland”. Beyond the shock value and questionable legality of the bot purchasing things like MDMA, the project presents an interesting portrait of what the modern black market looks like.
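The bot’s actual code isn’t described here, but the weekly routine as stated (a fixed budget, a random pick, shipping to the gallery) is simple enough to sketch. The helpers below (`order_fn`, the listing format) are hypothetical stand-ins, not the real marketplace interface:

```python
import random

WEEKLY_BUDGET_USD = 100  # the bot's stated weekly allowance, paid in Bitcoin

def weekly_purchase(listings, order_fn, gallery_address):
    """Pick one random affordable listing and ship it to the gallery.

    `listings` is assumed to be a list of dicts with 'name' and 'price_usd';
    `order_fn` is a hypothetical function that places the order.
    """
    affordable = [item for item in listings if item["price_usd"] <= WEEKLY_BUDGET_USD]
    if not affordable:
        return None
    choice = random.choice(affordable)            # the "random" in Random Darknet Shopper
    order_fn(choice, ship_to=gallery_address)     # purchase and ship to Kunst Halle St. Gallen
    return choice
```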
WikiGalaxy is a 3D visualisation of Wikipedia articles. What I like most about this project is the galaxy metaphor. Wikipedia articles are related to each other: when I read about something I don’t know, the definition always contains two or three more words I then have to go back and look up on the wiki. I feel the artist depicts this connectedness of the articles beautifully. Wikipedia really is like a galaxy of interconnected articles that forms a universe of its own. I find the visualisation pretty neat, especially the fly-through animations and the UI features: the map, the ability to turn off the UI, and exploring random articles in WikiGalaxy. I also believe there is scope for adding more UI features and for structuring the information better when someone clicks on a star in the galaxy (an article on Wikipedia). Currently, that data appears scattered on screen and is not readable. As a user, I enjoyed the interaction with the UI but couldn’t read the articles well on the website, which seems to be the prime focus of the project.
More: http://wiki.polyfra.me/
2. “Handheld” by Pors and Rao
Handheld is an interactive art piece. It simulates the motion of a small person holding a big piece of paper in both hands and trying to balance it. What I found most impressive about this project is its simplicity: there are not many elements involved in the design, yet it creates an effective experience. The motion of balancing the piece of paper also seems real. Hands at the edges, facing outwards, create the image of a person hidden behind the paper and facing it. What’s missing, or what could be added at the cost of a little complexity, is real-time interaction with the audience. If the motion of the hands reacted to a person’s voice, their distance, or the ambient noise in the room, richer and more engaging interactions could be created. If the art piece were made into a painting that tilts to the left when a viewer says “left” (as if trying to get the orientation of the painting right on the wall), and so on, I believe it would be more fun.
Using Processing and MetaCarta, this project maps travel routes by searching for the phrase “Just landed in” in Twitter updates and extracting the home city listed on a person’s profile page.
I’ve long been fascinated with maps, and being able to uncover and visualize global travel movements as 3D arcs drawn from text updates is quite beautiful. The pulses in the arcs allow us to see the movement more clearly. Examining the maps shows the majority of the movement originating from North America, with not many flights mapped back to North America; this indicates that Twitter usage appears to be more concentrated in the U.S. and Canada, or perhaps that additional languages would need to be parsed to determine movements from other continents. Additional parsing would also be needed to detect movement from abroad back home to North America. The routes are also plotted from west to east, when the airplane may not have actually traveled in that direction; this is likely a limitation of simply searching for the phrase “just landed in” and connecting the location to the user profile’s hometown without additional directional information. The creator explained that he was inspired by a discussion with a friend about tracking the spread of infectious diseases, as well as by the “Where’s George” project (which tracks the travel of dollar bills by having users enter the serial number and ZIP code). He concedes that much more work is needed to capture disease travel information accurately, but it’s a good and visually interesting start.
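The original project is built in Processing with MetaCarta for geocoding; as a rough illustration of the pipeline described above, here is a Python sketch with hypothetical `lookup_home_city` and `geocode` helpers standing in for the real profile lookup and geocoder:

```python
import re

# Phrase the project searches for in Twitter updates
JUST_LANDED = re.compile(r"just landed in ([\w\s,.-]+)", re.IGNORECASE)

def extract_trips(tweets, lookup_home_city, geocode):
    """Turn 'just landed in X' tweets into (origin, destination) coordinate pairs.

    `tweets` yields dicts with 'text' and 'user'; `lookup_home_city` and
    `geocode` are hypothetical stand-ins for the profile lookup and the
    MetaCarta-style geocoding used in the original project.
    """
    trips = []
    for tweet in tweets:
        match = JUST_LANDED.search(tweet["text"])
        if not match:
            continue
        destination = geocode(match.group(1).strip())       # where they say they landed
        origin = geocode(lookup_home_city(tweet["user"]))    # home city from the profile
        if origin and destination:
            trips.append((origin, destination))               # one arc on the globe
    return trips
```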
This project, Furl, combines EEG signals with soft silicone inflatable air muscles. Brain-wave activity is read and then translated into air pressure, causing these soft robotic appendages to smoothly inflate and unfurl like tentacles. Seeing this soft robotic material bend and morph in varying contours is a pleasant contrast to the rigid material from which most robots are typically constructed, and there are clear applications for more natural prosthetics for the severely disabled. New architectural possibilities are also enabled by this material. However, the collective appearance is a bit like an alien sea creature, which is somewhat off-putting. Similar projects that influenced Furl include Slow Furl, a room-sized fabric installation that morphs and reacts to the presence of guests moving about in the room, and HygroSkin, a modular thin wooden skin that reacts to weather changes by contracting and expanding built-in apertures without any electronics.
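I don’t know Furl’s actual control code, but the mapping described (brain-wave activity translated into air pressure) could look something like this toy sketch, where the EEG reading, pressure range, and valve interface are all assumptions:

```python
def eeg_to_pressure(band_power, min_kpa=0.0, max_kpa=40.0):
    """Map a normalized EEG band-power reading (0..1) to a valve pressure setpoint."""
    band_power = max(0.0, min(1.0, band_power))
    return min_kpa + band_power * (max_kpa - min_kpa)

def control_loop(read_band_power, set_valve_pressure):
    """Continuously translate brain-wave activity into inflation pressure.

    `read_band_power` and `set_valve_pressure` are hypothetical interfaces to
    the EEG headset and the pneumatic valve driving the silicone air muscle.
    """
    while True:
        level = read_band_power()                     # e.g., normalized alpha-band power
        set_valve_pressure(eeg_to_pressure(level))    # more activity -> more inflation
```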
Explained: Intercepting/modifying/replacing commands from PlayStation controllers using an intermediary microcontroller (a rough sketch of the idea follows this block).
Chosen: I’ve been playing a lot of video games lately, and this just spoke to my current mindset. Software-reliant cheaters are pretty much gone from the online console community thanks to a variety of countermeasures, but what happens when the hardware is modified?
Critiqued: His given prompt for the project (to realistically slow characters down from exhaustion) is an artistic prompt, not a marketable one. I think opportunities are missed by presenting this object with a prescribed purpose instead of as a platform.
Related: Near Future Lab is a speculative design / design fiction practice, so much of their work is derived from and inspired by (and in turn derives and inspires) work in those fields. This particular project seems to have been the product of two things: a desire to create an “anti-game controller” and a desire to explore logic analyzers. I can’t find any notable works derived from it.
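As a rough sketch of the man-in-the-middle idea (not the project’s actual firmware), imagine the intermediary reading each controller report, rewriting the analog stick bytes, and forwarding the result; the report layout, byte offsets, and I/O functions below are all assumptions:

```python
def scale_stick(value, factor):
    """Scale an analog stick byte (0-255, 128 = center) toward center by `factor`."""
    centered = value - 128
    return int(128 + centered * factor) & 0xFF

def relay(read_report, write_report, fatigue=1.0):
    """Forward controller reports, slowing movement as `fatigue` drops toward 0.

    `read_report` / `write_report` are hypothetical byte-level interfaces to the
    controller side and console side of the intermediary microcontroller.
    """
    while True:
        report = bytearray(read_report())
        # Assume bytes 6 and 7 hold the left stick X/Y axes (real layouts vary by pad).
        report[6] = scale_stick(report[6], fatigue)
        report[7] = scale_stick(report[7], fatigue)
        write_report(bytes(report))
```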
Explained: Contents of a computer’s memory converted to color and woven on a CNC loom.
Chosen: The actual content of memory has fascinated me since I learned how it works. The logic behind the Jacquard loom provides an interesting analogue to that concept as well.
Critiqued: Since there’s a direct mapping from RAM to yarn color, the entire tapestry can be reverse-engineered thanks to an included key, which is cool (a rough sketch of such a mapping follows). I wish there were more of an experiential connection between what the fabric shows and what was actually on the computer.
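The project’s exact encoding isn’t documented here; the sketch below shows one way a direct, reversible byte-to-yarn mapping could work, with an assumed four-color palette acting as the key:

```python
YARN_PALETTE = ["black", "red", "gold", "cream"]  # assumed 4-color palette (the "key")

def bytes_to_yarn(memory: bytes):
    """Split each memory byte into four 2-bit values, each selecting one yarn color."""
    colors = []
    for byte in memory:
        for shift in (6, 4, 2, 0):
            colors.append(YARN_PALETTE[(byte >> shift) & 0b11])
    return colors

def yarn_to_bytes(colors):
    """Reverse the mapping: read the weave back into the original bytes."""
    index = {name: i for i, name in enumerate(YARN_PALETTE)}
    out = bytearray()
    for i in range(0, len(colors), 4):
        byte = 0
        for name in colors[i:i + 4]:
            byte = (byte << 2) | index[name]
        out.append(byte)
    return bytes(out)
```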
In this project, a team of developers called Takohi created a Gameboy emulator that runs inside Unity games. The idea of a game-within-a-game is further enhanced by running the game on an Oculus Rift to create a virtual reality experience. I find the idea of nested games fascinating. Imagine if virtual reality were so advanced that it was almost indistinguishable from actual reality. If virtual reality games were nested inside other virtual reality games, it would be impossible for a player to figure out when they are playing a game and when they are in reality: basically the game equivalent of Inception. Of course, this idea is also pretty terrifying. That being said, I think there’s a lot of potential for games-within-games without taking it to such an extreme level (e.g., actions the player takes in nested games could produce consequences in the outer layer of games). To this end, I think the project didn’t explore everything it could with the nested-game concept. It is more of a technical demo than an exploration of how nested games can create a unique gameplay experience. Then again, this project is obviously just meant to be a tool for others to use in their own Unity projects, so it’s understandable that the creators themselves might not explore all the possibilities of nested games right away. The creators’ main influence was simply that they enjoyed playing Gameboy games as children and wanted to create an emulator, so they decided to build one in Unity.
The second project I found was created by Yousuke Ozawa and is a visualization of classical paintings as data. In these pieces, the artist took digital image files of the paintings and printed out the code that makes up those files, displaying the image data itself as a new work of art. I think these pieces are interesting because they are a reverse take on data visualization: they present raw data as an art form, when data visualization pieces usually try to mask the data with pretty visuals. It also inspired me to think about how data can come from many different and unique sources; data isn’t limited to numbers in spreadsheets. While I appreciate the bold statement of just displaying the raw, unedited data, I do think the project could be interesting if the raw data were more interactive for the viewer rather than just a long cluster of words to look at. For example, it could be interesting to provide some sort of app in which the user could search for certain words or patterns in the data, or compare patterns between two different pieces. In the article I read, Ozawa said he was influenced by going to museums and thinking about how paintings are so much more detailed in real life than in their digital replicas. That caused him to think of the physical qualities of the paint as data, which inspired the data visualization project.
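Ozawa’s exact process isn’t described beyond printing out the data that makes up the files, but one plausible reading is a plain text dump of the image file’s bytes, sketched here with a hypothetical file name:

```python
def dump_image_as_text(path, bytes_per_line=32):
    """Print the raw bytes of an image file as hexadecimal text.

    A rough stand-in for displaying the image data itself rather than the picture.
    """
    with open(path, "rb") as f:
        data = f.read()
    lines = []
    for i in range(0, len(data), bytes_per_line):
        chunk = data[i:i + bytes_per_line]
        lines.append(chunk.hex())
    return "\n".join(lines)

# Example (hypothetical file name):
# print(dump_image_as_text("classical_painting.jpg"))
```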