The goal of this project is to create a heat map of attractive and scenic places for Google Maps. You can find out more about this project here.
For the first step, I developed a JavaScript-based web page that combines Google Maps with an embedded Processing applet. The user can navigate the map with the usual Google Maps controls, while JavaScript code passes the map’s boundary coordinates to the applet. The applet uses the boundary data to get the number of pictures uploaded to http://www.panoramio.com/ that are tagged as lying within those coordinates. It divides the map into sub-maps based on a “Precision” value, controllable with the “Precision” scroll bar at the bottom of the page, and fetches the picture count for each sub-map. The applet gets each count by querying the following string:
in which minX, maxX, minY, and maxY are the boundary coordinates. The query returns a JSON file containing a field called count, which gives the number of pictures within those coordinates. From each sub-map’s count and the maximum count among all sub-maps, an array of transparency values is created. JavaScript reads this array from the applet, builds a heat map for the current view, and displays it on top of the map.
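The query string itself didn’t survive in this post, so here is a minimal Processing sketch of the kind of call described; the endpoint and parameter names are reconstructed from the old Panoramio API and should be treated as my assumption, not the project’s actual code:

```java
// Hedged sketch (not the project's code): fetch the number of Panoramio
// photos inside one sub-map's bounding box and read the "count" field.
int photoCount(float minX, float minY, float maxX, float maxY) {
  String url = "http://www.panoramio.com/map/get_panoramas.php"
             + "?set=public&from=0&to=0"
             + "&minx=" + minX + "&miny=" + minY
             + "&maxx=" + maxX + "&maxy=" + maxY;
  JSONObject response = loadJSONObject(url);  // parse the JSON reply
  return response.getInt("count");            // photo count for this cell
}
```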
The applet can create two different kinds of heat map: a red transparent variant and a color-spectrum map. The user can switch between the two modes by pressing the “Mode” button, control the heat map’s overall transparency with the “Transparency” scroll bar, and filter out less attractive places by increasing the “Popularity” value.
Currently, two main problems stop the web page from working:
1- JavaScript cannot get the transparency array from the applet.
2- Generating the transparency array has O(n^2) runtime complexity, which makes it very slow for “Precision” values above 10.
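To make the second problem concrete: a “Precision” of n presumably yields an n×n grid of sub-maps, so something like the following loop issues n² network requests, one per cell (a sketch under that assumption; photoCount() is from the sketch above, and cellMinX() etc. are hypothetical helpers):

```java
// n*n sub-maps means n*n HTTP round-trips: quadratic in "Precision".
for (int i = 0; i < precision; i++) {
  for (int j = 0; j < precision; j++) {
    counts[i][j] = photoCount(cellMinX(i), cellMinY(j),
                              cellMaxX(i), cellMaxY(j));
  }
}
```

Caching counts for cells that have already been fetched, or batching requests, would be the obvious places to claw back speed.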
The shiny box above is “interactive art” that literally sells itself. You buy it on eBay, and when you get it, you plug an Ethernet cable into it, and it puts itself back up for sale on eBay. It comes with an interesting contract and list of rules, which can be read at its eBay listing, along with an instruction manual for cleaning and maintenance.
I think it would be interesting to see who’s been buying it, how many hands it has passed through, and how much it sells for each time. I did find out that, apparently, it recently put itself online for a starting bid of $2,500 and is currently on sale for $4,250.
I’ve always enjoyed computational papercraft projects for the dichotomy they create. Paper folding is one of the most traditional art techniques and 3D modeling one of the most cutting-edge, but both must be present in a good papercraft project. The combination makes for some charming projects that make us wonder what things in our virtual world might look like IRL.
Above I linked to a great papercraft profile by Bert Simons, an artist who has been converting 3D models of people’s heads into real-life paper models. The results are convincingly realistic and can even be worn as a mask!
Simons did his paper work in a piece of software called Pepakura Designer, which converts 3D models from various popular programs into paper cutouts (or laser-cut-outs) that can be glued together to create photorealistic models. There is a gallery of Pepakura-generated projects on the software’s website.
Others have had success creating paper models using the open source modeling software Blender.
Another (perhaps more politically charged) papercraft project is this paper AK-47 created by designer Martin Postler. I can’t find many details about how the model was constructed, so I don’t know if it was computer-generated or created by hand. One thing that this craft’s designer pointed out that hadn’t occurred to me is how paper-based models can take advantage of paper’s inherent malleability. People can customize their weapon by drawing on it, adding stickers, rearranging the pieces, or even burning it.
Update: I just found the company website for the designers who made this AK-47. It looks like it is computer-based. Check out their website for some other cool computer art projects.
As 3D printing becomes ever cheaper, many expect that someday a 3D printer will be as common and accessible as an office laser printer is today. Until then, however, it would be interesting if we could use our already-existing paper printing abilities to make interesting projects like these. There’s something special about seeing the virtual world become reality through 3D sculpture, and these paper products look like a great low-cost way to break that barrier.
Front Design, a Swedish furniture company, is pioneering a new technology they named “Sketch Furniture”, also called “gestural furniture design”. They use motion capture to record pen strokes made in 3D and then use rapid prototyping technology to actually create furniture pieces from the motion capture data. This project struck me as really interesting because I have seen a lot of interactive art and computational design work, but I’ve never seen anyone attempt anything with furniture design. I think the concept is very interesting and adds value to the final pieces (which in my opinion are interesting to look at but not particularly beautiful).

I also appreciate them making something ‘useful’; too much interactive art and design, in my opinion, is interesting and perhaps beautiful but has absolutely no purpose. I could see this technology being developed to help furniture designers test out their ideas in 3D, allowing them to develop their concepts while thinking spatially rather than having to make 2D sketches of something that will live in 3D space. I do think designers will become even more empowered if we’re not always stuck working on paper or a 2D screen, but can actually develop our ideas in 3D space.
It’s an interactive wall installation with many dots; depending on how long a person stands by the dots, they dilate accordingly. Here’s another picture that kind of shows the dilation process:
If you couldn’t tell…I’m having a hard time deciding on what exactly I should do for Project 3. So, I’ve been looking at various art pieces for inspiration. 🙂 I definitely think I want to incorporate sound into my third project though.
This project involves the fabrication of ‘polluted’ icicles based on the amount of traffic passing nearby. Pure water trickles out from a container at regular intervals, while passing vehicles cause darkened water to be dispensed, polluting the icicle. The project explores how we can visualize information through physical fabrication, and it also hopes to raise questions about how our presence impacts nature.
Background
Being a new resident of Pittsburgh and experiencing a lot of snow for the first time in my life, I was struck by how beautiful fresh white snow can appear. I was also impressed by the beauty and temporal nature of the icicles that formed in the cold weather. At the same time, I noticed how quickly the snow darkened at the side of the road as vehicles passed by.
For this project I was interested in exploring how fabrication could be used as a form of visualization. Digital fabrication usually means creating some kind of digital model and then outputting it to a given fabrication device such as a 3D printer, milling machine, or laser cutter. In general, I am interested in exploring how we can ‘unify’ this process so that realtime factors contribute to the fabrication, i.e. the input more closely affects the output. An icicle, for example, forms incrementally as water flows and then refreezes, repeatedly. Nature has no prior design for the icicle; rather, its form is derived from the process. This project is one small step towards exploring this bigger idea.
Some Related Artworks
Marco Evaristti, Ice Cube Project, 2004. From Wikipedia:
With two icebreakers and a twenty-man crew, Evaristti used three fire hoses and 3,000 litres (790 US gallons) of paint to color the iceberg blood-red. The artist commented that, “We all have a need to decorate Mother Nature because it belongs to all us.”
Although a very simple idea, it immediately communicates a powerful ‘open’ message. It could be a commentary on whaling, on environmentalism in general, or even a war protest. The juxtaposition of blood red with pure white is strikingly out of place.
We have developed the device which “invites/induces” natural icicles. Our device is composed of two parts- a pot and a string. A pot can preserve snows, control ices, and make a drop of water. A drop of water freezes again by an electronic-controlled cold string during trickling down. The accumulation of iced strings grows up to a big icicle.
8 LEDs which is attached to a string light up an icicle from inside. Light flickers like virtual drops, and generates new optical phenomena. Real water drops generate poly-rhythmic percussive sounds.
Using the above technique, they can create icicles artificially regardless of the environment. What is really beautiful, especially in the top image below, is the way the icicles have been illuminated. They don’t treat this work as a ‘fabrication’ experiment, but rather as an exploration of the aesthetics of icicles.
Prototype
Here is an overview of how the prototype works.
The water is released using a gravity-feed solenoid valve. Its resting state is closed, so turning an Arduino pin HIGH for 100 ms or so quickly opens and closes it, releasing a dribble of water.
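As a rough illustration, that pulse could be produced by an Arduino sketch like the one below; the pin number and the interval between drips are my assumptions, not values from the project:

```cpp
const int VALVE_PIN = 9;          // pin driving the transistor (assumed)

void setup() {
  pinMode(VALVE_PIN, OUTPUT);
}

void loop() {
  digitalWrite(VALVE_PIN, HIGH);  // energize the solenoid: valve opens
  delay(100);                     // ~100 ms, per the description above
  digitalWrite(VALVE_PIN, LOW);   // de-energize: valve snaps shut
  delay(30000);                   // wait between drips (interval assumed)
}
```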
Here you can see the two valves attached to the bottom of the water containers.
The solenoid valves are controlled by a very simple circuit: a Boarduino and some transistors to deliver the 12VDC to the solenoid.
A photo-resistor is used to detect cars passing by. Since the icicles are made overnight when the temperature drops, it is easy to detect the jump in light level when a vehicle passes with its headlights on.
To get an idea of how many vehicles were passing by on my street, I made a small program to record the time and light-level reading overnight.
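A minimal version of that logger might look like this; the voltage-divider wiring on an analog pin and the one-second sample rate are my assumptions:

```cpp
const int LDR_PIN = A0;             // photo-resistor divider (assumed wiring)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int light = analogRead(LDR_PIN);  // 0-1023 light-level reading
  Serial.print(millis());           // time since power-on, in ms
  Serial.print(",");
  Serial.println(light);            // one CSV row per second: time,light
  delay(1000);
}
```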
I live on a reasonably quiet street, so there is not a constant stream of vehicles going by. You can see that between 2am and 5am it is pretty quiet, but from 6am it really picks up as people start their day. I also recorded the temperature on a given night and was surprised at how little it changed. Granted, the values below are rounded to the nearest integer, but I still expected a more dramatic change.
From the image and movie below you get an idea of how the icicle would form from the dripping water in the right weather conditions.
Trials
The next stage was to run a trial and try to ‘fabricate’ an icicle. Surprisingly, there is such a thing as black food coloring, which serves the purpose of polluting the icicle every time a vehicle passes by.
When you drop some full strength coloring into the water it creates an interesting effect.
Here are the containers, one clear and one dirty.
As reliably cold as Pittsburgh’s weather was over the first half of January, it appears that we are now experiencing a ‘warm patch’. After leaving the device out overnight, I awoke to find nothing but a puddle of dirty water. The temperature was supposed to come down to -2°C, but the lowest reported temperature I noticed was 3°C. If anything, it is clear that the sensing/actuating device is working, as the water has been polluted by the vehicles going by.
Reflection
After several more attempts, Pittsburgh’s weather just hasn’t been cold enough to go below freezing. Unfortunately, I am away at a conference for the last week of this project, so it is frustrating to see that it is going to dip below freezing during that time!
I have several improvements I would like to make to the device, thanks to suggestions from people in class. First, curve the point where the drips form towards the vertical; this should slow down the drops and allow them to freeze more easily. Second, attach some sort of freezer-made piece of ice to the dripping tip in order to get the icicle started. I am hoping to try these out when I get back.
The most disappointing thing about not being able to form the icicles is not being able to see how the ‘data’ would form a visualization. For example, would small black patches form? Or would layers of clear and dark ice build up over time? These patterns would be interesting to observe, and to try to accentuate by varying the mixtures and release rates.
This week, the User Interface Design group at MIT* got some buzz for their work on a new end-user programming tool, Sikuli. End-user programming itself is hardly a new concept, but the levels to which this paper succeeds – letting people merely visually identify and “select” parts of any open website or application in order to command its actions – pulls up larger questions about the nature of art and design. Who makes art, art? Is it fluency in a medium? Creativity? Or some concept even more amorphous and impossible to talk about?
Projects like this make it clear that programming is what typing was some twenty years ago – a skill that is hardly understood, and mastered by “a chosen few”, but clearly needed by large sections of society.
When I was first taught to type in elementary school, I was taught very formal methods. I was told to keep my fingers on the home row, to keep a proper “posture”, and to never look down at my fingers. (Heck, they even stuck a box over my hands so I couldn’t see.) Only then, my teachers surmised, could I achieve the typing speed of “the typists”, true members of the New Economy, who could type at the lightning-fast speed of 100 words per minute. So much emphasis was placed on raw speed that even my Mario Teaches Typing game kept a constant numerical score of my typing speed in the upper corner, making it clear even to children that faster was always better.
As it turns out, I didn’t really listen. (I didn’t listen to much back then, but that’s another story.) The years passed, I grew up typing all funny, and I entered “the New Economy” with a typing speed less than half that of the “typists” of the ’80s. Yet somehow, my typing is still enough to get me through society, as typing became a critical part of everything I did, at a variety of speeds. Even my mother, who didn’t take a formal typing class and was not “chosen” from youth to learn the home-row method, picked up typing as well, in her own unique way: starting one finger at a time and learning via muscle memory a technique barely half as fast as mine (but still useful for her needs).
So, if you believe my rant above, it would seem that in 30 years, people would be “programming” in many different ways. Maybe not in the syntax that we call programming today, but typists traveling in time from the 1980’s would be amazed at what we do today with a few, slow keystrokes and the click of a mouse.
So, let’s tie this back into art and design. There’s a running idea that “everyone is a designer”, and that given the right toolkits, people will be able to design themselves usable and meaningful experiences. This idea is right, in many ways – and end-user programming work like this will allow truly creative people without “programming skill” to, well, program things no one has ever dreamed of. At the same time, there’s a distinct implication throughout the *practice* of art and design that progress comes from mastering new and unusual *mediums* through which to express your ideas – be it the repositioning of atoms with small laser beams, or controlling a 10-foot tall, six-axis welding robot. When people are asked if they are designers, they often answer in terms of mediums – saying “Oh, I can’t draw”, or “I’m more of a back-end programmer” (referring to HTML and CSS as a front-end, “designery” programming medium.) I don’t think an average person would say “I’m not a designer because I’m not creative” – indeed, American society encourages creativity in all aspects of life, in line with our individualistic (“cowboy?”) values. I might even go so far as to say that all Americans are somewhat creative (assuming they were encouraged to color in kindergarten.) But are all Americans artists or designers?
Suppose I work for UPS. If an off-the-shelf tool like Sikuli allows me to quickly create a version of “Flight Patterns”, but tracking UPS trucks for a UPS marketing campaign instead, am I still making art or design? (Let’s assume for simplicity that no “plagiarism” is occurring; I give due credit to the original “Flight Patterns” creator, who is in turn fine with me creating it.) To viewers of my piece unfamiliar with “Flight Patterns”, my UPS marketing campaign might look totally original. In that sense, it would have just as much value as an art piece in terms of getting people to think (perhaps even more so, with its massive outreach). In that sense, the project is still “art” in that it might let people see something for the first time. Yet there’s a distinct sense that, in this scenario, such a project would be uninteresting, or, worse, part of “marketing”, design’s bastard younger cousin. Now, let’s say that UPS built this same project but gave it a slight spin: they occasionally zoomed in on Kentucky so people could watch the rhythm of trucks around their international air hub in Louisville. Here, UPS has embodied their design with an insight that “Flight Patterns” could never have. But is just bringing new information into an existing visualization enough to make it an act of creative design? How about art? Is an application interface hand-coded in Intel-chip assembly, or sewn together from tin cans, somehow more artistic or designery than a version written in PyGTK in 3 days for an anxious client?
At least today, it’s impossible for me to really answer these questions in any satisfying way. All I know for sure is that, using just Processing (a tiny subset of Java, a clunky programming language which is itself nearly 15 years old), our class of students was able to reproduce the earliest 1963 digital artworks of Michael Noll in approximately 15 minutes each. As better and better end-user programming tools find their way into the hands of artists and designers, and perhaps wanna-be artists and designers, it is inevitable that these tools will be created for new mediums even faster than artists can “master” them. At that point, is everyone a designer, or is no one a designer? I’m rooting for the former.
*Yes, I have unavoidable biases on anything coming out of MIT. So it goes.
I was just playing around with Google Trends and thought I would share a few things. First, with such a deep history of data, it is pretty easy to determine which trends are seasonal:
Blue: beach; Green: ski; Yellow: snow; Red: rafting
The second little bit of information is that all “trends” can be downloaded as CSVs, making all of this data easy to parse.
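As a quick illustration, here is how one of those downloaded CSVs could be pulled apart in Processing; the file name and column layout are assumptions, since Trends exports vary:

```java
// Hedged sketch: read a Google Trends CSV export and split its columns.
String[] rows = loadStrings("trends.csv");   // assumed file name
for (String row : rows) {
  String[] cols = split(row, ',');
  // e.g. cols[0] = week, cols[1..n] = interest values for each term
  println(cols[0]);
}
```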
The purpose of Data.gov is to increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government.
As a priority Open Government Initiative for President Obama’s administration, Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets. The data catalogs will continue to grow as datasets are added. Federal, Executive Branch data are included in the first version of Data.gov.
Participatory Democracy
Public participation and collaboration will be one of the keys to the success of Data.gov. Data.gov enables the public to participate in government by providing downloadable Federal datasets to build applications, conduct analyses, and perform research. Data.gov will continue to improve based on feedback, comments, and recommendations from the public and therefore we encourage individuals to suggest datasets they’d like to see, rate and comment on current datasets, and suggest ways to improve the site.
We are interested in hearing your ideas and comments about Data.gov. Please contact us and share them.
Goal
A primary goal of Data.gov is to improve access to Federal data and expand creative use of those data beyond the walls of government by encouraging innovative ideas (e.g., web applications). Data.gov strives to make government more transparent and is committed to creating an unprecedented level of openness in Government. The openness derived from Data.gov will strengthen our Nation’s democracy and promote efficiency and effectiveness in Government.
For my project I will create an interactive display containing videos of people perceiving a minute (without time keeping), where their spatial arrangement is based upon personal background information. When an individual’s minute is up, their video disappears from the screen. This is a continuation of a project I started a while ago, so I already have a database of about 20 people perceiving a minute plus their background information. The next step will be creating an interactive visualization of the data in openFrameworks in which the user can select the background information by which they would like to arrange the videos. This will hopefully reveal interesting correlations between a person’s background (such as their age, whether they drink coffee, and happiness) and their perception of the length of a minute.
There are many visualizations of the diversity of life on Earth, but most are concerned only with the evolutionary relationships between species. In the vein of Martin Wattenberg’s Color Code, this visualization attempts to give an idea of the color of life by extracting colors from Google image searches for the name of each species in the NCBI taxonomy database.
Colors
The color of each species is determined by the simple color average of the first five “photo content” hits on Google Images, using the scientific name of the species as the query string. (source here)
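That straight average could be computed along the lines of the Processing sketch below, assuming the five hit images have already been downloaded locally; it is an illustration of the idea, not the project’s code (the source link above has that):

```java
// Average every pixel of every sampled image into a single color.
color averageColor(String[] imagePaths) {
  float r = 0, g = 0, b = 0;
  int n = 0;
  for (String path : imagePaths) {
    PImage img = loadImage(path);
    img.loadPixels();
    for (int c : img.pixels) {
      r += red(c);
      g += green(c);
      b += blue(c);
      n++;
    }
  }
  return color(r / n, g / n, b / n);  // the "straight average"
}
```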
This color usually tells you something about the environment in which the species lives, and patterns begin to emerge when comparing different types of life. For example, bacteria are pink or blue because of the stains biologists use to observe them under the microscope. Animal colors tend to be browner than those of bacteria. Example color output from 100 random species in the database is available here. The data has not yet been processed to boost saturation.
Unfortunately there are some problems with this technique:
Straight averaging produces dull colors and often tells you more about the background than the subject
Some species don’t have any images available to sample from
I’m hoping to solve the first problem by using a more sophisticated sampling technique. A color quantization algorithm will help identify the dominant colors in an image. After quantizing and sampling for the most frequent color, I will increase the saturation to make the colors more visually interesting.
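One way to realize that plan is sketched in Processing below; the bin size (4 levels per channel) and the 1.5x saturation boost are arbitrary choices of mine, not settled parts of the project:

```java
// Quantize each channel, pick the most frequent bin as the dominant
// color, then boost its saturation in HSB space.
color dominantColor(PImage img) {
  HashMap<Integer, Integer> bins = new HashMap<Integer, Integer>();
  img.loadPixels();
  for (int c : img.pixels) {
    int key = color((int) red(c)   / 64 * 64,    // 4 levels per channel
                    (int) green(c) / 64 * 64,
                    (int) blue(c)  / 64 * 64);
    Integer seen = bins.get(key);
    bins.put(key, seen == null ? 1 : seen + 1);
  }
  int dominant = 0;
  int bestCount = -1;
  for (Map.Entry<Integer, Integer> e : bins.entrySet()) {
    if (e.getValue() > bestCount) {
      dominant = e.getKey();
      bestCount = e.getValue();
    }
  }
  colorMode(HSB, 255);
  color boosted = color(hue(dominant),
                        min(255, saturation(dominant) * 1.5f),
                        brightness(dominant));
  colorMode(RGB, 255);
  return boosted;
}
```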
Even a saturated version of the straight-averaged colors (shown in the above article photo) makes the colors more visually interesting and highlights significant differences between species.
Nodes
I was hoping to make a more sophisticated tree visualization (like a phylotree) but got bogged down trying to interface with my database (it didn’t happen), so my result isn’t very interesting.
The idea is that each leaf node (species) has a color from the Google search. Each species belongs to a genus. If species share a genus, the genus node’s color is the average of the colors of the species under it. That way it’s easy to see when related species are similar in color.
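In code, that bottom-up rule could look like the following Processing sketch; the Node class is illustrative rather than taken from my actual implementation:

```java
// An internal node's color is the average of its children's resolved
// colors, computed recursively from the leaves up.
class Node {
  color c;                                        // leaf: color from Google
  ArrayList<Node> children = new ArrayList<Node>();

  color resolveColor() {
    if (children.isEmpty()) return c;             // leaves keep their own color
    float r = 0, g = 0, b = 0;
    for (Node child : children) {
      color cc = child.resolveColor();            // recurse down the tree
      r += red(cc);
      g += green(cc);
      b += blue(cc);
    }
    int n = children.size();
    c = color(r / n, g / n, b / n);               // average of the children
    return c;
  }
}
```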
I can imagine tracing down branches and visually being able to detect shifts in skin color or environment when the color of the nodes changes.
The final product, as mentioned, isn’t very impressive. I wasn’t able to export data from my graph database without exporting the entire 400,000+ node tree of life, so I had to enter the values by hand. I imported the genus Pan (chimpanzees) into a JavaScript-based graph visualization framework. Here are the results.
Given a little more effort, I think I could get the information out of my database and into something like Processing or openFrameworks and make it look more impressive. In the immortal words of CSI: Miami’s Horatio Caine, “I need… more time.”
So, for Project 1, I have decided to collect information and display it in an interactive scatter-like graph. I chose to collect the data myself, and so far I have 66 results. I’m bribing people at the UC with cookies on Wednesday (3:30-5:30) to get some more from CMU. However, I am pleased to say that I used Mechanical Turk to obtain most of my results, so half of my participants actually came from outside the US, which is pretty cool.
The idea behind this project is (as the title says) to see how different cultures interpret the meanings of words via color. I provide eight words and have each participant pick one of eight colors for each word, whichever he/she thinks best describes it. Then I have them briefly explain why they chose those colors.
First, I included “obvious” words, like Anger (which, as expected, most people described as red) and Jealousy (which most people from the US and European countries described as green). Then I included some trickier words, such as Happiness and Confidence.
Here is a prototype. For some reason, Adobe Reader is not happy with the font, so here’s what the words say:
On the Y-axis, from top to bottom: Anger, Sadness, Jealousy, Happiness, Fear, Confidence, Beauty, Truth.
On the X-axis, from left to right: Red, Orange, Yellow, Green, Blue, Purple, Black, White
This graph is a prototype, but it basically shows how my final project’s graph will look (without the interactivity). The data used in the prototype come from 20 Mechanical Turk participants.
In the final version, you will be able to select a filter (at the top right; you can filter by gender, age range, and country of origin), and the graph will morph into a new graph with the filter applied. There will also be a random comment from a participant displayed, depending on which filter is set (I denoted where the comments will be displayed in the prototype).
I guess the issue I’m running into is whether or not people will be able to understand the data. The graph with the pretty circles and font looks really nice, but I’m wondering if people will be able to understand it. Any comments are appreciated! 🙂
This guy has a really cool data set of over 5,000 “active” Git repositories. He has pretty much done the first three of Ben Fry’s steps for making an info-vis: acquire, parse, and filter. He has produced some basic statistics on the data, but there’s probably a lot more interesting information hiding in there!
My favorite is the “Number of swear words per 1000 commits by language.” I remember an old JavaScript/PHP web app I made where I didn’t have a dev environment set up, so I had to commit to the actual server to see results. Every time I had to debug something, I’d end up with 50 or so commits just on that issue… many of the commit comments were filled with cursing 🙂
In tune with some of what has been shown in class, this is another project with a simple premise and meaningful implications. Presented in heat-map form, it attempts to show the degree of “remoteness” of every location, calculated as the time it would take to travel (using conventional methods) from any point on the map to the nearest city of at least 50,000 people.
At a glance, it shows how mobile the world has become, and to me, the reds and yellows emphasize the dominance that humanity has over the planet. (Perhaps an interesting side note: in the comments about this image, people disagree over whether this is a good or bad thing.) It also reveals where people have not established themselves, showcasing the most inaccessible places on Earth. I was surprised by the remoteness of Tibet.
This project seems very interesting: as you move through the art piece, the images and sounds played on the multiple projectors change based on your movements.
It’s very similar to what I would be interested in doing for Project 2, and it has given me a lot of ideas!
I really wish I could actually go and see the piece. The idea of a cacophony, or a mixture of sounds you would not necessarily want to hear together, is very intriguing to me…especially if the cacophony consists of people’s voices. I can imagine the piece is very eerie if seen in person.
Aharef has an interesting project going on here, which interprets popular websites as graphs.
Unlike a lot of other website-visualization projects I’ve seen (where the nodes are web pages, and an edge exists between two nodes if one page links to the other), this project’s graphs represent HTML tags as nodes, with edges created by the hierarchical nesting of the tags. For instance, if an image tag sits inside a link tag, there is an edge connecting those two tags’ respective nodes.
Aharef colored the nodes based on the type of tag each node represents; for instance, dark blue nodes represent link tags and red nodes represent table tags. The piece was written in Processing and actually runs in real time: you type in a website, and it fetches the site’s source code and computes the graph.
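To give a sense of how such a graph can be derived from a page’s source, here is a rough Processing sketch; a real implementation needs a proper HTML parser (void tags like <br> would confuse this one), so the regex scan below is only illustrative:

```java
// Scan tags in document order; each opening tag gets an edge to the tag
// currently on top of the stack (its parent in the nesting hierarchy).
ArrayList<String[]> tagEdges(String html) {
  ArrayList<String[]> edges = new ArrayList<String[]>();
  java.util.ArrayDeque<String> stack = new java.util.ArrayDeque<String>();
  java.util.regex.Matcher m =
      java.util.regex.Pattern.compile("</?([a-zA-Z0-9]+)[^>]*>").matcher(html);
  while (m.find()) {
    String tag = m.group(1).toLowerCase();
    if (m.group().startsWith("</")) {                   // closing tag
      if (!stack.isEmpty()) stack.pop();
    } else {
      if (!stack.isEmpty()) {
        edges.add(new String[] { stack.peek(), tag });  // parent -> child
      }
      if (!m.group().endsWith("/>")) {                  // skip self-closing tags
        stack.push(tag);
      }
    }
  }
  return edges;
}
```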
One of the interesting depictions on the site is of boingboing.net:
You can see that the website has one large tag containing many, many other tags.
I still code in HTML, so it was interesting for me to see these pictures because I can really see how the website designer structured the site. I wonder, though, whether someone who was not familiar with HTML would find them interesting (besides the project’s aesthetic nature; the graphs themselves are very nicely laid out and colored).
Although I’m extremely interested in data visualization, I usually don’t discover anything through it that’s actually useful, i.e., that changes the way I live. However, this project, a bar next to the shower that lights up one bulb for each 5 liters of water used, is something that could really have an impact on me and, as a result, the world. I do have a bad habit of taking long showers, often because I lose track of time since I’m just waking up. This project is a great way to communicate this type of information (I’m blind without my glasses, but the lights would be easy to see), and it’s also more accurate than other methods I’ve devised (like watching the clock, which isn’t an accurate measure of water consumption AND requires me to step out of the shower). Plus the thing looks cool.