MadelineGannon-Project2-Visualizing the Burst of the Housing Bubble

by madeline @ 2:15 am 9 February 2012

Visualizing the Burst of the Housing Bubble

The impetus for exploring the burst of the 2006 American Housing Bubble came from the personal effect the market collapse had on me and my family. My part of the US (S. Florida) was hit particularly hard, with my hometown (Fort Myers, FL) leading the nation in mortgage defaults… This fiscal disaster has been more damaging than most of the natural disasters that have hit this region in my lifetime, and this data visualization is an attempt to convey the perverse distortion of the home, from nest egg to toxic asset, over the course of the housing market's decline.


[flickr video=6845886229 w=588 h=238]


[flickr video=6846041285 w=570 h=230]


This process video shows the way I manipulated a digital house to reflect the decline / malignant growth of the American housing market from its peak in 2006 to its current index value today. The iterative deformation is based on the S&P/Case-Shiller House Price Index, a national standard for gauging the state of the residential real-estate market. The 20-City composite index shows quarterly values calculated through the volume of repeat sales of single family homes. The second quarter of 2006 held the all-time historic high for the market, the apex of the housing boom, and was followed by 12 straight quarters of collapse. The past 8 quarters have begun to stabilize, and are currently trending around the 2003 index rates.
[vimeo 36455412 w=600&h=400]

The percentage of change from quarter to quarter determines the strength and distribution of the forces from the attractor point. A drastic decline (roughly 9% change) pushes the attractor threshold to its maximum strength, and thus affects a larger number of mesh points with a magnified offset. Less drastic declines (1% – 3%) lower the threshold value to affect more localized points with a weaker offset. For the quarters that show slight growth, the attractor is brought to the centroid of the house to choose the least-modified points and begin to smooth the roughened surface.
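The mapping from quarterly change to attractor behavior can be sketched as follows. The linear ramp and the parameter names are assumptions, since the post fixes only the endpoints (a roughly 9% decline saturates the attractor; small declines act weakly and locally):

```python
def attractor_params(pct_change, max_threshold=1.0, max_offset=1.0):
    """Map a quarterly percent change (e.g. -9.0 for a 9% decline) to an
    attractor threshold and offset strength, both normalized to [0, 1].
    The linear ramp is an assumption; the post only states that a ~9%
    decline is the maximum."""
    decline = max(0.0, -pct_change)      # growth quarters exert no decline force
    t = min(decline / 9.0, 1.0)          # normalize against the worst quarter
    return t * max_threshold, t * max_offset
```

A 9% decline returns the maximum for both parameters; a growth quarter returns zero, corresponding to the smoothing pass at the centroid.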


Screengrabs while exploring form generation/deformation:


by craig @ 2:04 am

For this project I decided to focus on making an information sonification rather than an info visualization. I came across some data for the Billboard Top 10 chart since 1960, containing data for the key and mode (major or minor key) for each song on the chart. I was interested in using this data to develop a generative composition, meandering through history based on the progression of popular music. I unpacked this data in Max/MSP where I generated arpeggios by finding the minor or major 3rd and 5th of each note from my data. I sent this data to Ableton Live to generate audio.
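The interval arithmetic behind those arpeggios is simple: a major third is 4 semitones above the root, a minor third is 3, and a perfect fifth is 7. A minimal sketch, in Python rather than Max/MSP, with MIDI note numbers assumed as the pitch representation:

```python
def triad(root, mode):
    """Return the root, third, and fifth as MIDI pitches.
    Major third = +4 semitones, minor third = +3, perfect fifth = +7."""
    third = root + (4 if mode == "major" else 3)
    return [root, third, root + 7]

# C major (root = MIDI 60) yields C-E-G; C minor yields C-Eb-G
```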

While it was interesting to me to create a rather spooky and minimal piece of generative music using this data, it became apparent that more data could be presented by creating a visual counterpart to the sound. I decided to work with the Google Image Search API to generate images based on the dates that the songs came out. A search query containing “September 6 1963”, for instance, would typically return a magazine cover from that time. I sent the date values from Max/MSP via OSC, along with the artist and song data, which is displayed below the photographs; the photographs fade in with each subsequent chart entry that is encountered.

In the future I hope to find better ways of blending together images, so that they better correlate with the tone of the music. I would like to look into effects that blur the images, and potentially add motion. Also the text could be treated such that it blends properly with whatever is behind it. If anyone has any pointers for how to accomplish this, send them my way!




by eli @ 1:32 am

For this project I chose to visualize the battles of the Civil War. Aside from the fact that I have always loved to study the Civil War, I found this to be an interesting data set because it has so many different dimensions: battle names, geographic information, dates, commanders, casualties, et cetera. The data was scraped using a python script from the website. Thanks to Asim Mittal for all the help with python. I thought at first to visualize the data on a map as a kind of animation of the Civil War, but found that that had actually already been done.
My next intuition was to use a timeline. I used this early Processing sketch as a way to see what the data would look like on a timeline with the battles sized by total casualties:

I liked this direction but decided I had to add a second element to make the project more interesting. I toyed with the idea of an animated chronological visualization where, as a playhead reached a battle, the battle would explode, scattering two colors of particles representing the casualties on each side. Instead I decided to focus on the commander data to try to show where commanders had fought throughout the war. I first drew connections from all commanders to all their battles, but found this much too cluttered, so I introduced a selection mechanism so that only one battle and its set of commanders is in focus at any one time. The user can step through the battles with the right and left arrow keys. Below is the result:

At this point I just added the legend to finalize the piece:

I’m pretty happy with the result. I think it provides some interesting high-level information about when commanders fought and how they fared, and it reveals some instances where commanders faced off in multiple battles. It is also interesting to see how the war escalated and then petered out. The visualization does not, however, give good insight into individual battles. The exact number of casualties is hard to discern, and there is no information on the size of the armies that faced off. The areas where many battles occurred in rapid succession become very difficult to read, and the dense areas do not read as being as dramatic as a single large battle, even though the total casualties may be greater. It might be nice to have a supplemental tool to target a span of time and see the total casualties on each side.

Here is a video of the visualization as I step quickly through the battles of the war:


Here is the live processing applet:
Select the applet and use your left and right arrow keys to step through the battles.


by sankalp @ 1:08 am

What if you could see color?

What if you could do more than just look at color? To perceive it, understand it, analyze it, really see it, color needs to be visualized.

My name is Sankalp; I’m an undergraduate Mathematician + Designer at Carnegie Mellon, and I plan to do just that. The first question we must ask ourselves is: how can we possibly visualize something that is already visual? The answer I found can be summarized in four steps: First, treat color as data. Second, interpret this data as information. Third, graph this information. Fourth, notice patterns in the graphs.

“Treat color as data”

In order to treat color as data, we must quantify the qualitative. To do this, we must reference basic color theory. Every color that we see digitally can be broken down according to the RGB model, which decomposes any blended color into three distinct values for its Red, Green, and Blue components. For example, the color Black has 0 increments of Red, Green, and Blue, and thus has an RGB value of (0,0,0), whereas the color White has 255 increments of Red, Green, and Blue, and thus has an RGB value of (255,255,255).
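As a minimal sketch of this quantification step (the hex-string input format is an assumption; it is simply the most common way digital color values are written down):

```python
def rgb(hexcolor):
    """Split a hex color string into its (R, G, B) components, each 0-255."""
    hexcolor = hexcolor.lstrip("#")
    return tuple(int(hexcolor[i:i + 2], 16) for i in (0, 2, 4))

# rgb("#000000") -> (0, 0, 0) for Black; rgb("#ffffff") -> (255, 255, 255) for White
```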

“Interpret this data as information”

To interpret this data, we need to increase the scope from single colors to palettes. A palette is simply a range of colors used by artists, designers, and other creatives; in design, a palette typically consists of around 5 different colors. These colors are usually brought together for their contribution to a palette’s overall mood or theme. Because of the nature of palettes, designers tend to create various versions in pursuit of a specific quality. In fact, at Kuler, designers from around the world can submit their individual palettes with “tags” relating to the mood or theme they interpreted from the palette. Kuler even has the option to organize the available palettes by “Most Popular” (how many times a palette was downloaded), “Highest Rated” (how many stars out of 5 the palette averaged), and “Newest” (the palettes most recently submitted). This online tool is widely used by artists and designers and will serve as the perfect host for our information visualization.

“Graph this information”

Graphing palette data requires an understanding of some geometry. Consider any color you’d like. This color will have a unique RGB value expressed as (r,g,b), where r, g, and b are integers, distinct or not, from 0 to 255. To continue, we must convert these r, g, b values into coordinates. Luckily, if we frame our values using intermediate coordinate geometry, this conversion comes for free! If our graph has three axes 120˚ apart that share a mutual minimum value of 0 and a maximum value of 255, we can plot the three values as 3 points on the plane. Then we need only generate a triangle with these 3 points as its vertices (an apex, a base-right vertex, and a base-left vertex). The triangle would look similar to this image (without all the economics jargon):
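The conversion can be sketched as follows; the specific axis angles (R pointing up at 90˚, G at -30˚, B at 210˚) are an assumption, since the text fixes only the 120˚ spacing and the shared origin:

```python
import math

def color_triangle(r, g, b):
    """Plot an (r, g, b) value on three axes 120 degrees apart that share
    the origin, returning the triangle's apex, base-right, and base-left
    vertices as (x, y) pairs."""
    axes = (90, -30, 210)  # degrees; 120 degrees apart
    return [(v * math.cos(math.radians(a)), v * math.sin(math.radians(a)))
            for v, a in zip((r, g, b), axes)]
```

Pure white puts the apex 255 units up the R axis and the base vertices 255 units out the G and B axes; pure black collapses the triangle to a point at the origin.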

Now that we can conceptualize a single color, if we expand our vision to an entire palette full of colors, and their overlapping triangles, we can begin to theorize about the shapes that may emerge. However, to fully grasp the effect of the color palette, we must also represent each color as we usually know it: by sight. Where filling each color triangle from each palette with its corresponding RGB color could quickly lead to a cluttered, inaccurate graph, simply setting the stroke color of each triangle to its particular RGB color succeeds in both efficiency and visual simplicity.

To efficiently graph these palettes in a way that was visually effective, I used the Vormplus colorLib library for the Processing language. I also needed access to Kuler’s API, which I requested and received from their API Key Kuler Services page. With these tools, I wrote code that takes the top 21 palettes returned from Kuler for a given search, each an array of 5 RGB color values, and generates the colored triangles described above. Essentially, the program outputs each palette as a series of overlapping, unfilled triangles, placed in sequential order, where the top-left set of triangles represents the 1st palette Kuler returns and the bottom-right set represents the 21st.

“Notice patterns in the graphs”

The following images (and below high quality .pdf’s) are results from several iterations of my program with various search queries:

Most Popular Palettes:

Highest Rated Palettes:

Most Recent Palettes:

Palettes of the Sun:

Palettes of the Moon:

Palettes of Envy:

Palettes of Sex:

Palettes of the 70s:

Palettes of the 80s:
Palettes of the 90s:
With graphs like these, we can finally begin to see color for what it is and, furthermore, what role it plays in a total palette. Given that I have been studying coursework in both Mathematics and Design, this type of visualization is right up my alley. I am really happy with how it turned out and excited to link patterns in color to patterns in geometry. Ultimately, I do plan to keep working with this down the road, and throw in things like Heronian area formulas to sort the palettes by maximum total area, but for now, I’m more than satisfied with the above results. If you would like to see a complete PDF of every generated palette posted above in high resolution, please download it HERE.

This assignment really pushed my skills to their max. A lot of work and a lot of iterations went into getting this program working correctly. But in the end, regardless of how long it took, I learned a lot. Before this assignment, I had no idea what an API was and I had little experience with such large arrays; after this project, I’m very proud of the programming skills I had to pick up quickly given the two-week time constraint. To me, this assignment represents persistence. Had it not been for Golan’s motivational advice a few days before this assignment was due, I would have given up entirely out of stress or intimidation. So thank you, Golan.


Blase- Project 2- The Swype-ification of Password Creation Data

by blase @ 12:06 am 8 February 2012

For the last few weeks, I’ve been collecting data on people creating passwords for a research project. As part of this data collection, I’ve gathered keystroke and timing data for 1,200 people creating a password. I decided to visualize the temporal and spatial elements of this password creation.

Since I’ve been spending a lot of my time lately investigating the passwords people create under different experimental conditions, I thought it would be interesting to understand not just the final product, but the process of creation for these passwords. For instance, are there visual patterns on the keyboard that people employ? These patterns might not be obvious (e.g. 1234), yet these patterns would appear if properly visualized. Furthermore, the thought process that goes into password creation is simply interesting, both from research and aesthetic views, since the user is going through a complex decision-making process in which she creates a password that she believes to be secure, yet memorable.

While the creation of a password in isolation is interesting, the way these visualizations can be viewed in aggregate via small multiples is the most interesting part for me. I created a 4×2 grid of keyboards so that, even on my 12″ laptop’s low-res screen, I could view 8 passwords being created in parallel. With a data set of 1,200 passwords, the way these 8 passwords are chosen is very interesting and crucial. For instance, in the demo video above, I filter to only look at people who used all 10 unique digits on the keyboard in their password. Another metric could be only looking at passwords that are strong in the face of a guessing attack, yet users claim are memorable— are there patterns that seem to assist in creation of memorability? Other metrics for filtering might include everything from the linguistic structure (how pronounceable/”English-y” is the password) to the distance between keystrokes to the extent to which successive keystrokes are found on opposite sides of the keyboard, therefore being typed alternately with the left and right hands.
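The all-ten-digits filter from the demo, and the selection of 8 passwords for the grid, can be sketched like this (the function names are mine, not from the project code):

```python
def uses_all_ten_digits(password):
    """True when every digit 0-9 appears somewhere in the password."""
    return set("0123456789") <= set(password)

def pick_panel(passwords, predicate, n=8):
    """Select the first n passwords matching a filter, one per keyboard
    in the 4x2 grid of small multiples."""
    return [p for p in passwords if predicate(p)][:n]
```

The other filters mentioned (guessability, pronounceability, keystroke distance) would slot in as alternative predicates.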

On a technical note, I implemented my project using Processing. In particular, I used its drawing functions extensively.

As part of my visual technique, I tried to draw inspiration from Minard’s 1869 map of Napoleon’s invasion of Russia. In particular, the way Minard used the thickness of a line, its direction, and its color to each communicate different elements influenced my decision to show the ordering of keystrokes with color (from black to red), the temporal delay between keystrokes in a statically visible way (the thickness of the line), and the direction (which keys were pressed, as well as what the pattern was).
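Those two mappings can be sketched as follows; the two-second delay cap and the 1–10 px thickness range are assumptions, since the post fixes only the black-to-red ramp and the delay-to-thickness idea:

```python
def stroke_style(i, n, dt, max_dt=2.0):
    """Encode keystroke i of n as a black-to-red color, and the delay dt
    (seconds) before the keystroke as line thickness in pixels."""
    red = int(255 * i / max(n - 1, 1))            # first stroke black, last pure red
    thickness = 1 + 9 * min(dt, max_dt) / max_dt  # longer pause, thicker line
    return (red, 0, 0), thickness
```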

I also drew inspiration from Swype, the software for entering words on a smartphone by gliding your finger across the key presses. The idea of concentrating on the spatial layout of the keyboard was first suggested by my peers in our breakout sessions; I was initially concerned with just the temporal element of the text.

I think the underlying code was quite successful; it’s robust in the face of errors in the data and all sorts of user behavior (including deleting passwords partway through). Furthermore, the choice of the multiple dimensions of color, thickness, and location for the line did successfully reveal information that I previously hadn’t understood. However, due to time constraints, I’m not happy with the way filtering works. I’d rather have a database of all 1,200 passwords, along with some filtering “knobs” the observer can turn to choose the 8+ passwords to be visualized concurrently as the observer sees fit. I was a bit stuck on how to best implement this GUI in Processing, along with the amount of time it would have taken to make robust filtering possible. I’m also not convinced that my choice of black->red for color was the most aesthetic choice.

Luke Loeffler – Data Visualization – Portrait of IKEA

by luke @ 9:50 am 7 February 2012

To put the motivations of this project in context, here is one of a series of videos I made recently.

I’m interested in visualizing mass quantities of material, removed from their normal context (carefully curated groups of furniture in homes or the store). Though I do draw some simple analytical conclusions, my interest was more in making a visceral impact.


more screenshots on flickr

The dataset was collected with a custom python spider written using the mechanize and Beautiful Soup libraries to fetch and parse HTML, with urllib to fetch images, and put into a sqlite database for organization. A Photoshop batch script created transparent PNG files from the original JPEGs by removing the white background. A program written in Processing using the OpenCV library extracted the perimeter data and stored the vertices in the database. Additional programs were written to analyze color, size, and language in the dataset. The main visualization/game was written in Processing using the Fisica Box2D library.
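A minimal sketch of the organization step, using Python's built-in sqlite3; the table name and fields here are illustrative assumptions, not the project's actual database layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real spider would use a file on disk
conn.execute("""CREATE TABLE products (
    id INTEGER PRIMARY KEY,
    name TEXT, price REAL, weight_kg REAL, image_url TEXT)""")

def store(product):
    """Insert one scraped product record (a dict built by the HTML parser)."""
    conn.execute(
        "INSERT INTO products (name, price, weight_kg, image_url) VALUES (?, ?, ?, ?)",
        (product["name"], product["price"], product["weight_kg"], product["image_url"]))

store({"name": "BILLY", "price": 59.99, "weight_kg": 30.0,
       "image_url": "http://example.com/billy.jpg"})
total, = conn.execute("SELECT SUM(price) FROM products").fetchone()
```

Aggregate queries over such a table are how figures like the average weight and the buy-one-of-everything total below would be computed.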

Average weight: 24 kg, max weight: 308 kg
$1,401,688 to buy one of everything.


For a while, I will be sharing the dataset itself, which consists of a sqlite db file and approximately 20,000 500×500 images (half original JPG, half transparent PNG).

Joe Medwid – InfoViz

by Joe @ 8:53 am

A Sexual Network Visualization

ClusterF*** is intended to visualize the folk wisdom that, when you sleep with someone, you’re also sleeping with everyone your partner’s partners (and theirs, in turn) have ever slept with. The premise is that we all know our personal history, and we may know the history of our partners, but after that? Things tend to get very fuzzy very quickly. ClusterF*** displays each of these faceless individuals as a single mass of humanity.


Inspiration for this visualization came from many sources, beginning with a link from Patrick’s list of visualization resources to a web tool for tracking your own personal history. This immediately brought to mind an old college buddy who famously kept a personal spreadsheet of exactly the same information. Associations started rolling in, from the excellent data on the OK Cupid Blog to the old Kinsey studies.

Data was gathered from several academic surveys on the sexual activity of persons aged 15 to 44. The full list of data is hilariously extensive, covering nearly every conceivable combination of sexual orientation, type of intercourse, and frequency. As my experience with data scraping is virtually nonexistent, I combined this data into a simple 2D array, drawing only on gender and age information to create the visualization.

As a self-critique, I had originally intended the visualization to be more interactive, along the lines of this excellent New York Times feature on Family size statistics. GUI elements in Processing proved to be a major roadblock, so in the end I kept things as simple as possible.

Code for the project may be downloaded here.

John Brieger — Info Vis – Project 2

by John Brieger @ 8:36 am

For my information visualization, I wanted to engage users with data that might be unpleasant. Since most information visualizations are clean, crisp representations of facts, I wanted to create an interactive experience that forced users to do something bad or unpleasant in order to see the data.


Wouldn't "Slash and Burn" be a great name for a metal band?

The dataset I chose was deforestation in the Amazon from 1987-2011. Originally, I wanted the unpleasant interaction to be chopping down trees, but the chopping made too much of a game out of the data, which both trivialized the presentation and sort of made it “fun”. I wanted users to have to sacrifice something they held dear in order to get more of the data. In my concepting group, we discussed having the visualization unfriend people on Facebook, a “deforestation” of your social network.


Fun Fact: I fucking LOVE unfriending people.  There's almost nothing more satisfying than the feeling you get when you can erase someone from your life forever.

Unfortunately, Burger King pooped on my party. How?



I settled on the idea of mapping the area of the rainforest to the area of your hard drive, with passive use of your computer “deforesting” more and more of your drive. There is something perverse about the gradual deletion of files, a kind of glee I felt as I zipped my mouse around the screen.

I also added a color indicator of your impact on the environment (the background transitions from green to gray).

I think we can agree this looks reasonably shitty. So I changed the color scheme a bit, added a different background info cue (with blocks of green background disappearing in correspondence to HD space), cleaned up the text a bit, and changed the files-deleted count to square kilometers lost. I also got rid of the annoying little graphic of the tree. Another feature I added was that after you move the year the first time, the window locks in place and you’re stuck with the program running.

Given more time, I’d love to program a harder-to-kill/close version of this, as well as make it autodetect the Documents folder for Windows and Mac (which really isn’t that hard, I just didn’t have time). Look at some code!

// John Brieger
// for Golan Levin's
// Data from
import java.awt.MouseInfo;
import java.awt.Point;

PFont font;
ArrayList files;
int numFiles;
int numDeleted;
int delKms;
String path;
int kmsq;
int kmsqmax = 745289;
int oldx;
int oldy;
int percent;
int currentYear;
boolean locked = false;

// Cumulative deforested area (sq km) at which each year after 1987 begins;
// this table replaces the long else-if chain in the original listing.
int[] yearThresholds = {
  376480, 394250, 407980, 419010, 432796, 447692, 462588, 491647,
  509808, 523035, 540418, 557677, 575903, 594068, 615462, 640709,
  668132, 686978, 701087, 712619, 724587, 732051, 739051, 745289
};

void setup() {
  size(300, 120);
  currentYear = 1987;
  path = "C:\\Users\\John\\DocumentsBackup\\";
  font = loadFont("MyriadWebPro-48.vlw");
  files = listFilesRecursive(path);  // helper omitted from the original post
  numDeleted = 0;
  numFiles = files.size();
  println("Loaded "+numFiles+" files.");
  delKms = 41000000/numFiles;
  println("There are "+delKms+" square kilometers of rainforest per file.");
  kmsq = 355430;
}

void draw() {
  int filesRemaining = (41000000-kmsq)/delKms;
  File toDelete = (File) files.get(files.size()-1);
  percent = (4100000-kmsq)*100/4100000;

  // 6 x 17 grid of background blocks; the block-drawing and file-deletion
  // logic was cut from the original post and is elided here.

  // Walk the threshold table to find the year matching the area deforested
  currentYear = 1987;
  for (int i = 0; i < yearThresholds.length; i++) {
    if (kmsq >= yearThresholds[i]) {
      currentYear = 1988 + i;
    }
  }
}

Project 1: Flickr Scrape + Face Averaging

by jonathan @ 8:26 am

For the first “real” project, I initially had a bit of trouble deciding what kinds of data I wanted to obtain. I was pretty set on using Twitter and the Twitter4J library for Processing, entranced by the thought of tapping into the real-time thoughts of Pittsburgh’s Twitter users in relation to the weather. I planned on creating a gradation of color swatches by taking a photograph of the sky, averaging the color, and relating this color to the tweets during that time, exploring the possibility that cloud breaks or the rare rays of sunshine would drastically affect the mood of local Pittsburghers. However, my idea lacked depth, and the topic of weather was pretty much off limits for this project.

Therefore, I sought to explore another form of data visualization: face averaging. From the very beginning, I was somewhat obsessed with the concept of averaging as a means of abstracting data, whether it be the color of the sky or people’s faces. As my final idea, I sought to plug ambiguous words such as “success” and “money” into Flickr, scrape the resulting images, run the images through Face Tracker in OpenFrameworks, and finally average the faces in MatLab.
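The averaging step itself is just a per-pixel mean over aligned images; here it is sketched in numpy rather than MatLab:

```python
import numpy as np

def average_faces(faces):
    """Average a stack of aligned face images (all the same shape);
    returns the per-pixel mean as a float array."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in faces])
    return stack.mean(axis=0)
```

The crucial precondition, which is exactly where the project ran into trouble, is that the faces are correctly detected and aligned before averaging.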

I’ll admit, for a novice Processing user, diving into OpenFrameworks, Face Tracker, and the Flickr API simultaneously was quite daunting. I knew what I wanted to do and the general means of achieving the goal, but the details escaped me. I had never really investigated the C++ programming language before, and I wasn’t really sure how to implement the Flickr API.


Regardless, I dove in head first, writing (or rather copying) a Flickr scraper from the Processing forums.

Once I had my images, I headed into OpenFrameworks and immediately ran into headaches adding Addons in XCode 3 (which Max Hawkins assured me would have disappeared had I already installed XCode 4). Anyway, logically, I knew I had to interrupt the video feed into Face Tracker with my own images, find where the points were being drawn, and store those points in a .txt file I could access later. It’s a lot easier than it sounds.

For every error that appeared, I tried to decode it, either by copying and pasting it into Google (only to be greeted with more mumbo jumbo I didn’t understand) or by constantly pestering Alex Wolfe to lend me a helping hand (which she graciously did). Mostly, however, I struggled on my own, as I really wanted to understand at least the basic foundation of C++ and OpenFrameworks.

There were essentially 3 major parts I had trouble with: reading the .jpg’s sequentially, getting Face Tracker to read the images, and outputting the plotted points to a .txt file. I ran into numerous logic errors, and other errors that came and went… Essentially it was a hack job of me cutting and pasting, typing random classes and types into the .h file and throwing them into the main .cpp file with the hope that it would do something useful without making XCode too mad. Most of the time, I was frustrated by my lack of knowledge of C++, which severely hindered how much I could figure out on my own.

Finally, at 2:00am this morning, Alex and I (mostly Alex) were able to wrangle Face Tracker to do our bidding and export everything very nicely. Unfortunately, once I threw my images into MatLab, things went to hell…again. I had anticipated this somewhat, but I really did not know how to work around it: Face Tracker is not wholly accurate, meaning that around 75% of the time it would not process images correctly, finding faces randomly in the frames or failing to line up with the faces in the image in the first place. Of course this meant that the averaged face would look like nothing remotely human, or even like anything at all.


Sam Lavery – Infoviz

by sam @ 8:08 am


My original idea was to map open and closed wifi routers and compare this to statistics about where old and young people live in the city of Pittsburgh. I found a great wardriving application (WiGLE Wardrive) for my Android phone that allowed me to store data about the locations of wifi networks wherever I went. After a week I had a map of everywhere I had gone, drawn entirely with geocoded wifi networks. I had uncovered over 4,000 unique signals, and I began to realize that what was truly interesting about this data was not so much whether people had a password or not, but rather how people had named their routers.

To visualize this data I first used TileMill, a great open-source GIS program that uses a markup language similar to CSS to style data. This was great because it easily read my data and arranged the network names so that they did not intersect. However, this method lacked interactivity, so I wrote a Processing sketch using Till Nagel’s Unfolding library. What I really like about interactive visualizations is how they can capture people’s time and attention. I want people to be able to experience the data I have found in their own way, whether that is trying to find their street by searching for routers they know of, or looking for the most vulgar router names they can think of.

I’m a little disappointed in the final presentation. This is the first interactive program I have written, so I am really happy that I managed to make everything work to some extent. However, I feel that with a little more time/work I could have improved the visual appearance and user experience. The biggest problem currently is the overlapping of names in areas where multiple routers were found at the same lat/long. I tried to get around this by drawing the queried names on top of the other names in a different color. It’s still hard to see in some areas, so in a future iteration I would try to improve this further.

mapping wifi from Sam Lavery on Vimeo.

Pictures from Processing applet

Wifi names mapped in TileMill


//IACD Project 2
//mapping wifi
//sam lavery
//unfolding library
import processing.opengl.*;
import codeanticode.glgraphics.*;
import de.fhpotsdam.unfolding.*;
import de.fhpotsdam.unfolding.geo.*;
import de.fhpotsdam.unfolding.utils.*;
//library for textfield
import controlP5.*;

de.fhpotsdam.unfolding.Map map;

ControlP5 controlP5;
String textValue = "";
Textfield myTextfield;

//library for reading xls file
XlsReader reader;

//arrays for wifi names and coordinates
int columnlength = 500;
String[] wifiname = new String[columnlength];
float[] lat = new float[columnlength];
float[] lng = new float[columnlength];

public void setup() {
  size(1400, 800, GLConstants.GLGRAPHICS);
  map = new de.fhpotsdam.unfolding.Map(this);
  map.panTo(new Location(40.433, -79.928));
  MapUtils.createDefaultEventDispatcher(this, map);

  //textfield setup
  controlP5 = new ControlP5(this);
  myTextfield = controlP5.addTextfield("enter text", 0, 0, 200, 20);

  //read the WiGLE export and fill the arrays
  reader = new XlsReader(this, "WigleWifi.xls");
  for (int i = 0; i < columnlength-1; i++) {
    wifiname[i] = reader.getString(i+1, 0);
    lat[i] = reader.getFloat(i+1, 1);
    lng[i] = reader.getFloat(i+1, 2);
  }
}

public void draw() {
  map.draw();
  // Draw every network name at the screen position of its geo-location
  for (int i = 0; i < columnlength-1; i++) {
    Location location = new Location(lat[i], lng[i]);
    float[] xylocation = map.getScreenPositionFromLocation(location);
    text(wifiname[i], xylocation[0], xylocation[1]);
  }
  text(myTextfield.getText(), 200, 200);

  // Redraw names matching the search query on top, in a highlight color
  fill(255, 0, 0);
  for (int i = 0; i < columnlength-1; i++) {
    if (wifiname[i].indexOf(myTextfield.getText()) >= 0) {
      Location location = new Location(lat[i], lng[i]);
      float[] xylocation = map.getScreenPositionFromLocation(location);
      text(wifiname[i], xylocation[0], xylocation[1]);
    }
  }
  fill(255);
}

void controlEvent(ControlEvent theEvent) {
  println("controlEvent: accessing a string from controller '"
    + theEvent.controller().name() + "': " + theEvent.controller().stringValue());
}
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2023 Interactive Art and Computational Design, Spring 2012 | powered by WordPress with Barecity