Nir Rachmel | InfoVis

by nir @ 7:51 am 7 February 2012

Location based, time based, dynamic mood board

Inspired by the Instagram photo streams of several friends, I thought it would be interesting to display an aggregation of photos from users that can be associated with the same time and place. This can provide an alternative to the official, polished photographs we get on a daily basis from endless sources. While many people share their Instagram photos, these are often actually quite private moments. The fact that people share their photos anyway triggered my idea: it would be cool if people could search this database in a way they couldn't until now (Instagram doesn't offer such an interface).

Some example Instagram photos:




In order to obtain the data, I had to dig into Instagram's API and understand how to make authenticated queries. They also limit the number of photographs returned per query (around 40), so I had to break queries down in the code into several smaller ones in order to retrieve the maximum number of pictures I could.
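The query-splitting step boils down to a cursor loop over capped pages. A minimal sketch, assuming a paginated endpoint; `fetch_page` and its `max_id` parameter are placeholders for whatever authenticated call and pagination cursor the real API exposes:

```python
def fetch_all(fetch_page, page_size=40):
    """Collect as many photos as possible by issuing repeated capped queries.

    fetch_page(max_id, count) stands in for one authenticated API call;
    it returns a list of photo dicts, each with an 'id', newest first.
    """
    photos = []
    max_id = None  # cursor: only fetch photos older than this id
    while True:
        page = fetch_page(max_id=max_id, count=page_size)
        if not page:
            break
        photos.extend(page)
        max_id = page[-1]["id"]  # next query starts after the oldest photo seen
        if len(page) < page_size:
            break  # a short page means we reached the end
    return photos
```

Each iteration trades one capped request for the next slice of older photos until a short or empty page signals the end.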

Last, in an attempt to put some order in the chaos the photos create as they appear and re-appear, I used openCV to try and analyze the photos. What interested me was differentiating photos that have people's faces in them from photos of scenery. As can be seen in the example below, openCV does a fair job in these situations, but it is far from perfect. In general, photos on the left-hand side of the screen are the ones classified as being without faces.
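The faces-versus-scenery split amounts to a small sorting routine around a detector. A sketch, with the detector injected as a callback; in the actual project that role would be played by an OpenCV cascade classifier (e.g. `cv2.CascadeClassifier` with a frontal-face Haar cascade), which is not assumed here:

```python
def split_by_faces(photos, detect_faces):
    """Partition photos into (portraits, scenery).

    detect_faces(photo) stands in for an OpenCV call such as
    cascade.detectMultiScale(gray_image); it returns a list of
    face bounding boxes, empty for scenery shots.
    """
    portraits, scenery = [], []
    for photo in photos:
        if len(detect_faces(photo)) > 0:
            portraits.append(photo)
        else:
            scenery.append(photo)
    return portraits, scenery
```

Injecting the detector keeps the sorting logic testable independently of the (imperfect) face detection itself.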

Currently, queries are run via a separate Python script whose output is fed to the Processing application that does all the magical graphics. To make the application more engaging, I would add some controls that let the user perform searches in an easy, intuitive way, rather than having to run a separate script.

Xing Xu-InfoVis

by xing @ 7:45 am

Data visualization of people dancing

When I started thinking about visualizing data, I considered using data from pictures, text, or numbers. But people seldom realize it is interesting to visualize data from the movement of the body. The Kinect helps collect data about the joints. To make the project more fun, I got the idea of visualizing data from people dancing. Thus the first step was to collect the data. I am familiar with using a Unity wrapper for the Kinect, so I collected data for 20 joints of the body, including their positions and velocities. Later, I started to invite people to dance. I had no idea what the visualization would look like, or whether there would be a big difference between people dancing in different styles. I used Processing to visualize the data by changing the intensity of color.

 

Here is some of the data I collected for one frame — 20 joint positions as (x, y, z, w) tuples, followed by 20 joint velocities as (x, y, z) tuples:

pos: (0.1, 1.1, 0.6, 0.0) (0.1, 1.2, 0.6, 0.0) (-0.1, 1.3, 0.6, 0.0) (-0.2, 1.4, 0.7, 0.0) (-0.1, 1.2, 0.6, 0.0) (-0.2, 1.3, 0.9, 0.0) (0.0, 1.3, 0.8, 0.0) (0.0, 1.3, 0.8, 0.0) (0.1, 1.5, 0.6, 0.0) (0.2, 1.4, 0.4, 0.0) (0.3, 1.2, 0.4, 0.0) (0.4, 1.2, 0.4, 0.0) (0.1, 1.0, 0.6, 0.0) (0.1, 0.7, 0.9, 0.0) (0.0, 0.4, 0.8, 0.0) (-0.1, 0.4, 0.8, 0.0) (0.2, 1.1, 0.5, 0.0) (0.2, 0.7, 0.4, 0.0) (0.0, 0.4, 0.5, 0.0) (0.0, 0.4, 0.5, 0.0)

vel: (0.0, -0.1, 0.0) (0.0, -0.3, 0.0) (0.5, -1.7, -0.2) (1.0, -1.3, -0.1) (0.5, -1.6, -0.1) (0.8, -1.3, -1.4) (3.1, -4.1, -2.4) (2.6, -5.2, -4.3) (-1.2, -0.1, 1.1) (-0.8, 0.3, 1.2) (0.0, 0.0, -0.1) (1.1, 0.2, -0.5) (0.3, -0.4, 0.2) (0.3, -0.7, -0.1) (-1.5, -0.4, -0.2) (-3.6, -0.5, -3.6) (-0.2, -0.1, -0.3) (0.0, -0.1, -0.1) (0.0, -0.1, 0.0) (-0.1, 0.0, 0.0)
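Mapping joint velocity to color intensity can be sketched as taking the magnitude of each velocity triple and scaling it into a 0–255 range. This is a hedged reconstruction in Python, not the project's Processing code; the `max_speed` cap is an assumed tuning constant:

```python
import math

def speed_to_intensity(vel, max_speed=6.0):
    """Map a (vx, vy, vz) joint velocity to a 0-255 color intensity.

    Faster joints map to brighter color; speeds at or above
    max_speed (an assumed tuning constant) saturate at 255.
    """
    speed = math.sqrt(vel[0] ** 2 + vel[1] ** 2 + vel[2] ** 2)
    return min(255, int(255 * speed / max_speed))
```

Applied per joint per frame, the fast-moving hands and feet light up while a still torso stays dim.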

 

Dave’s dancing visualization                                                    Eric’s dancing visualization

 

It is interesting to see the difference between these two pictures, because the dancing styles are very different: Dave dances club-style and Eric tap dances. What they have in common is that the legs and arms are the most strongly moving joints of the body. But tap dance is gentler, and the rest of the body does not carry much movement.

Presentation slides:

https://docs.google.com/present/edit?id=0ATFNJZOdZrYuZGd3ZjVmcl81Zjh4ZmtxY3g&hl=en_US

 

 

Duncan Boehle – Info Visualization

by duncan @ 7:29 am

My info visualization project – Sleep – is my attempt to display data from my own sleep patterns in an evocative virtual diorama.

The idea originally came from data I already had; I’ve been using an app on my phone called Sleep as Android, which tracks accelerometer data throughout the night in order to set an alarm in tune with my sleep cycles. The app keeps a graph of my movements throughout the night, and after looking at a few weeks worth of data, I thought it would be cool to try to exaggerate the motion data in some way. I didn’t stray far from a direct representation, since my next thought was to replicate my room and body in 3D to observe the effects of the motion. As I started to think about what this representation could show, however, I realized I wanted to make it more abstract, and instead focus on the effect of the accumulating sleep debt, using the exaggerated sleep activity to show the destructive nature of restless sleep.

I personally hadn’t heard of many data visualizations that used ragdoll physics in order to make a point, but some of the flairs were certainly inspired from other things I had experienced. The noisy distortions in video and audio were very much inspired from some of the more experimental instrumental rock artists I listen to. In fact, the song I sampled for the instrumental clip is Providence, from Godspeed You! Black Emperor.

Initially, I knew that the idea in my head was too ambitious to come across seriously, but I wasn’t sure how hard it would be to make a point at all. The moment most people see the ragdoll flopping around, it inspires hilarity rather than seriousness. I didn’t want to counteract that with a bunch of heavy-handed metaphors or confusing interactivity, so instead I decided to rely on a few simple visual and audio elements to convey the desperation of sleep deprivation. I think these elements – the oversized clock, the blood stains, the dim noisy lighting, the distorted room, and the complete chaos at the end – all work very well to visualize mental breakdown.

The point of this project was info visualization, however, and I may have taken too many liberties with that terminology. The only data being visualized is the sequence of random forces propelling the ragdoll, but this information isn’t even completely necessary to make a point about a lack of sleep. Although it is a convenient way to demonstrate harm, I could have used a different type of visualization that really only focused on the sleep movements, and found more interesting results from them.

You can check out the Unity project on my website here, or you can watch the YouTube video of the project below:

[youtube=https://www.youtube.com/watch?v=D_hMDVep_dU&w=600]

Here’s the presentation version of this summary.

KelseyLee-InfoVis

by kelsey @ 6:00 am

Comparing International Tweets

[youtube=https://www.youtube.com/watch?v=cbdOCodVFMI]

For my project I wanted to compare a single word across different languages. I started out imagining comparing antonym pairs across languages, something I had been interested in since seeing Chris Harrison’s visualizations, which involved antonym pairs and words associated with each of them. But after obtaining and playing with the data, I found that without context the words weren’t very interesting, so my idea evolved into comparing how people speak about a single concept (word) in different languages. For example, using the word ‘love’ I wanted to examine whether there were cultural differences between how an English speaker and a Spanish speaker spoke of love. While words may be translated across languages, their cultural significance can vary, and so examining thoughts on a single topic originating from multiple languages seemed like an interesting project.
I started by using Bing Translate to accept a query word and translate it from English into a number of other languages. I would later have problems with character encoding, so my final project sticks to comparing across only 10 languages (including English). With the translated word, I then searched for the term using the Twitter API. I felt that long excerpts of text, while more meaningful and offering more context, were not a great fit for a visualization. With Twitter's character limit I could achieve a similar effect that would be more easily conveyed visually.
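The translate-then-search pipeline can be sketched as a loop over target languages. Both callables below are placeholders standing in for the real Bing Translate and Twitter API calls, which require authentication and error handling omitted here:

```python
def gather_tweets(word, languages, translate, search, per_lang=3):
    """For each language, translate the query word and search tweets in it.

    translate(word, lang) and search(term, lang, count) are stand-ins
    for the Bing Translate and Twitter search calls used in the project.
    Returns {lang: (translated_word, [tweets])}.
    """
    results = {}
    for lang in languages:
        term = translate(word, lang)
        results[lang] = (term, search(term, lang, per_lang))
    return results
```

Scraped tweets would then be fed back through the same translator to render their sentiment in English.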
With each Tweet, which I specified as originating in a certain foreign language and containing the word translated via Bing Translate, I scraped the text and then fed it back into the translator to see what the sentiment was in English. While this may not convey the exact meaning, and while the system is flawed because Bing Translate doesn’t always work, it did provoke a lot of thought.
In this example, I queried ‘love’. After every round of Tweets, the same languages would be fed with new Tweets; after showing 3 Tweets in each of the languages, it would recycle.


In the process of creating this visualization I took some liberties, assuming that if a tweet was tweeted in Italian that it was from Italy, etc. to allow the visual impact of comparing countries across the globe to really take effect. The visualization starts as a colored map, with Tweets slowly fading in one-by-one until all 10 languages are represented once, and at that point the Tweets begin to fade one-by-one, until the screen is again empty. At this point, the screen will repopulate and continue to loop until all of the scraped tweets have been seen.
I ran this query of ‘love’ many times by running the Processing application, often getting different results as new Tweets became available. While I do not have the screenshot now, I remember reading a tweet from Spain about God’s love, which seems to reflect the deep Catholic culture there. In another instance I found it interesting that Italy also celebrated Valentine’s Day.
  • English – love – omb I love paddy I would go out with himanyday!!! Xx :) hope his foot gets better!
  • Dutch – liefde – The love knows no no and no yes
  • Spanish – amore – “The time that has been given to us in our life is precious to discover and perform good works in God’s love” (Pope Benedict XVI)
And after creating the application I especially enjoyed reading people’s more meaningful tweets; a Spanish-speaking Tweeter said, “love is priceless, however many find it hard to want to”. Compared to many of the English-derived Tweets that I read, this was more heartfelt and genuine, truly respecting the word for its meaning as opposed to trivializing it with overuse.
While it did not seem like any one language’s derived tweets were particularly indicative of the language itself, there were definitely connections between the language a Tweet was written in and the culture from which it originated; by being able to compare and contrast them with other cultures, or even with other Tweets within the same culture, the resulting deliberation becomes that much more meaningful.

Happiness

  • English – happiness – Astrology: Libra represents partnerships, marriage, love and happiness
  • Portuguese – felicidade – learn find joy in the joy of others the secret of happiness.
  • Spanish – felicidad – Happiness is to realize that nothing is too important

[youtube=https://www.youtube.com/watch?v=SCm9RUSzcNQ]

Sadness

  • English – sadness – Seems like hurt is the only emotion I have anymore. Sadness appears daily as I search for the everlasting. Happiness is beyond my reach.
  • Spanish – tristeza – Sadness is a reflection of a fear wanting to be happy
  • Portuguese – tristeza – at the time of sadness in the city that people fight for politica…have more respect


Self Critique

I would say that my intended goal, getting to understand how people who speak different languages perceive the same word or idea, was achieved. The Tweet length is the perfect size for a visualization, and through these tweets culture definitely shines through. A few things I wish I had had time to improve: integrating eastern languages as well as Hebrew, Arabic, Russian, etc. There was a problem with the character encoding that I couldn’t solve in a reasonable amount of time, so I gave up on that. It was really interesting, though, that because Chinese/Korean/Japanese characters stand for whole words, the translations are quite a bit longer and more meaningful; this would have provided a really interesting contrast against 140-character Tweets.
I would also have liked to implement some way to filter out less meaningful Tweets. At times the Tweets weren’t very indicative of anything, being dominated by a link or mentions; being able to sort these out and still have a sizable pool of tweets (and do so in a timely manner) would have been nice.
In the future I would also like to implement a query box, so that the hardcoded word can be modified by the user.

EvanSheehan-InfoVis

by Evan @ 1:21 am

The Narcissism of Minor Differences

My data visualization project represents traditional Irish tunes visually to highlight their similarities. The idea for the project came from something my mother says to me regularly when I’m home visiting and playing the fiddle: “How do you keep all those tunes straight? They all sound the same to me”.

There are many sources of tune data online. Originally I’d planned to scrape some from thesession.org, but my friend, Pete, actually had a database of tunes he’d scraped several years ago from a variety of sources. So he just sent me a SQL dump of that and I was off and running.

Measuring Similarity

The first question was how to measure the similarity between a given pair of tunes. Because the tune data online is typically stored in an ASCII format called ABC Notation, calculating string distances seemed the best solution. I tried both the Hamming and the Levenshtein distances and settled on Levenshtein because it supports comparing strings of different lengths.
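For reference, Levenshtein distance counts the minimum number of insertions, deletions, and substitutions needed to turn one sequence into another, which is why it copes with unequal lengths where Hamming distance cannot. A standard dynamic-programming sketch (not the project's exact code):

```python
def levenshtein(a, b):
    """Minimum edit distance between sequences a and b (strings or lists)."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[len(b)]
```

Because it accepts any sequences, the same routine works unchanged on strings or on lists of note tokens.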

Visualizing Similarities

I tried several ways of visualizing the melodies to highlight how few differences there were between pairs of tunes. Above is a more visually pleasing variant of my first attempts. I started by representing each note as a rectangle. Each row represents a tune. When a note is identical between that tune and the starting tune, the note is drawn. If the notes are different, that space is left blank. This gives you a picture of where the tunes are similar, but not how they differ.

My second attempt (the first image) overlays all tunes similar enough to the original. Again, each note is represented as a rectangle and only common notes are drawn. Each rectangle is mostly transparent in this visualization. Because all the tunes are drawn on top of each other, the darker the rectangle, the more often that note appears in that position in this set of tunes. Although I think this results in a rather attractive, barcode-like image, it doesn’t say much about how the tunes are similar.

Finally (at Golan’s suggestion) I represented the tunes as line graphs; plotting the notes in order horizontally and according to pitch vertically. Now you can see where the tunes overlap and how they differ when they don’t overlap. The important feature this rendition highlights is when a phrase is repeated between two tunes but out of phase with the original. Previously, common phrases were not visible unless they occurred in the same place in both tunes.
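Plotting a tune as a pitch contour requires turning each ABC note into a number. A simplified mapping sketch — it ignores key signatures, accidentals, and note lengths, which real ABC notation carries:

```python
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def abc_pitch(note):
    """Map an ABC note like 'B,', 'c', or "a'" to a semitone number.

    Uppercase letters are the base octave, lowercase the octave above;
    each trailing ',' drops an octave and each "'" raises one.
    Simplified: accidentals and key signatures are ignored.
    """
    letter, rest = note[0], note[1:]
    pitch = SEMITONES[letter.upper()]
    if letter.islower():
        pitch += 12  # lowercase letters sit an octave up
    pitch += 12 * rest.count("'") - 12 * rest.count(",")
    return pitch
```

Plotting these values in note order gives the line graphs described above, with repeated phrases visible as matching contours even when out of phase.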

At this point I had to add some interactivity because simply plotting all the selected tunes on top of each other was too noisy to glean any useful information. So I faded out most of the lines and allowed users to select a single tune for comparison using the up and down arrows.

In the above side-by-side comparison, you can see that there are two identical tunes (the top pink bar on the left, and the top line chart on the right) with different names, a common occurrence in this style of music. The next most similar tune appears to have many similar phrases, but not in the same place as the original. Thus, on the left, it doesn’t appear to have much in common, but on the right you can see similar contours slightly out of phase with each other.

Critique

I think this is an interesting start, and the line graphs give me confidence that there are some interesting similarities to explore between tunes. I think a more interactive visualization might be more compelling; something in which users could explore the data more freely. The limited interactivity (see the video below) was mostly just a convenience for me to quickly generate artifacts for the presentation.

[vimeo=https://vimeo.com/36328177]

It’s also worth noting that my calculations of similarity are not correct. I simply treated the ABC data as strings, but that doesn’t result in a correct comparison because in many cases multiple characters are required to represent a single note. For example a low b is represented as “B,”, so the distance between the phrase “GB,D” and “GbD” should be one, because there is only one note—and thus one substitution—difference. However, simply comparing the strings “GB,D” and “GbD” results in a Levenshtein distance of two.
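One fix would be to tokenize the ABC text into notes before measuring distance, so that multi-character notes like “B,” count as single symbols. A sketch handling only the letter-plus-octave-marks case described above (real ABC has more syntax than this regex covers):

```python
import re

# one note = optional accidental, a note letter, then octave marks (',' or "'")
NOTE_RE = re.compile(r"[_^=]?[A-Ga-g][,']*")

def tokenize(abc):
    """Split an ABC melody string into a list of note tokens."""
    return NOTE_RE.findall(abc)
```

Feeding these token lists, rather than raw strings, into a sequence edit-distance routine gives “GB,D” vs “GbD” the intended distance of 1, since ["G", "B,", "D"] and ["G", "b", "D"] differ by a single substitution.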

The fact that there are still so many similar looking tunes despite this oversight on my part is what encourages me that there is more to explore here.

Technical Details

I started out storing the data in a SQLite database, but it quickly grew too large and I moved it to a MySQL database. I used Python to perform the distance and longest common substring calculations on the tune data; that code can be found here. The visualization was performed in Processing, and that sketch can be found here. A zip file of the entire project—including a dump of the MySQL database—is available here (the SQL dump was too large to upload to WordPress).

Project2::InfoVis::zackJW

by zack @ 11:55 pm 6 February 2012

What happens if you compare the language of government with the language of the rest of the culture? Well. You probably make a mess… Where am I coming from?

I was initially interested in trying to visualize DISINFORMATION while the world is ecstatic about new ways to visualize NON-DISINFORMATION, a.k.a INFORMATION.  The hypothesis was that for every vetted, scientific, evidence-based government study conclusion, there exists an opposite conclusion with the same backing.

How and what came of it?

I initially thought that I would somehow try to visualize point-counterpoint in government officials’ perspectives.  Global warming is a fact./Global warming is a theory.  That kind of thing.  That boiled down to the question: is it a matter of right and wrong, or is perception truly reality?  At that point I realized I could use the language of government as a cultural probe and perhaps find a relationship between their words and ours.  If perception is reality, then a representative government’s perceptions may be revealed by filtering their own language through the ultimate cultural barometer: YouTube…Government + YouTube =…you get the point.

What’s the process?

First, data.  Transcripts of Executive, Legislative and Judicial proceedings were cooked down using tagCrowd.com, a classic word cloud generator that boiled thousands of words down to 600, and from there down to this:

 

Inspired by Dan Shiffman’s text mirror, the words were put into an array and then used to search YouTube; the first (non-promoted, non-featured) video result for each was loaded into an array in Processing.  The comparison titles looked like this, Gub’ment words on the left…

able — God is Able

act — How to Act Good w/Hugh Jackman

american — Americans are NOT stupid – WITH SUBTITLES

believe — Cher – Believe [Official Music Video] [HQ]

bill — IPHONE BILL

business — Entrepreneur’s Mission Statement

care — Kid Rock-Care Music Video

change — deftones-change

clear — Cybotron-Clear

chief — Chief – Night & Day (2010)

company — Company( the Musical): Part 1

congress — C-SPAN: Stephen Colbert Opening Statement

country — Jason Aldean – She’s Country

court — Pepsi – King’s Court Super Bowl

economic — Economy key in Nevada Republican caucus

economy — Keiser Report: Starving the Economy (E243)

fact —

federal — Federal BMX – Cologne

future — Future – Magic ft. TI

life — The Life (An anthropology graduate student (Denise Richards), guided by her prostituting neighbor (Darryl Hannah)…)

number — 10 Little Numbers

parents — PARENTS SUCK!

pass — PASS THIS ON

pay — Soulja Slim – I’ll Pay For It

people — Foster The People – Pumped Up Kicks

percent — Percent of a Number – YourTeacher.com – Math Help

program — The Program – Alvin Mack

give — Pitbull – Give Me Everything ft. Ne-Yo, Afrojack, Nayer

government — Alex Jones: US government spies on everybody

health — HEALTH :: DIE SLOW :: MUSIC VIDEO

help — Help! in the Style of “The Beatles” with lyrics (no lead vocal)

important — The Most IMPORTANT Video You’ll Ever See (part 1 of 8)

insurance — Auto Insurance

issue — Escape the Fate – Issues

jobs — Steve Jobs’ 2005 Stanford Commencement Address

law — LA Law theme

million — Nipsey Hussle “A Million” (Music Video)

money — Money – Pink Floyd HD (Studio Version)

question — System Of A Down – Question!

reform — Social Security Reform Bill Encourages Americans To Live Faster, Die Younger

rule — Take That – Rule The World – Official Music Video

start — DEPAPEPE – START(PV)

state — The State MTV: Louie “I Wanna Dip My Balls…”

statute — Lesson4- Society and Statutes Part 1.mp4

subject — English Grammar & Punctuation : What Is a Subject-Verb Agreement?

support — tech support

system — System Of A Down – Chop Suey!

true — Spandau Ballet – True

united — United Breaks Guitars

work — Ciara featuring Missy Elliott – Work ft. Missy Elliott

 

 

Some favorites…

The rest looked something like this…

[youtube https://www.youtube.com/watch?v=CEyt9PEHHwY]

There are more questions raised here than answers and without a brief description the visualization doesn’t clearly explain what the viewer is seeing.  Admittedly, I chose to use this brief study to explore programming skills I hadn’t previously undertaken.  And not until I had gotten through them did I begin to explore the potential visual output.  Even then, I was more interested in the vis than the info, but limited by my programming skills.

One interesting observation about the data, however, is that the language of government tends to translate into entertainment for YouTube-ers.  Fully 54% of the videos were music videos for recording artists. Only 10% had anything to do with matters of government, and three of those were jokes or conspiracy theories. According to YouTube, the government looks like a song-and-dance.

/* iacd 2012, Zack Jacobson-Weaver, copyright 2012
YouTube files:
https://www.youtube.com/watch?v=_2exW2cUdC4
https://www.youtube.com/watch?v=Ul86UVXqpxs
https://www.youtube.com/watch?v=UdULhkh6yeA&feature=fvst
https://www.youtube.com/watch?v=J-ywH_1rUFA&feature=pyv
https://www.youtube.com/watch?v=2Ccjjt5OihM&ob=av3n
https://www.youtube.com/watch?v=ZL4MGwlZuAc
https://www.youtube.com/watch?v=fGqiBFqWCTU
https://www.youtube.com/watch?v=XLPlfODCBlI
https://www.youtube.com/watch?v=JJsd_Cvk_rw
*/
import processing.video.*;
 
int videoScale = 14;
int cols, rows, pCharCount, nextShuffle, b,c;
int frames =5;
int nextFirstLetter=0;
 
String [] titles;
String [] finalWords;
int [] startChars = new int [100];
 
Movie [] videos = new Movie[9];
 
String chars = "";
 
void setup()
{
  size(480, 360, P2D);
  cols = width/videoScale;
  rows = height/videoScale;
  //frameRate(24);
 
 b = 0;
 titles = loadStrings("youtubes.txt");
 finalWords = loadStrings("Final.txt");
 
for (String s : finalWords)
{
   chars = chars + s + " ";
}
int start = 0;
for (int i = 0; i < finalWords.length; i++)
{
  startChars[i] = start;                // index in chars where word i begins
  start += finalWords[i].length() + 1;  // +1 for the trailing space
}
}
 
void draw()
{
  background(0);
 
  if (frameCount > nextShuffle)
  {
    nextShuffle += frames;
    pCharCount = (pCharCount + 1) % chars.length();
  }
 
  int charcount = pCharCount;
 
  for (int j = 0; j < rows; j++)
  {
    for (int i = 0; i < cols; i++)
    {
      int x = i*videoScale;
      int y = j*videoScale;
 
      // highlight the characters belonging to the currently selected word
      if (charcount >= startChars[b] && charcount < startChars[b + 1])
        {
          fill(255);
        }
        else
        {
        fill(255,45);
        }
      textSize(20);
      text(chars.charAt(charcount),x,y);
 
      charcount = (charcount + 1) % chars.length();
    }
  }
 
}
 
void keyPressed()
{
  if (key == ' ')
  {
    b = (b + 1) % videos.length;
  }
}

Ju Young Park – infoVis

by ju @ 8:42 pm

presentation file

For my information visualization, I decided to visualize the text structure of fortune cookie messages in the manner of text clouds. I personally believe in fortune telling, and I have a habit of collecting fortune cookie messages in my wallet. A fortune cookie is a crisp Chinese cookie with a “fortune” wrapped inside. Since the “fortune” inside the cookie is randomly chosen, people tend to believe the words of wisdom written on the message. These days, fortune cookies are widely consumed all around the world, and they have become a common “culture” or “custom” in many countries, especially the United States.

The purpose of the project was to create an interactive interface where people can see the texts used in fortune cookie messages and look for a message that contains a certain word. In addition, my ultimate goal was to transform “information” into an “artwork.” To that end, I spent a lot of time developing a final design that would be attractive, beautiful, and engaging.

 

I collected data from the web, friends, and local Chinese restaurants. As a result, I assembled a dataset of approximately 4,000 fortune cookie messages. With this list, I visualized the 50 most common words used in the messages as a text cloud. I generated this text cloud using the openCloud library in Processing. You can find an online database of fortune cookie messages here
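Before any cloud library gets involved, the counting step itself is simple. A hedged sketch of extracting the most common words from a message list — the stopword list here is a placeholder, not the filter the project actually used:

```python
import re
from collections import Counter

# placeholder stopword list; a real one would be much longer
STOPWORDS = {"a", "an", "the", "of", "in", "to", "and", "is", "you", "your", "will"}

def top_words(messages, n=50):
    """Return the n most common non-stopword words across all messages."""
    counts = Counter()
    for msg in messages:
        for word in re.findall(r"[a-z']+", msg.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(n)
```

The resulting (word, count) pairs are exactly what a text-cloud renderer needs: counts drive font size, words drive placement.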

 

I was also heavily influenced by Claude Monet’s Reflections of Clouds on the Water Lily Pond Series in building the final design.

 

 

For the design, I wanted the text cloud of fortune cookie messages to act and look like a real “cloud” in an abstract manner. Therefore, I animated the texts to float by, as clouds float in the sky. To represent the background as sky, I filled it with blue. In China, red represents luck, and yellow/gold is considered the most beautiful color. Yellow also signifies good luck, so many people in China pair yellow with red for more luck. Therefore, I developed the final design with red and yellow.

 

 

[youtube=https://www.youtube.com/watch?v=A5zIf_T-AAQ]

 

 

I am pretty satisfied with the final design of my project. I tried my best to employ the concept of representing fortune cookie messages as text clouds, as well as the meaning of “fortune” or “good luck.” However, I had a hard time collecting data; since I dealt with text in this project, I obviously needed a lot of fortune cookie quotes. My original goal was to collect around 6,000 fortune cookie messages, but this attempt failed. I do not consider this project to be in its “final” form, because I plan to expand it with more data and lucky numbers. But I find it successful in the way that “information” has turned into an “artwork.”

 

Ju Young Park – Sketch

by ju @ 8:21 pm

For my information visualization, I decided to visualize text structure of fortune cookie messages in a manner of text clouds.

 

My purpose of the project was to create an interactive interface where people can see the texts used in fortune cookie messages and where people can look for a message that contains a certain word. In addition, my ultimate goal was to translate “information” into an “artwork.”

 

I collected data from the web, friends, and local Chinese restaurants, assembling a dataset of approximately 4,000 fortune cookie messages. With this list, I tried to visualize the 50 most common words used in the messages as a text cloud.

 

For the design, I wanted the text cloud of fortune cookie messages to act and look like a real “cloud” in an abstract manner. Therefore, I animated the texts to float by, as clouds float in the sky. To represent the background as sky, I filled it with blue.

 

Here is my initial sketch

Alex Wolfe | Project 2 | Death Stare

by a.wolfe @ 8:13 am 2 February 2012

Some Quick Exposition

So for those of you who are unaware, Marina Abramovic is an extremely renowned performance artist who had a retrospective at MoMA about two years ago. For the duration of the exhibit, Abramovic performed “The Artist is Present”. After wandering through her life’s work, a visitor could sit across from Marina and behold the artist for as long as he or she could stand it. In return, Marina presents you with the world’s most perfected deadpan.

MoMA kept track of every person who sat across from Marina in the form of a headshot and a small note of how long they sat. These are all available on Flickr, for your scraping pleasure.

Just to test the face morphing algorithm, I manually defined some basic control points on a small subset of this data. Corresponding points should be in relatively the same area of each face, so consistency is key. The more points you define, the better the final average, but the process is pretty laborious, so I settled for around 40.

 

Taking these points, I computed a Delaunay triangulation of each face, and also a triangulation for the “average” face. In order to make a clean composite, I morphed each face to the average face using an affine transformation on each of the triangles. Since my super useful point-label sketch had glasses, I forgot to define points on the eyebrows, so the eyes look a bit distorted here, but a better point cloud would fix that.
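The per-triangle affine warp can be derived directly: three point correspondences determine the six coefficients of a 2D affine map. A numpy sketch of solving for that matrix — in practice this role could equally be played by OpenCV's getAffineTransform:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for the 2x3 affine matrix M with M @ [x, y, 1] = [x', y'],
    mapping the three src triangle vertices onto the dst vertices."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # homogeneous source coordinates, one row per vertex
    A = np.hstack([src, np.ones((3, 1))])
    # solve A @ M.T = dst for both output coordinates at once
    M = np.linalg.solve(A, dst).T
    return M  # shape (2, 3)
```

Applying the matrix for a triangle to every pixel coordinate inside it carries that patch of the source face onto the corresponding patch of the average face.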

 

So the next step is to take an average of all of the point sets, and then morph each face to that average by interpolating the pixels inside each of the corresponding triangles. The small set I was working with while testing this algorithm was decidedly female, so the two girls below don’t suffer too much distortion, but the man in the middle gets very squished.
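Averaging the control points, optionally weighted (for instance by how long each visitor sat), is a weighted mean over corresponding points. A numpy sketch under that assumption:

```python
import numpy as np

def average_points(point_sets, weights=None):
    """Average corresponding control points across faces.

    point_sets: array-like of shape (faces, points, 2). weights is one
    number per face (e.g. sitting time in seconds); None means a plain
    uniform average.
    """
    pts = np.asarray(point_sets, dtype=float)
    if weights is None:
        return pts.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the result stays in image coordinates
    return np.tensordot(w, pts, axes=1)
```

With sitting times as weights, visitors who endured the deadpan longest pull the average face toward their own features.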

When you overlay all these morphed images on top of each other and blend, you get something like this

(sample set of 8 images unweighted)

sample set 8, weighted by time

sample set 25, weighted

Scraping Points with Face OSC
[vimeo https://vimeo.com/36338138 w=500&h=400]
Next I batch-processed the 844 images and threw them into FaceOSC in order to automatically pick out control points. The output files averaged around 237 points per face, far superior to the roughly 40 I had labeled by hand.


 

Unfortunately, in exchange for bulk, a bit of accuracy was lost. Though most of the generated face meshes looked like they could approximately fit the face, they were rarely lined up with the precision required for facial averaging. Most look something like the one above.

However, the nice thing about the algorithm is that if you average enough points together, you get pretty close to the mean, regardless of extraneous data.

Average Face points from 250 faces

Face to Average

Twitter API Resources

by heather @ 7:48 am

Start here with Jer’s tutorial:
http://blog.blprnt.com/blog/blprnt/updated-quick-tutorial-processing-twitter

And/or Golan’s instructions from last year:
https://ems.andrew.cmu.edu/2011/a/unit-50/tweeting/

API Stuff I might use:
screen_name
retweet_count
favourites_count
followers_count
location
statuses_count
profile_background_image_url
— the tweet —
text
id
in_reply_to_user_id
in_reply_to_screen_name
in_reply_to_status_id

example using one of the above:

https://dev.twitter.com/docs/api/1/get/statuses/retweets/:id

Sean @big_sean: Good Morning to the Chicks that got a Breakup Text Saved in her Drafts waiting to see if he gone get his S*** together!

Navdeep@pnavdeep26: Valentines day is two weeks away!! You still have time to breakup n save money!!! Wat say??? :) :)

Laugh.@WereJustTeenss: Saying “we can still be friends” after a breakup, is like saying “hey the dog died but we can still keep it.” … -.-

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) Interactive Art and Computational Design, Spring 2012