Project 2: Data Visualization – Mapping Our Intangible Connection to Music
General Concept
Music is an incredible trigger for human emotion. We often use it for a specific emotional function: to cheer ourselves up or calm ourselves down, as a powerful contextual device in theater and film, or in the worship of our deities of choice. Although it is very easy for an average listener to make objective observations about tempo and intensity, it is harder to standardize responses on the more intangible scale of how we connect to the music emotionally. This study aims to gain some insight into that connection by forcing participants to convert those intangible emotional responses into a basic scale-of-1-to-10 input.
The goal of this project is to give the participant a completely open-ended set of guidelines in order to collect a completely open-ended set of data. Whether correlations can be found in that data (or whether any inference can be drawn from those correlations) becomes somewhat irrelevant given the oversimplification and sheer arbitrariness of the data.
Execution
A familiar example of a real-time audience-analysis system is the response graph at the bottom of the CNN screen during political debates: audience reaction, broken out by partisanship, is graphed to show topic-by-topic approval over the course of a speech. By having a participant listen to a specific piece of music (in this case, Sufjan Stevens' five-part piece Impossible Soul) and follow along using a program I created in Max/MSP to graph response over time, I can fashion a crude visual map of where the music took that person emotionally.
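The capture idea can be sketched outside of Max. This is a hypothetical Python analog, not the actual patch: the knob's 1-to-10 value is sampled at a fixed interval while the song plays, producing a list of (seconds, value) pairs that can later be plotted as a response-over-time graph.

```python
# Hypothetical sketch of the recording step (the real version is a Max/MSP
# patch). Each element of knob_values is one 1-10 reading taken on a
# regular clock tick; interval_s is the assumed sampling interval.
def record_responses(knob_values, interval_s=1.0):
    """Turn a stream of knob readings into (time_in_seconds, value) pairs."""
    return [(i * interval_s, v) for i, v in enumerate(knob_values)]

# Example: three readings sampled one second apart.
trace = record_responses([3, 5, 7])
```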
Data & Analysis
Data was gathered from a total of ten participants, and the graphs show some interesting connections. First are the similarities within the opening movement of the piece; conversations with the participants suggested a general difficulty in standardizing one's own responses. This led to a general downward curve once the listener realized that there was far more breadth to the piece than the quiet opening let on. Second is the somewhat obvious conclusion that the sweeping climax of the piece put everyone more or less toward the top of the spectrum. The third pattern is more interesting to consider: people were split down the middle on how to approach the song's ending. To some it served as an appropriately minimalist conclusion to a very maximalist piece of music; to others it seemed forced and dry.
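Comparing the ten traces amounts to aligning them on a common clock and collapsing them into one curve. A minimal sketch, assuming every participant's trace was sampled at the same rate for the same duration:

```python
# Sketch: combine several participants' response traces into a mean curve.
# Assumes each trace is a list of 1-10 values on the same sampling clock.
def mean_trace(traces):
    lengths = {len(t) for t in traces}
    assert len(lengths) == 1, "traces must be the same length to align"
    n = len(traces)
    # At each time step, average the values across all participants.
    return [sum(step) / n for step in zip(*traces)]
```

The same zip-and-reduce shape would also give per-step deviation (how polarized the room was at each moment), which is where the split over the ending would show up numerically.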
Areas of Difficulty & Learning Experiences
- The song is 25 minutes long, far too long to pull most CMU students away from their books.
- The original plan was to have a physical knob for the listener to use, and I had an Arduino rig all set up to feed my patch, but I fried my knob component and had to scale back to an on-screen knob. Nowhere near as cool.
- My initial attempt to build this in Processing traded a brutal amount of wasted time for a good bit of knowledge.
- I have become extremely familiar with the coll object in Max, a tool I was previously unaware of and that has proved EXTREMELY useful and necessary.
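For readers who don't know Max, coll is essentially an index-to-data store that a patch can write to during recording and step through during playback. A loose Python analog (an illustration of the concept, not Max code):

```python
# Loose analog of Max/MSP's coll object: values are stored under an
# integer index (here, the sample tick) and recalled later for playback.
class Coll:
    def __init__(self):
        self._data = {}

    def store(self, index, value):
        """Like coll's 'store' message: associate data with an index."""
        self._data[index] = value

    def recall(self, index):
        """Like sending an index to coll's inlet: get the stored data."""
        return self._data.get(index)

    def dump(self):
        """Like coll's 'dump' message: emit all (index, value) pairs in order."""
        return sorted(self._data.items())
```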
Code
Download Max patches as .zip: DataVis
Asa, this blog post is incomplete documentation of your project. Please include screenshots and photos of your project — your entire presentation should be in the blog post, in some form or another. –GL
—————————————-
Comments from the PiratePad:
Good reference to the buzzer-in-the-crowd thingie as a way to gauge people’s reaction. Got lost somewhere… how did you get their reaction?
Wouldn't this form factor bias people toward equating song volume with knob state? The graphs kind of support this. Most people will relate a circular progression to volume.
25 minute song is way too long.
Interesting data and results, though it doesn’t necessarily seem like it is connected to emotion? Because of self-definition of knob. The fact that it’s some unnamed response is still interesting due to the correlation between results.
Interesting. I think it’s good work.
I like this idea. Interesting—mapping something as complicated as human emotion through a narrow tube like a slider… Too tired to articulate. It’s cool, though.
good idea… I'd like to see an overlay of an analysis of the song (e.g. tempo). I agree.
nice playback of data
frryyyy ur knob
thanks for showing the comparison of the wavelength
Nice concept. I like that you collected data on a number of different people, and how you made comparisons between the results. Nice job on the combined graph. Really thorough job on your analysis.
simple, clear, and interesting. It would be cool to compare different types of music.
I know you used an arduino for data collecting; why aren’t you talking about it!?!
How did you collect this? (oh, ok now that you explain it that’s pretty clever, gives them a lot of freedom) Is up = positive, down = negative? I really like the analysis, with all of the responses combined compared to the actual song data.
I agree, I’m not clear how the data is analyzed. The polarizing is certainly the most interesting. I appreciate seeing the overlay. Also, nice to see a little more work in Max and the explanation of the programming is interesting.
How exactly was their response measured? Maybe there is a way to visualize the amount of deviation at different times for different responders. I think it’s a really novel idea.
The display looks a little bit too much like a seismograph reading or something
Actually it reminded me of the Music Animation Machine – http://www.musanim.com/
Very nice graph comparison of the 10 different responses.
show your knob early …
Interesting choice to allow people to self-define the meaning of the knobs.
You have no picture of the knobs you built (photo of the arduino? screenshot of the max patch?)
Are the thick blue lines computed or added by hand? By hand. I’d like you to think about how you might compute an average over time (local averaging) and an average of all the graphs together
The use of a stepped knob is nice, imho, instead of a continuously variable knob. People have a sense for what “4” means, etc.
This is a very cool concept… but I’m curious why the resolution of your data is so low. Why are there only 10-15 possible “intensity” values being displayed on your graphs? There are a lot of us HCI guys in the audience, and we’d really like to see the interface you provided your users to input this data ;-)
The overlay data looks pretty cool – it’s interesting to see how things are compared to the wavelength information from the song. I wonder if the users just moved their knobs according to the songs loudness, though. It seems like there’s quite a bit of correlation there. I’d like to see an R squared test on that!
Interesting analysis of behavior at the start of the song. I would have liked a few quotes from the users. Tj
Oh, the overlay is very nice – agreed about a buffer interface: a version that jumps between single set, multiple sets, and/or mentioned interesting stories would be awesome.
Nice data collection and interesting results. Research is thorough with analysis of trends across different people. However I was confused as to what exactly people were tested on until the end of the presentation. Also the plot isn’t the most aesthetically pleasing.