Soul Searching is a game about soul searching, delivering the metaphor through maze navigation.
The project spawned from a story idea I had long ago, about a person whose soul was shattered into multiple pieces. The soul's original form was lost and is now trying to return to the person, collecting its fragments along the way. The gameplay focuses on maze navigation, but you can't see the entire maze at once.
I took much inspiration from the games I’ve played: Dear Esther, Sense of Connectedness, Thomas Was Alone, etc. I wanted to create something familiar (maze solving) while presenting it in an unfamiliar way.
On the technical side, I took much of it from algorithms online and programs people had already written on OpenProcessing; there's practically an entire culture out there that obsesses over maze generation: figuring out the best ways to generate mazes, building creative games around them, and diverging from traditional maze forms.
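For a taste of those algorithms, here is a minimal sketch of the classic depth-first ("recursive backtracker") generator, one of the most common approaches you'll find online; the grid size and wall encoding are illustrative choices of mine, not anything from the project.

#include <cstdlib>
#include <utility>

const int W = 16, H = 16;
bool visited[H][W];                    // all false at program start
bool wallRight[H][W], wallDown[H][W];  // walls between neighboring cells

void carve(int x, int y) {
  visited[y][x] = true;
  int dirs[4] = {0, 1, 2, 3};          // right, left, down, up
  for (int i = 3; i > 0; i--)          // shuffle the visit order
    std::swap(dirs[i], dirs[rand() % (i + 1)]);
  for (int d : dirs) {
    int nx = x + (d == 0) - (d == 1);
    int ny = y + (d == 2) - (d == 3);
    if (nx < 0 || ny < 0 || nx >= W || ny >= H || visited[ny][nx]) continue;
    // Knock down the wall between (x,y) and its unvisited neighbor.
    if (d == 0) wallRight[y][x] = false;
    if (d == 1) wallRight[y][nx] = false;
    if (d == 2) wallDown[y][x] = false;
    if (d == 3) wallDown[ny][x] = false;
    carve(nx, ny);
  }
}

int main() {
  for (int y = 0; y < H; y++)
    for (int x = 0; x < W; x++)
      wallRight[y][x] = wallDown[y][x] = true;  // start fully walled
  carve(0, 0);  // afterwards, every cell is reachable by exactly one path
  return 0;
}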
Unfortunately, the program is suffering from a big bug at the moment. I've been trying to root it out for the past several days. It's slow going, and I feel really bad that I don't have much to show because of that stupid bug. Needless to say, I'll keep working on this through the break until I finish it, because I've spent too much time on it to stop. Proper documentation will go up once it's done. Sorry!
The Rainbox is a box which produces the sound of rain when a user is nearby.
This project is a simple Arduino setup involving a servo motor and a rainstick. The Arduino is connected to a flex sensor that is meant to be hidden under a pillow or mattress. When a person lies down on the resting place, the servo turns 180 degrees, and the rainstick attached to it turns with it. The rainstick then simulates the sound of rain for a few seconds until all the beads in the stick reach the bottom. The servo turns back 180 degrees, and the process repeats until the user leaves the resting spot or until thirty minutes after the flex sensor was first activated. All of this is enclosed within a box, along with a blue LED that emits a soft glow through a hole in the box.
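A hedged reconstruction of that behavior might look like the sketch below; the pins, flex threshold, and bead-settling delay are illustrative guesses rather than the project's actual values.

#include <Servo.h>

const int FLEX_PIN = A0;
const int LED_PIN = 9;
const int SERVO_PIN = 10;
const int FLEX_THRESHOLD = 600;                         // assumed reading when occupied
const unsigned long SESSION_MS = 30UL * 60UL * 1000UL;  // thirty minutes

Servo rainstick;
unsigned long sessionStart = 0;
bool flipped = false;  // which end of the rainstick is up

void setup() {
  rainstick.attach(SERVO_PIN);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  bool occupied = analogRead(FLEX_PIN) > FLEX_THRESHOLD;
  if (!occupied) sessionStart = 0;                      // user left: reset
  else if (sessionStart == 0) sessionStart = millis();  // session begins

  if (occupied && millis() - sessionStart < SESSION_MS) {
    digitalWrite(LED_PIN, HIGH);          // soft blue glow
    rainstick.write(flipped ? 0 : 180);   // flip the stick end over end
    flipped = !flipped;
    delay(8000);                          // let the beads trickle down
  } else {
    digitalWrite(LED_PIN, LOW);
  }
}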
The concept behind this project arose as a response to one of my long-lasting personal issues: the inability to sleep in silence. Maybe it is a symptom of a generation that grew up on television, but the lack of any sensory input used to be very unsettling to me, and it would cause my mind to wander into uncomfortable and frightening places. The sound of rain and the glow of a muted television often helped me in those moments. The Rainbox was designed to substitute for all of this.
All in all, I ended up creating a working prototype, but it is nowhere near a form that I would want to present in public. The box was intended to make the rainstick echo for a richer sound, but it ended up just being bulky. Making the setup more exposed but pleasing to look at is a goal for this project. Tighter documentation would also help a lot in presentation. The light is something to experiment with as well, as people thought it would be more distracting than comforting.
Overview
The ‘Voice Box’ is a musical instrument (of sorts) that receives audio input from the microphone and performs real-time pitch changes with a custom glove-controller. It can be used as both a personal listening device and a means of communication: the user has the option to either speak directly into the microphone and have their altered voice projected from the speaker, or plug in headsets and listen to the distorted noises of the world around them.
Inspiration / Critical Reflection
The project was inspired by a number of things that were not necessarily related to each other. My initial plan to make simple piano gloves was inspired by my habit of tapping on tables and chairs, developed from not having ready access to a piano. To give this habit a form, I decided to create a portable instrument that allowed other people to hear the sounds I hear in my head. But I soon discovered that many people have made instruments like these before, so instead of a personal project it turned into a re-implementation of what has already been done countless times. I decided to refocus my scope of inspiration in an effort to create something more novel. When I stumbled across Adafruit's Wave Shield and Voice Changer project, I immediately had my heart set on making a device that distorted voices in some way. I was initially aiming to create gloves that allowed a person to autotune their voice in real time and sound like Imogen Heap, but given my limited time and my lack of understanding of how sound frequencies work, I had to keep things relatively simple. Thus, instead of a real-time autotuner, I built a real-time pitch-shifter.
The Voice Box surprisingly became a device with some personal value as well, as its concept revolves around the difficulty of understanding others and their difficulty in understanding me. As I was testing the final product, I became engrossed in puppeteering other people's voices and speaking in voices that were hardly decipherable, and it was then that I realized these gloves had created a wall between myself and society. Using them turned into a very self-reflective experience: it brought out strange control-freak behaviors in me and made me think about why I was able to extract so much enjoyment out of exercising power over others.
Technical Details
Electrodes are placed around the joints of each of my fingers so that bending a finger causes the electrodes to make contact, triggering a switch that engages the voice pitch-shifting effect. Essentially the electrodes behave like normal momentary switches, but they were specifically designed to work without contacting an external surface or object. This allows for ease of use and lets the user make the more natural gestures common to playing keyboard instruments and typing.
Some technical hurdles I had to overcome: although electrodes seem conceptually simple, they were surprisingly difficult to implement properly. I initially had only a pull-up resistor for each finger (to prevent short-circuiting), but when I tested it out I noticed that the Arduino was not correctly interpreting the digital input data; namely, when the electrodes made contact the input read as 1's, but when they were separated the input was a jumbled mess of 0's and 1's, because the disconnected line was left floating. To overcome this I added pull-down resistors to make the 'open' and 'closed' states explicitly distinct. But however annoying the resistor handling was, I think the greatest technical hurdle I overcame was getting the pitch shifting to actually work. Adafruit's original voice changer project uses a potentiometer to make pitch shifts, and since the ADC is already occupied sampling the microphone in free-run mode, a second analog input can't change the voice in real time. So I theorized that while it isn't possible to dynamically change pitch using an analog input, it could technically be possible with multiple digital inputs. Luckily my theory was correct, and making things work just required some simple modifications to Adafruit's original code.
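For illustration, the floating-input fix boils down to something like this single-switch sketch (the pin number is hypothetical; the real device reads five such switches):

// With only a switch on the line, an open input floats and digitalRead()
// returns noise. An external pull-down resistor ties the open line firmly
// LOW, so HIGH reliably means "electrodes touching".
const int FINGER_PIN = 6;  // hypothetical pin

void setup() {
  Serial.begin(9600);
  pinMode(FINGER_PIN, INPUT);  // external pull-down holds the line LOW
}

void loop() {
  if (digitalRead(FINGER_PIN) == HIGH) {  // contact closes the circuit
    Serial.println("contact");
  }
}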
Images
(Sorry for not using Fritzing – there are too many parts to the device and I felt it would be much easier for me to show what’s going on with photos)
The soldered side of the circuit board
The connected pull-up and pull-down resistors
Here’s the wave shield
Tiny amplifier and speaker
The whole pack
A nice shot of the speaker and box
/* Code adapted from ADAVOICE, an Arduino-based voice pitch changer */
#include <WaveHC.h>
#include <WaveUtil.h>
SdReader card; // This object holds the information for the card
FatVolume vol; // This holds the information for the partition on the card
FatReader root; // This holds the information for the volumes root directory
FatReader file; // This object represents the WAV file being played
WaveHC wave; // This is the only wave (audio) object -- we only play one at a time
#define error(msg) error_P(PSTR(msg)) // Macro allows error messages in flash memory
#define ADC_CHANNEL 0 // Microphone on Analog pin 0
// Wave shield DAC: digital pins 2, 3, 4, 5
#define DAC_CS_PORT PORTD
#define DAC_CS PORTD2
#define DAC_CLK_PORT PORTD
#define DAC_CLK PORTD3
#define DAC_DI_PORT PORTD
#define DAC_DI PORTD4
#define DAC_LATCH_PORT PORTD
#define DAC_LATCH PORTD5
uint16_t in = 0, out = 0, xf = 0, nSamples; // Audio sample counters
uint8_t adc_save; // Default ADC mode
// WaveHC didn't declare its working buffers private or static,
// so we can be sneaky and borrow the same RAM for audio sampling!
extern uint8_t
buffer1[PLAYBUFFLEN], // Audio sample LSB
buffer2[PLAYBUFFLEN]; // Audio sample MSB
#define XFADE 16 // Number of samples for cross-fade
#define MAX_SAMPLES (PLAYBUFFLEN - XFADE) // Remaining available audio samples
// WAV file information. With the keypad removed, the array holds only the
// boot sound, which plays at startup when an SD card is present:
const char *sound[] = {
"startup" }; // Extra item = boot sound
int button6State = 0;
int button7State = 0;
int button8State = 0;
int button9State = 0;
int button11State = 0;
//////////////////////////////////// SETUP
void setup() {
uint8_t i;
Serial.begin(9600);
// The WaveHC library normally initializes the DAC pins...but only after
// an SD card is detected and a valid file is passed. Need to init the
// pins manually here so that voice FX works even without a card.
pinMode(2, OUTPUT); // Chip select
pinMode(3, OUTPUT); // Serial clock
pinMode(4, OUTPUT); // Serial data
pinMode(5, OUTPUT); // Latch
digitalWrite(2, HIGH); // Set chip select high
// Init SD library, show root directory. Note that errors are displayed
// but NOT regarded as fatal -- the program will continue with voice FX!
if(!card.init()) SerialPrint_P("Card init. failed!");
else if(!vol.init(card)) SerialPrint_P("No partition!");
else if(!root.openRoot(vol)) SerialPrint_P("Couldn't open dir");
else {
PgmPrintln("Files found:");
root.ls();
// Play startup sound (last file in array).
playfile(sizeof(sound) / sizeof(sound[0]) - 1);
}
// Optional, but may make sampling and playback a little smoother:
// Disable Timer0 interrupt. This means delay(), millis() etc. won't
// work. Comment this out if you really, really need those functions.
TIMSK0 = 0;
// Set up Analog-to-Digital converter:
analogReference(EXTERNAL); // 3.3V to AREF
adc_save = ADCSRA; // Save ADC setting for restore later
//initialization
for(int i = 6; i <= 9; i++) {
pinMode(i, INPUT);
}
pinMode(11, INPUT);
while(wave.isplaying); // Wait for startup sound to finish...
startPitchShift(700); // and start the pitch-shift mode by default.
}
//////////////////////////////////// LOOP
// As written here, the loop polls the five finger switches and restarts
// the pitch shifter at a different pitch for each finger. (The original
// ADAVOICE sketch scanned a keypad to trigger sounds, stopping and
// restarting the voice effect as needed.)
void loop() {
button6State = digitalRead(6);
button7State = digitalRead(7);
button8State = digitalRead(8);
button9State = digitalRead(9);
button11State = digitalRead(11);
if (button6State == HIGH) { //thumb
startPitchShift(0);
}
else if (button7State == HIGH) { //pointer
startPitchShift(350);
}
else if (button8State == HIGH) { //middle
startPitchShift(700);
}
else if (button9State == HIGH) { //ring
startPitchShift(820);
}
else if (button11State == HIGH) { //pinky
startPitchShift(1000);
}
}
//////////////////////////////////// HELPERS
// Open and start playing a WAV file
void playfile(int idx) {
char filename[13];
(void)sprintf(filename,"%s.wav", sound[idx]);
Serial.print("File: ");
Serial.println(filename);
if(!file.open(root, filename)) {
PgmPrint("Couldn't open file ");
Serial.print(filename);
return;
}
if(!wave.create(file)) {
PgmPrintln("Not a valid WAV");
return;
}
wave.play();
}
//////////////////////////////////// PITCH-SHIFT CODE
void startPitchShift(int pitch) {
// Right now the sketch just uses a fixed sound buffer length of
// 128 samples. It may be the case that the buffer length should
// vary with pitch for better results...further experimentation
// is required here.
nSamples = 128;
//nSamples = F_CPU / 3200 / OCR2A; // ???
//if(nSamples > MAX_SAMPLES) nSamples = MAX_SAMPLES;
//else if(nSamples < (XFADE * 2)) nSamples = XFADE * 2;
memset(buffer1, 0, nSamples + XFADE); // Clear sample buffers
memset(buffer2, 2, nSamples + XFADE); // (set all samples to 512)
// WaveHC library already defines a Timer1 interrupt handler. Since we
// want to use the stock library and not require a special fork, Timer2
// is used for a sample-playing interrupt here. As it's only an 8-bit
// timer, a sizeable prescaler is used (32:1) to generate intervals
// spanning the desired range (~4.8 KHz to ~19 KHz, or +/- 1 octave
// from the sampling frequency). This does limit the available number
// of speed 'steps' in between (about 79 total), but seems enough.
TCCR2A = _BV(WGM21) | _BV(WGM20); // Mode 7 (fast PWM), OC2 disconnected
TCCR2B = _BV(WGM22) | _BV(CS21) | _BV(CS20); // 32:1 prescale
OCR2A = map(pitch, 0, 1023,
F_CPU / 32 / (9615 / 2), // Lowest pitch = -1 octave
F_CPU / 32 / (9615 * 2)); // Highest pitch = +1 octave
// Start up ADC in free-run mode for audio sampling:
DIDR0 |= _BV(ADC0D); // Disable digital input buffer on ADC0
ADMUX = ADC_CHANNEL; // Channel sel, right-adj, AREF to 3.3V regulator
ADCSRB = 0; // Free-run mode
ADCSRA = _BV(ADEN) | // Enable ADC
_BV(ADSC) | // Start conversions
_BV(ADATE) | // Auto-trigger enable
_BV(ADIE) | // Interrupt enable
_BV(ADPS2) | // 128:1 prescale...
_BV(ADPS1) | // ...yields 125 KHz ADC clock...
_BV(ADPS0); // ...13 cycles/conversion = ~9615 Hz
TIMSK2 |= _BV(TOIE2); // Enable Timer2 overflow interrupt
sei(); // Enable interrupts
}
void stopPitchShift() {
ADCSRA = adc_save; // Disable ADC interrupt and allow normal use
TIMSK2 = 0; // Disable Timer2 Interrupt
}
ISR(ADC_vect, ISR_BLOCK) { // ADC conversion complete
// Save old sample from 'in' position to xfade buffer:
buffer1[nSamples + xf] = buffer1[in];
buffer2[nSamples + xf] = buffer2[in];
if(++xf >= XFADE) xf = 0;
// Store new value in sample buffers:
buffer1[in] = ADCL; // MUST read ADCL first!
buffer2[in] = ADCH;
if(++in >= nSamples) in = 0;
}
ISR(TIMER2_OVF_vect) { // Playback interrupt
uint16_t s;
uint8_t w, inv, hi, lo, bit;
int o2, i2, pos;
// Cross fade around circular buffer 'seam'.
if((o2 = (int)out) == (i2 = (int)in)) {
// Sample positions coincide. Use cross-fade buffer data directly.
pos = nSamples + xf;
hi = (buffer2[pos] << 2) | (buffer1[pos] >> 6); // Expand 10-bit data
lo = (buffer1[pos] << 2) | buffer2[pos]; // to 12 bits
}
else if((o2 < i2) && (o2 > (i2 - XFADE))) {
// Output sample is close to end of input samples. Cross-fade to
// avoid click. The shift operations here assume that XFADE is 16;
// will need adjustment if that changes.
w = in - out; // Weight of sample (1-n)
inv = XFADE - w; // Weight of xfade
pos = nSamples + ((inv + xf) % XFADE);
s = ((buffer2[out] << 8) | buffer1[out]) * w +
((buffer2[pos] << 8) | buffer1[pos]) * inv;
hi = s >> 10; // Shift 14 bit result
lo = s >> 2; // down to 12 bits
}
else if (o2 > (i2 + nSamples - XFADE)) {
// More cross-fade condition
w = in + nSamples - out;
inv = XFADE - w;
pos = nSamples + ((inv + xf) % XFADE);
s = ((buffer2[out] << 8) | buffer1[out]) * w +
((buffer2[pos] << 8) | buffer1[pos]) * inv;
hi = s >> 10; // Shift 14 bit result
lo = s >> 2; // down to 12 bits
}
else {
// Input and output counters don't coincide -- just use sample directly.
hi = (buffer2[out] << 2) | (buffer1[out] >> 6); // Expand 10-bit data
lo = (buffer1[out] << 2) | buffer2[out]; // to 12 bits
}
// Might be possible to tweak 'hi' and 'lo' at this point to achieve
// different voice modulations -- robot effect, etc.?
DAC_CS_PORT &= ~_BV(DAC_CS); // Select DAC
// Clock out 4 bits DAC config (not in loop because it's constant)
DAC_DI_PORT &= ~_BV(DAC_DI); // 0 = Select DAC A, unbuffered
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
DAC_DI_PORT |= _BV(DAC_DI); // 1X gain, enable = 1
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
for(bit=0x08; bit; bit>>=1) { // Clock out first 4 bits of data
if(hi & bit) DAC_DI_PORT |= _BV(DAC_DI);
else DAC_DI_PORT &= ~_BV(DAC_DI);
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
}
for(bit=0x80; bit; bit>>=1) { // Clock out last 8 bits of data
if(lo & bit) DAC_DI_PORT |= _BV(DAC_DI);
else DAC_DI_PORT &= ~_BV(DAC_DI);
DAC_CLK_PORT |= _BV(DAC_CLK);
DAC_CLK_PORT &= ~_BV(DAC_CLK);
}
DAC_CS_PORT |= _BV(DAC_CS); // Unselect DAC
if(++out >= nSamples) out = 0;
}
This is a music visualizer that simulates the starry night sky.
A music-loving friend of mine once told me he missed seeing the stars at night after coming to Pittsburgh. This project began as an idea for a present for that friend. I liked the idea of a portable, personal set of stars that could be charmed to life by playing music. The stars react to new notes being played, and the aurora appears once the music reaches a certain volume and has continued for a certain duration. (This may not be very obvious in the video at the beginning because I wasn't playing the notes hard enough. Also, pardon my rustiness on piano; I haven't really played in two years.)
The end product uses an Arduino Mega 2560, with an Electret Mic Amplifier for sound input and loads of LEDs for display. Frequency analysis utilizes code from Adafruit's Piccolo (https://github.com/adafruit/piccolo), which uses Elm-Chan's FFT (Fast Fourier Transform) library.
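For a sense of that pipeline, here is a minimal sketch of the capture-FFT-spectrum flow, assuming Elm-Chan's ffft library as bundled with Piccolo (FFT_N = 128); using analogRead instead of Piccolo's free-running ADC interrupt, and the magnitude threshold, are my simplifications, not the project's code.

#include <ffft.h>  // Elm-Chan's fixed-point FFT, bundled with Piccolo

int16_t   capture[FFT_N];      // raw audio samples
complex_t bfly_buff[FFT_N];    // FFT working ("butterfly") buffer
uint16_t  spectrum[FFT_N / 2]; // magnitude per frequency bin

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Collect FFT_N samples from the mic amp on A0. (Piccolo samples via a
  // free-running ADC interrupt; analogRead just keeps this sketch short.)
  for (int i = 0; i < FFT_N; i++) {
    capture[i] = analogRead(A0) - 512;  // center the signal around zero
  }
  fft_input(capture, bfly_buff);    // real samples -> complex
  fft_execute(bfly_buff);           // in-place FFT
  fft_output(bfly_buff, spectrum);  // complex -> magnitude bins

  // A newly played note shows up as a jump in one or more bins; each bin
  // spans (sample rate / FFT_N) Hz, so low piano notes crowd the low bins.
  for (int i = 1; i < FFT_N / 2; i++) {
    if (spectrum[i] > 60) Serial.println(i);  // hypothetical threshold
  }
}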
The creation of this project was a long and arduous process for me. My initial idea was to have a box filled with blue origami stars (https://fc04.deviantart.net/fs25/f/2008/072/f/e/Straw_Stars_by_Miraka.jpg), with white LEDs hidden inside white origami stars scattered around the box. However, I quickly ran out of material for making the blue origami stars, so I replaced them with black cardstock and tissue paper. The end result still adheres to my original idea in terms of visuals and functionality: the white LEDs remain hidden inside white origami stars, you just can't tell clearly because they are now covered by black tissue paper. The white origami stars spread the light of the white LEDs a little, and if you look carefully, the spread is in the shape of five-pointed stars. I also wanted more white LED stars, but was limited by the number of PWM pins on the board (and later, space for the wires).
I also wanted to actually learn how to use the FFT library to implement more accurate frequency measurement, for picking out roughly which notes are being played. It turned out this is quite difficult due to harmonics, and the library was hard to understand partly due to poor documentation, so I ended up working with code from Adafruit for frequency analysis. A lot of testing went into making it better suited to piano music. After getting the stars to work the way I wanted, I reflected on how I could make the piece more interesting and visually appealing. The easy answer was "colors," so I tried to implement something resembling auroras. The source of the auroras is a number of LEDs. The ideal way to do this would be an LED strip (like this one: https://www.adafruit.com/products/306), but since this was late in the project, I didn't have time to get one.
Physically putting this together was also very hard and time-consuming. I had a lot of trouble getting the connections for all the LEDs to work. I had to basically tear my project apart several times because the conductive copper tape wasn’t effective for LEDs, or wires broke, or solder wasn’t strong enough, etc. In the end my breadboard had almost every single slot filled. Then more things fell apart as I was trying to get everything to fit inside a small box. I didn’t realize all those wires would take up so much space.
Weird, but useful tidbits I’ve learned about Arduino:
– variables with mismatched types won't raise an error at compile time, but can cause strange behavior at runtime
– an error when uploading a program to the Mega board can sometimes be fixed by unplugging a few pins (pins 0 and 1 are shared with the upload serial line)
In the end, I was fairly satisfied with the final product. The stars worked almost as well as I hoped they would. I just wish I were able to show off the craftsmanship that went into this project more. If I summon enough energy, I'd like to replace the RGB LEDs with an RGB strip. It would be difficult, though, because I'd literally have to tear my project apart again, both physically and code-wise. I enjoy watching it while someone else plays the piano. Too bad I can't really watch it while playing at the same time, since I have to watch the keyboard, haha.
[I just realized I accidentally named this the same as that famous van Gogh piece. Ugh. Need better naming skills.]
Code, if you’re interested. It’s messy and long and uncommented:
/* Starry Night -
a music visualizer that simulates the starry night sky.
Parts of the code are written by Adafruit Industries. Distributed under the BSD license.
See https://github.com/adafruit/piccolo for original source.
Additional code written by Jun Huo.
*/
#include <avr/pgmspace.h>
#include <ffft.h>
#include <math.h>
Arousal vs. Time: a seismometer for arousal, as measured by facial expressions.
Overview
One way to infer inner emotional states without access to a person’s thoughts is to observe their facial expressions. As the name suggests, Arousal vs. Time is a visualization of excitement levels over time. The more you deviate from your resting expression, the more excited you are presumed to be. An interesting context for this tool is in everyday social interactions. Watching the seismometer while talking to a friend can generate insights into the nature of that relationship. It might reveal which person tends to lead the conversation, or who is the more introverted of the two. Watching a conversation unfold in this visual manner is both soothing and unsettling.
Inspiration
Arousal vs. Time is the latest iteration in a series of studies. After receiving useful feedback on my last foray into face tracking, I decided to rework the piece to include sound, two styrofoam heads, and text for clarity. Daito Manabe's and Kyle McDonald's face-related projects, "Face Instrument" and "Happy Things," informed the sensibility of this work.
“Face Instrument” – Daito Manabe
“Happy Things” – Kyle McDonald
Implementation
A casual conversation between myself and a friend was recorded on video and in XML files. I wrote the two software components of this artwork – the seismometer and the playback mechanism – in openFrameworks 0.8. I used the following three addons:
ofxXMLSettings – for recording and playing back face data
ofxMtlMapping2D – projection mapping
ofxFaceTracker – tracking facial expressions (a rough sketch of the deviation measurement appears below)
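Purely as a speculative illustration (not the artwork's code), deviation from a resting expression could be computed from ofxFaceTracker's gesture values; the particular gestures and the resting-baseline mechanism here are my assumptions.

#include "ofxFaceTracker.h"
#include <cmath>

// Resting values, sampled once during a calm moment (assumed mechanism).
float restMouth = 0, restBrow = 0;

float currentArousal(ofxFaceTracker& tracker) {
  if (!tracker.getFound()) return 0;  // no face in frame
  float mouth = tracker.getGesture(ofxFaceTracker::MOUTH_HEIGHT);
  float brow  = tracker.getGesture(ofxFaceTracker::LEFT_EYEBROW_HEIGHT);
  // Arousal = how far the current expression strays from rest.
  return std::fabs(mouth - restMouth) + std::fabs(brow - restBrow);
}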
The set
The projection mapping on the styrofoam heads was carried out on two laptops with two pico projectors. I stored facial data in XML files, and recorded video and audio with an HD video camera and an audio recorder.
The audio file was manipulated in Ableton Live to obscure the content of the conversation. I used chroma keying in Adobe Premiere to remove the background of the video, such that the graphs would seem to emerge from behind the heads, and not from some unseen bounding box. Finally, the materials – a video file, two XML files, and an audio file – were brought together in a second “player” application, also built in openFrameworks.
Reflection
Regarding a conceptual impetus for this project, I keep thinking back to a point Professor Ali Momeni made when I showed an earlier version during critique. He questioned not my craft but my language: the fact that I used the word "disingenuous" to describe my project. I still don't have a satisfying response to this, just more speculation.
Am I trying to critique self-quantification by proposing an alienating use of face tracking? Or am I making a sincere attempt to learn something about social interaction through technology? The ambivalence I feel toward the idea of self-quantification leads me to believe that it is worthwhile territory for me to continue to explore.
I made a projection of virtual butterflies which will come land on you (well, your projected silhouette) if you hold still, and will fly away if you move.
Inspiration
This semester, a friend of mine successfully lobbied for the creation of a "Mindfulness Room" in one of the dorms on campus. The room is meant to be a place where students go to relax, meditate, and, as the name implies, be more mindful.
For my final project, I wanted to create something that was for a particular place, and so I chose the Mindfulness Room. Having tried to meditate in the past, I know it can be very challenging to clear your mind and sit entirely still for very long. So, the core of this project was to make something that would make you want to be still (and that would also fit in with the overall look and feel of the room.)
Technical Aspects
Some of the technical hurdles in this project:
Capturing a silhouette from a Kinect cam image. I tried to DIY this initially, which didn’t go well. Instead, I ended up finding this tutorial about integrating a Kinect and PBox2D. I fixed the tutorial code so that it would run in the most recent version of Processing and with the most recent version of the SimpleOpenNI library.
Integrating assorted libraries: SimpleOpenNI, blobDetection, PBox2D, ToxicLibs, standard Java libraries. I almost certainly didn’t actually need to use all of them, but figured that out too late.
Dealing with janky parts of those libraries (e.g., jitteriness in the blobDetection library, fussiness of SimpleOpenNI). Using the libraries made my project possible, but I also couldn’t fix some things about them. I did, however, manage to improve blob detection from the Kinect cam image by filtering out all non-blue pixels (the Kinect highlights a User in blue).
Trying to simulate butterflies flying—with physics. Simulating a whimsical flight path using forces in PBox2D had only passable results. I think it would be easier to create the butterflies' paths in vanilla Processing or with another library (though that might make collision detection far more challenging).
Finding a computationally cheap way to do motion tracking. When I tried simple motion tracking, my program ate all my computer's memory and still didn't run. I ended up taking the Kinect/SimpleOpenNI-provided "Center of Mass" and using that to track motion, which worked well for my purposes (see the sketch after this list).
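That check reduces to something very small; the structure, threshold, and units below are illustrative (the project itself was written in Processing).

// Compare the user's center of mass between frames and call small
// displacements "still".
struct Vec3 { float x, y, z; };

bool isStill(const Vec3& prev, const Vec3& cur, float threshold) {
  float dx = cur.x - prev.x, dy = cur.y - prev.y, dz = cur.z - prev.z;
  // Squared distance avoids a sqrt; the threshold is in whatever units
  // the tracker reports (millimeters for the Kinect's world coordinates).
  return (dx * dx + dy * dy + dz * dz) < threshold * threshold;
}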
Critical Reflection
As I worked on this project, I was unsure throughout whether all the pieces (butterflies, Kinect, etc.) would come together and work well. I think they came together fairly well in the end. Even though the project doesn't yet live up to what I imagined at the beginning, it still does what I essentially wanted it to do—making you want to stay still.
When people saw the project, their general response was “that’s really cool”, which was rewarding. Also, the person in charge of the Mindfulness room liked it enough that she wanted me to figure out how to make it work there long term. (Which could be really logistically difficult, in terms of setup and security because the room is always open and unsupervised, and drilling into the walls to mount things isn’t allowed.)
So, though there's a list of things I think should be better about this project (see below), I think I executed my concept simply, and well given that simplicity.
Things that could be better about this:
Butterflies’ visual appeal. Ideally, the wings would be hinged-together PBox2D objects. And antennae/other details would add a lot.
Butterflies' movement. Could be more butterfly-like.
Attraction to person should probably be more gradual/a few butterflies at a time.
Code cleanliness: not good.
Ragged edge of person’s silhouette should be smooth.
Better capture of user. Sometimes the Kinect refuses to recognize a person as a User, or stops tracking it. This could have to do with how I treat the cam image, or placement, or lighting, or just be part of how I was doing Kinect/SimpleOpenNI. After talking with Golan, I think ditching OpenNI altogether and doing thresholding on the depth image would work best.
Inspired conceptually by websites like Kitten War and classic games like "Would You Rather?", and technically by projects like Post-Circuit Board by the Graffiti Research Lab, This or That is an electronic voting poster that allows passersby to vote on two different options chosen by other strangers.
The poster consists of a voting button and a seven-segment display on each side, as well as a reset button, all controlled by a single ATtiny84. There are no wires on the poster besides the alligator clips connecting to the power; all the traces were made with copper tape.
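The voting logic itself can be quite small. Here's a hedged sketch (not the actual poster code), written as a regular Arduino sketch for clarity; an ATtiny84 would need a software-I2C library such as TinyWireM to drive backpack displays, and the pins and I2C addresses below are assumptions.

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_LEDBackpack.h>

Adafruit_7segment leftDisplay, rightDisplay;
const int LEFT_BTN = 5, RIGHT_BTN = 6, RESET_BTN = 7;  // hypothetical pins
int leftVotes = 0, rightVotes = 0;

void setup() {
  pinMode(LEFT_BTN, INPUT_PULLUP);   // buttons short the pin to ground
  pinMode(RIGHT_BTN, INPUT_PULLUP);
  pinMode(RESET_BTN, INPUT_PULLUP);
  leftDisplay.begin(0x70);   // backpack I2C addresses set by solder jumpers
  rightDisplay.begin(0x71);
}

void loop() {
  if (digitalRead(LEFT_BTN) == LOW)  { leftVotes++;  delay(250); }  // crude debounce
  if (digitalRead(RIGHT_BTN) == LOW) { rightVotes++; delay(250); }
  if (digitalRead(RESET_BTN) == LOW) { leftVotes = rightVotes = 0; }
  leftDisplay.print(leftVotes);   leftDisplay.writeDisplay();
  rightDisplay.print(rightVotes); rightDisplay.writeDisplay();
}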
While we are becoming more interconnected digitally, electronics are becoming more and more personal: our laptops and cellphones are not devices meant to be shared physically, and we even get anxious when they're out of our reach for too long. This or That is a "public" electronic; its charm and fun come from its communal usage.
Older iteration (I’ve since learned the art of making pretty traces!).
At one point I traded my coin cell battery in for a sweet LiPo battery that someone had lying around.
chargin’ a poster whaaat
Code & Circuits
DIAGRAM:
I have a .ai file that still needs cleaning, with both the poster text and lightly drawn traces, that you can use to create a poster of your own, so bear with me! 🙂 Here's a Fritzing diagram for now:
MATERIALS:
Adafruit's 7-Segment LED Displays x2 (these aren't actually soldered directly onto the poster; instead I used header pins on the poster so I could reuse the displays on other projects if I wanted 🙂 )
If you have any questions regarding construction, feel free to email me at maddyvarner (at) gmail (dot) com (preferably with the subject line “This or That”).
Arduinolin is a project designed to investigate the evolution of material possessions in response to electronic trends: it takes a traditional object and recreates it as something that is not only a modern, electrified version of the original but that also extends the object using the capabilities of digital media.
Overview
I decided on a violin as my "traditional object" of choice, mainly because a stringed instrument seemed reasonable: the ability to reprogram the touch sensors on the gloves to play different pitches when activated supports my concept. I immediately researched the evolution of the violin and the electronic violin, all documented in this previous blog post.
Inspiration
The concept was inspired by a conversation I had with my father one night. I was on the phone with him and he was talking about an app he had just purchased for his iPhone. What caught my attention was that he had actually paid over $3.00 for it; I am not in the habit of downloading an application unless it is free. That set me thinking: how many apps have you purchased? How much money have you spent on virtual material? Is it worth it? How will highly valued items that gain value as they age be transferred into the electronic world, and will that transfer ever be successful?
There is also an ecological argument accompanying the evolutionary argument which entails comparing the carbon footprint of apps and actual instruments, and how this could all eventually be handled by a single piece of technology.
Technical Aspects
The bulk of the work in this project was figuring out how to wire capacitive touch sensors and make them individually responsive to human touch. I had considered going with something like a pressure sensor, which returns a different value depending on where on the strip pressure is applied, but turned this down in favor of materials that could easily be translated onto conductive fabric for the purpose of making the final product wearable. In retrospect, I greatly regret not pursuing the first option, which would have made for a much smoother transition between instruments and a greater degree of musicality.
Additionally, I was unsure how I was going to wire everything together onto the glove. The palm of the hand alone features twenty-four wires, all interwoven into the glove itself using conductive thread. In the end I used jumper cables to transfer data from the individual pins to the glove, touching the end of each jumper cable to the thread. From there, all the Arduino does is loop through each pin, and for each pin loop through an array of notes; if the pin corresponds to a note, it plays that note. I also have some "fun" touchpads at the moment, which loop through a series of notes.
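That pin-to-note loop is roughly the sketch below; the pad count, pin numbers, and note frequencies are illustrative stand-ins, not the project's actual values.

const int NUM_PADS = 8;
const int padPins[NUM_PADS] = {2, 3, 4, 5, 6, 7, 8, 9};  // assumed wiring
const int noteHz[NUM_PADS] = {196, 220, 247, 262, 294, 330, 349, 392};  // G3..G4
const int PIEZO_PIN = 11;

void setup() {
  for (int i = 0; i < NUM_PADS; i++) pinMode(padPins[i], INPUT);
}

void loop() {
  for (int i = 0; i < NUM_PADS; i++) {
    if (digitalRead(padPins[i]) == HIGH) {  // this pad is being touched
      tone(PIEZO_PIN, noteHz[i], 50);       // sound the matching note
    }
  }
}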
Critical Reflection
There are many existing versions of this project, ranging in degrees of professionalism from hobbyists working out of their garages to commercially marketed products (typically designed to help one learn an instrument). I see few limits to the extent to which I could improve upon my project, at least in theory. However, there are two things I would like to improve more than anything else.
Firstly, I am greatly irritated by the fact that although I have two gloves, the left hand representing the violin and the right hand representing the bow, the right hand does not actually need to do anything for the violin to work. Although in principle placing an accelerometer on the right glove would not be difficult, convincing the accelerometer to communicate with the Arduino might have been something of a challenge, perhaps involving a wireless shield.
Secondly, I am not impressed by the sound quality at all. I understand that it is very possible to synthesize sounds using MaxMSP, which would be a far more rewarding result than the current buzzes provided by the piezo element. It would also be very rewarding to have a proper headphone jack for audio output. I enjoy the personal experience my current edition supplies, but would certainly enhance it wherever possible. (Sticking a piezo buzzer in one's hat does not necessarily result in the best audio.)
This crafty measuring device is meant to draw attention to the daily usage of revolving doors at Carnegie Mellon’s University Center building. It logs the time, proximity, and rpm data, but also incites a little competitive spirit on its free voltage.
This project revisited our previous class assignment that utilized seven segment displays to capture an interesting measurement. In my original idea, I wanted to choose a unique and fun way to portray numbers, and what better way to do that than with rankings? The reason I chose revolving doors as my subject matter was more or less because I was interested in the calculations involved with an accelerometer.
But as I developed my idea in this assignment, I wanted to convey more useful information about my subjects, the revolving doors. The research changed its direction from “interesting calculations” to bringing attention to those mundane doors that we pass through without a second thought. And I have to thank Maddy Varner and Golan Levin for reminding me that an extra seven segment display and data logging shield were just the things I needed to accomplish this.
That said, the actual wiring of all these new devices, as well as figuring out their libraries, was the most technical aspect of the project. Through this process, I came to understand JUST HOW INVALUABLE neat soldering can be. But in the end, the effort was definitely worth it. (See below for Fritzing and code.) I had some technical difficulties along the way (I seem to have jinxed technology a lot this semester), but I find the data that came from my 7 hours of installation really valuable. The animated GIF below shows the plotted data from the data logging shield, and there are clear patterns of usage for these doors. (Click the image.)
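As a sense of the accelerometer math involved: a sensor mounted on a door wing at radius r experiences centripetal acceleration a = ω²r, so angular speed (and rpm) falls out directly. The radius and reading below are back-of-envelope assumptions, not measurements from the installation.

#include <math.h>

// Convert a centripetal-acceleration reading into door rpm, assuming the
// accelerometer sits radius_m from the door's axis of rotation.
float rpmFromAccel(float accel_ms2, float radius_m) {
  float omega = sqrt(accel_ms2 / radius_m);  // angular velocity, rad/s
  return omega * 60.0 / (2.0 * M_PI);        // rad/s -> revolutions per minute
}

// Example: 1.5 m/s^2 at r = 0.8 m gives ~1.37 rad/s, or about 13 rpm.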
Facts and Figures:
124 people used the door in those 7 hours
The highest score was 40 rpm
There were 4 notable mishaps
Of course, I won't forget to address the most exciting–and hazardous–part of this project: the participants. I may have underestimated the competitive spirit of college students, because I felt fear watching some of them. My installation time was cut short because the UC staff asked me not to display the high-score portion, and I personally thought things could get worse since it is finals week. On a side note, I was extremely happy with how the magic arm stabilized the box. The whole contraption was incredibly sturdy.
In conclusion, I am very satisfied with this project. Although video editing is not my strong point, I did enjoy watching over my installation and seeing people have fun and express genuine interest.
Supported in part by a microgrant from the Frank-Ratchye Fund For Art at the Frontier
URL: bit.ly/revolving-games