Using MongoDB and MapReduce, analysis of my dataset has finally become tractable. It’s still early, but I feel like I have the analysis framework mostly taken care of, which gives me time to focus on the visualization.
This simple (and ugly) visualization of downtown Pittsburgh and the surrounding area maps the average speed of buses to colors. Each pixel represents a roughly 500-foot-square region. The color interpolates from red, for regions where buses travel slowest on average, to green, for areas where buses move quickly. For time’s sake, I only sampled around two percent of the data when creating this image; an image sampling the entire dataset would have fewer holes and less noise.
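For a flavor of the MapReduce step, here’s a simplified sketch of a per-cell averaging job in the mongo shell (the collection and field names — readings, lat, lon, speed, cellSpeeds — are illustrative placeholders, not the actual schema):

```javascript
// Bucket each GPS reading into a ~500 ft grid cell and emit its speed.
var map = function () {
  // ~0.0014 degrees of latitude is roughly 500 ft; longitude cells are
  // somewhat narrower at Pittsburgh's latitude, but close enough here.
  var cell = 0.0014;
  emit({ row: Math.floor(this.lat / cell), col: Math.floor(this.lon / cell) },
       { count: 1, total: this.speed });
};

// Sum counts and speed totals for all readings in a cell.
var reduce = function (key, values) {
  var out = { count: 0, total: 0 };
  values.forEach(function (v) {
    out.count += v.count;
    out.total += v.total;
  });
  return out;
};

// Turn the totals into a per-cell average speed.
var finalize = function (key, value) {
  value.avgSpeed = value.total / value.count;
  return value;
};

db.readings.mapReduce(map, reduce, { out: "cellSpeeds", finalize: finalize });
```

Mapping each cell’s avgSpeed onto the red-to-green ramp then produces the image above.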
Next I’m going to work on creating more effective and aesthetic visualizations of this data and on extracting new information through analysis. There is some low-hanging fruit here (bicubic interpolation of the color values, a map overlay), and also some more challenging directions, like map distortion for isochrone maps.
I have a good grasp of how the Kinect works, so I don’t think there’s going to be much technical challenge there. The other thing that I’m kinda toying with is integrating an iPad into the loop. I’m starting to look at programming for the iPad and evaluating whether this can be done within the time frame. I’ll be a happy camper if I can integrate that and build a fun self-discovery app.
Many of the hard parts of my project have been conquered. I’ve successfully implemented cross-domain YouTube video queries that load search results and let the user select videos to load into the application. I can then display multiple HTML5 videos at a time. Finally, I can programmatically play clips of a video and make them loop.
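For reference, the clip-looping part boils down to a few lines of JavaScript; this stripped-down sketch assumes a placeholder element id and clip bounds:

```javascript
// Loop one clip of an HTML5 video: whenever playback passes the clip's
// end time, jump back to its start. The id and times are placeholders.
var video = document.getElementById("clip-player");
var clipStart = 12.0; // seconds
var clipEnd = 15.5;

video.addEventListener("timeupdate", function () {
  if (video.currentTime >= clipEnd) {
    video.currentTime = clipStart; // rewind to loop the clip
  }
});

video.currentTime = clipStart;
video.play();
```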
The next difficult parts will be creating a global clock, implementing a GUI for clip selection, and making an interactive timeline. Now that I know the video player works, the rest will be a matter of testing and implementation.
For my hard part done, I’ve solved the problem of how to insert characters into the landscapes. I’ve got 2D sprites that can move around the landscape, drawn at the correct height however the landscape moves beneath them, and even scaled according to how near or far away they are. In this project the x/y plane is rotated: the grid the landscape is drawn onto is the x/y plane, and the rising landscape rises along the z axis. I had to first map sprite coordinates onto this coordinate system, then figure out how to draw the sprites on the x/z plane, perpendicular to the landscape. As I’m not used to OpenGL, this ended up being a lot harder than it sounds at first.
My next steps are related to how my project has evolved. At this point, I’m feeling fairly certain that I want to turn this project into a 2-player cooperative game. One player will control an onscreen character with a controller, while the other builds the terrain this character can walk on. The character-player will have to avoid falling into the water (lava?), evade roaming flying enemies (birds? jellyfish? clouds of death?) and other hostile monsters, and visit four different islands before returning to the starting location. The builder-player will build the paths the character-player can walk on, but will have a limited amount of resources to work with. By collecting falling objects, the character-player can unlock the ability for the builder-player to use more colors of clay to build with, making things easier for both of them.
More interaction will be available; for example, the builder-player can protect the character-player by building a mountain behind him that pursuing monsters can’t get past (though it blocks the character’s own backward progress as well). Designing these enemies and interactions is probably my next task, along with coding a few proofs of concept to test various gameplay mechanics. I’ll also probably be coding a basic color recognition/tracking system to let me turn various colors of clay or colored blocks on and off (or give them different effects).
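As a rough illustration of the color-tracking piece, here’s a minimal threshold-and-centroid sketch in browser JavaScript (the real system won’t be browser JavaScript, and the target color and thresholds are made-up placeholders, not tuned values):

```javascript
// Threshold webcam pixels against a target color and report the centroid
// of the matching blob.
var video = document.createElement("video");
var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");
canvas.width = 320;
canvas.height = 240;

var target = { r: 200, g: 60, b: 60 }; // hypothetical "red clay" color

navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  video.srcObject = stream;
  video.play();
  setInterval(track, 100); // check for the color ten times a second
});

function track() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  var sumX = 0, sumY = 0, count = 0;
  for (var i = 0; i < data.length; i += 4) {
    var dr = data[i] - target.r;
    var dg = data[i + 1] - target.g;
    var db = data[i + 2] - target.b;
    if (dr * dr + dg * dg + db * db < 60 * 60) { // pixel close to target
      var p = i / 4;
      sumX += p % canvas.width;
      sumY += Math.floor(p / canvas.width);
      count++;
    }
  }
  if (count > 50) { // enough matching pixels: treat the color as "on"
    console.log("color on at", Math.round(sumX / count), Math.round(sumY / count));
  }
}
```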
I’m VERY MUCH looking for any and all feedback on this, or just people to playtest stuff once I’ve got a build, so talk to me, comment on this, or email me (timothy at cmu dot edu).
Thanks!
I have users’ button presses on their cell phones being delivered via Twilio (thank you to the PiratePad user who recommended it) to a Java application on my computer, using the HttpServer class in Java 6. You call my number and press buttons, and the numbers you press show up on my screen after about a half-second delay. I can also get your phone number (useful for differentiating users) and the home city/state/zip of your phone. And I can record your voice and play audio files to you as well. So the basic technical challenge is solved.

I still am not decided on my project, though. I’ve got several ideas for games that could be played on a large screen with many players. From the big-screen-lots-of-users interactions I’ve seen in the past, the challenge is to give each user enough control over the system to become invested in it; one should not feel one is fighting the other users in order to experience the game. I’m also thinking about what sort of things it would be cool to control anonymously with your phone. Maybe a giant projection where each person gets control of a few pixels, and together people could build images. Or a radio station. Or a robot.

The challenge with button-based interaction is that there’s no fine control: there are only 12 binary switches, you can only push one at a time, and there’s a half-second delay before the system reacts to your choice. Perhaps a voice-based interaction would be better, although I don’t have real-time voice with Twilio; I can only record you and then download the recording asynchronously.
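To make the mechanism concrete, here’s a minimal sketch of this kind of webhook receiver, written in Node rather than the Java 6 HttpServer actually used (Digits, From, and FromCity are Twilio’s standard request parameters; the port is arbitrary):

```javascript
// Tiny Twilio voice webhook: Twilio POSTs form-encoded parameters for
// each keypress; we log them and reply with TwiML asking for the next one.
var http = require("http");
var querystring = require("querystring");

var server = http.createServer(function (req, res) {
  var body = "";
  req.on("data", function (chunk) { body += chunk; });
  req.on("end", function () {
    var params = querystring.parse(body);
    // The caller's number, home city, and the digit they pressed.
    console.log(params.From + " (" + params.FromCity + ") pressed " + params.Digits);

    // TwiML response: gather one more keypress from the caller.
    res.writeHead(200, { "Content-Type": "text/xml" });
    res.end('<Response><Gather numDigits="1"/></Response>');
  });
});

server.listen(8000); // Twilio's voice webhook URL points here
```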
I started this project without a clear idea of how I wanted it to look or what sort of interaction I wanted to use. But so far, I’ve been working with the trackpad on my computer to create a little fingerpainting program, so I am going to narrow my focus and see how I can use this in a way that suits my project. I’ve also been able to integrate some sound output into the fingerpainting program, so that the way you move your fingers around the trackpad controls the sound coming out of the computer. I think this is a good foundation for implementing the rest of my project; I just need to make some decisions on how I want it to work in the end.
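As a rough illustration of the position-to-sound mapping, here’s a browser sketch using mouse position and the Web Audio API as stand-ins for the actual trackpad setup (the frequency and volume ranges are arbitrary choices):

```javascript
// Map pointer position to sound: x controls pitch, y controls volume.
// (Some browsers require a click on the page before audio will start.)
var ctx = new AudioContext();
var osc = ctx.createOscillator();
var gain = ctx.createGain();
osc.connect(gain);
gain.connect(ctx.destination);
osc.start();

document.addEventListener("mousemove", function (e) {
  osc.frequency.value = 200 + 1000 * (e.clientX / window.innerWidth); // 200-1200 Hz
  gain.gain.value = 1 - e.clientY / window.innerHeight; // louder near the top
});
```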
1. Source of inspiration
My source of inspiration for this project is Hakim El Hattab’s HTML5 blob demo. I really liked the fluidity and interaction afforded by the demo, and I believe it would be great to experiment with this interaction further.
2. Artistic or design goal
I want to create a space where the user can interact with fluid blobs like these in an intuitive and enjoyable manner. Perhaps the user can interact with the blobs via a webcam or mouse. Another thought is to create an online synchronous platform where users can interact with one another’s blobs.
3. Technical hurdles
If the goal is to create an online synchronous platform, there would be hurdles in how synchronous play would be implemented.
4. Question to ask group
Any ideas on how to implement an online synchronous platform?
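For reference, here’s a minimal sketch of one common approach (a relay server, written here with Node and the ws package; both are assumptions rather than a chosen stack): each client sends its blob state, and the server rebroadcasts it to everyone else.

```javascript
// Tiny relay server: broadcast each user's blob state to all other clients.
var WebSocket = require("ws");
var wss = new WebSocket.Server({ port: 8080 });

wss.on("connection", function (ws) {
  ws.on("message", function (message) {
    // Relay this client's message (e.g. JSON of blob control-point
    // positions) to every other connected client.
    wss.clients.forEach(function (client) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});
```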
Inspired by the stunning Japanese drum show KODO (http://goo.gl/9Y9OK) in Pittsburgh a couple of weeks ago, Cheng and I had the idea of re-performing those varied and euphoric drum patterns using a set of “drum robots”. In the show there were fewer than 10 performers, each handling one or two drums; no one’s individual pattern was very complicated, but as a whole the simple patterns mixed together into something subtle and amazing.
In this project, we are trying to build a set of drumbots. Each bot has a certain function, like LOOP, 2xSPEED, 2/SPEED, ONEtoTWO, etc.: each bot gets the drum pattern from its parent drum, generates a new pattern based on its own function, and passes it to the next drum. (We were inspired by Andy Huntington’s TapTap project. It’s awesome.) Each drum can also have different types of drumstick (wood, metal, etc.) to make different sounds, and some of the bots may even have 2 or 3 drumsticks to pass their patterns to more drumbots.
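To make the chaining idea concrete, here’s a toy sketch in JavaScript; representing a pattern as an array of inter-beat intervals in milliseconds is just an assumption for illustration, not our actual format:

```javascript
// Each bot is a function from one beat pattern to the next.
function doubleSpeed(pattern) { // the 2xSPEED bot
  return pattern.map(function (interval) { return interval / 2; });
}

function halfSpeed(pattern) { // the 2/SPEED bot
  return pattern.map(function (interval) { return interval * 2; });
}

function loop(pattern) { // the LOOP bot: play the parent's pattern twice
  return pattern.concat(pattern);
}

// Chain the bots: each one hears its parent and produces its own pattern.
var chain = [loop, doubleSpeed, halfSpeed];
var pattern = [500, 250, 250, 500]; // the root drummer's pattern

chain.forEach(function (bot) {
  pattern = bot(pattern);
  console.log(pattern);
});
```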
To test our idea, I spent a few days writing a quick-and-dirty drum simulator in Processing. It’s pretty bad looking and simple, but it does show the potential of the project.
We have sketched several potential looks for the drumbots and haven’t made a final decision, but we feel a cube could be really compelling, not only because it is simple and stable, but also because it has the potential to create more interesting combinations. A cube has 6 faces, which means it may receive the beat pattern not only through its top face but also through its 4 side faces. All faces are the same, which means we can lay robots down on one side or pile them up, like the right picture below.
The mechanical side is annoying. We are trying to use an Arduino as the microcontroller, a solenoid as the actuator, and a piezo sensor as the input, with no USB, just a battery. But after doing some experiments, we found the solenoid eats a lot of power and the beat still isn’t good enough. This is another problem we need to solve besides the look-and-feel.
So there hasn’t been much precedent for this, since contemporary knitting machines are ungodly expensive, and the older ones people have at home (generally the Brother models) are so unwieldy that changing stitches is more of a pain this way than by hand. But if I can figure out some way to make it work, I think knitting has ridiculous potential for generative/algorithmic garment making, since it is possible to create intense volume/pattern in one seamless piece of fabric simply through a mathematical output of pattern. It would be excellent just to be able to “print” these creations on the spot, and to do more than just Fair Isle.
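As a toy illustration of “a mathematical output of pattern” (my own made-up example, not a real machine format), here’s a few lines of JavaScript generating a knit/purl stitch chart from a simple rule:

```javascript
// Generate a stitch chart from a mathematical rule: K = knit, P = purl.
var rows = 16, cols = 32;
for (var r = 0; r < rows; r++) {
  var line = "";
  for (var c = 0; c < cols; c++) {
    // An XOR-based rule produces an interlocking triangular texture.
    line += ((r ^ c) % 4 < 2) ? "K" : "P";
  }
  console.log(line);
}
```

A hacked machine could in principle "print" charts like this directly, row by row.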
I sent off a couple of emails to hackpgh, but I’ll try to stop by their actual office today or tomorrow and just ask them in person if I can borrow/use their machine.
Here’s a pretty well-known knitting machine hack for printing out images in Fair Isle. This is awesome, but I was hoping to play more with volume and texture than with color.
Computational Crochet
Sonya Baumel crocheted these crazy gloves based on bacteria placement on the skin.
User Interactive Particles
I also really enjoyed the work we did for the Kinect project, and would be interested in pursuing more complicated user-generated forms. These two short films by FIELD design are particularly lovely.
Generative Jewelry
I also would be interested in continuing my work from Project 4. Well, not really continuing, since I want to abandon flocking entirely and focus on getting the snakes (or a different generative system) up and running to create the meshes for some more aesthetically pleasing forms. Aside from snakes, I want to look into Voronoi diagrams, like the butterflies on the barbarian blog.