Getting Closer

by Max Hawkins @ 6:17 am 13 April 2011

Using MongoDB and MapReduce, analysis of my dataset has finally become tractable. It’s still early, but I feel like I have the analysis framework mostly taken care of, which gives me time to focus on the visualization.
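For the curious, here is a rough sketch of the kind of map/reduce pass involved, written against pymongo (3.x or earlier, where `Collection.map_reduce` still exists); the collection and field names (`gps_points`, `lat`, `lon`, `speed`) are placeholders rather than the real schema.

```python
# Rough sketch: average bus speed per grid cell via MongoDB map/reduce.
from pymongo import MongoClient
from bson.code import Code

db = MongoClient().transit

# ~500 ft is roughly 0.0014 degrees of latitude, used here as the grid cell size.
mapper = Code("""
    function () {
        var cell = 0.0014;
        emit({ x: Math.floor(this.lon / cell), y: Math.floor(this.lat / cell) },
             { total: this.speed, count: 1 });
    }
""")

reducer = Code("""
    function (key, values) {
        var out = { total: 0, count: 0 };
        values.forEach(function (v) { out.total += v.total; out.count += v.count; });
        return out;
    }
""")

# Writes one document per grid cell into the avg_speed_by_cell collection.
cells = db.gps_points.map_reduce(mapper, reducer, "avg_speed_by_cell")
for doc in cells.find().limit(5):
    print(doc["_id"], doc["value"]["total"] / doc["value"]["count"])
```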

This simple (and ugly) visualization of downtown Pittsburgh and the surrounding area maps the average speed of buses to colors. Each pixel represents a roughly 500-foot-square region. The color interpolates from red, for regions where buses travel slowest on average, to green, for areas where buses are moving quickly. For time’s sake, I only sampled around two percent of the data when creating this image. An image sampling the entire dataset would have fewer holes and less noise.
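The color mapping itself is just a linear interpolation between red and green, something like this sketch (the speed bounds here are placeholders):

```python
def speed_to_rgb(speed, slowest=2.0, fastest=25.0):
    """Map a cell's average speed (mph) to an (r, g, b) color, red -> green."""
    t = (speed - slowest) / (fastest - slowest)
    t = max(0.0, min(1.0, t))  # clamp to [0, 1]
    return (int(255 * (1 - t)), int(255 * t), 0)

# speed_to_rgb(2.0) -> (255, 0, 0); speed_to_rgb(25.0) -> (0, 255, 0)
```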

Next I’m going to work on creating more effective and aesthetic visualizations of this data and on extracting new data through analysis. There is some low-hanging fruit here (bicubic interpolation of color values, a map overlay) and also some more challenging directions I could take things, like map distortion for isochronic maps.

KinectPortal – Progress Update

by Ward Penney @ 1:35 am

Kinect Portal

KinectPortal Sketch

Tasks

  1. Develop acrylic panels with handles and vellum on one side for users
  2. Threshold-capture the blob contours with OpenCV in the depth grayscale image
  3. Discover corners with dot-product testing
  4. Apply an inverse perspective transform to recover the true “square” and discount user tilt
  5. Draw the 3D image
  6. Projection-map back onto the square in 3D space

Currently working on the threshold capture (step 2). It’s just gonna take all weekend.
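For reference, here is roughly what steps 2-4 look like sketched in Python/OpenCV; the thresholds, sizes, and corner handling are placeholders, not the final implementation.

```python
# Sketch of steps 2-4: threshold the depth image, grab the panel blob,
# approximate its four corners, and undo the perspective/tilt.
import cv2
import numpy as np

def find_panel(depth_gray, near=80, far=160, out_size=512):
    # Step 2: keep only depth values in the band where the acrylic panel sits.
    mask = ((depth_gray > near) & (depth_gray < far)).astype(np.uint8) * 255

    # Largest blob contour is assumed to be the panel (OpenCV 4.x return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    panel = max(contours, key=cv2.contourArea)

    # Step 3: reduce the contour to four corner points.
    corners = cv2.approxPolyDP(panel, 0.02 * cv2.arcLength(panel, True), True)
    if len(corners) != 4:
        return None

    # Step 4: inverse perspective transform to a fronto-parallel square,
    # discounting however the user is tilting the panel. (A real version
    # would first sort the corners into a consistent order.)
    src = corners.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [out_size, 0], [out_size, out_size], [0, out_size]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(depth_gray, H, (out_size, out_size))
```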

Progress Blog Post

by Chong Han Chua @ 12:58 am

I have a good grasp of how the Kinect works, so I don’t think there’s going to be much technical challenge there. The other thing I’m kinda toying with is integrating an iPad into the loop. I’m starting to look at programming for the iPad and evaluating whether it can be done within the time frame. I’ll be a happy camper if I can integrate that and build a fun self-discovery app.

Mark Shuster – Final – Progress

by mshuster @ 12:50 am

Many of the hard parts of my project have been conquered. I’ve successfully implemented cross-domain YouTube video queries that load search results and let the user select videos to load into the application. I can then display multiple HTML5 videos at a time. Finally, I can programmatically play clips of the videos and make them loop.

The next difficult parts will be creating a global clock, implementing a GUI for clip selection, and making an interactive timeline. Now that I know the video player will work, the rest will be a matter of testing and implementation.
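Not the actual HTML5/JavaScript player code, but the global-clock idea boils down to a bit of modular arithmetic; a quick sketch (the clip times are made up):

```python
# One shared clock drives every clip; each clip loops between its in and out
# points by deriving a local playhead from the global time.
import time

class Clip:
    def __init__(self, in_point, out_point):
        self.in_point = in_point      # seconds into the source video
        self.out_point = out_point

    def playhead(self, global_t):
        """Where this clip should be seeked to at global time global_t."""
        length = self.out_point - self.in_point
        return self.in_point + (global_t % length)

clips = [Clip(12.0, 14.5), Clip(3.0, 3.8)]
start = time.time()
for _ in range(3):                    # in the real player, a timer would seek each <video>
    t = time.time() - start
    print([round(c.playhead(t), 2) for c in clips])
    time.sleep(0.5)
```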

Tim Sherman – Final Project – Hard Part Done

by Timothy Sherman @ 12:25 am

For my hard part done, I’ve solved the problem of how to insert characters into the landscapes. I’ve got 2D sprites that can move around the landscape, get drawn at the correct level however the landscape is moving under them, and even scale to different heights when they’re farther away or closer. In this project the x/y plane is rotated, so the grid the landscape is drawn onto is the x/y plane and the landscape rises along the z axis. I first had to map sprite coordinates onto this coordinate system, then figure out how to draw the sprites on the x/z plane, perpendicular to the landscape. As I’m not used to OpenGL, this ended up being a lot harder than it sounds.
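For reference, the coordinate math comes down to something like this sketch (the names and the distance-scaling factor are placeholders, not the actual OpenGL code):

```python
# Each sprite is a quad standing upright on the x/z plane at its landscape cell:
# its base sits at the landscape height, and it shrinks as it gets farther away.
def sprite_quad(gx, gy, height_at, sprite_w=0.6, sprite_h=1.0, cam_y=-10.0):
    """Return the four (x, y, z) corners of a sprite standing at grid cell (gx, gy)."""
    base_z = height_at(gx, gy)                    # sit on top of the landscape
    scale = cam_y / (cam_y - gy)                  # cheap "farther = smaller" factor
    w, h = sprite_w * scale, sprite_h * scale
    return [
        (gx - w / 2, gy, base_z),                 # bottom-left
        (gx + w / 2, gy, base_z),                 # bottom-right
        (gx + w / 2, gy, base_z + h),             # top-right
        (gx - w / 2, gy, base_z + h),             # top-left
    ]

# With a flat landscape: sprite_quad(2, 3, lambda x, y: 0.0)
```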

My next steps are related to how my project has evolved. At this point, I’m feeling fairly certain that I want to turn this project into a 2-player, cooperative game. One player will control an onscreen character with a controller, while another will build the terrain this character can walk on. The character-player will have to avoid falling into the water (lava?), roaming flying enemies (birds? jellyfish? clouds of death?), and other hostile monsters, and visit four different islands before returning to the starting location. The builder-player will build the paths the character-player can walk on, but will have a limited amount of resources to work with. By collecting falling objects, the character-player can unlock the ability for the builder-player to use more colors of clay to build with, making things easier for themselves as well.

More interactions will be available; for example, the builder-player can protect the character-player by building a mountain behind him that pursuing monsters can’t get past (but which blocks the character’s own backward progression as well). Designing these enemies and interactions is probably my next task, along with coding a few proofs of concept to test various gameplay mechanics. I’ll also probably code a basic color recognition/tracking system that lets me turn various colors of clay or colored blocks on and off (or give them different effects).

I’m VERY MUCH looking for any and all feedback on this, or just people to playtest stuff once I’ve got a build, so talk to me, comment on this, or email me (timothy at cmu dot edu).

Thanks!

Regarding Buttons

by ppm @ 12:07 am

I have users’ button presses on their cell phones being delivered via Twilio (thank you to the pirate pader who recommended it) to a Java application on my computer, using the HttpServer class in Java 6. You call my number, press buttons, and the digits you press show up on my screen after about a half-second delay. I can also get your phone number (useful for differentiating users) and the home city/state/zip of your phone. And I can record your voice and play audio files to you as well. So the basic technical challenge is solved.

I still haven’t decided on my project, though. I’ve got several ideas for games that could be played on a large screen with many players. From the big-screen, many-users interactions I’ve seen in the past, the challenge is to give each user enough control over the system to become invested in it. One should not feel one is fighting the other users in order to experience the game.

I’m also thinking about what sort of things it would be cool to control anonymously with your phone. Maybe a giant projection where each person gets control of a few pixels, and together people could build images. Or a radio station. Or a robot.

The challenge with button-based interaction is that there’s no fine control: there are only 12 binary switches, you can only push one at a time, and there’s a half-second delay before the system reacts to your choice. Perhaps a voice-based interaction would be better, although I don’t have real-time voice with Twilio; I can only record you and then download the recording asynchronously.
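The real server is the Java 6 HttpServer mentioned above; purely for illustration, here is the same webhook flow sketched in Python/Flask, using Twilio’s standard parameters (Digits, From, FromCity) and made-up route paths.

```python
# Sketch of the Twilio keypress webhook flow (not the actual Java server).
from flask import Flask, request

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    # Greet the caller and gather keypresses, one digit at a time.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        '<Gather action="/digits" method="POST" numDigits="1">'
        "<Say>Press any key.</Say>"
        "</Gather>"
        "</Response>",
        200,
        {"Content-Type": "text/xml"},
    )

@app.route("/digits", methods=["POST"])
def digits():
    # Twilio posts the pressed key plus caller metadata after each Gather.
    key = request.form.get("Digits")
    caller = request.form.get("From")        # phone number, used to tell users apart
    city = request.form.get("FromCity")      # rough home location of the phone
    print(f"{caller} ({city}) pressed {key}")
    # Redirect back to /voice so the caller can keep pressing buttons.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<Response><Redirect method="POST">/voice</Redirect></Response>',
        200,
        {"Content-Type": "text/xml"},
    )

if __name__ == "__main__":
    app.run(port=5000)
```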

Le Wei – Final Project update

by Le Wei @ 10:36 pm 12 April 2011

Hard Part Solved Update

What’s happening

I started this project without a clear idea of how I wanted it to look or what sort of interaction I wanted to use. But so far, I’ve been working with the trackpad on my computer to create a little fingerpainting program, so I am going to narrow my focus and see how I can use this in a way that suits my project. I’ve also been able to integrate some sound output into the fingerpainting program, so that the way you move your fingers around the trackpad controls the sound coming out of the computer. I think this is a good foundation for implementing the rest of my project; I just need to make some decisions about how I want it to work in the end.
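One possible mapping, sketched with placeholder ranges: the x position picks the pitch (exponentially, so it feels more musical) and y picks the loudness.

```python
# Turn a normalized trackpad position (x, y in [0, 1]) into a short block of
# sine-wave samples; the ranges and the 3-octave span are arbitrary choices.
import numpy as np

SAMPLE_RATE = 44100

def finger_to_tone(x, y, duration=0.1):
    freq = 110.0 * (2 ** (x * 3))                 # 110 Hz .. 880 Hz across the pad
    amp = max(0.0, min(1.0, y))                   # higher on the pad = louder
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return amp * np.sin(2 * np.pi * freq * t)

samples = finger_to_tone(0.5, 0.8)                # ~311 Hz, fairly loud
print(len(samples), samples.min(), samples.max())
```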

What’s solved

What now

  • Thinking about what I actually want (sketching, brainstorming)
  • Learning how to make sounds that aren’t ugly (using math and stuff I guess)

Susan Lin – Final Project: I changed my mind.

by susanlin @ 7:48 am 4 April 2011

Inspiration

Goal: Recreate gestural animation effects using code.

Hurdle: Deciding on the final form: a drawing tool? A layer on top of video? Still images?

Napkin Sketch: I call it “LoFi”.

Question: Any recommended short films to watch, or things to read, about animation/rotoscoping techniques? :)

Final project ideas

by honray @ 2:40 am 30 March 2011

1. Source of inspiration
My source of inspiration for this project is Hakim El Hattab’s HTML5 blob demo. I really liked the fluidness and interaction afforded by the demo, and I believe it would be great to experiment with this interaction further.

2. Artistic or design goal
I want to create a space where the user can interact with fluid blobs like these in an intuitive and enjoyable manner. Perhaps the user can interact with the blobs via a webcam or mouse. Another thought is to create an online synchronous platform where users can interact with one another’s blobs.

3. Technical hurdles
If the goal is to create an online synchronous platform, there will be hurdles in implementing synchronous play.
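One possible direction, sketched with Python’s standard library only: a tiny relay server that rebroadcasts each client’s blob positions to everyone else (the message format and port are made up).

```python
# Minimal state-relay sketch: clients send JSON lines describing their blobs,
# and the server forwards each update to every other connected client.
import asyncio
import json

clients = set()

async def handle(reader, writer):
    clients.add(writer)
    try:
        while True:
            line = await reader.readline()
            if not line:
                break
            state = json.loads(line)           # e.g. {"id": "alice", "x": 0.3, "y": 0.7}
            payload = (json.dumps(state) + "\n").encode()
            for other in clients:
                if other is not writer:
                    other.write(payload)
                    await other.drain()
    finally:
        clients.discard(writer)

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```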

4. Question to ask group
Any ideas on how to implement an online synchronous platform?

View pdf

Final Proposal

by huaishup @ 1:46 am

Susan Lin – Final Project: The Cute Manifesto, Brief Presentation

by susanlin @ 1:44 am

a link, it goes to the pdf version!

EDIT: More thinking went into this before reaching a dead end.

SamiaAhmed-Final-Concept

by Samia @ 1:11 am

Alex Wolfe | Final Project | Brainstorm Prezi

by Alex Wolfe @ 10:55 pm 29 March 2011

Alex Wolfe | Final Project | Looking Outwards

by Alex Wolfe @ 4:05 pm 27 March 2011

Generative Knitting

So there hasn’t been much precedent for this, since contemporary knitting machines are ungodly expensive, and the older ones (generally the Brother models) that people have at home are so unwieldy that changing stitches this way is more of a pain than doing it by hand. But if I can figure out some way to make it work, I think knitting has ridiculous potential for generative/algorithmic garment making, since it’s possible to create intense volume and pattern in one seamless piece of fabric simply through a mathematical output of pattern. It would be excellent just to be able to “print” these creations on the spot, and do more than just Fair Isle.

I sent off a couple of emails to HackPGH, but I’ll try to stop by their actual office today or tomorrow and just ask them in person if I can borrow/use their machine.

here’s an example of a pattern generator based off of fractals and other mathy things

how to create knitting patterns that focus purely on texture

Perl script to Cellular Automaton knitting

Here’s a pretty well-known knitting machine hack for printing out images in Fair Isle. This is awesome, but I was hoping to be able to play more with volume and texture than with color.
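As a toy illustration of the cellular-automaton idea linked above (a sketch only, not a real machine pattern): live cells become purls, dead cells knits.

```python
# Rule 90: each stitch is the XOR of its two neighbors on the previous row.
def rule90_knit_chart(width=40, rows=20):
    cells = [0] * width
    cells[width // 2] = 1                         # seed one purl in the middle
    for _ in range(rows):
        print("".join("P" if c else "K" for c in cells))
        cells = [cells[i - 1] ^ cells[(i + 1) % width] for i in range(width)]

rule90_knit_chart()
```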

Computational Crochet

Sonya Baumel crocheted these crazy gloves based off of bacteria placement on the skin.

User Interactive Particles

I also really enjoyed the work we did for the Kinect project, and would be interested in pursuing more complicated user-generated forms. These two short films by FIELD design are particularly lovely.

 

Generative Jewelry

I also would be interested in continuing my work from Project 4. I guess not really continuing, since I want to abandon flocking entirely and focus on getting the snakes, or a different generative system, up and running to create the meshes for some more aesthetically pleasing forms. Aside from snakes, I want to look into Voronoi patterns, like the butterflies on the barbarian blog.

 
