For this project, I’ve created a simple drawing program using a Kinect, OpenFrameworks and Synapse. The concept is relatively simple: the user’s left hand controls the brush, while their right hand selects a drawing context from a pop-up menu on the right side of the screen.
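For anyone curious about the plumbing, a minimal openFrameworks sketch of this routing might look something like the following. The OSC port and address strings here are Synapse’s commonly documented defaults rather than an exact transcription of my code, so treat this as an illustration of the idea only:

```cpp
// ofApp.h -- minimal sketch of routing Synapse hand data to brush vs. menu.
// The port (12345) and the "/..._pos_screen" addresses are assumed defaults;
// check them against your own Synapse setup.
#pragma once
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    void setup() override {
        receiver.setup(12345); // port Synapse sends joint positions to (assumed)
    }

    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m); // older OF versions take getNextMessage(&m)

            if (m.getAddress() == "/lefthand_pos_screen") {
                // left (dominant) hand drives the brush
                brushPos.set(m.getArgAsFloat(0), m.getArgAsFloat(1));
            } else if (m.getAddress() == "/righthand_pos_screen") {
                // right hand only hit-tests the menu strip on the right edge
                menuPos.set(m.getArgAsFloat(0), m.getArgAsFloat(1));
            }
        }
        // note: Synapse typically also wants periodic "<joint>_trackjointpos"
        // poll messages sent back to it to keep streaming; omitted here.
    }

    ofxOscReceiver receiver;
    ofVec2f brushPos, menuPos;
};
```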
I was interested in playing around with two-handed interaction and gestural drawing during the development of this project, and the choice of hands was deliberate. Because I’m left-handed, I designed the system so that detailed interactions are handled with my dominant left hand, while coarse, contextual selections are made with my sub-dominant right hand.
As the project developed, several difficulties arose, many of which were, in retrospect, due to my attempt at compressing a 3D drawing environment (the depth image captured by a Kinect) into a flat, 2D drawing context. Because the lateral extent (width and height) of the Kinect’s field of view expands with distance, simply flattening and mapping the depth-space makes it difficult to reach edges and corners past a certain depth. Conversely, if the user is too close to the camera, their body occupies too much of the Kinect’s field of view and makes drawing difficult.
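To make the geometry concrete, here is a small sketch of one way the widening frustum could be compensated for: normalize the hand’s lateral world position by the frustum’s half-width at that depth, so that reaching the canvas edge requires the same angular reach no matter how far back the user stands. The field-of-view values are the commonly cited Kinect figures, and this is an illustration of the idea rather than the mapping I actually used:

```cpp
#include "ofMain.h"

// Map a hand position in Kinect world coordinates (assumed: meters, with z as
// distance from the camera) to normalized canvas coordinates, compensating for
// the view frustum widening with depth. The 57/43 degree field-of-view values
// are the commonly cited Kinect v1 figures and are an assumption here.
ofVec2f worldToCanvas(const ofVec3f& hand) {
    const float halfFovH = ofDegToRad(57.0f) * 0.5f;
    const float halfFovV = ofDegToRad(43.0f) * 0.5f;

    // ignore readings too close to the sensor to avoid a degenerate divide
    if (hand.z < 0.5f) return ofVec2f(0.5f, 0.5f);

    // lateral extent of the view frustum at the hand's depth
    float halfWidth  = hand.z * tanf(halfFovH);
    float halfHeight = hand.z * tanf(halfFovV);

    // perspective-style divide: the frustum edge maps to the canvas edge
    // at every depth, instead of only at the farthest mapped plane
    float nx = ofClamp(hand.x / halfWidth,  -1.0f, 1.0f);
    float ny = ofClamp(hand.y / halfHeight, -1.0f, 1.0f);

    // remap from [-1, 1] to [0, 1] canvas coordinates (y flipped for screen)
    return ofVec2f(0.5f + 0.5f * nx, 0.5f - 0.5f * ny);
}
```

Even with this kind of compensation, the too-close case remains a problem, since the body itself fills most of the depth image regardless of how the coordinates are mapped.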
The results of this project are mixed. On the one hand, it was technically successful inasmuch as I managed to build a fairly solid bit of code in OpenFrameworks (first time!) that did exactly what I intended. However, the results instantly exposed the boring-ness of simply drawing onto a screen using a skeleton tracker. Nevertheless, this project laid the groundwork for my much more successful final piece.