FacePaint lets the user paint with their face: the position of the face controls the brush position, the tilt of the head controls the paint color, and the size of the mouth controls the brush size. Control is fairly imprecise, especially when FaceOSC has trouble tracking the user, but with some practice one can learn to control the various parameters.
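As a rough illustration of how that mapping can work, here is a minimal Processing sketch that reads the relevant values via the oscP5 library. The message addresses (/pose/position, /pose/orientation, /gesture/mouth/height), the default port 8338, and the value ranges follow FaceOSC's usual conventions, but this is a sketch of the idea rather than a description of FacePaint's actual code.

```processing
import oscP5.*;

OscP5 oscP5;

float posX, posY;     // face position -> brush position
float roll;           // head tilt -> paint color (hue)
float mouthHeight;    // mouth openness -> brush size

void setup() {
  size(640, 480);
  colorMode(HSB, 360, 100, 100);
  background(0, 0, 100);
  // FaceOSC sends its OSC messages on port 8338 by default
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  // Map head tilt to hue and mouth openness to brush size
  // (ranges here are rough guesses and may need tuning)
  float hue = map(roll, -1, 1, 0, 360);
  float brushSize = map(mouthHeight, 1, 10, 2, 40);
  noStroke();
  fill(hue, 80, 90);
  ellipse(posX, posY, brushSize, brushSize);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/position")) {
    posX = m.get(0).floatValue();
    posY = m.get(1).floatValue();
  } else if (m.checkAddrPattern("/pose/orientation")) {
    roll = m.get(2).floatValue();   // x, y, z rotation; z assumed to be the tilt
  } else if (m.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = m.get(0).floatValue();
  }
}
```

Keeping the drawing in draw() rather than in oscEvent() means the brush keeps painting at its last known position even when a tracking frame is dropped, which smooths out some of FaceOSC's jitter.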
I envisioned this project as a way for children to participate in art classes when they might not otherwise be able to, whether because of disability, injury, or simply being unable to travel to school. Because FaceOSC streams its data over the network, a sick child could connect in to art class and still interact with their peers.
I would like to extend this project to handle multiple FaceOSC inputs, differentiating users in some way, to enable a kind of group painting. This would also benefit from FaceOSC being able to recognize multiple faces in a single instance.
FacePaint was written in Processing 2.0b7 and interfaces with FaceOSC for face tracking. You can check out the source on GitHub.