Making a Visual Instrument: Boosts
- Face tracking libraries
  - face landmark and pose tracking (FaceOSC, BRFv4, clmtrackr, Handsfree.js); a minimal FaceOSC receiver sketch follows this list
  - eye tracking (EyeOSC, WebGazer.js, Eyetribe)
- Body skeleton tracking libraries
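As a concrete example, FaceOSC broadcasts its tracking data as OSC messages, by default on port 8338. Here is a minimal receiver sketch, assuming the standard oscP5 Processing library and a few of FaceOSC's documented addresses (/found, /pose/position, /gesture/mouth/height):

```java
// Minimal FaceOSC receiver in Processing, using the oscP5 library.
// Assumes FaceOSC is running and sending to port 8338 (its default).
import oscP5.*;

OscP5 oscP5;
float posX, posY;      // face position in the camera frame
float mouthHeight;     // how open the mouth is
boolean faceFound;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);  // listen on FaceOSC's default port
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/found")) {
    faceFound = (m.get(0).intValue() == 1);
  } else if (m.checkAddrPattern("/pose/position")) {
    posX = m.get(0).floatValue();
    posY = m.get(1).floatValue();
  } else if (m.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = m.get(0).floatValue();
  }
}

void draw() {
  background(0);
  if (faceFound) {
    // Draw a circle whose size responds to the mouth opening.
    ellipse(posX, posY, mouthHeight * 10, mouthHeight * 10);
  }
}
```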
We also have a variety of user interface devices:
- Leap Motion Sensor (3D hand tracking)
- 3Dconnexion SpaceNavigator (6DOF joystick)
- Sensel Morph (20k texel touchpad)
- Apple Wireless Magic Trackpad (multitouch pad)
- ofxMultiTouchPad (openFrameworks addon for reading multitouch trackpads)
- SendMultiTouches (a utility that sends multitouch data over OSC; a generic OSC-printing sketch follows this list)
- Wacom Cintiq (pressure-sensitive tablet/screen)
- Wacom Inkling (digital pen)
- LIDARs (Hokuyo URG-04LX-UG01; Scanse Sweep)
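Several of these devices (e.g. SendMultiTouches) deliver their data as OSC messages. When a tool's exact OSC address space isn't documented, a useful first step is a sketch that simply prints every message it receives. Below is a minimal "sniffer," assuming the oscP5 Processing library; the port 12000 is a placeholder and must match whatever the sending program is configured to use:

```java
// Generic OSC "sniffer" in Processing, useful for discovering the
// address space of tools like SendMultiTouches before mapping them.
import oscP5.*;

OscP5 oscP5;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12000);  // placeholder port; match the sender
}

void oscEvent(OscMessage m) {
  // Print the address pattern, typetag, and common argument types.
  print(m.addrPattern() + " [" + m.typetag() + "]");
  for (int i = 0; i < m.typetag().length(); i++) {
    char t = m.typetag().charAt(i);
    if      (t == 'f') print(" " + m.get(i).floatValue());
    else if (t == 'i') print(" " + m.get(i).intValue());
    else if (t == 's') print(" " + m.get(i).stringValue());
  }
  println();
}
```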
It’s also worth considering how portable hardware like your phone is already instrumented:
- GyrOSC transmits your phone’s accelerometer, gyroscope, compass, orientation matrix, and GPS coordinates over a Wi-Fi network (a minimal receiver sketch follows this list).
- Processing’s Android Mode makes it easy to build apps that access these sensors directly.
- Alternatively, consider inexpensive sensors like the Arduino-compatible Sparkfun 9DOF Razor IMU or Sparkfun 9DOF Sensor Stick ($16) as a means of acquiring gesture data.
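On the receiving side, a sketch only needs to listen for GyrOSC's messages. Here is a minimal sketch, assuming the oscP5 Processing library and GyrOSC's /gyrosc/gyro address (pitch, roll, yaw); in the app, set the target to your computer's IP and the port chosen here (9000 is arbitrary):

```java
// Minimal GyrOSC receiver in Processing, using the oscP5 library.
// In the GyrOSC app, set the target to this computer's IP and port 9000.
import oscP5.*;

OscP5 oscP5;
float pitch, roll, yaw;   // phone attitude from the gyroscope

void setup() {
  size(400, 400, P3D);
  oscP5 = new OscP5(this, 9000);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gyrosc/gyro")) {
    pitch = m.get(0).floatValue();
    roll  = m.get(1).floatValue();
    yaw   = m.get(2).floatValue();
  }
}

void draw() {
  background(0);
  // Rotate a box with the phone's orientation.
  translate(width/2, height/2);
  rotateX(pitch);
  rotateY(yaw);
  rotateZ(roll);
  box(100);
}
```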
More exotically, we also have:
- Depth cameras (which produce a per-pixel depth image)
- Thermal cameras (sensing heat)
- Various webcams (including stereo, 360°)
- Binaural (audio) microphone
- Ultrasonic (audio) microphone
For a few lucky people, we also have some interesting output displays:
- Oculus VR headsets
- Looking Glass 3D display
- Square screen (Eizo EV2730Q, 1920×1920)
- Long screen (NEC X431BT, 1920×480)
- 1W 40kpps galvanometer laser projector
- UR5 Robot
Machine Learning + Drawings
One of the most interesting opportunities of the current moment is the possibility of machines that know something about how people draw (and even what they’re drawing).
Text to Image
- Text-to-Image with AttnGAN in the Runway app; a sketch of querying it over HTTP follows this list
- WordsEye
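Runway hosts models behind a local HTTP server, so a sketch can query them programmatically. Below is a hedged sketch in Processing: the port (8000), the /query route, and the JSON field names ("caption", "result") are assumptions for illustration, to be checked against the model's documentation inside Runway.

```java
// Querying a text-to-image model hosted locally by the Runway app over HTTP.
// NOTE: the port (8000), route (/query), and JSON field names ("caption",
// "result") are assumptions for illustration; check the model's page in
// Runway for its actual input/output specification.
import java.net.URL;
import java.net.HttpURLConnection;
import java.io.*;

void setup() {
  String result = queryRunway("a red bird sitting on a branch");
  if (result != null) {
    println("Received " + result.length() + " characters of (base64) image data.");
  }
}

String queryRunway(String caption) {
  try {
    URL url = new URL("http://localhost:8000/query");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    // Send the caption as a JSON payload.
    JSONObject payload = new JSONObject();
    payload.setString("caption", caption);
    OutputStream os = conn.getOutputStream();
    os.write(payload.toString().getBytes("UTF-8"));
    os.close();

    // Read the JSON response; the generated image is assumed to come
    // back as a base64-encoded string in the "result" field.
    BufferedReader in = new BufferedReader(
      new InputStreamReader(conn.getInputStream(), "UTF-8"));
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = in.readLine()) != null) sb.append(line);
    in.close();
    return parseJSONObject(sb.toString()).getString("result");
  } catch (Exception e) {
    println("Query failed: " + e.getMessage());
    return null;
  }
}
```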