This set of Deliverables is due next Wednesday, April 21. It has two components, plus an installation task:
- ML Image Processor
- Situated Eye
- Install Unity, per the instructions here.
ML Image Processor
- Create a blog post titled nickname-ImageProcessor, categorized 08-ImageProcessor
- Include a written description of your project (approximately 100 words), with a discussion of your process and some thoughts about your results.
- Embed visual documentation of your project, such as before/after images, animated GIFs, etc., so as to make clear what you made and how you made it.
Situated Eye
You are asked to use Google’s Teachable Machine tool, which allows you to train a (neural-network-based) image recognition system in the browser. This is intended to be the main part of Deliverables 08b, and might take you 2-5 hours.
- Train a detector using your computer’s webcam and Google’s Teachable Machine.
- Optionally, create an interactive system in p5.js that uses your detector as an input. (For example, my simple example project was based on the template code provided by Teachable Machine when I trained my model; a minimal sketch in this style appears after this list.)
- Record an animated GIF of your detector and/or system in action.
- Embed your GIF in a blog post entitled nickname-TeachableMachine, and categorize your blog post with the WordPress category, 08-TeachableMachine.
- In your blog post, write a reflective sentence or two about your experience using this tool.
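For the optional p5.js route, a sketch in the style of the Teachable Machine template might look like the following. This is a minimal sketch, assuming ml5.js (v0.x) is loaded alongside p5.js; the model URL is a placeholder you would replace with the shareable link from Teachable Machine’s “Export Model” panel:

```javascript
// Minimal p5.js + ml5.js (v0.x) sketch: load a Teachable Machine image model
// and continuously classify the webcam feed.
// Assumes index.html loads both the p5.js and ml5.js libraries.
let classifier;
let video;
let label = 'waiting...';

// Placeholder: paste your own model's shareable URL here.
const modelURL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

function preload() {
  // ml5 can load Teachable Machine models directly from their hosted model.json.
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();      // we draw the video ourselves in draw()
  classifyVideo();   // start the classification loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label;  // results are sorted by confidence
  classifyVideo();           // classify the next frame
}

function draw() {
  background(0);
  image(video, 0, 0);
  fill(255);
  textAlign(CENTER);
  text(label, width / 2, height - 6);
}
```

The classify-then-recurse pattern (gotResult calls classifyVideo again) keeps the classifier running at whatever rate the model can sustain, independently of the draw loop.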
In this project, you are invited to consider how you can create a “situated eye” – a “contextualized classifier” – a “purposeful detector” – a “poetic surveillant”. Ideally, you will create a camera-based system:
- which is located in a specific place;
- which is trained to detect a specific thing (or things);
- and (optionally, with p5.js) which responds to what it sees, in an interesting way.
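If your system does respond to what it sees, one common pattern is to act only when the top class crosses a confidence threshold, so the response isn’t triggered by noisy low-confidence guesses. Below is a hypothetical drop-in replacement for the gotResult callback in the sketch above (it assumes that sketch’s classifier and video globals); the class name 'cat' and the 0.9 threshold are placeholders:

```javascript
// Hypothetical responder: trigger an action only on a confident detection.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  const top = results[0];  // results are sorted by confidence
  // Placeholders: substitute your own class name and threshold.
  if (top.label === 'cat' && top.confidence > 0.9) {
    background(255, 0, 0);  // respond: flash the canvas red
  }
  classifier.classify(video, gotResult);  // keep the loop going
}
```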
Design Considerations
- Consider escaping the typical physical context of the laptop. Don’t limit yourself to the physical constraints of your laptop’s webcam, and the implicit assumptions it imposes on where a camera can be (on a table, in a room, at waist height, with a level pitch). If necessary, borrow a peripheral USB camera and a camera mount.
- Further to this point: Give extremely careful consideration to where your camera is located, and/or what it observes. Is your camera on a building? In a refrigerator? Above a pet dish? Part of a microscope? Pointed at the sky, or at the floor? Looking at custom cardboard game pieces on a table? Observing objects on a conveyor belt? This is not a speculative matter; actually do the thing.
- Your system might respond to the actions of a live interacting user (e.g., in the manner of a game controller), or it might respond to people, vehicles, animals, or other phenomena that are wholly unaware that they are being observed. (It is understood that you will not violate anyone’s privacy.)
- Your system might respond in real-time, or it might serve as a system for recording, logging, or counting what it observes. Keep in mind that you can save files (data, images) to disk…
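As one sketch of the logging idea, assuming you are working in p5.js: accumulate a timestamped row per detection in a p5.Table, and use p5’s built-in saveTable() to download the log as a CSV. The recordDetection() helper below is hypothetical; you would call it from your classifier callback:

```javascript
// A sketch of the logging idea: accumulate one timestamped row per detection,
// and press 's' to download the log as a CSV via p5's built-in saveTable().
let log;

function setup() {
  noCanvas();
  log = new p5.Table();
  log.addColumn('time');
  log.addColumn('label');
}

// Hypothetical helper: call this from your classifier callback
// whenever a detection occurs (or whenever the label changes).
function recordDetection(label) {
  const row = log.addRow();
  row.setString('time', new Date().toISOString());
  row.setString('label', label);
}

function keyPressed() {
  if (key === 's') {
    saveTable(log, 'detections.csv');  // triggers a browser download
  }
}
```

For saving images rather than data, p5’s saveCanvas() works similarly.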
List of Deliverables:
- A blog post titled nickname-SituatedEye, categorized 08b-SituatedEye
- A written description of your project (100-200 words), including an evaluation of your results.
- A brief demonstration video, embedded in the post (uploaded, or linked to YouTube or Vimeo)
- At least 3 pieces of visual documentation of your project, consisting (as appropriate and/or possible) of photos/scans of your notebook sketches; photographs of the project in situ; screenshots of the project; technical diagrams; example images from your training data, etc.
- An animated GIF