I did not get to finish the project. However, my idea was to have a program that notices when you are asleep during class and wakes you up with a sound whenever you start dozing off. This was inspired by me always falling asleep during class.
Problems I ran into: Other than problems in my code, I’ve noticed that it becomes difficult for the machine to distinguish between me simply closing my eyes or looking down and actually sleeping.
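One way to soften that ambiguity (a sketch, not the project’s actual code): require the “sleeping” label to persist for several consecutive frames before sounding the alarm, so a blink or a glance down doesn’t trigger it. The class name and frame count below are assumptions, not taken from the real model.

```javascript
// Debounce the classifier output: only treat the person as asleep after
// the "sleeping" label has held for N consecutive frames.
// The label string and frame count are illustrative assumptions.
function makeSleepDetector(framesRequired = 30) { // ~1 second at 30 fps
  let streak = 0;
  return function update(label) {
    streak = (label === "sleeping") ? streak + 1 : 0;
    return streak >= framesRequired; // true => time to play the wake-up sound
  };
}

// Usage sketch: feed it the top label from each classification frame.
const isAsleep = makeSleepDetector(3);
console.log(isAsleep("sleeping")); // brief eye close: false
console.log(isAsleep("sleeping")); // still false
console.log(isAsleep("sleeping")); // third consecutive frame: true
console.log(isAsleep("awake"));    // streak resets: false
```

In a real p5.js sketch, `update` would be called from the ml5 classification callback, and a `true` return would start the alarm sound.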
So, I’ve been having some trouble with roommates being really loud when I’m trying to sleep. Therefore, I made a program that alerts them when they are being loud, so that I don’t have to repeatedly come out of my room and tell them. Thank you, technology!
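The loudness check at the heart of a project like this can be sketched as follows. In a p5.js sketch you would read the microphone level from `p5.AudioIn` via `mic.getLevel()`; here the RMS level is computed directly from raw samples so the logic stands alone, and the 0.3 threshold is a made-up value you would tune by ear.

```javascript
// Root-mean-square level of a buffer of audio samples in [-1, 1].
function rms(samples) {
  const sumSq = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSq / samples.length);
}

// "Too loud" check; the threshold is an illustrative assumption.
function tooLoud(samples, threshold = 0.3) {
  return rms(samples) > threshold;
}

console.log(tooLoud([0.01, -0.02, 0.015, -0.01])); // quiet room: false
console.log(tooLoud([0.8, -0.7, 0.9, -0.85]));     // loud roommates: true
```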
I tried to train the Teachable Machine to recognize when I’m awake/alert versus when I’m sleeping/tired. For the awake class, I recorded clips of me focusing on the camera, usually with my eyes open, and I also tried to capture more “alert” body language. For the tired class, I recorded clips of me with my eyes closed in bed, yawning, and showing “tired” body language (like resting my head against other surfaces).
These are the ringtones that my sister, whom I share a room with, annoyingly uses for all of her alarms. I can wake up with ease from my alarms and I even wake up naturally on time. However, my older sister is the opposite. She needs over 10 alarms and she tends to sleep through them all. It’s pretty annoying when you sleep 10 feet away from her. As a result, these two alarm sounds TRIGGER me. When I was playing these to annoy her, she told me to just wake her up when they go off, because she can’t do it herself. So I created this system that alerts me to wake her up when it hears either of these two alarms. I used my iPhone to play the sounds directly into my AirPod microphone. The first alarm was clearly distinguishable to Teachable Machine, but the second alarm, called Slow Rise, was jumping around between all three categories. After evaluation, it seems like Slow Rise sounds too much like Constellation. I might need to swap in a different alarm sound or continue to record different audio samples. I think some coding could take this project a step further.
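One coding step that might help with the Slow Rise/Constellation confusion: instead of always taking the top class, reject predictions where the top confidence is low or the top two classes are too close together. A sketch, assuming Teachable Machine-style results of the form `{label, confidence}` (the thresholds are illustrative, not tuned values):

```javascript
// Pick the winning alarm class, but return "uncertain" when the model
// can't clearly separate the top two candidates (e.g. Slow Rise vs
// Constellation). minConf and minMargin are illustrative assumptions.
function pickAlarm(results, minConf = 0.6, minMargin = 0.2) {
  const sorted = [...results].sort((a, b) => b.confidence - a.confidence);
  const [top, second] = sorted;
  if (top.confidence < minConf) return "uncertain";
  if (second && top.confidence - second.confidence < minMargin) return "uncertain";
  return top.label;
}

console.log(pickAlarm([
  { label: "Constellation", confidence: 0.48 },
  { label: "Slow Rise", confidence: 0.42 },
  { label: "Background", confidence: 0.10 },
])); // too close to call: "uncertain"

console.log(pickAlarm([
  { label: "Constellation", confidence: 0.9 },
  { label: "Slow Rise", confidence: 0.06 },
  { label: "Background", confidence: 0.04 },
])); // "Constellation"
```

With a filter like this, the system would stay quiet on ambiguous frames rather than bouncing between categories.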
^gif of how I transferred the ringtones to Teachable Machine. (ps. I’m not mad)
Hand Gesture Teachable Machine Model w/ Arduino Connection/Soon-to-be Arduino Kinetic Sculpture
So, I started out doing something kind of boring where I was going to teach a model to recognize when I touch my hair (a nervous habit of mine) and make an annoying sound. This hopefully would’ve helped me break that habit. However! I started thinking about a project I had started in my physical computing class, where I was going to write a hand tracking/gesture recognition program to control an Arduino kinetic sculpture. I am currently using the OpenCV library for Processing… but Teachable Machine offers a different approach. My idea was: if I could use TM as my hand gesture recognition model… this project would probably be much simpler on the programming end, and then I could focus more on the visual/sculpture. So… I brought my model into p5.js and researched how to get serial output from there. This is where I found P5.serialport, a p5.js library that allows your p5 sketch to communicate with an Arduino.
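The p5-to-Arduino hookup could look something like this: map each gesture label to a one-character command and write that character over serial. The labels and command characters below are placeholders, and `send` stands in for P5.serialport’s `serial.write()` so the mapping can run on its own:

```javascript
// Map Teachable Machine gesture labels to one-character serial commands
// for the Arduino. All labels and characters here are placeholders.
const GESTURE_COMMANDS = {
  "open hand": "O",
  "fist": "F",
  "point": "P",
};

function gestureToCommand(label) {
  return GESTURE_COMMANDS[label] ?? null; // null => unknown gesture, send nothing
}

// In the real sketch, `send` would be serial.write from P5.serialport;
// it's injected here so the logic is testable without a serial server.
function handleResult(results, send) {
  const top = results[0]; // ml5 typically returns results sorted by confidence
  const cmd = gestureToCommand(top.label);
  if (cmd !== null) send(cmd);
}

// Usage sketch with a fake serial sink:
const sent = [];
handleResult([{ label: "fist", confidence: 0.93 }], c => sent.push(c));
console.log(sent); // ["F"]
```

On the Arduino side, the loop would then read one byte at a time and move the servos accordingly.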
So, this model is pretty buggy. Luckily redoing the teachable machine part is fairly simple, so I will continue to test and refine this until it works properly and easily. I also want to try out some other classes that I didn’t include in this demo. I had made a hand waving and finger gun class but it seemed to mess things up a bit, so I disabled them for this demo.
I have not been able to work on the actual Arduino sculpture much yet. So far, I just have the Arduino controls set up with three basic servos so I know things are working and it’s receiving a connection.
Downloading p5.serialserver
For p5.js to communicate with the Arduino, I need the WebSocket server (p5.serialcontrol). Unfortunately, I tried to download the GUI from GitHub, but my computer does not recognize it as safe software – so that didn’t work.
Then I tried the other option of skipping the GUI and running it from my command line… that also didn’t work…
So basically, I cannot go any further with this project until I figure out how to work around my computer blocking me from running/downloading this app. But! I am optimistic!
I created a detector for whether I’m washing my hands. I couldn’t get the p5 code to work, so I took stills from a video to detect whether my hands were in the sink being washed or there were no hands in the shot. I was thinking of this application particularly as an enforcer, rather than something individuals use to check themselves. A world where everything is monitored, even how long and how properly we wash our hands… while the goals and intentions are good and productive, the thought is still discomforting. Below are the results of how well my detector worked. Class 1 below is not washing and class 2 is washing.
The door to my room doesn’t close, which created a unique problem once our puppy Chloe realized that slamming her head into my door would open it, and she could come in even though she knows she’s not allowed to.
So, I decided my model would warn me every time Chloe came into my room. I taught it that the door being closed, the door being open, and me and my roommates coming in and out were okay (class labeled “No intruder”), and then I taught it that Chloe coming in was not okay (class labeled “INTRUDER”).
Then, I went into p5.js with my model, and made it so that whenever the video was classified as “INTRUDER” it would play an alarm:
(Disclaimer: I don’t even know if this code makes any sense; I just kept adjusting the lines until it worked because I ran into a lot of weird glitches with other variations.)
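For what it’s worth, one common source of weird glitches in this setup is that classification fires on every video frame, so a naive `alarm.play()` restarts or overlaps the sound constantly. Here is a sketch of a guard that only starts the alarm on the transition into “INTRUDER”; the sound object and labels are placeholders, not the project’s actual code:

```javascript
// Guard against re-triggering the alarm on every classified frame:
// only call play() when the label first switches to "INTRUDER", and
// stop() when it switches away. Labels and the alarm object are
// placeholders for the sketch's own names.
function makeAlarmGuard() {
  let playing = false;
  return function onLabel(label, alarm) {
    if (label === "INTRUDER" && !playing) {
      playing = true;
      alarm.play(); // in p5.sound, a file loaded with loadSound() in preload()
    } else if (label !== "INTRUDER" && playing) {
      playing = false;
      alarm.stop();
    }
  };
}

// Usage sketch with a fake sound object:
const calls = [];
const fakeAlarm = { play: () => calls.push("play"), stop: () => calls.push("stop") };
const onLabel = makeAlarmGuard();
onLabel("INTRUDER", fakeAlarm);    // starts the alarm
onLabel("INTRUDER", fakeAlarm);    // already playing: no restart
onLabel("No intruder", fakeAlarm); // stops the alarm
console.log(calls); // ["play", "stop"]
```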
So, I present to you all: the cutest burglar you have ever seen:
Here is the gif, though sound is obviously an important aspect of my project: