I made an annoying robot that withholds a cute picture from you for as long as it can. It runs in a terminal.
I wanted it to be a little unclear how much this robot knew; at one point it asks you to do something and then doesn't even check whether you've done it, it just moves on regardless. But I also wanted to see how much work you can make somebody do if you just slowly add more and more tasks. It's like the sunk-cost fallacy: they've come this far, so they can't stop now. It's just a prototype, and I'd like to polish it up and make it longer, but I think it captures the idea.
Here are some videos of people playing it:
Here's the link to the project: https://editor.p5js.org/gray/sketches/XQDS1U6gE
At first, I wanted to use PoseNet's full-body tracking system, but I found it to be unreliable on my machine. Instead, I opted for the slow but reliable BRFv4 face tracker. I sketched out a rough idea for transforming the viewer's face into a cartoon character. However, once I started programming it, I realized I was getting outside the scope of this assignment, and I was running into a lot of glitches that would have been time-consuming to fix.
On attempt 2, I wanted to use 3D objects. I sketched out a plan for a low-quality 3D face inside a monitor. Professor Levin taught me how to rotate objects using my facial tracker. Then I imported a 3D monitor that I had made in Autodesk Maya and filled in the face using the vertex() command. For the final touch, when the viewer opens their mouth, the head gives off an audible scream. I took the sound effect from this video and heavily modified it in Audacity.
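A rough sketch of how the two interactions above could fit together: head rotation driving p5's rotate calls, and a normalized lip gap triggering the scream. This is a guess at the structure, not the actual code; the tracker field names and landmark arguments are placeholders, not necessarily BRFv4's real API.

```javascript
// Normalized mouth gap: distance between the lips divided by face height,
// so the same threshold works whether the viewer is near or far from the
// camera. The lip/face y-coordinates are assumed to come from the tracker;
// which landmarks map to them is an assumption, not BRFv4's documented API.
function mouthOpenness(upperLipY, lowerLipY, faceTopY, faceBottomY) {
  const faceHeight = faceBottomY - faceTopY;
  return faceHeight > 0 ? (lowerLipY - upperLipY) / faceHeight : 0;
}

// The 0.08 threshold is an illustrative guess; tune it against the tracker.
function isMouthOpen(upperLipY, lowerLipY, faceTopY, faceBottomY, threshold = 0.08) {
  return mouthOpenness(upperLipY, lowerLipY, faceTopY, faceBottomY) > threshold;
}

// Inside p5's draw(), the head-driven rotation might look something like:
//   rotateX(face.rotationX);   // tilt head up/down
//   rotateY(face.rotationY);   // turn head left/right
//   rotateZ(face.rotationZ);   // cock head sideways
//   drawMonitorAndFace();      // imported model + vertex() face
//   if (isMouthOpen(/* lip and face points */) && !scream.isPlaying()) {
//     scream.play();
//   }
```

Guarding `scream.play()` with `isPlaying()` keeps the sound from restarting on every frame while the mouth stays open.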
This was a fun deliverable.
EDIT 9.25.19 11:00PM: I was inspired by Cain from RoboCop 2 (see below).
My initial idea was to create a creative imagining of the notion that reality is an illusion, one of the core principles behind The Matrix franchise. The first film's central tenet is that reality is an illusion, and that one only becomes truly enlightened upon realizing this fact and learning to see through the illusion. I wanted to create a realistic yet virtual representation of this, but was not quite able to achieve what I wanted. Ultimately, I learned a bit about face tracking and about p5.js's WEBGL mode for 3D object animation.

There were a lot of things I wish I could have added, such as numbers streaming down in the background and some related text on the face, but I could not get 2D text to work in a 3D space. I later found out that 3D text was required to make it happen. The same applied to some 2D planes and shapes I wanted to add, but I did not have enough time to fully figure that out. I still wanted something cool in the background, so I referenced one of Dan Shiffman's videos on "Warp Speed Stars". I used lower opacity and strokes on some of the 3D shapes to heighten the virtual effect. I also added interaction by checking whether the eyebrows were raised or the mouth was open, which affects the virtual figure. Finally, I added some background sound so there would be an auditory stimulus on top of the visual artwork.
For this project I was interested in balloon animals and how balloon animal artists simplify complex forms into temporary, goofy objects that represent a subject or serve as decoration to wear. For the actual execution I was thinking about the face and the body, and experimented with each separately to find different solutions. Overall I had a lot of trouble with this project: loading 3D shapes in p5.js, or importing outside 3D objects/files, was not working for me, and figuring out the parts of the templates that I did not understand was also tricky. At the end of the day I came up with a few mediocre 2D solutions, but this week I want to try to execute this in a more effective and clean way.
I wanted to combine the face tracker and the body tracker, so I merged their code into one file. That was relatively simple. I wish I could have used a more sophisticated body tracker with more skeletal points and less jitter. Originally I wanted to make a 3D skeleton and skull that moved with the viewer, but I was unable to implement this. Instead, I added a 3D model of a penguin character I had made, which moves with the tilting of the head, to add an extra element and make the piece slightly more interesting.
Link to Project: https://editor.p5js.org/rsunadaw/sketches/_HzJPPzHj
This was inspired by livestreaming, and how viewers give likes during the livestream. On very popular livestreams, there would be a flood of likes.
I used the BRFv4 face tracker, and it gives a flood of "likes" only when a person is smiling. A smile is detected when the ratio of mouth width to mouth-to-nose distance is greater than about 1. The position of the head changes the heart color, and the depth of the face changes the size of the hearts. The hearts are bound to the face, so that when the person smiles the hearts cover the face like a mask.
There is only one viewer/person "watching", and they leave likes on their own video. For me, this action feels comforting (self love/self care?), but also superficial.
I noticed that the triangle formed by the two mouth corners and the nose is close to equilateral, so I set a threshold on the ratio to recognize a smile. I later realized that it doesn't work as well if the person tilts their head up or down.
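The smile check described above can be written as a small helper. This is a sketch of the idea, not the sketch's actual code: the landmark arguments are illustrative `{x, y}` points, and wiring them to BRFv4's real landmark indices is left as an assumption.

```javascript
// 2D distance between two {x, y} landmark points.
function dist2d(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Smile detection by ratio: mouth width vs. nose-to-mouth distance.
// With a neutral face, the mouth corners and the nose form a roughly
// equilateral triangle; smiling widens the mouth, pushing the ratio up.
// Landmark arguments are placeholders for the tracker's actual points.
function smileRatio(leftCorner, rightCorner, noseTip, mouthCenter) {
  const mouthWidth = dist2d(leftCorner, rightCorner);
  const noseToMouth = dist2d(noseTip, mouthCenter);
  return noseToMouth > 0 ? mouthWidth / noseToMouth : 0;
}

// Threshold of ~1 per the write-up; tune against the live tracker.
function isSmiling(leftCorner, rightCorner, noseTip, mouthCenter, threshold = 1.0) {
  return smileRatio(leftCorner, rightCorner, noseTip, mouthCenter) > threshold;
}
```

Because both distances are measured in 2D screen space, pitching the head up or down foreshortens the nose-to-mouth distance more than the mouth width, which is one way to explain the head-tilt failure noted above.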
My original idea was to create a scene that does something really cool when you close your eyes, but goes away when you open them again. It is, however, difficult to make something really cool, so I went with something silly instead.
Finding a way to consistently detect closed eyes required a lot of troubleshooting. First I tried pairing machine learning with the second face-tracking template (BRFv4) to distinguish between someone's eyes being open and closed, but the difference wasn't great enough to be legible to the computer. That template also lagged a ton. Then I found a computational method someone had described online. I tried it first with BRFv4, but that didn't work: the face points were too rigid to react to eye movements. Finally, when I used the computational method with clmTracker, I was able to detect closed eyes consistently enough.
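One common computational approach, and a guess at what the method found online may have looked like: compare each eye's vertical lid gap to its width, so the measure stays the same at any distance from the camera. The point arguments here are illustrative placeholders; which clmTracker landmark indices map to them is an assumption.

```javascript
// Eye "openness": vertical gap between the upper and lower lid, divided
// by the eye's width. Normalizing by width makes the measure independent
// of face scale. Arguments are {x, y} landmark points; mapping them to
// clmTracker's actual indices is an assumption, not the library's API.
function eyeOpenness(lidTop, lidBottom, cornerLeft, cornerRight) {
  const gap = Math.hypot(lidTop.x - lidBottom.x, lidTop.y - lidBottom.y);
  const width = Math.hypot(cornerLeft.x - cornerRight.x, cornerLeft.y - cornerRight.y);
  return width > 0 ? gap / width : 0;
}

// Require BOTH eyes below the threshold, which filters out winks and
// single-eye tracking glitches. The 0.18 default is an illustrative guess.
function eyesClosed(leftEye, rightEye, threshold = 0.18) {
  const l = eyeOpenness(leftEye.top, leftEye.bottom, leftEye.left, leftEye.right);
  const r = eyeOpenness(rightEye.top, rightEye.bottom, rightEye.left, rightEye.right);
  return l < threshold && r < threshold;
}
```

Averaging the openness over a few frames before thresholding would also help against jitter, at the cost of a slight delay in the reaction.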
I'm happy with the final product, although there are ways it could be more: the static image could react subtly to the viewer before they close their eyes, and the whole conceit could be more immersive.
This project was inspired by nothing but my exhaustion and my desire to finish this before 3 AM. I was playing with masks in p5.js, isolating parts of my face recursively and trying to think of an idea. I was reminded of Ann Hamilton's work with mouth photography that we discussed in class, so I decided to make an interactive version of it in p5.js. The masking and the scaling are quite sloppy, so if I have the time and willpower, I'd like to come back and refine it in the future. I wanted to play with the orifices of the face in particular, so maybe I could also expand this to other parts of the face, like the eyes or the nostrils. Maybe this says something about how you reveal an image of yourself by opening your orifices (e.g. talking), or maybe I am just trying to invent meaning for a quick sketch at 1:54 AM. It would be interesting to explore this as a social-media communication form, like a chat lobby where everyone is depicted this way. Maybe in the future, if I can figure out how to use handsfree.js on Glitch.
Again, I have no sketches because I thought of this idea at about 11pm.
This project served as a pressure valve to release some anxieties. I decided to approach the work conceptually instead of technically. The mouth in general, and my mouth specifically, is a focal point of some of my stressors. When stressed, I'm a chatterbox. Having braces is also an (admittedly petty) aesthetic anxiety. The mouth is even granted a cleaning ritual independent of the rest of one's body.
Having decided on the body part, I then had to choose whether to exacerbate or hide the point of contention. I chose exacerbation because I thought it would be more interesting, and there is a certain humor in the discomfort it brings.
Finally, I had to choose how to focus on the conceptual point. I had the idea of taking clippings of the mouth and expanding them outward, like a megaphone.
The work isn't a masterpiece, but the novelty was there: many of my friends giggled and wanted to try it out. If I were to keep working on this, I would focus on increasing tracking stability. One idea I played with, but which didn't make it into the final version, was storing different mouth photos and displaying them randomly among the panes; the effect, though, was a little different from what I wanted.
This piece began as an exploration of marionettes and how funny yet creepy they are when they move. While creating figures and trying to decide what visual language I wanted to use, I came across some paper cut-out dolls and some amazing Good Housewife magazines from the '50s. The ads were so funny, and I thought that the slightly uncomfortable motion resulting from the segmented marionette-image tactic I was interested in could work to create (what I at least find to be) a really fun piece. The motion is jerky and the body parts disjointed. While this is initially due in large part to errors in the program, I found I really liked how the head pops off and how glitchy it is at times; I think it adds to my laughing at the ads. (Note: if you clap your hands together off to the sides, the images change.) In a lot of ways this is a mess and a half, but despite that (or maybe because of it?), I think this is my favorite piece thus far.
I was hoping to turn this into a game of sorts, having the funky dancing figure cover the ad screen with a trail of where the participant moved; however, tinting the image slowed things down too much to work. (See below: tint trial at ~300× speed.)