vingu – SituatedEye

I made a surveillance ramen bot that takes a picture when it sees someone take instant ramen out of the pantry, and tweets the image on Twitter. I thought it would be interesting to document my housemates' and my instant-ramen-eating habits, since our pantry is always stocked with it.

I worked backwards, starting with the twitterbot. I used the Twit API and node.js. (Most of the work was setting up the Twitter account and learning my way around the command prompt.) Then I added image capture to the Feature Extractor template. I struggled with connecting the two programs, since one runs on p5.js (in the browser) and the other on node (on my local computer). I tried to call the twitterbot code from the feature extractor code, trying out different modules and approaches, but I couldn't get it to work. Instead, I opted to have the twitterbot run continuously once I start it from the command prompt; it calls a function every 5 seconds to check whether there is a new image to post.

I made the Twitter account's header and profile look like a food blog/food channel account. I thought it would make a fun visual contrast with the tweets.

code (I didn't run it in the p5.js editor; I ran it locally on my computer)

Some afterthoughts:

  • It would have been better to finish this earlier, so there would be more realistic Twitter documentation of me and my housemates. None of my housemates were available/awake by the time I finished.
  • Find a better camera location, so it looks less performative.
  • I should have trained on samples of holding food that wasn't instant ramen.
  • This can only be run locally from my computer; maybe upload it to Heroku?
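On the Heroku idea: the tweeting half could run as a worker dyno with a one-line `Procfile` (a sketch; `bot.js` is a hypothetical name for the twitterbot script):

```
worker: node bot.js
```

After deploying, `heroku ps:scale worker=1` starts the worker. The catch is that the camera capture still needs a machine with a webcam, so only the posting side could move to the cloud.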


Scrolling through my tester tweets.