A virtual pet from 2008/2048. Homage to Susan Kare, Ling's Cars, Kai's Power Tools, Jonathan Ive, Neopets, Webkinz, Club Penguin, and great design everywhere 🙂

gotta augment reality from Izzy Stephen on Vimeo.

I am simultaneously excited about and disgusted by shiny, sexy, 3D-rendered and Photoshopped technodystopias. I also have a lot of nostalgia for the simple yet nerdy look of Apple products in the early 2000s, combined with nostalgia for the hideousness of the rest of the internet back in those days. I also miss sci-fi movie scenes where there's a crazy hologram UI with graphs, sine waves, textures, gradients, wireframes, and dials everywhere. It's like the future we never had! Nostalgia is not a productive emotion, so I decided to make a delicious AR snack with it.

This piece can exist anywhere its intended audience (sad preteens) can. However, it feels most at home in domestic spaces filled with friends. For my revision, I am thinking of de-interfacing it, disconnecting it from the mainframe, and making it more site-specific.


Even compared to all the other projects, I spent a very long time troubleshooting and down-scoping my idea for this project! My first idea was to train a model to recognize its physical form -- a model interpreting footage of the laptop the code was running on, or webcam footage reflecting the camera (the model's 'eyes') back at it. However, training for such specific situations with so much variability would have required thousands of training samples.

Next, I waffled between several other ideas, especially ones using a two-dimensional regressor. I was feeling pretty bad about the whole project because none of my ideas expressed an interesting concept in a simple but conceptually sophisticated way. I endeavored to get the 2D regressor working (which was its own bag of fun monkeys) and make the program track the point of my pen as I drew.

Luckily, Golan showed me an awesome USB microscope camera! The first thing I noticed when experimenting with this camera was how gross my skin was. There were tiny hairs and dust particles all over my fingers, and a hangnail which I tried to pull off, causing my finger to bleed. Though the cut healed within a few hours, it inspired a project about a deceptively cute vampiric bacterium who is a big fan of fingers.

This project makes use of two regressors (determining the x and y locations of the fingertip) and a classifier (determining whether a finger is present and, if so, whether it is bloody). I did not show the training process in my video because it takes a while. If I had more time, I think there is lots of potential for compelling interactions with the Bacterium. I wanted him to provoke some pity and disgust in the viewer, while also being very cute.
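If it helps to see how the three models fit together, here is a minimal sketch of the per-frame logic. Everything in it is hypothetical: `predictX`, `predictY`, and `classify` are stand-ins for the trained regressors and classifier (in the real project these would be asynchronous model calls on the webcam feed), and the label names and states are invented for illustration.

```javascript
// Hypothetical per-frame update combining two regressors and a classifier.
// The three function arguments stand in for trained models so the control
// flow itself is runnable.
function updateBacterium(predictX, predictY, classify) {
  const label = classify(); // e.g. "no_finger" | "finger" | "bloody_finger"
  if (label === "no_finger") {
    // Nothing to follow: the bacterium idles and has no target position.
    return { state: "idle", x: null, y: null };
  }
  // The regressors each return a normalized fingertip coordinate in [0, 1].
  const x = predictX();
  const y = predictY();
  const state = label === "bloody_finger" ? "feeding" : "following";
  return { state, x, y };
}

// Example with stubbed-out model outputs:
const result = updateBacterium(() => 0.25, () => 0.75, () => "bloody_finger");
console.log(result); // → state "feeding" at (0.25, 0.75)
```

In a real sketch this would run once per draw loop, with the returned target smoothed over a few frames so the bacterium doesn't jitter with the model's noise.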

In conclusion, I spent many hours on this project and tried hard. I really like machine learning, so I wanted my piece to be 'better' and 'more'. But I learned a lot and made an amusing thing, so I don't feel unfulfilled by it.




A. Pix2Pix

I spent some time on Edges2Cats making people's fursonas.

This tool is lots of fun, but frustrating to use because of the discrepancies between how humans interpret edges and how an edge detector does. Connie suggested that I look at the original inputs (edge detections of cat photos) to see what arrangements of edges produced what output. This technique was necessary to understand how a cat nose/mouth was generated. As such, all my drawings have a very pronounced upper lip area to force the cat mouth to appear.

Here are facades of windows within doors, and an image of a dark future where humans are the handbags of omnipotent AIs.

B. GANpaint studio

GANpaint was probably the most frustrating tool in this assignment, but I think it has a lot of potential! Currently, the lack of layers you can 'paint' with and the very low resolution output limit what you can create. I assume that as the encoding of semantic features becomes better understood, more features will be available to manipulate in later demos.

C. Artbreeder

I have spent so much time on Artbreeder that I don't even know what to say about it! The new Portraits category is very frustrating (I can't understand what the sliders actually encode), but in a way that makes me want to spend more time with it. I tried to make myself, but it didn't turn out very well. Other people have made amazing Danny DeVitos and Elon Musks.

Here is my Artbreeder profile if you want to see all my creations.

D. Infinite Patterns


E. GPT-2

That was the funnest hour of my life so far. All the original text came from humorous online posts; none of it is mine.

F. Google AI Experiments

I played Semantris for a couple of rounds. Perhaps it was not the most creative use of my time, but I enjoyed it a lot. It made me wonder if I was contributing training data to something. The most obvious answer would be a semantic mapping algorithm. It would be interesting if destroying multiple connected blocks required you to relate the two words in a clue.


A gamified recode of Scott Snibbe's Boundary Functions (1998).


We're all prisoners of capitalism; what matters is the size of your cell. Choices abound: will you maximize your resources, or minimize your space wastage for efficiency's sake? Outmaneuver your "friends"! This recode of a classic interactive projection piece uses the participants' mouse locations to construct a Voronoi diagram through synchronous collaboration. The game is anonymous and purposefully competitive, pitting players against each other in a very, very small-scale simulation of the 'real world'.

Reducing users to sites on a Voronoi diagram, a graphical representation of proximity and 'territory', seems to imply something about the failure of communication inherent among discrete entities like human beings. Depending on each user's goal (and whether they share complementary goals), patterns of motion created here include chasing others, fleeing from them, or hiding in a corner, like one might do at any college party.

Unlike Snibbe's piece, mine uses the p5 Voronoi library instead of doing complicated math. I also took advantage of polygon-area and merge-sort functions I found online. The hardest part was integrating sockets (and this functionality is still very buggy). The GIF below shows the slightly-more-functional single-player version with randomized sites. If I had more time, I'd ideally let the players (collectively?) decide how many random sites they wanted generated.
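For reference, the polygon-area step can be done with the shoelace formula. This is a generic sketch, not necessarily the exact function I found online, and it assumes each Voronoi cell arrives as an ordered array of `[x, y]` vertex pairs:

```javascript
// Shoelace formula: area of a simple polygon from its vertices in order.
// Each vertex is an [x, y] pair, e.g. one cell returned by a Voronoi library.
function polygonArea(vertices) {
  let sum = 0;
  for (let i = 0; i < vertices.length; i++) {
    const [x1, y1] = vertices[i];
    const [x2, y2] = vertices[(i + 1) % vertices.length]; // wrap to first vertex
    sum += x1 * y2 - x2 * y1;
  }
  return Math.abs(sum) / 2; // abs() makes the winding order irrelevant
}

// A 10x10 square cell has area 100:
console.log(polygonArea([[0, 0], [10, 0], [10, 10], [0, 10]])); // 100
```

Sorting the players by these cell areas (with the merge sort, say) then gives the moment-to-moment ranking.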

Sketches (sadly, this is all I have):


10. The interface uses metaphors that create illusions: I am free, I can go back, I have unlimited memory, I am anonymous, I am popular, I am creative, it's free, it's neutral, it is simple, it is universal. Beware of illusions!


Imagine your desktop is a kitchen, a garden, a hospital, a computer. Now, imagine it using no metaphor.

Call 5 random Facebook friends and ask them for money.

Perform Ctrl+Z on real life. Invent new gestures to bring digital possibilities to oral conversations.

Some of my favorite tenets were 3, 7, 9, and 13, but 10 was my absolute favorite. I first realized the extent to which interfaces create illusions back when iOS switched from skeuomorphism to flat design in 2013, and it's been weirdly haunting to think about ever since. On one level, illusions include how the Contacts app on iOS used to look like a Rolodex with realistic colors, textures, and sound design. That was a visual illusion that imbued the interface with a sense of productivity, utility, and authority. Now the Contacts app is a matte white Apple Interface (TM) like any other, but this is another kind of illusion as described in the tenet -- one of "free"-ness, "neutrality", "simplicity", and "universality." In truth, it is none of these for everyone everywhere (not even 'simple'! Maybe it's simple for me, but it might not be for an elderly person in Mongolia).

But it goes beyond one app on one OS to our entire collective understanding of how an interface should work. The 'trash', the idea of 'documents' and the 'folders' they're stored in, and especially 'windows' and the 'desktop' are such fascinating metaphors, invented in very specific contexts by very specific (probably white, straight, male) software engineers in the '80s (Susan Kare excepted; she is cool). The ability to 'go back' or 'refresh' your situation, the idea that you have established another degree of human relationship by becoming 'friends' on Facebook -- it's all an illusion.

As an aspiring (sort of) UX designer, I want to know how I can use my deeper understanding of how interfaces are constructed to make art that is critical of them.


For this LO, I wanted to write about the BIY (Believe it Yourself) project - "real-fictional belief-based computing kits" by Shanghai-based design studio Automato.

biy.Move helps the user "move around following harmonious paths" in accordance with Chinese geomancy and feng shui (specifically, the position of nearby mountains and rivers). Its directions can be used to guide both humans and robots.

BIY - Harmonious Self Driving Kit from automato on Vimeo.

biy.See processes the stream of data coming through the camera in its 'eye' using object recognition algorithms, and classifies lucky or unlucky objects based on Italian folk magic. It will warn the user when a 'bad luck' object or configuration is present, like "13 people sitting around a dinner table" or the cutout black cat shown in the still below.

BIY - Fortune Recognition Kit from automato on Vimeo.

biy.Hear processes language and identifies names to calculate lucky numbers and read the 'destiny' inherent within them according to Indian numerology. It prints out its conclusions on a nice little receipt.

BIY - Numerological Language Processing Kit from automato on Vimeo.

If the gist of the typical physical computing project is to interpret sensor information in an interesting way, I think this one did a good job upending my expectations about the relationship between sensor data, the supposedly flawless logic of computers, and reality. Technology can and does inspire spiritual experiences, but this can be overshadowed by the idea, prevalent among STEM people, that mushy logic (like the kind required to deal with spirituality, faith, and art) is invalid and not valuable. Maybe it's a reach, but I think this project points at how prevailing belief systems and epistemological frames of mind influence technology and culture.


p5 Examples

My favorite example was Springs. Though it's one of the more mathematically intense examples, it shows that it's still within the scope of p5 to simulate some types of satisfying, realistic motion. Not that realism is the key to everything, or that computer graphics shouldn't look like computer graphics -- it's just fun.

p5 Libraries

RiTa.js is used for generative text, and has features like a user-customizable lexicon. The projects in the gallery and examples include generated poetry, haikus, and even resumes.

Glitch Primitives

You could use the Cesium Viewer to show geographical data on a map, or maybe in combination with p5.geolocation to display user location. Maybe you see a different aspect of the project depending on where you access it from. You could also use this building block to ask questions about the ethics of tracking user location, and what nefarious purposes this data could be used for.