Okay, where was this when I lived and breathed graphic design? The project I’ve chosen to highlight is Fontjoy, by Jack Qiao, which generates font pairings. Its aim is to select pairs of fonts that have distinct aesthetics but share enough qualities to work together as a pair. I don’t quite understand how it all works, but the fact that there’s a neural net out there that has fonts ranked from most to least similar warms my heart, if for no other reason than that someone has made the thing I’ve wished existed every time I go to select fonts for a document.
Use the tool here.
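As a guess at how a tool like this might work under the hood: if each font is embedded as a vector of style features, picking a pair that is “distinct but compatible” can be framed as finding two fonts at a moderate distance apart. A minimal sketch, where the fonts, embedding values, and target distance are all made up for illustration (Fontjoy’s real embeddings come from a trained neural network):

```python
import numpy as np

# Hypothetical 4-D "style" embeddings for a few fonts (made-up values).
fonts = {
    "Serif A":  np.array([0.9, 0.1, 0.3, 0.2]),
    "Serif B":  np.array([0.8, 0.2, 0.4, 0.1]),
    "Sans C":   np.array([0.1, 0.9, 0.2, 0.6]),
    "Script D": np.array([0.2, 0.3, 0.9, 0.8]),
}

def cosine_distance(a, b):
    """0 = identical direction, 2 = opposite direction."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def best_pair(fonts, target=0.5):
    """Pick the pair whose distance is closest to a target: far enough
    apart to contrast, close enough to stay compatible."""
    names = list(fonts)
    scored = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            d = cosine_distance(fonts[names[i]], fonts[names[j]])
            scored.append((abs(d - target), names[i], names[j]))
    _, a, b = min(scored)
    return a, b
```

Very similar fonts (distance near 0) would look redundant together, and wildly different ones would clash, which is why the sketch aims at a middle distance rather than a minimum or maximum.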
I think this project is super cool. I’m very interested in how it explores objects that carry a human quality. The object used in this piece is a bed sheet, and I paint a lot of bed sheets in my art; I’ve always enjoyed painting beds because they have such an intimate quality about them, so this is inspiring and relatable. The artist used his body movements to drive movements in the bed sheet, which is super interesting because it delves into how humans affect the objects around us, and how, in turn, those objects become more human-like.
This piece really speaks to me; it doesn’t really feel like it’s generated by AI. But still, it is. I love that, but I’m also terrified by it. This, however, is a beautiful piece made jointly by AI and human hands. I like it a lot 🙂
What first caught my eye with this project was the texture of the masks. I am fascinated with certain textures, and these masks fall into that category. Some of them look like they could be rocks, and others look like a bunch of cotton balls stuck together or something. Even though it’s sort of not the point of the project, I am very intrigued by how they look. These masks are supposed to make a face unrecognizable to surveillance and facial recognition algorithms – and I think that this is a really interesting-looking approach to doing that.
This project uses different textures and patterns seen in famous paintings and applies them to various 3D renderings. I liked how the program allowed people to use almost any combination of a painting/texture and 3D rendering. The 3D aspect made it such an immersive experience, and the results were very beautiful. Additionally, my favorite part of the project was that it made it easier for people to create worlds that fit their own personal artistic tastes.
Daniel Ambrosi, Infinite Dreams (2020).
After looking at the dozens of pieces featured on these websites, I ultimately chose this piece because of how it made me feel. Lately, within my art practice and just looking at art, I forget how much your feelings can be evoked by color, texture, tone, etc. First looking at this piece, I suddenly got such a familiar and warm feeling that reminded me of a certain summer night that has had a really big impact on me.
Obviously the colors really caught my eye at first, but looking further into the piece, you can really see the impact of the details within each square. I almost feel as if I want to live in that world of cubes and colorful squares, like a maze.
Visualizing High-Dimensional Spaces
by Daniel Smilkov, Fernanda Viégas, Martin Wattenberg & the Big Picture team
This project visualizes data sets in high-dimensional spaces. To organize a variety of characteristics about people, objects, words, and more, machine learning converts them into data points arranged in a high-dimensional space. I found this project interesting because it is an applicable, foundational tool that could be used by people well beyond the art and science fields. A variety of data sets can be explored with this system. The program recognizes features in the data (for images, down to the level of pixels), which the machine then clusters. I find it interesting that the machine learns something about each item or object that others can then use to learn themselves.
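For a sense of how points from a high-dimensional space end up on a 2-D screen: PCA is one of the projections the Embedding Projector offers (t-SNE is another). A toy sketch with made-up 10-dimensional “embeddings” forming two clusters, which the projection keeps visibly separate:

```python
import numpy as np

def pca_2d(X):
    """Project n-dimensional points to 2-D via PCA: center the data,
    then keep the two directions of greatest variance."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
# Toy "embeddings": two clusters of 10-dimensional points.
cluster_a = rng.normal(0.0, 0.1, size=(20, 10))
cluster_b = rng.normal(1.0, 0.1, size=(20, 10))
points_2d = pca_2d(np.vstack([cluster_a, cluster_b]))
```

The clusters sit about √10 apart in 10 dimensions, and PCA’s first axis lines up with that separation, so the two groups remain distinct after projection, which is exactly the property that makes such plots readable.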
Above is an image from the project “Fooling Facial Detection with Fashion” by Bruce MacDonald. The goal of the project was to create an adversarial attack on the “histogram of oriented gradients” method of facial tracking. Because our faces are constantly being scanned, many people have invested effort in “protecting” themselves from the constant surveillance. This project exists as a small, novel experiment in the very real context of adversarial attacks, and it reminds me particularly of deep-fake detectors and deep-fake detector breakers. There is a slightly scary idea here: if you provide enough images of your face, something we present to our phone cameras near-daily, your identity, privacy, and individuality can be compromised. We are still in the infancy of what surveillance technology can do, and, while it sounds paranoid, this project gives me a feeling of dystopia in which there is no corner of the earth where you can escape being viewed, and to protect ourselves we have to fool an AI that we created.
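For a sense of what the detector being fooled actually “sees”: a histogram of oriented gradients summarizes an image patch by the directions its edges point in, and adversarial textiles work by adding patterns whose gradients drown out or mimic a face’s signature. A simplified single-cell sketch (real HOG adds cell grids, block normalization, and a sliding-window classifier, and this is my own illustration, not the project’s code):

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Histogram of oriented gradients for one grayscale patch:
    each pixel votes for its edge direction, weighted by edge strength."""
    gy, gx = np.gradient(patch.astype(float))   # row and column gradients
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((orientation / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), magnitude.ravel()):
        hist[b] += m
    return hist
```

A flat patch produces an empty histogram, while a striped patch piles all its weight into one orientation bin; clothing covered in strong, face-unlike gradient patterns shifts these histograms away from what the detector learned a face looks like.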
I had such a hard time narrowing down which project I wanted to talk about! I had so many tabs open that they were all too small to even read the first letter of the tab! But after a very difficult March Madness bracket, the one that stuck out most to me was Simpsons vs. Family Guy by Parag K. Mital.
Basically, he created a database of segmented frames from the Family Guy intro and then created a video resynthesis of the Simpsons intro using only images from that Family Guy database. It resulted in this beautifully chaotic mosaic of the Simpsons intro, and his side-by-side shows it beautifully:
Simpsons vs. Family Guy from Parag K Mital on Vimeo.
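The resynthesis idea can be sketched in miniature: cut every database frame into small blocks, then rebuild each block of the target from the database block with the closest mean color. The actual project’s segmentation and matching are far more sophisticated, so treat this as a toy illustration of the principle only:

```python
import numpy as np

def resynthesize(target, database, block=8):
    """Rebuild `target` using only blocks cut from `database` frames,
    matching each target block to the database block with the
    nearest mean color."""
    h, w, _ = target.shape
    blocks, means = [], []
    for frame in database:
        for y in range(0, frame.shape[0] - block + 1, block):
            for x in range(0, frame.shape[1] - block + 1, block):
                b = frame[y:y + block, x:x + block]
                blocks.append(b)
                means.append(b.reshape(-1, 3).mean(axis=0))
    means = np.array(means)
    out = np.zeros_like(target)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            m = target[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            # Nearest database block by squared mean-color distance.
            idx = np.argmin(((means - m) ** 2).sum(axis=1))
            out[y:y + block, x:x + block] = blocks[idx]
    return out
```

Because the output can only contain material from the database, the result is recognizably the target’s composition painted entirely in the source’s imagery, which is what makes the side-by-side so striking.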
Being a huge fan of adult cartoons my whole life (you remember my Family Guy option on the TV of my Bitsy game?), this was so striking to me. I could pick out parts of Family Guy characters’ faces repurposed as Simpsons objects, and it was sooooo cool.
I find this seriously cool and inspiring, and now all I want to do is make my own version of this!!