What will a text to image generator do with abstract language depicting love and pain?
i remember when carmine rivers seeped from my
shins, tracing a route back to the bathtub. once,
i dreamt of bathing in my own bruises. my legs
still bleed whenever i miss you.
who cares about subclavian
and carotid arteries. blood
is blood. everyone knows that
model hearts don’t look
like the real thing, they’re
all too red and stiff; genuine
myocardium is pink, fragile,
too fragile, disgusting, raw.
It was dark. You were faceless. The air was stagnant. I was silent.
I didn’t touch you, but I knew you were warm.
We lay our bodies by the marsh, staring up at the sky,
silence slitting our throats. The darkness shrouds our bodies
like a pall. I wondered if two cadavers could kiss.
I tried to train the Teachable Machine to recognize when I’m awake/alert versus when I’m sleeping/tired. For awakeness, I recorded clips of me focusing on the camera, usually with my eyes open, and I also tried to capture more “alert” body language. For tiredness, I recorded clips of me with my eyes closed in bed, yawning, and showing “tired” body language (like my head leaning against other surfaces).
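Once a Teachable Machine model like this is exported, its predictions come back as a list of class names with probabilities, and it’s up to you to turn those into a single label. Here is a minimal sketch of that last step; the class names (“Awake”, “Tired”) and the confidence threshold are my assumptions, not necessarily what this project used.

```javascript
// Minimal sketch: turning Teachable Machine-style predictions into a label.
// The class names ("Awake", "Tired") and the 0.6 threshold are assumptions
// for illustration, not necessarily what the real project used.
function topLabel(predictions, threshold = 0.6) {
  // predictions: [{ className: "Awake", probability: 0.82 }, ...]
  const best = predictions.reduce((a, b) =>
    b.probability > a.probability ? b : a
  );
  // Fall back to "Uncertain" when no class is confident enough.
  return best.probability >= threshold ? best.className : "Uncertain";
}

console.log(topLabel([
  { className: "Awake", probability: 0.82 },
  { className: "Tired", probability: 0.18 },
])); // "Awake"
```

In the browser, an array shaped like this is what the exported model’s prediction call would hand back for each webcam frame.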
A heart with eyes, a stegosaurus, and the word “кошка” (cat in Russian). I think I’ve used this site before, and I find it really interesting that it tries to detect smaller circles and transform them into eyes.
tw: vague child abuse mention, eating disorder mention
The “10,000 Bowls of Oatmeal Problem” describes an issue that generative algorithms can run into when producing large amounts of content. Though a generative program may be technically capable of producing endless variations, the results often stop feeling perceptually “interesting,” “novel,” or “unique” once many artifacts have to be made.
Perceptual uniqueness is not necessary in contexts where “perceptual differentiation” is satisfactory – for example, when generating many small details such as grains of sand or waves on water, slight variations in appearance are 1) less work for the creator of the generator and 2) helpful for creating a pleasing sense of visual uniformity. However, for larger, more noticeable elements – say, fish in an aquarium – higher levels of differentiation are needed to prevent an “uncanny valley” effect. One could increase the number of options that can be generated, or make the generative options as different from each other as possible.
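The core of the problem is that viewers only perceive a handful of discrete differences, no matter how much numeric variation the generator produces. The toy sketch below makes that concrete: every trait and parameter here is invented purely for demonstration.

```javascript
// Illustrative sketch of the Oatmeal Problem: with only tiny random
// variation, many generated artifacts end up perceptually identical.
// The traits and their level counts are invented for demonstration.
function makeBowl(random) {
  // Quantize each trait to the handful of values a viewer can actually
  // distinguish; numeric jitter below that scale doesn't register.
  const lumpiness = Math.floor(random() * 3); // 3 perceivable levels
  const shade = Math.floor(random() * 2);     // 2 perceivable shades
  return `lumpiness:${lumpiness},shade:${shade}`;
}

// A tiny seeded PRNG (mulberry32) so the demo is reproducible.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42);
const bowls = Array.from({ length: 10000 }, () => makeBowl(rand));
const distinct = new Set(bowls).size;
console.log(`10000 bowls, ${distinct} perceptually distinct`); // at most 6
```

Ten thousand bowls, but at most six that anyone could tell apart – which is exactly why larger, more noticeable artifacts need genuinely different generative options rather than more jitter.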
Generative Valentines [link]
I first started by creating a Bezier heart. Then, I made various transformed copies of it, all colored based on a randomly generated background color. Finally, I added bordered Comic Sans text with two generative variables, following the format “Have a(n) [adjective] [synonym for Valentine’s Day] Day,” which produces a different result each time the generator cycles through the 12 names in my list.
I originally had the generator randomly choose from a list of my classmates; however, I realized that random selection made it hard to guarantee a result for every classmate. Instead, I changed the algorithm to cycle through the names in a predetermined order, ending the loop once one Valentine had been generated for each classmate.
I also modified the code used to save JPGs of the Valentines so that it would name each card “Valentine_[classmate’s name]” rather than “valentine_[number]” to make accessing the downloaded files easier.
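The cycling and file-naming logic described above can be sketched roughly like this. The actual generator was a p5.js sketch; the names, adjectives, and holiday synonyms below are stand-in placeholders, not the real lists.

```javascript
// Sketch of the cycling + naming logic described above. The name list,
// adjectives, and holiday synonyms are placeholders, not the real ones.
const classmates = ["Alex", "Bea", "Cole"]; // stand-in for the 12 names
const adjectives = ["lovely", "electric", "cozy"];
const synonyms = ["Valentine's", "Sweetheart", "Cupid"];

function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

function makeCard(name) {
  // "Have a(n) [adjective] [synonym for Valentine's Day] Day"
  const adj = pick(adjectives);
  const article = "aeiou".includes(adj[0]) ? "an" : "a";
  return {
    text: `Have ${article} ${adj} ${pick(synonyms)} Day, ${name}!`,
    // Saved as "Valentine_[name]" instead of "valentine_[number]"
    filename: `Valentine_${name}.jpg`,
  };
}

// Cycle through the names in order, one card each, instead of random choice.
const cards = classmates.map(makeCard);
cards.forEach((c) => console.log(c.filename, "->", c.text));
```

Mapping over the fixed name list is what guarantees exactly one card per classmate, and keying the filename on the name (rather than a counter) is what makes the downloads easy to find.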
10,000 Bowls of Oatmeal Problem
I feel like I am starting to run into some of the Oatmeal Problem: the varying colors and generative text give my cards some degree of uniqueness, but their overall composition is static, which makes them less distinct from one another.
I was particularly drawn to this Helena Sarin “#latentdoodle” because although it is non-representational, this AI-generated work has a composition and texture that remind me of sand or salt under a microscope.