People were interested in the accidents that can emerge during gameplay. I think human error, and surfacing it through the UI, is an important part of the experience.
People thought the emergent strategies of the game could be interesting.
Michael Kirby's concept of non-matrixed representation was mentioned, a term from performance theory for performers who do nothing to reinforce the information or identification. When performers are simply themselves and aren't pretending, they are non-matrixed.
The second is doodle-place (doodle-place.glitch.me). I'll be adding some grouping to the creatures so the world is more organized and interesting to navigate. I might also try Golan's suggestion, which is to synchronize the creatures' movement to some music (a rough sketch of the idea is below).
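Something like this minimal sketch could drive the sync, using the Web Audio AnalyserNode to turn the track's loudness into one shared motion signal (the Creature type, the audio element, and the bobbing amount are all placeholders, not the actual doodle-place code):

```typescript
// Sketch: make creatures bob together with the loudness of a music track.
// Assumes an <audio id="music"> element and creatures with a vertical offset.
interface Creature { baseY: number; y: number; }

const audioEl = document.getElementById("music") as HTMLAudioElement;
const audioCtx = new AudioContext();
const source = audioCtx.createMediaElementSource(audioEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;
source.connect(analyser);
analyser.connect(audioCtx.destination);

const bins = new Uint8Array(analyser.frequencyBinCount);

function animate(creatures: Creature[]) {
  analyser.getByteFrequencyData(bins);
  // Average loudness in 0..1, used as one shared "beat" signal.
  const level = bins.reduce((a, b) => a + b, 0) / (bins.length * 255);
  for (const c of creatures) {
    c.y = c.baseY - level * 30; // every creature bobs with the music
  }
  requestAnimationFrame(() => animate(creatures));
}
```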
Finally, I want to add a simple entry screen for the emoji game (emojia.glitch.me). On it you'll be able to customize your outfit to some extent, or maybe see some hints about gameplay. I'm not sure if this will be an improvement, but I think it can be implemented and evaluated quickly. I also want to put the game on an emoji domain I bought last year: http://🤯🤮.ws (and finally prove it's not a waste of my $5).
My online multiplayer world made of emojis received a lot of feedback. I'm most happy to have learned about related works and people, such as Battle Royale, the game Everything, and Yung Jake.
I noticed that the two most frequent keywords used to describe the game are “humorous” and “violent”, which I find accurate.
I received comments about gameplay, such as accommodating more players, explaining more of what is going on, having the ability to change clothes, etc. I'm considering implementing many of the suggestions.
During the critique, people seemed to enjoy playing it. I also found out that people could not distinguish which players were controlled by AI until I pointed out that the AIs have the robot face emoji as their heads. I wonder if this means my program passed the Turing test.
So far, I have created software with some basic linkages and the ability to paste drawings on top of them. I have yet to add functionality for exporting these linkages as a vector file that can be laser cut (a rough sketch of that step is below). I also want to connect to the Ponoko API, so those without access to a laser cutter can order and assemble their own linkage toys. My project also needs an interface that allows users to assemble the custom linkage toys. I am planning a “dress-up game” type interface, where users choose from a preset collection of body parts that snap onto the linkages.
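For the export step, the idea would be something like the sketch below: turn each linkage bar into an outline with pin holes and write it all out as an SVG, which most laser-cutter workflows accept (the Bar type, bar width, and hole radius here are hypothetical placeholders, and a production file would still need units and kerf handled carefully):

```typescript
// Sketch: export linkage bars as SVG outlines with pin holes for laser cutting.
interface Bar { x1: number; y1: number; x2: number; y2: number; }

const BAR_W = 8;  // bar width in mm (placeholder)
const HOLE_R = 2; // pin hole radius in mm (placeholder)

function barToSvg(b: Bar): string {
  const len = Math.hypot(b.x2 - b.x1, b.y2 - b.y1);
  const angle = (Math.atan2(b.y2 - b.y1, b.x2 - b.x1) * 180) / Math.PI;
  // A rounded rectangle along the bar, with a hole at each pin position.
  return `<g transform="translate(${b.x1} ${b.y1}) rotate(${angle})">
    <rect x="${-BAR_W / 2}" y="${-BAR_W / 2}" width="${len + BAR_W}" height="${BAR_W}"
          rx="${BAR_W / 2}" fill="none" stroke="red" stroke-width="0.1"/>
    <circle cx="0" cy="0" r="${HOLE_R}" fill="none" stroke="red" stroke-width="0.1"/>
    <circle cx="${len}" cy="0" r="${HOLE_R}" fill="none" stroke="red" stroke-width="0.1"/>
  </g>`;
}

function exportLinkage(bars: Bar[]): string {
  // Hairline red strokes are a common cut-line convention for laser cutters.
  return `<svg xmlns="http://www.w3.org/2000/svg" width="300mm" height="300mm" viewBox="0 0 300 300">
${bars.map(barToSvg).join("\n")}
</svg>`;
}
```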
There are certain audio files that, when listened to, make me feel like being human isn’t so bad after all. They can be anything: songs, recordings from a friend, sound clips from a movie, or Formula 1 team radio exchanges.
However, the interfaces and procedures for accessing these files are dehumanizing and mundane, conveying no sense of occasion (e.g. below). I want to build a player that lets me play these files in a human, simple, clear way. Additionally, I want a physical interaction that allows me to find a ritualized focus on the sound, with minimal distraction from UIs and screens. Vinyl, CDs, and cassettes provide such an interface, but are laborious to produce and to record your own content onto. My device will use microSD cards, so files can be loaded on quickly using Finder, a nice calm place.
Notice below how, when I try to listen to this one specific file, the process is fast and convenient, but I get bombarded by all these other distracting messages that have nothing to do with the actual thing I'm trying to hear.
Form: I found these screenshots on Simone Rebaudengo's Are.na and they really inspired me. Since this is an intensely personal project, I don't mind having the form given, so that aspect is fixed. I want this prototype to focus on building a high-craft, actually working product with high-fidelity electronic prototyping. This area is definitely still open to interpretation; below are my initial CAD models. The red top pieces would be interchangeable cartridges containing microSD cards, connecting to an Arduino inside the device through pogo pins when inserted.
Additionally, I want to test my ability to translate something fairly abstract, such as these forms, into a fully working electronic device.
For the final, I plan on continuing my work on the new tab screen.
Though most of its functionality was working for Project 4, the drawings do not actually scroll, and the navigation does not update dynamically. I definitely want to make the time aspect of the project actually work. Additionally, it would be great to add an archive for navigating through old drawings.
I also want to network this project so people can create a new room, connect with other people, and actually use this new tab screen (a rough sketch of the networking is below). Ideally this project will live on as a Chrome extension.
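A rough sketch of what that networking could look like, assuming a small socket.io relay server (the server URL, event names, and Stroke shape are all hypothetical):

```typescript
// Sketch: share strokes within a room over socket.io.
import { io } from "socket.io-client";

interface Stroke {
  tool: "pencil" | "pen" | "eraser";
  color: string;
  points: [number, number][];
}

// Room id lives in the URL hash, so sharing the link invites someone in.
const roomId = location.hash.slice(1) || crypto.randomUUID();
location.hash = roomId;

const socket = io("https://my-newtab-relay.glitch.me"); // hypothetical server
socket.emit("join", roomId);

// Send each finished stroke to everyone else in the room...
function finishStroke(stroke: Stroke) {
  drawStroke(stroke);
  socket.emit("stroke", { roomId, stroke });
}

// ...and draw strokes as they arrive from others.
socket.on("stroke", (stroke: Stroke) => drawStroke(stroke));

function drawStroke(stroke: Stroke) {
  // render the stroke to the new tab canvas
}
```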
From the feedback I received, I think it's a good idea to:
- Make this an actual Chrome extension! I was glad to hear people would want to use it.
- Have the drawings scroll continuously rather than all at once. This allows a more scroll-like flow, and better negotiates time differences between people.
- Leave the drawing tools as they are. I was originally planning to add a tool for typed text, but the group's discussion of how the screen feels more intimate when handwritten makes me want to leave the text tool out.
- Add an archive for old drawings. I hadn't really thought through what would happen once you have an overabundance of navigation dots, so adding an archive beyond the last week of drawings sounds like a good move.
Thanks everyone who was in my group for the critique :~)
I created a shared new tab space where multiple people can draw and leave messages. There's a toolbar in the top right corner. You can draw with the pencil or pen (for straight lines), erase, and change the color. As the day goes by, your drawing scrolls down out of view, leaving a clean slate for a new day (a sketch of this mechanic is below). Old drawings can be viewed by scrolling down, or by clicking a date in the left-hand navigation.
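The day-long scroll could be as simple as mapping the time of day to a vertical offset, something like this sketch (the one-screen-per-day layout is an assumption about my own implementation, not settled code):

```typescript
// Sketch: scroll today's drawing out of view as the day passes.
// Each day occupies one screen-height of canvas; older days stack below.
const DAY_MS = 24 * 60 * 60 * 1000;

// 0 at midnight, approaching 1 just before the next midnight.
function dayProgress(now: Date = new Date()): number {
  const midnight = new Date(now);
  midnight.setHours(0, 0, 0, 0);
  return (now.getTime() - midnight.getTime()) / DAY_MS;
}

// Pixels to shift the drawing surface upward: today's slate is fully
// visible at midnight and has scrolled fully out of view a day later.
function scrollOffset(screenHeight: number): number {
  return dayProgress() * screenHeight;
}
```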
When thinking about telematic art, my mind goes to how we can make communication more intimate between two people, or a small group. The new tab screen is a place we go hundreds of times per day, and yet it usually serves little purpose (aside from luring us back into our frequented sites). What if we could leave messages for each other on these screens?
Messages on a new tab don't send you notifications, yet you can count on someone seeing them within the hour. For people who are apart, it's an intersection where they can meet intentionally, yet also coincidentally.
Robots are often thought to be deterministic beings devoid of spiritual practice. However, as physical subjects, they share our temporal limits, bound as we are to this mortal coil. Who is to say they lack the yearning to communicate beyond the earthly plane with their dead brethren? Not us. In fact, we should support them if they so desire.
Robots should be able to use Ouija boards. As everyone knows, robots read in their native language, barcode. As everyone also knows, Ouija boards operate through the ideomotor phenomenon. This phenomenon expresses our subconscious (our best connection to the dead) and requires that users be able to read the text as they perform the ceremony.
Therefore, we built a robot Ouija board. It is similar to a normal Ouija board, except that the letters, plus 0 (for “no”) and 1 (for “yes”), are rendered in barcode. The planchette carries the start and end characters. It occasionally aligns with characters, causing the robot's vision sensors (modeled in our video with a barcode scanner) to read the character. Through this action, they build their message (a sketch of the message-building logic is below).
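From the robot's side, building the message is simple accumulation between start/end reads; a tiny sketch (the '*' start/end character follows the Code 39 convention, and the scanner callback is hypothetical):

```typescript
// Sketch: assemble the spirits' message from individual barcode reads.
// The planchette's start/end character ('*', as in Code 39) brackets a message.
let recording = false;
let message = "";

function onScan(char: string) {
  if (char === "*") {
    if (recording && message.length > 0) deliver(message); // message complete
    recording = !recording;
    message = "";
  } else if (recording) {
    message += char; // letters, or "1" for yes / "0" for no
  }
}

function deliver(msg: string) {
  console.log(`the spirits say: ${msg}`);
}
```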
Playing with the guts of machine learning models to create a conversational design partner.
For my RA work with the Archaeology of CAD project, I am recreating Nicholas Negroponte's URBAN5 design system. Built in the late 1960s, its purpose was to “study the desirability and feasibility of conversing with a machine about an environmental design project.” For my final project, I would like to revisit this idea with a modern machine (i.e., a machine learning model).
Most applications of ML are focused on automatically classifying, generating, stylizing, completing, etc. I would like to create an artifact that frames the interaction as an open-ended conversation with an intelligent design partner.
In its early stages, machine learning functioned as a black box. It developed an understanding of the world in its subconscious; just like us, it had trouble articulating its intuitions. As we work on explainability, we develop tools that allow a machine learning model to communicate its understanding.
This project investigates this area through a drawing program with a chatbot powered by the mixed4d GoogLeNet hidden layer. As you draw, it will calculate the difference in the mixed4d layer between your drawing and a design intent (which could be an image or a set of images), and then return the neurons with the greatest difference (a rough sketch of this comparison is below). Google provides an API for visualizing these neurons. This will produce a set of high-level abstractions representing what your picture might be missing, given your intent. These images will be presented through various Tracery.js prompts. The purpose is to make it feel like a conversation with the machine, rather than an insistence that you do what the machine tells you. I could also add some stochasticity or novelty checks to keep the suggestions fresh.
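A rough sketch of the core comparison, written here with TensorFlow.js (the hosted model URL is a placeholder, GoogLeNet's real input preprocessing is omitted, and in practice the neuron visualizations would come from Google's published interpretability work rather than be computed live):

```typescript
// Sketch: find the mixed4d neurons where the drawing differs most from the intent.
import * as tf from "@tensorflow/tfjs";

const LAYER = "mixed4d"; // GoogLeNet hidden layer used for the comparison

async function neuronsToTalkAbout(
  drawing: HTMLCanvasElement,
  intent: HTMLImageElement,
  k = 5
): Promise<number[]> {
  // Placeholder URL for a converted GoogLeNet that exposes mixed4d as an output.
  const model = await tf.loadGraphModel("https://example.com/googlenet/model.json");

  const activationsOf = (el: HTMLCanvasElement | HTMLImageElement) =>
    tf.tidy(() => {
      const input = tf.browser
        .fromPixels(el)
        .resizeBilinear([224, 224])
        .toFloat()
        .expandDims(0); // NOTE: GoogLeNet's real preprocessing is skipped here
      // Average each channel over space -> one number per mixed4d neuron.
      return (model.execute(input, LAYER) as tf.Tensor4D).mean([1, 2]).squeeze();
    });

  const drawn = activationsOf(drawing);
  const wanted = activationsOf(intent);
  const diff = tf.abs(tf.sub(wanted, drawn)); // how far the drawing is from the intent
  const { indices } = tf.topk(diff, k); // the k most "missing" abstractions
  return Array.from(await indices.data());
}
```

Those indices would then be dropped into Tracery.js templates, so the feedback reads as a question from a partner rather than a command.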
This piece takes new machine learning interpretability technology, applies the idea of comparing high level abstraction vectors, and frames it as a conversation with a machine. It proposes an interaction with machines as partners instead of ‘auto’-bots that do everything for us, make all our decisions, free us from work, and control our fate.