SMALL TURING // LGG EPFL
“OPTICS” // CHRIS WOOD
Hello World.
I have decided to change my final project: I will be making a game similar to SpaceChem, in which the player builds railroad circuits that perform computations on train cars.
I will still be trying to make the game as user friendly as possible.
Personal Information That I Have to See If You Can Be
Shorter:
People performing conversations written by personalized keyboards.
Longer:
Android OS offers “Personalized Suggestions” for its software keyboard, based on data it has stored from that user’s activity on Google apps and products. For a given typed word, the keyboard offers three suggestions for the next word.
This allows one to write absurdist “predictive poetry”, which tends to capture affective qualities of the user’s writing style. There are many examples of this kind of poetry online, especially with the introduction of QuickType for iOS 8. This is something I wrote using my housemate’s predictive keyboard:
On Android, the files that contain the tracked information about the user’s typing habits are stored locally. By rooting my phone, I was able to gain access to them. Since these files are synced with one’s Google account, I could presumably obtain the same dictionary files for anyone with a Google account. I was looking through one of my dictionary files, and while most of it was not human readable, I saw many words that I remember typing years ago.
The data is encoded in data structures designed specifically for this use. Since all of Android’s code is online, it should be possible to decode them. The technical challenge for me lies in finding a way to navigate the “trie” of stored data.
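Android’s actual on-disk dictionary format is binary and version-specific, so the sketch below is not a decoder for it; it is only a minimal, hypothetical illustration of the kind of trie structure such a personal dictionary uses — words stored character by character, with a frequency count that ranking of suggestions can be based on.

```javascript
// Minimal sketch of a word trie with frequency counts. This is NOT
// Android's actual dictionary format; names and layout are illustrative.
class TrieNode {
  constructor() {
    this.children = new Map(); // char -> TrieNode
    this.frequency = 0;        // > 0 marks the end of a stored word
  }
}

class WordTrie {
  constructor() { this.root = new TrieNode(); }

  insert(word, frequency = 1) {
    let node = this.root;
    for (const ch of word) {
      if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
      node = node.children.get(ch);
    }
    node.frequency += frequency;
  }

  // Collect every stored word under a prefix, highest frequency first.
  suggest(prefix, limit = 3) {
    let node = this.root;
    for (const ch of prefix) {
      node = node.children.get(ch);
      if (!node) return [];
    }
    const results = [];
    const walk = (n, word) => {
      if (n.frequency > 0) results.push({ word, frequency: n.frequency });
      for (const [ch, child] of n.children) walk(child, word + ch);
    };
    walk(node, prefix);
    return results
      .sort((a, b) => b.frequency - a.frequency)
      .slice(0, limit)
      .map(r => r.word);
  }
}

const trie = new WordTrie();
trie.insert("the", 10);
trie.insert("them", 4);
trie.insert("theory", 7);
console.log(trie.suggest("the")); // three completions, most frequent first
```

A real decoder would have to read this structure out of the binary file format rather than build it in memory, but navigating it — walking child pointers from a prefix and ranking by frequency — would look much the same.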
I’d like to stage live scripted conversations between several people, where the scripts are generated with the help of each person’s personalized keyboard data. Each conversation will center on a certain key word, which will serve as the seed word for the text generation. The scripts won’t be generated automatically. Instead, each participant will sit in front of a computer for a few minutes and build up their personalized script by rapidly selecting words with their eyes. Afterward, they can review the script and choose to remove words, but they can’t add anything. Done this way, participants don’t have much time to react and think about their choices, and it gives me more flexibility to orchestrate a flowing conversation.
The project will happen in a few stages:
1. I meet with each participant to collect their personalized keyboard data and to work with them to compose their personalized script.
2. The scripts are compiled into a conversation.
3. The participants meet to perform the conversation from their personalized scripts.
Background:
Detailed Tutorial of Selections
Data Binding
Update:
Which methods to involve in visualization:
I’ve decided that each of these methods should have visualization rules associated with it, for a couple of reasons. First, they are basic things you need to know about d3 from the beginning. Second, they are often chained together in calls.
Parsing the code to get the lines that involve these methods and are relevant for vis:
Right now I have two approaches to breaking the code down to find things to visualize. The first is overkill: it uses a parser that will break down any snippet of JavaScript like so:
That output doesn’t really simplify anything, so I decided on a more targeted approach: look specifically for lines that involve method chaining, narrow those down to lines using the methods specified above, and then get the values of the variables/arguments passed to those calls. Using regular expressions, I’ve completed all but the last part (getting the values of variables). Next I need to break this down into a format suitable for visualization.
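The regex-based targeted approach can be sketched roughly as follows. The list of method names here is an assumption drawn from typical d3 tutorials (the post’s actual list isn’t shown), and the patterns deliberately handle only simple single-line chains:

```javascript
// Sketch of the targeted approach: find lines with method chaining, keep
// only chains that involve the d3 methods of interest, and pull out each
// method name and its raw argument text. METHODS is an assumed list.
const METHODS = ["select", "selectAll", "data", "enter", "append", "attr", "style"];

function extractChains(code) {
  const chains = [];
  for (const line of code.split("\n")) {
    // a "chain" here means two or more .method(...) calls on one line
    if (!/\.\w+\([^)]*\)\s*\.\w+\(/.test(line)) continue;
    const calls = [];
    const callPattern = /\.(\w+)\(([^)]*)\)/g;
    let m;
    while ((m = callPattern.exec(line)) !== null) {
      calls.push({ method: m[1], args: m[2].trim() });
    }
    // only keep chains that touch at least one method we care about
    if (calls.some(c => METHODS.includes(c.method))) chains.push(calls);
  }
  return chains;
}

const snippet = `
var circles = d3.selectAll("circle").data(dataset);
circles.enter().append("circle").attr("r", 5);
var unrelated = foo.bar(1).baz(2);
`;
console.log(extractChains(snippet));
```

The obvious limitations of regexes show up quickly: nested parentheses inside arguments and chains split across multiple lines both break these patterns, which is where falling back to a real parser would start to pay off.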
Additionally, I’ve begun work on the website, which will take input from a text box, parse it, and visualize it. This is at a very early stage; I’ve done just the bare minimum so that I can test different ways of visualizing these lines.
Questions
1. Does it seem helpful to have a visual of what you’re working with as you type code?
2. Which of the tutorials above, if any, actually explain things intuitively?
3. Is this worth the effort/useful/overkill?
Plan:
First, I will get a sense of which visual representations are feasible and make sense by manually sketching some based on example code from the d3 website. This is what I am working on now; it will give me a sense of the variation and the challenges I will have to deal with in parsing and visualizing code consistently.
After this step I should have narrowed the visualization down to the specific d3 methods/concepts I want to clarify, and I can begin writing code that parses those aspects of the code and visualizes them consistently. I think the starting point should be parsing the code, looking for the methods I’ve decided to focus on, and then generating some text based on that. This is a good way to start writing the code I will need without getting too stuck on the visualization problems/decisions that will come up.
The part of this project that is most difficult to delineate steps for in advance is the visualization itself; once I have a sense of my narrowed scope, this will become clearer. For now, some challenges I expect to come across are figuring out what visual analogy to use consistently, deciding how much detail is useful to provide in a visualization, and working out the layout of the graphical elements (with respect to each other and to the code, if that is displayed alongside them). Additionally, many of the concepts lend themselves to simple text summaries, so I think those would also be useful in conjunction with visual elements. To support this process I’ve been looking at automatic code documentation/summarization tools, to see how they extract the relevant aspects of code and consistently generate text from them.
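The text-summary step — generating plain English from recognized method calls — could be sketched as a simple template lookup. The method names and the wording of the templates below are hypothetical, for illustration only; they are not d3’s documentation:

```javascript
// Sketch of the text-generation step: map each recognized method call in a
// chain to an English template. Templates and method names are hypothetical.
const TEMPLATES = {
  selectAll: args => `select all elements matching ${args}`,
  data:      args => `bind the data in ${args} to the selection`,
  enter:     ()   => `for each datum without a matching element`,
  append:    args => `append a new ${args} element`,
  attr:      args => `set the attribute ${args}`,
};

function summarizeChain(calls) {
  return calls
    .map(({ method, args }) =>
      TEMPLATES[method] ? TEMPLATES[method](args) : `call ${method}(${args})`)
    .join(", then ");
}

const chain = [
  { method: "selectAll", args: '"circle"' },
  { method: "data", args: "dataset" },
  { method: "enter", args: "" },
  { method: "append", args: '"circle"' },
];
console.log(summarizeChain(chain));
```

Even something this crude makes the shape of the problem visible: the hard part isn’t emitting the sentences, it’s choosing templates that stay accurate across the many ways each method gets used.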