Category Archives: CapstoneProposal

Matthew Kellogg – Capstone Schedule

Week 1: 3/24 – 3/30

  • Set up NaCl build environment
  • Find or plan graphics engine
  • Design nine-patch system
  • Implement nine-patch system
  • Find and import a physics engine (either Box2D or Bullet)
  • Think about characters and world (sketches)
  • Create specific schedule

Week 2: 3/31 – 4/6

  • Define level format and make a text-based file importer
  • Add player and keyboard control
  • Add placeholders for solid objects, dangerous objects, and anything else
  • Make models and textures for a tileset and a couple objects

Week 3: 4/7 – 4/13

  • Add more objects (boosters, spikes, moving obstacles and platforms, lasers)
  • Add animated character model (pre-baked if necessary, but skeleton would be nice)
  • Add particle system
  • Add dynamic lights with shadows
  • Add background

Week 4: 4/14 – 4/20

  • Add more objects
  • Add dynamic gravity, air jets, portals
  • Add controller support
  • Polish HUD
  • Make 10 levels
  • Add reflections

Week 5: 4/21 – 4/27

  • Polish

sejalpopat

26 Mar 2015

Background:
Detailed Tutorial of Selections
Data Binding

Update:
Which methods to include in the visualization:

  1. select
  2. selectAll
  3. data
  4. enter
  5. exit
  6. append

I’ve decided that each of these should have visualization rules associated with it, for a couple of reasons. First, they are the basics you need to know about d3 from the beginning. Second, they’re often chained together in calls.

Parsing the code to get the lines that involve these methods and are relevant for visualization:
Right now I have two approaches to breaking down the code to find things to visualize. The first is overkill: it uses a parser that breaks any snippet of JavaScript down into a full syntax tree.

That output doesn’t really simplify anything, so I decided to take a more targeted approach: look specifically for lines that involve method chaining, narrow those down to lines with the methods specified above, and then get the values of the variables/arguments to those calls. I’ve completed all but the last part (getting the values of variables) using regular expressions; a sketch of the idea follows. Next I need to break this down into a format for visualization.
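
Here is that sketch, in Python for illustration (the real site will operate on JavaScript in the browser; the regular expression is a simplification that misses arguments containing parentheses):

    import re

    # The six d3 methods to visualize; longest names first so that
    # "selectAll" is never matched as just "select".
    D3_METHODS = sorted(["select", "selectAll", "data", "enter", "exit", "append"],
                        key=len, reverse=True)

    # Matches chained calls such as .selectAll("p") or .data(dataset).
    CALL_RE = re.compile(r"\.(%s)\s*\(([^)]*)\)" % "|".join(D3_METHODS))

    def find_d3_calls(js_source):
        """Return (method, argument) pairs for each chained d3 call."""
        return CALL_RE.findall(js_source)

    snippet = 'd3.select("body").selectAll("p").data([4, 8, 15]).enter().append("p");'
    print(find_d3_calls(snippet))
    # [('select', '"body"'), ('selectAll', '"p"'), ('data', '[4, 8, 15]'),
    #  ('enter', ''), ('append', '"p"')]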

Additionally, I’ve begun work on the website, so that input to a text box is taken in, parsed, and then visualized. This is in the very early stages; I’ve done just the bare minimum so that I can test different ways of visualizing these lines.

Questions

1. Does it seem helpful to have a visual of what you’re working with as you type code?

2. Which of the tutorials above, if any, seem like they actually explain things intuitively?

3. Is this useful and worth the effort, or is it overkill?

Plan:

First, I will get a sense of which visual representations are feasible and make sense by manually sketching some based on example code on the d3 website. This is what I am working on now; it will give me a sense of the variation and the challenges I will have to deal with in parsing and visualizing code consistently.

After this step I should have narrowed the visualization down to the specific methods/concepts in d3 that I want to clarify, and I can begin writing code that parses those aspects of the code and visualizes them consistently. The starting point should be parsing the code, looking for the methods I’ve decided to focus on, and generating some text based on that. This is a good way to start writing the code I will need without getting too stuck on the visualization problems/decisions that will come up.

The part of this for which it is most difficult to delineate steps in advance is the visualization itself; once I have a sense of my narrowed scope, this will become clearer. For now, some challenges I expect are figuring out which visual analogy to use consistently, how much detail is useful to provide in a visualization, and the layout of the graphical elements (with respect to each other, and to the code if that is displayed alongside). Additionally, many of the concepts lend themselves to simple text summaries, so I am thinking those would also be useful in conjunction with visual elements. To support this process I’ve been looking at automatic code documentation/summarization tools to see how they extract the relevant aspects of code and consistently generate text from them.

Ron

26 Mar 2015

As described in the project overview, I am looking to analyze the text of over 10,000 Dilbert comic strips and then create some kind of mashup that allows for the creation of strips with new content.

My thoughts on next steps would include:

Using a Python module for image manipulation, develop code to go through each of the 10,000 strips and separate them by panel. The weekday strips follow a standard three-panel format, so they can be cropped into thirds, as sketched below.
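
A rough sketch of the cropping step, with Pillow standing in as the image module (the filename is hypothetical):

    from PIL import Image  # Pillow, one common Python image module

    def split_into_panels(strip_path, n_panels=3):
        """Crop a weekday strip into equal-width panels, left to right."""
        strip = Image.open(strip_path)
        w, h = strip.size
        panel_w = w // n_panels
        return [strip.crop((i * panel_w, 0, (i + 1) * panel_w, h))
                for i in range(n_panels)]

    # The filename is a placeholder; real strips come from the scraped archive.
    for i, panel in enumerate(split_into_panels("dilbert-strip.gif")):
        panel.save("panel-%d.png" % i)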

A module for optical character recognition can then perform OCR on each panel of each strip. The previously scraped dialogue (which serves as the ground truth for the OCR output) does not specify which line is associated with which panel, so a Levenshtein distance algorithm, available through a Python module, can match the OCR output against the ground truth.
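
A sketch of the OCR-and-match step, with pytesseract and python-Levenshtein standing in as one possible pair of modules:

    import Levenshtein  # python-Levenshtein; difflib would also work
    import pytesseract  # a common wrapper around the Tesseract OCR engine
    from PIL import Image

    def match_panel_dialogue(panel_path, ground_truth_lines):
        """OCR one panel, then return the scraped line it most resembles."""
        ocr_text = pytesseract.image_to_string(Image.open(panel_path)).strip()
        # Smallest edit distance wins; lowercasing softens OCR noise.
        return min(ground_truth_lines,
                   key=lambda line: Levenshtein.distance(ocr_text.lower(),
                                                         line.lower()))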

To perform textual analysis of the now-recognized text, the Java-based MAchine Learning for LanguagE Toolkit (MALLET) can be used for topic modeling. This process examines clusters of words that often occur together in each strip’s dialogue and then, using contextual clues, connects words with similar meanings to build a topic model.
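
A sketch of driving MALLET’s command-line tools from Python; the dialogue_txt/ layout (one text file of matched dialogue per strip) and the topic count of 25 are placeholder choices:

    import subprocess

    # Import one .txt file of matched dialogue per strip into MALLET's format.
    subprocess.run(["bin/mallet", "import-dir",
                    "--input", "dialogue_txt",
                    "--output", "strips.mallet",
                    "--keep-sequence", "--remove-stopwords"], check=True)

    # 25 topics is an arbitrary starting point to tune later.
    subprocess.run(["bin/mallet", "train-topics",
                    "--input", "strips.mallet",
                    "--num-topics", "25",
                    "--output-topic-keys", "topic_keys.txt",
                    "--output-doc-topics", "doc_topics.txt"], check=True)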

After this process, the idea is to replace the strip’s dialogue with content from another source. Using the image manipulation package, the existing dialogue would be removed and replaced with new content in a Dilbert-like font. I’m not sure exactly what would replace it, but it would be based on the results of the earlier topic modeling so that the replacement text retains similar meaning and context. One option is to render the replacement text in an Old English dialect; another is to update the strip for the current decade by running lines from the “Silicon Valley” TV series through topic modeling and selecting new, related content for a given strip. I am thinking the user would select a theme or keyword, and the result would be a recreated strip.
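
A sketch of the replacement step with Pillow; the balloon coordinates and font file are placeholders, and locating balloons automatically (e.g. from OCR word bounding boxes) is a separate problem:

    from PIL import Image, ImageDraw, ImageFont

    def replace_dialogue(panel, balloon_box, new_text,
                         font_path="dilbert_like.ttf"):
        """Blank out a speech balloon and draw replacement text into it.

        balloon_box is (left, top, right, bottom).
        """
        draw = ImageDraw.Draw(panel)
        draw.rectangle(balloon_box, fill="white")      # erase old dialogue
        font = ImageFont.truetype(font_path, size=14)  # a Dilbert-like font
        draw.multiline_text((balloon_box[0] + 4, balloon_box[1] + 4),
                            new_text, fill="black", font=font)
        return panel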

mmontenegro

25 Mar 2015

Stage 1: (03/26 – 04/02)
– Successfully change the color of clothing in real time using the Kinect (see the sketch after this stage)
– If the image looks realistic, stick to the original plan
– If the image doesn’t look realistic, change the magic mirror to show you in a “fantastical” way; lean towards fictional characters
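
A minimal sketch of the recoloring idea with OpenCV (not the final implementation), assuming the Kinect’s depth/body data already yields a mask of the clothing region:

    import cv2
    import numpy as np

    def recolor_clothing(frame_bgr, clothing_mask, target_hue):
        """Shift the hue of masked pixels, keeping saturation and brightness.

        frame_bgr: a color frame; clothing_mask: uint8 mask (255 = clothing),
        e.g. derived from Kinect depth/body data; target_hue: 0-179 in OpenCV.
        Keeping the original saturation/value preserves folds and shadows,
        which is what makes the recolored fabric look realistic.
        """
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hsv[..., 0] = np.where(clothing_mask == 255, target_hue, hsv[..., 0])
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)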

Stage 2: (04/02 – 04/16)
– Detect specific pieces of clothing (pants, sweater, etc.) whose color will be changed
– Add gesture recognition to allow user to select what piece of clothing they want to change –> Hand gesture
– Add gesture recognition to allow user to select new color –> Hand gesture
– If the image looks realistic, stick to the original plan
– If the image doesn’t look realistic, change the magic mirror to show you in a “fantastical” way; lean towards fictional characters

Stage 3: (04/16 – 04/23)
– Make the magic mirror a more interesting object by using a 3D mesh and projection mapping.
– Not a very complicated projection mapping, just NOT a flat screen

Stage 4: (04/20 – 04/26)
– Successfully “grab” the current image and send it to your phone to share it.

Stage 5: (04/26 – 04/30)
– Polish

Thomas Langerak – Capstone Plan V1

Revised Idea:

To create a password alternative based on a physical object, currently and most likely a game of chess.
One has to remember a famous game of chess in order to unlock something (in this case, probably a poem); to unlock it, one has to play the correct game against a computer.

Key to this concept is showing progress in decrypting the message. When one gets a move right, the correct word/letters are shown; with a wrong move, wrong letters are shown. In the first concept a wrong move meant “game over”, but the conclusion was that this made brute-forcing incredibly easy.
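
A rough sketch of the mechanic in Python (the game, the poem, and the wrong-letter scheme are placeholders):

    import hashlib

    SECRET_MOVES = ["e4", "e5", "Nf3", "Nc6", "Bb5"]      # a remembered game
    POEM_WORDS = ["Shall", "I", "compare", "thee", "to"]  # one word per move

    def reveal(move_number, guessed_move):
        if guessed_move == SECRET_MOVES[move_number]:
            return POEM_WORDS[move_number]
        # Wrong move: deterministically derive wrong letters, so a
        # brute-forcer never gets a "game over" signal to learn from.
        digest = hashlib.sha256(
            (guessed_move + str(move_number)).encode()).hexdigest()
        return digest[:len(POEM_WORDS[move_number])]

    print(reveal(0, "e4"))  # -> "Shall"
    print(reveal(0, "d4"))  # -> five characters of noise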

I really want to make a tangible interface for all of this; I think it adds far more value to the project, and I do not think it will be a large challenge. Yet it is not important for the proof of concept. This brings me to a plan for the upcoming weeks.

As an afterthought, one could see it as the chess game in the first Harry Potter book/movie: solve a challenge to move forward. But instead of winning, it is the remembered moves that are essential.

Plan:

  • I have thought long and hard about the programming environment. Both openFrameworks and Processing appeal to me (maybe Arduino alone is already enough). Since the final design should be tangible, I should look into a Raspberry Pi. Here is a simple tutorial for getting Arduino + Processing working on the Raspberry Pi:

http://scruss.com/blog/2014/01/07/processing-2-1-oracle-java-raspberry-pi-serial-arduino-%E2%98%BA/

The final choice will be made based on which environment a chess engine is most easily implementable in.

  • For a first prototype a chess engine is not essential. If everything is done correctly, the moves of each piece are known beforehand, so no calculation needs to be done. I will therefore probably start with a mockup in Processing.
  • The artistic challenge lies in the visualization; I am still contemplating the how and what. I would love to make a tangible interface, since this solves some problems and I think it has more value both as a concept and as an artistic piece. I do not think piece recognition will be hard; moving the chess pieces will be more of a challenge, but definitely a solvable one.

To elaborate on the first prototype (I am still wondering whether to start doing this on a Raspberry Pi): this prototype will be text-input based.

  • Parse the notation into an array per move (one array for the moves of both sides)
  • Translate classic notation into a more easily understandable notation. Classic notation is quite hard to grasp; if I can translate the current standard into something like A1-A2 (the piece on A1 moved to A2) or A1xA2 (the piece on A1 captured something on A2), that will allow for easier debugging and make it easier to move to a more advanced prototype (see the sketch after this list).
  • Add input and output (textual for now)
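
A sketch of the notation translation, in Python for brevity even though the prototype will likely be in Processing; the python-chess package is one ready-made way to interpret classic notation:

    import chess  # the python-chess package

    def to_simple_notation(san_moves):
        """Translate classic notation, e.g. 'Nf3' -> 'G1-F3', captures -> 'x'."""
        board = chess.Board()
        simple = []
        for san in san_moves:
            move = board.parse_san(san)
            sep = "x" if board.is_capture(move) else "-"
            simple.append(chess.square_name(move.from_square).upper() + sep +
                          chess.square_name(move.to_square).upper())
            board.push(move)
        return simple

    print(to_simple_notation(["e4", "d5", "exd5"]))
    # ['E2-E4', 'D7-D5', 'E4xD5']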

This will enable me to check the most important workings of the final piece (the encryption/decryption). From here I can decide how to continue with regard to the visualization.

Assuming the final design is tangible, the division of labor would be:

Arduino  <—>  Raspberry Pi + Processing/OFx

  • Arduino: detection, movement, visualization
  • Raspberry Pi + Processing/OFx: check move, calculate move