Category Archives: CapstoneProposal

dsrusso

24 Mar 2015

Overview

This project is an open-source, affordable hardware platform for advanced cinematics.  These camera-movement technologies are typically available only to well-funded production companies, but this device will give the same level of dynamics to anyone with a laptop.

What’s Out There

There are a large number of commercial products and DIY tutorials that cover this subject.  However, there is quite a disparity between the DIY/open-source builds and the commercial offerings.  On top of that, commercial rigs do not offer real-time subject tracking until the budget reaches the stratosphere.  This space remains largely unexplored as an affordable means of obtaining high-quality, dynamic footage.

Hardware

+What is needed?

-Stepper Motors (sound control // smoothness)

-Aluminium rail (light // strong)

-Acrylic parts (easy to fabricate // inexpensive)

-Arduino + Drivers

-Kinect
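Much of the "smoothness" the steppers promise comes down to how step pulses are timed. As a rough sketch of the idea (not the project's actual firmware; the speeds and step counts are invented placeholders), here is how per-step delays for a simple trapezoidal speed ramp could be computed on the laptop before streaming them to the Arduino:

```python
def ramp_delays(total_steps, max_sps, accel_steps):
    """Per-step delays (seconds) for a trapezoidal speed profile.

    Speed ramps up linearly to max_sps over accel_steps, cruises,
    then ramps back down symmetrically at the end of the move.
    """
    delays = []
    for i in range(total_steps):
        # distance (in steps) from the nearest end of the move
        edge = min(i + 1, total_steps - i)
        speed = max_sps * min(1.0, edge / accel_steps)  # steps/second
        delays.append(1.0 / speed)
    return delays

delays = ramp_delays(total_steps=200, max_sps=800, accel_steps=50)
# early steps are slow; the middle of the move cruises at 800 steps/s
```

Feeding the motor driver delays like these, instead of a constant pulse rate, is what keeps the carriage from jerking at the start and end of a slide.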

 

 

[prototype photos]

Code Schematic

[code schematic diagram]

 

What’s Next

+Develop Pan Tilt Mechanism

+Devise Tracking Algorithms

+Machine Connectors for a Sturdy Result

+Define Choreography
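For the tracking algorithms, one common starting point is a proportional controller that nudges the pan/tilt toward the subject's position as reported by the Kinect. This is only a sketch of that idea, with made-up frame size, gain, and angle names rather than the project's code:

```python
def track_step(pan_deg, tilt_deg, subject_xy, frame_wh=(640, 480), gain=0.05):
    """One proportional-control update: move the pan/tilt by a fraction
    of the subject's pixel offset from the frame center."""
    w, h = frame_wh
    err_x = subject_xy[0] - w / 2   # positive: subject is right of center
    err_y = subject_xy[1] - h / 2   # positive: subject is below center
    return pan_deg + gain * err_x, tilt_deg - gain * err_y

pan, tilt = 0.0, 0.0
for _ in range(10):
    # in reality subject_xy changes as the camera moves; here it is fixed
    pan, tilt = track_step(pan, tilt, subject_xy=(400, 240))
# pan drifts right toward the subject; tilt stays put (already centered)
```

A real rig would add clamping, dead zones, and probably a derivative term to avoid oscillation, but this captures the core loop.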

 

Schedule

March 31st – Completed Pan-Tilt Mechanism [Design and Build]

-Strategy for Choreography

April 14th – Kinect Development Completed

April 28th – Completed/Refined Prototype for Critique

 

Epic Jefferson

24 Mar 2015

 

Mano-Extendida: Free-hand gestures for sound manipulation using the Leap Motion controller

Current Status: I started this project on a Linux system and have attempted to transfer it over to OS X, with extremely poor results. Copying from git and simply running the openFrameworks project produced errors I couldn’t work around, so I rewrote the OF program to get the Leap data and send it over OSC. OK, now that that’s working, my Pd patch should behave exactly the same, because the messages are exactly the same as before… NOPE! For some reason, Pd-extended only receives the OSC about 4 seconds AFTER I REMOVE MY HANDS from the Leap’s view. It just spits out a shitload of data all lumped together.

So I checked and saw that Pd vanilla 0.46 has a new built-in OSC parsing object and tested it out. The initial test suggested this would work, so I proceeded to completely rebuild the Pd patch because, naturally, I can’t just open my Pd-extended patches in Pd vanilla, and I can’t copy/paste either. Great. Once that was done, what do you know: Pd is totally shutting down my OSC parsing again, this time when DSP is turned on! Same as before: about 4 seconds after removing my hand from the Leap’s view, I get a bunch of OSC data lumped together. I tried installing JACK to see if the issue was with PulseAudio, but the result is the same. That’s where I am now: totally frustrated.
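One way to narrow down where those four seconds go would be to bypass Pd entirely and timestamp the raw UDP packets as they arrive. A throwaway sniffer like this (Python standard library only; port 9000 is a placeholder for wherever the OF app actually sends) would show whether the sender is emitting OSC in real time or buffering it:

```python
import socket
import time

def sniff_osc(port=9000, max_packets=5, timeout=10.0):
    """Print the arrival time and OSC address of each UDP packet on `port`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(timeout)
    seen = []
    try:
        while len(seen) < max_packets:
            data, _ = sock.recvfrom(4096)
            # an OSC message starts with a NUL-padded address string
            address = data.split(b"\x00", 1)[0].decode("ascii", "replace")
            seen.append((time.monotonic(), address))
            print(f"{seen[-1][0]:.3f}  {address}  ({len(data)} bytes)")
    except socket.timeout:
        pass
    finally:
        sock.close()
    return seen
```

If the timestamps here tick in real time while Pd still lags, the problem is on the Pd/audio side; if the packets themselves arrive in one late burst, the sender is the culprit.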

On a different note, I’ve been giving some thought to the gestures I want to implement for sound design and manipulation. Here are some illustrations; hopefully they’re self-explanatory and thus appropriate for the task.

[gesture illustrations: Mano1 through Mano5]

Here’s one that’s not self-explanatory, but I’m inclined toward it anyway. [Mano6]

 

Here’s some of the research I’ve done, mainly comparing systems that use gloves vs. no gloves for sound synthesis. It’s very light on the gesture side of things, but I’m working on that.

pedro

24 Mar 2015

My data visualization project is a necessary reference for my capstone proposal. Originally, I was investigating the possibility of an interactive system that allowed the analysis of building footprints. This analysis would be the basis for taking the buildings out of their context and proposing a remapping. In order to focus on visualization, I expanded my data set and produced static remappings of different cities based on a ranking of shape metrics related to dispersion or regularity.

[datamenu: original proposal for data visualization, with clustering, force-collapsible graph and interaction]

The capstone proposal will return to the origins of this investigation and reformulate the idea of remapping.

[here should be the new sketch, but the blog is not loading….]

Based on the visualization of the OSM file in the Processing environment, the user will be able to analyse and reformulate the organization of the buildings. The buildings will be stored in dedicated classes containing not only their own geometric elements but also the characteristics of their shapes. This is the basic idea, but there are still many branches to choose from. Since these branches demand different algorithms and approaches to the problem, I decided to use this post as a place to make this intersection explicit.

Here are two divergent general approaches to the problem, plus a possible middle ground:

a Interactivity: many event-based functions will allow the user to choose specific buildings and specific shape metrics to organize a new map in parallel.

b “Generativity”: a more complex algorithm will automatically generate a new map based on multi-dimensional shape metrics. The map would probably be analysed as a whole.

c Somewhere in between? This would be the ideal, but probably the least feasible.

There are some possible types of map:

1 1D and 2D ranking: creates a precise position for each building in the new space based on its properties.

2 Pack, puzzle or collage: an agglomeration of the buildings based on their boundaries and neighbours.

3 Graph + attraction + repulsion: a general data structure to cluster and organize the buildings in the new map. This is probably the best data structure + algorithm combination to balance interaction and generativity.

4 Self-organizing feature map / Kohonen map: an artificial neural network able to convert high-dimensional data into a low-dimensional space (such as modular maps) using weight vectors. It generates a completely new territory.
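Whichever map type wins out, each building class will need its shape metrics. As a sketch of the kind of metric involved (my own illustration, not the project's code), here is a compactness score, 4πA/P², computed from a footprint's vertex list; it approaches 1.0 for a circle and drops toward 0 for dispersed, irregular shapes:

```python
import math

def compactness(footprint):
    """Polsby-Popper compactness 4*pi*A / P**2 for a closed polygon
    given as a list of (x, y) vertices (the last edge wraps to the first)."""
    n = len(footprint)
    area2 = 0.0   # twice the signed area (shoelace formula)
    perim = 0.0
    for i in range(n):
        x1, y1 = footprint[i]
        x2, y2 = footprint[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return 4 * math.pi * abs(area2) / 2 / perim ** 2

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
compactness(square)   # pi/4, about 0.785: a square is fairly compact
```

Ranking buildings by a handful of scores like this one is all that map type 1 needs; types 3 and 4 would feed the same values in as feature vectors.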

Ron

24 Mar 2015

For my capstone project proposal, I am performing a textual analysis of Dilbert, a comic strip about an engineer in a bureaucratic corporate machine. The author has been producing strips since the late 80s: over 10,000 of them across nearly 26 years. In an earlier assignment, I scraped the dialogue of each of these individual strips and downloaded each strip as a separate image; this data was used to produce a visualization indicating the relationship strength of each character pair. For my capstone project, I would like to extend this exercise to extract more interesting information from this data set and apply it to a creative application.

I currently have dialogue for each strip, but the dialogue is not associated with individual panels (a typical strip has three panels). Using Python image-manipulation and optical character recognition (Tesseract) bindings, I intend to slice each strip into individual panels and then perform OCR to associate dialogue with a specific panel. The Levenshtein distance algorithm will be used to match the OCR text against the ground truth of the scraped dialogue. Once this task is complete, I can wipe the original dialogue from the strips’ image files. Then, using natural language processing techniques, I can compare the topics/subjects of an individual strip with other bureaucratic content, such as C-SPAN transcripts or United Nations assembly meeting minutes, and insert this new text content in a Dilbert-like font to effectively create a new strip.
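The OCR-to-ground-truth matching step could be prototyped with Python's standard library alone: `difflib.SequenceMatcher` gives a similarity ratio closely related to Levenshtein distance, which is enough to decide which scraped line a noisy OCR panel belongs to. A sketch, with invented example strings rather than real strip dialogue:

```python
import difflib

def best_match(ocr_text, ground_truth_lines):
    """Return the scraped dialogue line most similar to noisy OCR output."""
    def score(line):
        return difflib.SequenceMatcher(None, ocr_text.lower(), line.lower()).ratio()
    return max(ground_truth_lines, key=score)

scraped = [
    "I love the smell of bureaucracy in the morning.",
    "Our new policy is to have a policy about policies.",
    "You can't fire me, I'm already useless.",
]
# OCR output with typical recognition noise (0/O, 1/i confusions)
best_match("0ur new pol1cy is to have a pollcy about policies", scraped)
# -> "Our new policy is to have a policy about policies."
```

A dedicated Levenshtein library would be faster over 10,000 strips, but the decision logic is the same: pick the ground-truth line with the smallest edit distance to each panel's OCR text.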

Dilbert

Part of my previous research looked into what other information I could extract from the comic strip’s dialogue. This included my Looking Outwards 9 post, which showed examples of plotting character appearance frequency in a given chapter of Les Misérables and noting the mood of the settings in which each character appeared. Another example in that post showed a visualization of common phrases or words spoken by various public speakers. At Golan’s suggestion, I have also looked into MALLET (MAchine Learning for LanguagE Toolkit), which provides a way to analyze large amounts of unlabeled text and group it into topics, determined by analyzing clusters of words that frequently occur together. This toolkit can be applied to surface the common themes and topics that run throughout the Dilbert strip, and that data can then determine what type of content can replace the strip’s original dialogue in a way that makes sense.

A 140-character description of this project:

My proposal is to textually examine dialogue from every Dilbert comic strip and replace them with new content from outside sources.

JohnMars—CapstoneProposal

ibldi.xyz


ibldi by @john_a_mars is a web app that creates customizable 3D printed models of urban areas.

Description

This will be a continuation of Assignment 21: Parametric Object, in which I developed CityGrabber, an openFrameworks application that extracts the 3D tiles, i.e., buildings, textures, and terrains, from Here.com née Nokia Maps.

I plan to develop a web app that allows users to select custom areas within cities (or anywhere that has 3D data available) and have them printed through Shapeways. There are two competitors in this general area, but both of them are focused on terrain, not buildings.

Milestones

  • Develop method for turning single-face meshes into solid meshes
  • Build website locally
    • Develop website backend
      • Determine website framework (Node.js, Django, Flask, etc.)
      • Embed openFrameworks on server
      • Implement a map API for displaying available cities and picking them
      • Implement Shapeways SDK
    • Develop website frontend
  • Deploy DigitalOcean droplet (possibly Heroku)
  • Purchase Domain
  • Deploy website on server
  • Test print from Shapeways
  • Open Shapeways store
  • Publicity
  • Profit
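The first milestone, turning a single-face (open) mesh into a solid one, amounts to extruding the surface into a watertight body: duplicate the surface at a base height, flip the copy's winding so it faces down, and wall up the boundary edges. A rough pure-Python sketch of that idea (the function, its `base_z` parameter, and the index scheme are my own invention, not CityGrabber's):

```python
def solidify(vertices, triangles, base_z=0.0):
    """Extrude an open triangle mesh down to z = base_z, returning a
    closed mesh.  `vertices` is a list of (x, y, z); `triangles` is a
    list of (i, j, k) vertex-index triples."""
    n = len(vertices)
    out_v = list(vertices) + [(x, y, base_z) for x, y, _ in vertices]
    out_t = list(triangles)
    # bottom cap: the duplicated surface with reversed winding
    out_t += [(i + n, k + n, j + n) for i, j, k in triangles]
    # a boundary edge is a directed edge whose reverse never occurs;
    # wall each one with two side triangles down to the base
    edges = set()
    for i, j, k in triangles:
        edges |= {(i, j), (j, k), (k, i)}
    for a, b in edges:
        if (b, a) not in edges:          # boundary edge
            out_t += [(b, a, a + n), (b, a + n, b + n)]
    return out_v, out_t

# a single elevated triangle becomes an 8-triangle watertight prism
v, t = solidify([(0, 0, 5), (1, 0, 5), (0, 1, 5)], [(0, 1, 2)])
```

This assumes the input mesh has consistent winding and is manifold; real Here.com tiles would also need duplicate-vertex welding and degenerate-face cleanup before a Shapeways upload would validate.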

Visuals

Research