Category Archives: CapstoneProposal

Matthew Kellogg – New Capstone Proposal

Tweetable: An editor and viewer for immersive 3D environments for the purpose of storytelling.

For my project I will be making a two-part system for creating and viewing immersive 3D environments composed of fully surrounding images (skyboxes) and directional sounds. I will be focusing on the following features.

  • Environment Viewer
    • Web viewable
    • Clickable regions
    • Intuitive click-and-drag and pinch-to-zoom controls
    • Fading between scenes
    • Play directional sounds and background music
  • Scene Drawing Helper
    • Draw on a pannable, zoomable spherical scene view
    • Export to a high-resolution cube map / skybox format
    • Import models for shape definitions
    • Import an existing cube map (to add more content or make notes on drawn parts)
    • Layers support
    • Click region definition
    • JSON or XML output of click regions and any other metadata (a possible JSON shape is sketched after this list)
    • Scene linking tools
    • Full comic file export (an archive of metadata and media that can be opened in the viewer)
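
A rough sketch of what the click-region metadata could look like, written as TypeScript interfaces with a sample scene. All field names here are hypothetical; the actual file format gets defined in week 2 of the schedule below.

```ts
// Hypothetical scene metadata shapes; not a fixed spec.
interface ClickRegion {
  id: string;
  yaw: number;           // direction of the region's center, in degrees
  pitch: number;
  radius: number;        // angular radius of the clickable area, in degrees
  targetScene?: string;  // scene to fade to when clicked
  sound?: string;        // directional sound played from this direction
}

interface Scene {
  id: string;
  cubemap: string[];     // six face images: +x, -x, +y, -y, +z, -z
  music?: string;        // background music file
  regions: ClickRegion[];
}

// Example scene with a single region linking to another scene.
const cave: Scene = {
  id: 'cave',
  cubemap: ['px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png'],
  music: 'drips.ogg',
  regions: [{ id: 'exit', yaw: 90, pitch: 0, radius: 10, targetScene: 'forest' }],
};
```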

I will try to follow this schedule.

  • Week 1: 3/31 – 4/6
    • Create a skybox viewer in Three.js (a minimal viewer sketch follows this schedule)
    • Click and drag viewing
    • Draw on the skybox with simple lines and export to a cube map
    • Play sounds
  • Week 2: 4/7 – 4/13
    • Add orientation axis
    • Import .obj models
    • Add layer support
    • Import cubemap as layer
    • Define file format for scenes
    • Scene switching in viewer using dynamic page loads
  • Week 3: 4/14 – 4/20
    • Define click regions
    • Click regions to navigate
    • Better drawing tools (line, square, colors)
    • Comic export
  • Week 4: 4/21 – 4/27
    • Stretch time (either finishing previous tasks or polishing the results)
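
A minimal sketch of the week-1 skybox viewer, using a recent Three.js API (a cube texture as the scene background) plus click-and-drag look controls. The face filenames are placeholders.

```ts
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The skybox: six face images used as the scene's background cube texture.
scene.background = new THREE.CubeTextureLoader().load(
  ['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);

// Click-and-drag viewing: mouse movement rotates the camera in place.
let dragging = false, yaw = 0, pitch = 0;
renderer.domElement.addEventListener('mousedown', () => { dragging = true; });
window.addEventListener('mouseup', () => { dragging = false; });
window.addEventListener('mousemove', (e) => {
  if (!dragging) return;
  yaw -= e.movementX * 0.005;
  pitch = Math.max(-Math.PI / 2,
    Math.min(Math.PI / 2, pitch - e.movementY * 0.005));
  camera.rotation.set(pitch, yaw, 0, 'YXZ'); // yaw around Y, then pitch
});

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```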

John Mars

31 Mar 2015

Original Idea:

ibldi by @john_a_mars is a web app that creates customizable 3D printed models of urban areas.

Modified Idea:

ibldi by @john_a_mars is a web app that creates customizable 3D printed models of urban areas.

I.e., there’s been no change. I’m still on track for the same thing, albeit with a smaller scope of available cities to print.

mileshiroo

31 Mar 2015

Personal Information That I Have to See If You Can Be

Shorter:

People performing conversations written by personalized keyboards.

Longer:

Android OS offers “Personalized Suggestions” for its software keyboard, based on data it has stored from the user’s activity on Google apps and products. For a given typed word, the keyboard offers three suggestions for the next word.

[Screenshot: the Android keyboard offering three next-word suggestions]
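
The suggestion bar behaves roughly like a frequency-ranked next-word lookup. Here is a minimal sketch of that idea in TypeScript; Android's actual model is far more sophisticated than raw bigram counts.

```ts
// Bigram counts: word -> (following word -> how often it followed).
const bigrams = new Map<string, Map<string, number>>();

// Record every adjacent word pair in a piece of typed text.
function observe(text: string): void {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i + 1 < words.length; i++) {
    const next = bigrams.get(words[i]) ?? new Map<string, number>();
    next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
    bigrams.set(words[i], next);
  }
}

// Return the three most frequent followers, like the suggestion bar.
function suggest(word: string): string[] {
  const next = bigrams.get(word.toLowerCase());
  if (!next) return [];
  return [...next.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 3)
    .map(([w]) => w);
}

observe('i am here and i am tired and i am ok');
console.log(suggest('am')); // e.g. [ 'here', 'tired', 'ok' ]
```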

This allows one to write absurdist “predictive poetry”, which tends to capture affective qualities of the user’s writing style. There are many examples of this kind of poetry online, especially since the introduction of QuickType in iOS 8. This is something I wrote using my housemate’s predictive keyboard:

[Screenshot: predictive poetry typed with a housemate’s keyboard]

On Android, the files that contain the tracked information about the user’s typing habits are stored locally. By rooting my phone, I was able to gain access to them. Since these files are synced with one’s Google account, I could presumably obtain the same dictionary files for anyone with a Google account. I was looking through one of my dictionary files, and while most of it was not human-readable, I saw many words that I remember typing years ago.

[Screenshot: raw contents of a personal dictionary file]

The data is encoded in data structures designed specifically for this use. Android’s source code is available online, so it should be possible to decode them. Finding a way to navigate the “trie” of stored data is where the technical challenge lies for me.
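
Once decoded, the stored data is a classic trie: characters along the edges, frequencies at word-final nodes. A generic sketch of walking one in TypeScript; the node shape here is hypothetical, and Android's on-disk binary layout is different.

```ts
// Hypothetical decoded trie node; not Android's actual layout.
interface TrieNode {
  children: Map<string, TrieNode>; // next character -> subtree
  frequency?: number;              // present only where a word ends
}

// Depth-first walk that yields every stored word with its frequency.
function* walk(node: TrieNode, prefix = ''): Generator<[string, number]> {
  if (node.frequency !== undefined) yield [prefix, node.frequency];
  for (const [ch, child] of node.children) yield* walk(child, prefix + ch);
}

// Usage: list a dictionary's contents, most-typed words first.
// const entries = [...walk(root)].sort((a, b) => b[1] - a[1]);
```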

I’d like to stage live scripted conversations between several people, where the scripts are generated with the help of each person’s personalized keyboard data. Each conversation will center on a certain key word, which will serve as the seed word for the text generation. The scripts won’t be generated in an automatic way. Instead, each participant will sit in front of a computer for a few minutes and build up their personalized script by rapidly selecting words with their eyes. After this has occurred, they can review the script and choose to remove words, but they can’t add anything. Done this way, they don’t have much time to react and think about their choices, and it gives me more flexibility to orchestrate a flowing conversation.

The project will happen in a few stages:

1. Meet with each participant to collect their personalized keyboard data and work with them to compose their personalized script.
2. Compile the scripts into a conversation.
3. Have the participants meet to perform the conversation from their personalized scripts.

[Image: stock vector illustration of a panel discussion]

LValley

30 Mar 2015

Original plan:

To make a dress based on the weather

New plan:

Make a dress based on breathing

Changes:

With the original dress, I wasn’t a fan of how data-viz-y it was looking.

I liked the idea of a mechanical dress, but I didn’t want it to just respond to a stimulus; I wanted it to look more like a creature of its own.

With this new breath dress, the only input is breathing, but the dress itself will be positioned in a metal, cage-like space. The skirt of the dress will be attached at certain points using a fishing-line-and-reel system.

When the person inhales, the dress will float upward, and when the person exhales, it lowers, hence breathing.

Inspiration:

I was more or less inspired by The Senster, in that it was a dynamic machine that ran on a single input.

Sketches:

[Three sketch images of the dress]

chen

26 Mar 2015

Week 1

Build the basic model and finish the audio sketch

  • Connect an Xbox controller to my computer and test it in Max/MSP
  • Use Max/MSP to make a simple drum machine that can trigger several sample playbacks (the trigger logic is sketched after this list)
  • The controller also needs to be able to set the duration and volume of the current note.
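
The patch itself will live in Max/MSP, which is graphical and can't be quoted here. As an analogy only, the same trigger logic sketched in TypeScript with the browser Gamepad and Web Audio APIs; the button, axis, and filename choices are arbitrary.

```ts
// Analogy for the Max/MSP drum-machine logic, not part of the plan itself.
const audio = new AudioContext();
let kick: AudioBuffer;   // one loaded drum sample
let wasPressed = false;  // edge detection so a held button fires once

async function loadSample(url: string): Promise<AudioBuffer> {
  const res = await fetch(url);
  return audio.decodeAudioData(await res.arrayBuffer());
}

// Play one sample at the given volume through a per-note gain node.
function play(buffer: AudioBuffer, volume: number): void {
  const src = audio.createBufferSource();
  const gain = audio.createGain();
  gain.gain.value = volume;
  src.buffer = buffer;
  src.connect(gain).connect(audio.destination);
  src.start();
}

function poll(): void {
  const pad = navigator.getGamepads()[0];
  if (pad) {
    const pressed = pad.buttons[0].pressed;     // "A" button = kick drum
    if (pressed && !wasPressed) {
      play(kick, 1 - (pad.axes[1] + 1) / 2);    // stick height sets volume
    }
    wasPressed = pressed;
  }
  requestAnimationFrame(poll);
}

loadSample('kick.wav').then((b) => { kick = b; poll(); });
```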

Week 2

Build the visual sketch

  • Connect Max/MSP and Processing
  • Pass control signals from Max/MSP to Processing (see the wire-format sketch after this list)
  • Design and implement animations for the control signals
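
Max usually hands control data to other programs as OSC packets over UDP (e.g. with [udpsend]), and Processing commonly receives them with the oscP5 library. To make the wire format concrete, here is a TypeScript/Node sketch that decodes one OSC message carrying a single float; the /note address and port 9000 are assumptions for illustration.

```ts
import { createSocket } from 'node:dgram';

// Read a null-terminated, 4-byte-padded OSC string starting at `offset`;
// returns the string and the offset just past its padding.
function oscString(buf: Buffer, offset: number): [string, number] {
  const end = buf.indexOf(0, offset);
  const next = offset + Math.ceil((end - offset + 1) / 4) * 4;
  return [buf.toString('ascii', offset, end), next];
}

// Listen for OSC messages like "/note 0.8" sent from Max's [udpsend].
createSocket('udp4')
  .on('message', (buf: Buffer) => {
    const [address, afterAddr] = oscString(buf, 0);
    const [typeTags, afterTags] = oscString(buf, afterAddr);
    if (typeTags === ',f') {
      const value = buf.readFloatBE(afterTags); // OSC floats are big-endian
      console.log(address, value);              // e.g. "/note 0.8"
    }
  })
  .bind(9000); // port chosen arbitrarily for this sketch
```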

Week 3

Build the robot part

  • Find out how to hack the Keepon
  • Use that knowledge to pass control signals to the Keepon through an Arduino

Week 4

  • Redesign audio samples to make the output sound more interesting
  • Organize the audio samples into several groups and make a selection GUI in the main animation

Week 5

  • Design more Keepon dancing patterns
  • Look into music emotion systems and find out if there is one that is ready to use
  • Improve details in the animation and interaction