Looking Outward: Capstone Ideas

by Nara @ 9:54 am 17 March 2010

So I have two ideas for what I want to do for my final project, and I’m not sure which one I’d rather do / which one would be a more worthwhile use of my time.

Idea 1

Basically, this is just to redo Project 2, which I thought was a good start but really didn’t have time to develop into what I wanted it to be. I also felt this would be a good project for me because it has a physical component (laser-cutting the drop caps), and I haven’t yet done a project that combines art/design and programming whose output is anything other than an interactive or static display on a screen. Another pro would be already having a starting point, even though the program would largely need to be reworked.

Idea 2

This is an idea for a project that I’ve had since last semester, but at the time I was only just beginning to learn how to do data visualization, and I wasn’t sure how I’d go about implementing it. Now that I have over a semester’s worth of experience under my belt, I feel better equipped to tackle it. The idea is to algorithmically/heuristically classify typefaces based on the shapes of letterforms and typographic properties. You’d think it’s been done, but it actually hasn’t, at least not with programming. Some examples of static visualizations done by hand are The Periodic Table of Typefaces by Cameron Wilde and this typeface classification by Martin Plonka. While both of these examples are beautifully done, they are limited to relatively small datasets and don’t necessarily help a user answer questions like, “What kinds of subsets exist within types of typefaces?” or “If I want a typeface that’s a lot like Helvetica but not Helvetica, what should I use?” I know that some designers might hate me for doing this project because it reduces knowledge acquired through years of learning and experience to a simple set of heuristics and mathematical comparisons, but I don’t think that’s a reason not to do it. The visualization component itself would also not be easy, but it could (and should) be done quite beautifully.
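
To make the heuristic idea a bit more concrete, here is a minimal Processing sketch of what a first pass might look like. The font names and the two toy metrics (ink density and width-to-height ratio of a lowercase “a”) are placeholder assumptions, not a real classification scheme; a serious attempt would need far more features.

```java
// Toy heuristic typeface comparison: render the same glyph in each face,
// measure two crude properties, and rank faces by distance to a query face.
// Font names are placeholders and must be installed on the machine.
String[] faces = { "Helvetica", "Arial", "Georgia", "Times New Roman", "Futura" };
float[][] metrics = new float[faces.length][2];

void setup() {
  size(400, 400);
  for (int i = 0; i < faces.length; i++) {
    metrics[i] = measure(faces[i]);
  }
  int query = 0; // "a lot like Helvetica, but not Helvetica"
  float best = Float.MAX_VALUE;
  int bestIndex = -1;
  for (int i = 0; i < faces.length; i++) {
    if (i == query) continue;
    float d = dist(metrics[i][0], metrics[i][1], metrics[query][0], metrics[query][1]);
    if (d < best) { best = d; bestIndex = i; }
  }
  println("Closest to " + faces[query] + ": " + faces[bestIndex]);
}

// Two toy metrics for a lowercase 'a': ink density and width/height ratio.
float[] measure(String fontName) {
  PGraphics pg = createGraphics(200, 200);
  pg.beginDraw();
  pg.background(255);
  pg.fill(0);
  pg.textFont(createFont(fontName, 160));
  pg.text("a", 20, 170);
  pg.endDraw();
  pg.loadPixels();
  int minX = pg.width, maxX = 0, minY = pg.height, maxY = 0, ink = 0;
  for (int y = 0; y < pg.height; y++) {
    for (int x = 0; x < pg.width; x++) {
      if (brightness(pg.pixels[y * pg.width + x]) < 128) {
        ink++;
        minX = min(minX, x); maxX = max(maxX, x);
        minY = min(minY, y); maxY = max(maxY, y);
      }
    }
  }
  float w = max(1, maxX - minX);
  float h = max(1, maxY - minY);
  return new float[] { ink / (w * h), w / h };
}
```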

If anybody has any suggestions or opinions on which they think would make for a better project, let me know!

Looking Outwards – Flight Maps Revisited

by rcameron @ 11:48 pm 16 March 2010

I’ve been really interested in building out my Project #1 (screenshot below) and am determined to do that for this final project, but I also want to do something with projection. So, I’ve decided to mash them together and make a projected version of the final product which triggers as people walk by. I thought a floor projection would work really well, but haven’t really thought through logistics yet.

Some projects that seemed somewhat related are:

Feedtank – Enzimi installation

SNIFF (which we saw in class)

Catchyoo

Nextfest Interactive Video Wall

Water Wall

Looking Outwards – Capstone

by jsinclai @ 5:44 pm

I’ll start with a brief discussion of my current capstone ideas:

1. Something with sticky-notes!
I’m in MHCI…and we love stickies. We love stickies more than eating, sleeping, and sex. We put them everywhere, we do everything with them…
I really liked that piece we saw in class where participants would use sticky notes to grab virtual objects and then place the virtual objects on themselves. It was highly interactive and very fun!
Unfortunately, I can’t find the video.

2. Polishing one of my first 3 projects and taking it to the next level:
–the happy hardcore visualization.
The visualization itself needs polishing, and perhaps another iteration. It could also use more data, so I should probably write a crawler+scraper for Discogs and some sort of automatic way to tie the data together. If it were automated, I could essentially choose a genre of music, visualize the inbredness of that genre (maybe even giving it a numerical value?), and then compare the inbredness of multiple genres!
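
As a rough sketch of what that numerical value could be (this is just one possible definition, and it assumes the release-to-artist credits have already been scraped from Discogs), the “inbredness” of a genre might be scored by how often the same artists recur across its releases:

```java
import java.util.*;

// Hypothetical "inbredness" metric for a genre: how much do releases share artists?
// Assumes release -> artist credits were already collected (e.g., scraped from Discogs).
float inbredness(HashMap<String, String[]> releases) {
  HashMap<String, Integer> appearances = new HashMap<String, Integer>();
  int totalCredits = 0;
  for (String[] artists : releases.values()) {
    for (String artist : artists) {
      Integer n = appearances.get(artist);
      appearances.put(artist, n == null ? 1 : n + 1);
      totalCredits++;
    }
  }
  // 0 when every credit is a unique artist, approaching 1 as the same few artists recur.
  return 1.0f - (float) appearances.size() / max(1, totalCredits);
}

void setup() {
  // Placeholder data standing in for scraped releases of one genre.
  HashMap<String, String[]> happyHardcore = new HashMap<String, String[]>();
  happyHardcore.put("Release A", new String[] { "DJ X", "MC Y" });
  happyHardcore.put("Release B", new String[] { "DJ X", "DJ Z" });
  happyHardcore.put("Release C", new String[] { "DJ Z", "MC Y" });
  println("Inbredness: " + inbredness(happyHardcore));
}
```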

–Making Faces
Make some better-looking faces… is there enough technology for procedurally generating realistic faces? Maybe I can take something from http://www.faceresearch.org/demos/average ?
That’s really only the first part. The next part is naming faces!
But the really interesting part comes from the names, and then from combining the data from the faces and the names. What does “John” look like? Take all the faces named John and average them together.
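
Here’s a minimal sketch of that averaging step in Processing, assuming the photos for one name have already been cropped and aligned to the same size (the filenames, folder, and count are placeholders):

```java
// Average all face images with a given name (assumed pre-cropped and aligned to 200x200).
// The file naming scheme and the count are placeholders.
int count = 10;                        // e.g., ten faces named "John"

void setup() {
  size(200, 200);
  float[] r = new float[width * height];
  float[] g = new float[r.length];
  float[] b = new float[r.length];
  for (int i = 0; i < count; i++) {
    PImage face = loadImage("john_" + i + ".png");
    face.resize(width, height);        // safety net if a photo isn't exactly 200x200
    face.loadPixels();
    for (int p = 0; p < r.length; p++) {
      r[p] += red(face.pixels[p]);
      g[p] += green(face.pixels[p]);
      b[p] += blue(face.pixels[p]);
    }
  }
  loadPixels();
  for (int p = 0; p < r.length; p++) {
    pixels[p] = color(r[p] / count, g[p] / count, b[p] / count);  // mean "John"
  }
  updatePixels();
}
```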

–The “Makes you dance and sing” installation thing.
I could definitely make it into a fun and interactive installation. It certainly needs some polishing, but I’d definitely have to think about what I could do to make it a BIG capstone project. This would probably be the easiest capstone to do since I think it already works so well!

3. Activating the Gates Center (GHC) Helix
The Gates Center has this really interesting 630-foot-long spiral walkway at its heart, “the Helix,” which goes from the third to the fifth floor. It’s significantly slower to traverse floors using the Helix than the stairs or elevator, but there is something quietly enjoyable about strolling up or down it. Unfortunately, the Helix is rarely used except by the students who have classes nestled inside of it.
How could I make this space come to life? How could I make this space interesting and fun?
Clearly something that crosses a lot of people’s minds is “rolling” down the helix. It just seems like fun, like a roller coaster! But a new sign forbidding any sort of “rolling” (skateboards, roller blades, etc…) appeared after a friend of mine took a rolling chair from the fifth floor down to the third. Needless to say, he had a blast.
After seeing some of the videos in class on Monday, I really started to think about having objects flow down the helix. What if there were a bunch of bouncy balls flowing downwards? What would happen then if a patron spawns a square block…something that doesn’t roll? Could the bouncy balls get stuck behind the block and slowly start pushing the block downward? What other objects could patrons “spawn?” Could they create environments to appear as they walked down the helix? Could environments create themselves as people walk along? Could the traffic density cause different events?
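
As a toy illustration of the “balls piling up behind a block” idea, here is a tiny Processing sketch that reduces the Helix to one dimension along the ramp. All constants are arbitrary, the block is static, and a real installation would need proper physics and projection mapping.

```java
// Toy 1D simulation of balls rolling down an incline and queuing up behind a block.
// Positions are measured along the ramp; every constant here is arbitrary.
int numBalls = 12;
float[] pos = new float[numBalls];
float[] vel = new float[numBalls];
float ballRadius = 10;
float blockPos = 500;            // a patron-spawned square block that doesn't roll
float gravityAlongRamp = 0.05;

void setup() {
  size(640, 200);
  for (int i = 0; i < numBalls; i++) {
    pos[i] = -i * 40;            // staggered start above the top of the ramp
  }
}

void draw() {
  background(255);
  for (int i = 0; i < numBalls; i++) {
    vel[i] += gravityAlongRamp;
    pos[i] += vel[i];
    // Stop behind the block, or behind the ball ahead.
    float limit = blockPos - ballRadius;
    if (i > 0) limit = min(limit, pos[i - 1] - 2 * ballRadius);
    if (pos[i] > limit) { pos[i] = limit; vel[i] = 0; }
  }
  fill(0);
  rect(blockPos, height / 2 - 15, 30, 30);
  fill(100, 150, 255);
  for (int i = 0; i < numBalls; i++) {
    ellipse(pos[i], height / 2, ballRadius * 2, ballRadius * 2);
  }
}
```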

-Bravia Bouncy balls — flowing down an incline https://www.youtube.com/watch?v=2Bb8P7dfjVw
I’d like to be a part of that, or experience it from the inside. What if this beautiful deluge follows the user?
-http://www.todo.to.it/#projects/ad
-http://www.shpixel.com/blog/?p=3
-in the event that projectors aren’t feasible:
-LED Mesh https://www.youtube.com/watch?v=e_4M9VIhGhk
-Conductive paint to engage surfaces: http://www.newscientist.com/article/dn18066-living-wallpaper-that-devices-can-relate-to.html?full=true&print=true

More thoughts: if projectors are used in places where patrons will walk in front of them, maybe their shadows can contribute to any bounding boxes.

More to post as I explore!

Looking Outwards – Ant Farm

by Max Hawkins @ 11:07 pm 15 March 2010

In choosing the topic for my capstone project, I am inspired by the 1970s San Francisco architecture and art collective Ant Farm. Their work, a series of experimental architecture and film pieces produced between 1971 and 1978, was inspired by the space age, nomadism, and radical counterculture. Their 1974 installation Cadillac Ranch has become an icon of American culture.

I am particularly interested in Ant Farm’s early work. In 1970 the group produced a set of temporary inflatable buildings designed as communal living spaces and performance centers. Politically charged, the inflatables challenged American consumer culture by suggesting a radically different, communal way of living. They could be quickly and inexpensively constructed, moved, and augmented, supporting a nomadic way of life.

Ant Farm was also interested in making architecture accessible to non-experts. In early 1971 they released their “Inflatocookbook”, a primer on inflatable construction with practical tips based on their experiences producing inflatables around the country. The group toured art museums and university campuses teaching people how to make inflatables.

For my capstone, I want to continue Ant Farm’s effort to bring architecture to the masses by creating a 3D inflatable blueprint creation tool. My project will use 3D unwrapping algorithms like those available in Blender to flatten computer-based 3D models and prepare them to be inflated. Later editions could perform calculations to determine the type of fans needed or how to tie down the inflatables.
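
As a very reduced sketch of the flattening step (this lays out a single triangle from its edge lengths alone; a real unwrapping algorithm like Blender’s keeps neighboring triangles attached along shared edges and manages seams), in Processing:

```java
// Lay out a single 3D triangle flat in 2D, preserving its edge lengths.
// A real unwrapping tool would also keep neighboring triangles attached along shared edges.
float[] flattenTriangle(PVector a, PVector b, PVector c) {
  float ab = PVector.dist(a, b);
  float ac = PVector.dist(a, c);
  float bc = PVector.dist(b, c);
  // Place a at the origin and b on the x-axis; position c by the law of cosines.
  float cx = (ab * ab + ac * ac - bc * bc) / (2 * ab);
  float cy = sqrt(max(0, ac * ac - cx * cx));
  return new float[] { 0, 0,  ab, 0,  cx, cy };
}

void setup() {
  size(400, 400);
  background(255);
  noFill();
  // A triangle taken from a hypothetical inflatable's 3D mesh.
  float[] flat = flattenTriangle(new PVector(0, 0, 0),
                                 new PVector(120, 30, 40),
                                 new PVector(20, 140, 60));
  translate(100, 100);
  triangle(flat[0], flat[1], flat[2], flat[3], flat[4], flat[5]);
}
```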

The potential uses for such a tool are nearly unlimited. By lowering the bar for making architecture, the program allows underrepresented groups to create large architectural statements. Inflatables can serve as temporary shelters, canvases for video projection, tools for political dissidents looking to make a dramatic statement, or just a place to hang out. I want the software to be easy enough that ordinary people can use it to experiment with inflatable structures on the weekends—Ant Farm’s dream of architecture for the masses inflating in your back yard.

Sources
http://we-make-money-not-art.com/archives/2008/03/centro-andaluz-de-arte-contemp.php
https://www.flickr.com/photos/chiplord/sets/72157605232897509/
http://on1.zkm.de/zkm/stories/storyReader$4601
http://www.spatialagency.net/database/ant.farm

FaceFlip

by Max Hawkins @ 9:10 pm

FaceFlip from Max Hawkins on Vimeo.

Since Chatroulette is all the rage these days, I decided to freak people out on the website by flipping their faces upside down.

The project was implemented as a plugin for CamTwist, a Mac webcam augmentation application, using OpenCV and Apple’s Core Image.
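
This isn’t the CamTwist plugin code, but the core trick can be illustrated with a short Processing sketch: assuming a face detector has already supplied a bounding rectangle, copy the face region back into the frame with its rows reversed. The filename and rectangle below are placeholders.

```java
// Flip a detected face region upside down within a frame.
// Assumes some face detector (e.g., OpenCV) supplied the bounding box fx, fy, fw, fh.
PImage flipFace(PImage frame, int fx, int fy, int fw, int fh) {
  PImage face = frame.get(fx, fy, fw, fh);
  PImage out = frame.get();
  for (int row = 0; row < fh; row++) {
    // Copy each row of the face into the mirrored row position.
    out.copy(face, 0, row, fw, 1, fx, fy + (fh - 1 - row), fw, 1);
  }
  return out;
}

void setup() {
  size(640, 480);
  PImage frame = loadImage("chatroulette_frame.png");   // placeholder filename
  image(flipFace(frame, 200, 120, 160, 160), 0, 0);     // placeholder face rectangle
}
```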

The source is available on github:
http://github.com/maxhawkins/FaceFlip

Looking outwards: TypeStar

by davidyen @ 4:12 pm

Since I’m doing my final project in the typography and speech visualization space, I thought this project was relevant.

TypeStar by Scott Gamer
He doesn’t say, but I think it uses timed lyric data to create a kinetic typography visualization with a bunch of “preset visualization schemes.” He was asked to make a kinetic type piece (see YouTube) but instead made this software that generates kinetic type pieces.

This seems like a really useful tool, as audio-synced motion pieces can be a real pain to create. Imagine instead just finding the lyrics to a song online and using a simple Processing app to record mouse clicks every time a word is sung, quickly generating a file with a timestamp for each word. With a timestamped lyric file and the mp3, you get an infinite gallery of live visualizations and can see right away what works visually. From there you could make a more fine-tuned “canned” piece with some editing and post-processing work. Or the results could be generative and not canned at all: you could generate timestamped files for a whole playlist and have the visualizations play on a large display at a party or club, and if you wanted to go crazy you could even use Amazon Mechanical Turk to collect large numbers of timestamped lyric files.
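
Here’s a quick sketch of that clicker app in Processing, assuming the lyrics live in a plain text file with one word per line (the filenames are placeholders):

```java
// Tap-along lyric timestamper: click once per sung word while the song plays elsewhere.
// Assumes "lyrics.txt" holds one word per line; output is "word<TAB>milliseconds".
String[] words;
String[] stamped;
int index = 0;
int startTime;

void setup() {
  size(400, 200);
  textAlign(CENTER, CENTER);
  words = loadStrings("lyrics.txt");
  stamped = new String[words.length];
  startTime = millis();
}

void draw() {
  background(0);
  fill(255);
  if (index < words.length) {
    text("Click when you hear:\n" + words[index], width / 2, height / 2);
  } else {
    text("Done - timestamps saved", width / 2, height / 2);
  }
}

void mousePressed() {
  if (index < words.length) {
    stamped[index] = words[index] + "\t" + (millis() - startTime);
    index++;
    if (index == words.length) {
      saveStrings("timestamps.txt", stamped);
    }
  }
}
```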

–edit–
Apparently it uses “UltraStar” text files. I don’t have a clue what those are, but the Processing app I described above would do the same thing.

Looking Outwards–Capstone

by aburridg @ 8:49 am

I want to do a continuation/deeper exploration of the theme I tried to portray in my first project, so I will be doing an information visualization project. I figured looking up some more info-vis art projects would be helpful and inspirational.

Here’s the first one: 10×10

I think we’ve seen a lot of projects like this one in class. The project uses real-time feeds from various news sites to take at most 100 images from these sites, along with a word associated with each picture. I suppose I would be interested in doing something along these lines–maybe have my participants draw what they think a color represents. Another reason I like this project is that it seems like an easy way to get the news all at once (and I know I need help with that since I rarely read the news). However, on that point, I think I would’ve liked this project more if the artist had used headlines or captions associated with the pictures instead of just a word.

Here’s another: 10 Revealing Infographics about the Web

This webpage is very interesting (and funny!): it contains pictures that visualize information about the internet. It’s not interactive, but some of the graphs and map pictures were designed very nicely. I got some good ideas on how to design a good interface from these pictures. I know for my final project I want to display the data in more than one way.

I think we may have seen this in class, but I don’t remember. digg labs/arc

I found this project interesting because of the colors–since I’m going to be working with colors for my capstone project. This project uses colors to distinguish between different digg users though.

Looking outwards

by xiaoyuan @ 8:03 am

http://www.kongregate.com/games/DannySeven/drunken-masters

A fun game where you can be a bartender. Very well-crafted, detailed, and fun. Unique gameplay elements and excellent flair all around. I’ve learned about cocktail mixing from this game. It makes me want to have a drink.

Touchbook

by Michael Hill @ 7:19 am

The underlying idea for my project is to convert a netbook into a small Wacom tablet similar to a Cintiq.  After this, I would like to create different kinds of applications for it.  The end product would be geared towards artists as a digital “sketchbook”.  In the case that I cannot build the hardware itself, I may opt for a more fleshed out version of the software.

While doing my research I stumbled across two different items. The first is called a “Touch Book”. This is basically a tablet PC running a custom operating system on an ARM processor. Hardware like this would be more ideal, but I do not really have the time to start from the ground up.

The second article I found was published this morning on Hack-a-Day’s website. While technically the reverse process of what I’m trying to do, the underlying principle of combining a screen and a scrap Wacom tablet is basically the same.

Looking Outward (Concept Exploration): Photo Petition

by sbisker @ 8:38 pm 14 March 2010

I’ve been thinking about how uber-cheap digital cameras can be used in public spaces for my final project. Mostly, I’ve been thinking about what individuals can do with cameras that they place in public places *themselves*, allowing them to carry on a dialog with others in the community (as opposed to cameras placed by central authorities for surveillance and other “common good” purposes).

Lately I’ve been thinking about interactions where individuals invite members of the community to actively take pictures of themselves (as opposed to taking “surveillance style” photos of people and places going about their normal business.)

One direction I am considering going in with this is the idea of a “photo petition.” Essentially, people are invited to take pictures of themselves rather than simply signing a piece of paper, in order to “put a face” on the people who endorse or oppose a certain side of an issue. This isn’t really a new idea, but one photo petition in particular caught my eye – a project for Amnesty International where those petitioning were asked to hold up messages in the photos, messages chosen from a small selection by the petition signer (it seems individual messages weren’t allowed, perhaps so they wouldn’t all have to be individually read and censored? or perhaps because of literacy issues in the region in question?).

Petitions, at least to me, often feel “empty” – my signature (or headshot) is only worth so much when there are millions of other people on the petition. I think that Amnesty sensed this as well, and they tried to get around it by adding a “personalization” element to their petition. But one person’s opinion, randomly chosen, still feels like just one person’s opinion – and reading one after another begins to feel like a jumble. Somehow the sense of “community” is lost – that is, unless the person collecting signatures explicitly explains what set of people he’s picked the individuals from.

What if we could give some context to these opinions? It seems like one way to do a photo petition would be to break down the petition into the smallest groups possible – and then build a visualization around those groups. What if a camera was placed in every neighborhood at the same time, collecting these opinions in parallel? Say, one at each major street corner. We could see the makeup of who supports it in each region, and see how it changes from region to region. Local decision makers can see a local face on a petition – and more importantly, the context and backgrounds of the photos add to the petition itself (since you’re not just dealing with faces, you’re dealing with pictures of PLACES too – a picture of me on an identified street is different than a picture of me against a black background or a generic street).

If this idea seems like it could be one of a million similar photo community projects, well, that’s intentional. (In particular, it reminds me of StoryCorps, except in this case the StoryCorps van costs $10 and is in many places at once.) I’m trying to break down the “personal public camera” object into a “building block” of sorts, rather than custom-making it for a single project. If you’re going to invest some time in building hardware for a new interaction, it’s worth figuring out what exactly your interaction is capable of in general, in addition to any specific projects you may be thinking of doing with it.

looking-outwards: typorganism

by caudenri @ 7:12 pm

Typorganism (www.typorganism.com) is a robust project containing a series of computational and interactive typography explorations.
typorganism home page

The site design is lovely, and I find many of the projects to be interesting augmentation approaches to type (check out “Motion Sickness”). Some of the projects, such as “Visual Composer,” are quite complex and interesting as standalone pieces. My biggest problem with this piece is that it really doesn’t work well as a unified project. The individual projects within the site don’t have much in common with each other except that they’re about kinetic or interactive typography, so I don’t quite see why they’re grouped this way. Otherwise, most of them are nice inspiration for a variety of type-based interactions.

[Looking-Outwards capstone] Printing: what and how?

by Cheng @ 12:53 pm

We’ve settled on the theme of interactive fabrication. What material we are using, and how it is spilled/dropped/extruded to form a shape, is yet unknown.

A few interesting/informative projects I found when looking around:

Henrik Menne’s machines that make art

Additive 3D output
Subtractive 3D output

Summarizing some extrusion methods I’ve seen:
Pressing gel out of an injector: Fab@home
Make sells them for $20.
I wonder how often you have to refill the injector, and wouldn’t that cause discontinuity in the output?

Heated barrel and nozzle:
RepRap example

Spiral extrusion, for wire-shaped material

And a feeding system; an updated version handles grains


Wax is commercially available in bricks, grains, or columns (candles), so we have to design around that.

In terms of what participants get out of the interaction, if we use wax, we could:
1. Pile wax up on a piece of baking paper and - tada - have a piece of sculpture to take home
2. Somehow add a wick to it and make a candle?
3. Print wax on a piece of cloth and dye it into a tablecloth?
4. Use the wax model to make a more permanent piece?

Looking Outwards – Capstone – Robotic Tattoo Machine

by Karl DD @ 8:42 pm 13 March 2010

For our capstone project we are working with stepper motors to create a 3-Axis print head. There are a lot of things that can be done with such a setup, and in fact even with 2-Axis movement. Below is a Robotic Tattoo Machine named ‘Freddy’ created in 2002 by Niki Passath.

The tattoo designs are generative, and it is apparently hard to do user testing…

“It was a hard job because the only person I could test it on was myself which was painful but a good incentive to get it right as soon as possible.

“He’s an artist of course so he always decides what design the person is going to get, they can’t choose. But I haven’t had any complaints yet.”

There is something interesting about the permanency of the tattoo form. So many computer-generated graphics are disposable. When randomization is involved we cycle through them in search of the ‘perfect’ iteration, but this project commits to one design in an irreversible way.

Looking Outward – Related topic

by kuanjuw @ 6:45 pm

For my term project I am looking for techniques that are used to control multiple objects.

Audience – rAndom international

Royal Opera House: Audience. from rAndom International on Vimeo.

And also, though maybe it is ambitious: school-of-fish behavior.
CEATEC: Nissan robots avoid traffic accidents and congestion

Wavefunction by Rafael Lozano-Hemmer: an array of chairs that move up and down in response to the presence of the public, creating waves that propagate across the exhibition room.

———————————————————————————————————
To control multiple objects I am thinking of using an Arduino with shift registers. But how do the objects communicate with each other (ideally wirelessly)? XBee is one option.
(Overall, the technical difficulty seems to be the main issue for me).

AI Brushes

by xiaoyuan @ 2:05 am 10 March 2010


Intelligent digital paintbrushes using the Boids AI algorithm: 10 customizable brushes exhibiting flocking and goal-seeking behavior.
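
A stripped-down Processing sketch of the same idea (not the original Flash implementation): each “brush” is a boid with separation, cohesion, and a pull toward the mouse as its goal, and it leaves paint wherever it travels.

```java
// Minimal flocking "brushes": boids with separation, cohesion, and goal-seeking toward
// the mouse, leaving a paint trail. A rough sketch, not the original Flash piece.
int n = 10;
PVector[] p = new PVector[n];
PVector[] v = new PVector[n];

void setup() {
  size(600, 400);
  background(255);
  for (int i = 0; i < n; i++) {
    p[i] = new PVector(random(width), random(height));
    v[i] = new PVector(random(-1, 1), random(-1, 1));
  }
}

void draw() {
  for (int i = 0; i < n; i++) {
    PVector steer = new PVector(0, 0);
    PVector center = new PVector(0, 0);
    for (int j = 0; j < n; j++) {
      if (j == i) continue;
      center.add(p[j]);
      PVector away = PVector.sub(p[i], p[j]);
      if (away.mag() < 25) steer.add(away);          // separation from close neighbors
    }
    center.div(n - 1);
    PVector cohesion = PVector.sub(center, p[i]);    // drift toward the flock's center
    cohesion.mult(0.01);
    steer.add(cohesion);
    PVector goal = new PVector(mouseX - p[i].x, mouseY - p[i].y);  // goal-seeking
    goal.mult(0.02);
    steer.add(goal);
    steer.mult(0.1);
    v[i].add(steer);
    v[i].limit(4);
    stroke(50, 50, 200, 80);
    point(p[i].x, p[i].y);                           // leave paint behind
    p[i].add(v[i]);
  }
}
```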

Looking Outwards: LastHistory

by jedmund @ 5:50 pm 6 March 2010

This is why I use last.fm.

LastHistory visualizes your last.fm history. It displays every song you’ve ever scrobbled, and the end result looks like “a decoded genome, and has probably just as much metadata powering it.” If you click on a song and last.fm has the song data, it’ll play, and if you select a time period it’ll play your top tracks from that period. You can also filter by year, time of day, day of the week, month, and pretty much any piece of metadata imaginable. It will also pull in your iPhoto and iCal events and put them on the timeline. You can listen to the songs you might have been listening to during those events, and in the case of an iPhoto event, look at the photos at the same time.

LastHistory – Interactive Visualization of Last.fm Listening Histories and Personal Streams from Frederik Seiffert on Vimeo.

Mac users can go to the GitHub page, download the binary, and try it out themselves. I highly recommend it. This is a real monster of a visualization project, so I thought I’d share. It was done by Frederik Seiffert at the University of Munich as a thesis project.

Project 3: Ghost trails

by rcameron @ 3:39 am

OSX executable (~5MB)

I began playing with modifying the camera feed using openFrameworks and went through a bunch of effects.  I eventually decided to create a ghost trail similar to something envisioned in Mark Weiser’s ‘The Computer for the 21st Century’. The frame is captured when movement is detected through background subtraction. The new frame is then alpha blended with the original frame. The ghost images fade out over time through the alpha blending.
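
Not the author’s openFrameworks code, but here is a rough Processing approximation of the technique (assuming the standard Processing video library, with arbitrary thresholds): grab a snapshot whenever enough pixels change, then alpha-blend the snapshots over the live feed and fade them out as they age.

```java
import processing.video.*;
import java.util.ArrayList;

// Rough approximation of the ghost-trail idea: capture a snapshot when enough pixels
// change, then alpha-blend the snapshots over the live feed, fading them as they age.
Capture cam;
PImage previous;
ArrayList<PImage> ghosts = new ArrayList<PImage>();
ArrayList<Integer> ages = new ArrayList<Integer>();

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    if (previous != null) {
      int changed = 0;
      cam.loadPixels();
      previous.loadPixels();
      for (int i = 0; i < cam.pixels.length; i += 10) {   // sample every 10th pixel
        if (abs(brightness(cam.pixels[i]) - brightness(previous.pixels[i])) > 40) changed++;
      }
      if (changed > 500) {                                // enough motion: keep a ghost
        ghosts.add(cam.get());
        ages.add(0);
      }
    }
    previous = cam.get();
  }
  if (previous == null) return;
  image(previous, 0, 0);
  for (int i = ghosts.size() - 1; i >= 0; i--) {
    int age = ages.get(i) + 1;
    ages.set(i, age);
    int alpha = 150 - age;                                // fade out over time
    if (alpha <= 0) { ghosts.remove(i); ages.remove(i); continue; }
    tint(255, alpha);
    image(ghosts.get(i), 0, 0);
  }
  noTint();
}
```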

I also played around with color averaging in an attempt to generate a rotoscoped effect, which started out like this.

I also implemented Sobel edge detection on top of it and ended up with a Shepard Fairey-ish Obama poster effect.

Project 3: Fireflies

by ryun @ 11:26 am 5 March 2010

IDEA
In this project I focused on the interaction between the screen (display) and everyday objects. In ubiquitous computing, one important aspect is that the system becomes invisible (goes behind the wall), and people do not even notice that their gestures and actions are being tracked by the computer. I wanted to make a simple interaction following this idea: the viewer uses everyday objects, and the system subtly recognizes the behavior and does something (i.e., shows a display accordingly). I had many objects in mind as candidates for this project, such as a laser pointer, a flashlight, a soap bubble stick, a mini fan, a paint brush, and so on. I decided to use a flashlight because it was the simplest way to make this happen, given the time limitation.

PROCESS
I spent a while figuring out how to make this happen. I was thinking about using openFrameworks, but I had no C++ experience, so I decided to use Processing again. Luckily, a Japanese developer had built a library for communication between the Wiimote and an infrared light source via Bluetooth. It was not easy to understand and use at first, but after spending some time with it, it turned out to be a pretty good library for this project.

APPLICATION
I believe this has huge potential. For this project I wanted to show one of its possibilities: an art installation for children. For example, there could be a screen projected onto the ceiling of a children’s museum, and children could use the flashlight to make their own fish or bird (as an avatar) and play with it. In this project I built circles as fireflies chasing the light as an example.
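
A stripped-down sketch of the firefly behavior, with the mouse standing in for the Wiimote-tracked infrared point (the real project gets that point from the Wiimote library instead):

```java
// Fireflies chasing a tracked light point. Here the mouse stands in for the
// infrared spot that the Wiimote library would report.
int numFireflies = 40;
PVector[] pos = new PVector[numFireflies];
PVector[] vel = new PVector[numFireflies];

void setup() {
  size(800, 600);
  for (int i = 0; i < numFireflies; i++) {
    pos[i] = new PVector(random(width), random(height));
    vel[i] = new PVector(0, 0);
  }
  noStroke();
}

void draw() {
  background(0);
  PVector light = new PVector(mouseX, mouseY);   // replace with the Wiimote IR point
  for (int i = 0; i < numFireflies; i++) {
    PVector pull = PVector.sub(light, pos[i]);
    pull.normalize();
    pull.mult(0.3);
    vel[i].add(pull);
    vel[i].add(new PVector(random(-0.2, 0.2), random(-0.2, 0.2)));  // flicker/wander
    vel[i].limit(3);
    pos[i].add(vel[i]);
    fill(255, 240, 120, 180);
    ellipse(pos[i].x, pos[i].y, 6, 6);
  }
}
```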

CONCLUSION
In class, I received good, constructive criticism that the display is not the most effective use of the technology. I should have spent more time working on it. Maybe a color-sucking drawing tool or multi-user interaction would have made this more interesting. I would like to build on the technology I learned here and expand it for my final project.

Source Code (Processing)

fantastic elastic type

by davidyen @ 10:03 pm 3 March 2010

My file is too large to upload so I made this documentation video:

Notes:
I used Processing, Box2D (Dan Shiffman’s pBox2d / Eric Jordan’s jBox2D), and Ricard Marxer’s Geomerative library. The letters are pressurized soft bodies.

I want to do something more with the project someday, like a music video or something. I’ll update this post if I get there.

David

Project 3: You Control Mario

by Nara @ 11:56 am

The Idea

For my project, I was inspired by this augmented reality version of the retro game Paratrooper. My first idea was to create an “augmented reality” 2-player Pong, but I decided not to pursue that because I was worried it had been done many times before and that the Pong implementation would not be challenging enough. Then I started thinking about what else could be done with games, and during my searching I found that there were some remakes of retro games that used camera input as the controls, but often the gestures they were using were not analogous to the gestures of the character in the game. I used that as my jumping-off point and decided I wanted to do something where the player could actually “become” the character, so that when they move, the character moves, and when they jump, the character jumps, etcetera. To make sure that there were analogous movements for all of the game’s controls, the game I decided to implement was Mario.

The Implementation

I knew almost straight away that this project was best implemented in C++ and openFrameworks, both because any OpenCV implementation would likely be much faster, and because there is a much larger library of open source games available for C++. (Golan gave me permission to hack someone else’s game code for this since there realistically was no time to implement Mario from scratch.) I even found a Visual Studio project for a Mario game I wanted to try, but I basically spent all of last Saturday trying to get Visual Studio and openFrameworks to work, to no avail. So, I ended up using Java and Processing for this project, which is one of the reasons why it isn’t as successful as it could’ve been (which I’ll discuss later). The source code for the Mario implementation I used is from here.

The program basically has 3 parts to it: the original Mario source code (which, other than making a couple of variables public, was untouched), a Processing PApplet that sets up an OpenCV camera input and renders it to the screen if called, and then a package of classes for an event listener that I created myself to do some motion detection and then send the right signals to the game to control the character’s movements. In essence, when it detects movement in a certain direction, it’ll tell the game that the corresponding arrow key was pressed so that the character will respond.
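
This isn’t the project’s actual listener code, but here is a small sketch of the idea: track a point in the camera frame (here just the brightest pixel, standing in for the OpenCV face rectangle) and fire synthetic arrow-key presses with java.awt.Robot when it moves far enough. It assumes the standard Processing video library.

```java
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.event.KeyEvent;
import processing.video.*;

// Sketch of the motion-to-keypress idea: track a point in the camera frame and press
// arrow keys when it moves. The brightest pixel stands in for a detected face rectangle.
Capture cam;
Robot robot;
float lastX = -1;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  try {
    robot = new Robot();
  } catch (AWTException e) {
    exit();
  }
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  int brightestIndex = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    if (brightness(cam.pixels[i]) > brightness(cam.pixels[brightestIndex])) brightestIndex = i;
  }
  float x = brightestIndex % cam.width;
  if (lastX >= 0) {
    if (x - lastX > 30) tap(KeyEvent.VK_RIGHT);   // moved one way: press right arrow
    if (lastX - x > 30) tap(KeyEvent.VK_LEFT);    // moved the other way: press left arrow
  }
  lastX = x;
}

void tap(int keyCode) {
  robot.keyPress(keyCode);
  robot.keyRelease(keyCode);
}
```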

The Problems

First of all, the OpenCV library for Processing is pretty bad. It’s not a full implementation (it doesn’t do any real motion detection), the documentation is pretty vague and not helpful, and I even read somewhere that it has a memory leak. Just running OpenCV in a Processing applet has a slight lag. Also, I wanted to use full body tracking for the motion detection (my ultimate goal if I got it to work was to use this implementation with a port of Mario War, a multiplayer version of Mario, although I never got that far) but the body tracker was extremely buggy and would lose the signal very often, so I ended up just using the face detector, which was the least buggy.

Using a combination of the Mario game (which is implemented in a JFrame) and a PApplet together in the same window also doesn’t really work well. I read somewhere that even without OpenCV, the fastest framerate you can get when using both a JFrame and a PApplet together is about 30fps.

Because of the combination of all of these factors, even though the game technically works (it can pick up the movements and Mario will respond accordingly), there is a big lag between when the user moves, the camera detects it, the motion event listener is called to action, and Mario moves — usually at least 1-2 seconds if not longer. The consequence is that the user is forced to try to anticipate what Mario will need to do 2 seconds from now, which on a static level is not too bad, but on a level with a lot of enemies, it’s almost impossible. I still haven’t been able to make it more than 2/3 of the way through a level.

The Merits

Even though my implementation wasn’t working as well as I would’ve liked, I’m still really proud of the fact that I did get it working — I’m pretty sure the problem isn’t so much with the code as it is with the tools (Java and Processing and the OpenCV for Processing library). I know that there’s room for improvement, but I still think that the final product is a lot of fun and it certainly presents itself as an interesting critique of video games. I’m a hardcore gamer myself (PS3 and PC) but sometimes it does bother me that all I’m doing is pressing some buttons on a controller or a keyboard, so the controls are in no way analogous to what my avatar is doing. Hopefully Project Natal and the Sony Motion Controller will be a step in the right direction. I have high hopes for better virtual reality gaming in the future.

The code is pretty large — a good 20-30MB or so, I think — so I’ll post a video, though probably not until Spring Break.
