Problem with OF 62 video player

by chaotic*neutral @ 6:11 pm 25 April 2011

After trying for hours to get ofVideoPlayer to do a simple loadMovie() case switch, I found on the forums that there is a known problem with the 0062 video player.

Therefore, for generative video cuts, I have to roll back to the OF 0061 video player.

http://forum.openframeworks.cc/index.php?topic=5729.0
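For reference, here is a minimal sketch of the kind of case-switch loading I’m after, written against the 006x-era ofVideoPlayer API (the clip paths and the selector are placeholders, not my actual project code):

// Minimal sketch of a loadMovie() case switch with ofVideoPlayer (OF 006x API).
// Clip paths and the selector value are placeholders.
ofVideoPlayer player;

void loadClip(int which) {
    switch (which) {
        case 0:  player.loadMovie("movies/clipA.mov"); break;
        case 1:  player.loadMovie("movies/clipB.mov"); break;
        default: player.loadMovie("movies/clipC.mov"); break;
    }
    player.play();
}

// In testApp::update(): player.idleMovie();   // advance the movie each frame
// In testApp::draw():   player.draw(0, 0);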

SamiaAhmed-Final-LatePhase

by Samia @ 9:55 am

A PDF containing a large number of screenshots.



KinectPortal – Final Check-In

by Ward Penney @ 9:49 am



This is my initial auto thresholding. You can see the depth histogram on the bottom.
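Roughly, the auto-threshold picks a cutoff from that depth histogram. Here is a minimal sketch of the idea, assuming an 11-bit raw depth image; the 5% near-side percentile is an illustrative choice, not my exact setting:

// Sketch: pick a depth threshold from a histogram of raw depth values.
// Assumes 11-bit depth (0..2047, 0 = no reading); the 5% percentile is illustrative.
int autoThreshold(const unsigned short* depthPixels, int width, int height) {
    int histogram[2048] = {0};
    int totalPixels = 0;
    for (int i = 0; i < width * height; i++) {
        unsigned short d = depthPixels[i];
        if (d > 0 && d < 2048) { histogram[d]++; totalPixels++; }
    }
    // Walk the histogram until 5% of the valid pixels fall below the cutoff.
    int accum = 0;
    for (int d = 0; d < 2048; d++) {
        accum += histogram[d];
        if (accum > totalPixels * 0.05) return d;
    }
    return 2047;   // fallback if the image is empty
}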

This one uses ofxControlPanel to allow for adjustment of some settings and a video library.


Le Wei – Final Project Final Update

by Le Wei @ 7:57 am

I had a hard time coming up with a concrete concept for my project, so what I have so far is a bit of a hodge-podge of little exercises I did. I wanted to achieve the effect of finger painting with sound, with different paints representing different sounds. However, I’m having a really hard time using the Maximilian library to make sounds that actually sound good and mix well together. So, as a proof to myself that some reasonable music can be made, I implemented a little keyboard thing and stuck it in as well. I think the project would be immensely better to use with the wireless trackpad, since it’s bigger and you can hold it in your hand, but I haven’t gotten it to work with my program on my computer (although it might work on another computer without a built-in trackpad).

So what I did get done was this:

  • Multi-touch, so different sounds can play at the same time. But the finger tracker is kind of imperfect.
  • Picking up different sounds by dipping your finger in a paint bucket.
  • One-octave keyboard

And what I desperately need to get done for Thursday:

  • Nicer sounds
  • Nicer looks
  • Getting the magic trackpad working
  • A paper(?) overlay on the trackpad so that it’s easier to see where to touch.

 

Special Thanks

Nisha Kurani

Ben Gotow

Project 4: Final Days…

by Ben Gotow @ 3:16 am

For the last couple weeks, I’ve been working on a kinect hack that performs body detection and extracts individuals from the scene, distorts them using GLSL shaders, and pastes them back into the scene using OpenGL multitexturing. The concept is relatively straightforward. Blob detection on the depth image determines the pixels that are part of each individual. The color pixels within the body are copied into a texture, and the non-interesting parts of the image are copied into a second background texture. Since distortions are applied to bodies in the scene, the holes in the background image need to be filled. To accomplish this, the most distant pixel at each point is cached from frame to frame and substituted in when body blobs are cut out.
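The background cache itself is simple: for every pixel, remember the deepest depth value ever seen along with its color, and fill holes from that memory. A rough sketch of the idea (buffer names and types are placeholders, not the actual project code):

// Sketch of the farthest-pixel background cache. Each frame, any pixel that is
// deeper than everything seen before updates the cache; when a body blob is cut
// out of the current frame, the hole is filled from cachedRGB.
// Assumes larger depth values mean farther away and 0 means no reading.
void updateBackgroundCache(const unsigned short* depth, const unsigned char* rgb,
                           unsigned short* cachedDepth, unsigned char* cachedRGB,
                           int width, int height) {
    for (int i = 0; i < width * height; i++) {
        if (depth[i] == 0) continue;              // no reading at this pixel
        if (depth[i] > cachedDepth[i]) {          // farther than anything seen so far
            cachedDepth[i]     = depth[i];
            cachedRGB[i*3 + 0] = rgb[i*3 + 0];
            cachedRGB[i*3 + 1] = rgb[i*3 + 1];
            cachedRGB[i*3 + 2] = rgb[i*3 + 2];
        }
    }
}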

It’s proved difficult to pull out the bodies in color. Because the depth camera and the color camera in the Kinect do not align perfectly, using a depth-image blob as a mask for the color image does not work. On my Kinect, the mask region was off by more than 15 pixels, and color pixels flagged as belonging to a blob might actually be part of the background.

To fix this, Max Hawkins pointed me in the direction of a Cinder project which used OpenNI to correct the perspective of the color image to match the depth image. Somehow, that impressive feat of computer imaging is accomplished with these five lines of code:


// Align depth and image generators
printf("Trying to set alt. viewpoint");
if( g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) )
{
    printf("Setting alt. viewpoint");
    g_DepthGenerator.GetAlternativeViewPointCap().ResetViewPoint();
    if( g_ImageGenerator ) g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint( g_ImageGenerator );
}

I hadn’t used Cinder before, but I decided to migrate the project to it since it seemed to be a much more natural environment for using GLSL shaders. Unfortunately, the Kinect OpenNI drivers in Cinder seemed to be crap compared to the ones in openFrameworks, et al. The console often reported that the “depth buffer size was incorrect” and that the “depth frame is invalid”. Onscreen, the image from the camera flashed, and occasionally frames appeared misaligned or half missing.

I continued fighting with Cinder until last night, when at 10PM I found this video in an online forum:

This video is intriguing because it shows the real-time detection and unique identification of multiple people with no configuration. AKA, it’s hot shit. It turns out the video is made with PrimeSense, the technology used for hand/gesture/person detection on the Xbox.

I downloaded PrimeSense and compiled the samples. Behavior in the above video: achieved. The scene analysis code is incredibly fast and highly robust. It kills the blob detection code I wrote performance-wise, and it doesn’t require that people’s legs intersect with the bottom of the frame (the technique I was using assumed the nearest blob intersecting the bottom of the frame was the user).

I re-implemented the project on top of the PrimeSense sample in C++. I migrated the depth+color alignment code over from Cinder, built a background cache, and rebuilt the display on top of a GLSL shader. Since I was just using Cinder to wrap OpenGL shaders, I decided it wasn’t worth linking it into the sample code. It’s 8 source files and it compiles on the command line. It was ungodly fast. I was in love.

Rather than apply an effect to all the individuals in the scene, I decided it was more interesting to distort one. Since the PrimeSense library assigns each blob a unique identifier, this was an easy task. The video below shows the progress so far. Unfortunately, it doesn’t show off the frame rate, which is a cool 30 or 40fps.
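Selecting which person to distort boils down to testing each pixel against the per-user label map that the scene analysis produces. A rough sketch of that idea (the label buffer here is a stand-in; the real code reads it out of the PrimeSense/OpenNI scene metadata):

// Sketch: build a 0/255 mask for a single user from a per-pixel label map.
// "labels" stands in for the scene-analysis output (one user ID per pixel,
// 0 = background). The mask is uploaded as a texture so the GLSL shader can
// decide which pixels get distorted and which come from the background cache.
void buildUserMask(const unsigned short* labels, unsigned char* mask,
                   int width, int height, unsigned short targetUserID) {
    for (int i = 0; i < width * height; i++) {
        mask[i] = (labels[i] == targetUserID) ? 255 : 0;
    }
}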

My next step is to try to improve the edge of the extracted blob and create more interesting shaders that blur someone in the scene or convert them to “8-bit”. Stay tuned!

Checkpoint 04/25

by Chong Han Chua @ 2:57 am

The previous checkpoint was a reality check, and I scrapped the computer vision project for a continuation of my twingring project.

A short list of things I have to do and where I am:
1. Put on the web
2. Fix bugs
3. Modulate sound to create different voices
4. Do dictionary swaps and replacements of text
5. Switch to real time API and increase filtering options
6. Design and multiple parties

Instead of doing a search, the new option will revolve around looking for hashtags or using a starting message id. With this, we can prototype a play with multiple actors as well as in real time. This would enable twingring to act as a real-life Twitter play of some sort, which should be fun to watch.

On the user interface side, there’ll be some work required to improve the display of messages and the display of users, as well as a way to visualize who is talking and who isn’t. Some other work includes making it robust and possibly porting it to the iPad (probably not).

To check out the current progress, visit www.twingring.com

Meg Richards – Final Project Update

by Meg Richards @ 2:57 am

I’m working on correctly calculating the bounce direction and velocity. Using OpenNI, I track the y position of the base of the neck. With a small time delta, I look at the difference between the two positions to get a reasonable approximation of both direction and velocity. After introducing an actual trampoline, I had to significantly reduce my sampling interval, because I could perform a full bounce and return to a position close to the starting position before the next sample was taken. I haven’t mapped velocity to side-scrolling action yet, so there isn’t much to see, but here’s a picture of a trampoline:
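The estimate itself is just a finite difference over the sampled neck positions. A minimal sketch, with illustrative variable names rather than my actual code:

// Sketch: vertical velocity of the neck base from two consecutive samples.
// The sign gives the bounce direction, the magnitude gives the speed.
float bounceVelocity(float yPrev, float yCurr, float dtSeconds) {
    return (yCurr - yPrev) / dtSeconds;
}

// If dtSeconds is too large, a full bounce can finish between samples and the
// difference collapses toward zero; hence the much smaller sampling interval.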

Bounce bounce bounce.

Three Red Blobs

by ppm @ 2:34 am

I have a Pure Data patch supplying pitch detection to a Java application, which will be drawing and animating fish in a fish tank based on the sounds people make with their voices. These red blobs are the precursors to the fish, where vertical width corresponds to pitch over a one-second recording. I plan to add colors, smooth contours, fins, and googly eyes.

Here is the patch:

I may end up ditching the cell phones. The point of the phone integration was so that many people could interact simultaneously, but now that I’m using Pure Data, which does real-time processing (not exactly what I wanted in the first place), it would be inconvenient to process more than one voice at a time.

Timothy Sherman – Final Project Update

by Timothy Sherman @ 2:26 am

Over this weekend, I’ve succeeded in finishing a basic build of a game using the Magrathea system. The game is for any number of players, who build a landscape so that the A.I.-controlled character (shown above) can walk around collecting flowers and avoiding trolls.

Building this game involved implementing a lot of features: automated systems for moving sprites, keeping track of which sprites exist, and so on. Finding modular, expandable ways to do this was a lot of my recent work. The sprites can now move around the world, avoid stepping into water, display and scale properly, etc.

The design of the game is very simple right now. The hero is extremely dumb – he basically can only try to step towards the flower. He’s smart enough not to walk into water, and not to walk up or down too-steep cliffs, but not to find another path. The troll is pretty dumb too, as he can only step towards the hero (though he can track a moving hero).
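The hero’s ‘brain’ is essentially a greedy step filter, something like the sketch below, where the terrain queries are stand-ins for the Magrathea calls rather than the actual API:

// Sketch of the greedy step: move one tile toward the target unless that tile
// is water or the height change is too steep. There is no pathfinding, so a
// blocked hero simply waits. isWater/heightAt/MAX_STEP are stand-ins.
bool tryStepToward(int& x, int& y, int targetX, int targetY) {
    int nx = x + (targetX > x) - (targetX < x);   // -1, 0, or +1 in x
    int ny = y + (targetY > y) - (targetY < y);   // -1, 0, or +1 in y
    if (isWater(nx, ny)) return false;            // won't walk into water
    int dh = heightAt(nx, ny) - heightAt(x, y);
    if (dh > MAX_STEP || dh < -MAX_STEP) return false;   // cliff too steep
    x = nx; y = ny;
    return true;
}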

I’m not keeping track of any sort of score right now (though I could keep track of how many flowers you’ve collected, or make that affect the world), because I’m concerned about the game eclipsing the rest of the tool, and I think that’s what I’m struggling with now.

Basically, I’m nervous that the ‘game’ isn’t really compelling enough, and that it’s driven the focus away from the fun, interesting part of the project (building terrain) and pushed it into another direction (waiting as some dumb asshole sprite walks across your arm).

That said, I do think watching the sprites move around and grab stuff is fun. But the enemies are too difficult to deal with reliably, and the hero is a little too dumb to trust to do anything correctly, requiring too much constant babysitting.

I also realize that I’ve been super-involved with this for the last 72 hours, so this is totally the time when I need feedback. I think the work I’ve done has gone to good use – I’ve learned how to code behaviors, display sprites better, smooth their movement, ensure they are added onto existing terrain, etc. What I’m trying to decide now is whether I should continue in the direction of refining this gameplay, or make it into more of a sandbox. Here’s the theoretical design of how that could happen (note: all of the framework for this has been implemented, so doing this would mostly require the creation of more graphical elements):

The user (or users) builds terrain, which is populated with a few (3? 4?) characters who wander around until something (food, a flower, each other) catches their attention. When this happens, a thought balloon pops up over their head (or as part of a GUI above the terrain? this would obscure less of the action) indicating their desire for that thing, and they start (dumbly) moving towards it. When they get to it, they do a short animation. Perhaps they permanently affect the world (pick up a flower, then scatter seeds, growing more flowers?).

This may sound very dangerous, or like I’m in a crisis, but what I’ve developed right now is essentially what’s described above with one character, one item, and the presence of a malicious element (the troll). This path would really just be an extension of what I’ve done, in a different direction than the game.

I’m pretty pleased with my progress, and feel that with feedback, I’ll be able to decide which direction to go in. If people want to playtest, please let me know!!

(Also, I realize some sprites (THE HOUSE) are not totally in the same world as everything else yet; it’s a placeholder/an experiment.)

screen shots (click for full):

Final project update

by huaishup @ 11:16 pm 24 April 2011

I am trying to work along my plan and schedule for this final project. Ideally I will finish 3~5 fully functional drum bots to demonstrate the potential combination of music and algorithms.

 

What I have done:

Spent some time redesigning my own Arduino-compatible circuit board, which is smaller than the original one (1 x 1 inch) and has all sockets and electronic parts on board.

Ordered all parts from Digikey last Wed.


Tried a lot, but some problems remain:

The piezo sensor is really fragile and not stable.

Solenoids are either too weak or draw too much current and voltage.

Batteries are always the problem.

Circuit board never shipped.

Sound effects.

 

Expectation:

By Thursday, have a working demo with 3~5 drum boxes.

Software is hard, hardware is HARDER!

Charles Doomany: Final Project Concept: UPDATE

by cdoomany @ 10:22 am 20 April 2011

 

pieces

by susanlin @ 8:57 am

start here

1. color – sepia, minimal
2. lineart – edges are important
3. trails – particles or such


Live Feed, 2-toned



Edge detection, understood and working stand-alone



Combining? Broke it. Inefficient keyboard banging to no avail, eyes bleeding. (This is an overlay Photoshopped to demonstrate the somewhat-desired effect.)



Next: oFlow, learning from this good example found at openprocessing…




Scoping… Make this into one part of a series of learning bits of coding.
Combos in mind include:
1. 2 color + trails
2. 8bit/block people + springs
3. Ballpit people / colorblind test people + boids



Display may be something like this..

final project early phase presentation

by honray @ 8:01 am

Link to demo

Original idea

  • Users each control a blob
  • Blobs can interact with each other
  • PlayStation Home, but with blobs

New Idea

  • Collaborative platformer
  • Blob has to get from point A to point B
  • 2 player game
  • Person 1 is blob
  • Person 2 controls the level
  • P2 controls levers, ramps, mechanics of level
  • Goal is to help p1 pass the level
  • How does p2 help p1 without communicating directly?

What’s been done

  • Box2d up & running
  • Blob mechanics
  • Basic level design
  • Keyboard control of blob

Hurdles

• Collaboration
  • 2 people go to the website and register (PHP)
  • Create a websocket server (Python); each player communicates via web browser (Chrome) and websockets
• Maintaining state via websockets
  • P1 (blob player) is the master and maintains the overall state
  • P2 is the slave
  • P2 (level player) sends level state changes to P1
  • P1 sends blob position/velocity updates to P2
  • Any other ideas on how to do this?

Mark Shuster – Final – Update

by mshuster @ 7:47 am

My project lives as it grows at http://markshuster.com/iacd/4/

Updates soon.

Asa & Caitlin :: We Be Monsters miniupdate :: DOT. PRODUCT. + sound + springs

by Caitlin Boyle @ 7:47 am

We’re not going to have a polished, pretty, finished version of mr. BEHEMOTH kicking around the STUDIO until the show on the 28th, but we’ve managed to make some progress despite the universe’s best efforts to keep us down. With Golan’s help, Asa wrangled the mighty DOT PRODUCT, “an algebraic operation that takes two equal-length sequences of numbers and returns a single number obtained by multiplying corresponding entries and then summing those products.”


In layman’s terms, our program now reads the angles between sets of skeleton joints in 3 dimensions instead of 2. This is preferable for a multitude of reasons, mostly the sheer intuitiveness of the project, which was practically nonexistent in the first iteration: Mr. BEHEMOTH moved as nicely as he did because Asa & I understood the precise way to stand in order to make it not look like our puppet was having a seizure.
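(For the record, the math itself is tiny. Here is a sketch of getting the angle between two 3D joint-to-joint vectors via the dot product, with placeholder vectors rather than the actual skeleton data:)

// Sketch: angle between two 3D vectors via the dot product.
// In the puppet, each vector runs between a pair of tracked skeleton joints.
#include <cmath>

struct Vec3 { float x, y, z; };

float angleBetween(Vec3 a, Vec3 b) {
    float dot  = a.x*b.x + a.y*b.y + a.z*b.z;
    float lenA = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    float lenB = std::sqrt(b.x*b.x + b.y*b.y + b.z*b.z);
    return std::acos(dot / (lenA * lenB));   // radians, in [0, pi]
}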

Now, a person can be much more relaxed when operating the puppet, and the results will be much more fluid.

Caitlin’s in charge of making the BEHEMOTH more dynamic, & created a sound trigger for the star-vomit from our update before last. Now, users can ROAR (or sing, or talk loudly) into a microphone, and the stream of stars will either be tiny or RIDICULOUS depending on the volume and length of the noise.


Golan is trying to talk Caitlin into using this same feature to make the BEHEMOTH poop. Caitlin really doesn’t wanna make it poop. Your thoughts on poop, class?


C is also still kicking around with Box2D to add some springy, physicsy elements to the puppet, but these need some work, as Caitlin reads slowly and is still picking apart Shiffman’s Box2D lectures. Once she gets this down, she’s adding collision detection; the BEHEMOTH should be able to kick the stars, stomp on them, eat them back up, and generally be able to react to the starsoundvomit (that is the technical term).

In short :: it runs nicer now, we have sound vomit, & the general springiness/animation of the puppet is getting a bit of an overhaul.

Is there any particular feature you REALLY want to see on the BEHEMOTH, other than what we have? We’re focusing on just polishing the one puppet, rather than rushing to make three puppets that are just ehhhhh.

Alex Wolfe | Final Project | Checkpoint

by Alex Wolfe @ 7:38 am

So after several miserable knitting machine swatch failures, I’ve come to the conclusion that in order to create a visually appealing pattern, I’m going to have to work forwards and backwards, making sure that the generative pattern actually conforms to more of the “rules” of knitting rather than being purely chaotic.

I found an awesome book, “Creative Knitting: A New Art Form” by Mary Walker Phillips, which is chock-full of stuff like this and really made me want to step up my aesthetic game.

I also need more stitch variation than the previous sketches were producing in order to create something like this: at the very least, eyelets and cables, which need to run in parallel lines. So, taking a suggestion from Golan, I started playing with reaction diffusion that can be spawned off of underlying layers of different equations, in order to create more variety in the stitch pattern and also more interesting forms. I still like the idea of using it to “grow” your own skin, since the same algorithm is used to produce so many different patterns in nature (often for pelts that we kill for and make into clothing anyway). As far as the software end goes, I’m aiming for an interactive app that takes some user input into the system and generates the pattern for a swatch.
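For the curious, the core update of a reaction-diffusion system is only a few lines. Below is a minimal sketch of one Gray-Scott style step (one common formulation; I’m not claiming it matches the exact system in my sketches, and the feed/kill rates are typical textbook values rather than anything I’ve tuned):

// Sketch of one Gray-Scott reaction-diffusion step over two chemical fields,
// u and v, stored as width*height float arrays. Du/Dv/F/K are typical values.
void grayScottStep(const float* u, const float* v, float* uNext, float* vNext,
                   int width, int height) {
    const float Du = 0.16f, Dv = 0.08f, F = 0.035f, K = 0.065f, dt = 1.0f;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int i = y * width + x;
            // 5-point Laplacian of each field
            float lapU = u[i-1] + u[i+1] + u[i-width] + u[i+width] - 4.0f * u[i];
            float lapV = v[i-1] + v[i+1] + v[i-width] + v[i+width] - 4.0f * v[i];
            float uvv  = u[i] * v[i] * v[i];                  // reaction term
            uNext[i] = u[i] + dt * (Du * lapU - uvv + F * (1.0f - u[i]));
            vNext[i] = v[i] + dt * (Dv * lapV + uvv - (F + K) * v[i]);
        }
    }
}
// Seed u to 1.0 and v to 0.0 everywhere, drop a few patches of v = 1.0, then
// call this in a loop (swapping the buffers) and map u or v to stitch types.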

So here are some initial attempts in processing…

But really, it’s that time again where Processing reaches its limit and it’s time to port over to Cinder. Fortunately, Robert Hodgin and rdex-fluxus had some sample shader scripts online specifically for reaction diffusion, so it was much easier to get started than I initially feared.

 

Andrew from the Museum of Arts and Design got back to me as well. Unfortunately his machine doesn’t do cables, but I could try substituting different kinds of stitches to mimic the effect.

In the meantime, I bought a super simple Singer LK off of Craigslist, and after a pretty rocky learning curve, it’s actually pretty simple to generate a lot of fabric quickly by hand. Also, you can pretty much do any kind of stitch on it (if you’re willing to do the extra work to set it up). So, since I’m trying to suppress the gnawing fear that the whole plan of sending out my pattern and getting it nicely printed and sent back in time for the show is going to blow up in my face, it’s reassuring to know that if I had to, I’m pretty sure I could generate at least swatches myself.

Marynel Vázquez – More on people’s opinion

by Marynel Vázquez @ 7:04 am

Some additional cheap tasks

I collected ~600 statements about robots in the future (see previous post). Now I asked people to judge these statements as follows:

  • This is good for our future {agree/disagree in a 7-point scale}
  • The statement makes me feel … {anger, disgust, fear, happiness, sadness, surprise}
  • I foresee a bad future if the statement becomes true {agree/disagree in a 7-point scale}

This was fast! I collected 5 responses from different workers for each set of answers (paying $0.01 per HIT).

And let people draw!

I asked people to hand draw a robot. The robot had to be designed according to one of the statements and, once drawn, the workers had to upload a picture of it to MTurk. The next picture shows the type of responses I got when paying $0.05 per drawing:



Instead of asking people to take a picture, I set up a website using Google App Engine to let them draw this time. The good side of letting people draw with the mouse is that they are obligated to make a composition, and that each drawing is going to be unique at the end. The bad side to me is the more mouse-style kind of drawing that results…

I made the drawing application using processing.js:

The tricky part consists of converting what people draw with the Processing script into an image and pushing it to the server. The way I did it was using HTML5 and jQuery:

<div style="float: left; width: 600px">
<!-- This is the canvas where people draw, managed by processing.js -->
<canvas data-processing-sources="/js/processing/draw-robot.pjs"></canvas>
</div>
 
<!-- This is the form that allows to submit the drawing -->
<div style="float: left;">
<form id="canvasForm" method="post" action="/draw">
<input type="hidden" name="idea" id="idea" value="{{ idea }}"/>
<div><label>Name your robot:</label></div>
<div><textarea name="robotname" id="robotname" rows="1" cols="40"></textarea></div>
<div>
<p>
Click the 'Save Drawing' button after you have drawn the robot.<br/>
Make sure it looks as required.
</p>
</div>
<div><input type="submit" value="Save Drawing" name="canvasFormSaveButton"/></div>
</form>
 
<div id="result"></div>
</div>
 
<!-- When the input button of the form is pressed, we prevent normal execution -->
<!-- convert the canvas to an image using the function toDataURL() and, finally, -->
<!-- submit the POST request -->
<script type="text/javascript">
 
$("#canvasForm").submit(function(event) {
 
event.preventDefault(); 
 
$("#result").html( "Processing... Don't reload or close the page." );
 
var $form = $( this );
var ideaval = $form.find( 'input[name="idea"]' ).val();
var robotnameval = $form.find( 'input[name="robotname"]' ).val();
 
$.post( "/draw", 
{ 
  idea: ideaval,
  robotname: robotnameval,
  format: "png",
  img: document.getElementsByTagName("canvas")[0].toDataURL()
},
      function( data ) {
          var content = $( data );
          $("#result").html( content );
      }
    );
});
</script>

This script doesn’t work very well with Internet Explorer, so I also redirect the user to an error page if IE is detected:

<script>
  if ($.browser.msie){
  window.location.replace("/draw");
  }
</script>

The backend of the app runs in Python:

# Imports assumed for this excerpt (standard library plus the App Engine SDK);
# RobotPic is the datastore model defined elsewhere in the app.
import re
import base64
from google.appengine.ext import webapp
from google.appengine.ext import db

class DrawRobot(webapp.RequestHandler):
    dataUrlPattern = re.compile('data:image/(png|jpeg);base64,(.*)$')
 
    def post(self):
        robot = RobotPic()
        robot.idea = int(self.request.get("idea"))
        robot.name = self.request.get("robotname")
        robot.format = self.request.get("format")
        img = self.request.get("img")
 
        imgb64 = self.dataUrlPattern.match(img).group(2)
        if imgb64 is not None and len(imgb64) > 0:
            robot.image = db.Blob(base64.b64decode(imgb64))
            robot.put()
            robotid = robot.key().id();
            self.response.out.write("""
<div id="content">
<p>
Thanks. Your drawing was saved.<br/><u>Your identifier for MTurk is  
<span style="color:#FF0033">%s</span>.</u>
</p>
</div>""" % (robotid.__str__()));
 
        else:
            self.response.out.write("<div id='content'>Nope. Didn't work!</div>");

Here are some of the drawings I got by paying $0.03:



Eric Brockmeyer – Final Project Update – CNC Food

by eric.brockmeyer @ 6:52 am

CNC MERINGUE – FAIL

Mixing meringue.


Attempt at pushing and pulling meringue peaks.


Various tooling attempts, some 3d printed, some household utensils.


Unfortunate results…

CNC Meringue from eric brockmeyer on Vimeo.

CNC M AND M’S

M and M pattern.


M and M setup, including CNC machine, hacked vacuum cleaner, and LinuxCNC controller.


Pick and place M and M machine.


Results.

CNC M and M’s from eric brockmeyer on Vimeo.

shawn sims-final project so very close

by Shawn Sims @ 2:12 am

Here is a brief update on Interactive Robotic Fabrication… These are a few screenshots of the RAPID code running in Robot Studio, which is receiving quaternion coordinates from openFrameworks via an open socket connection over wifi. Currently the OpenNI library is reading the skeleton/gesture of the right arm and producing a vector that continues the line from the neck to the arm. This gives the robot an orientation that is relative to the user’s position.
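The orientation math itself is small: turn that neck-to-arm direction into a quaternion before it goes out over the socket. Here is a rough sketch of that step using plain structs (the joint positions and the reference axis are placeholders; in the project they come from the OpenNI skeleton):

// Sketch: quaternion that rotates a reference axis onto the neck-to-hand
// direction, ready to be formatted into a RAPID robtarget orientation.
// The degenerate case (vectors exactly opposite) is ignored for brevity.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    Vec3 n = { a.x/len, a.y/len, a.z/len };
    return n;
}

// q = [1 + dot(from,to), cross(from,to)], normalized; from and to must be unit vectors.
Quat rotationBetween(Vec3 from, Vec3 to) {
    Quat q;
    q.w = 1.0f + from.x*to.x + from.y*to.y + from.z*to.z;
    q.x = from.y*to.z - from.z*to.y;
    q.y = from.z*to.x - from.x*to.z;
    q.z = from.x*to.y - from.y*to.x;
    float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= len; q.x /= len; q.y /= len; q.z /= len;
    return q;
}

// Usage sketch: Vec3 d = { hand.x - neck.x, hand.y - neck.y, hand.z - neck.z };
//               Quat q = rotationBetween(toolAxis, normalize(d));   // then send q over the socket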

Setup begins with OpenNI and ofx, which produce the RAPID code to send to Robot Studio…

making the socket connection…

The socket connection is made to a specific IP address. This may be interesting because the user could be anywhere within the same wifi network while controlling a robot someplace else…

Once the socket connection has been made, the port for receiving the coordinates begins to listen… This is a view of the Virtual Controller, which mimics the physical one.

Last is a short video of actual live control of the robot simulation via the process described above.

Plan for interaction….
I have ordered 50 lbs of clay and lots of foam to experiment with the kneading/poking of clay. The tool that will be used is like a pyramid, which will yield a pattern something like Aranda Lasch furniture.

final project – an update – to be worked on during carnival

by susanlin @ 6:16 pm 13 April 2011

The good news is that I’m still obsessed, enamored, and re-watching the same inspiration animation over and over. The bad news is, everything (else) has been seriously overwhelming and I haven’t been able to reciprocate the love. Not going to lie about it.

Just thinking/researching about the pieces which will make this work, keeping in mind that half the battle is learning the algorithm and learning to code better. Will also try to leverage a strength to make it come together: visuals.

[ ] Optical Flow

Optical Flow.

w/ Distortion.

[ ] Edge Detection

Edges in action.
Reading: Other Methods of Edge Detection, Edge Detection via Processing

[ ] Sepia Colorization
TBD
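(For the sepia piece, the usual starting point is a fixed per-pixel color matrix; here is a quick sketch of that, with the commonly cited weights rather than anything tuned for this project:)

// Sketch: standard sepia transform applied per pixel to an interleaved RGB image.
void toSepia(unsigned char* rgb, int width, int height) {
    for (int i = 0; i < width * height; i++) {
        float r = rgb[i*3 + 0], g = rgb[i*3 + 1], b = rgb[i*3 + 2];
        float sr = 0.393f*r + 0.769f*g + 0.189f*b;
        float sg = 0.349f*r + 0.686f*g + 0.168f*b;
        float sb = 0.272f*r + 0.534f*g + 0.131f*b;
        rgb[i*3 + 0] = (unsigned char)(sr > 255.0f ? 255.0f : sr);
        rgb[i*3 + 1] = (unsigned char)(sg > 255.0f ? 255.0f : sg);
        rgb[i*3 + 2] = (unsigned char)(sb > 255.0f ? 255.0f : sb);
    }
}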
