Problem with OF 62 video player

by chaotic*neutral @ 6:11 pm 25 April 2011

After trying for hours to get ofMoviePlayer to do a simple loadMovie case switch, I found on the forums that there is a problem with the 0.62 video player.

Therefore, for generative video cuts, I have to roll back to the OF61 video player.

http://forum.openframeworks.cc/index.php?topic=5729.0
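For context, here is a minimal sketch of the kind of loadMovie case switch in question (clip paths are placeholders; this assumes the 0.61-era ofVideoPlayer API with idleMovie()):

#include "ofMain.h"
#include "ofAppGlutWindow.h"

// Minimal sketch of a loadMovie case switch (the pattern that works in 0.61).
// Clip file names are placeholders.
class testApp : public ofBaseApp {
public:
    ofVideoPlayer player;

    void setup() {
        player.loadMovie("movies/clip_a.mov");
        player.play();
    }

    void update() {
        player.idleMovie(); // 0.61-era call to advance the movie
    }

    void draw() {
        player.draw(0, 0);
    }

    void keyPressed(int key) {
        // generative cuts: jump to a different clip per key
        switch (key) {
            case '1': player.loadMovie("movies/clip_a.mov"); player.play(); break;
            case '2': player.loadMovie("movies/clip_b.mov"); player.play(); break;
            case '3': player.loadMovie("movies/clip_c.mov"); player.play(); break;
        }
    }
};

int main() {
    ofAppGlutWindow window;
    ofSetupOpenGL(&window, 640, 480, OF_WINDOW);
    ofRunApp(new testApp());
}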

SamiaAhmed-Final-LatePhase

by Samia @ 9:55 am

A pdf! Containing a large number of screenshots



KinectPortal – Final Check-In

by Ward Penney @ 9:49 am



This is my initial auto thresholding. You can see the depth histogram on the bottom.

This one uses ofxControlPanel to allow for adjustment of some settings and a video library.
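Roughly, an auto threshold can be pulled out of a depth histogram like this (a hedged sketch, not necessarily the method used here; the peak choice and margin are placeholder assumptions):

#include <algorithm>

// Sketch: pick near/far depth thresholds automatically from a histogram of an
// 8-bit depth image (as e.g. ofxKinect provides). The +/- margin and the
// "strongest non-zero peak is the subject" assumption are placeholders.
void autoThreshold(const unsigned char* depthPixels, int w, int h,
                   int& nearThresh, int& farThresh) {
    int histogram[256] = {0};
    for (int i = 0; i < w * h; i++) {
        histogram[depthPixels[i]]++;
    }

    // Skip bin 0 (no depth reading) and find the strongest peak.
    int peakBin = 1;
    for (int b = 2; b < 256; b++) {
        if (histogram[b] > histogram[peakBin]) peakBin = b;
    }

    // Threshold a fixed margin around that peak (brighter = closer).
    const int margin = 20;
    nearThresh = std::min(255, peakBin + margin);
    farThresh  = std::max(1,   peakBin - margin);
}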


Meg Richards – Final Project Update

by Meg Richards @ 2:57 am

I’m working on correctly calculating the bounce direction and velocity. Using OpenNI, I track the y position of the base of the neck. With a small time delta, I look at the difference between the two positions to get a reasonable approximation of both direction and velocity. After introducing an actual trampoline, I had to significantly reduce my periodic sampling because I could perform a full bounce and return to a position close to the source position before the sample would be taken. I haven’t mapped velocity to sidescrolling action, so there isn’t much to see, but here’s a picture of a trampoline:

Bounce bounce bounce.
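The approximation boils down to a finite difference on the sampled y position (a sketch; the sampling interval and joint units are assumptions):

// Sketch of the velocity approximation: sample the neck's y position at a
// fixed interval and difference it. The interval value is a placeholder.
float lastY = 0;
float lastSampleTime = 0;
const float sampleInterval = 0.05f; // seconds; small enough not to miss a bounce

void sampleBounce(float neckY, float now, float& velocityY, int& direction) {
    if (now - lastSampleTime < sampleInterval) return;

    float dt = now - lastSampleTime;
    velocityY = (neckY - lastY) / dt;      // approximate vertical velocity
    direction = (velocityY > 0) ? 1 : -1;  // rising or falling

    lastY = neckY;
    lastSampleTime = now;
}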

Timothy Sherman – Final Project Update

by Timothy Sherman @ 2:26 am

Over this weekend, I’ve succeeded in finishing a basic build of a game using the Magrathea system. The game is for any number of players, who build a landscape so that the A.I.-controlled character (shown above) can walk around collecting flowers and avoiding trolls.

Building this game involved implementing a lot of features – automated systems for moving sprites, keeping track of what sprites exist, etc. – and finding modular, expandable ways to do this was a lot of my recent work. The sprites can now move around the world, avoid stepping into water, display and scale properly, etc.

The design of the game is very simple right now. The hero is extremely dumb – he basically can only try to step towards the flower. He’s smart enough not to walk into water, and not to walk up or down too-steep cliffs, but not to find another path. The troll is pretty dumb too, as he can only step towards the hero (though he can track a moving hero).

I’m not keeping track of any sort of score now (though I could keep track of how many flowers you’ve collected, or make that affect the world), because I’m concerned about the game eclipsing the rest of the tool, and I think that’s what I’m struggling with now.

Basically, I’m nervous that the ‘game’ isn’t really compelling enough, and that it’s driven the focus away from the fun, interesting part of the project (building terrain) and pushed it into another direction (waiting as some dumb asshole sprite walks across your arm).

That said, I do think watching the sprites move around and grab stuff is fun. But the enemies are too difficult to deal with reliably, and the hero is a little too dumb to trust to do anything correctly, requiring too much constant babysitting.

I also realize that I’ve been super-involved with this for the last 72 hours, so this is totally the time when I need feedback. I think the work I’ve done has gone to good use – I’ve learned how to code behaviors, display sprites better, smooth their movement, ensure they are added onto existing terrain, etc. What I’m trying to decide now is if I should continue in the direction of refining this gameplay, or make it into more of a sandbox. Here’s the theoretical design of how that could happen (note: all of the framework for this has been implemented, so doing this would mostly require the creation of more graphical elements):

The user(s) builds terrain, which is populated with a few (3? 4?) characters who wander around until something (food, a flower, each other) catches their attention. When this happens, a thought balloon pops up over their head (or as part of a GUI above the terrain? this would obscure less of the action) indicating their desire for that thing, and they start (dumbly) moving towards it. When they get to it, they do a short animation. Perhaps they permanently affect the world (pick up a flower then scatter seeds, growing more flowers?)

This may sound very dangerous or like I’m in a crisis, but what I’ve developed right now is essentially what’s described above, with one character, one item, and the presence of a malicious element (the troll). So this path would really just be an extension of what I’ve done, but in a different direction than the game.

I’m pretty pleased with my progress, and feel that with feedback, I’ll be able to decide which direction to go in. If people want to playtest, please let me know!!

(also, I realize some sprites (THE HOUSE) are not totally in the same world as everything else yet; it’s a placeholder/an experiment)

screen shots (click for full):

Final project update

by huaishup @ 11:16 pm 24 April 2011

I am trying to follow my plan and schedule for this final project. Ideally I will finish 3~5 fully functional drum bots to demonstrate the potential combination of music and algorithm.

 

What I have done:

Spent some time redesigning my own Arduino-compatible circuit board, which is smaller than the original one (1 x 1 inch) and has all sockets and electronic parts on board.

Ordered all parts from Digikey last Wed.


Tried a lot, but some problems remain:

Piezo sensors are really fragile and not stable.

Solenoids are either too weak or eat too much current & voltage.

Batteries are always the problem.

Circuit board never shipped.

Sound effects.

 

Expectation:

By Thursday, have a working demo with 3~5 drum boxes.

Software is hard, hardware is HARDER!

pieces

by susanlin @ 8:57 am 20 April 2011

start here

1. color – sepia, minimal
2. lineart – edges are important
3. trails – particles or such


Live Feed, 2-toned



Edge detection, understood and working stand-alone



Combining? Broke it. Inefficient keyboard banging to no avail, eyes bleeding. (This is an overlay Photoshopped to demonstrate the somewhat desired effect.)



Next: oFlow, learning from this good example found at openprocessing…




Scoping… Make this into one part in a series of learning bits of coding.
Combos in mind include:
1. 2 color + trails
2. 8bit/block people + springs
3. Ballpit people / colorblind test people + boids



Display may be something like this..

final project early phase presentation

by honray @ 8:01 am

Link to demo

Original idea

  • Users each control a blob
  • Blobs can interact with each other
  • Playstation Home, but with blobs

New Idea

  • Collaborative platformer
  • Blob has to get from point A to point B
  • 2 player game
  • Person 1 is blob
  • Person 2 controls the level
  • P2 controls levers, ramps, mechanics of level
  • Goal is to help p1 pass the level
  • How does p2 help p1 without communicating directly?

What’s been done

  • Box2d up & running
  • Blob mechanics
  • Basic level design
  • Keyboard control of blob

Hurdles

• Collaboration
  • 2 people go to website and register (php)
  • Create websocket server (python), each player communicates via web browser (chrome) and websockets
• Maintaining state via websockets
  • P1 (blob player) is master, maintains overall state
  • P2 is slave
  • P2 (level player) sends level state changes to P1
  • P1 sends blob position/velocity updates to P2
  • Any other ideas on how to do this?

Mark Shuster – Final – Update

by mshuster @ 7:47 am

My project lives as it grows at http://markshuster.com/iacd/4/

Updates soon.

Asa & Caitlin :: We Be Monsters miniupdate :: DOT. PRODUCT. + sound + springs

by Caitlin Boyle @ 7:47 am

We’re not going to have a polished, pretty, finished version of mr. BEHEMOTH kicking around the STUDIO until the show on the 28th, but we’ve managed to make some progress despite the universe’s best efforts to keep us down. With Golan’s help, Asa wrangled the mighty DOT PRODUCT, “an algebraic operation that takes two equal-length sequences of numbers and returns a single number obtained by multiplying corresponding entries and then summing those products.”


In layman’s terms, our program now reads the angles between sets of skeleton joints in 3 dimensions, instead of 2. This is preferable for a multitude of reasons; mostly for the sheer intuitive nature of the project, which was practically nonexistent in the first iteration: Mr. BEHEMOTH moved as nicely as he did because Asa & I understood the precise way to stand in order to make it not look like our puppet was having a seizure.

Now, a person can be much more relaxed when operating the puppet, and the results will be much more fluid.
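For the curious, the dot-product step amounts to something like this (a sketch; the joint positions would come from the OpenNI skeleton):

#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch of the dot-product step: the angle at joint B between the bones
// B->A and B->C, in 3D, which then drives the corresponding puppet joint.
float angleAtJoint(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 u = { a.x - b.x, a.y - b.y, a.z - b.z };
    Vec3 v = { c.x - b.x, c.y - b.y, c.z - b.z };

    float dot  = u.x * v.x + u.y * v.y + u.z * v.z;
    float lenU = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
    float lenV = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

    // cos(theta) = (u . v) / (|u||v|); clamp for numerical safety.
    float cosTheta = dot / (lenU * lenV);
    if (cosTheta >  1.0f) cosTheta =  1.0f;
    if (cosTheta < -1.0f) cosTheta = -1.0f;
    return std::acos(cosTheta); // radians
}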

Caitlin’s in charge of making the BEHEMOTH more dynamic, & created a sound trigger for the star-vomit from our update before last. Now, users can ROAR (or sing, or talk loudly) into a microphone, and the stream of stars will either be tiny or RIDICULOUS depending on the volume and length of the noise.
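The sound trigger is essentially: compute a smoothed level from the mic buffer in the audio callback and scale the number of stars emitted by it (a rough sketch of the usual openFrameworks testApp callbacks, assuming ofSoundStreamSetup(0, 1, this) in setup(); the threshold, scaling, and emitStar() are placeholders):

// Sketch of a ROAR-driven star stream. Threshold/scaling numbers and
// emitStar() are placeholders.
float micLevel = 0;
void emitStar(); // placeholder: spawns one star particle, defined elsewhere

void testApp::audioReceived(float* input, int bufferSize, int nChannels) {
    float sum = 0;
    for (int i = 0; i < bufferSize; i++) {
        sum += input[i] * input[i];
    }
    // smooth the RMS so short blips don't spike the stream
    micLevel = 0.9f * micLevel + 0.1f * sqrtf(sum / bufferSize);
}

void testApp::update() {
    if (micLevel > 0.05f) { // quiet talking barely triggers it
        int starsToEmit = (int) ofMap(micLevel, 0.05f, 0.4f, 1, 40, true);
        for (int i = 0; i < starsToEmit; i++) {
            emitStar(); // louder and longer roars -> RIDICULOUS stream
        }
    }
}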


Golan is trying to talk Caitlin into using this same feature to make the BEHEMOTH poop. Caitlin really doesn’t wanna make it poop. Your thoughts on poop, class?


C is also still kicking around with Box2D to add some springy, physicsy elements to the puppet, but these need some work, as Caitlin reads slowly and is still picking apart Shiffman’s Box2D lectures. Once she gets this down, she’s adding collision detection; the BEHEMOTH should be able to kick the stars, stomp on them, eat them back up, and generally be able to react to the starsoundvomit (that is the technical term).

In short :: it runs nicer now, we have sound vomit, & the general springiness/animation of the puppet is getting a bit of an overhaul.

Is there any particular feature you REALLY want to see on the BEHEMOTH, other than what we have? We’re focusing on just polishing the one puppet, rather than rushing to make three puppets that are just ehhhhh.

Alex Wolfe | Final Project | Checkpoint

by Alex Wolfe @ 7:38 am

So after several miserable knitting machine swatch failures, I’ve come to the conclusion that in order to create a visually appealing pattern, I’m going to have to work forwards and backwards, making sure that the generative pattern actually conforms to more “rules” of knitting rather than being purely chaotic.

I found an awesome book, “Creative Knitting: A New Art Form” by Mary Walker Phillips, which is chock-full of stuff like this and really made me want to step up my aesthetic game.

I also need more stitch variation than the previous sketches were producing in order to create something like this: at the very least eyelets and cables, which need to run in parallel lines. So, taking a suggestion from Golan, I started playing with reaction diffusion that can be spawned off of underlying layers of different equations, in order to create more variety in stitch pattern and also more interesting forms. I still like the idea of using it to “grow” your own skin, since the same algorithm is used to produce so many different patterns in nature (often for pelts that we kill for and make into clothing anyway). On the software end, I’m aiming for an interactive app that takes some user input into the system and generates the pattern for a swatch.

So here are some initial attempts in processing…

But really it’s that time again when Processing reaches its limit and it’s time to port over to Cinder. Fortunately, Robert Hodgin and rdex-fluxus had some sample shader scripts online specifically for reaction diffusion, so it was much easier to get started than I initially feared.
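For reference, one CPU update step of Gray-Scott reaction diffusion comes down to this (a sketch; the feed/kill/diffusion constants are just common example values, and the real thing would run as a shader):

#include <vector>

// Sketch of one Gray-Scott update step on a w x h grid of U/V concentrations.
// Du, Dv, F, k and dt are example values, not tuned for any particular swatch.
void grayScottStep(std::vector<float>& U, std::vector<float>& V,
                   int w, int h,
                   float Du = 0.16f, float Dv = 0.08f,
                   float F = 0.035f, float k = 0.065f, float dt = 1.0f) {
    std::vector<float> Unew(U), Vnew(V);
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x;
            // 4-neighbour Laplacian
            float lapU = U[i - 1] + U[i + 1] + U[i - w] + U[i + w] - 4 * U[i];
            float lapV = V[i - 1] + V[i + 1] + V[i - w] + V[i + w] - 4 * V[i];
            float uvv = U[i] * V[i] * V[i];
            Unew[i] = U[i] + dt * (Du * lapU - uvv + F * (1 - U[i]));
            Vnew[i] = V[i] + dt * (Dv * lapV + uvv - (F + k) * V[i]);
        }
    }
    U.swap(Unew);
    V.swap(Vnew);
}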

 

Andrew from the Museum of Arts and Design got back to me as well. Unfortunately his machine doesn’t do cables, but I could try substituting different kinds of stitches to mimic the effect.

In the meantime, I bought a super simple SingerLK off of craigslist, and after a pretty rocky learning curve, it’s actually pretty simple to generate a lot of fabric quickly by hand. Also, you can pretty much do any kind of stitch on it (if you’re willing to do the extra work to set it up). Since I’m trying to suppress the gnawing fear that the whole send-my-pattern-out-and-get-it-nicely-printed-and-sent-back-in-time-for-the-show plan is going to blow up in my face, it’s reassuring to know that if I had to, I’m pretty sure I could generate at least the swatches myself.

Marynel Vázquez – More on people’s opinion

by Marynel Vázquez @ 7:04 am

Some additional cheap tasks

I collected ~600 statements about robots in the future (see previous post). Now I’ve asked people to judge these statements as follows:

  • This is good for our future {agree/disagree in a 7-point scale}
  • The statement makes me feel … {anger, disgust, fear, happiness, sadness, surprise}
  • I foresee a bad future if the statement becomes true {agree/disagree in a 7-point scale}

This was fast! I collected 5 responses from different workers for each set of answers (paying $0.01 per HIT).

And let people draw!

I asked people to hand draw a robot. The robot had to be designed according to one of the statements and, once drawn, the workers had to upload a picture of it to MTurk. The next picture shows the type of responses I got when paying $0.05 per drawing:



Instead of asking people to take a picture, I set up a website using Google App Engine to let them draw this time. The good side of letting people draw with the mouse is that they are obliged to make a composition, and that each drawing ends up being unique. The bad side, to me, is the more mouse-style kind of drawing that results…

I made the drawing application using processing.js:

The tricky part consists of converting what people draw with the Processing script into an image, and pushing that to the server. The way I did it was using HTML5 and jQuery:

<div style="float: left; width: 600px">
<!-- This is the canvas where people draw, managed by processing.js -->
<canvas data-processing-sources="/js/processing/draw-robot.pjs"></canvas>
</div>
 
<!-- This is the form that allows to submit the drawing -->
<div style="float: left;">
<form id="canvasForm" method="post" action="/draw">
<input type="hidden" name="idea" id="idea" value="{{ idea }}"/>
<div><label>Name your robot:</label></div>
<div><textarea name="robotname" id="robotname" rows="1" cols="40"></textarea></div>
<div>
<p>
Click the 'Save Drawing' button after you have drawn the robot.<br/>
Make sure it looks as required.
</p>
</div>
<div><input type="submit" value="Save Drawing" name="canvasFormSaveButton"/></div>
</form>
 
<div id="result"></div>
</div>
 
<!-- When the input button of the form is pressed, we prevent normal execution -->
<!-- convert the canvas to an image using the function toDataURL() and, finally, -->
<!-- submit the POST request -->
<script type="text/javascript">
 
$("#canvasForm").submit(function(event) {
 
event.preventDefault(); 
 
$("#result").html( "Processing... Don't reload or close the page." );
 
var $form = $( this );
var ideaval = $form.find( 'input[name="idea"]' ).val();
var robotnameval = $form.find( 'input[name="robotname"]' ).val();
 
$.post( "/draw", 
{ 
  idea: ideaval,
  robotname: robotnameval,
  format: "png",
  img: document.getElementsByTagName("canvas")[0].toDataURL()
},
      function( data ) {
          var content = $( data );
          $("#result").html( content );
      }
    );
});
</script>

This script doesn’t work well with Internet Explorer, so I also redirect the user to an error page if IE is detected:

<script>
  if ($.browser.msie){
  window.location.replace("/draw");
  }
</script>

The backend of the app runs in Python:

class DrawRobot(webapp.RequestHandler):
    dataUrlPattern = re.compile('data:image/(png|jpeg);base64,(.*)$')
 
    def post(self):
        robot = RobotPic()
        robot.idea = int(self.request.get("idea"))
        robot.name = self.request.get("robotname")
        robot.format = self.request.get("format")
        img = self.request.get("img")
 
        imgb64 = self.dataUrlPattern.match(img).group(2)
        if imgb64 is not None and len(imgb64) > 0:
            robot.image = db.Blob(base64.b64decode(imgb64))
            robot.put()
            robotid = robot.key().id();
            self.response.out.write("""
<div id="content">
<p>
Thanks. Your drawing was saved.<br/><u>Your identifier for MTurk is  
<span style="color:#FF0033">%s</span>.</u>
</p>
</div>""" % (robotid.__str__()));
 
        else:
            self.response.out.write("<div id='content'>Nope. Didn't work!</div>");

Here are some of the drawings I got by paying $0.03:



Eric Brockmeyer – Final Project Update – CNC Food

by eric.brockmeyer @ 6:52 am

CNC MERINGUE – FAIL

Mixing meringue.


Attempt at pushing and pulling meringue peaks.


Various tooling attempts, some 3d printed, some household utensils.


Unfortunate results…

CNC Meringue from eric brockmeyer on Vimeo.

CNC M AND M’S

M and M pattern.


M and M setup, including CNC machine, hacked vacuum cleaner, and LinuxCNC controller.


Pick and place M and M machine.


Results.

CNC M and M’s from eric brockmeyer on Vimeo.

shawn sims-final project so very close

by Shawn Sims @ 2:12 am

Here is a brief update on Interactive Robotic Fabrication… these are a few screenshots of the RAPID code running in Robot Studio, which is receiving quaternion coordinates from openFrameworks via an open socket connection over wifi. Currently the openNI library is reading the skeleton / gesture of the right arm and producing a vector along the continuation from the neck to the arm. This gives the robot an orientation that is relative to the user’s position.

Setup begins with openNI and ofx, which produces the RAPID code to send to Robot Studio…

making the socket connection…

The connection is made to a specific IP address. This may be interesting if the user can be anywhere within the same wifi network and control a robot someplace else…

Once the socket connection has been made, the port for receiving the coordinates begins to listen… This is a view of the Virtual Controller, which mimics the physical one.

Last is a short video of actual live control of the robot simulation via the process described above.

Plan for interaction….
I have ordered 50 lbs of clay and lots of foam to experiment with the kneading / poking of clay. The tool that will be used is like a pyramid, which will yield a pattern something like Aranda Lasch furniture.

final project – an update – to be worked on during carnival

by susanlin @ 6:16 pm 13 April 2011

The good news is that I’m still obsessed, enamored, and re-watching the same inspiration animation over and over. The bad news is, everything (else) has been seriously overwhelming and I haven’t been able to reciprocate the love. Not going to lie about it.

Just thinking/researching about the pieces which will make this work, keeping in mind that half the battle is learning the algorithm and learning to code better. Will also try to leverage a strength to make it come together: visuals.

[ ] Optical Flow

Optical Flow.

w/ Distortion.

[ ] Edge Detection

Edges in action.
Reading: Other Methods of Edge Detection, Edge Detection via Processing

[ ] Sepia Colorization
TBD

Marynel Vázquez – Hard parts

by Marynel Vázquez @ 8:24 am

Robots and the future?

I would like to collect opinions about robots in the future (what they will be doing and what they should do). I decided to try Amazon Mechanical Turk, and started paying a few cents for people to give ideas about robots.

I had to use different strategies to get responses from people. I started with a very descriptive task with more than 20 lines of description and examples about how to complete it. The task consisted of completing 6 sentences: “In X years, some robots will …” and “In X years, robots should …” with X = {10,20,30}. Not many people filled out the survey, and so I had the impression that too much description at the beginning of a task might make people think it’s too hard and not worth the effort. I simplified my description over time, and changed how much I was paying many times. At some point it seemed like paying too much was also counterproductive, as people started thinking that they had to do more than simply fill in the short sentences. After several trials, I concluded that paying $0.03 for the 6 sentences was the most effective strategy.

There is always the possibility that people will cheat on MTurk. To try to reduce the amount of rejected tasks I was getting with my 3 cents, I decided to change the task to instead writing two 30-word paragraphs about what robots will and should do. I started paying 6 cents for the paragraphs without much luck; not many people were completing the task. I raised the incentive to $0.11, but just a few people reacted to the new price.

My last test consisted of asking about “robotics” instead of “robots”, as suggested by a few people in the class. During the past data collection, I realized that too many people were thinking about robots killing us, controlling us, etc., and I wanted to see if I could get a more diverse set of answers. I got just a few responses this time as well.

The unprocessed list of valid ideas I got about robots is presented at the end of this post. I probably ended up getting ~600 different ideas, though I still need to count the repetitions. I would like to be able to visualize the data in different formats (year, will/should, topic, agreement), so I started clustering the ideas by topic or by a single-word description. Also, as soon as I organize the data a little better, I want to ask people to judge the ideas; meaning, say how much you agree that robots will be doing this, or that robots should be doing this other thing.

I would like to collect drawings about robots capable of doing what the people described. For this task I plan to ask people to draw by hand, take a picture, and upload it to MTurk. I expect price calibration to be an issue again :S

Finally, here’s the list of ideas (each sentence either starts with “In X years, some robots will…” or “In X years, robots should…”, but this is omitted to simplify the reading of the post),

build houses
go on space missions
do housework
make dinner
fly planes
act as a personal assistant
be independent beings who can make decisions and act as citizens of the world
be able to express some sort of feeling
be free-thinking and helpful beings
be able to take on any physical job that is needed by the workforce
have feelings
take over jobs that are usually about manual labor
ruin a country
do the biological things
be appearing in all stages of life
control all man made activity
enter politics
do all things that a computer programmer can do
help rule the world
mingle in our life
conquer the world
work in most of the industries
start to work in governmental organizations
start to think like humans
have households of their own
be involved in the government
be driving
be able to fly people around
cook for families
be in all households
be competent enough to challenge humans
be independent thinkers faciliitating all aspects of humans life
try to overcome humans
bring complete change in travel and tourism
be at par with humans
be a part of our daily life
be able to communicate without humans knowing they are robots
be everywhere
be able to bake
do all the housework
complete complex original tasks
be able to navigate across a room without stepping on pets
be more advance than we could ever imagine
be destroyed
begin to have artificial intelligence
be slightly feared
take over the world
be a part of every day life
be so commonplace to humans that they do not give them a second thought, such as the way we look at automobiles today
become commonplace in medical treatment
serve as police and soldiers to perform dangerous tasks
be able to help human beings in daily life activities
perform surgeries without the aid of human control
drive our cars without need for human control
be roaming this earth
be able to extraordinary things to protect humans from danger
not exist because people won’t trust them
be able to be of good service for the world (should only be used for wars or other necessary actions of protection)
more advanced
only be invented if they are strictly used to fight in wars (this is the only use for something that can potentially be harmful)
be serving many of our needs
provide both services and companionship, and health and medical needs
become more integrated into our daily lives
be able to read our gestures and anticipate what we want
be fully integrated into our lives
be nearly invisible, part of the background, like a microwave or a couch
be designed to make sure they are not harmful to humans
be a common sight in everyone’s home
have taken over human kind
have advanced dramatically in capabilities from today
be used for hard labor jobs and tasks
be able to adequately serve human beings
be used for doing certain works for which humans take a lot of time
be used to earn money
be used for major operations
be used as fighting machines
have emotions in them
be used for jobs like cooking, driving, gardening, etc.
be sent for space expeditions and they will actually be able to think like we, humans
be used as a means of social entertainment
overcome the huge problems
take most of the human brain
predict the needs of humans
believe in life
carry humans
follow the crowd
be in most households in some form or another (ex: smart vacuum cleaner)
be commonplace
be able to self improve/diagnose
be able to accomplish a lot more than we can imagine right now (technology is growing exponentially)
be rapidly taking jobs previously done by people, like self serve at the grocery store but large scale
know what we need before we even tell them. They will analyze, remember, and predict behavior
be equals to humans for the law
help humans with complex tasks
want to have rights
serve the humans for domestic tasks
be a common property
improve its human aspect
be professors
be watchman at night and also local police
be a doctor
be in anti terrorist forces
be friends with people
be in the military
take rest in their depressed mode
be thoughtful like wise people
become famous for their outstanding contribution
think as human beings can
take part in scientific researches
be able to do our household activities
help human beings
be equal to humans
dominate human beings
rock
occupy the whole world
do all work without following human instructions
have baby robots
be assistants
work like most managers
work in companies
replace some humans
be more like humans
be already spread as commercial goods
be helpful to men
rule the world
think and analyze
talk
be able to do things independently
be able to reason out good and bad commands
be able to talk
be everywhere
be cheap enough for humans to purchase and use in their houses
be doing all jobs that human can
be cheaper and more available
be doing house work in homes
be able to serve people at restaurants
be able to drive cars
be able to have conversations with people
replace 90% of the human workforce
no longer be considered our friends
attain consciousness
be more widely used than they are today
be used for running elevators in shopping malls
be free
be self aware
be able to think freely an have basic emotions
be nearly indistinguishable from humans
be able to understand many languages
be able to express emotion and feelings
be able to communicate with us
not do anything more advanced
do the dishes and put them away
help protect rich people from poor people
do the laundry and put it away
replace some doctors for poor people
replace some caregivers for disabled people
bring back the greenary
replace all fuel
be there to help humans from all natural calamity
learn how to grow trees
be functioning only on solar energy
be using sun screen lotions due to UV rays
be more or less human in many aspects, able to have real conversations and interactions and act very human
dress and mimic humans in many ways
be visible all over the country to the point of being normal and accepted as something to interact with
be babysitting our children and working as communicative maids and domestic help
be working in offices, doing clerical and customer service type of positions, thereby lessening the number of humans performing these functions
be able to do household chores for those who can afford one
take care of basic house work
be smaller than the eye can see
be able to think for themselves
be self aware
be able to anticipate the needs of their owners
be doing jobs that require learning and quick thinking
as a human..
do autonomous work but it should not control the human
be in all hazardous applications
be like humans (we can’t identify humans from robots)
think as like a man automatically
do automatic work
destroy the entire world
be capable of going to space and work there by themselves
rule human
be capable of doing risky works in battle fields, atomic plants, etc.
overcome human beings in some field
work with man in every fields
still be our slaves
clean the environment
clean your house
fight wars
be able to act, think and learn on their own
be able to sort and fold your laundry
conduct a polling collections
destroy everything
increase workplace safety (workers are moved to supervisory roles, so they no longer have to perform dangerous applications in hazardous settings)
be God
rule the World
be masters and peoples should be slaves
rule the world
drive the cars
replace all the workers
engage in wars
drive the cars
start doing some menial office jobs
be personal tutors
have valuable places in industry and the military
wash dishes
have the ability to perform household tasks
take the place of infantry
have more tasks to perform in industry
behave like man
drive cars
be our parents
have their own brains
be our partners
be our friends
be able to speak exactly like humans
take over the planet so humans can watch TV all day
be able to speak with human voices
go to work instead of humans
be replacing humans in the work force
start driving cars
improve quality of life for humans
be able to take over the world
be able replace soldiers
be able to talk like a human
become a part of human life
do all the work of human beings
become more helpful to human beings
be the new generation or kingdom of human life
become the leaders of human beings
progress their growth
be ruling the humans
realize earth is going to be destroyed and go to the moon
be as intelligent as humans
be able to form a government
land on the moon
be able to do all mechanical tasks
save the human race
be more energy efficient
evolve into life and reproduce
become cheaper
do jobs a human will do, like be a receptionist
perform surgery
do some work in a factory
be equal to human in doing anything
assist us and help us in house keeping
share the work burden of humans
do a lot of work and drive a car or even a flight
be very much helpful to human
be on par with human intelligence
go thru gravity tests
learn how to fly airplanes
be sent to other planets for exploration
be on board in a space shuttle
fly an airplane
be considered very dangerous!
be able to be considered ‘companions’
be virtually indistinguishable from humans
be able to provide emotional comfort
be able to adapt mentally and emotionally to a situation
feel human-like emotions
go to a different galaxy
go back to where they came from
be extremely successful on Earth
prepare to dominate the world
be renewed
die out
start doing cooking
dispense fuel at fuel stations
do the job of a chauffeur
be a car mechanic
be a gun man
start doing the job of a watch man
refill ATMs
wash cars
be a prestigious thing for humans like TVs in the olden days
start robots rights commission
start to blame human beings
start to create a robot language and speak accordingly
think like human beings and have the capacity to verify the thinks
be used for protection purposes
should be protected from their vigorous activities
should be reduced in number
start to control men
be controlled
kill other robots
create other robots for themselves
begin to do things like being your personal assistant
become more commonplace for routine chores
be available to people who are hugely wealthy and eccentric
be limited so they don’t infringe on our rights
be available to many different households
be making life somewhat easier but in limited ways
perform surgery
pick up garbage
drive cars
assist with medical procedures
clean house
do housework
reproduce without human interaction
take the place of stupid people
be smarter that Earthlings
have brains
rule the world
be readily available to do housework and chores
be able to do all kind of things
be strictly supervised
affect the society
work the same like us
be part of our life
be able to help the peoples
be more efficient than most people
be surpassing our expectations in most areas
be breaking new ground
be able to fly planes
be able to drive cars
be able to prepare meals
a common household item
able to be ordered off the internet
work better and cost less
be able to perform surgery in an operating room from a doctor’s office
be able to cut the grass
wake you up in the morning, make your coffee and make your bed
kill all humans
be able to *think*
be capable of performing various tasks reliably
program humans to work for them
be way more advanced than humans
work instead of humans
be doing more involved tasks such as driving our cars
look more human and blend into society
be able to assist with domestic chores
be more advanced and able to handle more tasks that minimum wage people would handle previously
take over simple tasks in society
be more evolved with better technology
be the leaders of the world
give advises to the rulers of the country
be the rulers of the country
acquire the ability to have emotional life
be the soldiers in the boarder areas of the country
be able to grasp human language
look like humans
perform like a person
takeover all forms of government and run them efficiently
become human companions
takeover humanity and create a peaceful world
take the place of humans in dangerous jobs
learn how to reproduce
be capable of performing surgery
be able to prepare food and drinks
move heavy items from different places
fix things that are broken
be able to clean a house without making a mess of their own
take the place of some actors in movies
be able to drive a car.
become more useful in our everyday lives
completely eliminate humans working as cashiers
be humanlike and in every home
make homeowner’s lives easier (they should be commonplace)
replace humans in many facets
be used within every home to complete various household tasks
start destroying nature
do social service for free
start killing
start planting trees
rule the world
help people
be carefully monitored by actual humans
be useful to professionals in many fields
perform surgeries
be intuitive
recognize human speech
be available for average people to purchase
drive cars for us
serve as our military
go food shopping for us
be able to have emotions and converse with humans perfectly
become common household items
serve as janitors
guide the world
decrease the amount of green house effect
participate in social affairs
heal the world
take part in several invention widely
be able to communicate
fully run our home automatically
be considered a higher life form
take over our working factories
be more highly advance an more intelligent
run the world more or less
be advanced for assistance in the medical field
be readily available to carry out care taking work in day care, elder care and other types of facilities
be widely used in businesses for routine tasks
be companions to handicapped and elderly people
be readily available at reasonable prices for individuals and families
be able to do routine mechanics, such as repairing automobiles and completing some home construction tasks
be able to complete simple household chores
overtake human beings
not allowed to be so
start thinking like human beings
be the competitive to the human beings
not be allowed to think, they should always obey human orders
replace the jobs of many construction workers
exist in the classrooms as assistant teachers
assist in more homes as housekeepers
environmentally friendly
exist in more homes
be voice activated
be building buildings and developing land
be working in airports as baggage handlers
be more innovative in completing aesthetic tasks, like cut the grass of homes and companies
be able analyze personalities
be commonplace in our lives
guide our interactions with others
be familiar to many people
be used to further medical research
be able to do things we have never imagined
damage someone
stop
entertain the world
create some problem
be the nuisance of the world
rule the world
be considered very dangerous!
be able to adapt physically and emotionally, just as humans do
be virtually indistinguishable from humans
be able to feel emotions and react accordingly
mimic human beings and possibly substitute as ‘companions’
be kept as pets
perform chores
take on more complex tasks
smile
be dependable
be stewardesses/stewards
be friendly
have a place in society
be smaller in size, but bigger in the amount of abilities they have
be driving cars and flying airplanes
be more advanced
be interacting with you in public
be doing the jobs that others do today
build the perfect human
take over the world
rule the Earth
have a reset button
be human
take over human jobs
be used wisely
have to be controlled
demolish the human life
be a part of human life
take the place of humans in the defense forces
take up the place of human in factories
resemble the human
replace humans in some work fields
undergo slight changes
be more in activities
talk like human
make a statement
be one of the working members in most of families
be the General Manager of multinational companies
become the familiar faces in industry
become an employee in all departments and industries
become just like a man
be able to manage some of the work done by human beings
be able to think
be in every household
be very life like
be available for industrial use
be performing everyday duties
be able to communicate
travel to mars
fly airplanes
fight in wars
help in home making
perform heart surgery
drive cars
be ubiquitous in the home, the office, and industry
perform many useful functions just existing as a box
kill
be capable of performing any human function
have undergone a major shift in the workforce as robots take over most jobs
assume most routine lab duties
ease out many jobs
do most of the work
be very very special
do most of the work that humans do in an organization
rule the world
replace managers and high post officers
make humans dependable
marry
do all mankind works
walk along the streets
reproduce
take over the world
cook food
be in all offices
be available in the market
conquer the world
be able to do our homework
be so advanced that they can do anything
be able to know what we are thinking
be really advanced
be able to drive
be able to do some of our daily duties
be able to translate from one language to another
remain subservient to their human owners
wear clothes
remain quiet while completing their tasks
try to write books
have specific warranties for their parts
be used in rescue situations
be used in place of humans to explore deep space
dominate assembly line factory work
be able to clean our ovens, just as the Roomba vacuums our floors now
be performing remote construction work
be used to assist physicians as microscopic robot-cameras in exploratory surgery
be defunct
should be as sophisticate and as close to human functioning as possible
be moving around the world
be able to help the elderly live a normal life as much as possible
be treated as obsolete
be useful medically
blend in with society
take away a lot of jobs
live in most homes
gain emotions
make it so humans have to do no work whatsoever
take anthropomorphic forms
drive cars
cry
replace children
be unplugged
be going to work in place of their human owners
be able to take over the world
be able to play the violin well
be indistinguishable from humans
be able to hold a conversation with a human
get older like humans
take decisions
look like human
understand nature
play guitar
achieve success
not talk to you but will get everything of you if you come in front
judge music and dance shows
check the intelligence of humans by their face, movement, etc.
anchor dance and music shows
be getting on the judge seat
be used in war
be more intelligent then humans
be functioning with humans
not be utilized for negative purposes
be obsolete
exist
be friend of the environment
marry people
improvise with dancers in popular concerts all around the world
prepare each and every thing in this world without the help of humans
drive the flight and do the unimaginable things itself
rule the World
influence humans and persuade them to think that they cannot live without robots
make another robots
become more powerful
become very smart, find us useless and take over the world
learn and think
walk and talk like real humans
be humanoid and perform actions like human beings
have emotions and feelings
be able to express
perform heart surgeries
land on Mars
be cab drivers
perform surgery
run the government
start a war
provide better personal assistance with greater mobility capabilities
have full artificial intelligence and unlimited battery usage
communicate better
process paper work
determine what’s wrong with someone at a hospital
drive cars
be brain controlled artificial limbs
be a high resolution artificial eye implant for humans
do voice recognition in houses and commonplaces
be flying cars
be taxis
clone
function as lost limbs in a fully functional manner with a person’s motor skills
replace humans that operate public transportation (busses, trains, and taxi cabs)
do basic surgeries revolutionizing the health care industry
be able to work
be able to think
react agains opposition
become merge with humans
help in homes
drive our vehicles
control all vehicles (no need for drivers)
turn agains the human race
surrogates representing humans
replace blood cells with thousands of times higher efficiency of work
be able to work as replacement for most of the vital organs of human beings
provide companionship to their owners
work in architecture-related tasks, doing the most repeating jobs like drilling or checking the stability of buildings
help doctors and surgeons
support everything from rescue teams to spaceships to hospitals
build a building
automate all easy tasks inside our homes
work as implants in human bodies
eliminate the need for human soldiers
do simple, non-invasive surgery
drive vehicles to their destination without human intervention
perform surgery on a human being with little assistance from humans
drive city buses down city streets shared by passenger vehicles, pick up passengers from bus stops and collect fare
be able to move cargo from docked ships to trucks without constant supervision from humans
be a fully functioning and true to life imitation of an actual human being, making it difficult to differentiate human and robot apart
be able to work as fully functional limbs, stronger than their organic counterparts
be capable of advanced thought processes, such as technical problem solving skills, on their own
be humanoid and suitable for companionship
be tiny and go in the bloodstream performing tasks like delivering medicine and attacking cancer cells
be a bionic arm that can be attached to nerves in the body, so that a person can control it and feel it like a normal hand
make the world dependable upon them
be like human beings
will rock
be very intelligent
help humans without any commands

 

shawn sims-big ass robot update

by Shawn Sims @ 8:07 am

Currently the RAPID output is working smoothly. The only remaining issues are some coordinate system misalignments: the x,y,z orientations of the robot (RAPID), the Kinect (ofx), and the TCP (tool center point) are all different. We have solved most of these issues but still need to scale the input. Currently the interaction method is that you move your hand and a vector is created from your chest to your hand. That vector determines where the robot looks or where it operates.
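The chest-to-hand step itself is just a normalized difference of joint positions; the fiddly part is the axis remapping between frames (a sketch; the particular remap below is a placeholder, not the actual calibration):

#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch: unit vector from the chest joint to the hand joint (Kinect/openNI
// frame), then a remap into the robot's frame. The remap is a placeholder.
Vec3 chestToHandDirection(const Vec3& chest, const Vec3& hand) {
    Vec3 d = { hand.x - chest.x, hand.y - chest.y, hand.z - chest.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len > 0) { d.x /= len; d.y /= len; d.z /= len; }
    return d;
}

Vec3 kinectToRobotFrame(const Vec3& v) {
    // Example remap only: robot X <- Kinect Z, robot Y <- Kinect X, robot Z <- Kinect Y.
    Vec3 r = { v.z, v.x, v.y };
    return r;
}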

Some hurdles we have cleared…
+ Making openNI actually 3d instead of a flat image
+ RAPID output is clean with no errors (except joint limits when it moves to a position out of range)
+ Simulation is running live in Robot Studio (virtual controller) via OF

Things to do…
+ Map/Scale Quaternion inputs from openNI to limit input. This will minimize joint limit errors
+ Create a tool to operate on a soft material
+ Create a safe interaction

Ben Gotow-Final Projects-Hard Part…

by Ben Gotow @ 7:47 am

I’m revisiting the Kinect for my final project. I’m separating the background of an image from the foreground and using OpenGL GLSL Multitexturing Shaders to apply effects to the foreground.

GLSL shaders work in openFrameworks, which is cool. However, there’s a trick that took me about three days to find. By default, ofTextures use an OpenGL extension (ARB rectangle textures) that allows for non-power-of-two textures. Even if you use a power-of-two texture, the extension is enabled and allocates textures that can’t be referenced from a standard GLSL sampler2D. FML.
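In other words, call ofDisableArbTex() before allocating, so the texture is a plain GL_TEXTURE_2D that a GLSL sampler2D can read (a minimal illustration; the texture name and size are arbitrary):

// Minimal illustration of the gotcha above: disable ARB rectangle textures
// before allocating, so the texture is a plain GL_TEXTURE_2D that a GLSL
// sampler2D can sample with normalized coordinates. "foregroundTex" is a
// placeholder ofTexture member.
void testApp::setup() {
    ofDisableArbTex();                         // must come before allocate()
    foregroundTex.allocate(512, 512, GL_RGB);  // power-of-two, GL_TEXTURE_2D
}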

The first GLSL shader I wrote distorted the foreground texture layer on the Y axis using a sine wave to adjust the image fragments that are mapped onto a textured quad.

I wrote another shader that blurs the foreground texture using texel averaging. You can see that the background is unaffected by the filter!

Alex Wolfe | Final Project | Over the Hill

by Alex Wolfe @ 7:20 am

So there actually have been some great strides. I got in touch with Andrew Salomone through Becky Stern and he agreed to help me create a prototype on his computerized knitting machine sometime next week! Which definitely counts as conquering probably the hardest “hard part”. He also sent me the manual to his machine so I can convert my patterns over into something recognizable.

 

Until then, I received this wonderfully bulky old LK100 Singer and maybe a pound of terrible acrylic off of craigslist to play around with. (Since I probably won’t have an excessive amount of time for prototyping on Andrew’s machine, I might as well make sure the pattern looks right.) The lady who sold it to me showed me how to make a simple gauge swatch like so… and also how to cable, which is time-consuming but braindead simple.

 

After looking at knitwear designers like Sandra Backlund, and the Cables after Whiskey sweater pattern (which selects where cables will be placed with a probability function and produces pretty interesting results), both shown below, I want to create patterns with pseudo-random cables/pleats in order to generate interesting volumes (ideally still very simple tube-based dresses).

As far as a “meaningful function” goes, Perlin noise / flocking would be sort of perfect (it’s so graceful), but I feel like that horse is quite dead by now. So I was looking at reaction diffusion algorithms, Gray-Scott in particular. Given a standard boring ugly sweater pattern like this one, you can control the rate of diffusion to literally “grow” the diffusion to fit the pieces you are creating.

This creates very interesting results, especially if you knit so that the cables are nested within each other, or maybe map each isolated circle to an eyelet. Since this pattern is actually impossible to translate into knitted cables due to their orientation and proximity, it could warp the fabric in good or bad ways. I’m also debating the merits of using an actual generative system, or just having at it with simple probability.

This sketch begins with parallel lines, and then decides whether a given line will intersect with any other with a random probability. I also realized this sort of technique could easily be applied to a sensor, collecting some kind of data and visualizing it that way. So I guess solving my previous “hard part” has really just given me a lot more room to brood over this one. I think it’s important that it’s not just a textured knit produced electronically, but that the driving pattern underneath has some sort of relevance/significance other than pure chaos.
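That parallel-lines-with-random-crossings idea reduces to something like this (a sketch, not the actual Processing code; the crossing probability is a placeholder):

#include <algorithm>
#include <cstdlib>
#include <vector>

// Sketch of the idea above: start with nLines parallel lines and, row by row,
// let adjacent lines cross with probability p; each swap would map to a cable
// crossing in the knit. p = 0.15 is a placeholder.
std::vector< std::vector<int> > generateCrossings(int nLines, int nRows,
                                                  float p = 0.15f) {
    std::vector< std::vector<int> > rows;
    std::vector<int> order(nLines);
    for (int i = 0; i < nLines; i++) order[i] = i;

    for (int r = 0; r < nRows; r++) {
        for (int i = 0; i + 1 < nLines; i++) {
            if ((float) rand() / RAND_MAX < p) {
                std::swap(order[i], order[i + 1]); // a cable crossing at (row r, column i)
            }
        }
        rows.push_back(order); // record where each line sits on this row
    }
    return rows;
}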

final project early update

by honray @ 7:00 am

I’m working on reproducing the Blob at http://hakim.se/experiments/html5/blob/03/. So far, I’ve gotten box2d on javascript working, and am working on implementing a blob with the same kind of fluidity and dynamics.
I’ve thought about implementing a bunch of particles attached by springs, where each particle is connected by an even number of springs, half on the left and half on the right.
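For what it’s worth, that structure looks something like this in native Box2D terms (a C++ sketch for illustration only, since the actual build is Box2D on javascript; the particle radius, spring frequency, and damping are placeholders):

#include <Box2D/Box2D.h>
#include <cmath>
#include <vector>

// Sketch: a blob made of particles arranged in a ring, each connected to its
// neighbours on both sides by soft distance joints (so every particle has an
// even number of springs, half per side).
std::vector<b2Body*> makeBlob(b2World& world, b2Vec2 center, float radius, int n) {
    std::vector<b2Body*> particles;
    for (int i = 0; i < n; i++) {
        float angle = 2.0f * b2_pi * i / n;
        b2BodyDef bd;
        bd.type = b2_dynamicBody;
        bd.position.Set(center.x + radius * cosf(angle),
                        center.y + radius * sinf(angle));
        b2Body* body = world.CreateBody(&bd);

        b2CircleShape shape;
        shape.m_radius = 0.1f;           // placeholder particle size
        body->CreateFixture(&shape, 1.0f); // density 1
        particles.push_back(body);
    }

    // Springs to the nearest and next-nearest neighbour on each side.
    for (int i = 0; i < n; i++) {
        for (int step = 1; step <= 2; step++) {
            b2Body* a = particles[i];
            b2Body* b = particles[(i + step) % n];
            b2DistanceJointDef jd;
            jd.Initialize(a, b, a->GetPosition(), b->GetPosition());
            jd.frequencyHz = 4.0f;   // softness of the blob (placeholder)
            jd.dampingRatio = 0.5f;
            world.CreateJoint(&jd);
        }
    }
    return particles;
}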

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.