What I found missing from his work in its current form was practicality, or a way to imagine how more complex interfaces and experiences would be enabled within his worldview. The above project by the Fluid Interfaces Group, on the other hand, uses a series of existing technologies to make their prototypes completely possible today.
That’s not to say I think the interaction of holding a lens to everything and manipulating the physical world with little to no non-visual feedback is necessarily a good idea, though. Having used and made software and interfaces like this, I know they can be exceedingly frustrating and not overly enjoyable.
3. The Critical Engineer deconstructs and incites suspicion of rich user experiences.
from the Critical Engineering Manifesto
“Any sufficiently advanced technology is indistinguishable from magic” is the third of Arthur C. Clarke’s three laws. In many cases, the work of artists, designers, and others who deliver ‘human experiences’ uses technology to create this ‘rich user experience’ magic. Of course, those doing this art & design work are often their own engineers, doing a type of engineering themselves (even if they’re using art-engineering toolkits like openFrameworks or Processing). If by extension we then say they should adopt the mindset of Critical Engineering, the point at which the show is over and the trick can be ‘revealed’ is one of contention.
I think the very existence of the open source software movement in conjunction with GitHub has shown that many artist-engineers are freely willing to share what they make and how they do it. Even companies like Disney and Microsoft reveal a great many of their tricks through their large research organizations which publish findings regularly.
Andante, by the Tangible Media Group at MIT, visualizes animated characters walking along a piano keyboard, as if they are playing the physical keys with each step. It was by chance that this project attracted me: its aesthetic of lit-up figures is reminiscent of my recent mocap project, where I modeled the human figure as a pedestrian walking signal. The luminescent representation of the bodies is similar, and this project as a whole feels well-considered and complete; the attributes of the visualization successfully complement each other, and seamlessly integrate the virtual visuals with the physicality of the piano keys being pressed at the right times. I especially found the motive of the work very appealing as well: it is based on the expressive, full-body, communicative character of learning music, thus promoting an understanding of how music is instinctively rooted in the human body, something any audience can find relatable and introspective. The approach also took advantage of walking as one of the most fundamental human rhythms, indicating the careful and admirable consideration that went into the decision of how to represent the movement. I find the color palette of the visuals charming, especially the brightness in conjunction with how the video was recorded, amid a darker background that strengthens the design choice of the figures; I can almost see the clips being from a dream or bedtime-story scenario, of little lit fantasy characters bringing music to life in some sort of tale. Another neat characteristic is the variety of forms and physiques represented: different body types, walking postures, and even some animals! All things considered, I think this would make practicing the piano less lonely and playing the piano more fun; it is indeed like a little other world.
The excerpt from “The Epic Struggle of the Internet of Things” read to me as overly complicated and perhaps oddly unnecessary, as did the “Critical Engineering Manifesto”; the latter delved into a depth of societal introspection that I simply could not grasp or conjure enough interest in. What I absorbed from the former, at least, was the hypothetical scenario of a consumer attempting to merge the utility of two seemingly unrelated products; oftentimes this assumes the objects are from different technological time periods, which implies a desire to technologically advance the more mundane, archaic device of the two. I personally relate to the individuals who do not see the practical point in this mindset: with modern-day technology naturally being impressive compared to the past, and constantly advancing to encompass more opportunities, there is a constant rise of arbitrary explorations in how to make sometimes the most unnecessary things “more technologically advanced.” This can be the result of different factors, from pure curiosity to inherent human laziness. Though this “struggle of the internet of things” has the potential to pave the way to new, useful, innovative inventions that are practical and reasonable, people believe, and I can sympathize, that plenty of the results are probably made purely “just because,” with no real usable value. I have not dealt with this concept much at all, but perhaps a reflexive example I can come up with is Apple’s recent updates, for both the newest iPhone and MacBooks, both of which ignited anger and frustration in all my tech-savvy friends (much to my personal apathy as a non-Apple fanatic); from what I’ve heard, it feels like Apple has just been “upgrading” (debatable, haha) the specs of its newer products for the sake of implementing some change, without really thinking about their practicality in actual use or the potential feedback of its loyal customers.
Although this isn’t combining two objects, it was the closest example I could relate to as someone who has never thought about, and is confirmed to be thoroughly uninterested in, this “struggle of the internet of things.”
I thought that LineFORM, by Hiroshi Ishii, Ken Nakagaki, and Sean Follmer, was particularly compelling due to not only its simplicity but also its range of applicability. LineFORM is a “shape changing interface” in the form of a ‘line’: a kind of robotic string that can change itself to fit multiple purposes. It can act as a telephone; transmit and receive data as a touchpad; represent digital change through movement; record your motions and make you repeat them; serve as a stencil; function as a lamp when a light bulb is plugged into it; or become whatever is required when anything else is attached. Just the number of things you can do with a 3D line is really impressive to me, and nicely straightforward. Overall I think the possibilities that LineFORM represents are very interesting, but I would like to see it applied to more practical, everyday tasks and actions.
It has to be a game. I don’t care whether or not I create assets for it, it just has to be a game. I understand that it is a short project, so the message I design it to convey will be similarly short. I would prefer it to be 3D (looking forward to learning Unity/openFrameworks). For an idea of my scope (though I intend to do much less): http://store.steampowered.com/app/387860/
(^ this game is short and free by the way, I hope you’ll give it a try)
In the past, I always felt a little helpless knowing that I really only had enough skill to recreate Super Mario, but I’ve gained skill and hope from taking this class, and I feel like I’m ready to try making games again.
Gif of time slider animation pre-render in Maya (you may need to click it to see it run):
Sketches I did of the characters:
For my project, Golan suggested that instead of using Processing or Three.js, I could learn scripting in Maya because of my interest in animation. I was very excited to start this project, and took to it with a more story-focused mindset with the motion capture than I think most of the class did. I wanted to use the scripting to do things in Maya that I couldn’t do by hand (or at least couldn’t bear to do by hand or in the given time frame) that would supplement a story, no matter how short. The initial idea I had for this was a pair of disgraced/fallen/unfit samurai that circled each other in blame, getting closer and farther apart, with an audience of masks always turning to look at the two of them and closing in gradually. Eventually, I realized I didn’t have time to model two samurai, and settled on modelling the shell (mask, gloves, socks, cape) of a disgraced/fallen/unfit samurai warrior and trying to achieve a feeling of melancholy and nostalgia for a better time. I wanted to use Python scripting to generate and randomly place another modelled mask, and make it so that whenever the main mocap samurai moved, the masks would turn their faces to always follow him. Starting this project, I watched video tutorials on how to script Python in Maya, following along with them. After figuring out that I could do what I wanted to do, which the video tutorials actually basically covered, I started modelling. Before this project, I had only had a bit of basic modelling experience and a general broad overview of what Maya could do. The modelling ended up taking me more time than I thought. Afterwards, I also learned how to import a BVH file into Maya and how to rig/bind a model to the BVH skeleton. When I got to coding, I ran into an unexpected circumstance: although the masks would turn to face the samurai, after the samurai was bound to the skeleton, this no longer worked.
At first I tried binding the skeleton different ways, but in the end I made a separate, fully transparent object that I hand-animated to follow the samurai around. The masks then followed that object. In the end, I didn’t like the effect of the turning masks, because they made the scene more confusing: the masks didn’t turn enough to be noticeable. After finally getting everything set up and moving, I learned how to render. This was the first time I’ve rendered a scene, and I didn’t expect the final number of frames to be around 2000. The 2000 frames took longer to render than I thought they would. I tried changing the frame rate to 24 fps, but doing so significantly slowed down the mocap. The final step was to take my rendered scenes and stitch them together in Premiere. The end product was slower than it looked in Maya, so I sped it up, ultimately shortening it by half; it also rendered darker than my test frames. I didn’t have time to re-render all the frames, but I think it was good experience going into the next time I try to render something. In the end I’m satisfied with the project, but I would definitely like to do more with it given more time to really get things to move, thinking more interactively along with my story focus and getting more interactivity (leaving enough time for when things I want to work out don’t, and so on). I want to utilize code more and dig deeper into what I can do with it, and also learn more of the Maya-Python vocabulary.
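The turn-to-face behavior I was after is essentially what Maya’s aim constraint (`cmds.aimConstraint`) computes per mask. Outside of Maya, the underlying math reduces to a look-at calculation; a minimal plain-Python sketch (the function name and coordinates are my own, not a Maya API):

```python
import math

def yaw_toward(mask_pos, target_pos):
    """Signed Y-axis rotation (degrees) that turns an object at
    mask_pos to face target_pos on the ground (XZ) plane."""
    dx = target_pos[0] - mask_pos[0]
    dz = target_pos[2] - mask_pos[2]
    # atan2 gives the angle from the +Z axis toward +X, which
    # maps onto a rotateY channel; re-evaluating it every frame
    # against the moving target is what a constraint does for you
    return math.degrees(math.atan2(dx, dz))

# A mask at the origin watching a target directly along +X
# is rotated a quarter turn from its rest pose.
angle = yaw_toward((0, 0, 0), (5, 0, 0))
```

This also explains why the transparent stand-in object worked: the constraint just needs any transform to re-aim at each frame, and the hand-animated stand-in supplied one that the bound skeleton no longer did.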
Once again the WP-Syntax tool still hates me, and so here is the Github link to the code:
I was first exposed to Hiroshi Ishii’s work last year thanks to Austin Lee, my studio professor at the time. As a professor for the environments track for the school of design, Austin showed us Hiroshi’s work as a way to help communicate what environments design means. His work helps create harmony between digital and physical interactions and environments.
Being able to see and meet Hiroshi Ishii after studying his work was a wonderful experience. After discovering his passionate, inspirational, and whimsical attitude towards education and design, his work took on a new life. Hearing him speak helped me not only understand his work better, but also look at new technology, art, and design differently. I think we often get so caught up in the technical power of a piece that we dismiss work as a tech demo rather than a simple art piece. For instance, with his levitation piece, when I first saw it a year ago, I was in awe of the technology, but now, after hearing Hiroshi talk, I see it in a new light. His work gives off the impression that it is magic, and I think this shows that we often take technology for granted.
Perhaps what resonated with me the most during his talk was when he argued about the boundaries between art, design, philosophy, and computer science. He told us not to label these disciplines, or ourselves, because labels tell the world what you are not, just as much as they say what you are. These fields live together and survive because of one another. I enjoyed how he used verbs to identify when to optimize each field: Envision (art and philosophy), embody (design and technology), and inspire (art and aesthetics).
Additionally, I really appreciated his comments on friendship and collaboration. I think that this is one of the greatest skills I have acquired from the School of Design. My closest friends are the ones who critique the hardest, push me the furthest, and challenge me the most. I also respect that, even as successful as he is, he is still humble and takes a significant amount of time to recognize those who have helped him along the way. As the world, particularly America at the moment, feels more divided than ever, I appreciate that Hiroshi emphasizes the importance of friendship and collaboration.
7. The Critical Engineer observes the space between the production and consumption of technology. Acting rapidly to changes in this space, the Critical Engineer serves to expose moments of imbalance and deception.
In my own words — Critical engineers must be aware, understand, and take responsibility when creating a new innovation that will be provided to the general public. Critical engineers must strike a balance between the idea of new innovation, and natural human behavior when humans interact with new technologies.
I find this tenet interesting because it resonates with a lot of what we are taught in CMU’s School of Design. To this day, the main “manifesto” that most designers talk about is Dieter Rams’s 10 Principles for Good Design. I think there are a lot of similar themes, such as sustainability and the responsibility that comes with our ability to affect human behavior, especially for product designers who work with engineers.
CMU Design revamped their curriculum when I entered in 2014 to focus on Transition Design, or designing for sustainability. I came here thinking I’d learn how to make pretty, aesthetic things that people would buy because they looked pretty. NOW I realize the responsibility we have as makers to think about the magnitude of our decisions and how we can have a real influence in how people live their lives. While new technology and the “Internet of Things” sounds like cool stuff, are conversations and decisions being made about user needs, important intentions, and what type of future we want to live in?
inForm is a project by the Tangible Media Group that I really enjoyed. The project wants to create a relationship between a user’s digital information, and tangible space. I found this project to be amazingly playful and fun. The idea of transferring digital to physical data is very interesting to me. In a digital world, often we don’t make much of an effort to be physical anymore, even though the physical world is so important and integral to living. While we stare at our screens, we’re almost living in a virtual world that is constantly evolving to fit us better, to be more addicting, and to not let us go. When everyone is making such an effort to turn everything digital, it’s so refreshing to see a way physicality can factor back into digital space and possibly make digital space better. In the end, I think the thought processes and the technology behind this project have great potential going into a future of digital takeover, and that it will help us in developing spaces of digital and physical combination, interactions and interfaces that satisfy our invisible and tangible needs, which truly, I think are the best kind.
Also, they really made it look damn good in the video.
One tenet of the manifesto I found interesting was tenet number 1. The tenet basically says that every piece of technology we depend on must be considered both a “challenge and a threat.” Because we depend on these objects, it is imperative that we know them inside and out, all their workings, so we can rise to challenges that may arise due to our dependency on them, perhaps even shake ourselves from their shackles, and also be prepared for the event of their failure or effects. This should be done with all technology regardless of “ownership or legal provision.” To me, this tenet is extremely important as we continue into the technological era. Technology exists so pervasively around us that it’s not something a lot of us think too much about anymore. We assume that all our commodities will continue working forever. This dependency, combined with our mindlessness, could end in catastrophe. Therefore I agree that a critical engineer must think not only about the great effects that a new or old invention may have, but also about the negative effects of its existence, along with the effects of its absence after a prolonged period. If the internet had suddenly crashed a year after it was first invented, it would maybe have been inconvenient, but if the internet crashed tomorrow, there would be a global crisis. Panic would spread as information was lost, communication went down, and a massive number of commodities the internet provides, which we simply don’t know how to live without, disappeared. I know some people that can’t even get around without Google Maps. For this perhaps eventual crisis, I don’t know if we have a backup. I don’t know what the damage could be, and that is terrifying. In order to be able to bring in new technology, we have to first be the critical engineer, and look further than the technology itself so we can gauge the cost of dependency and the perhaps unexpected costs of its existence.
From the manifesto reading, Tenet #1 is the most compelling to me. Tenet #1 says that the Critical Engineer looks at technology and its effects on the well-being of society. If this technology proves to be a possible threat to said society, then the Critical Engineer’s job is to evaluate the threat and propose a change/solution regardless of any legal protections. I think this is interesting because a Critical Engineer could be anyone in society. I feel like this tenet says it’s up to the people that make up the social structure to determine if the technology is a possible threat and whether to abolish, change, or keep it.
An example of this tenet is obvious in intellectual property laws. Although the entire point of intellectual property law is to give ownership of technology and work, there are cases where information or technology is seen as public domain, and it is deemed imperative that citizens have access.
Mocap is cool. This project was fun just to get my hands on 3D software and also to actually see a mocap setup for the first time. Being my own model was not so great (my ‘performance’ is not very compelling, though I did try to do one of the dances from my generative book – just the foot pattern without much flourishing). Doing this reminds me I need to expand my network in Pittsburgh of performers, dancers, etc. – which I will do.
I didn’t write code for my final output, but I did get Golan’s example code working in Processing with my BVH. Then I moved on to exploring the 3D animation software Cinema 4D. I’d learned a little of this program about two years ago, so it was great to get back into it a little. I think I’ll try more things with this software now. I know that scripting in Python is possible in Cinema 4D. I didn’t script in this project, but would try this on the second iteration.
The project was fun. My output isn’t thrilling, but I’m glad to play with 3D (and remember why I love editing animation/video) and learn about cloners, physics tags (rigid body, collider body, force, friction, etc), lighting effects, and using the mocap skeleton.
A project from Hiroshi Ishii and the Tangible Media Group at the MIT Media Lab that I became very interested in is Materiable. The project is based on Hiroshi Ishii’s concept of tangible works called Radical Atoms. The idea behind Radical Atoms is a combination of computational screen work and actual physical work: using technology to make previously intangible data tangible. Materiable exemplifies this idea very well. In this project, interactive prisms/pins come together to create a larger malleable prism that is responsive to touch and able to replicate dynamic materials (e.g., a sponge, an elastic surface, etc.). This work, among many of Hiroshi Ishii’s others, is at the forefront of new technology and dynamic works. I’m excited to see how this concept of Radical Atoms can expand farther from its roots and affect other previously intangible media like film.
After going to Ishii’s wonderful lecture, I was looking through the collection of projects from the Tangible Media Group and was particularly struck by Cilllia. Over the last year or so, I’ve become increasingly interested in biomimicry and the design insights that can be gleaned from studying natural systems. Simultaneously, I’ve become increasingly skeptical of 3D printing as a medium for genuine innovation, as so much of the hype surrounding it boils down to little more than overpriced on-demand desk decorations.
However, this project thoroughly impressed me. By framing 3D printing not as the end medium, but as a method of synthesizing a unique material that itself has new properties, the TMG explores many compelling use cases for these furry plastic doodads. Additionally, the output is astoundingly low-tech. Aside from the complex production method, it doesn’t require electricity or hardware to function, and instead reveals new possibilities when combined with tech.
Some of the scenarios presented are more rooted in aesthetics and unique textures, and are even a little goofy, but others, such as the directional touch recognition, are beautifully functional. Overall, this project and its documentation are a phenomenal example of what exploratory and experimental design should do: open doors for new ideas and provoke the audience into questioning possibilities with this creation in a way that invites response and collaboration.
I chose to write about “Conditional Lover” because I thought it was absolutely charming. It’s a robot that uses data it gathers from the pictures on your phone to figure out what sort of facial features you would find attractive. Then, it uses its camera and “fingers” to use Tinder for you, deciding which users you would like and swiping left or right accordingly.
I love this idea (as an art piece more than a practical tool), because it makes an objective, impersonal process out of dating, which should be very personal. However, when you think about it, Tinder has already done that, replacing meaningful connections with “Do I find him/her attractive at first glance?” If Tinder is going to take most of the humanity out of dating, why not just hand the whole thing over to a robot? This piece really made me think about our superficial culture surrounding relationships, if only for a little while, so I think it has succeeded not only as a work of technology, but as a work of art.
So we had to get very, very tough on cyber and cyber warfare. It is a huge problem. I have a son—he’s 10 years old. He has computers. He is so good with these computers. It’s unbelievable. The security aspect of cyber is very, very tough. And maybe, it’s hardly doable. But I will say, we are not doing the job we should be doing.
2. The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.
This second point in the Critical Engineer’s Manifesto was the most thought-provoking for me, because it underscores a point that so often goes unacknowledged in discussions of new technologies: not only do advancements in technology pull us farther from understanding the mechanics of the new technology, they also abstract and obscure our ability to discuss the ethical, political, and social implications of their implementation.
One of the most significant examples of this comes in the form of politicians discussing the relatively recent phenomenon of cyber-warfare. Listening to almost any politician discuss their opinions or policy surrounding cyber-warfare, it becomes apparent that they usually lack even a shaky understanding of cryptography or hacking techniques, much less of how the internet fundamentally works. But the thing is, you can’t blame them! The average citizen doesn’t have any of this knowledge either; hardly anyone does except discipline experts, and even then, you would need a panel of specialists to explain every part of it.
These layers of technological abstraction, building blocks on building blocks that afford us everything the internet offers on an essentially hourly basis, leave us as a society, policy makers and citizens alike, largely unable to have any sort of informed or sophisticated conversation about the ethics, limitations, and boundaries of these systems. Arguments get boiled down to meaningless phrases and rhetoric that lack any real substance.
While I don’t exactly see a solution to this growing divide between knowledge of tech systems and legislation relating to them, I think those who are working and studying in the sphere of tech need to be much more firmly brought into conversations about ethics, understanding the immense power and scale of their field.
One part of the manifesto that stuck out to me was tenet number 4: The Critical Engineer looks beyond the “awe of implementation” to determine methods of influence and their specific effects. This is basically saying that it’s important to consider exactly why you are making the choices you are, and why you are developing the things you are, and if the answer is “because we can,” maybe you should reconsider. It reminded me of Jurassic Park, when Ian says, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” Just because something can be developed doesn’t mean it’s worth the time and resources, even if it would be really impressive or cool. Ultimately, the point of engineering is to improve lives, not to impress others.
For this Looking Outwards assignment, I chose to focus on the bioLogic project from the MIT Tangible Media Group. Hiroshi Ishii discussed it in his lecture, but I didn’t understand it the first time around, and I wanted to research it because I am very interested in the idea of biological entities that play an active role in art, and especially in new media, which is often regarded as an art form associated with tech more than anything else.
Essentially, bioLogic was an investigation into programming living organisms to “invent responsive and transformable interfaces of the future.” The group focused on Bacillus subtilis natto, which expands and contracts in response to atmospheric moisture. The group utilized this property of the bacteria to make several products, including little synthetic flowers and, most notably, a garment that ventilates based on the sweat coming off of the body.
Their process documentation is very helpful in understanding how their project actually works:
Finally, their project webpage contains a couple of interviews, one with a representative of New Balance, who talks about the importance of bioLogic to the athletic industry, highlighting the success of the team in making an interesting and useful product.
Of course the Critical Engineer manifesto starts counting at 0.
As a whole I think the manifesto is very similar to the unspoken oath we take as designers. We look to other disciplines and fields to learn and re-apply their methods. At CMU it is mandatory for all designers to take classes such as Intro to Psychology, Systems, Cultures, and Futures. We are constantly taught to think about how design works with the user, the people included in the production of the design, and the society and culture it will live in. We are taught to think of unintended uses and unpredicted malicious affordances. We are always looking past the shiny and pretty, and looking for true innovation and disruption. I think I really resonated with the piece, and that with a bit of change in language, it could easily be the designer’s manifesto. I guess this piece really showed that the critical engineer and the conscious designer are sisters, in a sense.
The only piece that caused me a bit of confusion, and that I needed to delve further into, was the last: 10. The Critical Engineer considers the exploit to be the most desirable form of exposure.
After struggling to get past the word exploit, I looked it up to be sure I was interpreting it correctly.
a bold or daring feat: the most heroic and secretive exploits of the war.
a software tool designed to take advantage of a flaw in a computer system, typically for malicious purposes such as installing malware: if someone you don’t know tweets you a link, it’s either spam, an exploit, or probably both.
I just think that the two definitions give different, but equally interesting, meanings to the final piece of the manifesto. The critical engineer finds that the greatest achievement or challenge is the most preferred way of being viewed. It could also be interpreted as the critical engineer considering the flaw or the bug to be the best way to objectively see the system. Is it our greatest flaw or our greatest achievement that will show off our truest self?
After sharing the manifesto with my peer, an electrical and computer engineering major, I asked for his opinion. He thought the piece was elegant and incredibly accurate, and mentioned that he thought all engineers should be required to read it. I asked which was his favourite, and he too picked the 10th tenet. He provided an alternative, and probably more accurate, interpretation. He explained that getting hacked would be the greatest form of flattery. If your work is getting hacked, then it suggests that the work is big enough or important enough for someone else to spend time trying to break it. In essence, the engineer believes that if their work is being exploited, their work is good enough to be worth trying to exploit.
For my mocap project I wanted to do a study of the nCloth feature in Maya used with motion, as well as get a basic grasp of the capabilities of scripting. In both aims I think I was quite successful. Each gif below is taken from separate playblasts (screencasts), all of which can be downloaded here; they chronicle the process of getting the result above.
Process
To start, I knew I wanted some fairly clean mocap data; capturing it myself would come with its own set of challenges. Mixamo’s animation library is pretty extensive, and setup with Maya takes practically no time (the auto-rig feature is simple, easy, and most importantly free), so I set up a simple bellydancing animation and looked at the character’s skeleton. The first script (2nd picture on the left) was basically a test which iterated through the skeleton and parented an object at each joint’s x and y coordinates. If one does not want any joints in the chain to have an object parented to them (such as the fingers, which were not very crucial in this particular animation), it’s easy enough to unparent them from the Mixamo skeleton and place them in a separate group.
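Outside of Maya, the core of that first script is just a depth-first walk over the joint hierarchy, skipping unwanted subtrees like the fingers. A plain-Python stand-in (the toy skeleton and helper name are mine; inside Maya the equivalent calls would be `cmds.listRelatives` to get children and `cmds.parent` to attach the objects):

```python
def walk_joints(joint, skip=()):
    """Depth-first walk of a nested (name, children) skeleton,
    yielding every joint name except skipped subtrees."""
    name, children = joint
    if name in skip:
        return  # e.g. fingers, which get no object parented to them
    yield name
    for child in children:
        yield from walk_joints(child, skip)

# Toy Mixamo-style chain: hips -> spine -> hand -> two fingers
skeleton = ("Hips", [("Spine", [("Hand", [("Finger1", []),
                                          ("Finger2", [])])])])
joints = list(walk_joints(skeleton, skip={"Finger1", "Finger2"}))
# joints == ['Hips', 'Spine', 'Hand']
```

In the actual script, each yielded joint would get a sphere created and parented at its coordinates.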
My second script essentially did the same as the first but for a polyPlane instead (pictured bottom left). These would become nCloth once the feature was applied.
The most time-intensive part of the project was experimenting with the nCloth feature, which I knew to be pretty finicky to work with; keeping cloth simulations from glitching and flying in unexpected directions takes time. Tutorials are any Maya user’s best friend, so I found a quick but helpful tutorial using a transform constraint to keep the cloth moving with the dancing form. My third script produced the gifs shown below, which essentially put into action each step of the tutorial’s instructions, but in code form.
Finally, my last script loops the third script to create the final product shown below (minus the shading material). I ran the first script to create and parent spheres at every joint except the fingers, then ran the second to create a plane at each joint as well. The last script iterates through each of those spheres and planes, assigns them a passive collider and nCloth respectively, and then applies a transform constraint to the pair, so the cloth follows the parented spheres. If one wishes to run the script more than once or on different objects, the iteration number must be updated accordingly, since when Maya creates nCloth it names the object "polySurface" followed by the next available number in the outliner.
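A sketch of that nCloth setup step, assuming Maya: `createNCloth`, `makeCollideNCloth`, and `createNConstraint` are Maya's standard nDynamics MEL commands, but the object names and exact selection order here are illustrative rather than the author's script:

```python
# Requires Maya. Pairs up the joint-parented spheres and planes, makes
# each plane an nCloth, each sphere a passive collider, and constrains
# the cloth so it follows the animated skeleton. Names are assumptions.
import maya.cmds as cmds
import maya.mel as mel

spheres = cmds.ls("pSphere*", type="transform")
planes = cmds.ls("pPlane*", type="transform")

for sphere, plane in zip(spheres, planes):
    # Turn the plane into an nCloth object
    cmds.select(plane)
    mel.eval("createNCloth 0;")
    # Make the joint-parented sphere a passive collider
    cmds.select(sphere)
    mel.eval("makeCollideNCloth;")
    # Transform-constrain the cloth so it rides along with the rig
    cmds.select(plane)
    mel.eval("createNConstraint transform 0;")
```

Running this twice without cleaning up would trip over Maya's auto-generated "polySurface" numbering mentioned above, which is why the iteration count has to be adjusted between runs.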
Conclusion
From this project, I learned that scripting isn't that hard! Essentially all you are doing is translating into code every action you would otherwise do manually. Commands can easily be looked up, and even someone with limited knowledge of Python could pick it up quickly. There's also a reference describing every command and its flags. One can even call the maya.mel.eval function, which directly evaluates a MEL command. Scripting made a project which would've been possible yet painstaking to do manually fairly quick and simple.
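For instance, the Python-to-MEL bridge mentioned above is a one-liner (requires Maya; the cube names are just examples):

```python
# Requires Maya: the same operation via the Python API and via MEL.
import maya.cmds as cmds
import maya.mel as mel

cmds.polyCube(name="myCube")            # Python command...
mel.eval('polyCube -name "myCube2";')   # ...and its MEL equivalent
```

This is handy when a tutorial gives MEL snippets but the rest of the script is in Python.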
For my physical computing project I decided to create a little box to protect my chocolates from warm weather. Once the environmental (outdoor) temperature reaches 65 degrees Fahrenheit or greater, the fans kick in until the temperature drops back below 65. Who wants melted chocolate during the summer? This will keep them nice and cool, but not hardened like putting them in a fridge would.
Using the littleBits was rather simple, as Golan described. Setting up my cloudBit was a no-brainer, and linking it with IFTTT was also extremely simple. Wiring up my box did not take too much time either, but the connections are a bit unreliable due to their magnetic nature. Unfortunately, if the box is jolted and the cloudBit loses power, it takes 15 seconds to reboot and will not start the fans back up if they were already on before the "power outage." This is because on start, the cloudBit awaits a trigger from IFTTT, which won't send a new "turn on fan" trigger until the temperature drops back below 65 degrees and then rises above it again.
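That failure mode follows from the triggers being edge-triggered (they fire on a threshold *crossing*) rather than level-triggered. A minimal sketch in plain Python of the behavior described above, with all names hypothetical:

```python
# Model of the cloudBit/IFTTT behavior: the fan state only changes when
# the temperature crosses the threshold, so after a reboot (state reset
# to off) a temperature that is already above 65 never re-triggers it.
THRESHOLD_F = 65

def fan_states(temps):
    """Return the fan state after each temperature reading."""
    fan_on = False          # cloudBit's state after (re)boot
    prev = None
    states = []
    for t in temps:
        if prev is not None:
            if prev < THRESHOLD_F <= t:
                fan_on = True    # rising crossing: "turn on fan" fires
            elif prev >= THRESHOLD_F > t:
                fan_on = False   # falling crossing: "turn off fan" fires
        prev = t
        states.append(fan_on)
    return states

# Rebooting while it's already warm: no crossing, fan stays off.
print(fan_states([70, 71, 72]))  # [False, False, False]
# A dip below 65 re-arms the trigger.
print(fan_states([70, 64, 70]))  # [False, False, True]
```

This is why only a dip below 65 followed by a rise brings the fans back after an outage.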
This project was created using littleBits and a cloudBit, as well as IFTTT. Here are the recipes:
I believe that among all the engineering tenets, #9 is the most important, relevant and (in my opinion) impactful. I say this because the code in anything defines how digital technology works, and digital technology is by far the biggest, most influential, life-impacting and ever-present form of technology, innovation and invention since the start of mankind.
It's very important to maintain a balance between human and machine interaction; if one overwhelms the other, the balance is broken. This tenet says that engineers should not write code purely for function, but for how the people who interact with it respond to and perceive it. They should delve into psychological and social realms, as the tenet says, and work to create the most immersive digital experience possible without ruining the harmony with the physical world. A few large corporations seek to create seamless communication between the two, "perfecting" the experience, such as Apple (during the years Steve Jobs was in charge). The company has always aimed to create a simple-to-use (to minimize frustration) yet powerful operating system experience across all of its devices, from mobile devices like the iPhone to desktops and laptops. Not only does Apple strive to minimize the friction between person and machine, it has developed a tight and invisible connection between all of its own machines, creating a machine-to-machine environment that makes the experience even better as you switch from one device to another without problems.
VRDoodler is a comprehensive in-browser 3D drawing tool that lets you draw and explore your drawings in 3D, with or without virtual reality gear. It is definitely an interesting, unique concept that would not have been possible before our generation's technologies. Unfortunately, I could not stay for much of her lecture at Weird Reality because of volunteering commitments; from the short time I was present, however, I was able to understand a few things. One is that it can be frustrating, as it has a bit of a learning curve: you can often find yourself drawing at multiple depths/distances without realizing it until you spin the camera. One critique is that it is a bit iffy on a phone, laggy and not fluid; it is best on a computer, with a tablet.
For this project, I really wanted to alter some characteristics of previously created narratives, in hopes of changing their concepts. My initial idea consisted of imitating lead roles in films and switching their living forms with inanimate objects. (i.e. – Replace movie characters with the tools/objects they use.)
PROCESS
When coming up with possible movies to imitate, I considered the key objects (i.e. – staff, gun, etc.) and how they related to their character's role in the film (i.e. – police guard, wizard, etc.). The film I thought would convey this best was Quentin Tarantino's Pulp Fiction. More specifically, I aimed to re-create Jules, played by Samuel L. Jackson, and a specific dialogue he has with one of his boss's "business partners". After reviewing the scene multiple times, I decided to change up my concept and replace the main characters with a sort of visual pun (Hint: Pulp Fiction and Oranges).
After finalizing details, I recorded multiple BVH files of Jules and the business partner, Brett. This process was a bit difficult since the camera used (Kinect V2) didn’t particularly like the fast movements I was trying to imitate while standing and sitting. As a result, some of the movements came out a little glitchy and some of the previous “aggressive” movements had to be slowed down.
After recording, I inputted the BVH files and adjusted camera angles similar to those in the actual scene. This took quite a while, as timing was key. After the scenes were lined up, I proceeded to create a set that would fit the new concept I was aiming for (i.e. – kitchen counter). I then rendered out the figures and adjusted certain characteristics at certain points of the film. For example, when the Brett Orange is shot, his color begins to change to a greener, more vile color.
REVIEW
I am particularly happy with the results. Although the rendering of the characters is not as high quality as I would like, I am happy with what I created given a rather chaotic week.
I will definitely continue to improve this project in the future (i.e. – work on developing software to automatically rotoscope an inputted scene, make adjustments to character rendering for smoother movement, etc.). Once I have a better understanding of the bugs I'm facing and have created more efficient programs to render out these scenes, I may even continue to recreate the entire film!