Description: This free application collects information about your locations.
Purpose: Tracking your locations could be a good indicator of how adventurous you are…
Visual Techniques: The application uses your device’s GPS and provides a graphic showing the places you visited as well as how frequently you visit them.
Areas for Improvement: None.
Description: Mood Panda tracks your daily mood as well as the reasons behind your mood.
Purpose: Having a clear assessment of your daily mood could indicate whether or not you’re a happy person.
Visual Techniques: The mobile application asks for your mood every day and generates a graph of your past moods.
Areas for Improvement: The design needs more work; self-assessment of mood might not be accurate.
Description: Mint for iPad is a mobile application that reviews all of your personal finances.
Purpose: Understand where your money goes.
Visual Techniques: The application visualizes and categorizes your financial expenses in a chart.
Areas for Improvement: None.
AskMeEvery is a very simple app that asks the user a question every day through email or text messaging and plots their responses on a graph. What’s particularly nice about this app is its flexibility, as it allows the user to customize their own questions. I decided to use AskMeEvery as a means to monitor my stress/happiness level, so I’m having it ask me every evening whether I had a good day.
Sleep Cycle is an iPhone app that serves as both an alarm clock and a sleep monitor. What’s interesting is that it uses the iPhone’s accelerometer to monitor the user’s movement and determine which sleep phase they are in, which allows the app to wake the user during their lightest sleep phase. In addition, the app keeps track of the user’s sleep quality and graphs how factors such as eating or stress affect it.
People often get so entangled in their busy lives that they forget to maintain a healthy lifestyle–in particular, their work schedule prevents them from getting sufficient sleep each night. As someone who neither sleeps very well nor gets enough sleep per night, I hope Sleep Cycle will allow me to better understand how much damage I am doing to my body (!!?) and how my daily activities can either benefit or adversely affect my sleep quality.
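Sleep Cycle’s core trick, sensing restlessness from the accelerometer and only ringing once the sleeper is in a light phase, can be approximated with very little logic. Below is a minimal sketch of that idea in Python; the window, thresholds, and phase labels are my own guesses, not the app’s actual algorithm.

```python
# Rough sketch: infer sleep phase from accelerometer movement.
# Thresholds and window size are illustrative guesses, not Sleep Cycle's values.

def movement_magnitude(sample):
    """Overall motion for one (x, y, z) accelerometer reading."""
    x, y, z = sample
    return abs(x) + abs(y) + abs(z)

def classify_phase(samples, light_threshold=0.15, deep_threshold=0.05):
    """Label a window of readings as 'awake', 'light', or 'deep' sleep."""
    avg = sum(movement_magnitude(s) for s in samples) / len(samples)
    if avg > light_threshold:
        return "awake"
    elif avg > deep_threshold:
        return "light"
    return "deep"

def should_ring(samples, alarm_window_open):
    """Ring early only if the sleeper is already in a light (or awake) phase."""
    return alarm_window_open and classify_phase(samples) in ("awake", "light")

# Example: a calm window of readings during the wake-up window.
readings = [(0.01, 0.02, 0.0)] * 20
print(classify_phase(readings), should_ring(readings, alarm_window_open=True))
```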
RescueTime is an app that monitors your time spent on websites and applications and keeps a log of how you spend your time on the computer. The user’s activities are sorted into a number of different categories, including Business, Shopping, Reference & Learning, and Software Development, and the user can view a summary of how much time they have spent on each from their dashboard. I think what’s effective about RescueTime is that it creates self-awareness by explicitly logging the user’s activities on the computer and displaying them for the user to see.

As it is easy to lose track of time in the bustle of the day, people often forget how they spend it. Electronic devices such as laptops are almost double-edged swords: while they allow people to get work done more efficiently, they are also equipped with a wealth of distractions that can break an effective workflow. I myself can be an avid procrastinator even during the times I need to stay focused the most. At the same time, I sometimes underestimate my productivity and assume I had not made good use of my time when in reality I got a lot of tasks done. By logging my activity on my laptop, RescueTime will let me see a better picture of how I spend my time, whether to my delight or my dismay.
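Under the hood, a tracker like RescueTime is essentially a loop that samples the active window and adds the elapsed time to a category bucket. Here is a minimal sketch of that idea; the category map, app names, and the get_active_app placeholder are mine, not RescueTime’s real taxonomy or API.

```python
import time
from collections import defaultdict

# Hypothetical mapping from application name to a RescueTime-style category.
CATEGORIES = {
    "chrome": "Reference & Learning",
    "vscode": "Software Development",
    "outlook": "Business",
    "amazon": "Shopping",
}

def get_active_app():
    """Placeholder: a real tracker would query the OS for the focused window."""
    return "vscode"

def track(duration_s=60, sample_every_s=5):
    """Sample the active app periodically and total the time per category."""
    totals = defaultdict(float)
    end = time.time() + duration_s
    while time.time() < end:
        app = get_active_app()
        totals[CATEGORIES.get(app, "Uncategorized")] += sample_every_s
        time.sleep(sample_every_s)
    return dict(totals)  # e.g. {"Software Development": 60.0}
```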
inFORM is a project I admire profoundly. It is a dynamic shape display that can create physical 3-D content from digital data while allowing users real-time interaction. It consists of a 30×30 actuator table that represents the 3-D forms, while displays and projectors color and enhance the visuals of these forms. For interactivity, a Kinect captures input from the user, which can then be represented on the actuator table. Under the hood, it uses openFrameworks for the processing and Arduino PCBs as the hardware circuitry.
I’ve always been excited about the idea of projecting physical 3-D objects in thin air (I thought magnetic levitation could be used for that, if a magnetic gas existed!), and this project is a step closer to that dream. The idea itself is very strong: being remote, yet having a strong physical existence and interaction. I also feel they documented the project really well. As further improvements, the 30×30 table could have better resolution, and the user behind the screen should get haptic feedback from the objects they virtually ‘touch’.
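The data path described above, from Kinect to pin table, is conceptually simple: downsample the depth image to the 30×30 grid and turn each depth value into a pin height. A rough sketch of that mapping follows, with made-up depth and pin-travel ranges; the real system is built in openFrameworks/C++ with Arduino-driven actuators, not Python.

```python
GRID = 30          # inFORM's pin grid is 30x30
PIN_MAX_MM = 100   # assumed maximum pin travel, not the real hardware spec

def depth_to_heights(depth, near=500, far=1500):
    """Downsample a depth image (a 2D list of millimetre values, at least
    GRID x GRID in size) into a GRID x GRID array of pin heights.
    Nearer objects push pins up higher."""
    rows, cols = len(depth), len(depth[0])
    cell_h, cell_w = rows // GRID, cols // GRID
    heights = []
    for gy in range(GRID):
        row = []
        for gx in range(GRID):
            # Average the depth samples that fall inside this grid cell.
            block = [depth[y][x]
                     for y in range(gy * cell_h, (gy + 1) * cell_h)
                     for x in range(gx * cell_w, (gx + 1) * cell_w)]
            d = sum(block) / len(block)
            d = min(max(d, near), far)
            # Map near..far onto full..zero pin extension.
            row.append((far - d) / (far - near) * PIN_MAX_MM)
        heights.append(row)
    return heights

# Example: a flat scene 1 metre away yields uniform, half-extended pins.
flat = [[1000] * 60 for _ in range(60)]
print(depth_to_heights(flat)[0][0])  # -> 50.0
```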
Do Not Touch
by studio Moniker.
This one surprised me. Do Not Touch is a crowd-sourced music video that stores the cursor positions of all the users who take part in it. It’s such a simple idea, yet it turns out so exciting. I saw the cursors as bees flying around, trying to find their way. The thing I liked most is that it totally does justice to its aim of celebrating the humble mouse cursor, and guess what, anyone can go and take part!
Color a Sound
by Blair Neal
Color a Sound is an installation that enables participants to color a grid on a transparency and make music. As this transparency passes over an observation line, sounds are triggered corresponding to the marks on the transparency. The project uses a camera as input and is made in MAX/MSP. The music is restricted to a major scale, indicating that the project is aimed at a general (non-musician) audience. I feel this project could have been better. I understand the artist’s idea of the grid on a transparency as being inspired by the marked roller in a music box, but if that’s the analogy we’re making, why don’t the physical aesthetics support it? The project could have been made to look more like a beautiful music box, and that would have had a stronger impact on the audience. Also, if we’re not sticking to the music box and we’re aiming for a general audience, why not make music by unconventional means, like converting an ‘ink splatter’ or ‘floor marks’ into triggered notes?
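Mechanically, Color a Sound works like a step sequencer: as each column of the grid crosses the observation line, every marked cell triggers the note assigned to its row, and the rows are constrained to a major scale. A tiny sketch of that idea follows; the cell-to-note mapping and MIDI numbers are my assumptions, since the original is a camera-and-MAX/MSP patch rather than Python.

```python
# C major scale over one octave, as MIDI note numbers (assumed mapping).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def column_to_notes(column):
    """column: list of booleans, one per grid row (True = colored cell).
    Returns the MIDI notes to trigger when this column crosses the line."""
    return [C_MAJOR[row % len(C_MAJOR)]
            for row, marked in enumerate(column) if marked]

# Example: a transparency column with rows 0 and 4 colored in.
print(column_to_notes([True, False, False, False, True, False, False, False]))
# -> [60, 67]
```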
What can I say? Omicron is an amazing audiovisual projection that happens to be a permanent installation in Hala Stulecia, a large dome in Poland. Omicron was the result of a collaboration among about six people (I won’t list them here because they’re in the description of the Vimeo video), so it is a large-scale project created by a relatively small group. It was inspired by the 1910s, and “historical and artistic references were used to reveal the architecture of the space.”
The way lights and shadows are employed to create the illusion of smooth and rugged surfaces is fantastic, and the music is very well integrated with the projection. The piece certainly has a futuristic feel to it (which was the artists’ intention), but I wonder how they could push the idea even further and immerse the viewers in the space. Perhaps instead of restricting the projection to the ceiling, they could also project onto the floor to create an even more surreal atmosphere.
FaceRig is basically a real-time ‘face-changer’ that uses your webcam to map your face onto a high-quality 3D character. While the concept is not a novel one, what surprised me about FaceRig was how smoothly the face-tracker seemed to work in the video. In my experience working with face-tracking technology such as FaceOSC for Processing, the tools still have a fair share of limitations (such as not working well for people with facial hair, small eyes, or glasses) and a small collection of annoying bugs. Of course, since FaceOSC is a free library for Processing, it may not be as refined as the other, not-so-free face-trackers out there, so I’m looking forward to seeing how FaceRig will overcome some of the aforementioned limitations. Also, if this project ends up being commercially available, it would be really interesting to use it with Voice over IP services such as Skype. :D
3. A project that could have been great – EYJAFJALLAJÖKULL
I’m someone who really enjoys augmented projections, so it’s no surprise that EYJAFJALLAJÖKULL (which apparently means ‘island mountain glacier’) caught my eye. Inspired by an unexpected volcano eruption that prevented him from flying to New York, Lemercier created a projection of mountains intended to trick the viewer’s senses by creating the illusion of depth on a flat surface. The optical illusion was achieved by generating fake shadows and light sources as well as depicting the mountains with grids to explicitly define the surface planes. However, while the result was clearly beautiful, I could not help but feel that something was missing and that it needed some extra *magic*. For example, a nice touch he made was the group of ‘falling stars’ that landed on random areas around the mountains; however, it would have been nice if the animation had been refined further, possibly by making the stars ‘splash’ when they hit a surface. Still, changes like this are minute enhancements, and my critique is not so much that an ‘okay’ project could have been ‘great’, but rather that a ‘great’ project could have been ‘amazing’.
I remember seeing many digital scratcher projects done with a full array of interaction techniques (touch screens, Leap Motion, Kinect, etc.), but I find the V Motion Project to be the most amazing of them all. For one thing, the project allows dancers to control audio with their full-body motion, unlike traditional scratchers that limit motion to the hands. This brings a much more expressive performance to the project and makes the show so engaging. I noticed that they primarily use the hands as controls even though they track the full body. For example, nearly all buttons are pushed by hands, and the distance between the two hands and the distance between the hands and the floor each determine some aspect of the audio, while other body parts (head, feet, etc.) are rarely used. I think this is a wise simplification, because if every body part controlled something, the body would become over-constrained and the dancers might not be able to move freely without causing unwanted changes.
And ohhhh, the graphics! They blow my mind. I love the tessellated shape of the body and all the tiny mesh-like fragments that emerge when buttons are punched.
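The control scheme described above, hand spread and hand height mapped onto aspects of the audio, boils down to a few distance calculations. Here is a toy sketch of such a mapping; the parameter names and ranges are mine, not the V Motion Project’s actual rig, which fed Kinect skeleton data into a custom audio setup.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y, z) joint positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def clamp01(v):
    return max(0.0, min(1.0, v))

def hands_to_params(left_hand, right_hand, floor_y=0.0, max_reach=1.8):
    """Turn two hand positions into normalized 0..1 audio controls.
    The specific parameters (filter cutoff, volume) are illustrative."""
    spread = distance(left_hand, right_hand)              # hands apart -> cutoff
    height = min(left_hand[1], right_hand[1]) - floor_y   # hands above floor -> volume
    return {
        "filter_cutoff": clamp01(spread / max_reach),
        "volume": clamp01(height / max_reach),
    }

print(hands_to_params((-0.4, 1.2, 0.0), (0.5, 1.4, 0.1)))
```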
Project that surprises me:
Karl Sims – Evolved virtual creatures, evolution simulation, 1994
This video shows a virtual creature evolution simulation from twenty years ago: the results of a research project studying the Darwinian evolution of virtual block creatures. Hundreds of virtual block creatures are created inside a supercomputer (of twenty years ago) and tested on several tasks. Only the outperforming creatures survive, after which their virtual genes, and mutations of those genes, are preserved for further evolution. The video strikes me simply because the results seem to make sense despite the creatures looking so absurd! I think it’s very impressive that the virtual creatures move and evolve in believable ways. I’m very interested in seeing what algorithms Karl Sims adopted to make this work, though I wasn’t able to find more details.
Imagine what we could do with this idea and today’s technology. With distributed computing frameworks like MapReduce, we could produce massive numbers of such creatures (say, millions) and test them against a gigantic skill set… I’m so curious to know what the winning creatures would be and whether they would share similarities with real-world species.
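The selection-and-mutation loop described above is a classic genetic algorithm: score every creature on a task, keep only the outperformers, and refill the population with mutated copies of their genes. Below is a bare-bones sketch of that loop; the ‘genome’ and fitness function are stand-ins, since Sims evolved articulated block bodies and neural controllers inside a physics simulation.

```python
import random

def random_genome(length=10):
    """Stand-in genome: a list of numbers instead of a creature's body plan."""
    return [random.uniform(-1, 1) for _ in range(length)]

def fitness(genome):
    """Stand-in task score; Sims used measures like distance walked or swum."""
    return sum(genome)

def mutate(genome, rate=0.1, scale=0.2):
    """Randomly perturb some genes."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=100, survivors=20, generations=50):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[:survivors]                 # only outperformers survive
        population = best + [mutate(random.choice(best))
                             for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

print(fitness(evolve()))
```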
Project that could be cool but (slightly) disappointed me
A hundred years of rock in less than a minute
http://www.concerthotels.com/100-years-of-rock
This project has a beautiful concept, and the results are rendered beautifully by an animation that walks people through the music genres between 1900 and 2000. After playing around with the beautiful graph for a while, I was disappointed to find that no band information is shown at all. It’d be so interesting to see the bands behind each genre and how they changed their styles and influenced their peers over time. After all, it was these musicians who propelled the evolution in music.
While looking forward to NMA&D, I explored Vimeo videos (partially because I wanted to watch high-res videos with entrancing background music; they set the mood!). I am familiar with Processing, so this is the first tag I looked for during my search. Later on I found out how powerful openFrameworks is and then searched for it on Vimeo as well. I came across many beautiful projects, and here are three.
This project surprised me. LAIKA is a dynamic typeface in which different characteristics like boldness, italicizeness (if that’s a word), and serif / sans serif can be continuously and dynamically varied. After having seen the dynamic typography in today’s class, I didn’t expect something very different from this project, but I was held in shock and bewilderment when I saw the applications of the typeface. The team who created it installed words set in this dynamic typeface in different public areas and, using either cameras or motion sensors, they detect and dynamically alter the typeface of the text in congruence with the motion of passers-by in the streets. I wasn’t inspired by the computational complexity but by the impact that such dynamic fonts can have in public. Suddenly, sign boards can become interactive, and this led me to think how much more informative and meaningful information could become simply by allowing people to play with text.

The project does miss out on some opportunities. It could be more engaging if it employed colors, altered the curves of the letters, and delved into weird fonts like the ones with squiggly lines or those that look like handwriting. With such elements the fonts would feel more real rather than looking like the standard fonts people type on computers. Another thing the project lacks is the possible alterations hand gestures could make to the font. Right now the fonts just change in boldness or tilt; they could have been made more playful by allowing the font to change in shape or form. What they really achieved well, however, is the potential public impact and the interaction workflow: letters change, but as soon as the user leaves proximity they snap back to their original shapes. This gives the user the impression that whatever they do isn’t destructive. Since this work is a thesis, I didn’t actually come across prior work. It appears dynamic typography traces its roots to kinetic typography, the idea behind which is that if text moves around on a screen, people are more likely to engage with it and understand it.
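The interaction LAIKA demonstrates, letters thickening or slanting in proportion to a passer-by’s motion and snapping back when they leave, can be expressed as a small mapping from a motion signal to font parameters. Here is a sketch of that behaviour under assumed ranges; the real piece drives a custom parametric typeface from camera or motion-sensor input, and these numbers are made up.

```python
def motion_to_font(motion, presence, rest_weight=400, max_weight=900, max_slant=15):
    """motion: 0..1 amount of movement detected near the sign.
    presence: whether anyone is currently in proximity.
    Returns (weight, slant_degrees); values snap back to rest when nobody is near."""
    if not presence:
        return rest_weight, 0.0            # non-destructive: reverts to the original shape
    motion = max(0.0, min(1.0, motion))
    weight = rest_weight + motion * (max_weight - rest_weight)
    slant = motion * max_slant
    return weight, slant

print(motion_to_font(0.7, True))   # -> (750.0, 10.5)
print(motion_to_font(0.7, False))  # -> (400, 0.0)
```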
This is a project I really admire. It is a rendition of Vincent van Gogh’s paintings with a take on interactive art and media. The artist brings the Van Gogh paintings to life by making the strokes move along the paths they naturally seem to be taking, and it looks beautiful. In addition, the author has made the paintings interactive by allowing the user to play with the strokes: the user can use their fingers to disturb the strokes in the painting. This level of interactivity in such a popular painting really brings it to life and lets the user interact and play with strokes the author brushed many decades ago. The only critique I have of the project is that it shows a bright light everywhere the user touches the screen, which somewhat bars the user from seeing exactly how the strokes are disturbed by their finger motion. Also, the artist tried to make the sound respond to the flow of the strokes, but since there is always sound in the background, it is hard to tell the background sound apart from the sound the stroke changes are making.
This is a project that disappointed me. The creators have made an interactive display that superimposes patterns and colors onto a live video feed to make it look interesting. Users can touch a box that records their pulse, and after some time it shows the user’s pulse rate in an obscure animation. I feel this project had a lot of potential to make an ordinary task interactive and fun. We perform such actions often, like checking our weight, measuring our height, or checking our pulse, and if these actions were accompanied by user-friendly, interactive visuals, the waiting process could be less frustrating and more engaging. However, I think the creators missed opportunities to achieve that. The pulse information is really tiny and impossible to read, and it doesn’t give the user any information apart from the pulse. In addition, I believe the interactivity could have been richer had the authors allowed users to touch the screen and interact with the circles while waiting for the pulse to be measured. In terms of prior work, I believe the artist was inspired after looking at released code for detecting and measuring pulse with an Arduino, and probably wanted to make this mundane task more exciting. That did happen, but unfortunately only with some missed opportunities.
This was a bit of a tie while looking around… Both of these projects caught my fancy. The Love Machine and Nosaj Thing’s “Eclipse/Blue” bring to live performance/theatre what I want to investigate further: a stage that is truly interactive and augmented. Not just an illusion to sell a world to the audience, but one that the actor can truly interact with and have an impact on during a show. Both of these projects sell the idea to the fullest extent, leaving the actors in control instead of the media being in control of them. For this reason, I admire both of these wonderful efforts.
SURPRISED ME:
This video caught me off guard when I originally found it. I believe it is quite the scandalous and brilliant idea: a projector used as a flashlight to reveal alternate realities for the viewer! Such a great idea! Although this concept could be pushed using traditional projection mapping, what is really special here is that the use of multiple disciplines (such as animation, 3D modeling, film, and code) not only brings to life characters and new dimensions in closed spaces, but also creates life and character for the projector itself. This new “character” is brilliantly used towards the end of the video, when your eye is tricked to the point where one really starts to lose track of what the “true” reality is.
A TAD DISAPPOINTED:
This media wall caught my attention at first due to its slick design, but the more I watched, the less impressed I became. Although I think live-updating Twitter and other feeds is a cool idea, there still seems to be a distance in the relationship between the human and the piece of work. Something to make it personal to the viewer would have been nice. My mind raced right back to the theatre during this video, thinking of other ways to incorporate the audience into the performance using different feeds, one in which their communal presence could have an impact on the show.
Kinematics is a 4D printing system created by Nervous System. It enables the creation of foldable, complicated, evolved forms that take our body structure into account. It is an amazing combination of computational design, creative programming, and digital fabrication. The most important aspect of Kinematics is that it empowers anyone with access to a 3D printer to create their own unique designs through an easy-to-use interface.
A Project That Surprised Me
I found Petting Zoo, this installation by Minimaforms, really intriguing and stunning. These “creatures” interact with the audience in really speculative and humorous ways. It is built with Kinect and Processing, and I am totally astonished by the powerful outcome this combination can produce. The setup is also simple and intuitive: when people approach the creatures, they start “dancing”, and the audience in turn moves more to provoke more interactions.
A Project That Could Have Been Great
I love the idea of integrating new and emerging media with traditional art forms, and there is a lot of great craft here, such as the precise mapping and the integration of different interactive elements. However, I think the concept could be developed further: is there really any value to be gained from playing this game?
As a part of NHDK, photographer Víctor Enrich photographs and digitally reconfigures the same building in 88 configurations and orientations. This piece really surprised me because conceptually it sounds pretty simple, but it has more weight to it than I expected when browsing. By keeping many things constant, most notably the angle and the subject, the variations in the artist’s interpretations become the subject matter.
Disappoint
Building Sound features an incredibly unique interaction with sound that feels both familiar and foreign. Its simplicity is elegant, and the interaction works quite well. I appreciate the bold take on stripping text from web communication and exploring alternatives that are still aesthetically appealing and relatively practical. However, the current approach feels lacking somehow: it’s difficult to find precisely what you want, and it would be even harder if the website featured more content than it currently does. It’s merely functional enough for its current use case.
Admire
Museum, a tech demo for CMU student group Pillow Castle’s first-person puzzler, is quite impressive. The game mechanics are fascinating, though currently unbounded. Given proper structure and form, I see this game (a sort of combination of Portal and Perspective) being an entertaining, challenging puzzler with a great amount of variety possible within the mechanics presented, provided some proper constraints are added.
I admire this project because it is beautiful but also useful. I also like how Bloom used the star-planet-moon relationship to represent the artist-CD-track relationship, which isn’t quite the same, but makes sense when organized visually. I wasn’t able to try out the app myself, but the demo video indicates that there are natural motions and animations in place, which is also a plus.
A project that fell short: Letter-pair analysis
I had high hopes for this project, since it analyses the frequencies of grouped pairs of letters in various languages, and I’m a huge language nerd. However, I feel that the project falls short visually, and left me disappointed.
When you go to the website, the first thing you see is a rapidly-updating set of data, forming a set of jostling circles that seem to be placed randomly. Once you locate the information about the project, you can begin to appreciate what’s happening a little more, but it’s still visually jarring to have such jerky, quick changes on the screen!
Additionally, the visualizations fall short in that they are hard to compare. There are examples of analyses in different languages, but the visualization is so jumbled that drawing comparisons between them is difficult.
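The underlying analysis, counting how often each ordered pair of letters occurs in a text, is easy to reproduce, and the raw counts could feed a clearer comparative visualization. Here is a short sketch of letter-pair (bigram) counting; the sample text is obviously just a placeholder.

```python
from collections import Counter

def letter_pairs(text):
    """Count adjacent letter pairs, ignoring case and non-letter characters."""
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter(zip(letters, letters[1:]))

sample = "the theory of the thing"
for pair, count in letter_pairs(sample).most_common(3):
    print("".join(pair), count)
# -> th 4, he 3, et 2
```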
In this project, I was surprised by the level of emotion and amount of time the people in the video seemed to invest in the simple robot tendrils hanging from the ceiling. The robot’s interactions are very limited, but they were clearly enough to elicit a response. Striving for this kind of simplicity could inform how I approach projects in this class.