Category Archives: Uncategorized

Yingri Guan

21 Jan 2014

My initial proposal is to track my daily water intake and also the actual amount of water I need.

Tools I will be using:

Fitbit: Fitbit will track my daily activities, which, through calculations, will be translated into an estimate of the water needed to sustain those activities.

Hygrometer: Hygrometer is an iPhone app that tracks humidity levels; this information will serve as a measure of the environment’s influence on my water needs.

Daytum: Daytum is an application for recording detailed information about my daily activities, especially my water intake.

AskMeEvery: AskMeEvery will be used to log and keep track of all the information above.
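The combination of activity data and humidity could feed a simple daily water target. Here is a minimal Python sketch of that calculation; the baseline, coefficients, and thresholds are purely illustrative assumptions, not medical guidance or part of the original proposal.

```python
# Hypothetical sketch: combine activity data (e.g. from Fitbit) and humidity
# (e.g. from a hygrometer app) into a daily water target.
# All constants below are illustrative assumptions.

def water_needed_ml(calories_burned, humidity_pct, baseline_ml=2000):
    """Estimate daily water need in milliliters."""
    # Rough rule of thumb: extra water proportional to calories burned.
    activity_ml = calories_burned * 0.5
    # Dry air increases water need; scale by how far humidity falls below 50%.
    dryness_ml = max(0, 50 - humidity_pct) * 10
    return baseline_ml + activity_ml + dryness_ml

print(water_needed_ml(400, 30))  # 2000 + 200 + 200 = 2400.0
```

A tracker like Daytum could then compare the day’s logged intake against this target.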


Maya Kreidieh

16 Jan 2014

I’m Maya Kreidieh, and I’m currently a student in the Master of Human-Computer Interaction program at the HCII. My background is in computer and communication engineering. I’m interested in building systems that extract and visualize data. I’m also super interested in generative art, although I’ve only dabbled in it very briefly. Before coming to CMU, I co-founded Lamba Labs, a hackerspace in Beirut, with a few friends. I’m very interested in hacker culture and the politics of hackerspaces.
Github : github.com/acrylc
Twitter : @acrylc
Portfolio : acry.lc

ThingsJS

ThingsJS is a library that allows you to represent physical devices as tags in your HTML and manipulate them using JavaScript. Each web component has an internal WebSocket connecting it to a physical component (for now, a sensor or actuator on an Arduino). The web component essentially emulates the physical device in HTML, so developers can build IoT applications, prototype hardware, or even build an open portal to their stream of physical data in HTML. I’m still in the process of documenting this, but for now you can check out the code at github.com/acrylc/thingsJS.

Collin Burger

16 Jan 2014

A Piece That I Admire:
Drei Klavierstücke by Cory Arcangel

I enjoy the hacker mentality of combining things that at first glance do not belong together and just making it work. I also enjoy works that mimic, mock, and experiment with 90s pop culture and early Internet culture (see Arcangel’s other work, CLICKISTAN, or Jacob Bakkila’s @Horse_ebooks). Even though this project is not computationally impressive on Arcangel’s part, I think that the premise and execution certainly make up for it.

One Work That Surprised Me:
PHYSIS by Fabrica

PHYSIS from Fabrica on Vimeo.
I found PHYSIS to be upsettingly simple and admirable. I wish that I had made this.  I like how the simple combination of actuators and plants can conjure the small animal running through the brush that exists in the common animal and human consciousness.

And One That Disappointed Me:
Botanicus Interactus by Disney Research
This is the beginning of a very interesting project that, unfortunately, I think falls short. The video alludes to evolving, interactive audiovisual displays attached to plants, but they are not discussed in detail. It really is a novel mode of interacting with an object that is not known for its interactivity; however, I would like to see it used to control something other than a few sounds or visual displays.

Andrew Sweet

16 Jan 2014

Surprise!

As part of NHDK, photographer Víctor Enrich photographs and digitally reconfigures the same building into 88 configurations and orientations. This piece really surprised me because it sounds conceptually simple but carries more weight than I expected while browsing. With so much held constant, most notably the angle and the subject, the variation in the artist’s interpretations becomes the subject matter.

Disappoint

Building Sound features a truly unique interaction with sound that feels both familiar and foreign. Its simplicity is elegant, and the interaction works quite well. I appreciate the bold take on stripping text from web communication and exploring alternatives that are still aesthetically appealing and relatively practical. However, the current approach feels lacking somehow. It is difficult to find precisely what you want, and it would be more so if the website featured more content than it currently does. It’s merely functional enough for its current use case.

Admire

Museum, a tech demo for CMU student group Pillow Castle’s first-person puzzler, is quite impressive. The game mechanics are fascinating, though currently unbounded. Given proper structure and form, I can see this game (a sort of combination of Portal and Perspective) becoming an entertaining, challenging puzzler, with a great amount of variety possible in the mechanics presented, provided some proper constraints are added.

My name is Collin Burger and I am an electrical and computer engineer trying to escape my fate of dwelling in a cubicle. I am currently pursuing a master’s degree in Electrical and Computer Engineering at CMU in order to delay my aforementioned fate and perhaps make some entertaining things in the process. Thematically, I am interested in cultural analytics, people’s relationships with technology, and humorous artworks.
Find me on the Twitter @cyburgee
Find me on the Github at https://github.com/cyburgee

Feelers

Feelers is an interactive installation with game elements in which two participants control the environment with skin-to-skin contact. Participants are attached to the installation and instructed to match colored lights and the frequency of two sine waves by varying the area and pressure of skin contact. Spectators are treated to a voyeuristic display of the players’ actions that possesses the quality of a foreign ritual. Feelers invites participants and spectators to explore each other’s bodies and investigate notions of personal space.
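The core mapping described above could be sketched very simply. This is my own illustration, not the actual implementation: assume a skin-contact reading arrives as a 10-bit value (as from an Arduino analog pin) and drives the frequency of a sine wave, with the players "matching" when their two frequencies are close enough.

```python
import math

# Illustrative sketch of the Feelers mapping (an assumption, not the real
# code): a 0-1023 contact reading is mapped to a sine-wave frequency, and
# the players match when their frequencies fall within a tolerance.

def contact_to_freq(reading, lo=220.0, hi=880.0):
    """Map a 10-bit sensor reading to a frequency between lo and hi Hz."""
    return lo + (hi - lo) * (reading / 1023.0)

def matched(freq_a, freq_b, tolerance_hz=5.0):
    """True when the two players' sine waves are close enough in pitch."""
    return abs(freq_a - freq_b) <= tolerance_hz

def sine_sample(freq, t):
    """One sample of the sine wave at time t seconds."""
    return math.sin(2 * math.pi * freq * t)
```

More contact area or pressure raises the reading, which raises the pitch; the game element is simply closing the gap between the two pitches.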

Feelers is funded in part by the Frank-Ratchye Fund for Art at the Frontier.

Feelers on Github

Project 0

Hello! I’m Emily.

I started programming by going to C-MITES weekend workshops at Carnegie Mellon to learn HTML. Fast forward a decade, and I’m a master’s student in human-computer interaction at the same institution.

I’m a recent graduate of the University of Rochester, where I studied computer science, linguistics, and music. In this course, I hope to merge the creativity and lightheartedness of my humanities background with the tech savviness and forward thinking of my science background to create some beautiful, useful things. Maybe some not-so-useful things, too.

A recent project of mine is GestureCam, an Android app that I developed for the Software Structures for User Interfaces course last semester. I take a lot of pictures with my phone, and I get frustrated when I have to navigate through menus to find the setting or filter that I want, especially when I’m trying to capture an image quickly. To solve the problem, I added gesture recognition on top of a custom Android camera application and set it up to recognize a few gestures. Now, instead of fiddling around to find the flash button, I can simply draw a lightning-bolt shape, and instead of searching for a black-and-white option, I can draw a capital B on the screen.

The following is a screenshot of my app running. The majority of the screen is taken up by the camera view, with no buttons or menus obscuring the image. To change settings, the user draws shapes on the screen. If the user doesn’t know or forgets the available settings, pressing the “help” button brings up a popup with instructions. At the time, my camera was facing my laptop, and I took a picture of the presentation I was about to give.
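The gesture-to-setting idea boils down to a small dispatch table. The real app is written in Java on Android; the Python sketch below is my own illustration based on the description (lightning bolt toggles flash, capital B toggles black-and-white), and all names in it are hypothetical.

```python
# Hedged sketch of GestureCam's gesture-to-setting dispatch. The actual app
# is an Android/Java application; gesture and action names here are
# illustrative assumptions.

GESTURE_ACTIONS = {
    "lightning_bolt": "toggle_flash",
    "capital_b": "toggle_black_and_white",
}

def dispatch(gesture_name, settings):
    """Apply the recognized gesture's action to the camera settings dict."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action == "toggle_flash":
        settings["flash"] = not settings["flash"]
    elif action == "toggle_black_and_white":
        settings["black_and_white"] = not settings["black_and_white"]
    # Unrecognized gestures leave the settings untouched.
    return settings

settings = {"flash": False, "black_and_white": False}
dispatch("lightning_bolt", settings)
print(settings)  # flash is now True
```

The appeal of this structure is that adding a new gesture is one new table entry plus one action, with no extra buttons on screen.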

The UI is lacking in style; creating a custom camera app for Android took me way longer than I expected. It’s hard. Someday, I’ll write a screenplay about my struggles.

On a positive note, it worked! Below are some examples of pictures that GestureCam took:

The entire SSUI class, looking surprisingly photogenic.


Our classmate, preparing for his presentation.


Does this guy look familiar?


Wanfang Diao

15 Jan 2014

1. The work I admire

Chase No Face / BELL from zach lieberman on Vimeo.

I really like this project because of the natural interaction between sound, face, and light!
People have very rich facial expressions when singing. In this work, song, face, and light combine perfectly and beautifully.

2. The one that surprised me

Submergence01 from squidsoup on Vimeo.

The richness of the light effect surprised me, especially in the last few seconds of the video.
The large scale of the matrix of light balls is also impressive. It would be even better if there were more interaction when people are inside the matrix.

3. The one that disappointed me

CELL | 1 | showreel from Keiichi Matsuda on Vimeo.

The author said the project’s aim is to lead people to think about tags from social media. However, the tags on screen are all random, which really disappointed me. With these meaningless tags, the audience focuses on figuring out the mapping between body movements and the tags’ transformations but pays no attention to the tags themselves. If the point is only movement control, then why tags? What is the relationship between Internet tags and people’s true character? How does it feel when people are labeled unfairly by these tags? I wanted to learn more about these questions from this project.
I think the author should give the audience more awareness of what they are interacting with.

Austin McCasland

15 Jan 2014

Ice Angel – Dominic Harris
I found this piece surprising because I expected a cool visualization in which a single set of wings simply appears behind each person. As I watched the video, the multiple morphing wings were very pleasant, and the piece was pleasantly less shallow than expected.

CSIS Data Chandelier by SOSO Limited
This is a classy chandelier that flows well with the architecture it is installed in. You might never guess what it actually is, and when you view it from different angles while ascending the staircase, it completely skews the visualization. I would love to see it in person.

depth of field
This piece is disappointing to me because this guy has created a really cool-looking 3D landscape to showcase depth-of-field changes, but that’s all it shows. I want to see the blocks move and the terrain undulate. I think individually moving blocks would have showcased the depth-of-field effect even more effectively. He’s created an ocean and frozen the waves.

Andrew Russell

15 Jan 2014

Welcome to my post.

I am a master’s student in Music and Technology, which is a half-music, half-CS, and half-ECE degree. As such, I am very interested in computers, both hardware and software, as well as music. I started programming over ten years ago and cannot even remember when I played my first song. I also compose my own music and like to tinker with guitar pedals.

My interests don’t stop with music and computers though.  I love to play sports (doesn’t matter which sport), craft beers (the hoppier the better!), and gaming (of both the video and the board variety).

Second Screen

All engineering students at the University of Waterloo are required to complete an upper-year design project during their last year and a half at school, in a group of three to five members. The project is supposed to be an actual product that the students could theoretically start a company around after they graduate (and quite a number do). My team worked on Second Screen.

Second Screen is a TV buddy application designed to enhance your experience watching TV shows. Upon opening, it listens through your phone’s microphone for up to 30 seconds and, using acoustic fingerprinting, figures out which show and episode you are currently watching, as well as your current position in it. It then displays information in real time as the show goes on, such as relevant plot points, show trivia, first appearances by actors, and friends’ comments. There is also a list of dialogue shown as it is spoken.
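The lookup at the heart of acoustic fingerprinting can be sketched as a hash-table match. This is a deliberate toy simplification of the idea, not Second Screen’s implementation; real systems derive robust hashes from spectrogram peaks rather than quantized raw samples.

```python
# Toy sketch of acoustic-fingerprint lookup (an illustrative simplification,
# not Second Screen's actual pipeline).

def fingerprint(samples):
    """Reduce a window of audio samples to a coarse, hashable signature."""
    # Quantize heavily so small amounts of mic noise don't change the result.
    return tuple(round(s, 1) for s in samples)

# Index built ahead of time: fingerprint -> (show, episode, seconds in).
index = {
    fingerprint([0.11, 0.52, 0.33]): ("Some Show", "S01E02", 754),
}

def identify(samples):
    """Return (show, episode, offset) for a mic capture, or None."""
    return index.get(fingerprint(samples))

print(identify([0.12, 0.49, 0.31]))  # noisy capture still matches
```

Once the offset is known, the app only needs a clock to keep the trivia and dialogue synchronized with the broadcast.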

Second Screen Workflow

Github:
https://github.com/TeamFauna/dumbo
Teammates:
Andrew Munn, Fravic Fernando, Noah Sugarman, Will Hughes

Links

Website:
ajrussell.ca (redesign coming soon)
Youtube:
https://www.youtube.com/user/DeadHeadRussell
Github:
https://github.com/deadheadrussell
My latest video:

Kevyn McPhail

15 Jan 2014

Hello, My Name is Kevyn McPhail.

I am a 4th-year architecture student at Carnegie Mellon. I am a huge fabrication buff: I love the process of machining and crafting objects. In addition to using machines, I love making and understanding machines and their processes. I am excited for this class because I see it as a gateway to creating and/or manipulating machines to do my bidding. Just kidding, but the software aspect of this class will allow me to understand fabrication and machines on a deeper level.

Twitter: @studiobfirm

GitHub: https://github.com/kevyn5902

Speaking of Machines, Here is one I made with two of my friends last year.


It’s called the Solarc. It’s a sun-simulation table: basically, an automated turntable and movable light source that casts shadows on a student’s architectural model, influencing their design by giving them a real-world understanding of the sun’s effects on their building. In addition to helping design and fabricate the table, I coded up the software for it. Since I was in a class that required me to use Python, all the “heavy” computing is done in Python, which sends signals via serial to the Arduino, telling the motors how to move.
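The Python-to-Arduino flow described above can be sketched in a few lines. The sun model, command format, and port name below are my assumptions for illustration, not the Solarc’s actual protocol.

```python
# Hedged sketch of the Solarc control flow: Python computes sun positions and
# formats motor commands for the Arduino over serial. The sun model, command
# format, and port name are illustrative assumptions.

def sun_position(hour):
    """Toy sun model: azimuth sweeps 180 degrees from 6:00 to 18:00."""
    if not 6 <= hour <= 18:
        raise ValueError("sun is down")
    azimuth = (hour - 6) / 12.0 * 180.0
    altitude = 60.0 * (1 - abs(hour - 12) / 6.0)  # peaks at noon
    return azimuth, altitude

def motor_command(azimuth, altitude):
    """Format one serial command for the turntable and light arm."""
    return "AZ%03d EL%02d\n" % (round(azimuth), round(altitude))

# With a board attached, this could be sent with pyserial, e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyACM0", 9600)  # port name is an assumption
#   port.write(motor_command(*sun_position(9)).encode())
print(motor_command(*sun_position(9)))  # AZ045 EL30
```

On the Arduino side, a small sketch would parse each line and step the turntable and light-arm motors to the requested angles.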

Here’s a video!