aahdee – yeen – BarcodeProject

https://barcode-storage.glitch.me/

What better way to partake in C A P I T A L I S M than buying products? That’s right: becoming the product! With this new *~swanky~* application, just input basic information about yourself and obtain your product code! You can put this label anywhere on your body so that anyone from Big Brother can scan it at any time to learn the quick facts about you! Don’t worry about the lower classes rising up and closing your tabs; your data will always be available just where you left it! Just remember to put the label in an easy-to-reach and convenient place, because under C A P I T A L I S M time is money, and as a product your worth is based on how quickly you can do tasks. So come on down, get rid of that name that’s either so ubiquitous it’s an inconvenience or so hard to pronounce that it’s an inconvenience, and grab your brand new unique product code!
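Under the hood, the Glitch app presumably just concatenates the submitted “quick facts” and renders them as a barcode. A guess at the core, using the JsBarcode library (an assumption; the actual project may generate its codes differently):

```js
// Minimal sketch of the core idea, assuming the JsBarcode library
// (https://github.com/lindell/JsBarcode); the real app may differ.
// Concatenate the user's "quick facts" and render them as a Code 128
// barcode into an <svg id="barcode"> element.
import JsBarcode from "jsbarcode";

function makeProductCode(person) {
  // e.g. { name: "Ada", age: 28, occupation: "product" }
  const payload = `${person.name}|${person.age}|${person.occupation}`;
  JsBarcode("#barcode", payload, { format: "CODE128", displayValue: true });
}
```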

lumar + greecus barcode project

Symphony of Geagle lanes

(abridged demo with single Entropy lane)

Checking out to the theme of Game Of Thrones from Marisa Lu on Vimeo.


Greecus and Lumar imagined a whole row of checkout lanes in the grocery store all playing to the same timeline, each item checked out playing the next note in the song.

Unfortunately, Giant Eagle said the lanes were ‘private property’ and that they would have to ‘contact corporate for permission’.

(The video here has the beeps overlaid on top because it’s actually rather difficult for a single person to scan things quickly enough for the music to work. The original concept, with all the lanes working in unison, doesn’t translate so well to a single Entropy employee.)

inspired by a New York turnstiles project
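The scanning mechanic itself is simple to prototype: most barcode scanners act as USB keyboards, typing the code’s characters followed by Enter, so each completed scan can trigger the next note of a shared melody. A minimal p5.js sketch of one lane (the note values and scanner behavior are assumptions, not the original code):

```js
// One checkout lane: each completed scan (Enter keypress from a
// keyboard-wedge barcode scanner) plays the next note of the song.
let osc;
// Opening of the Game of Thrones theme as MIDI note numbers (approximate).
const melody = [67, 60, 63, 65, 67, 60, 63, 65, 67, 60];
let noteIndex = 0;

function setup() {
  createCanvas(400, 200);
  osc = new p5.Oscillator('square');
  osc.amp(0);
  osc.start();
}

function keyPressed() {
  // A scan ends with Enter; every scanned item advances the song.
  if (keyCode === ENTER) {
    osc.freq(midiToFreq(melody[noteIndex % melody.length]));
    osc.amp(0.5, 0.01);   // quick attack
    osc.amp(0, 0.2, 0.1); // short beep-like decay
    noteIndex++;
  }
}

function draw() {
  background(0);
  fill(255);
  text(`items scanned: ${noteIndex}`, 20, 100);
}
```

Multiple lanes would share `noteIndex` (e.g. over a WebSocket) so every register advances the same timeline.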

greecus-FinalProposal

I am thinking of using p5.js, WebGL, and hopefully some other libraries to create shaders that give an abstract representation of the places, times, and color palettes I associate with certain periods of my life.

An important aspect of this project is going to be creating a representation of these places that feels true to me but is also relatable to other people’s experiences. I think I will need to think a lot about sound, velocity, and interaction in order to make this a satisfying experience for me as a creator as well as for the audience.
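As a starting point for the shader idea, here is a minimal p5.js/WebGL setup that drives a fragment shader with a time uniform and a single “place” colour; the pattern and palette are placeholders, not the proposed visuals:

```js
// Minimal p5.js/WebGL shader scaffold. uPalette stands in for a
// remembered place's colour palette; the drifting pattern is a stub.
const vert = `
precision highp float;
attribute vec3 aPosition;
void main() {
  vec4 pos = vec4(aPosition, 1.0);
  pos.xy = pos.xy * 2.0 - 1.0; // p5 passes rect coords in 0..1
  gl_Position = pos;
}`;

const frag = `
precision highp float;
uniform vec2 uResolution;
uniform float uTime;
uniform vec3 uPalette;
void main() {
  vec2 st = gl_FragCoord.xy / uResolution;
  // slow interference pattern, tinted by the palette colour
  float wave = 0.5 + 0.5 * sin(st.x * 6.0 + uTime) * cos(st.y * 4.0 - uTime * 0.7);
  gl_FragColor = vec4(uPalette * wave, 1.0);
}`;

let placeShader;

function setup() {
  createCanvas(600, 400, WEBGL);
  placeShader = createShader(vert, frag);
  noStroke();
}

function draw() {
  shader(placeShader);
  placeShader.setUniform('uResolution', [width, height]);
  placeShader.setUniform('uTime', millis() / 1000);
  placeShader.setUniform('uPalette', [0.9, 0.6, 0.3]); // warm, e.g. a summer memory
  rect(0, 0, width, height);
}
```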

greecus-project4feedback

I thought that the feedback my peers gave me was incredibly helpful. For the most part, it fell into one of two categories.

Some students felt that there was too much focus on the backend implementation of the app and that I should have focused more on the frontend and the artistic statement of the device. That probably happened because I was unsure how I wanted the app to look and fell back into a space where I was familiar and knew I could get things right, and that is something I am actively trying to improve on.

Other students felt that the idea I was trying to implement was somewhat confusing and seemed to go in too many different directions. For example, it was unclear whether I was trying to make a game or a user-hostile chatbot; I believe I was trying to create something at the intersection of those things, but what exactly I was trying to create was never fully clear. I think this confusion came mostly from the fact that I had stumbled upon a relatively rich problem space and was unsure what direction to take my project.

I learned a lot from this project about how easily ideas can evolve while they are being brought to life, and how important it is to have a clear plan of attack before leaving the ideation step.

sjang-FinalProposal

I plan to build a system that allows anyone to create their own constellations in virtual reality (VR), and easily save and share their creations with other people. Users would be able to select stars from an interactive 3D star map of our Galaxy that consists of the 2,000 brightest stars in the Hipparcos Catalog, and connect any of the stars together to form a unique custom constellation shape from Earth’s perspective. When the constellation is saved and named, users would be able to select and access detailed information on each star. They would also be able to break away from Earth’s fixed viewpoint to explore and experience their constellation forms in 3D space. The constellation would be added to a database of newly created constellations, which could be toggled on and off from a UI panel hovering in space.

The saved constellation data would also be used to generate a visualization of the constellation on the web, providing an interactive 3D view of the constellation’s actual shape, along with a list of its constituent stars and detailed information on each star. The visualization might also show how the constellation shape would have appeared at a certain point in time, and how it has changed over the years (e.g. a timeline spanning Earth’s entire history).

The core concept of the project is to give people the freedom to draw any shape or pattern they desire with existing stars and create a constellation of personal significance. The generated visualization would be a digital artifact people could actively engage with to uncover the complex, hidden dimensions of their own creations and make new discoveries.

A sketch of the concept below:

A rough sketch of the builder and visualization functionalities
Interactive 3D star map I built in VR
Exploring constellation forms in 3D space, and accessing their info
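One way the saved-constellation data might be structured, with stars referenced by Hipparcos catalog id so that the VR builder and the web visualization can look up positions independently. The field names and rendering helper below are my own guesses, not the project’s actual schema:

```js
// A hypothetical save format for a user-made constellation.
const constellation = {
  name: "Winter Kite",
  author: "sjang",
  created: "2019-11-20T03:14:00Z",
  // HIP ids (these happen to be Aldebaran, Bellatrix, Elnath, Betelgeuse)
  stars: [21421, 25336, 25428, 27989],
  // edges connect pairs of stars by HIP id
  edges: [[21421, 25336], [25336, 25428], [25428, 27989]],
};

// Rendering the true 3D shape in p5.js WEBGL: one line segment per edge,
// given a lookup table of {x, y, z} positions (e.g. parsecs) keyed by HIP id.
function drawConstellation(c, positions) {
  stroke(255);
  for (const [a, b] of c.edges) {
    const p = positions[a], q = positions[b];
    line(p.x, p.y, p.z, q.x, q.y, q.z);
  }
}
```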

aahdee – FinalProposal

For my final project, I would like to create one or two interactive wall projections. I’ve always imagined my Box2D project displayed on a large wall where people could approach it and interact with the shapes by touching them, so I think expanding on that wouldn’t be a bad idea. I also just have a large affinity for simple geometries.

The first step would be to get the first Box2D project working as an interactive projection. Next, I would use that as a stepping stone to create different ones. One thought I had was a pulsing blob that people could pluck smaller blobs from, to play with and merge with the other blobs on the projection. Another was a scene similar to the warp-speed animations in Star Wars, where particles would converge to a focal point (I can’t think of the correct term at the moment) and more focal points would be generated as more people approached.

To achieve this I would use a standard projector for the animation and a Kinect for body tracking, since it’s implemented rather well in that hardware. I think my main technical issue will be calibrating the Kinect to the projection on the wall.
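For the calibration, a rough first pass might be a two-point fit: project markers at two known screen corners, record where the Kinect sees a hand touching each one, and fit a per-axis linear map from Kinect space to projection space. This is an assumed approach, not something from the proposal, and a full homography would handle keystone distortion better:

```js
// Quick-and-dirty Kinect-to-projection calibration (assumed approach).
// Values below would be filled in during a calibration step.
const cal = {
  kinectTL: { x: 120, y: 80 },  // Kinect coords when touching top-left marker
  kinectBR: { x: 510, y: 400 }, // Kinect coords when touching bottom-right marker
  screenW: 1280,
  screenH: 720,
};

// Map a tracked Kinect point to projection (screen) coordinates.
function kinectToScreen(kx, ky) {
  const nx = (kx - cal.kinectTL.x) / (cal.kinectBR.x - cal.kinectTL.x);
  const ny = (ky - cal.kinectTL.y) / (cal.kinectBR.y - cal.kinectTL.y);
  return { x: nx * cal.screenW, y: ny * cal.screenH };
}
```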

ngdon-sjang-BarcodeJigsaw

The initial concept was to create a jigsaw puzzle game you play by scanning barcodes in a certain sequence. The idea was to give a sneak preview of part of an image (“a puzzle piece”) by scanning a single barcode, and have people piece together the whole image.

To generate the barcodes that correspond to the pieces of an image, we converted the image to ASCII (pixel shade -> char) and split the ASCII into strings that were encoded into Code 128 barcodes for printing. Ideally, the order of the image pieces would be scrambled. The video shows how scanning the barcodes in a specific order reconstructs the whole image.
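A sketch of that encoding step in p5.js: grid-sample the image, map brightness to a character ramp, and slice the stream into chunks, each of which would be handed to a Code 128 generator (e.g. the JsBarcode library). The ramp, grid size, and index prefix here are my own choices, not necessarily the originals:

```js
// Image -> ASCII -> fixed-length chunks, one chunk per barcode.
const RAMP = "@#*+=-. "; // dark -> light
const COLS = 40, ROWS = 30, CHUNK = 20;

function imageToChunks(img) {
  let ascii = "";
  for (let r = 0; r < ROWS; r++) {
    for (let c = 0; c < COLS; c++) {
      const [rr, gg, bb] = img.get(
        Math.floor((c / COLS) * img.width),
        Math.floor((r / ROWS) * img.height)
      );
      const luma = 0.299 * rr + 0.587 * gg + 0.114 * bb; // 0..255
      ascii += RAMP[Math.floor((luma / 256) * RAMP.length)];
    }
  }
  // Each chunk becomes the payload of one Code 128 barcode; the numeric
  // prefix records which piece of the puzzle this chunk is.
  const chunks = [];
  for (let i = 0; i < ascii.length; i += CHUNK) {
    chunks.push(i / CHUNK + ":" + ascii.slice(i, i + CHUNK));
  }
  return chunks;
}
```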

We thought of various ways of revealing the image pieces encoded in the barcodes. Right now the program continuously appends the decoded image pieces from left to right, top to bottom. Another way would be to show the most recently scanned image piece on a ‘clue’ canvas, and have parts of the image only revealed on a ‘puzzle’ canvas when two adjacent puzzle pieces are scanned one after the other.
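Since the scanner behaves like a keyboard, the reveal side could look something like this, assuming each barcode payload carries an “index:chars” prefix as in the encoding sketch above:

```js
// Reassemble scanned pieces. The scanner types the payload and presses
// Enter; each piece is drawn at the grid position its index encodes.
const COLS = 40, CHUNK = 20; // must match the encoding sketch
let buffer = "";
const revealed = {}; // pieceIndex -> chars

function keyTyped() {
  if (key.length === 1) buffer += key; // accumulate printable characters
}

function keyPressed() {
  if (keyCode === ENTER && buffer.length > 0) {
    const sep = buffer.indexOf(":");
    revealed[parseInt(buffer.slice(0, sep), 10)] = buffer.slice(sep + 1);
    buffer = "";
  }
}

function draw() {
  background(255);
  fill(0);
  textFont('monospace');
  const perRow = COLS / CHUNK; // chunks per image row
  for (const [idx, chars] of Object.entries(revealed)) {
    const row = Math.floor(Number(idx) / perRow);
    const col = Number(idx) % perRow;
    text(chars, 20 + col * CHUNK * 8, 20 + row * 14);
  }
}
```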

This interaction concept could be expanded to create a drawing tool. The barcode scanner would act as a paintbrush, and the sequences of barcodes would act as a palette. Scanning certain barcodes in a specific order would create unique image patterns, gradients, or edges, which could be combined to paint a picture.

Project Glitch Link 


a – FinalProposal

I want to make a system that attempts to maximise some bodily response from a viewer.

This requires a parametric image and a way to measure bodily response in real time. Given the hardware available, the simplest options seem to be either heartbeat data or the Muse EEG headband.

The project works as follows: modify the parametric image; evaluate the bodily response; estimate the gradient of that response with respect to the image’s parameters; take a step of gradient ascent in the direction in parameter space that maximises the estimated response, using reinforcement learning or genetic algorithms since the true gradient is unavailable; repeat. An alternative route would be to train a neural network to predict emotional response and optimize against its gradient as a surrogate world model, which would enable using stochastic gradient descent to optimize the image much faster.
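Concretely, the loop might look like the following, with a finite-difference gradient estimate standing in for the RL/genetic-algorithm options, and stubs in place of the EEG reading and the parametric renderer (all names below are placeholders):

```js
// One iteration of the gradient-ascent loop described above.
let params = new Array(8).fill(0.5); // e.g. a few colour-field parameters
const EPS = 0.05;   // probe size for finite differences
const LR = 0.1;     // gradient-ascent step size

// Placeholder stubs: renderImage would draw the parametric image,
// measureResponse would return an averaged EEG reading for it.
function renderImage(p) { return p; }
async function measureResponse(img) { return random(); }

async function step() {
  const base = await measureResponse(renderImage(params));
  const grad = [];
  for (let i = 0; i < params.length; i++) {
    const probe = params.slice();
    probe[i] += EPS;
    const r = await measureResponse(renderImage(probe));
    grad.push((r - base) / EPS); // estimated d(response)/d(param_i)
  }
  // Move parameters toward higher measured response, clamped to [0, 1].
  params = params.map((p, i) => constrain(p + LR * grad[i], 0, 1));
}
```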

Given the slow response of heartbeat data, I should use the Muse headband. In addition, we know roughly how long the brain takes to process a given visual signal, although it remains to be seen whether the noisy data from the EEG headband can be optimized against.

This project parallels work done using biofeedback in therapy and meditation, although with the opposite goal. An example of a project attempting this is SOLAR (below), in which a VR environment is designed to guide the participant into meditation using biofeedback (presumably from a Muse-like sensor).

For the parametric image, there are a variety of options. Currently I am leaning towards using either a large colour field or a generative neural network to provide a differentiable parametric output. It would be awesome to use BigGAN to generate complex imagery, but the simplicity of the colour field is also appealing. A midway option would be to use something like a CPPN, a neural network architecture that produces interesting abstract patterns which can be optimized into recognisable shapes.

http://picbreeder.com
from https://blog.otoro.net/2016/03/25/generating-abstract-patterns-with-tensorflow/
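For reference, a toy CPPN is only a few lines of p5.js: each pixel’s value is the output of a small random network over its coordinates and its distance from the centre. The weights here are random; the biofeedback loop above is what would optimize them:

```js
// Toy CPPN: pixel value = small random network over (x, y, radius).
const H = 12; // hidden units
let w1, w2;   // random weights

function setup() {
  createCanvas(300, 300);
  pixelDensity(1);
  w1 = Array.from({ length: H }, () =>
    [randomGaussian(), randomGaussian(), randomGaussian()]);
  w2 = Array.from({ length: H }, () => randomGaussian());
  loadPixels();
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const nx = x / width - 0.5, ny = y / height - 0.5;
      const r = Math.sqrt(nx * nx + ny * ny);
      let out = 0;
      for (let h = 0; h < H; h++) {
        // sinusoidal hidden layer gives the characteristic wavy patterns
        out += w2[h] * Math.sin(6 * (w1[h][0] * nx + w1[h][1] * ny + w1[h][2] * r));
      }
      const v = 255 * (0.5 + 0.5 * Math.tanh(out)); // squash to 0..255
      const i = 4 * (y * width + x);
      pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
}
```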