For my AR sculpture, I am using an image anchor on my body (a tattoo) to trigger an emerging animation. While learning Unity I figured out how to create this tuft of grass that looks like it emerges from under the skin and grows out. For this week I plan to come up with a short story linking the image anchor and the Unity AR animation more compellingly than in the first round (changing the image anchor to a birthmark or another tattoo shape, and changing the object that emerges).
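In case it's useful to others, here's a minimal sketch of the trigger in Unity's AR Foundation, assuming an ARTrackedImageManager whose reference image library contains the tattoo, and a grass prefab (hypothetical name) whose Animator has an "Emerge" trigger:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: play the "emerge" animation when the tattoo image is first
// tracked. grassPrefab and the "Emerge" trigger are placeholder names.
public class TattooTrigger : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager;
    [SerializeField] GameObject grassPrefab;

    void OnEnable()  { imageManager.trackedImagesChanged += OnChanged; }
    void OnDisable() { imageManager.trackedImagesChanged -= OnChanged; }

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (ARTrackedImage image in args.added)
        {
            // Parent the grass to the anchor so it stays on the skin.
            GameObject grass = Instantiate(grassPrefab, image.transform);
            Animator anim = grass.GetComponent<Animator>();
            if (anim != null) anim.SetTrigger("Emerge");
        }
    }
}
```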
This is an AR app that lets you place a wooden toy train set on flat vertical surfaces.
I wanted to fulfill a loooong forgotten childhood aspiration. When I was younger I thought that having a train set on a vertical surface would be cool. Alas, 5-year-old me did not have enough duct tape and glue to make it happen. I figured I could get back to it sometime down the line when I had more experience in making toy train sets.
Unfortunately, in the present day, I don't have access to a wooden train set, so I had to make a virtual train set, to scale, complete with one of my favorite toy trains: The Polar Express.
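The vertical placement boils down to roughly the following AR Foundation sketch, not the project's exact code; trainSetPrefab is a placeholder name, and the scene's ARPlaneManager is assumed to be set to detect vertical planes:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: tap a detected vertical plane to place the train set.
public class TrainSetPlacer : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] GameObject trainSetPrefab;

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // The hit pose's up axis points out of the wall, so the
            // tracks lie flat against the vertical surface.
            Pose pose = hits[0].pose;
            Instantiate(trainSetPrefab, pose.position, pose.rotation);
        }
    }
}
```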
A tiny AR Golan featuring some of our favorite Golan quotes to stand guard outside the doors of the Studio For Creative Inquiry and show off his dance skills.
Our initial intention was to have the figure turn and blink at the viewer when 'looked at'; however, we ran into the conflict of animating the mesh rather than the rig, so we decided instead to transition to a spinning state when the camera centers on the character. We initially wanted to work off of the placeAnchorsOnPlanes example, but found that surface detection wasn't working as we needed it to. Instead, we placed the mesh at the center of the scene, so wherever a user was in space when the app launched, that was where the model appeared. Sophia modeled the character, and we rigged and animated it using Mixamo. We also experimented with blend trees and morphing animation states together, and Lumi figured out how to switch between states without jumping. The GameObject for the model switches animation states when the camera centers on it, using raycasting (see the sketch below). For future iterations, we would like to be able to place multiple animated meshes in a space using raycasting and switch between multiple animation states.
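The gaze check amounts to something like this; the state names and ray distance are illustrative rather than our project's exact values, and the model is assumed to have a collider:

```csharp
using UnityEngine;

// Sketch: cast a ray from the center of the camera's view; when it hits
// this model's collider, cross-fade into the "Spin" state. CrossFade
// blends between states instead of jumping.
public class GazeSpin : MonoBehaviour
{
    [SerializeField] Camera arCamera;
    [SerializeField] Animator animator;

    bool spinning;

    void Update()
    {
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        bool centered = Physics.Raycast(ray, out RaycastHit hit, 10f)
                        && hit.transform == transform;

        // Only cross-fade on a change, so the blend isn't restarted
        // every frame.
        if (centered != spinning)
        {
            spinning = centered;
            animator.CrossFade(spinning ? "Spin" : "Idle", 0.25f);
        }
    }
}
```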
The concept is to add Groucho glasses to the portraits in the Carnegie Museum of Art.
Portraits, especially older portraits of rich people, tend to be very posh and stuffy. Groucho glasses instantly turn them silly, especially if they raise their eyebrows at you.
I didn't get the AR working in time. I made some nice Groucho Glasses though:
Edit: AR kinda working. Enough to make the following documentation:
It still doesn't totally work.
Augmented Faces only works with the front-facing camera, so the video is achieved with camera trickery. I futzed for a long time with image targets, but I couldn't get the glasses to line up with the portraits consistently. I then found that the easiest way to get results was to detect faces in the portraits, but face detection can only be used with the selfie camera: not a good state of things for an AR app that you want to point at things other than yourself.
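For anyone trying the same thing, the face-tracking route looks roughly like this AR Foundation sketch (the prefab name is a placeholder); the catch is exactly the one described above, since face tracking is tied to the front-facing camera:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: parent a Groucho-glasses prefab to each detected face.
// ARFaceManager handles the detection; glassesPrefab is a placeholder.
public class GlassesAttacher : MonoBehaviour
{
    [SerializeField] ARFaceManager faceManager;
    [SerializeField] GameObject glassesPrefab;

    void OnEnable()  { faceManager.facesChanged += OnFacesChanged; }
    void OnDisable() { faceManager.facesChanged -= OnFacesChanged; }

    void OnFacesChanged(ARFacesChangedEventArgs args)
    {
        foreach (ARFace face in args.added)
        {
            // The face anchor's origin sits roughly at the nose bridge.
            Instantiate(glassesPrefab, face.transform);
        }
    }
}
```

In practice, assigning the glasses model as the Face Prefab on the ARFaceManager achieves the same thing; the script just makes the parenting explicit.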
AR Golden Experience Requiem Stand from JoJo's Bizarre Adventure. Collaborated with sansal.
In JoJo's Bizarre Adventure, Stands are a "visual manifestation of life energy" created by the Stand user. A Stand presents itself hovering near the Stand user and possesses special (typically fighting) abilities. For this project, we used pose estimation to make the Stand appear at the person's left shoulder. We used a template and package from Fritz.ai.
The code uses pose estimation to locate the head, shoulders, and hands. If all of these parts are within the camera view, the Stand model moves toward the left shoulder, as in the sketch below.
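We can't reproduce Fritz AI's exact API here, so this sketch just shows the gist, assuming a hypothetical pose-estimation layer fills in the keypoint position and visibility flags each frame:

```csharp
using UnityEngine;

// Sketch of the Stand-follow logic. The public fields stand in for a
// pose-estimation layer (hypothetical; Fritz AI's real API differs).
public class StandFollower : MonoBehaviour
{
    [SerializeField] Transform standModel;
    [SerializeField] float smoothing = 5f;

    // Filled in each frame by the pose-estimation layer.
    public Vector3 leftShoulder;
    public bool headVisible, shouldersVisible, handsVisible;

    void Update()
    {
        // Only follow when the whole upper body is in frame.
        if (!(headVisible && shouldersVisible && handsVisible)) return;

        // Glide toward the left shoulder instead of snapping.
        standModel.position = Vector3.Lerp(
            standModel.position, leftShoulder, smoothing * Time.deltaTime);
    }
}
```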
This is more of a work in progress; we ran into a lot of complications with the pose estimation and with rigging the model. Initially, we wanted the Stand to appear when the person strikes a certain pose (JoJo's Bizarre Adventure is famous for its iconic poses). However, the pose estimation was not very accurate and was quite slow, which made it impossible to train any models. In addition, Fritz AI had issues with depth, so we could not control the apparent size of the 3D model (it would be really close or really far away). We also planned to have the Stand model do animations, but ran into rigging issues.
Some adjustments to be made:
rig and animate the model
add text effects (seen in the show)
add sound effects (seen in the show)
make the Stand fade in
Some work-in-progress photos:
3D model made in Blender
Fritz AI is sometimes able to detect the head (white sphere), shoulders (cubes), and hands (colored spheres). The 3D model moves to the left shoulder.
We were not able to instantiate the 3D model at the shoulder point, so the model just appears at the origin.
Fritz AI works only half of the time. The head, shoulders, and hands are way off.
We started off by using up a bunch of our limited developer builds (heads up for future builds: there is a limit of 10 per week per free developer account, lol) testing the numerous different types of build templates we could use to implement AR over our mouths, most particularly the image target, face feature tracker, and face feature detector.
We actually got an image target to work successfully for Meijie's open mouth; however, it was a very finicky system, because she would have to force her mouth into the same exact shape, under very similar lighting, in order for it to register. We plugged in an apple prefab and thought it was quite humorous, as it was almost like a pig stuffed with an apple.
With this, we initially wanted to explore having an animation of some sort take place in the mouth. However, that proved difficult due to the lack of accuracy with small differences in depth, and also the amount of lighting that would need to be taken into account. Because the image target also had issues detecting the mouth, we decided to migrate to the face mesh and facial feature detector.
We combined the face mesh and the feature detector to trigger a duck appearing on the tongue when the mouth is open, roughly as sketched below.
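A rough sketch of the trigger logic, assuming an AR Foundation ARFace mesh; the lip vertex indices and the threshold are placeholders, since the real values depend on the platform's face-mesh topology:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: measure the gap between an upper-lip and a lower-lip vertex
// on the face mesh and show the duck past a threshold. Indices and
// threshold are placeholders, tuned by eye on the actual mesh.
public class MouthDuck : MonoBehaviour
{
    [SerializeField] ARFace face;
    [SerializeField] GameObject duck;             // duck-on-tongue prefab
    [SerializeField] int upperLipIndex = 13;      // hypothetical index
    [SerializeField] int lowerLipIndex = 14;      // hypothetical index
    [SerializeField] float openThreshold = 0.02f; // meters

    void Update()
    {
        var verts = face.vertices;
        if (verts.Length == 0) return;

        float gap = Vector3.Distance(verts[upperLipIndex], verts[lowerLipIndex]);
        duck.SetActive(gap > openThreshold);
    }
}
```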
12.4.19 Updated Prototype
The duck appears (in grass, with more refined detail) the first time the mouth is opened, and then a raw duck (yum yum!) appears the second time the mouth is opened.
Color Me Surprised is an experience designed by lsh & tli in which users describe the world around them by tapping to sample the color of a location.
This project explores the ways in which we discover the world around us, and how information about our surroundings is stored digitally. Through tapping the screen, one begins to catalog or tag one's surroundings, building up to eventually describing objects by color. Very early on, we knew we would want to use the massive color API to express the world around us, but decisions about representation and interaction were what drove this project forward. We originally attempted to use OpenCV to reduce the image and constantly describe everything the camera would see, but even on a laptop the performance was awful, let alone on an iPhone. Another decision was whether or not to include the world that is being tagged. The decision rests on the idea of the interaction being an emergent experience versus a descriptive tool. Lastly, we had a series of struggles with Unity and Xcode, which are still being ironed out. I would say this project is successful in that it creates a novel experience.
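The core tap-to-sample interaction amounts to something like the sketch below; the real project asks the color API for names, while a tiny hardcoded palette stands in here to keep things self-contained:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: after the frame renders, read the pixel under the finger and
// name it by nearest palette match. The palette is a stand-in for the
// color API lookup used in the actual project.
public class ColorSampler : MonoBehaviour
{
    static readonly (string name, Color color)[] palette =
    {
        ("red", Color.red), ("green", Color.green), ("blue", Color.blue),
        ("white", Color.white), ("black", Color.black), ("gray", Color.gray),
    };

    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            StartCoroutine(SampleAt(Input.GetTouch(0).position));
    }

    IEnumerator SampleAt(Vector2 pos)
    {
        // Wait until the camera image has been drawn this frame.
        yield return new WaitForEndOfFrame();

        var tex = new Texture2D(1, 1, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(pos.x, pos.y, 1, 1), 0, 0);
        tex.Apply();
        Color sampled = tex.GetPixel(0, 0);
        Destroy(tex);

        // Nearest neighbor in RGB space.
        string best = null;
        float bestDist = float.MaxValue;
        foreach (var (name, color) in palette)
        {
            float d = ((Vector4)(sampled - color)).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = name; }
        }
        Debug.Log($"Tapped color: {best}");
    }
}
```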
I am simultaneously excited about and disgusted by shiny, sexy, 3D-rendered and Photoshopped technodystopias. I also have a lot of nostalgia for the simple yet nerdy look of Apple products in the early 2000s, combined with nostalgia for the hideousness of the rest of the internet back in those days. I also miss sci-fi movie scenes where there's a crazy hologram UI with graphs, sine waves, textures, gradients, wireframes, and dials everywhere. It's like the future we never had! Nostalgia is not a productive emotion, so I decided to make a delicious AR snack with it.
This piece can exist anywhere its intended audience (sad preteens) can. However, it feels most at home in domestic spaces filled with friends. For my revision, I am thinking of de-interfacing it, disconnecting it from the mainframe, and making it more site-specific.