For our final, we worked to expand on the AR Golan project, since there were certain interactions we wanted to explore but had not yet achieved in our first iteration. The first of these was to have the character turn to face the "viewer" when looked at. The next was to include multiple characters and set each of them to perform the same turning, waving interaction. This is the piece we had the most issues with: while we are able to detect when each GameObject is being viewed, we are still unable to trigger the animation sequence in any but the first GameObject (and we do not yet understand why). So for today's version we have one kind penguin who rotates toward the camera and multiple other penguins floating around minding their own business.
We switched out the Golan model for a series of Coca-Cola-esque penguins, one of which is textured with the advertisement, to celebrate the upcoming capitalist Christmas.
For my AR sculpture, I am using an image anchor on my body (a tattoo) to trigger an emerging animation. While learning Unity, I figured out how to create a tuft of grass that looks like it emerges from under the skin and grows out. This week I plan to come up with a more compelling short story linking the image anchor and the Unity AR animation than I had in the first round (changing the image anchor to a birthmark or another tattoo shape, and changing the object that comes out).
For my final project, I wanted to push my socket.io drawing project further by challenging myself to add new layers of complexity.
View project here
View project with obstacles here
One of my initial ideas was to create "heavy" lines that would sag as they're drawn and rest upon existing lines. I had difficulty creating this effect with all the qualities I wanted, so Golan suggested I try his spring simulation examples to experiment with physics engines. View heavy lines here.
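One way the sagging effect can be sketched (this is an illustrative reconstruction, not the project's actual code; the names `makeLine` and `step` are made up) is as a chain of verlet points with gravity and distance constraints, so the middle droops between pinned endpoints:

```javascript
// A "heavy line" as a verlet chain: each point remembers its previous
// position, gravity pulls it down, and distance constraints keep the
// segments near their rest length while the endpoints stay pinned.
function makeLine(x0, y0, x1, y1, segments) {
  const pts = [];
  for (let i = 0; i <= segments; i++) {
    const t = i / segments;
    const x = x0 + (x1 - x0) * t;
    const y = y0 + (y1 - y0) * t;
    pts.push({ x, y, px: x, py: y, pinned: i === 0 || i === segments });
  }
  return { pts, restLen: Math.hypot(x1 - x0, y1 - y0) / segments };
}

function step(line, gravity = 0.5, iterations = 5) {
  // Verlet integration: velocity is implied by (current - previous).
  for (const p of line.pts) {
    if (p.pinned) continue;
    const vx = p.x - p.px, vy = p.y - p.py;
    p.px = p.x; p.py = p.y;
    p.x += vx;
    p.y += vy + gravity; // y grows downward, as on a canvas
  }
  // Relax the distance constraints a few times per frame.
  for (let k = 0; k < iterations; k++) {
    for (let i = 0; i < line.pts.length - 1; i++) {
      const a = line.pts[i], b = line.pts[i + 1];
      const dx = b.x - a.x, dy = b.y - a.y;
      const d = Math.hypot(dx, dy) || 1e-6;
      const diff = (d - line.restLen) / d / 2;
      if (!a.pinned) { a.x += dx * diff; a.y += dy * diff; }
      if (!b.pinned) { b.x -= dx * diff; b.y -= dy * diff; }
    }
  }
}
```

Calling `step` once per animation frame and drawing the `pts` as a polyline gives a rope-like sag; letting a point pin itself when it touches an existing line would give the "rest upon other lines" behavior.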
My next step is to get this working with socket.io so it can be a multiplayer drawing space. I like the idea of creating a new type of interaction where you can push, pull, and collapse other people's lines as they draw. I was able to get it working on Glitch as a single-player app, but I am running into some difficulty applying the physics engine to data received from other clients. View progress here on Glitch.
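One common pattern for this kind of sync (a sketch of an approach, not the project's code) is to broadcast only raw stroke points and have every receiver build its own local physics bodies from them, so local and remote strokes share one code path. Here `FakeBus` is a stand-in for the socket.io relay so the pattern can run without a network; in a real app `bus.emit` would be `socket.emit` on the sender and `socket.broadcast.emit` on the server:

```javascript
// In-memory stand-in for a socket.io server that relays events to
// every connected client except the sender.
class FakeBus {
  constructor() { this.clients = []; }
  connect(client) { this.clients.push(client); }
  emit(sender, event, data) {
    for (const c of this.clients) if (c !== sender) c.handle(event, data);
  }
}

class DrawingClient {
  constructor(bus, id) {
    this.bus = bus; this.id = id;
    this.bodies = []; // all physics points, local and remote alike
    bus.connect(this);
  }
  addPoint(x, y) {
    // Whether a point is drawn here or arrives from a peer, it goes
    // through the same constructor, so the physics engine treats
    // remote lines exactly like local ones.
    this.bodies.push({ x, y, px: x, py: y });
  }
  draw(x, y) {
    this.addPoint(x, y);
    this.bus.emit(this, 'point', { x, y, from: this.id });
  }
  handle(event, data) {
    if (event === 'point') this.addPoint(data.x, data.y);
  }
}
```

The key design choice is that clients never send physics state, only input points; each client simulates everything locally, which avoids having to serialize engine objects over the socket.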
I started this project thinking about buses and the chaotic, fleeting feeling they represent. They're a big part of city life in my experience, and I wanted to respond to that chaos and rush by replacing the windows with a window into a forest or something similar, where the outside just moves slowly by in a straight line. I was inspired by AR portals like this one:
I did some sketches to see what a bus would look like with the windows switched, imagining somebody cutting off all the chaotic input and replacing it with the calming view out the window (sorry it's really hard to see, I drew lightly).
So I watched a lot of videos on how to do this, mainly from a great channel called Pirates Just AR, which is a great name, by the way.
I had some trouble extending his tutorials to a nice nature landscape though, and I never even got to the point of adding motion, which I'm sure would have been hard too.
I thought I could instead put a hole where the "Emergency Exit" on the roof of the bus is, with a tree actually inside the bus breaking out through the hole. I also didn't want to go out looking for a bus at that point in the night, so I decided an elevator was a similar enough idea. I changed the image target to the roof of the elevator and added some calming music for when the hole and tree appear. I think I can do a lot more with this concept, especially by adding elevator dings and bird sounds; I think that contrast would work well. It would also be nice to have flying birds, and to add the motion of the elevator somehow.
We started off by burning through a bunch of our limited developer builds (heads up for future projects: there is a limit of 10 per week per free developer account, lol) while testing the many different build templates we could use to implement AR over a mouth, in particular the image target, face feature tracker, and face feature detector.
We actually did get an image target to work for Meijie's open mouth; however, it was a very finicky system, because she had to force her mouth into the exact same shape, under very similar lighting, for it to register. We plugged in an apple prefab and thought it was quite humorous, as it almost looked like a pig stuffed with an apple.
With this, we initially wanted to explore having an animation of some sort take place in the mouth. However, that proved difficult due to the lack of accuracy with small differences in depth, as well as the amount of lighting that would need to be taken into consideration. Because the image target also had issues detecting the mouth, we decided to migrate to the face mesh and facial feature detector.
We combined the face mesh and feature detector to trigger a duck to appear on the tongue when the mouth is open.
For the Justaline project, Arden and I played around with different possibilities and ideas. Ultimately, we really enjoyed the depth effect we could create in the app by drawing multiple "doorways" to move through. We also took advantage of the potential for transformative effects by having shaped frames change into other shapes as you pass through. The augmented reality creates anticipation for the viewer, kind of like a tunnel or a rabbit hole, as they travel through the floating framed shapes. Here we chose to transform a triangle into a circle:
Adam and I came up with an idea for a virtual forum where members can only access it in a certain location and through an AR app. Special thanks to Joshua Yeom for recording the over-the-shoulder shots.