I didn't know you could export directly from Maya; this will be helpful for me in the future, since previously I would always export an .fbx from my Maya file. It's also interesting that these two platforms are used together often enough that there's a direct transfer option...
I have worked with Unity in the past, mainly from an asset standpoint, so I went into some intermediate tutorials to learn some new things. I followed an intermediate scripting tutorial as well as an intermediate lighting and rendering tutorial. They were more conceptual tutorials, so there wasn't much to screenshot, but I got a good idea of how to apply these techniques to my own work when using my models and scripts in Unity.
For my Unity scripting tutorial, I decided to start with the basic fundamentals and dive into the Create with Code course. I went through the first tutorial of the series, Start Your 3D Engine, in which a simple car game "world" is created and its assets are imported, and then I went through the second tutorial, in which we create and apply our first C# script.
In the spring semester, I created a rough prototype of a drawing/rhythm game that drew lines on a canvas by following DDR-style arrows that scrolled on the screen. For my first Unity exercise, I prototyped an algorithm to dynamically generate meshes given a set of points on a canvas. I plan to use this algorithm in a feature that I will soon implement in my game.
There are a couple of assumptions I make to simplify this implementation. First, I assume that all vertices are points on a grid. Second, I assume that all lines are either axis-aligned or 45° diagonals. These assumptions mean that I can break each square in the grid into four sections. By splitting it up this way, I can scan each row from left to right, checking whether my scan intersects a left edge, a forward-diagonal edge, or a back-diagonal edge. I then use this information to determine which vertices and triangles to include in the mesh.
A pitfall of this algorithm is double-counting vertices, but that can be addressed by maintaining a data structure that tracks which vertices have already been included.
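In C#, that bookkeeping looks something like the sketch below (a minimal illustration, not my full implementation: the scanline pass itself is omitted, and CanvasMeshBuilder, GetOrAddVertex, and AddTriangle are just names I'm using for the example):

using System.Collections.Generic;
using UnityEngine;

public class CanvasMeshBuilder
{
    // Maps a grid coordinate to its index in the vertex list,
    // so each vertex is only added to the mesh once.
    private readonly Dictionary<Vector2Int, int> vertexIndex = new Dictionary<Vector2Int, int>();
    private readonly List<Vector3> vertices = new List<Vector3>();
    private readonly List<int> triangles = new List<int>();

    // Returns the index of the vertex at this grid point, adding it if it's new.
    private int GetOrAddVertex(Vector2Int gridPoint)
    {
        if (!vertexIndex.TryGetValue(gridPoint, out int index))
        {
            index = vertices.Count;
            vertices.Add(new Vector3(gridPoint.x, gridPoint.y, 0f));
            vertexIndex[gridPoint] = index;
        }
        return index;
    }

    // The scanline pass calls this for every triangle it decides to keep.
    public void AddTriangle(Vector2Int a, Vector2Int b, Vector2Int c)
    {
        triangles.Add(GetOrAddVertex(a));
        triangles.Add(GetOrAddVertex(b));
        triangles.Add(GetOrAddVertex(c));
    }

    public Mesh Build()
    {
        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}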
Shaders
My second Unity exercise is also a prototype of a feature I want to include in my drawing game. It's also an opportunity to work with shaders in Unity, which I have never done before. The visual target I want to achieve would be something like this:
I have no idea how to do this, so the first thing I did was scour the internet for pre-existing examples. My goal for a first prototype is just to render a splatter effect using Unity's shaders. This thread seems promising, and so does this repository. I am still illiterate when it comes to the language of shaders, so I asked lsh for an algorithm off the top of their head. The basic algorithm seems simple: sample a noise texture over time to create the splatter effect. lsh also suggested additively blending two texture samples, and was powerful enough to spit out some pseudocode in the span of 5 minutes:
p = uv
col = (0.5, 1.0, 0.3) // whatever color
opacity = 1.0 // scale down with time
radius = texture(noisetex, p)
splat = length(p) - radius
output = splat * col * opacity
Thank you lsh. You are too powerful.
After additional research, I decided that the Unity tool I'd need to utilize is Custom Render Textures. To be honest, I'm still unclear on the distinction between using a render texture as opposed to straight-up using a shader, but I was able to find some useful examples here and here. (Addendum: after talking to my adviser, it's clear that the custom render texture outputs a texture, whereas the direct shader outputs to the camera.)
Step 1: Figure out how to write a toy shader using this very useful tutorial.
Step 2: Get this shader to show up in the scene. This means creating a material using the shader and assigning the material to a game object.
I forgot to take a screenshot but the cube I placed in the scene turns a solid dark red color.
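For reference, the same hookup can also be done from a script instead of the editor. A small sketch, where "Custom/SolidRed" stands in for whatever the tutorial shader is actually named:

using UnityEngine;

public class ApplyToyShader : MonoBehaviour
{
    void Start()
    {
        // Find the toy shader by its name and wrap it in a new material.
        Shader shader = Shader.Find("Custom/SolidRed");
        Material material = new Material(shader);

        // Assign the material to this game object's renderer.
        GetComponent<Renderer>().material = material;
    }
}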
Step 3: Change this from a material to a custom render texture. To do this, I base my setup on the fire effect render texture I linked to earlier. I create two shaders for this render texture, one for initialization and one for updates. It's important to make a dynamic texture for this experiment, because the intention is to create a visual that changes over time.
I then assign this to the fields in the custom render texture.
But this doesn't quite work...
After thirty minutes of frustration, I look at the original repository that the fire effect is based on, and I realize that it uses an update script that manually updates the texture.
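The script boils down to something like this (a sketch of the idea, not the exact script from that repository):

using UnityEngine;

// Ticks the custom render texture so its update shader actually runs.
public class CustomTextureUpdater : MonoBehaviour
{
    // The custom render texture asset, assigned in the inspector.
    public CustomRenderTexture texture;

    void Start()
    {
        // Run the initialization shader once.
        texture.Initialize();
    }

    void Update()
    {
        // Queue one pass of the update shader for this frame.
        texture.Update(1);
    }
}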
I attach this script to a game object and assign the custom render texture I created to its texture field. Now it works!
Step 4: Meet with my adviser and learn everything I was doing wrong.
After a brief meeting with my adviser, I learned the distinction between assigning a shader to a custom render texture, which then feeds a material on an object, versus assigning a shader directly to a material on an object. I also learned how to change the update mode of the custom render texture, as well as how to acquire the texture from the previous frame in order to render the current frame. The result of this meeting is a shader that progressively blurs the start image at each timestep:
I'm very sorry to my adviser for knowing nothing.
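For my own notes, here is roughly how that configuration maps onto the CustomRenderTexture API from a script (a sketch of one plausible setup; the same fields also live on the asset in the inspector, and the blur shader that reads the previous frame is omitted):

using UnityEngine;

public class ConfigureBlurTexture : MonoBehaviour
{
    public CustomRenderTexture texture;

    void Start()
    {
        // Initialize once, then let the update shader run every frame.
        texture.initializationMode = CustomRenderTextureUpdateMode.OnLoad;
        texture.updateMode = CustomRenderTextureUpdateMode.Realtime;

        // Double buffering keeps the previous frame's result available,
        // so the update shader can sample it and blur it a little further
        // at each timestep.
        texture.doubleBuffered = true;

        texture.Initialize();
    }
}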
Step 5: Actually make the watercolor shader... Will I achieve this? Probably not.
I really wanted this arm to be sticking out the side of my window:
But I first tried to use the feature classifier to tell whether people were walking towards or away, which really did not work at all. It could barely even tell if people were there or not. I spent a lot of time collecting a large dataset before really testing whether it would work, so I wasted a lot of time on that.
So then I switched to a KNN classifier combined with PoseNet, to try to classify the poses of people walking in. I think that might have worked with a bigger dataset, but I tried with around 150 examples for each category and it just wasn't reliable, so I downsized again to just recognizing me in my room. Hopefully I can extend it to outside, because I think it would be cool to have a robot that waves at people. I think I'll just hack it together rather than using a KNN classifier, but for this project I had to train a model.
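The core of the KNN-on-PoseNet setup looks roughly like this (a sketch, assuming a webcam element named video and ml5's poseNet and KNNClassifier; the label names and function names are just examples, not my exact code):

// Track the most recent pose from the webcam.
const knn = ml5.KNNClassifier();
const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
let currentPose;

poseNet.on('pose', (poses) => {
  if (poses.length > 0) {
    currentPose = poses[0].pose;
  }
});

// Flatten the keypoints into a feature vector.
function poseFeatures(pose) {
  return pose.keypoints.map((k) => [k.position.x, k.position.y]).flat();
}

// Store the current pose under a label, e.g. 'walking-in' or 'nobody'.
function addExample(label) {
  knn.addExample(poseFeatures(currentPose), label);
}

// Classify the current pose against the stored examples.
function classifyPose() {
  knn.classify(poseFeatures(currentPose), (error, result) => {
    if (error) return console.error(error);
    console.log(result.label, result.confidencesByLabel);
  });
}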
I actually ended up using an example that didn't use p5.js, because the example I had been using wasn't registering the poses properly when I ran it outside; since I had written my program around that example, I just kept using it inside.
I was originally going to use a Raspberry Pi or an Arduino in conjunction with the browser (somehow?), and I tried a lot of different ways of doing that and got kinda close, but it was just really complicated. So I ended up using one of the worst hacks I've ever done: I got the browser to draw either a black or white square depending on whether someone was waving, and then I used a light sensor on the Arduino to detect it and wave if the square was white. But if it works, it works, I guess.
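For the curious, the browser half of that hack is tiny. A sketch (isWaving is a placeholder for whatever the classification callback sets; the Arduino side just reads the light sensor and triggers the wave when it sees white):

// Fill a square with white when someone is waving, black otherwise,
// so the light sensor taped to the screen can read the result.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 400;
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');

let isWaving = false; // set by the classifier elsewhere

function drawSignal() {
  ctx.fillStyle = isWaving ? '#ffffff' : '#000000';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  requestAnimationFrame(drawSignal);
}
drawSignal();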
I was really interested in training the computer on specific images and then using them in a different application to see what the computer would recognize. For the application, I decided to use clouds as a field in which the computer could recognize any of the images it was trained on. I was inspired by a work Golan showed earlier in the semester called Cloud Face. This got me thinking about all the possibilities of what humans recognize when looking up at the sky versus what a computer would or wouldn't be able to recognize through machine learning training.
We decided to do something simple and came up with the idea of using ml5.js' feature extraction and sending the prediction data through to Space Invaders, so that a person could play the game with just their hand. We used a 2-dimensional image regressor, tracking both the orientation of the hand (to move the spaceship) and the thumb's position (to trigger the blaster). We also ported a version of Space Invaders written in JavaScript that we found online into a p5.js environment (the code was provided from here and here).
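The glue between the regressor and the game is only a few lines. A rough sketch, using p5's map() and assuming the two regression outputs arrive as values between 0 and 1 (the names here are placeholders, not our actual code):

const FIRE_THRESHOLD = 0.5;

// Apply one prediction to the ported game.
function applyPrediction(prediction, ship) {
  // Hand orientation sweeps the spaceship across the play field.
  ship.x = map(prediction.steer, 0, 1, 0, width);

  // Thumb position past the threshold triggers the blaster.
  if (prediction.fire > FIRE_THRESHOLD) {
    ship.shoot();
  }
}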
Unfortunately, we weren't able to figure out how to save/load the model, so all of our code had to live in the same sketch, with the game mode triggered by a specific key press once training was over. Because of this, the game mode lags (severely), but it is functional and still playable. If we had more time on this project, we would have tried to port our ML training values into a game of our own making, but we are still somewhat satisfied with the overall result.
Well, it depends on the weather! If it's raining in Pittsburgh, the glass is half empty. If the weather is nice, it's half full. Basically, depending on the weather, the reader will get either a pessimistic or an optimistic outlook on the world.
The machine learning model looks for three states: Full, Half, and Empty. Depending on the state, it asks the OpenWeather API for the current weather. If the weather is bad, the system says that the glass is half empty; otherwise, it says that it is half full.
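In code, the decision logic is short. A sketch, using p5's loadJSON and OpenWeatherMap's current-weather endpoint with a placeholder API key, and assuming the classifier's callback hands back results sorted by confidence (my reading is that only the Half state needs the weather to break the tie):

const WEATHER_URL =
  'https://api.openweathermap.org/data/2.5/weather?q=Pittsburgh&appid=YOUR_API_KEY';

function gotLabel(error, results) {
  if (error) return console.error(error);
  const label = results[0].label; // 'Full', 'Half', or 'Empty'

  if (label === 'Half') {
    // Let the weather decide between half empty and half full.
    loadJSON(WEATHER_URL, (weather) => {
      const conditions = weather.weather[0].main; // e.g. 'Rain', 'Clear'
      const bad = conditions === 'Rain' || conditions === 'Drizzle' || conditions === 'Thunderstorm';
      console.log(bad ? 'The glass is half empty.' : 'The glass is half full.');
    });
  } else {
    console.log('The glass is ' + label.toLowerCase() + '.');
  }
}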
I suggest that the viewer train the model using a dark liquid, because it would be easier for the computer to differentiate between the bottle and the background.
Originally, I wanted to do face tracking, but then I realized that having to retrain ml5 over and over again would prove too repetitive. Instead, I decided to make something that would tell me how full a bottle of water is.
When I was working on the accuracy of my project, I modified Professor Levin's variant of the ml5 p5.js classifier so it would tell me which items it was seeing. For example, if I trained the model with three different labels, it would give me all the labels in order of confidence: the most confident label appears first, followed by the next most confident, and so on.
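The change amounts to printing the whole results array instead of just the top label. A sketch, where classifier stands for the trained classifier object and the callback is assumed to receive an array of label/confidence pairs, most confident first:

function gotResults(error, results) {
  if (error) return console.error(error);
  // Log every label with its confidence, in ranked order.
  results.forEach((r, i) => {
    console.log((i + 1) + '. ' + r.label + ' (' + (r.confidence * 100).toFixed(1) + '%)');
  });
}

classifier.classify(gotResults);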
At one point, I was trying to load a trained model at runtime because I didn't want to constantly retrain the classifier. Unfortunately, I couldn't get it to work in time.
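For anyone who wants to try it: if I'm reading the ml5 docs right, the feature-extractor classifier exposes save() and load(), so something like this untested sketch is what I was going for (classifier again stands for the trained classifier object):

// After training: write the model and its weights to files.
classifier.save();

// On a later run: skip training and load the saved model instead.
classifier.load('model.json', () => {
  console.log('Custom model loaded; ready to classify.');
});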