At first, I wanted to use PoseNet's full-body tracking system, but I found it unreliable on my machine. Instead, I opted for the slow but reliable BRFv4 face tracker. I sketched out a rough idea for transforming the viewer's face into a cartoon character. However, once I started programming it, I realized I was getting outside the scope of this assignment, and I was running into a lot of glitches that would be time-consuming to fix.
For my second attempt, I wanted to use 3D objects. I sketched out a plan for a low-poly 3D face inside a monitor. Professor Levin taught me how to rotate objects using my facial tracker. Then I imported a 3D monitor that I made in Autodesk Maya and filled in the face using the vertex() command. For the final touch, when the viewer opens their mouth, the head gives off an audible scream. I took the sound effect from this video and heavily modified it in Audacity.
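The scream trigger can be sketched in plain JavaScript. This is a minimal illustration, not my original code: it assumes the face tracker hands us a normalized mouth-openness value each frame, and the threshold and playScream callback are placeholders rather than BRFv4's real API.

```javascript
// A minimal sketch of the scream trigger. Assumes the tracker exposes a
// normalized mouth-openness value per frame (placeholder, not BRFv4's API).
function makeScreamTrigger(threshold, playScream) {
  let mouthWasOpen = false; // hysteresis: scream once per opening, not every frame
  return function onFrame(mouthOpenness) {
    const open = mouthOpenness > threshold;
    if (open && !mouthWasOpen) playScream();
    mouthWasOpen = open;
  };
}
```

The flag keeps the sound from re-triggering on every frame while the mouth stays open.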
This was a fun deliverable.
EDIT 9.25.19 11:00PM: I was inspired by Cain from RoboCop 2 (see below).
One such work is called "The Infinity Engine," a functional replica of a genetics lab. It is constructed from modular units and is designed to show the potential of human engineering through regenerative medicine, bioprinting, and DNA programming. The installation includes various genetic engineering tools, interviews with scientists, lab test information, and even re-enactments of tests.
Sections of the installation are color-coded and enhanced with the ambient sounds of a genetics lab. The exhibit includes partly interactive multimedia installations, projections, wallpaper, electronic screens, and 3D-printed noses.
Visitors also have the option of discovering their DNA origin through reverse-engineered facial-recognition software.
Leeson received a partial commission from the German museum ZKM and received contributions from the Wake Forest School of Regenerative Medicine and Organovo (a company that manufactures hardware for genetic engineers).
While I was procrastinating on this deliverable, I watched a childhood cartoon in which a little boy shrank down to find out how his music box worked so he could fix it. That got me thinking: how cool would it be if I shrank down to see a clock in action from the side? So I decided to make a 3D 12-hour clock.
The viewer sees the clock from the 9 o'clock position, therefore 12 o'clock is to the left and 6 o'clock is to the right.
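For reference, the hand positions on a 12-hour clock boil down to a little angle arithmetic. This is a plain-JavaScript sketch of that math, not my original project code:

```javascript
// Hand angles in degrees, measured clockwise from the 12 o'clock position.
// The hour hand covers 360° in 12 h, the minute hand in 60 min, the
// second hand in 60 s; each slower hand carries the faster ones along.
function handAngles(h, m, s) {
  return {
    second: s * 6,                            // 360 / 60 = 6° per second
    minute: m * 6 + s * 0.1,                  // 6° per minute, nudged by seconds
    hour: (h % 12) * 30 + m * 0.5 + s / 120,  // 30° per hour, nudged by min/sec
  };
}
```

From the 9 o'clock viewing position, these angles would then be applied as rotations about the clock face's axis.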
I think my clock would be at home on a college campus or in a park with other environmental art pieces. If I were to make it in real life, I'd have three hollow glass rectangular prisms rotating inside one another. The edges would have neon/LED lighting. The prisms would rest on top of a rectangular pedestal with arrows coming out along each time axis.
Along the spectrum of First Word <----> Last Word, my work leans towards First Word because my medium of choice (VR) is not as widely explored and documented as other forms: drawing, computer art, film. Orson Welles put it well when talking about his first cameraman: "You never made a picture, and you don't know what can't be done!" I don't know what's impossible. As far as I'm concerned, it's the wild west!
Regarding how culture accepts technology: I think it's based on how well it's introduced, and only well-introduced tech can shape culture. For example, the touch screen had been around for decades before the iPhone popularized it; the first touch-screen computer was produced in 1983. That's First Word. When the iPhone came along, popularized the technology, and embedded it into modern culture, that was Last Word.
I started working on this piece with no idea how I'd program the animations. My first two sketches were completely different from the end product. When I started coding, I realized that I didn't have time to figure out the nuances needed to program the original sketches. So I looked to some of my favorite childhood cartoons for inspiration, found an animation loop of a sea monster playing a guitar and eating boats, and decided to program that.
I'm disappointed that I couldn't figure out in time how to program a loop of boats spawning in the top-right corner to be eaten by the monster. If I had more time, I would've polished the graphics a tad more and made the guitar move around.
Regardless, at the end of the day, the knowledge I gained from this exercise is invaluable. In high school, we animated objects by nudging them along inside for loops; easing functions blow that method out of the water. I'm sure these functions will come in handy for future projects. I also programmed this using Visual Studio. I'll write a blog post (or just post a template) on my website on how to do that soon and link it here.
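The difference between the two approaches can be sketched in a few lines of plain JavaScript. easeInOutQuad is one standard easing curve (the specific curve I used may have differed):

```javascript
// A standard ease-in-out curve: slow start, fast middle, slow end.
// t is normalized time in [0, 1].
function easeInOutQuad(t) {
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

// Interpolate a property from `from` to `to` at normalized time t.
// A for-loop animation moves in constant steps; this accelerates
// and decelerates, which reads as much more natural motion.
function animate(from, to, t) {
  return from + (to - from) * easeInOutQuad(t);
}
```

Feeding in the frame count divided by the total duration gives t, so one call per frame replaces the whole incrementing loop.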
Slaves to Armok: God of Blood
Chapter II: Dwarf Fortress
Dwarf Fortress is a procedurally generated game made by brothers Tarn and Zach Adams and released in 2006. Development is supported by crowdsourced donations. To play, the player needs to generate a new world through parameters ranging from resources to the length of the world's history. I imagine that the algorithm used to generate the world runs for the duration of the game. It likely generates an elevation map at random, then picks areas to hold biomes, then randomly picks what lives in them, and so on.
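My guess at that layered approach can be sketched in plain JavaScript. To be clear, this is my assumption about the structure, not Dwarf Fortress's actual algorithm, and the hash function is a cheap stand-in for real terrain noise:

```javascript
// Cheap deterministic pseudo-random value in [0, 1) from grid coordinates.
// A stand-in for proper terrain noise, used here only for illustration.
function hash(x, y, seed) {
  const s = Math.sin(x * 127.1 + y * 311.7 + seed) * 43758.5453;
  return s - Math.floor(s);
}

// Layer 2: assign a biome from the elevation (thresholds are made up).
function biomeFor(elevation) {
  if (elevation < 0.3) return "ocean";
  if (elevation < 0.6) return "plains";
  if (elevation < 0.8) return "forest";
  return "mountain";
}

// Layer 1 then layer 2: elevation first, then a biome derived from it.
// A real generator would add further layers (creatures, history, ...).
function generateWorld(size, seed) {
  const world = [];
  for (let y = 0; y < size; y++) {
    const row = [];
    for (let x = 0; x < size; x++) {
      const elevation = hash(x, y, seed);
      row.push({ elevation, biome: biomeFor(elevation) });
    }
    world.push(row);
  }
  return world;
}
```

Because the hash is seeded, the same parameters always rebuild the same world, which matches how the game can regenerate a world from its settings.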
Graphics are simple text symbols, but they can be replaced with custom tilesets.
I admire the potential for immersion: a player can do practically anything! The brothers' love for generating random worlds is visible. The effective complexity of this piece is closer to disorder because the game world is constantly changing due to random events.
The video that introduced me to Dwarf Fortress (view at your own discretion NSFW):
Cities that have been around since the Middle Ages are prime examples of effective complexity. In the olden days, urban planning was not widely practiced, so people typically built wherever they wanted. On the scale between total randomness and total order, such a city sits around the middle because, as time progresses, it gets more organized. There is a clear division between the two, with the wide road surrounding the old city.
The artwork consists of many short black lines on a white background.
The lines all have the same length.
The lines seem to be generated randomly.
There are occasional random spaces where no lines spawn.
The sizes of these spaces are random.
The line generation stays within the shape of a square (likely because of some margin).
The outermost points of the outermost lines start on the edge of this square.
Lines typically intersect with each other.
More lines are vertical than horizontal.
After reading my assertions, I had a question: "How do I generate lines of equal length?" Apparently, all you need is a circle. I randomly generated my origin points and used this formula to create a start point for each line. To find the endpoint, I subtracted the start point from the midpoint, then added that difference to the midpoint; in other words, the endpoint is the start point reflected across the midpoint.
The next question was "How do I generate random empty spaces?" The answer: use noise.
The final question, "When do I place the line?", was easy to answer: I just needed to place a line wherever the noise value exceeded a certain threshold.
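The three steps above can be sketched in plain JavaScript. This is an illustration of the approach rather than my actual sketch, and the noise() here is a cheap hash-based substitute for p5.js's Perlin noise:

```javascript
// Equal-length lines: pick a midpoint, pick a random angle on a circle of
// radius len/2 to get the start point, then reflect the start point across
// the midpoint to get the endpoint.
function makeLine(mx, my, angle, len) {
  const sx = mx + (len / 2) * Math.cos(angle);
  const sy = my + (len / 2) * Math.sin(angle);
  const ex = mx + (mx - sx); // endpoint = midpoint + (midpoint - start)
  const ey = my + (my - sy);
  return { start: [sx, sy], end: [ex, ey] };
}

// Cheap deterministic "noise" in [0, 1); a stand-in for p5.js noise().
function noise(x, y) {
  const s = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
  return s - Math.floor(s);
}

// Scatter midpoints inside a square and only keep a line where the noise
// value clears the threshold, which carves out the random empty spaces.
function generateLines(count, size, len, threshold) {
  const lines = [];
  for (let i = 0; i < count; i++) {
    const mx = Math.random() * size;
    const my = Math.random() * size;
    if (noise(mx * 0.01, my * 0.01) > threshold) {
      lines.push(makeLine(mx, my, Math.random() * Math.PI * 2, len));
    }
  }
  return lines;
}
```

Because both endpoints sit on a circle of diameter len around the midpoint, every line comes out exactly the same length.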
The Critical Engineer looks to the history of art, architecture, activism, philosophy and invention and finds exemplary works of Critical Engineering. Strategies, ideas and agendas from these disciplines will be adopted, re-purposed and deployed.
I see this rule as the following: "An Engineer should look back to the past to learn how to implement ideas in the present."
Interestingly, I had unknowingly been following this rule for a couple of years. I tend to look at how work was done in the past and at how I can repurpose those ideas to work in the present.
When I made Dungeon Crawler VR, I created maze-like dungeons from modular pieces, which sped up the development process; this tactic is used in professional game development. The actual gameplay was derived from dungeon crawlers of the 1990s.
I often look for inspiration in the 1996 video game "The Elder Scrolls II: Daggerfall." It was developed by only 27 people (excluding beta testers), a small number compared to modern teams. Daggerfall ran on XnGine, one of the first true 3D game engines. In terms of gameplay, there is a lot of influence from the roleplaying game Dungeons and Dragons and from earlier first-person PC RPGs like Ultima Underworld. Aesthetically speaking, the art direction draws clear inspiration from medieval fantasy art; the lead artist, Mark Jones, worked on many DnD-themed games. As for what we can learn about the game's immersion, consider what players can do: buy and sell buildings, take out loans from banks, barter with merchants, own boats, and explore a region the size of the United Kingdom. If all this was possible in 1996, why can't we do it now? I admire this project for its ambitious world and its hand-drawn art.
It's worth mentioning that a small community of modders has ported the game over to Unity 3D, and Daggerfall has never looked better!