I'm not sure what I thought I could accomplish at the Game Developers Conference. I was trying to get in on Microsoft's "XBox Incubator" program (encouraging independent studios to produce games for their console), but with no successful track record behind me, it wasn't going to happen. But there was still the chance I'd be able to get myself hired by someone with a little more influence. And, who knows what other opportunities might come up? It was the place to be.
- If I'm to be honest with myself, I should also mention that one of my best friends from 7th grade through High School used to attend GDC every year. He's Lead Producer on an intensely high-profile series of games now. And, while I refuse to be that loser from the past who calls asking him to put his neck on the line so I can bypass the process now that all his hard work has finally paid off, it would have been great to run into him.
But, the expo was glorious. I can't tell you how great it was to have technologies I'd been reading about for years demonstrated in person by the scientists who developed them. Being able to ask all my questions was what justified to me the price of admission. It was a geek's paradise.
And, I did find new opportunities. I went in with one agenda, and came home very different.
This change was brought about by a product called FaceStation.
What FaceStation does is monitor the input from a DV camera, analyzing an actor's performance frame by frame, and jotting down their expression in a sort of shorthand which 3D Animation software can then make use of.
What does this mean? Well, in technical terms, it's simply face recognition driving morph targets. But it'll take longer to explain that in normal human language...
- In 3D Animation, you'll build many variations on a character's head. In one, the left eye is closed. In another, the mouth is wide open. Your character is smiling. Their nostrils flare. Their eyebrows raise. Etc.
When all that prep work is done, your software can analyze the difference between one head and the next, and treat those changes as combinable elements to form any expression. And not just in a mix-and-match sense, either. It's like those extra heads become spices, which you can add a little or a lot of, as needed. It's very flexible. But, each head represents one more control to keep track of, and it can get pretty complex.
FaceStation does the reverse of that. It looks at your face on camera, and figures out what percentage of each shape would combine to reproduce your current expression. To carry on the spice metaphor, it would be like taking a bite of something and jotting down a quick recipe to cook more of it.
And, because that's the approach, how you set up your assortment of heads defines the tone. Big goofy expressions will lead to big goofy animation. More subtle expressions lead to subtle animation. Which means there's still a lot of creativity involved, even if the computer handles some of the more mechanical decisions for you. Also, control of a character's performance is largely returned to the actor.
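The spice metaphor above can be sketched in a few lines of math. This is a minimal illustration of the general morph-target idea, not FaceStation's or Max's actual implementation: each extra head becomes a "delta" from the neutral head, the animator's job is mixing those deltas by weight, and recovering the recipe from an observed face is just the inverse problem (here solved with least squares). All names and numbers are made up for the example.

```python
# A toy sketch of morph-target ("blend shape") math, assuming heads
# are simply arrays of vertex positions. All data here is invented.
import numpy as np

# Neutral head and two sculpted variants (3 vertices, xyz each).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
smile    = neutral + [[0.0, 0.2, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.0]]
jaw_open = neutral + [[0.0, -0.3, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.1]]

# Each extra head becomes a "delta" from neutral -- the spice.
deltas = np.stack([(smile - neutral).ravel(),
                   (jaw_open - neutral).ravel()])   # shape (2, 9)

def blend(weights):
    """Forward direction (the animator's job): mix the deltas."""
    return neutral.ravel() + weights @ deltas

# The FaceStation-style direction is the inverse: given an observed
# face, recover the weights. Least squares tastes the dish and
# writes down the recipe.
observed = blend(np.array([0.7, 0.25]))
recovered, *_ = np.linalg.lstsq(deltas.T,
                                observed - neutral.ravel(),
                                rcond=None)
print(np.round(recovered, 2))
```

With two linearly independent deltas the solve is exact, so the recovered weights match the 0.7 smile / 0.25 jaw-open mix that produced the observed face.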
I had never really taken on facial animation before - it's beyond intimidating. Lip-sync in
- I could have a very attractive demo reel going into GDC the next year. Facial animation continues to be a weak spot in most games released, and it shouldn't be. The technical limitations aren't what they once were.
- I could produce some commercials, a few music videos... I could have a TV show on some obscure network! I could launch a damn studio if I play my cards right. The trick there is to get in the door before much better animation studios raise the bar.
- I could license that technology to produce an automated voice coach / speech therapist, thus contributing to society and making the world a slightly better place.
but most importantly, above all else...
- I could make faces at the computer all day, and have characters I've created mimic them back at me like some sort of deranged funhouse mirror.
Now, FaceStation is a $2,000 add-on for 3D Studio Max. 3D Studio Max costs $5,000. And on top of that, I'd need a much faster computer than I currently had access to. And before I could start saving up for it, I'd need to climb out of the substantial debt attending the conference had left me in.
Enter the day job.
You may recall a whole bunch of entries related to database administration, corporate mentality, and the occasional hint that I might in fact be working for criminals? Yeah, that's the one.
It took me all year, but I managed to buy everything I needed. (Ready for some links?) I got Max, I got FaceStation, I got some pretty good cloth simulation. In the way of outside apps, I used to do all my modeling in a program called Organica, but completely replaced that in my workflow with ZBrush. And I purchased MotionBuilder for its realtime feedback, intuitive character rig, and non-linear animation tools.
Peripherally related, I also picked up the Adobe Video Collection. And, some fonts from Comicraft.
...and, a digital orchestra. But, that's seriously on the back burner for at least another month.
So, what have I done with this fabulous technology?
Well, complain, mostly.
And make excuses.
The sum total of all that stuff is, frankly, more than I could chew.
Year in Review! Here's what I can say for myself in 2003:
- I did meet those purchasing goals, which meant staying in an unhealthy work environment long after it went bad. As much as I don't want to measure the year by items bought, I suspect it's the most positive light I can express things in.
- I earned my MCSE certification. Which is a dangerous step - I'm happy to have that line on my resume, but am always wary of day jobs turning into a career when I'm not looking. It's important that I be able to drop everything at a moment's notice if a creative opportunity presents itself. And this certificate basically positions me to take on more responsibility, to have more people depend on me than ever before. Not really what I'm after.
- I accumulated a TON of training materials. Went through some of it, but very little stuck because it wasn't tied into a project. My mind treated it as random trivia, rather than valuable solutions to problems not yet encountered. That's frustrating, but easily rectified.
- I caught up with all the changes since I last used 3D Studio Max, five or six years ago, and regained my old proficiency in this program.
- I got pretty good at modeling in ZBrush - nowhere near what some people are doing with it, but I'm certainly happy with my own work.
- I got much better at texturing my models, whether painting in ZBrush, creating procedural math-stuff in Max, or some combination of the two.
- Still not great with lighting, but I'm much better at working around that limitation than I used to be.
- I got FaceStation to work with my webcam, though erratically. For this to work right, I'll need to throw diffused ambient light on myself, which this should help with, though I should really invest in a better camera and some sort of professional lighting rig.
- It's also worth noting that these preliminary experiments used FaceStation's sample character. There's a lot of work I'll have to do on my own characters to make them compatible, and there hasn't been much point yet, thanks to problems both external (bad room lighting) and internal (characters weighed down by too many polygons).
- I haven't done a lot of full-body figures, because again, it's slow and painful to rig them with this many polygons. Which leads to my not having brought any characters into MotionBuilder to work with.
What have I been doing with myself now that I'm stranded without a car?
- Authored several DVDs, using other people's content to explore what all's possible within the format. My menus and transitions are competitive with most commercially produced titles now, and I have a decent showpiece or two for turning these skills into freelance work. But more importantly, I've faced down many technical problems, and am confident in the use of all software involved. Experience is handy that way.
- I've tackled polygon reduction, starting with the slowest possible methods, learning shortcuts, and eventually discovering a much easier way I should have been using from the beginning. In doing so, I also figured out how to detach body parts and put them back seamlessly, which is more useful than you'd expect.
- I hooked a character up to bones, and tweaked the skinning envelopes until he moved properly. I don't have a sense for why it worked, though. I'll have to do this a few more times to pick that up. Turns out my polygon count was actually not so unwieldy - I was just doing everything wrong at this stage. More work is required here.
- Had both bones and morph targets on the same character, which is a technical milestone unto itself, and very important if I'm to animate the body and head in two different programs. The character needs to respond to different control schemes without getting in its own way. It needs to be modular, but cohesive. Unfortunately, the method I came up with won't import into MotionBuilder - I'll have to do it backwards, it looks like.
- Much experimenting with morph targets. Turns out they're mad fun to work with, and it'll be tough to hand 'em over to FaceStation when the time comes. But I think I'll manage - see "funhouse mirror" comment above.
- I tried to bring a character into MotionBuilder for animation. It didn't go so well. I'm hitting one of those uncharted technical difficulties right now. Asked for advice on the official web board, and another online community. I'm hoping to get past that soon.
- Found a balance where I can help out the family and still get some work done in very cramped living conditions. That's not to say I won't savor the distance as soon as it's feasible.
- Made peace with myself in ways I cannot yet articulate.
So, where does this leave me?
I have work to do. Far from helpless, I have unbelievable resources available to me. Last year was about lining those up. This year is about integration.
...and, that seems like a good place to
[ End Part 2 ]
Coming up in Part 3
A roundup of projects we've all given up on me ever completing.