Adobe recently released Preview 4 of its amazing new Character Animator application. I've been watching this one since it was announced earlier this year, and even though it's still not an official 1.0 release, I wanted to go ahead and jump in.
This nearly magical application lets you create and animate expressive characters in ways that greatly speed up what is normally a very labor-intensive process. The app uses motion capture, from your webcam and microphone, to map the movements of your head, eyes, eyebrows, mouth, and more onto characters you create in either Photoshop or Illustrator. The source files for the characters are built so that the various layers are named in very specific ways; when you import them into Character Animator, those layers are automatically rigged, and you can immediately start 'acting' and bringing the characters to life.
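As a rough illustration of what that naming convention looks like, here is how a puppet's Photoshop file might be organized. This is a sketch based on Adobe's example puppets, not an official reference: the file name is made up, and the exact layer tags the app recognizes (and the full set of mouth shapes) may differ between preview releases, so check Adobe's current documentation before building a character.

```
Press Reporter.psd          (hypothetical file name)
├─ +Head                    ('+' marks a group that moves independently)
│  ├─ Left Eyebrow
│  ├─ Right Eyebrow
│  ├─ +Left Eye
│  │  ├─ Left Blink
│  │  └─ Left Pupil
│  ├─ +Right Eye
│  │  ├─ Right Blink
│  │  └─ Right Pupil
│  └─ Mouth                 (one sub-layer per mouth shape for lip sync)
│     ├─ Neutral
│     ├─ Ah
│     ├─ Ee
│     └─ Oh
└─ Body
```

Because the rigging is driven entirely by these names, renaming a layer in Photoshop and re-importing is often all it takes to fix a limb or mouth that isn't responding.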
Even at this point the results can be amazing. But the characters can do more than respond to motion capture: they can also feature 'behaviors' that can be set up and coded to be triggered with keyboard keys, such as a comic surprised expression, a character transformation, or even something like suddenly breathing fire. And it's not just heads, either; body parts can be animated as well. There is so much here to work with, and it's only going to get better.
The main benefit of this app is greatly reduced production time; Adobe hits it out of the park with Character Animator. I wanted to play around with the technology, but I didn't want to use my own voice (though you can certainly do that), and I wanted a more real-world application for it. So I decided to use one of the historical stories we produced for the Old Red Museum of Dallas County History and Culture a while back. The story I chose is about the massive Trinity River levee engineering project, undertaken in 1933 to corral the Trinity River and protect the city of Dallas from recurring devastating floods.
We had excellent voice actors for all 170+ stories we produced for the museum's 41 touch screens, so I used Phil Harrington's performance of a Dallas newspaperman's account of the enormous amount of earth that had to be moved to build the levees. For this test I modified one of the example characters Adobe makes available and gave it a fedora press hat. I think it worked out pretty well.
I set up Character Animator to convert the voice recording into the mouth movements needed for the lip sync, which was surprisingly accurate, and then used various techniques and multiple takes to perform the head movements, blinking, and eyelid and pupil movements, creating a fairly expressive character animation. It all went fairly quickly, too: a little over two hours, including multiple takes and tweaks as I was learning.
Hand-animating that same piece with lip-synced audio would have taken many, many hours. Days, even. This technology is going to prove extremely useful and effective for informational and educational videos, and I can't wait for a gig to present itself where I can use this app in production and really master it.