Wednesday, April 28, 2010


Comedy is the hardest form of writing. There's no formula, no recipe for making people laugh, and very few good tips on how to write funny fiction. But there are rules.

First, humour must look effortless. Comedy must appear to happen by accident or the reader will feel manipulated. 

Surprise is important to a good laugh. The unexpected often makes us laugh even when the situation isn’t funny.

As pleasant as it is to be silly in real life, silly doesn’t work on the page.  It just looks stupid. A waste of reading time.

Then there's comic timing, a rhythm that's even harder to do in writing than in stand-up comedy. Short sentences are funnier than long ones and several short sentences together can be funny. Punctuation and spacing on the page can also contribute to comedic timing. A full stop is often funnier than an ellipsis or a comma.

Stand-up comics claim there are funny words. THIRTY-TWO is apparently side-splitting, as are PUCK, KUMQUAT, PANTS, NOODLE and BELGIAN, among others. It must be the context or the human voice that makes them funny.

And good comedy is usually simple.  Pared down. Succinct. A word tincture.  Very difficult to do. Think Tolstoy's "Drops dripped", only funny.
Every time I sit down to write something funny, I break out in a sweat, my neck stiffens, my claws clench on the keyboard and I begin to cry. I'd never be able to earn my living as a comedy writer.

Not to worry: with the advent of the iPad, I'm pretty sure that e-books will soon have all sorts of audio and video links, including laugh tracks. We won't have to suffer so much as we write comedy; we'll just insert a link and slink away knowing that, even though we got a laugh, we failed to rise to the challenge of making readers laugh with mere words.

Sunday, April 11, 2010


Eloi CHAMPAGNE is a motion designer, a 2D and 3D animator and the President of STUDIOCRONOS, located in Montreal, Canada.


Q: How exactly does human motion get translated from the built-in wire channels and data points on the actors' black spandex suits into digital motion, and what role do animators have in the process? What do you start with on your computer screen when you begin a MOCAP scene? A stick figure, a CG model?

EC: I start with a file full of numbers, actually. MOCAP is not a visual process: it tracks data points on the actor's suit during the performance, and these data indicate the positions of specific points in 3D space along the X, Y, Z axes.
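To make Eloi's "file full of numbers" concrete, here is a toy sketch of what raw MOCAP data amounts to: per frame, each tracked marker has an X, Y, Z position. The marker names, values and the helper function are invented for illustration, not any real MOCAP file format.

```python
# Toy MOCAP data: one dict per frame, mapping marker name -> (x, y, z).
# All names and numbers are made up for illustration.
mocap_frames = [
    {"left_wrist": (0.42, 1.10, 0.05), "right_wrist": (-0.40, 1.08, 0.04)},
    {"left_wrist": (0.43, 1.12, 0.05), "right_wrist": (-0.41, 1.09, 0.04)},
]

def displacement(frames, marker):
    """Total distance a marker travels across the recorded frames."""
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        x0, y0, z0 = prev[marker]
        x1, y1, z1 = curr[marker]
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total
```

Software like MotionBuilder works on exactly this kind of per-frame positional data, just with many more markers and frames.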

Q: Do you import this file into a software programme?

EC: First I analyze and edit the file of MOCAP data in software like MotionBuilder, and when I'm happy with it, I import it into another programme, Softimage, which interprets the MOCAP data and builds a skeleton.

Q: Can you put MOCAP on the screen without CGI animation?

EC: It is possible for MOCAP to go directly to the screen, with different degrees of success. There are three ways this can be done: 

1. The performance is recorded, the file is edited by a MOCAP specialist, then the cleaned-up file is imported into the animation software and applied to the previously created character. The resulting movement, a little stiff and wooden, could be put directly on the screen without going through an animator. It's a workflow that we will see more and more in the gaming industry and possibly on some TV shows. 

2. A character, especially designed for real-time animation, can be moved directly by a performer in a MOCAP suit. I saw this done a few years back for a children's TV show. The new generation of GPUs (Graphics Processing Units), combined with the computing power that we now have, makes it possible to render some impressive characters in real time with fairly realistic lighting, texturing and physics simulation, although the action is still a bit stiff.

3. This is a kind of mix between the two preceding options. The actor's performance is linked directly to the 3D animation software, which records the action as KEYFRAMES. I remember using this technique over 10 years ago in Softimage! The idea was to animate a saltshaker pouring salt on fries. Should've been easy but it wasn't. Solution: linking the directional channels X, Y, Z to the mouse, with a different percentage for each mouse movement. All I needed to do was shake the mouse back and forth, as I would have done with the saltshaker, and voilà, basic MOCAP ready to render!
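The saltshaker trick above can be sketched in a few lines: one input stream (the mouse deltas) drives all three channels, each scaled by its own percentage. The weights here are invented, not the ones Eloi actually used, and this is not any real animation-package API.

```python
# Hypothetical per-channel weights (the "different percentage for each
# mouse movement" Eloi mentions) -- values are made up for illustration.
CHANNEL_WEIGHTS = {"x": 1.0, "y": 0.25, "z": 0.1}

def shake(mouse_deltas, weights=CHANNEL_WEIGHTS):
    """Turn a stream of 1-D mouse deltas into per-frame channel keys."""
    keys = []
    for d in mouse_deltas:
        # each channel gets the same input, scaled by its own weight
        keys.append({ch: d * w for ch, w in weights.items()})
    return keys

# Shaking the mouse back and forth yields oscillating keyframes:
frames = shake([+5, -5, +5, -5])
```

Every recorded frame becomes a keyframe on the linked channels, which is why this counts as "basic MOCAP": the performance, crude as it is, is captured as data.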

Q: What do CG animators do to the MOCAP files?

EC: The work of animators often involves animating extra limbs or elements like a tail, hair, strange ears, clothing etc., and any secondary motion, like the bouncing of flesh, that's not already controlled by the rig or the physics simulation of the animation software. 

Q: Can you do traditional animation tricks like stretch-and-squash, overlap, slow in-and-out in MOCAP animation?  

EC: One of the reasons why motion capture was so difficult to use a few years back was that it was so hard to edit. But now, with the creation of non-linear animation systems where the animation data can be treated like clips (layered, stretched, cropped, looped or modified), it is possible to adjust the characters' actions to obtain a cushioned slow in-and-out effect as well as a squash-and-stretch effect.
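The clip operations Eloi lists can be sketched in miniature. This is a toy model of the idea (animation data as editable clips), not the API of MotionBuilder or any real non-linear animation system; a "clip" here is just a list of values for one channel.

```python
def stretch(clip, factor):
    """Time-stretch a clip by nearest-sample resampling (kept simple)."""
    n = int(len(clip) * factor)
    return [clip[min(int(i / factor), len(clip) - 1)] for i in range(n)]

def loop(clip, times):
    """Repeat a clip end to end."""
    return clip * times

def layer(base, overlay, weight=0.5):
    """Blend two equal-length clips, e.g. MOCAP plus a hand-keyed fix."""
    return [b * (1 - weight) + o * weight for b, o in zip(base, overlay)]

walk = [0.0, 1.0, 2.0, 1.0]       # a toy one-channel clip
long_walk = loop(walk, 2)          # looped: twice the footage
slow_walk = stretch(walk, 2.0)     # stretched: same motion, half speed
```

Slowing a clip's ends more than its middle is exactly how a cushioned slow in-and-out is obtained from otherwise uniform MOCAP timing.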

Q: Do you think the work on BENJAMIN BUTTON was more advanced than AVATAR? 

EC:  No. BENJAMIN BUTTON shines in the superior quality of the 3D modeling, texturing, lighting and rendering of the heads in the various stages of the character's life and in the impressive compositing of the heads with the live actor. But the motion capture, as impressive as it was, was nothing new.

Q: What were the big technological advances in AVATAR?

EC: AVATAR really pushed the envelope with the use of a huge MOCAP stage, life-sized props and the capture of both body movement and facial expression (PERFCAP) simultaneously, as well as the use of virtual cameras and the innovative software MotionBuilder. MotionBuilder made AVATAR possible. To fully understand the technological impact of AVATAR, read Autodesk's white paper: "The New Art of Virtual Moviemaking". (This is a PDF file and will download.)

Q: James Cameron claims that AVATAR is not animation, but I'm told that what we see on screen is MOCAP "cleaned up" in CGI.

EC: Even though James Cameron used the most sophisticated motion capture tools on AVATAR, he still needed CG animators to perfect and refine the action, to animate tails, hair, clothing, tools and props etc., as well as the facial expressions. So AVATAR is MOCAP "cleaned up" or processed by CG animators. But it's not animation, because the body action and facial expression are generated by actors, not animators. It could be compared to rotoscoping, which entails filming an actor, then tracing each frame of the film by hand to create a 2D hand-drawn character. This is not to say that Max Fleischer, who invented rotoscoping, was not a real animator, or that Disney's SNOW WHITE, which uses it, is not real animation.

Q: Many CG animators have told me that they don’t stretch and squash their work because “humans don’t stretch and squash.” But humans do stretch and squash and most CG animation and MOCAP have no weight and feel lifeless because of this.  How to fix this problem?

EC: LOL! I know that this is your pet peeve! Well, you're right, and I think the reason it's been this way for so long is that it was incredibly difficult to build a character rig that would permit control of stretch-and-squash. And making a good character rig is still very difficult. I find it painful! But it can be rewarding, because if the rig is well done, then animating is much more fun and bouncy and lifelike. Take a look at this example of an excellent animation rig and the squishy plasticine-simulated animation made with that rig. Hard to believe it's CGI, isn't it? 
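One thing a good stretch-and-squash rig maintains is volume: stretch a character along one axis and it should thin out along the others, so it never looks like it gains or loses mass. Here is a toy illustration of that principle (mine, not Eloi's rig), with uniform sideways scaling assumed for simplicity.

```python
def squash_stretch(scale_y):
    """Scale factors (x, y, z) that stretch along Y but preserve volume.

    Shrinking both side axes by 1/sqrt(scale_y) keeps x * y * z == 1,
    the constant-volume property a well-built rig enforces.
    """
    side = 1.0 / scale_y ** 0.5
    return (side, scale_y, side)

stretched = squash_stretch(2.0)   # twice the height, thinner sides
squashed = squash_stretch(0.5)    # half the height, fatter sides
```

Without a rig enforcing something like this, naive scaling makes a stretched limb look inflated, which is part of why so much CG and MOCAP work avoids squash-and-stretch entirely.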

Q:  What do you see in the future for MOCAP and animation?

EC: First, some history. Real-time MOCAP animation was the goal when Montreal-based Kaydara Inc. developed Filmbox in 1996, later renamed MotionBuilder in 2002. Kaydara was acquired by Alias in 2004, then Alias was acquired by Autodesk in 2006. And now, after over 14 years of development, Autodesk's MotionBuilder has become a really impressive piece of software that makes MOCAP a useful tool for filmmakers. Will this replace the work of skilled animators? No doubt it will be used increasingly in the gaming industry. It will certainly create more realistic-looking aliens and monsters in the movies, but I really don't see why Pixar, Disney or smaller animation studios would have any use for MOCAP in their animated movies anytime soon, if ever. I think there will always be a place for pure 2-D and 3-D animation, rotoscoping, stop-motion or any other type of animation.

Thank you very much, Eloi, for giving us a clearer understanding of how MOCAP is done and the part animators play in it. Now we know that the actor's motions are translated into a file that looks like this, that software transforms these numbers into this and, with the help of animators, it ends up looking like this:

Here's what Eloi had to say on YouTube about how this video was made: 
May 16, 2010 - A short demo I quickly put together (quickly being a very relative term when working in animation and 3D...) for N.L. Lumière's blog post/interview about Mocap.
I did not want to invest too much time in it since it was not part of an internal project or a paying job. The idea was to show how easy it is to work with Mocap in a small project, so I voluntarily cut more than a corner or two. For instance, I did not spend as much time as I would normally to adjust the *envelope weights,* so the 3D model deformations are far from perfect. I did not make any effort to get proper muscle movement or flexing. In addition, other than what was animated via Mocap, I only animated the "tutu" (no facial expressions etc.). Finally, the texturing, rendering and compositing was kept as basic as possible just to make things go faster.
Nevertheless, I think it turned out to be a fun little informative video for people curious about 3D and mocap.

Wednesday, April 7, 2010


Writing is thinking and thinking is hard, even though we do it all the time. Of course, I'm talking about the organized thinking required for writing, not the self-absorbed mush that occupies our brain most of the time.

Writerly thinking requires order and discipline or, at the very least, chocolate. My brain can't be forced into thinking magical thoughts, conjuring up sparkling dialogue or even making sense of the world. It has to be coaxed and cajoled, bribed even: Come on, brain, we're going to play a little Candy Crush, gaze out the window for a bit, read a few Tweets with some soothing golf commentary in the background, maybe splash a little paint around, then, when you're feeling relaxed and limber and all warmed up, boom, you can start thinking big thoughts, okay?

Some people, like aerospace engineers, mathematicians and physicists, have called my brain-limbering and getting unstuck exercises “non-linear”, “un-productive” and “procrastination.” Non-linear? Yes! Un-productive and procrastination? Well, no and yes. Engineers and scientists' brains may be able to dive right into deep thoughts and get them smoothly on to paper without any wrangling, but not mine. My brain will let my fingers fly over the keyboard only after I’ve stared, frittered and chocolated.

You may say, my word, you do have a skittish, unruly brain and you’d be right. My brain is such a high-maintenance diva, you have no idea. She requires constant attention, sunshine, good food, French wine, jokes, tea and vast amounts of chocolate. She sits up there, arbitrating, du haut de sa grandeur, which thoughts should be written and which should be allowed to slide back down the medulla oblongata into oblivion.

Of the above brain-bribing tactics, staring into space is by far the most useful and most frequently used, even for a few seconds. You see people doing this all the time, actually. They look away, disengaging from a conversation, to think. Staring into space seems to be a way of disengaging from the writing, so we can think about what to write next. A sort of refresh button. Sometimes when I stare into space, I don't even think about writing, I just let my thoughts wander and, quite often, they make connections I would never have made while at the keyboard. Sometimes free thoughts stray down dark alleys and make me angry or sad or both. Unpleasant, but good writing emotions.

Not that I can’t wrangle my brain into the traces and force it to work when I have to. It will perform quite adequately when saddled up and bridled. It will do the paragraphs, punctuation and grammar but not much more. The words don’t flow out, they march out stiffly in jackboots. So, this is where those stashed-away spaced-out thoughts come in. Those captured passions and emotions, unexpected associations, mad ideas can now soften the stiffness, fill voids, brighten dull spots, make connections, add atmosphere, create plot points and give the story and the characters color, depth and dimensions.
Cogito ergo scribo.

SomeBeans said...
Scientists I know are all for getting in the right frame of mind for thinking, and spend plenty of time in mental doodling! Someone in my lab even suggested that we should consider washing up the coffee mugs as we did it - but that didn't work.
APRIL 4, 2010 10:44 AM

Nora Lumiere said...
"Mental doodling", lovely.  
Washing dishes doesn't work for me either.  So what exactly do scientists do while mentally doodling?  Blow something up, examine something under a microscope or dash off a formula?
APRIL 4, 2010 11:13 AM