Out of the Blue

How years of R&D, a revolutionary virtual camera system and long hours of detailed performance capture and keyframe animation gave birth to the 3-D, magical world of James Cameron’s awe-inspiring Avatar.

Raise your hand if you want to become a Na’vi, live on Pandora, ride butterfly-colored dragons, suck juice from fluorescent flowers, and plug your braid directly into a rainforest energy field.

By taking audiences to an alien planet created entirely out of his imagination (not filmed in Italy, Morocco, or anywhere else on this planet), director James Cameron’s personal rhapsody in blue called Avatar has transformed science fiction films. Never before have we spent hours living with the native people of another world, the 10-foot-tall willowy Na’vi, who have tails like lions, blue skin with little spots that light up, yellow eyes, and twitchy ears. Never before would we have thought this could ever be believable. And yet, here we are at the end of the first decade of the 21st century, visiting the planet Pandora and loving every minute. On opening weekend alone, the film earned $242 million worldwide.


Actor Sam Worthington plays Jake Sully, the hero of Avatar, a disabled soldier who inhabits and puppets a body built like a Na’vi, that is, an avatar. The avatar allows him to, in effect, breathe the air on Pandora and assimilate with the Na’vi.

In a brilliant piece of parallelism, Worthington also inhabits and puppets the CG avatar that we see in the film. He does so via a performance capture system designed, for his body, by Cameron’s Lightstorm Entertainment and Giant Studios, and a facial capture system designed by Weta Digital. Zoe Saldana plays Neytiri, the beautiful daughter of a Na’vi leader, who becomes Jake’s tutor and our guide into the Na’vi ways. Sigourney Weaver as Dr. Grace Augustine and Joel Moore as Norm Spellman also inhabit Na’vi avatars.

All the avatars and Na’vi are CG characters that live in a totally CG world created at Weta Digital in Wellington, New Zealand. Joe Letteri supervised that work, spreading the effort among six visual effects supervisors, Dan Lemmon, Stephen Rosenbaum, Eric Saindon, Wayne Stables, Chris White, and Guy Williams.

In Los Angeles, on a motion capture stage, with props built to emulate the digital landscape that Weta would create later, Cameron could see, in realtime, actors’ performances applied to digital characters in a game-like version of Pandora’s environment. As the actors performed, Cameron could ‘film’ the action using a virtual camera system, a nine-inch LCD screen with a steering wheel around it.

Data captured from tracking markers on the camera and from all the performers, even their facial expressions, moved through Autodesk’s MotionBuilder. Cameron was fearless. He filmed Jake and Neytiri giving emotional performances in tight close-ups of their faces.


‘Jim [Cameron] had done some facial motion capture tests for another film and was looking at a video head rig,’ Letteri says. ‘We knew that the more cameras, the more accurate the data, but they get in the way. Our goal was to work with a single camera system and software to track everything and re-project it back onto 3D characters.’

Rather than using the typical retro-reflective markers on the actors’ faces, Weta applied green dots with makeup. Each actor wore a helmet with a tiny camera attached to a boom arm. The boom arm swung into place between their nose and upper lip, high enough to capture their eyes, yet close enough to capture mouth movement. ‘We could track the facial gestures and muscle movement in realtime and apply the facial capture to the characters in realtime as Jim [Cameron] captured the performances,’ says Rosenbaum.
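The core idea Letteri describes, tracking facial dots from a single camera and re-projecting them onto a 3D character, can be illustrated with a toy version: solve for blendshape weights that best explain the tracked 2D marker positions. This is a generic least-squares sketch; the function names, data shapes, and approach are hypothetical, not Weta’s actual system.

```python
# Illustrative sketch: fitting blendshape weights to tracked 2-D face
# markers, in the spirit of a single-camera facial capture pipeline.
import numpy as np

def solve_blendshape_weights(neutral, deltas, observed):
    """Least-squares fit: observed ~ neutral + sum(weights * deltas).

    neutral:  (m, 2) rest positions of m face markers in the image
    deltas:   (k, m, 2) per-blendshape marker displacements
    observed: (m, 2) marker positions tracked in the current frame
    Returns k blendshape weights, clamped to [0, 1].
    """
    k = deltas.shape[0]
    A = deltas.reshape(k, -1).T          # (2m, k) basis matrix
    b = (observed - neutral).ravel()     # (2m,) observed displacement
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Tiny example: two mouth-corner markers, one "smile" blendshape that
# moves both corners outward and up. Actor is captured at half smile.
neutral = np.array([[0.0, 0.0], [1.0, 0.0]])
smile = np.array([[[-0.1, 0.1], [0.1, 0.1]]])    # k = 1 blendshape
observed = neutral + 0.5 * smile[0]
w = solve_blendshape_weights(neutral, smile, observed)
```

The solved weights can then drive the same blendshapes on the CG character’s face rig, which is one common way a 2D capture feeds a 3D performance.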

A ‘solving’ team at Weta cleaned the data from the body and facial capture sessions and applied it to animation rigs in Autodesk’s Maya. ‘The motion data drove our systems as if an animator was doing keyframes,’ says Andy Jones, animation director. ‘It gave us a lot of the life and frame-by-frame motion that animators don’t want to do. Our biggest challenge was probably facial animation.’
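One common step in ‘cleaning’ captured motion data is filtering out sensor jitter before the curves drive a rig. The sketch below is a generic centered moving-average filter on a per-frame animation curve, offered only as an illustration of the idea, not as Weta’s actual solving tools.

```python
# Hedged sketch: smoothing jitter out of a captured motion curve
# (e.g. one joint rotation channel sampled per frame).
import numpy as np

def smooth_curve(values, window=5):
    """Centered moving average over a 1-D animation curve. Edges are
    padded by repeating end values so the curve keeps its length."""
    if window < 2:
        return np.asarray(values, dtype=float)
    pad = window // 2
    padded = np.pad(np.asarray(values, dtype=float), pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# 60 frames of a clean motion with synthetic capture noise added.
frames = np.arange(60)
truth = np.sin(frames * 0.2)
noisy = truth + np.random.default_rng(0).normal(0.0, 0.05, 60)
clean = smooth_curve(noisy, window=5)
```

Real cleanup pipelines do far more (gap filling, marker swaps, retargeting), but the principle is the same: remove artifacts while keeping the life of the performance.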

Animators could look at footage taken with the facial camera and at HD reference and scrub the faces they animated to see if everything moved in the same way as the actors’ faces. ‘Jim would do 10 or 15 takes and pick the ones he wanted,’ Jones says. ‘He might like the way a lip moved, the inflections in the face. We had to make sure it was all in there and then some.’

Although the motion data was especially helpful for lip sync and mouth movement, animators keyframed much of the brow and eye animation, and the ears, which weren’t captured. They also keyframed Na’vi hands, fingers, and tails.


In addition to the Na’vi, the animation team had a planet full of fantasy creatures to animate: flying dragon-like creatures, six-legged horse- and panther-like creatures, bugs, insects, and luminescent white jellyfish-like souls that rain from the skies. ‘We worked for four months just doing motion studies and tests,’ Jones says.

A new volume-preserving muscle system that calculates fat layers more accurately than previous systems at Weta added secondary motion. ‘It took probably a year of R&D and another year to make sure it worked properly in the shots,’ Saindon says. ‘We had similar ideas before, but this system is completely written from the ground up to do more accurate simulations and more accurate volumes.’
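The volume-preservation idea Saindon describes can be made concrete with back-of-the-envelope math: model a muscle as a cylinder and thicken it as it shortens so its volume stays constant. This toy calculation is only an illustration of the principle, not Weta’s simulation system.

```python
# Worked toy example: a cylindrical muscle that bulges as it contracts,
# keeping its volume constant.
import math

def bulge_radius(rest_length, rest_radius, current_length):
    """Radius that keeps a cylindrical muscle's volume constant:
    pi * r0^2 * L0 = pi * r^2 * L  =>  r = r0 * sqrt(L0 / L)."""
    return rest_radius * math.sqrt(rest_length / current_length)

# A muscle contracting to 80% of its rest length bulges about 12% thicker.
r = bulge_radius(rest_length=10.0, rest_radius=1.0, current_length=8.0)
```

A production system solves this over deforming volumes with fat layers on top, but the constraint it enforces is this same conservation of volume.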

As astounding as the Na’vi and these creatures are, the environment they live in is astonishing as well. At night, the exotic plants in shades of blue, purple, orange, white, and green shimmer and glow with bioluminescence, a stunning design that reflects Cameron’s love of underwater photography.

The landscape team at Weta started with simple representations of the environment from Lightstorm, FBX files that showed where Cameron wanted plants placed for particular camera angles. The digital landscapers then painted areas where they wanted plants to grow, and a landscape system placed pre-built geometry at the correct angles. The average plant had 100,000 polygons, and each plant had dynamics so that something could move on its own or react to the characters in every frame.
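The paint-then-populate workflow described above can be sketched in a few lines: artists paint a grayscale density map, and a scattering routine drops plant instances where the paint is dense. The function below is a hypothetical rejection-sampling toy, not Weta’s actual landscape tools.

```python
# Illustrative sketch of density-painted scattering for set dressing.
import numpy as np

def scatter_plants(density_map, count, seed=0):
    """Sample `count` (row, col) placements from a 2-D density map by
    rejection sampling: a candidate cell is kept with probability equal
    to its painted density value (0..1)."""
    rng = np.random.default_rng(seed)
    rows, cols = density_map.shape
    placements = []
    while len(placements) < count:
        r, c = rng.integers(rows), rng.integers(cols)
        if rng.random() < density_map[r, c]:
            placements.append((r, c))
    return placements

# Paint: dense forest on the left half of the map, bare ground on the right.
density = np.zeros((16, 16))
density[:, :8] = 0.9
plants = scatter_plants(density, count=100)
```

A production system would also orient each instance to the terrain and vary species and scale, but the density-driven placement is the heart of it.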

‘We also created a riverbed, an ocean-front scene, floating mountains, and there are waterfalls everywhere,’ Williams says. ‘It’s like an Amazon jungle on steroids. All those aerial shots in the third act take place in 3D space with 3D mountains and 3D trees because stereo is unforgiving. To have Jake running through a forest, the camera needs to move as fast as he does. We have 3D fluid solvers for water and clouds, hardly any 2D elements; this is a true stereo show with everything rendered in stereo.’


Rendering happened through Pixar’s RenderMan 15. ‘Although we wrote a ray tracing engine to calculate spherical harmonics, we pushed everything through RenderMan at the end,’ Letteri says.
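Spherical harmonics give a renderer a compact way to store lighting: an environment’s radiance is projected onto a small set of basis functions over the sphere. The sketch below does that projection by Monte Carlo integration for the first two SH bands. It is a self-contained toy under standard graphics conventions, not Weta’s ray tracing engine.

```python
# Hedged sketch: projecting environment lighting onto low-order real
# spherical harmonics by Monte Carlo integration over the sphere.
import numpy as np

def sh_basis(d):
    """First two real SH bands (4 coefficients) for unit directions d (n, 3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),   # Y_0^0  (constant)
        0.488603 * y,                # Y_1^-1
        0.488603 * z,                # Y_1^0
        0.488603 * x,                # Y_1^1
    ], axis=1)

def project_sh(radiance_fn, samples=200_000, seed=0):
    """Monte Carlo: c_i = (4*pi / N) * sum radiance(d) * Y_i(d) over
    uniformly sampled unit directions d."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    vals = radiance_fn(d)
    return (4 * np.pi / samples) * (sh_basis(d) * vals[:, None]).sum(axis=0)

# Example: light arriving from straight overhead with cosine falloff.
coeffs = project_sh(lambda d: np.maximum(d[:, 2], 0.0))
```

Once lighting is reduced to a handful of coefficients like these, evaluating diffuse illumination per shading point becomes a short dot product, which is why SH representations are popular in film and game lighting pipelines.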

To handle compositing for the stereo 3D film, Weta built a new compositing pipeline based on Apple’s Shake. ‘It does everything in parallel,’ Letteri says. ‘We had to do volumetric lighting, smoke and fire, and composite them volumetrically. It’s all depth based, so characters running through the jungle look good. Every piece is in the right space.’
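The ‘depth-based’ compositing Letteri describes means every pixel carries a Z value, so layers can be merged by comparing distances from the camera rather than by stacking them in a fixed order. The function below is a minimal per-pixel Z-merge sketch, a toy illustration rather than the actual Shake-based pipeline.

```python
# Sketch of depth-based compositing: merge two RGB + depth layers by
# letting the sample nearer the camera (smaller Z) win at each pixel.
import numpy as np

def z_merge(color_a, z_a, color_b, z_b):
    """Per-pixel merge of two layers.

    color_a, color_b: (h, w, 3) RGB images
    z_a, z_b:         (h, w) per-pixel depth; smaller = nearer camera
    """
    near_a = (z_a <= z_b)[..., None]     # (h, w, 1) mask, broadcast over RGB
    return np.where(near_a, color_a, color_b)

# 1x2 image: at the left pixel layer A is nearer; at the right, layer B.
a = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red layer
za = np.array([[1.0, 5.0]])
b = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue layer
zb = np.array([[2.0, 3.0]])
out = z_merge(a, za, b, zb)
```

Production depth compositing also has to handle soft edges, transparency, and volumetrics, but the per-pixel depth comparison is what lets characters running through a jungle interleave correctly with every plant around them.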

And that’s what makes this film so immersive and compelling. With a digital assist from Industrial Light & Magic, the mean aerial warships and the cruel cowboys at their helm provide stark contrast to the native Pandorean world, visually and emotionally. And their actions keep moving the story ahead. But, it’s the strangely beautiful and elegant Na’vi, the fascinating Pandorean creatures (even the nasty purple panther ones), and the extraordinary ecosystem on Pandora that makes us want to return to this film and enter this world again.

Fox’s Avatar is still playing in theaters nationwide. The feature surpassed Cameron’s own Titanic as the biggest-grossing picture in the U.S. in February.