Pixar and the Art of Curbing CG Noise

How a UCSB computer science student helped Pixar develop a new noise-eliminating solution.

For many people, the words Monte Carlo conjure images of high-stakes casinos serving cocktails that are shaken, not stirred. But for those who render computer-generated images, “Monte Carlo” is anything but glamorous. It refers to the random-sampling technique behind ray-traced rendering, whose typical by-product is noise…like film grain artifacts, but on steroids. So it’s not surprising that CG production studios have long sought de-noising solutions. But existing strategies have been too costly and labor-intensive for feature film production thus far. As Mark Meyer of Pixar Research Group explains, “The computational requirements are so large for the complexity levels of our images. It’s only recently that computers are finally catching up to the math that they’re being asked to do.”

This is where computer scientists from the University of California at Santa Barbara enter the scene. Steve Bako, a UCSB doctoral student, was collaborating with fellow researchers Nima Kalantari and Pradeep Sen on a machine learning approach to filter out Monte Carlo noise. They trained their lab’s neural network to make “noisy” images look more like images that had been computed with greater numbers of light rays. While their process can be considered a subset of artificial intelligence, Bako notes that machine learning has been around for decades. “But now that hardware is so powerful, we can do millions of calculations in parallel. That’s why there’s been an explosion of complicated machine learning algorithms for various applications.”
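The core idea Bako describes, fitting a filter so that noisy renders come out looking like images computed with far more rays, can be illustrated with a deliberately tiny sketch. Production denoisers use deep convolutional networks and many auxiliary features; the toy version below, which is illustrative only and not Pixar’s or UCSB’s actual code, fits a single 3×3 filter by gradient descent on one synthetic noisy/clean training pair:

```python
import numpy as np

rng = np.random.default_rng(0)

def convolve3x3(img, kernel):
    """Valid 3x3 convolution of a 2D grayscale image (pure numpy)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

# Synthetic stand-in for one training pair: a smooth "converged" image
# and a noisy render of it (Gaussian noise mimicking Monte Carlo grain).
x = np.linspace(0.0, 1.0, 32)
clean = 0.5 + 0.4 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

kernel = np.full((3, 3), 1.0 / 9.0)   # start from a plain box blur
target = clean[1:-1, 1:-1]            # crop to match the valid convolution
h, w = noisy.shape

lr = 0.05
for _ in range(1000):
    err = convolve3x3(noisy, kernel) - target
    grad = np.zeros((3, 3))
    for dy in range(3):
        for dx in range(3):
            # d(mean squared error) / d(kernel[dy, dx])
            grad[dy, dx] = 2.0 * np.mean(err * noisy[dy:dy + h - 2, dx:dx + w - 2])
    kernel -= lr * grad

mse_noisy = np.mean((noisy[1:-1, 1:-1] - target) ** 2)
mse_denoised = np.mean((convolve3x3(noisy, kernel) - target) ** 2)
```

After training, the learned filter brings the noisy image measurably closer to the clean one than it started. A real system replaces this single filter with a deep network and trains on thousands of diverse image pairs, which is why the varied Finding Dory frames made such good training material.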

Bako and his colleagues published their research in 2015, and Pixar took note. As Bako remarks, “Pixar had moved to a ray-tracing-based rendering system, and they were paying attention to de-noising methods out of necessity.” When Bako joined Pixar as a 2016 summer intern, he quickly found himself in hyperdrive. He had “trained” UCSB’s computers on about 20 garden-variety images—a typical academic assortment of lamps and desks. “On my first day at Pixar, they gave me Finding Dory,” he recalls. “The images were diverse, and ideal for setting up a machine learning framework. Machine learning is good for a studio because they can train on one movie, and it’s ready to go for another. They’ll be able to take their Dory-trained network and throw more data at it for another movie.”


The viability of this approach soon became evident, notes Tony DeRose, who heads Pixar Research Group. “We experimented with noisy images from Cars 3 to see what the algorithm would do. It did very well on that, and also with images from Coco and Piper.” While this project proceeded at Pixar, researchers at Disney Research Zurich were also testing de-noising on Big Hero 6.

As Pixar’s Meyer explains, “Projects that use machine learning used to take months and months, but now we’re training within a few days to a week.”  While this strategy has clear potential, DeRose cautions that it could take a year or two to be production-ready. “It’s still speculative enough that a producer couldn’t budget it for a particular film,” he notes.

In the meantime, Pixar’s collaboration with UCSB scientists continues. Back in his lab in Santa Barbara, Bako is able to access studio assets, and he’s pushing forward using the Tungsten open source renderer. The results have been striking enough that Bako presented them at SIGGRAPH 2017. He couldn’t release any information that’s proprietary to Pixar, but Bako has posted enough open source information online so that others can test and build upon this research. As Meyer observes, “We’ve found that we reap benefits from letting other people continue to come up with ideas that move the whole field forward.”  It is proof, as Bako notes, “that academic institutions really can make a difference.”

Disney-Pixar’s latest feature film Coco will be released in theaters on November 22.

