SIGGRAPH 2020 Digs in to CG Trends with Tech Papers Program

“AnisoMPM: Animating Anisotropic Damage Mechanics” © 2020 University of Pennsylvania


SIGGRAPH 2020 announces 163 selected research projects from 24 countries as part of its world-renowned Technical Papers program. Throughout its now 47-year history, the conference has continuously delivered innovative, cutting-edge research across the many subfields of computer graphics. With its recent move to virtual, SIGGRAPH 2020 is working with contributing researchers to offer a new way for participants to discover this content, and will announce details in the coming weeks.

Each year, the SIGGRAPH Technical Papers program sets the pace for what’s next in visual computing. From 443 submissions, with additional selections from the peer-reviewed journal ACM Transactions on Graphics (TOG), experts on the 2020 program jury chose each project through a double-blind review process. Two trends emerged from this year’s selections: the pervasiveness of deep learning — beyond its established use for images, new applications are proposed for animation, geometry, and more — and a move “back to basics” through the use of 2D graphics for icons, sketches, diagrams, and strokes.

“I am thrilled to be announcing a sneak peek at the amazing work of researchers who continue to think beyond what’s possible in visual computing, and cannot wait to see how these projects fuel memorable discussions during the first-ever digital SIGGRAPH,” said SIGGRAPH 2020 Technical Papers Chair Szymon Rusinkiewicz of Princeton University. “The papers submitted were as strong as they’ve ever been and the community’s research output remains incredible.”

Along with new research from Stanford University, Facebook, Microsoft, Pixar Animation Studios, Google, MIT and NVIDIA, highlights from the 2020 Technical Papers program include:

AnisoMPM: Animating Anisotropic Damage Mechanics

Authors: Joshuah Wolper, Minchen Li, Yu Fang, Ziyin Qu, Jiecong Lu, Meggie Cheng, and Chenfanfu Jiang, University of Pennsylvania; and Yunuo Chen, University of Pennsylvania and University of Science and Technology of China

With this paper, researchers present AnisoMPM: a robust and general approach that couples anisotropic damage evolution and anisotropic elastic response to animate the dynamic fracture of isotropic, transversely isotropic, and orthotropic materials. (Pictured)

A Massively Parallel and Scalable Multi-GPU Material Point Method

Authors: Xinlei Wang, Zhejiang University/University of Pennsylvania; Yuxing Qiu, University of California, Los Angeles/University of Pennsylvania; Stuart R. Slattery, Oak Ridge National Laboratory; Yu Fang and Minchen Li, University of Pennsylvania; Song-Chun Zhu and Yixin Zhu, University of California, Los Angeles; Min Tang, Zhejiang University; Dinesh Manocha, University of Maryland; and Chenfanfu Jiang, University of Pennsylvania

A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters

Authors: Jungdam Won, Deepa Gopinath, and Jessica Hodgins, Facebook AI Research

A System for Efficient 3D Printed Stop-Motion Face Animation

Authors: Rinat Abdrashitov, Alec Jacobson, and Karan Singh, University of Toronto

Accurate Face Rig Approximation With Deep Differential Subspace Reconstruction

Authors: Steven Song, Blue Sky Studios; Weiqi Shi, Yale University; and Michael Reed, Blue Sky Studios

XNect: Real-Time Multi-Person 3D Motion Capture with a Single RGB Camera

Authors: Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, and Mohamed Elgharib, Max Planck Institute for Informatics; Pascal Fua, EPFL; Hans-Peter Seidel, Max Planck Institute for Informatics; Helge Rhodin, UBC; and Gerard Pons-Moll and Christian Theobalt, Max Planck Institute for Informatics

ARAnimator: In-situ Character Animation in Mobile AR With User-Defined Motion Gestures

Authors: Hui Ye, Kin Chung Kwan, Wanchao Su, and Hongbo Fu, School of Creative Media, City University of Hong Kong

The Eyes Have It: An Integrated Eye and Face Model for Photorealistic Facial Animation

Authors: Gabriel Schwartz, Shih-En Wei, Te-Li Wang, Stephen Lombardi, Tomas Simon, Jason Saragih, and Yaser Sheikh, Facebook Reality Labs
