SIGGRAPH 2019 (July 28-August 1, Los Angeles) has announced its Technical Papers and Art Papers programs. Continuing the event’s 46-year history of delivering cutting-edge global research to the computer science community, this year’s combined programs highlight 157 projects from 31 different countries.
“Each year, the Technical Papers program sets the pace for what’s next in visual computing and the adjacent subfields of computer science. I am excited to be part of presenting the amazing work of researchers who drive the industry and look forward to how this work ignites memorable discussions,” said SIGGRAPH 2019 Technical Papers Chair Olga Sorkine-Hornung. “This is the kind of content you’ll reflect on, and refer to, throughout the year to come.”
Technical Papers programming is open to participants at the Full Conference Platinum and Full Conference registration levels only. Art Papers programming is open to the Experiences level and above. Learn more about SIGGRAPH 2019 and register here: s2019.SIGGRAPH.org/register.
Along with new research from various academic labs, Facebook Reality Labs, NVIDIA, and Disney Research, highlights from the 2019 Technical Papers program include:
Semantic Photo Manipulation with a Generative Image Prior
Authors: David Bau, Massachusetts Institute of Technology and MIT-IBM Watson AI Lab; Hendrik Strobelt, IBM Research and MIT-IBM Watson AI Lab; William Peebles, Jonas Wulff, Jun-Yan Zhu, and Antonio Torralba, Massachusetts Institute of Technology; and, Bolei Zhou, The Chinese University of Hong Kong
We use GANs to make semantic edits on a user’s image. Our method maintains fidelity to the original image while allowing the user to manipulate the semantics of the image.
MeshCNN: A Network with an Edge
Authors: Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, and Daniel Cohen-Or, Tel Aviv University; and, Shachar Fleishman, Amazon
MeshCNN is a deep neural network for triangular meshes, which applies convolution and pooling layers directly on the mesh edges. MeshCNN learns to exploit the irregular and unique mesh properties.
Text-Based Editing of Talking-Head Video
Authors: Ohad Fried, Michael Zollhöfer, and Maneesh Agrawala, Stanford University; Ayush Tewari and Christian Theobalt, Max Planck Institute for Informatics; Adam Finkelstein and Kyle Genova, Princeton University; Eli Shechtman and Zeyu Jin, Adobe; and, Dan B. Goldman, Google
Text-based editing of talking-head video supports adding, removing, and modifying words in the transcript, and automatically produces video with lip synchronization that matches the modified script.
SurfaceBrush: From Virtual Reality Drawings to Manifold Surfaces
Authors: Enrique Rosales, University of British Columbia and Universidad Panamericana; Jafet Rodriguez, Universidad Panamericana; and, Alla Sheffer, University of British Columbia
VR tools enable users to depict 3D shapes using virtual brush strokes. SurfaceBrush converts such VR drawings into user-intended manifold 3D surfaces, providing a novel approach for modeling 3D shapes.
Puppet Master: Robotic Animation of Marionettes
Authors: Simon Zimmermann, James Bern, and Stelian Coros, ETH Zurich; and, Roi Poranne, ETH Zurich and University of Haifa
We present a computational framework for robotic animation of real-world string puppets, based on a predictive control model that accounts for the puppet’s dynamics and the kinematics of the robot puppeteer.
The Art Papers program offers a platform to explore and interrogate research that focuses specifically on scientific and technological applications in art, design, and the humanities. Highlights for 2019 include:
CAVE: Making Collective Virtual Narrative
Authors: Kris Layng, Ken Perlin, and Sebastian Herscher, New York University/Courant and Parallux; Corrine Brenner, New York University; and, Thomas Meduri, New York University/Courant and VRNOVO
CAVE is a shared narrative virtual reality experience. Thirty participants at a time each saw and heard the same narrative from their own unique location in the room, as they would when attending live theater. CAVE set out to disruptively change how audiences collectively experience immersive art and entertainment.
Learning to See: You Are What You See
Authors: Memo Akten and Rebecca Fiebrink, Goldsmiths, University of London; and, Mick Grierson, University of the Arts, London
“Learning to See” utilizes a novel method of “performing” visual, animated content — with an almost photographic visual style — using deep learning. It demonstrates both the collaborative potential of AI and the inherent biases reflected and amplified in artificial neural networks, and perhaps even in our own neural networks.