Deep Disentangled Representations for Volumetric Reconstruction
Date
2016-08-31
Language
en
Abstract
We introduce a convolutional neural network for inferring a compact,
disentangled graphical description of objects from 2D images that can be
used for volumetric reconstruction. The network comprises an encoder and
a twin-tailed decoder. The encoder generates a disentangled graphics code;
the first decoder tail generates a volume from that code, and the second
tail reconstructs the input image. A novel training regime allows the
graphics code to learn a separate representation of the 3D object and a
description of its lighting and pose conditions. We demonstrate this
method by generating volumes and disentangled graphical descriptions from
images and videos of faces and chairs.
Faculty
Faculteit der Sociale Wetenschappen
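The following is a minimal, hypothetical sketch of the architecture described in the abstract: an encoder mapping a 2D image to a compact graphics code, a first decoder tail mapping the object part of that code to an occupancy volume, and a second tail reconstructing the input image from the full code. The class name, layer sizes, code dimensions, and the split point between the object and lighting/pose parts of the code are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of the encoder / twin-tailed decoder described in the
# abstract. Layer sizes and the object vs. lighting/pose split of the
# graphics code are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class DisentangledReconstructionNet(nn.Module):
    def __init__(self, code_dim=200, object_dim=160):
        super().__init__()
        # Encoder: 2D image -> compact graphics code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, code_dim),
        )
        self.object_dim = object_dim  # remaining units describe lighting/pose

        # Decoder tail 1: object part of the code -> 32^3 occupancy volume.
        self.volume_decoder = nn.Sequential(
            nn.Linear(object_dim, 256 * 4 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4, 4)),
            nn.ConvTranspose3d(256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

        # Decoder tail 2: full code (object + lighting/pose) -> 64x64 image.
        self.image_decoder = nn.Sequential(
            nn.Linear(code_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        code = self.encoder(image)                 # disentangled graphics code
        object_code = code[:, : self.object_dim]   # 3D object description only
        volume = self.volume_decoder(object_code)  # volumetric reconstruction
        reconstruction = self.image_decoder(code)  # image under lighting/pose
        return code, volume, reconstruction


# Example: a batch of four 64x64 grayscale images.
net = DisentangledReconstructionNet()
code, volume, recon = net(torch.randn(4, 1, 64, 64))
print(code.shape, volume.shape, recon.shape)

In this sketch only the object portion of the code reaches the volume tail, so the volumetric reconstruction cannot depend on the lighting and pose units, while the image-reconstruction tail sees the full code. This mirrors the separation the abstract describes, but the authors' training regime is not reproduced here.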