Deep Disentangled Representations for Volumetric Reconstruction

dc.contributor.advisor: Gerven, M.A.J. van
dc.contributor.advisor: Kohli, P.
dc.contributor.author: Grant, E.N.
dc.contributor.other: University of Cambridge
dc.description.abstract: We introduce a convolutional neural network for inferring a compact, disentangled graphical description of objects from 2D images that can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled graphics code. The first decoder generates a volume, and the second decoder reconstructs the input image using a novel training regime that allows the graphics code to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate this method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.
dc.thesis.faculty: Faculteit der Sociale Wetenschappen (Faculty of Social Sciences)
dc.thesis.specialisation: Master Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.title: Deep Disentangled Representations for Volumetric Reconstruction
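The abstract describes an encoder that maps a 2D image to a graphics code split into an object part and a lighting/pose part, with two decoder tails: one producing a volume from the object part alone, the other reconstructing the input image from the full code. A minimal shape-level sketch of that data flow follows; the dimensions, the linear maps standing in for the convolutional networks, and all function names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the thesis).
IMG_DIM = 32 * 32          # flattened 2D input image
OBJ_DIM, VIEW_DIM = 16, 8  # disentangled partitions: 3D object vs. lighting/pose
VOL_DIM = 8 * 8 * 8        # flattened output volume

# Random linear maps stand in for the convolutional encoder/decoders.
W_enc = rng.standard_normal((IMG_DIM, OBJ_DIM + VIEW_DIM)) * 0.01
W_vol = rng.standard_normal((OBJ_DIM, VOL_DIM)) * 0.01
W_img = rng.standard_normal((OBJ_DIM + VIEW_DIM, IMG_DIM)) * 0.01

def encode(image):
    """Map an image to a graphics code, split into object and view parts."""
    code = image @ W_enc
    return code[:OBJ_DIM], code[OBJ_DIM:]

def decode_volume(z_object):
    """First decoder tail: a volume from the object partition only."""
    return (z_object @ W_vol).reshape(8, 8, 8)

def decode_image(z_object, z_view):
    """Second decoder tail: reconstruct the input from the full code."""
    return np.concatenate([z_object, z_view]) @ W_img

image = rng.standard_normal(IMG_DIM)
z_obj, z_view = encode(image)
volume = decode_volume(z_obj)          # shape (8, 8, 8)
recon = decode_image(z_obj, z_view)    # shape (1024,)
```

Because only `z_obj` feeds the volume tail while both partitions feed the reconstruction tail, training both tails jointly pressures the code to separate shape from lighting and pose, which is the disentanglement idea the abstract outlines.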
Original bundle
Grant, E._MSc_Thesis.pdf (979.34 KB, Adobe Portable Document Format): Thesis text