Novel View Synthesis with Light Field Networks and Shift-Equivariant Convolutional Neural Networks

Issue Date

2022-08-11

Language

en

Abstract

The use of Extended Reality (XR) in games, healthcare, and museums has been shown to increase the sense of immersion for its users. Immersion is related to the Degrees of Freedom (DoF), that is, the freedom the user has to move around the virtual world. However, a higher DoF requires more viewpoints to render the novel views resulting from these movements. Capturing all of these viewpoints directly would require recording the entire scene with a camera for each one, which is infeasible. A more efficient approach is to synthesize novel views from a sparse set of existing views; the field of novel view synthesis focuses on generating new views from existing ones. The recent introduction of Scene Representation Networks (SRNs) and Light Field Networks (LFNs) showed that it is possible to generate new views from a single 2D image in real time. However, the LFN is limited in that it cannot leverage 2D Convolutional Neural Networks (CNNs) to obtain the required visual information, because CNNs are sensitive to shifts in the input. Recent developments show that 2D CNNs can be made robust to such shifts. This work extends the LFN so that it can use a 2D CNN to obtain visual information from the input. The results show that the original LFN outperforms the proposed extension. However, the results also suggest that this direction of research is promising, and future work may yield a version of the extended LFN that does outperform the original.
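The abstract combines two ingredients: a shift-robust 2D CNN encoder and a light field network that maps rays directly to colors. Below is a minimal, illustrative PyTorch sketch of how these could fit together; it is not the thesis's actual implementation. It assumes a BlurPool-style anti-aliased downsampling layer (low-pass blur before subsampling, in the spirit of Zhang, 2019, "Making Convolutional Networks Shift-Invariant Again") for shift robustness, 6D Plücker-coordinate ray inputs as in the original LFN paper, and simple concatenation for conditioning (the original LFN conditions via a hypernetwork). All class names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlurPool2d(nn.Module):
    """Anti-aliased downsampling: low-pass blur, then stride-2 subsampling.

    Replacing plain strided convolution/pooling with blur + subsample
    reduces the CNN's sensitivity to small shifts of the input.
    """

    def __init__(self, channels, stride=2):
        super().__init__()
        # Fixed 3x3 binomial (approximately Gaussian) low-pass filter.
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)
        k = k / k.sum()
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).clone())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        # Depthwise convolution: each channel is blurred independently.
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)


class ShiftRobustEncoder(nn.Module):
    """2D CNN encoder whose downsampling steps are blur-pooled."""

    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            BlurPool2d(64),                      # 2x anti-aliased downsample
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            BlurPool2d(128),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # global pooling over space
        )
        self.fc = nn.Linear(256, latent_dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.fc(self.net(img).flatten(1))  # (B, latent_dim)


class LightFieldNetwork(nn.Module):
    """MLP mapping a Plücker-parameterized ray (6D) plus a scene latent
    directly to an RGB color, with one network evaluation per ray."""

    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, rays, z):                  # rays: (B, N, 6), z: (B, D)
        z = z.unsqueeze(1).expand(-1, rays.shape[1], -1)
        return self.mlp(torch.cat([rays, z], dim=-1))  # (B, N, 3) colors
```

Given an input image `img` of shape `(B, 3, H, W)` and a batch of camera rays of shape `(B, N, 6)`, `z = encoder(img)` followed by `rgb = lfn(rays, z)` produces one color per ray, so a novel view can be rendered with a single network evaluation per pixel ray, which is what makes LFN-style rendering real-time.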

Faculty

Faculteit der Sociale Wetenschappen