Paper published in the CVIU Journal

10 October 2024

Novel view synthesis using Neural Radiance Fields (NeRF) is the leading technique for generating high-quality images from new viewpoints. However, existing methods rely on predefined extrinsic and intrinsic camera parameters, limiting their usability in real-world applications where camera properties vary. Current research has focused on refining extrinsic camera parameters, often assuming fixed intrinsic parameters or requiring pre-processing. Additionally, most approaches handle only a single intrinsic camera model, restricting their adaptability across diverse camera systems.

To address these limitations, a joint research effort by Friedrich-Alexander University Erlangen-Nürnberg (FAU), the Technical University of Munich (TUM), and the University of the Bundeswehr Munich (UniBw M) introduces NeRFtrinsic Four, an end-to-end trainable NeRF framework that jointly optimizes both extrinsic and diverse intrinsic camera parameters. The method leverages Gaussian Fourier features to estimate extrinsics and dynamically predicts varying intrinsics using projection error supervision. NeRFtrinsic Four outperforms existing methods on standard benchmarks (LLFF, BLEFF) and introduces the iFF dataset, specifically designed to test varying intrinsic camera properties.
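To give a sense of the Gaussian Fourier feature idea mentioned above, the sketch below shows a generic PyTorch mapping of a low-dimensional input to a high-frequency embedding, followed by a small pose-regression head. This is a minimal illustration of the general technique, not the authors' implementation; the input (a normalized per-image index), the feature sizes, the value of sigma, and the 6-DoF pose head are all assumptions made for the example.

```python
import math
import torch
import torch.nn as nn

class GaussianFourierFeatures(nn.Module):
    """Map a low-dimensional input x to gamma(x) = [sin(2*pi*Bx), cos(2*pi*Bx)],
    where B is a fixed matrix sampled from a Gaussian with std sigma."""

    def __init__(self, in_dim: int, num_features: int = 64, sigma: float = 10.0):
        super().__init__()
        # B is sampled once and kept fixed (registered as a non-trainable buffer).
        self.register_buffer("B", torch.randn(in_dim, num_features) * sigma)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> embedding: (batch, 2 * num_features)
        proj = 2.0 * math.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

if __name__ == "__main__":
    # Hypothetical usage: embed a normalized image index and regress
    # 6 pose parameters (rotation + translation) with a small MLP.
    embed = GaussianFourierFeatures(in_dim=1, num_features=64)
    pose_head = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 6))
    image_index = torch.linspace(0.0, 1.0, steps=10).unsqueeze(-1)  # (10, 1)
    poses = pose_head(embed(image_index))                           # (10, 6)
    print(poses.shape)
```

Such a mapping lets a small network represent high-frequency variation in its output (here, per-image camera poses) despite receiving a very low-dimensional input; the sigma hyperparameter controls the frequency bandwidth of the embedding.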

This research has been published in the Computer Vision and Image Understanding (CVIU) journal.



NeRFtrinsic Four: Advancing View Synthesis with Joint Camera Optimization
Hannah Schieber, Fabian Deuser, Bernhard Egger, Norbert Oswald, Daniel Roth

[LINK]