FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality

🥇ISMAR 2022 Best Journal Paper

Nianchen Deng1*, Zhenyi He2*, Jiannan Ye1, Budmonde Duinkharjav2, Praneeth Chakravarthula3, Xubo Yang1,4†, and Qi Sun2†

1: Shanghai Jiao Tong University
2: New York University
3: UNC-Chapel Hill
4: Peng Cheng Laboratory
*: These authors contributed equally to this work
†: Corresponding Authors

Abstract

Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low-latency, high-quality rendering of synthetic imagery with reduced compute overheads. Recent advances in neural rendering have shown promise for unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. Specifically, neural radiance fields (NeRF) demonstrate that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by wide field-of-view, high-resolution, and stereoscopic/egocentric viewing, which typically cause low rendering quality and high latency. In VR, this not only harms the interaction experience but may also cause sickness.

To tackle these problems and enable six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. We then jointly optimize latency/performance and visual quality, mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conduct both objective analysis and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as the first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real time.
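To give a flavor of what "gaze-contingent" synthesis means in practice, the minimal Python sketch below composites a high-detail foveal layer over a coarse, wide-FoV peripheral layer using a smooth radial falloff centered at the gaze point. This is an illustrative assumption, not the paper's actual pipeline: the functions render_foveal/render_peripheral are stand-ins for hypothetical fine and coarse neural renderers, and only the blending logic is shown.

# Illustrative sketch (not the authors' implementation): gaze-contingent
# compositing of a high-detail foveal layer over a coarse peripheral layer.
# Requires only NumPy; the two input images stand in for neural renders.
import numpy as np

def radial_blend_mask(height, width, gaze_px, inner_radius, outer_radius):
    """Weight map: 1 inside the fovea, 0 in the periphery, smooth in between."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    t = np.clip((dist - inner_radius) / (outer_radius - inner_radius), 0.0, 1.0)
    return (1.0 - t) ** 2  # quadratic falloff toward the periphery

def composite_foveated(foveal_rgb, peripheral_rgb, gaze_px, inner_radius, outer_radius):
    """Blend a sharp foveal render into a coarse full-frame peripheral render."""
    h, w, _ = peripheral_rgb.shape
    mask = radial_blend_mask(h, w, gaze_px, inner_radius, outer_radius)[..., None]
    return mask * foveal_rgb + (1.0 - mask) * peripheral_rgb

if __name__ == "__main__":
    h, w = 480, 640
    foveal_rgb = np.random.rand(h, w, 3)      # pretend high-detail layer
    peripheral_rgb = np.random.rand(h, w, 3)  # pretend coarse, wide-FoV layer
    frame = composite_foveated(foveal_rgb, peripheral_rgb,
                               gaze_px=(320, 240), inner_radius=60, outer_radius=180)
    print(frame.shape)  # (480, 640, 3)

In such a scheme, most of the compute savings come from rendering the peripheral layer at reduced resolution, which human visual acuity tolerates away from the gaze point; the blend radii here are arbitrary illustrative values.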

Keywords: Virtual Reality; Gaze-Contingent Graphics; Neural Representation; Foveated Rendering

Video

Citation

@article{deng2022fovnerf,
  title = {FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality},
  author = {Deng, Nianchen and He, Zhenyi and Ye, Jiannan and Duinkharjav, Budmonde and 
            Chakravarthula, Praneeth and Yang, Xubo and Sun, Qi},
  year = 2022,
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  pages = {1--11},
  doi = {10.1109/TVCG.2022.3203102}
}

Resources