NEnv: Neural Environment Maps for Global Illumination

Carlos Rodríguez-Pardo*, Javier Fabre*, Elena Garcés, Jorge López-Moreno
Computer Graphics Forum (Eurographics Symposium on Rendering Conference Proceedings), 2023



Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, daylight illumination. These approaches hinder both accuracy and generality, and do not provide the probability information required for importance-sampling Monte Carlo integration. We propose NEnv, a fully-differentiable deep-learning method capable of compressing a single environment map and learning to sample from it. NEnv is composed of two different neural networks: a normalizing flow, which maps samples from uniform distributions to the probability density of the illumination while also providing their corresponding probabilities; and an implicit neural representation, which compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude lower than with traditional methods. NEnv makes no assumptions regarding the content (e.g., natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.
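For context, the traditional environment-map importance sampling that NEnv accelerates tabulates a discrete 2D distribution over the map's luminance and inverts its marginal and conditional CDFs. A minimal sketch of that baseline (function and variable names are illustrative, not from the paper's implementation):

```python
import numpy as np

def build_sampler(env_luminance):
    """Tabulated-CDF importance sampler for an equirectangular environment map.

    env_luminance: (H, W) array of per-texel scalar luminance.
    Returns a function mapping two uniform variates to a texel and its probability.
    """
    H, W = env_luminance.shape
    # Weight each row by sin(theta) to account for solid angle on the sphere.
    theta = (np.arange(H) + 0.5) / H * np.pi
    weights = env_luminance * np.sin(theta)[:, None]
    pmf = weights / weights.sum()
    # Marginal CDF over rows, then conditional CDF over columns within each row.
    row_pmf = pmf.sum(axis=1)
    row_cdf = np.cumsum(row_pmf)
    col_cdf = np.cumsum(pmf / row_pmf[:, None], axis=1)

    def sample(u1, u2):
        # Invert the marginal and conditional CDFs via binary search.
        r = np.searchsorted(row_cdf, u1)
        c = np.searchsorted(col_cdf[r], u2)
        return r, c, pmf[r, c]  # texel indices and their discrete probability

    return sample
```

Building these tables and searching them per sample is what makes classic evaluation costly at high resolutions; NEnv's normalizing flow instead produces samples and their densities directly in a differentiable way.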


@inproceedings{RodriguezPardo2023NEnv,
    author = {Rodriguez-Pardo, Carlos and Fabre, Javier and Garces, Elena and Lopez-Moreno, Jorge},
    title = {NEnv: Neural Environment Maps for Global Illumination},
    booktitle = {Computer Graphics Forum (Eurographics Symposium on Rendering Conference Proceedings)},
    year = {2023},
    publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
    ISSN = {1467-8659},
    DOI = {10.1111/cgf.14883}
}


We would like to thank Luis Romero for his help designing the scenes we rendered. Elena Garces was partially supported by a Juan de la Cierva - Incorporacion Fellowship (IJC2020-044192-I). This publication is part of the project V+Real, PID2021-122392OB-I00 funded by MCIN/AEI/10.13039/501100011033/FEDER, UE.


Carlos Rodríguez-Pardo –