A Learned Radiance-Field Representation for Complex Luminaires

EGSR 2022


A living room scene containing a complex luminaire represented with our learned radiance field. From left to right: reference path-traced scene with an explicitly-modeled complex luminaire (32768 samples per pixel, 7.2 h); the same path-traced scene rendered with 64 samples per pixel in 1.3 minutes; and our method with 32 samples per pixel, in 52.1 seconds. We leverage learned volumetric radiance fields to obtain high-quality representations of complex luminaires.

We propose an efficient method for rendering complex luminaires using a high-quality, octree-based representation of the luminaire emission. Complex luminaires are a particularly challenging problem in rendering due to the caustic light paths inside the luminaire. We reduce the geometric complexity of luminaires by using a simple proxy geometry, and encode the visually complex emitted light field using a neural radiance field (NeRF). We tackle the challenges of using NeRFs to represent luminaires, including their high dynamic range, high-frequency content, and null-emission areas, by proposing a specialized loss function. For rendering, we distill our luminaires' NeRF into a plenoctree, which can be easily integrated into traditional rendering systems. Our approach yields speed-ups of up to two orders of magnitude in scenes containing complex luminaires while introducing minimal error.
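To illustrate how such a distilled, octree-based emission representation could plug into a conventional path tracer, the following is a minimal Python sketch. The OctreeLuminaire class, its query API, and the spherical-harmonic coefficient layout are assumptions made purely for exposition; they are not the paper's implementation.

# Illustrative sketch (not the authors' code): querying a luminaire whose emitted
# light field has been distilled into a sparse octree of spherical-harmonic (SH)
# coefficients. The octree.query API and leaf layout are assumed for exposition.

import numpy as np

def eval_sh_basis(degree, d):
    """Real SH basis evaluated for direction d = (x, y, z); degrees 0-1 only here."""
    x, y, z = d
    basis = [0.282095]                                        # l = 0
    if degree >= 1:
        basis += [0.488603 * y, 0.488603 * z, 0.488603 * x]   # l = 1
    return np.array(basis)

class OctreeLuminaire:
    """Proxy-geometry luminaire whose emission is stored per octree leaf as SH coefficients."""

    def __init__(self, octree, sh_degree=1):
        self.octree = octree          # assumed: leaf lookup, position -> SH coefficients
        self.sh_degree = sh_degree

    def emitted_radiance(self, hit_point, view_dir):
        # 1. Find the octree leaf containing the hit point on the proxy geometry.
        leaf = self.octree.query(hit_point)       # assumed API
        if leaf is None:
            return np.zeros(3)                    # null-emission region

        # 2. Evaluate the direction-dependent emission from the stored SH coefficients.
        basis = leaf and eval_sh_basis(self.sh_degree, view_dir)   # (n_coeffs,)
        rgb = leaf.sh_coeffs @ basis                               # (3, n_coeffs) @ (n_coeffs,)
        return np.maximum(rgb, 0.0)               # clamp negative SH ringing

In an integrator, when a camera or shading ray hits the luminaire's proxy geometry, the emission term would be fetched as luminaire.emitted_radiance(hit_point, -ray_direction) instead of evaluating an analytic emitter model.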



Interactive comparison: Ours vs. VPT (color / B&W)

Supplemental video: Temporal Stability

Downloads

Paper [PDF ~64 MB]
Citation

Jorge Condor, Adrián Jarabo. A Learned Radiance-Field Representation for Complex Luminaires. Eurographics Symposium on Rendering (EGSR), 2022.

@inproceedings {Condor2022,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{A Learned Radiance-Field Representation for Complex Luminaires}},
  author = {Condor, Jorge and Jarabo, Adrián},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221155}
}


Acknowledgements

We want to thank the authors of [Zhu21] for sharing their code with us. This work has been partially supported by the European Research Council (ERC) under the EU Horizon 2020 research and innovation programme (project CHAMELEON, grant No 682080), the EU MSCA-ITN programme (project PRIME, grant No 956585) and the Spanish Ministry of Science and Innovation (project PID2019-105004GB-I00). Jorge Condor acknowledges support from a grant from the I3A Institute (Beca TFM + Practicas).

