Abstract
Accurate material modeling is crucial for achieving photorealistic rendering, bridging the gap between computer-generated imagery and real-world photographs. While traditional approaches rely on tabulated BRDF data, recent work has shifted towards implicit neural representations, which offer compact and flexible frameworks for a range of tasks. However, their behavior in the frequency domain remains poorly understood.
To address this, we introduce FreNBRDF, a frequency-rectified neural material representation. By leveraging spherical harmonics, we integrate frequency-domain considerations into neural BRDF modeling. We propose a novel frequency-rectified loss, derived from a frequency analysis of neural materials, and incorporate it into a generalizable and adaptive reconstruction and editing pipeline. This framework enhances fidelity, adaptability, and efficiency.
Extensive experiments demonstrate that FreNBRDF improves the accuracy and robustness of material appearance reconstruction and editing compared to state-of-the-art baselines, enabling more structured and interpretable downstream tasks and applications.

Main methodology
For the network, we adopt a set encoder [21], which is permutation-invariant and accepts inputs of arbitrary size. It takes an arbitrary set of samples as input, each formed by concatenating BRDF values with their coordinates, and consists of four fully connected layers with two hidden layers of 128 neurons and ReLU activations. The reconstruction loss between two NBRDFs is defined as the sum of the L1 loss between samples of the two underlying BRDFs and two regularization terms, one on the NBRDF weights w and one on the latent embeddings z.
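To make this concrete, below is a minimal PyTorch sketch of a permutation-invariant set encoder and the reconstruction loss described above. The 128-unit hidden widths and ReLU activations follow the text; the exact layer arrangement, latent dimension, pooling choice, and regularization weights are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: layer sizes beyond the 128-unit hidden widths, the latent
# dimension, mean pooling, and the regularization weights are assumptions.
import torch
import torch.nn as nn


class SetEncoder(nn.Module):
    """Permutation-invariant encoder: a shared per-sample MLP followed by pooling."""

    def __init__(self, in_dim=6, hidden_dim=128, latent_dim=32):
        super().__init__()
        # Fully connected layers with two 128-unit hidden layers and ReLU,
        # following the description above (exact depth is an interpretation).
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, samples):
        # samples: (batch, num_samples, 6) = concatenation of Rusinkiewicz
        # coordinates and RGB reflectance values.
        per_sample = self.mlp(samples)      # (batch, num_samples, latent_dim)
        return per_sample.mean(dim=1)       # pooling makes the output order-invariant


def reconstruction_loss(pred_rgb, target_rgb, nbrdf_weights, z,
                        lambda_w=1e-4, lambda_z=1e-4):
    """L1 loss between BRDF samples plus regularizers on NBRDF weights w and
    latent embedding z (the regularizer form and weights are assumptions)."""
    l1 = torch.abs(pred_rgb - target_rgb).mean()
    reg_w = sum(w.pow(2).sum() for w in nbrdf_weights)
    reg_z = z.pow(2).sum()
    return l1 + lambda_w * reg_w + lambda_z * reg_z
```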

BRDF dataset and codebase references
Inspired by prior work FrePolad (ECCV'24), HyperBRDF (ECCV'24), and NeuMaDiff, we adopt the MERL dataset (2003), which contains measured reflectance functions of 100 real-world materials, as our main dataset. Its diversity and data-driven nature make it suitable for both statistical and neural-network-based methods. Each BRDF is represented as a 90 × 90 × 180 × 3 floating-point array, mapping uniformly sampled input angles (θ_H, θ_D, ϕ_D) under the Rusinkiewicz reparametrization to reflectance values in R^3.
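For reference, here is a minimal sketch of reading one MERL .binary file into the 90 × 90 × 180 × 3 array described above. The header layout and per-channel scale factors follow the publicly documented MERL format; the file path in the usage comment is a placeholder.

```python
# Hedged sketch of a MERL BRDF loader; the path below is a placeholder.
import numpy as np

# Per-channel scale factors from the original MERL data release (R, G, B).
MERL_SCALE = np.array([1.0 / 1500.0, 1.15 / 1500.0, 1.66 / 1500.0])


def load_merl(path):
    with open(path, "rb") as f:
        dims = np.fromfile(f, dtype=np.int32, count=3)   # (90, 90, 180)
        data = np.fromfile(f, dtype=np.float64)
    assert data.size == 3 * int(np.prod(dims))
    # Values are stored channel-major: all red samples, then green, then blue.
    brdf = data.reshape(3, *dims) * MERL_SCALE[:, None, None, None]
    # Reorder to (theta_h, theta_d, phi_d, 3) to match the description above.
    return np.transpose(brdf, (1, 2, 3, 0))


# Example usage: brdf = load_merl("brdfs/alum-bronze.binary")
#                brdf.shape == (90, 90, 180, 3)
```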
Reconstructed Materials
We present 30 reconstructed materials, rendered under consistent scene and lighting conditions, in the figure below. The results demonstrate that FreNBRDF effectively learns the material distribution and produces high-quality, faithful reconstructions.

Material editing
Our pipeline provides a low-dimensional space of neural materials, enabling material editing by linearly interpolating between the embeddings of different materials (see the sketch at the end of this section). We compare different models on this task; the ground truth is obtained by directly interpolating the MERL materials [3]. The figure below illustrates interpolations between five pairs of MERL materials, where each row represents one interpolation.
The smooth transitions between the two endpoints demonstrate that FreNBRDF serves as a robust and effective implicit neural representation for materials. We also report the relevant metrics in Tab. 2 (above), computed over 2000 randomly interpolated materials. The results show that materials represented by FreNBRDF exhibit consistently higher quality than the two baselines.
Compared to the reconstruction results in Tab. 1, the interpolated materials produced by the two baselines show degraded performance, while those generated by FreNBRDF maintain similar quality. This indicates that FreNBRDF effectively captures the underlying distribution of neural materials.
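As a usage illustration of the editing interface, the sketch below linearly interpolates between the embeddings of two materials and decodes each intermediate code back to a BRDF. The `encoder` and `decoder` names, their call signatures, and the number of steps are assumptions about the interface, not the exact FreNBRDF API.

```python
# Hedged sketch of latent-space material editing via linear interpolation;
# `encoder` and `decoder` stand in for the trained model components.
import torch


def interpolate_materials(encoder, decoder, samples_a, samples_b, steps=5):
    """Return decoded BRDFs along the straight line between two embeddings."""
    with torch.no_grad():
        z_a = encoder(samples_a)                 # embedding of material A
        z_b = encoder(samples_b)                 # embedding of material B
        results = []
        for t in torch.linspace(0.0, 1.0, steps):
            z_t = (1.0 - t) * z_a + t * z_b      # linear interpolation in latent space
            results.append(decoder(z_t))         # decoded (neural) BRDF at this step
    return results
```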
Citation
If you find the paper or code useful, please consider citing:

@misc{zhou2025FreNBRDF,
title={FreNBRDF: A Frequency-Rectified Neural Material Representation},
author={Chenliang Zhou and Zheyuan Hu and Cengiz Oztireli},
year={2025},
eprint={2507.00476},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2507.00476},
}
The website template was inspired by FrePolad.