Abstract

Accurate material modeling is crucial for achieving photorealistic rendering, bridging the gap between computer-generated imagery and real-world photographs. While traditional approaches rely on tabulated BRDF data, recent work has shifted towards implicit neural representations, which offer compact and flexible frameworks for a range of tasks. However, their behavior in the frequency domain remains poorly understood.
To address this, we introduce FreNBRDF, a frequency-rectified neural material representation. By leveraging spherical harmonics, we integrate frequency-domain considerations into neural BRDF modeling. We propose a novel frequency-rectified loss, derived from a frequency analysis of neural materials, and incorporate it into a generalizable and adaptive reconstruction and editing pipeline. This framework enhances fidelity, adaptability, and efficiency.
Extensive experiments demonstrate that FreNBRDF improves the accuracy and robustness of material appearance reconstruction and editing compared to state-of-the-art baselines, enabling more structured and interpretable downstream tasks and applications.

Overview
Overview of FreNBRDF architecture.

Main methodology

For the network, we adopt a set encoder [21], which is permutation-invariant and flexible with respect to input size. It takes an arbitrary set of samples as input, each the concatenation of a BRDF value and its coordinates, and consists of four fully connected layers, with two hidden layers of 128 neurons each and ReLU activations.
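The architecture above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the input dimension, latent dimension, and mean pooling as the symmetric aggregation are assumptions; the layer count, width, and ReLU activations follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class SetEncoder:
    """Permutation-invariant encoder over an arbitrary-size set of samples."""

    def __init__(self, in_dim=9, hidden=128, latent=32):
        # in_dim: concatenated BRDF value and coordinates (dimension assumed)
        dims = [in_dim, hidden, hidden, hidden, latent]
        self.weights = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
                        for a, b in zip(dims[:-1], dims[1:])]
        self.biases = [np.zeros(b) for b in dims[1:]]

    def __call__(self, samples):
        # samples: (N, in_dim), N may vary per input set
        h = samples
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = relu(h @ W + b)
        h = h @ self.weights[-1] + self.biases[-1]
        # Pooling over the set axis makes the output independent of both
        # the ordering and (in structure) the number of input samples.
        return h.mean(axis=0)

enc = SetEncoder()
x = rng.standard_normal((50, 9))
z = enc(x)
z_perm = enc(x[rng.permutation(50)])
assert np.allclose(z, z_perm)  # order of samples does not matter
```

Mean pooling is one common choice of symmetric aggregation; sum or max pooling would preserve permutation invariance equally well.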
The reconstruction loss between two NBRDFs is defined as the sum of the L1 loss between samples of the two underlying BRDFs and two regularization terms, on the NBRDF weights w and the latent embeddings z:
$$\mathcal{L}_{\text{rec}} = \sum_{i} \left\lVert f(x_i) - \hat{f}(x_i) \right\rVert_1 + \lambda_w \lVert w \rVert_2^2 + \lambda_z \lVert z \rVert_2^2 \qquad (1)$$
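The reconstruction loss described above can be sketched as a short function. The regularization strengths `lam_w` and `lam_z` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def reconstruction_loss(pred, target, w, z, lam_w=1e-4, lam_z=1e-3):
    """L1 term between BRDF samples plus L2 regularization on the NBRDF
    weights w and the latent embedding z (Eq. (1); lambdas assumed)."""
    l1 = np.abs(pred - target).sum()
    return l1 + lam_w * np.sum(w ** 2) + lam_z * np.sum(z ** 2)

pred = np.zeros((10, 3))    # predicted BRDF samples
target = np.ones((10, 3))   # ground-truth BRDF samples
loss = reconstruction_loss(pred, target, w=np.zeros(5), z=np.zeros(4))
assert loss == 30.0  # pure L1 term when w and z are zero
```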
A summary of the main methodology, i.e. how Frequency-Rectified Neural BRDFs are constructed and optimized based on prior work.
Frequency Rectification

The key insight is that these frequency coefficients contain the extracted frequency information at each degree l and order m. We can therefore define a frequency-rectified loss on BRDFs as the mean squared error of the frequency coefficients, and incorporate this loss into the reconstruction loss Eq. (1).
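A minimal sketch of this idea: project two spherical signals onto a real spherical-harmonics basis (hardcoded here up to degree l = 2) and take the mean squared error of the resulting coefficients. The basis truncation, Monte Carlo projection, and sampling scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sh_basis(dirs):
    """Real SH basis up to degree 2, evaluated at unit directions (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c = np.sqrt
    return np.stack([
        0.5 * c(1 / np.pi) * np.ones_like(x),       # l=0, m=0
        c(3 / (4 * np.pi)) * y,                     # l=1, m=-1
        c(3 / (4 * np.pi)) * z,                     # l=1, m=0
        c(3 / (4 * np.pi)) * x,                     # l=1, m=1
        0.5 * c(15 / np.pi) * x * y,                # l=2, m=-2
        0.5 * c(15 / np.pi) * y * z,                # l=2, m=-1
        0.25 * c(5 / np.pi) * (3 * z ** 2 - 1),     # l=2, m=0
        0.5 * c(15 / np.pi) * x * z,                # l=2, m=1
        0.25 * c(15 / np.pi) * (x ** 2 - y ** 2),   # l=2, m=2
    ], axis=1)

def sh_coefficients(f, dirs):
    # Monte Carlo projection: 4*pi * E[f * Y] over uniform sphere samples.
    return 4 * np.pi * (f[:, None] * sh_basis(dirs)).mean(axis=0)

def frequency_rectified_loss(f1, f2, dirs):
    c1, c2 = sh_coefficients(f1, dirs), sh_coefficients(f2, dirs)
    return np.mean((c1 - c2) ** 2)  # MSE over frequency coefficients

# Uniformly sample unit directions for the projection.
rng = np.random.default_rng(0)
dirs = rng.standard_normal((4096, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

f = 1.0 + dirs[:, 2]        # a smooth spherical signal
g = 1.0 + dirs[:, 2] ** 2   # a different one
assert frequency_rectified_loss(f, f, dirs) == 0.0
assert frequency_rectified_loss(f, g, dirs) > 0.0
```

Identical signals give zero loss; signals differing in their low-frequency content give a strictly positive loss, which is the quantity the rectification term penalizes.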

BRDF datasets and codebases references

Following prior work (FrePolad (ECCV'24), HyperBRDF (ECCV'24), and NeuMaDiff), we adopt the MERL dataset (2003) as our main dataset; its diversity and data-driven nature make it suitable for both statistical and neural-network-based methods.
It contains 100 measured real-world materials. Each BRDF is represented as a 90 × 90 × 180 × 3 floating-point array, mapping uniformly sampled input angles (θ_H, θ_D, ϕ_D) under the Rusinkiewicz reparametrization to reflectance values in R^3.
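The array layout above can be illustrated with a simple lookup. This is a hypothetical sketch assuming the uniform angular sampling described here; the actual MERL binary format additionally applies per-channel scaling factors, which are omitted.

```python
import numpy as np

N_TH, N_TD, N_PD = 90, 90, 180  # resolution in (theta_H, theta_D, phi_D)

def lookup(brdf, theta_h, theta_d, phi_d):
    """brdf: (90, 90, 180, 3) array; angles in radians.

    theta_H and theta_D span [0, pi/2]; phi_D is folded into [0, pi)
    by BRDF reciprocity, matching the 180-bin axis.
    """
    i = min(int(theta_h / (np.pi / 2) * N_TH), N_TH - 1)
    j = min(int(theta_d / (np.pi / 2) * N_TD), N_TD - 1)
    k = min(int((phi_d % np.pi) / np.pi * N_PD), N_PD - 1)
    return brdf[i, j, k]  # RGB reflectance in R^3

brdf = np.ones((N_TH, N_TD, N_PD, 3))  # dummy data in place of a MERL file
rgb = lookup(brdf, 0.3, 0.7, 1.0)
assert rgb.shape == (3,)
```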

Reconstructed Materials

We present 30 reconstructed materials rendered under consistent scene and lighting conditions in the figure below. The results demonstrate that FreNBRDF effectively learns the material distribution and produces high-quality, faithful reconstructions.
vis-reconstruction
FreNBRDF reconstructs 30 MERL materials with high quality, indicating that FreNBRDF effectively learns the material distribution (ground truth on the left, reconstruction on the right).
In Tab. 1, we compare the performance of FreNBRDF on the material reconstruction task with two state-of-the-art baselines: the method of Gokbudak et al. [6] and a naive NBRDF [4] reconstruction pipeline, as described in Sec. 3.1, without frequency rectification. From the results, we can see that FreNBRDF outperforms both baselines across most metrics evaluating frequency compliance, rendering quality, and visual similarity, confirming the effectiveness of our proposed approach. Note that the higher RMSE score for FreNBRDF is likely due to its reconstruction loss (Eq. 11) being designed to enforce consistency in both the spatial and frequency domains.
tbl-both
Quantitative comparison of FreNBRDF with state-of-the-art baselines.

Material editing

Our pipeline provides a low-dimensional space of neural materials, enabling material editing by linearly interpolating between embeddings of different materials. We compare different models on this task; the ground truth can be obtained by directly linearly interpolating the MERL materials [3]. The figure below illustrates interpolations between five pairs of MERL materials, where each row represents one interpolation.
vis-editing
FreNBRDF smoothly interpolates between five pairs of MERL materials; each row shows one interpolation, with the ground-truth endpoint materials at either end.
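The editing operation described above amounts to a convex combination in latent space followed by decoding. In this sketch, `decode` is a placeholder linear map standing in for the network that turns an embedding z into a material; the latent dimension and the decoder itself are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder decoder (an assumption): maps a latent embedding z to a
# material representation. In the actual pipeline this is a neural network.
W = rng.standard_normal((32, 8))
decode = lambda z: z @ W

z_a = rng.standard_normal(32)   # embedding of material A
z_b = rng.standard_normal(32)   # embedding of material B

# Interpolate the embeddings, then decode each blend into a material.
alphas = np.linspace(0.0, 1.0, 5)
edited = [decode((1 - a) * z_a + a * z_b) for a in alphas]

assert np.allclose(edited[0], decode(z_a))   # endpoints recover the inputs
assert np.allclose(edited[-1], decode(z_b))
```

Editing quality then hinges on how well-structured the latent space is: interpolated embeddings should decode to plausible materials, which is what the comparison in Tab. 2 measures.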

The smooth transitions between the two endpoints demonstrate the capability of our FreNBRDF as a robust and effective implicit neural representation for materials. We also report the relevant metrics in Tab. 2 (above), computed over 2000 randomly interpolated materials. The results show that materials represented by FreNBRDF exhibit consistently higher quality compared to the two baselines.
Compared to the reconstruction results in Tab. 1, the interpolated materials produced by the two baselines show degraded performance, while those generated by FreNBRDF maintain similar quality. This indicates that FreNBRDF effectively captures the underlying distribution of neural materials.

Citation

If you found the paper or code useful, please consider citing:
@misc{zhou2025FreNBRDF,
      title={FreNBRDF: A Frequency-Rectified Neural Material Representation}, 
      author={Chenliang Zhou and Zheyuan Hu and Cengiz Oztireli},
      year={2025},
      eprint={2507.00476},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2507.00476}, 
}


The website template was inspired by FrePolad.