Accurate Differential Operators for Hybrid Neural Fields

Our approach can compute accurate derivatives (illustrated here by surface normals) from hybrid neural fields like Instant NGP, compared to Autodiff.

Abstract

Neural fields have become widely used in various fields, from shape representation to neural rendering, and for solving partial differential equations (PDEs). With the advent of hybrid neural field representations like Instant NGP that leverage small MLPs and explicit representations, these models train quickly and can fit large scenes. Yet in many applications like rendering and simulation, hybrid neural fields can cause noticeable and unreasonable artifacts. This is because they do not yield accurate spatial derivatives needed for these downstream applications.

In this work, we propose two ways to circumvent these challenges. Our first approach is a post hoc operator that uses local polynomial-fitting to obtain more accurate derivatives from pre-trained hybrid neural fields. Additionally, we also propose a self-supervised fine-tuning approach that refines the neural field to yield accurate derivatives directly while preserving the initial signal. We show the application of our method on rendering, collision simulation, and solving PDEs. We observe that using our approach yields more accurate derivatives, reducing artifacts and leading to more accurate simulations in downstream applications.

High-frequency noise in hybrid neural fields

We observe that hybrid neural fields contain high-frequency noise in the learned signal. This noise can cause artifacts in derivatives computed using automatic differentiation.

(Left) Inaccurate differential operators of neural fields. Hybrid neural SDF of a circle in 2D. (Right) Fourier spectrum of a hybrid neural SDF. Computed over a 1D slice (dashed line in left) of the SDF of a 2D circle.
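The spectrum analysis can be illustrated with a toy NumPy sketch (this is a simplified stand-in, not the paper's actual measurement: the "hybrid field" is mimicked by adding small white noise to an analytic SDF slice):

```python
import numpy as np

# Toy spectrum analysis: take a 1D slice of a circle's SDF, add small
# high-frequency noise (mimicking interpolation artifacts of a hybrid
# field), and compare the Fourier magnitudes of the two signals.
x = np.linspace(-2.0, 2.0, 1024)
clean = np.abs(x) - 1.0                       # SDF of a circle along y = 0
rng = np.random.default_rng(0)
noisy = clean + 1e-3 * rng.standard_normal(x.shape)

spec_clean = np.abs(np.fft.rfft(clean))
spec_noisy = np.abs(np.fft.rfft(noisy))

# In the upper half of the spectrum, the noise floor dominates: this is
# the high-frequency content that corrupts autodiff derivatives.
tail = slice(len(spec_clean) // 2, None)
```

Even noise that is visually imperceptible in the signal itself (here 0.1% of the signal scale) stands out clearly in the high-frequency tail, and differentiation amplifies exactly those frequencies.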

These artifacts cause issues in downstream applications like rendering and solving PDEs with hybrid neural fields.

Method

We propose two approaches to alleviate this problem: post hoc polynomial-fitting operators and a fine-tuning approach to improve the accuracy of autodiff derivatives.

Given a pre-trained hybrid neural field with noisy autodiff derivatives, we propose two approaches for accurate derivatives. Our polynomial-fitting operator can be applied in a post hoc manner while our fine-tuning approach directly improves autodiff derivatives of the field.

Post hoc polynomial-fitting derivatives

Given a pre-trained hybrid neural field and a query point, we propose fitting a polynomial through the local neighborhood of the query point. We then compute autodiff derivatives of this fitted polynomial instead of the learned signal.
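A minimal NumPy sketch of this operator is shown below. The sampling pattern, neighborhood size, and polynomial degree here are placeholder choices for illustration, not the paper's exact settings, and `field` stands in for any pre-trained hybrid neural field:

```python
import numpy as np

def polyfit_gradient(field, x, h=0.01, n_samples=64, seed=0):
    """Estimate the gradient of `field` at point `x` (shape (d,)) by
    least-squares fitting a local quadratic polynomial and
    differentiating the polynomial instead of the field itself."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample offsets uniformly in a cube of half-width h around x.
    offsets = rng.uniform(-h, h, size=(n_samples, d))
    values = np.array([field(x + o) for o in offsets])
    # Design matrix for a quadratic: [1, o_i, o_i * o_j (i <= j)].
    cols = [np.ones(n_samples)]
    cols += [offsets[:, i] for i in range(d)]
    for i in range(d):
        for j in range(i, d):
            cols.append(offsets[:, i] * offsets[:, j])
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    # The gradient at x is the vector of linear coefficients.
    return coef[1:1 + d]

# Example: the SDF of a unit circle; the true gradient at (2, 0) is (1, 0).
sdf = lambda p: np.linalg.norm(p) - 1.0
g = polyfit_gradient(sdf, np.array([2.0, 0.0]))
```

Because the least-squares fit averages over many samples, high-frequency noise in the field is smoothed out before differentiation, which is what makes the resulting derivatives more accurate than pointwise autodiff.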

Fine-tuning approach

Our post hoc operators require changes to downstream pipelines. To avoid this, we also propose a fine-tuning approach that aligns the autodiff derivatives of hybrid neural fields with smoothed derivatives from alternative sources, such as our post hoc operators or finite-difference stencils, while preserving the original 0th-order signal. The fine-tuning objective is agnostic to the choice of smoothed derivative operator.
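A minimal sketch of such an objective, assuming field values and autodiff gradients have already been evaluated at a batch of sample points (in practice this loss would drive gradient-based optimization of the field's parameters; `lam` is a hypothetical weighting term):

```python
import numpy as np

def finetune_loss(pred_vals, orig_vals, pred_grads, target_grads, lam=0.1):
    """Combined fine-tuning objective (sketch): keep the field close to
    its pre-trained 0th-order signal, while pulling its autodiff
    gradients toward smoothed target derivatives (e.g. from the
    polynomial-fitting operator or a finite-difference stencil)."""
    signal_term = np.mean((pred_vals - orig_vals) ** 2)
    grad_term = np.mean(np.sum((pred_grads - target_grads) ** 2, axis=-1))
    return signal_term + lam * grad_term
```

The first term anchors the signal to the pre-trained field; the second term is where the choice of smoothed derivative source enters, which is why the objective itself is operator-agnostic.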

Results

We evaluate our methods on hybrid neural SDFs of 3D shapes from the FamousShape dataset. We compare normals and mean curvatures obtained from our operators with baselines like autodiff and finite-difference stencils. We experiment with three types of hybrid neural fields: Instant NGP, Dense Grid, and Tri-plane (only post hoc analysis).

Our post hoc operators are able to provide more accurate derivatives than finite-difference or autodiff baselines.

Table 1. Operator Evaluation: We compare against autodiff (AD) and finite difference (FD) baselines. Note that our approach provides more accurate surface normals and mean curvature than the baselines.
| Model | Method | L2 ↓ | Ang ↓ | AA@1 ↑ | AA@2 ↑ | RRE ↓ |
|---|---|---|---|---|---|---|
| Instant NGP | AD | 0.21 | 12.40 | 1.58 | 6.12 | - |
| Instant NGP | FD | 0.07 | 4.20 | 26.86 | 55.22 | 3.67 |
| Instant NGP | Ours | 0.05 | 2.80 | 42.92 | 67.90 | 0.89 |
| Dense Grid | AD | 0.11 | 6.55 | 11.49 | 29.40 | - |
| Dense Grid | FD | 0.07 | 3.97 | 30.66 | 55.06 | 2.62 |
| Dense Grid | Ours | 0.06 | 3.31 | 38.95 | 62.65 | 0.89 |
| Tri-plane | AD | 0.15 | 8.59 | 3.61 | 13.13 | - |
| Tri-plane | FD | 0.07 | 4.19 | 23.42 | 51.27 | 4.12 |
| Tri-plane | Ours | 0.06 | 3.23 | 35.67 | 62.74 | 0.90 |

(L2, Ang, AA@1, and AA@2 evaluate surface normals; RRE evaluates mean curvature.)

Our fine-tuning approach also yields more accurate autodiff operators than directly applying autodiff, while preserving the fidelity of the original neural field.

Table 2. Effect of Fine-tuning: Our fine-tuning approach improves the accuracy of surface normals obtained from automatic differentiation. We compare autodiff operators before (first row) and after fine-tuning.
| Model | Fine-tuning operator | L2 ↓ | Ang ↓ | AA@1 ↑ | AA@2 ↑ | CD ↓ | F-Score ↑ |
|---|---|---|---|---|---|---|---|
| Instant NGP | None | 0.21 | 12.40 | 1.58 | 6.12 | 9.24 × 10⁻⁴ | 93.07 |
| Instant NGP | Finite difference | 0.08 | 5.14 | 21.16 | 46.63 | 9.35 × 10⁻⁴ | 90.24 |
| Instant NGP | Polynomial-fit | 0.05 | 3.19 | 33.60 | 60.24 | 9.28 × 10⁻⁴ | 92.28 |
| Dense Grid | None | 0.11 | 6.56 | 11.42 | 29.37 | 9.26 × 10⁻⁴ | 89.83 |
| Dense Grid | Finite difference | 0.09 | 5.09 | 18.82 | 41.52 | 9.23 × 10⁻⁴ | 88.94 |
| Dense Grid | Polynomial-fit | 0.08 | 4.40 | 29.32 | 51.40 | 9.25 × 10⁻⁴ | 87.66 |

(L2, Ang, AA@1, and AA@2 evaluate surface normals; CD and F-Score evaluate mesh reconstruction.)

Applications

Our approaches also provide advantages in downstream applications of hybrid neural fields.

Rendering

Rendering hybrid neural SDFs with our approaches yields results that are free from the artifacts that arise when autodiff derivatives are used directly.

Accurate Normals for Rendering. A perfectly specular sphere lit by an environment map (top) and a diffuse Armadillo (inset) lit by a light source placed in front of the object (bottom). In both cases, noisy normals from autodiff lead to rendering artifacts, shown in the highlighted regions, which our approaches mitigate.

Collision simulation

Accurate surface normals are required when modeling collisions to compute post-collision trajectories. We consider two spheres undergoing head-on collisions and simulate their trajectories after impact. To obtain these trajectories, we use the normal estimates from the two SDFs at the analytical point of contact.

Effect of noisy normals on collision. An illustration of the effect of noisy normals on collision. Two spheres undergoing perfectly elastic head-on collisions simulated using correct surface normals will re-trace their paths after a collision. Inaccurate normal estimates from autodiff yield incorrect trajectories after bouncing.

We model the collisions as perfectly elastic so that there is no loss of energy. In the ideal case, the spheres should rebound along the line joining the centers with the same velocity, but erroneous normal estimates will lead to incorrect trajectories.
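For a perfectly elastic collision, the post-collision velocity is the reflection of the incoming velocity about the contact normal, v' = v − 2(v·n)n, so any error in the normal estimate directly tilts the outgoing trajectory. A minimal sketch:

```python
import numpy as np

def elastic_rebound(v, n):
    """Reflect velocity v about the contact normal n for a perfectly
    elastic collision: v' = v - 2 (v . n) n. Energy is conserved since
    reflection preserves the speed."""
    n = n / np.linalg.norm(n)          # ensure a unit normal
    return v - 2.0 * np.dot(v, n) * n

# Head-on case: with the correct normal, the sphere retraces its path.
v = np.array([1.0, 0.0])
rebound = elastic_rebound(v, np.array([1.0, 0.0]))   # -> [-1., 0.]

# A normal perturbed by noise (here a hypothetical 0.2 rad tilt) sends
# the sphere off at an incorrect angle while keeping the same speed.
noisy_n = np.array([np.cos(0.2), np.sin(0.2)])
bad_rebound = elastic_rebound(v, noisy_n)
```

This is why the trajectory error in the experiment below is reported as an angle: the speed is unchanged by the reflection, so noisy normals show up purely as angular deviation of the outgoing path.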

Averaged over 10⁶ trials, the mean error obtained from our normals was 0.85°, compared to 11.51° for autodiff normals.

Solving PDEs

We show that using our operators to solve a 2D advection equation with a hybrid neural field prevents the solution error from exploding. We solve an initial value problem in which a Gaussian pulse is advected at constant velocity.

We plot the mean squared error (MSE) for a finite difference grid solver, autodiff gradients (AD), and our polynomial-fitting approach. The error for AD explodes after the first few seconds and eventually crashes (indicated by ×). Using the same hybrid neural field with our operator leads to more accurate solutions at all time steps.

BibTeX

@misc{chetan2023accurate,
      title={Accurate Differential Operators for Hybrid Neural Fields}, 
      author={Aditya Chetan and Guandao Yang and Zichen Wang and Steve Marschner and Bharath Hariharan},
      year={2023},
      eprint={2312.05984},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}