Learning Controllable Adaptive Simulation for Multi-resolution Physics

We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP), the first fully deep learning-based surrogate model that jointly learns the evolution model and optimizes spatial resolutions to reduce computational cost, with the resolution policy learned via reinforcement learning. We demonstrate that LAMP can adaptively trade off computation against long-term prediction error by refining and coarsening the spatial mesh. LAMP outperforms state-of-the-art (SOTA) deep learning surrogate models, with an average 33.7% error reduction on 1D nonlinear PDEs, and outperforms the SOTA combination of MeshGraphNets + Adaptive Mesh Refinement on 2D mesh-based simulations.

Method

Simulating the time evolution of a physical system is of vital importance in science and engineering. Such systems typically have a multi-resolution nature: a small fraction of the system is highly dynamic and requires very fine-grained resolution to simulate accurately, while the majority of the system changes slowly. Examples include hazard prediction in weather forecasting, disruptive instabilities in plasma fluids in nuclear fusion, air dynamics near boundaries in jet engine design, and computer graphics examples such as wrinkles in cloth. Because such systems are typically huge, it is pivotal to simulate them not only accurately, but also with as small a computational cost as possible. However, current deep learning-based surrogate models typically assume a uniform or fixed spatial resolution, without learning how to assign computation to the spatial regions that need it most. Classical Adaptive Mesh Refinement (AMR), meanwhile, shares the same drawback as classical solvers: it is slow.




In this work, we introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully DL-based surrogate model that jointly learns the evolution model and optimizes spatial resolutions so that more compute is devoted to the highly dynamic regions. Our key insight is that by explicitly setting error and computation as a combined objective, the model can learn to adaptively decide the best local spatial resolution at which to evolve the system. To achieve this, LAMP consists of a Graph Neural Network (GNN)-based evolution model that learns the forward evolution, and a GNN-based actor-critic that learns a policy over discrete actions for local refinement and coarsening of the spatial mesh, conditioned on the local state and a coefficient $\beta$ that weights the relative importance of error vs. computation. The policy (actor) outputs both the number of refinement and coarsening actions and which edges to refine or coarsen, while the critic evaluates the expected reward of the current policy.
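The combined objective and the edge-selection step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the reward form, the `select_actions` helper, and the per-edge scores (which in LAMP would come from the GNN actor) are all hypothetical stand-ins.

```python
def reward(error, compute, beta):
    """Hypothetical combined objective: penalize long-term prediction error
    and computational cost, with beta weighting compute relative to error.
    Higher reward = lower error and/or lower cost."""
    return -(error + beta * compute)

def select_actions(edge_scores, k_refine, k_coarsen):
    """Given per-edge scores from a (hypothetical) policy network, refine the
    k_refine highest-scoring edges and coarsen the k_coarsen lowest-scoring
    ones. Returns two lists of edge indices."""
    order = sorted(range(len(edge_scores)), key=lambda i: edge_scores[i])
    coarsen = order[:k_coarsen]          # least dynamic edges
    refine = order[-k_refine:]           # most dynamic edges
    return refine, coarsen

# Toy usage: edge 1 looks most dynamic, edge 0 least dynamic.
scores = [0.1, 0.9, 0.5, 0.2]
refine, coarsen = select_actions(scores, k_refine=1, k_coarsen=1)
```

Varying `beta` at inference time is what makes the simulation "controllable": a larger `beta` pushes the policy toward coarser (cheaper) meshes, a smaller `beta` toward finer (more accurate) ones.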

We evaluate our model on a 1D benchmark of nonlinear PDEs (which tests generalization across PDEs of the same family) and on a challenging 2D mesh-based simulation of paper folding. In 1D, our model outperforms state-of-the-art deep learning-based surrogate models in long-term evolution error by an average of 33.7%, and can adaptively trade off computation to improve long-term prediction error.




On a 2D mesh-based simulation, our model can strategically choose appropriate edges to refine or coarsen, and outperforms the state-of-the-art method of MeshGraphNets + Adaptive Mesh Refinement (AMR).





Example rollout by LAMP:



Example rollout by MeshGraphNets + GT remeshing:



Example rollout by MeshGraphNets + heuristic remeshing:



Example rollout by ablation of LAMP (no remeshing):



Example rollout of the fine-grained ground truth:



For more detailed methods and results, please see the paper.
The work is also covered in a talk given at Stanford HAI.

Code

A reference implementation of LAMP in PyTorch will be available on GitHub.

Datasets

The datasets used by LAMP can be generated with the code in the repository. They can also be downloaded here.

Contributors

The following people contributed to LAMP:
Tailin Wu$^*$
Takashi Maruyama$^*$
Qingqing Zhao$^*$
Gordon Wetzstein
Jure Leskovec
* denotes equal contribution.

References

Learning Controllable Adaptive Simulation for Multi-resolution Physics. T. Wu*, T. Maruyama*, Q. Zhao*, G. Wetzstein, J. Leskovec. ICLR 2023, notable-top-25% (spotlight).