Applied Physics and Applied Mathematics, and Data Science Institute
Numerical integration of a given dynamical system can be viewed as a forward problem, while learning unknown dynamics from available state observations is the corresponding inverse problem. Solving both the forward and inverse problems forms the loop of informative and intelligent scientific computing.
This lecture concerns the application of linear multistep methods (LMMs) in the inverse problem setting, which has been gaining importance in data-driven modeling of complex dynamic processes via deep/machine learning. While a comprehensive mathematical theory of LMMs as popular numerical integrators of prescribed dynamics has been developed over the last century and has become textbook material in numerical analysis, a new story emerges when LMMs are used in a black-box machine learning formulation for learning dynamics from observed states.
A natural question is whether an LMM that is convergent for integrating known dynamics is also suitable for discovering unknown dynamics. We show that the conventional theory of consistency, stability, and convergence of LMMs for time integration must be reexamined for dynamics discovery, which leads to new results on LMMs that have not been studied before. We present refined concepts and algebraic criteria that assure stable and convergent discovery of dynamics in some idealized settings. We also apply the theory to some popular LMMs and make some interesting observations on their second characteristic polynomials.
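To make the discovery setting concrete, here is a minimal sketch (not from the lecture itself) of how an LMM relation can be read "in reverse": the states x_n are observed data and the slope values f(x_n) are the unknowns. The sketch uses the two-step Adams-Bashforth relation x_{n+1} - x_n = h(3/2 f_n - 1/2 f_{n-1}) on states generated by the test equation x' = -x; the step size h, the seeding of f_0 with the true initial slope, and the choice of test equation are all illustrative assumptions.

```python
import numpy as np

h = 0.01
t = np.arange(0.0, 2.0 + h, h)
x = np.exp(-t)                 # observed states of x' = f(x), with true f(x) = -x

n_steps = len(x) - 1
f = np.empty(n_steps)          # recovered slope values f(x_0), ..., f(x_{n_steps-1})
f[0] = -x[0]                   # seed with the true initial slope (assumed known)

for n in range(1, n_steps):
    # Solve the AB2 relation x_{n+1} - x_n = h*(3/2 f_n - 1/2 f_{n-1}) for f_n:
    f[n] = (2.0 / (3.0 * h)) * (x[n + 1] - x[n]) + f[n - 1] / 3.0

err = np.max(np.abs(f - (-x[:n_steps])))
print(err)                     # small: the recursion multiplies past errors by 1/3
```

The recursion for f_n amplifies the previous error by a factor 1/3, so errors stay bounded and the recovered slopes converge to the true f as h decreases; for a scheme whose second characteristic polynomial has a root on or outside the unit circle, the analogous recursion can amplify noise instead, which is the kind of phenomenon the refined stability criteria address.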
(This is part of a joint work with Rachael Keller of Columbia).