SAYAS NUMERICS SEMINAR
Nov. 10, 2020 at 3:30pm (Eastern Time)

Efficient solvers for nonlinear Bayesian statistical inverse problems

Akwum Onwunta
George Mason University

Bayesian statistical inverse problems are often solved with Markov chain Monte Carlo (MCMC)-type schemes. When the problems are governed by large-scale discrete nonlinear partial differential equations (PDEs), they are computationally challenging because the forward problem must be solved at every sample point. In this talk, the use of reduced-order models (ROMs), as well as deep neural network techniques, is considered for the forward solves within an MCMC routine. Moreover, a preconditioning strategy for the ROMs is proposed to accelerate the forward solves. Numerical experiments demonstrate the efficiency of the approach for solving forward problems and the associated statistical inverse problems.
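
As a rough illustration of the structure being accelerated (a minimal random-walk Metropolis sketch, not the speaker's code; the surrogate, parameter names, and noise model below are illustrative assumptions), note that the forward model is evaluated once per proposal, which is why a cheap ROM or neural-network surrogate pays off:

    import numpy as np

    def forward_surrogate(theta):
        # Placeholder for a cheap forward model G(theta); a real
        # application would call a ROM or trained network here
        # instead of a full nonlinear PDE solve.
        return np.tanh(theta)

    def log_posterior(theta, data, noise_sigma=0.1, prior_sigma=1.0):
        # Gaussian likelihood around the surrogate prediction,
        # plus a Gaussian prior on the parameter.
        misfit = data - forward_surrogate(theta)
        return (-0.5 * np.sum(misfit**2) / noise_sigma**2
                - 0.5 * np.sum(theta**2) / prior_sigma**2)

    def mcmc(data, n_samples=5000, step=0.2):
        rng = np.random.default_rng(0)
        theta = np.zeros(1)
        lp = log_posterior(theta, data)
        samples = []
        for _ in range(n_samples):
            proposal = theta + step * rng.standard_normal(theta.shape)
            lp_new = log_posterior(proposal, data)
            if np.log(rng.uniform()) < lp_new - lp:  # accept/reject
                theta, lp = proposal, lp_new
            samples.append(theta.copy())
        return np.array(samples)

    samples = mcmc(data=np.array([0.4]))
    print("posterior mean estimate:", samples[1000:].mean())

Every proposal triggers one forward evaluation, so the cost of the chain is dominated by the forward model; this is the bottleneck the ROM and neural-network surrogates in the talk are designed to remove.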

Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear partial differential equations

Jeahyun Park
University of Tennessee

We discuss preconditioned Nesterov accelerated gradient descent (PAGD) methods for approximating the minimizer of locally Lipschitz smooth, strongly convex objective functionals. We introduce a second-order ordinary differential equation (ODE) as the limiting case of PAGD as the step size tends to zero. Using a simple energy argument, we show exponential convergence of the ODE solution to its steady state. The PAGD method may be viewed as an explicit-type time-discretization scheme for this ODE system, which requires a natural time step restriction for energy stability. Under this restriction, an exponential convergence rate of the PAGD sequence is demonstrated by mimicking, via energy methods, the convergence of the ODE solution. The PAGD method is then applied to solving certain nonlinear elliptic PDEs using Fourier collocation methods, and several numerical experiments are conducted. The results confirm the global geometric and h-independent convergence of the PAGD method, with an accelerated rate that improves on that of the preconditioned gradient descent (PGD) method.
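
For a concrete sense of the iteration (a sketch only: this is a standard constant-momentum Nesterov variant on a strongly convex quadratic with a Jacobi preconditioner, and the parameter choices are illustrative assumptions, not the exact scheme analyzed in the talk):

    import numpy as np

    # SPD tridiagonal test matrix with a varying diagonal, so that
    # the Jacobi preconditioner P actually changes the spectrum.
    n = 100
    main_diag = 2.0 + np.linspace(0.0, 9.0, n)
    A = np.diag(main_diag) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    P_inv = np.diag(1.0 / main_diag)            # Jacobi preconditioner

    # P^{-1}A is similar to an SPD matrix, so its eigenvalues are real;
    # use the extreme ones to set the step size and momentum.
    eigs = np.sort(np.linalg.eigvals(P_inv @ A).real)
    mu, L = eigs[0], eigs[-1]
    step = 1.0 / L
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))

    x_star = np.linalg.solve(A, b)              # exact minimizer, for reference
    x = np.zeros(n)
    x_prev = x.copy()
    for k in range(300):
        y = x + beta * (x - x_prev)             # momentum (look-ahead) step
        x_prev = x
        x = y - step * (P_inv @ (A @ y - b))    # preconditioned gradient step
    print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))

In the PDE setting of the talk, P would typically be (a discretization of) an elliptic operator rather than a diagonal matrix, so applying P_inv amounts to a fast solve, and the gradient comes from a nonlinear functional rather than a quadratic.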

HDG and CG methods for the indefinite time-harmonic Maxwell's equations under minimal regularity

Yangwen Zhang
University of Delaware

We propose a combination of the hybridizable discontinuous Galerkin (HDG) method and the continuous Galerkin (CG) method to approximate Maxwell's equations. This work makes two contributions. First, although many papers use HDG methods to approximate Maxwell's equations, all of these works assume that the coefficients are smooth. We derive an optimal convergence rate for the HDG-CG approximation of the solution of Maxwell's equations with piecewise smooth coefficients, which is the first such result in the literature. Second, we use CG elements to approximate the Lagrange multiplier that enforces the divergence condition, and we obtain a discrete system from which the discrete Lagrange multiplier can be decoupled. The system for the Lagrange multiplier is symmetric positive definite (SPD) and can be solved very efficiently by multigrid (MG) or algebraic multigrid (AMG) methods. This structure lends itself to designing an effective preconditioner, and hence to solving the whole system efficiently. We present numerical experiments to confirm our theoretical results.
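
For orientation, one common saddle-point form of the indefinite time-harmonic Maxwell problem with a Lagrange multiplier p enforcing the divergence condition reads as follows (standard notation assumed, with permeability \mu, permittivity \varepsilon, and frequency \omega; the exact formulation and boundary conditions in the talk may differ):

    \begin{aligned}
      \nabla \times (\mu^{-1} \nabla \times u) - \omega^{2} \varepsilon\, u + \varepsilon \nabla p &= f && \text{in } \Omega,\\
      \nabla \cdot (\varepsilon u) &= 0 && \text{in } \Omega,\\
      u \times n = 0, \qquad p &= 0 && \text{on } \partial\Omega.
    \end{aligned}

The abstract's point is that \mu and \varepsilon need only be piecewise smooth, and that after discretization the multiplier block decouples, leaving an SPD system for p that MG or AMG handles efficiently.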