SAYAS NUMERICS SEMINAR
Sept. 29, 2020 at 3:30pm (Eastern Time)

Deep Learning Interpretation: Flip Points and Homotopy Methods

Roozbeh Yousefzadeh
Yale University

This talk concerns methods for studying deep learning models and interpreting their outputs and their functional behavior. A trained model is a function that maps inputs to outputs. Deep learning has shown great success in performing different machine learning tasks; however, these models are complicated mathematical functions, and their interpretation remains a challenging research question. We formulate and solve optimization problems to answer questions about the models and their outputs. A deep classifier partitions its domain and assigns a class to each of those partitions. The partitions are defined by decision boundaries, but such boundaries are geometrically complex. Specifically, we study the decision boundaries of models using flip points, i.e., points on those boundaries. The flip point closest to a given input is of particular importance, and this point is the solution to a well-posed optimization problem. To compute the closest flip point, we develop a homotopy algorithm that transforms the deep learning function in order to overcome the issues of vanishing and exploding gradients. We show that computing closest flip points allows us to systematically investigate the model, identify decision boundaries, interpret and audit the model with respect to individual inputs and entire datasets, and assess its vulnerability to adversarial attacks. We demonstrate that flip points can help identify mistakes made by a model, improve its accuracy, and reveal the most influential features for its classifications.
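
For illustration only, the sketch below searches for a nearby decision-boundary point (a flip point) of a binary classifier using a plain penalty method; it is not the speaker's homotopy algorithm, and the network, the input x0, the penalty weight, and the optimizer settings are all placeholders.

    # Minimal sketch (not the homotopy method of the talk): approximate the flip
    # point closest to an input x0 for a binary classifier whose single output
    # logit changes sign across the decision boundary.
    import torch

    def closest_flip_point(net, x0, penalty=100.0, steps=500, lr=1e-2):
        x = x0.clone().detach().requires_grad_(True)   # candidate flip point
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logit = net(x).squeeze()                   # assumed: one logit per input
            # stay close to x0 while pushing the logit to 0 (the decision boundary)
            loss = torch.sum((x - x0) ** 2) + penalty * logit ** 2
            loss.backward()
            opt.step()
        return x.detach()

    # toy usage on an untrained two-layer network with a single output logit
    net = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
    x0 = torch.randn(5)
    print(closest_flip_point(net, x0))

A continuation strategy, e.g. gradually increasing the penalty weight, is one way to make such a search more robust; the talk instead transforms the network itself via a homotopy to avoid vanishing and exploding gradients.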

Fractional Deep Neural Network via Constrained Optimization

Ratna Khatri
U.S. Naval Research Laboratory
Center for Mathematics and Artificial Intelligence

In this talk, we will introduce a novel algorithmic framework for a deep neural network (DNN) which allows us to incorporate history (or memory) into the network. This DNN, called Fractional-DNN, can be viewed as a time-discretization of a fractional-in-time nonlinear ordinary differential equation (ODE). The learning problem is then a minimization problem with that fractional ODE as a constraint. We test our network on datasets for classification problems. The key advantage of the Fractional-DNN is a significant mitigation of the vanishing gradient issue, owing to the memory effect.
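
As a rough illustration of how memory enters such a network, the sketch below uses an L1-type discretization of a Caputo derivative of order gamma in a residual-style forward pass; the layer widths, weights, activation, and step size are placeholders, and this is not the exact formulation of the talk.

    # Minimal sketch: fractional-in-time forward propagation where each new state
    # depends on the whole history of previous states through the L1 weights
    # a_j = (j+1)^(1-gamma) - j^(1-gamma).
    import numpy as np
    from math import gamma as Gamma

    def fractional_forward(u0, W, b, gam=0.5, tau=1.0):
        c = Gamma(2.0 - gam) * tau**gam      # scaling from the L1 discretization
        states = [u0]                        # full history = the memory effect
        for k in range(1, len(W) + 1):
            hist = sum(((j + 1)**(1 - gam) - j**(1 - gam))
                       * (states[k - j] - states[k - j - 1]) for j in range(1, k))
            u_new = (states[k - 1] - hist
                     + c * np.tanh(W[k - 1] @ states[k - 1] + b[k - 1]))
            states.append(u_new)
        return states[-1]

    # toy usage: 3 layers acting on a 4-dimensional feature vector
    rng = np.random.default_rng(0)
    W = [0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
    b = [np.zeros(4) for _ in range(3)]
    print(fractional_forward(rng.standard_normal(4), W, b))

Because every layer re-weights the full history of earlier states, gradient information can reach early layers along many paths, which is the intuition behind the improved vanishing-gradient behavior.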

Fractional Optimal Control Problems with State Constraints: Algorithm and Analysis

Deepanshu Verma
George Mason University

Motivated by several applications in geophysics and machine learning, in this talk we introduce a novel class of optimal control problems governed by fractional PDEs. The main novelty is the obstacle-type constraints on the state. The analysis of this problem has required us to develop several new, widely applicable mathematical tools, such as a characterization of the dual of fractional Sobolev spaces and regularity results for PDEs with measure-valued data. We have developed a Moreau-Yosida based algorithm to solve this class of problems, and we establish convergence rates with respect to the regularization parameter. A finite element discretization is carried out, and rigorous convergence of the numerical scheme is established. Numerical examples confirm our theoretical findings.
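
As a toy illustration of the Moreau-Yosida idea, the sketch below penalizes violation of a pointwise state bound y <= psi in a reduced-space formulation; the state equation is an ordinary discrete Laplacian rather than a fractional operator, and all parameter values are illustrative placeholders, not taken from the talk.

    # Toy 1D sketch: Moreau-Yosida regularization of the state constraint y <= psi.
    # min_u 1/2||y - y_d||^2 + alpha/2||u||^2 + gamma/2||max(0, y - psi)||^2, with A y = u.
    import numpy as np
    from scipy.optimize import minimize

    n, h = 99, 1.0 / 100
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2          # discrete Laplacian (stand-in)
    x = np.linspace(h, 1 - h, n)
    y_d = np.sin(np.pi * x)                             # desired state
    psi = np.full(n, 0.6)                               # pointwise upper bound on the state
    alpha, gamma_reg = 1e-4, 1e4                        # control cost, Moreau-Yosida parameter

    def reduced_functional(u):
        y = np.linalg.solve(A, u)                       # state from the PDE constraint
        viol = np.maximum(0.0, y - psi)                 # violation of y <= psi
        J = 0.5 * h * (np.sum((y - y_d)**2) + alpha * np.sum(u**2)
                       + gamma_reg * np.sum(viol**2))
        p = np.linalg.solve(A, y - y_d + gamma_reg * viol)   # adjoint state (A is symmetric)
        return J, h * (alpha * u + p)

    res = minimize(reduced_functional, np.zeros(n), jac=True, method="L-BFGS-B")
    print("max state value:", np.linalg.solve(A, res.x).max(), "vs. obstacle 0.6")

Driving the Moreau-Yosida parameter to infinity, for instance in a path-following loop, recovers the state-constrained solution in the limit, which is where convergence rates with respect to the regularization parameter become relevant.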