The goal of this research direction is to provide a theoretical framework for training and optimizing deep neural network architectures using concepts from stochastic or deterministic optimal control and dynamical systems theory. In this research direction, the lab has published works such as the Differential Dynamic Programming Neural Optimizer (DDPNOpt) [pdf], Game Theoretic Neural Optimizer [pdf], and Second Order Neural Optimizer [pdf]. All of these papers proposed new algorithms for training deep neural network architectures that match or outperform state-of-the-art optimization algorithms.
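To make the optimal-control viewpoint concrete, here is a minimal, illustrative sketch (not the lab's actual code; the residual dynamics, layer count, and costs below are assumptions for illustration) of how a residual network's forward pass can be read as rolling out a discrete-time dynamical system, with layer activations as states and layer weights as controls:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_dynamics(x, theta):
    # One "time step" of the system: x_{k+1} = x_k + tanh(theta @ x_k),
    # i.e. a residual layer viewed as a controlled state transition.
    return x + np.tanh(theta @ x)

# A stack of K layers corresponds to a control sequence theta_0..theta_{K-1}.
K, d = 5, 3
thetas = [0.1 * rng.standard_normal((d, d)) for _ in range(K)]

x = rng.standard_normal(d)          # the network input is the initial state
trajectory = [x]
for theta in thetas:                # forward pass = rolling out the dynamics
    x = layer_dynamics(x, theta)
    trajectory.append(x)

# Training then reads as an optimal control problem: choose the controls
# (weights) to minimize a terminal cost on the final state, e.g. the
# squared distance to a target output.
target = np.ones(d)
terminal_cost = 0.5 * np.sum((trajectory[-1] - target) ** 2)
print(len(trajectory), terminal_cost)
```

Under this reading, backpropagation corresponds to solving the adjoint equations of the control problem, which is what lets tools like differential dynamic programming be repurposed as neural-network optimizers.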
On the stochastic side, the lab’s more recent paper, published at ICLR 2022 [pdf], shows the connection between training algorithms for deep score-based generative models, likelihood training of the Schrödinger bridge, and the Forward-Backward Stochastic Differential Equations (FBSDEs) used in stochastic optimal control. More is on the way, so stay tuned!
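As a rough illustration of the stochastic side, the following hedged sketch (the constant-coefficient variance-preserving SDE, step sizes, and sample counts are assumptions for illustration, not code from the paper) simulates the forward noising SDE dx = -0.5·β·x dt + √β dW used in score-based generative models with Euler-Maruyama, the same time discretization that underlies numerical FBSDE solvers in stochastic optimal control:

```python
import numpy as np

def euler_maruyama_vp(x0, beta=1.0, T=1.0, n_steps=500, seed=0):
    # Euler-Maruyama simulation of the variance-preserving forward SDE
    #   dx = -0.5 * beta * x dt + sqrt(beta) dW
    # with constant beta (a simplifying assumption for this sketch).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        drift = -0.5 * beta * x
        noise = np.sqrt(beta * dt) * rng.standard_normal(x.shape)
        x = x + drift * dt + noise
    return x

# Starting from a point mass at (3, 3), the terminal law approaches the
# standard normal N(0, I) as T grows -- the "prior" of the generative model.
samples = np.stack([euler_maruyama_vp(np.full(2, 3.0), T=5.0, seed=s)
                    for s in range(500)])
print(samples.mean(), samples.var())
```

Learning to reverse this noising process is what ties diffusion-model training to the Schrödinger bridge and FBSDE machinery discussed above.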