Robotics Systems and Autonomy: Our research on robotics systems and autonomy includes the development of perceptual decision-making architectures for navigation with aerial and terrestrial vehicles. Highlights include our work on Model Predictive Path Integral Control (MPPI) and its variations Tube-Based MPPI, Robust-MPPI, and Tsallis-MPPI, with applications to off-road navigation. Further highlights include our work on deep neural network architectures for vision-based navigation, as well as trajectory optimization methods under parametric and non-parametric uncertainty for robotic and aerospace systems. See the links: Perceptual Decision Making, MPPI, Safe Control and Learning, and Urban Aerial Mobility (UAM)
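The core idea behind MPPI can be sketched as a simple sampling-based loop: perturb a nominal control sequence, roll out the dynamics for each sample, and re-weight the perturbations by exponentiated cost. The sketch below is a minimal illustration on a hypothetical scalar single integrator; the function names, parameters, and toy dynamics are ours for illustration, not taken from any particular ACDS implementation.

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, num_samples=256,
              noise_std=1.0, temperature=1.0, rng=None):
    """One MPPI update: sample perturbed control sequences, roll out
    the dynamics, and average the perturbations with softmin weights."""
    rng = np.random.default_rng(0) if rng is None else rng
    horizon = len(u_nom)
    eps = rng.normal(0.0, noise_std, size=(num_samples, horizon))
    costs = np.zeros(num_samples)
    for k in range(num_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, u_nom[t] + eps[k, t])
            costs[k] += cost(x)
    beta = costs.min()                       # subtract min cost for numerical stability
    w = np.exp(-(costs - beta) / temperature)
    w /= w.sum()
    return u_nom + w @ eps                   # cost-weighted control update

# Toy example: drive a scalar single integrator toward the origin
# under a quadratic state cost, in receding-horizon fashion.
dt, horizon = 0.1, 15
dynamics = lambda x, u: x + dt * u
cost = lambda x: x ** 2
rng = np.random.default_rng(0)
x, u = 1.0, np.zeros(horizon)
for _ in range(40):
    u = mppi_step(x, u, dynamics, cost, rng=rng)
    x = dynamics(x, u[0])                    # apply only the first control
    u = np.roll(u, -1); u[-1] = 0.0          # shift the plan forward in time
```

Low-cost rollouts dominate the softmin average, so the nominal controls drift toward cheaper trajectories without requiring gradients of the dynamics or cost; variants such as Tube-Based and Robust-MPPI build additional robustness machinery on top of this same update.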
Mathematical Control and Learning: On the mathematical side, our research lies at the intersection of Control and Dynamical Systems Theory, Machine Learning, Information Theory, and Statistical Physics. At the core of this intersection are 1) the optimality principles of control theory, namely Dynamic Programming and the Pontryagin Maximum Principle, 2) the fundamental connections between Partial Differential Equations and Stochastic Differential Equations, 3) information-theoretic interpretations of control and learning, and 4) efficient machine learning algorithms for statistical inference, learning, and control.
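As one concrete instance of these connections, applying dynamic programming to a stochastic system yields the Hamilton-Jacobi-Bellman partial differential equation. In generic notation (a standard textbook sketch, not specific to any one of our papers), for dynamics $dx = f(x,u)\,dt + \Sigma(x)\,dW$, running cost $\ell(x,u)$, and terminal cost $\phi(x)$, the value function $V(x,t)$ satisfies

\[
-\partial_t V = \min_{u}\Big[\, \ell(x,u) + f(x,u)^{\top}\nabla_x V
  + \tfrac{1}{2}\,\mathrm{tr}\big(\Sigma(x)\Sigma(x)^{\top}\nabla_x^2 V\big) \Big],
\qquad V(x,T) = \phi(x),
\]

which is precisely the point where optimality principles, partial differential equations, and stochastic calculus meet.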
Deep Reinforcement Learning and Stochastic Optimization: Stochastic control and optimization using deep neural network representations is a major research direction. Our research in this area is centered on deep stochastic control algorithms and representations tailored to Hamilton-Jacobi-Bellman theory, stochastic dynamic programming, and its connections to Forward-Backward Stochastic Differential Equations. The resulting computational framework, termed Deep-FBSDE, handles general classes of stochastic optimal control problems with nonlinear and non-convex dynamics and cost functions, and yields modular and interpretable neural network policy representations. See the link: Deep RL and Stochastic Optimization
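The FBSDE connection can be sketched in its standard generic form (a sketch of the usual nonlinear Feynman-Kac construction, not of any specific paper): the HJB equation is reformulated as a forward SDE for the state paired with a backward SDE for the value,

\[
dx_t = f(x_t)\,dt + \Sigma(x_t)\,dW_t, \qquad x_0 = x,
\]
\[
dy_t = -h(x_t, y_t, z_t)\,dt + z_t^{\top}\,dW_t, \qquad y_T = \phi(x_T),
\]

where $y_t = V(x_t, t)$ and $z_t = \Sigma(x_t)^{\top}\nabla_x V(x_t, t)$, and $h$ collects the running cost and Hamiltonian terms. Deep-FBSDE approximates the value function and its gradient with neural networks trained to satisfy the terminal condition, turning the two-point boundary value problem into an end-to-end learning problem.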
Deep Learning Theory – A Dynamical and Control Systems Perspective: While deep neural network representations can be used for stochastic optimal control algorithms, one can also explore the relation between deep learning and optimal control in the opposite direction. In this direction, novel optimization algorithms for training deep neural network representations can be derived using concepts from deterministic and stochastic optimal control. Examples include the Differential Dynamic Programming Neural Optimizer (DDPNopt) and the Second-Order Neural ODE Optimizer (SNOPT). See the link: Deep Learning Theory
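The analogy underlying such optimizers can be stated in one line (generic notation, not the specific formulation of DDPNopt or SNOPT): treating the layer index $t$ as time, the forward pass of a $T$-layer network is a discrete-time dynamical system whose controls are the layer parameters, so training becomes an optimal control problem,

\[
x_{t+1} = f_t(x_t, \theta_t), \quad t = 0,\dots,T-1, \qquad
\min_{\theta_0,\dots,\theta_{T-1}} \; \Phi(x_T) + \sum_{t=0}^{T-1} \ell_t(x_t, \theta_t),
\]

where $\Phi$ is the training loss on the network output and $\ell_t$ collects any per-layer regularization. Control-theoretic machinery such as differential dynamic programming can then be applied layer by layer in place of plain backpropagation.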
Multi-Agent Decision Making: Multi-agent systems are characterized by their large dimensionality, which makes the direct application of vanilla dynamic optimization and stochastic control algorithms prohibitive. In this research direction, we investigate large-scale optimization architectures that rely on distributed parallel and/or sequential decision-making algorithms and on nonlinear deterministic and stochastic control. See the link: Distributed Optimization
Control and Learning for Physics: Control and learning of systems with spatio-temporal dynamics is one of the biggest challenges in engineering and the sciences. These are systems represented by Stochastic Partial Differential Equations (SPDEs). Our research on stochastic optimal control and learning extends to such systems, since optimality principles and the underlying stochastic calculus extend to them as well. Applications include fluid flow control, soft robotic systems, infinite-dimensional representations of partially observable stochastic control, and open quantum systems. See the link: Control and Learning for Physics
Control and Learning for Systems in Medicine and Bio-Engineering: The ACDS lab, in collaboration with the medical school of the University of Minnesota, is actively engaged in research toward the development of control and learning algorithms for autonomous CPR. See the link: Control and Learning for Medicine