The Autonomous Control and Decision Systems (ACDS) Laboratory is part of the Flight Mechanics and Controls (FMC) group at the Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology (GaTech). The ACDS laboratory is also affiliated with the Institute for Robotics and Intelligent Machines (IRIM), the Decision Control Lab (DCL) and the Center for Machine Learning at Georgia Tech.

**Robotics Systems and Autonomy:** Forty years after the first Apollo mission, technological and industrial development, together with the need for deep-space exploration, has created new challenges. Among these is the challenge of building autonomous robotic systems that can accomplish difficult missions in remote, unknown, or partially known environments and adapt to changing, dynamic situations. Robotic systems should be able to walk robustly, navigate, explore efficiently, learn new motor skills quickly, and generalize these skills to unseen conditions. While full autonomy is critical, robotic systems should also cooperate safely with, and be teleoperated efficiently by, humans during motor control tasks in manufacturing and space missions.

**Mathematical Control & Learning:** On the mathematical side, our research lies at the intersection of Control and Dynamical Systems Theory, Machine Learning, Information Theory, and Statistical Physics. At the core of this intersection are 1) the optimality principles of control theory, namely Dynamic Programming and the Pontryagin Maximum Principle; 2) the fundamental connections between Partial Differential Equations and Stochastic Differential Equations; 3) information-theoretic interpretations of control and learning; and 4) efficient machine learning algorithms for statistical inference, learning, and control.
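One canonical instance of the PDE–SDE connection mentioned above is the linearly solvable stochastic optimal control problem. The following is a standard sketch (not specific to any one project of the lab), assuming control-affine dynamics, a quadratic control cost, and the usual noise–cost compatibility condition:

```latex
% Dynamics dx = f(x)dt + G(x)u\,dt + B(x)dw with cost
% J = E[ \phi(x_T) + \int_t^T ( q(x_s) + \tfrac{1}{2}u_s^\top R u_s )\,ds ]
% give the Hamilton--Jacobi--Bellman (HJB) equation for the value function V:
\[
-\partial_t V = \min_u \Big[\, q + \tfrac{1}{2}u^\top R u
  + (f + G u)^\top \nabla_x V
  + \tfrac{1}{2}\operatorname{tr}\!\big(B B^\top \nabla_x^2 V\big) \Big]
\]
% Under the exponential transformation V = -\lambda \log \Psi, with the
% compatibility condition \lambda\, G R^{-1} G^\top = B B^\top, the HJB
% equation becomes a linear PDE in the desirability function \Psi:
\[
\partial_t \Psi = \frac{q}{\lambda}\,\Psi - f^\top \nabla_x \Psi
  - \tfrac{1}{2}\operatorname{tr}\!\big(B B^\top \nabla_x^2 \Psi\big)
\]
% The Feynman--Kac formula then expresses \Psi as an expectation over
% sample paths of the uncontrolled SDE dx = f(x)dt + B(x)dw:
\[
\Psi(x,t) = \mathbb{E}\Big[ \exp\!\Big(-\tfrac{1}{\lambda}\int_t^T q(x_s)\,ds\Big)\,
  e^{-\phi(x_T)/\lambda} \Big]
\]
```

This transformation is what allows optimal control problems of this class to be solved by forward sampling of stochastic dynamics, which is the basis of path-integral and sampling-based control methods.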

**Neural Systems & Organization:** Besides autonomy, our interests include investigating the computational principles underlying neural organization, computation, function, and behavior. A few questions illustrate our point of view: How does neural-hardware organization relate to function? Are there variational principles in control theory and machine learning that explain this relationship? What are the underlying neural optimization algorithms used to perform motor control and learning tasks? Can we transfer control and hardware design principles from neural organisms to robotic and aerospace systems?