We investigate two ways of integrating perception into stochastic optimal control. The first approach assumes an underlying structure for the perceptual control policy, inspired by the organization of decision-making architectures: a cost function, a dynamics model representation, and an optimizer. The second approach relies on an end-to-end formulation that collapses the entire autonomy stack into a single neural network mapping raw observations directly to control commands.
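The contrast between the two approaches can be sketched in code. This is a minimal illustration on a toy double-integrator system; the dynamics, cost, random-shooting optimizer, and all function names are assumptions for the example, not the systems used in this research.

```python
import numpy as np

def dynamics(state, action, dt=0.1):
    """Assumed toy dynamics model: 1-D double integrator, state = [pos, vel]."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + action * dt])

def cost(state, action):
    """Assumed quadratic cost: reach the origin with small control effort."""
    return state @ state + 0.01 * action ** 2

def structured_policy(state, horizon=10, samples=256, rng=None):
    """First approach: explicit cost + dynamics model + optimizer.
    Here the optimizer is naive random shooting over action sequences."""
    rng = rng or np.random.default_rng(0)
    plans = rng.normal(0.0, 1.0, size=(samples, horizon))
    totals = np.zeros(samples)
    for i, plan in enumerate(plans):
        s = state.copy()
        for a in plan:
            totals[i] += cost(s, a)
            s = dynamics(s, a)
    # Execute only the first action of the cheapest plan (receding horizon).
    return plans[np.argmin(totals)][0]

def end_to_end_policy(observation, W, b):
    """Second approach: one network maps raw observations to commands.
    A single linear layer stands in for the full network."""
    return np.tanh(W @ observation + b)
```

The structured policy exposes each component (cost, model, optimizer) for inspection and reuse, whereas the end-to-end policy trades that interpretability for a single learned mapping.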
This research involves terrestrial agility using GPS. In the terminology of stochastic control, this is a fully observable case in which the role of perception is minimal. We investigate robust stochastic model predictive control methods together with model learning and adaptation.
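A minimal closed-loop sketch of this setting pairs sampling-based stochastic MPC with online least-squares model adaptation. The toy linear plant, the identification scheme, and all names below are illustrative assumptions; the robustness machinery of the actual methods is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed plant: double integrator
B_true = np.array([[0.0], [0.1]])

def step(x, u):
    """Ground-truth plant with process noise; x is measured directly
    (the fully observable case)."""
    return A_true @ x + B_true @ u + rng.normal(0.0, 0.01, size=2)

def fit_model(X, U, Xn):
    """Model learning: least-squares fit of [A B] from observed transitions."""
    Z = np.hstack([X, U])                      # regressors [x_t, u_t]
    Theta, *_ = np.linalg.lstsq(Z, Xn, rcond=None)
    AB = Theta.T
    return AB[:, :2], AB[:, 2:]

def mpc_action(x, A, B, horizon=15, samples=200):
    """Sampling-based stochastic MPC: roll out random control sequences under
    the learned model and execute the first action of the cheapest one."""
    best, best_cost = None, np.inf
    for plan in rng.normal(0.0, 1.0, size=(samples, horizon, 1)):
        s, total = x.copy(), 0.0
        for u in plan:
            total += s @ s + 0.01 * float(u @ u)
            s = A @ s + B @ u
        if total < best_cost:
            best, best_cost = plan[0], total
    return best

# Closed loop: act, observe, re-fit the model, repeat.
x = np.array([1.0, 0.0])
X, U, Xn = [], [], []
A_hat, B_hat = np.eye(2), np.zeros((2, 1))     # crude initial model
for t in range(30):
    # Excite the system at first, then act on the adapted model.
    u = rng.normal(0.0, 1.0, size=1) if t < 5 else mpc_action(x, A_hat, B_hat)
    x_next = step(x, u)
    X.append(x); U.append(u); Xn.append(x_next)
    A_hat, B_hat = fit_model(np.array(X), np.array(U), np.array(Xn))
    x = x_next
```

Re-fitting the model inside the control loop is what distinguishes this adaptive scheme from MPC with a fixed model: the controller's predictions improve as data accumulates.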