This course covers the fundamentals of convex optimization: the underlying mathematical theory, modeling (how to set up optimization problems for different applications), and algorithms.
Instructor: Justin Romberg
Download the syllabus
Course Notes
(Notes will be posted here shortly before each lecture.)
Notes 1, introduction and examples
(Also: Intro slides)
I. Convexity
Notes 2, convex sets
Notes 3, convex functions
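
A quick illustration of the material in Part I: convexity of a function f can be probed numerically by sampling the defining inequality f(tx + (1-t)y) <= t f(x) + (1-t) f(y). The sketch below is our own illustration, not part of the posted notes; a single violation certifies non-convexity, while the absence of violations is only suggestive.

    import numpy as np

    def probably_convex(f, dim, trials=10000, box=10.0, tol=1e-9, seed=0):
        # Sample pairs of points and a mixing weight t, and test Jensen's
        # inequality f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y).
        rng = np.random.default_rng(seed)
        for _ in range(trials):
            x = rng.uniform(-box, box, dim)
            y = rng.uniform(-box, box, dim)
            t = rng.uniform()
            if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
                return False   # certified: f is not convex
        return True            # consistent with convexity on the sampled box

    print(probably_convex(lambda z: np.sum(z ** 2), dim=3))   # True: ||z||^2 is convex
    print(probably_convex(lambda z: np.sin(z).sum(), dim=1))  # False: sin is not convex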
II. Unconstrained optimization
Notes 4, optimality conditions, existence and uniqueness of solutions
Notes 5, line search
Notes 6, convergence of gradient descent (see the sketch following this list)
Notes 7, accelerated first order methods
Notes 8, Newton’s method
Notes 9, subgradients and the subgradient method
Notes 10, proximal algorithms
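
Part II builds up the unconstrained toolkit; as one concrete reference point, the sketch below (a generic illustration, not code distributed with the course) implements gradient descent with backtracking line search in the spirit of Notes 5 and 6.

    import numpy as np

    def gradient_descent(f, grad, x0, alpha=0.25, beta=0.5, tol=1e-8, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:   # for convex f, a vanishing gradient means optimality
                break
            t = 1.0
            # Backtracking: shrink the step until the Armijo sufficient-decrease
            # condition f(x - t*g) <= f(x) - alpha * t * ||g||^2 holds.
            while f(x - t * g) > f(x) - alpha * t * (g @ g):
                t *= beta
            x = x - t * g
        return x

    # Example: minimize the convex quadratic f(x) = 0.5 x'Qx - b'x; the minimizer is Q^{-1} b.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * (x @ Q @ x) - b @ x
    grad = lambda x: Q @ x - b
    print(gradient_descent(f, grad, np.zeros(2)), np.linalg.solve(Q, b))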
III. Constrained optimization
Notes 11, geometric optimality conditions
Notes 12, the KKT conditions (with bonus problem)
Notes 13, Lagrange duality
Notes 14, Fenchel duality
Notes 15, basic algorithms for constrained optimization (see the projected-gradient sketch following this list)
Notes 16, alternating direction methods
Notes 17, an alternating prox method
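
To make the constrained picture concrete, here is a hedged sketch (ours, not the course's) of projected gradient descent, one of the basic methods in the spirit of Notes 15: an ordinary gradient step followed by projection back onto the feasible set, here the Euclidean unit ball.

    import numpy as np

    def project_ball(x, radius=1.0):
        # Euclidean projection onto {x : ||x||_2 <= radius}: rescale if outside.
        n = np.linalg.norm(x)
        return x if n <= radius else (radius / n) * x

    def projected_gradient(grad, x0, step=0.1, iters=500):
        x = project_ball(np.asarray(x0, dtype=float))
        for _ in range(iters):
            x = project_ball(x - step * grad(x))   # gradient step, then project
        return x

    # Example: minimize ||x - c||^2 over the unit ball; the answer is c scaled to the boundary.
    c = np.array([2.0, 1.0])
    print(projected_gradient(lambda x: 2.0 * (x - c), np.zeros(2)), c / np.linalg.norm(c))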
IV. Further topics
Notes 18, convex relaxations for non-convex problems
Notes 18a, the spectral and nuclear norms (see the sketch after this list)
Notes 19, centralized distributed optimization using ADMM
Notes 20, decentralized distributed optimization
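
One bridge between Part IV and the earlier proximal machinery: the proximal operator of tau times the nuclear norm is soft-thresholding of the singular values. The sketch below is our own illustration tying Notes 18a back to Notes 10; it is not code from the course.

    import numpy as np

    def prox_nuclear(X, tau):
        # prox of tau*||X||_*: shrink each singular value by tau, flooring at zero.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    X = np.random.default_rng(0).standard_normal((5, 4))
    Y = prox_nuclear(X, tau=1.0)
    print(np.linalg.svd(X, compute_uv=False))   # original singular values
    print(np.linalg.svd(Y, compute_uv=False))   # each reduced by 1.0 (floored at 0)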