
An Introduction to JAX for Learning and Control Applications
This tutorial, run by the authors of JAX and related toolboxes, will introduce JAX basics, differentiable MPC, and implicit automatic differentiation, and will cover advanced topics such as sequential quadratic programming. More details will be provided in the coming weeks; a brief illustrative JAX snippet follows the session details below.
Presenters: Roy Frostig (Google), Stephen Tu (Google), and Sumeet Singh (Google)
Location: Levine Hall, room 101 (Wu and Chen Auditorium)
Time: June 14, 2023, 13:30-16:30 (tentative)
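
The sketch below is not tutorial material; it is a minimal, hedged illustration of the kind of JAX basics the session lists, namely composable function transformations such as grad, jit, and vmap. The toy objective and its parameters are assumptions made purely for illustration.

```python
import jax
import jax.numpy as jnp

def loss(theta, x):
    # A toy scalar objective; its form is an illustrative assumption.
    return jnp.sum((jnp.tanh(theta * x) - 1.0) ** 2)

dloss = jax.grad(loss)                              # reverse-mode gradient w.r.t. theta
fast_dloss = jax.jit(dloss)                         # compile with XLA
batched = jax.vmap(fast_dloss, in_axes=(None, 0))   # map over a batch of inputs x

xs = jnp.linspace(-1.0, 1.0, 5)
print(batched(0.5, xs))                             # five per-example gradients
```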

Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies
Gradient-based methods have been widely used for system design and optimization in diverse application domains. This tutorial surveys recent developments in policy optimization, a gradient-based iterative approach to feedback control synthesis popularized by the successes of reinforcement learning. The presenters take an interdisciplinary perspective that connects control theory, reinforcement learning, and large-scale optimization, and will discuss recently developed theoretical results on the optimization landscape, global convergence, and sample complexity of gradient-based methods for various continuous control problems. A schematic policy-gradient sketch follows the session details below.
Presenters: Bin Hu (UIUC), Kaiqing Zhang (MIT), Na Li (Harvard), Mehran Mesbahi (UW), Maryam Fazel (UW), Tamer Başar (UIUC)
Location: Levine Hall, room 101 (Wu and Chen Auditorium)
Time: June 14, 2023, 09:00-12:00 (tentative)
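
As a hedged illustration of the policy-optimization viewpoint the tutorial studies, the sketch below runs plain gradient descent directly over the entries of a linear state-feedback gain K for a finite-horizon, LQR-style quadratic cost. The dynamics, horizon, and step size are assumptions made only for illustration; the tutorial's results concern far more general settings.

```python
import jax
import jax.numpy as jnp

# Illustrative double-integrator-like dynamics (an assumption, not from the tutorial).
A = jnp.array([[1.0, 0.2], [0.0, 1.0]])
B = jnp.array([[0.0], [0.2]])
x0 = jnp.array([1.0, 0.0])

def lqr_cost(K):
    """Finite-horizon quadratic cost under the static feedback policy u_t = -K x_t."""
    def step(x, _):
        u = -K @ x
        x_next = A @ x + B @ u
        return x_next, x @ x + u @ u
    _, stage_costs = jax.lax.scan(step, x0, None, length=100)
    return jnp.sum(stage_costs)

grad_cost = jax.jit(jax.grad(lqr_cost))

K = jnp.zeros((1, 2))
for _ in range(200):                 # gradient descent over the policy parameters
    K = K - 1e-3 * grad_cost(K)
print(K, lqr_cost(K))
```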