OS Numerical Optimization: 'Deep Q-Learning for Infinite-Horizon Optimal Control Problems Governed by Parabolic PDEs' and 'Solving Lasso and the Sparse Plus Low-Rank Decomposition via ADMM'

Time
Tuesday, 26 March 2024
15:15 - 16:45

Location
F423

Organizers
B. Azmi & S. Volkwein

Speakers
Magnus Gohm and Valentin Weixler

On 26th March 2024 at 15:15, Magnus Gohm and Valentin Weixler will give two talks.

The first talk is titled 'Deep Q-Learning for Infinite-Horizon Optimal Control Problems Governed by Parabolic PDEs'.

Abstract: This thesis considers discounted infinite-horizon optimal control problems governed by parabolic partial differential equations. A finite element discretization is employed to reformulate these problems as linear-quadratic optimal control problems. To obtain an optimal control law, we enter the field of reinforcement learning, more specifically Q-learning, where the goal is to approximate the optimal Q-function, i.e. the optimal state-control value function. Using the Hamilton-Jacobi Deep Q-Learning (HJDQN) algorithm, we train a neural network that acts as the optimal control law. By restricting to Lipschitz-continuous control functions, the training process can be reduced to a single neural network, instead of the two required by most actor-critic methods such as the deep deterministic policy gradient (DDPG) algorithm. Techniques from Deep Q-Learning and Double Q-Learning are essential for this reduction. Finally, we analyse the HJDQN algorithm in numerical simulations; our examples cover linear-quadratic optimal control problems with linear and nonlinear parabolic partial differential equations.
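For orientation, a minimal sketch of the setup described in the abstract; the notation A, B, Q, R, λ below is ours and not taken from the talk. After finite element discretization, the problem takes the discounted linear-quadratic form

```latex
\min_{u}\; J(y_0;u) \;=\; \int_0^\infty e^{-\lambda t}\,\bigl( y(t)^\top Q\,y(t) + u(t)^\top R\,u(t) \bigr)\,\mathrm{d}t
\quad\text{s.t.}\quad \dot y(t) = A\,y(t) + B\,u(t),\quad y(0) = y_0,
```

with discount rate λ > 0, symmetric positive (semi-)definite weights Q and R, and matrices A, B arising from the spatial discretization of the parabolic PDE. Q-learning approximates the optimal state-control value function Q*(y, u); minimizing Q*(y, ·) over the control argument at each state then yields the optimal control law.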

The second talk is titled 'Solving Lasso and the Sparse Plus Low-Rank Decomposition via ADMM'.
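No abstract is provided for the second talk. For orientation only, the standard ADMM splitting for the Lasso problem min_x (1/2)||Ax - b||_2^2 + λ||x||_1 alternates a ridge-like linear solve, an elementwise soft-thresholding step, and a dual update (cf. Boyd et al., 2011). The following minimal NumPy sketch illustrates that iteration; all names are ours, and nothing here is taken from the talk itself:

```python
import numpy as np

def soft_threshold(v, kappa):
    # elementwise soft-thresholding: prox operator of kappa * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 via ADMM."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)  # sparse copy of x enforced by the l1 prox
    u = np.zeros(n)  # scaled dual variable
    # factor A^T A + rho*I once; the factor is reused in every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: ridge-like linear solve via the cached Cholesky factor
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding with threshold lam/rho
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the consensus constraint x = z
        u += x - z
    return z

# toy usage: recover a 5-sparse signal from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lasso_admm(A, b, lam=0.5)
```

The sparse plus low-rank decomposition mentioned in the title admits an analogous ADMM splitting in which the thresholding step for the low-rank block acts on singular values (singular value thresholding) rather than on entries.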
