Reinforcement Learning
This lecture (9 ECTS) will lay the foundations of reinforcement learning (RL). The lecture is divided into three parts: multi-armed bandits, tabular RL, and non-tabular RL.
- Multi-armed bandits (mostly from the algorithmic point of view) – 5 lectures
- Explore-then-commit
- Greedy
- UCB
- Boltzmann exploration
- Softmax (policy gradient)
- Tabular MDP basics – 5 lectures
- Foundations of dynamic programming
- Value iteration
- Policy iteration
- Tabular Q-learning, TD-learning – 6 lectures
- Monte Carlo evaluation
- Tsitsiklis' convergence proof of stochastic fixed-point iterations
- One-step approximate dynamic programming (TD(0), SARSA, Q-learning)
- Double Q-learning
- Multi-step approximate dynamic programming (n-step, forward and backward TD(λ))
- Policy Gradient Schemes – 10 lectures
- Policy gradient theorems
- Variance reduction tricks such as baselines and actor-critic
- Gradient descent and stochastic gradient descent
- Neural networks in RL
- SAC, TRPO, PPO
We will prove everything we consider necessary for a proper understanding of the algorithms, but we will also go into the coding (in Python). In many instances convergence proofs for RL algorithms are still open (worse, some algorithms are known to diverge). We will cover theoretical results around RL that often lead to good educated guesses for practical algorithms, even when the theoretical assumptions cannot be verified (or are violated).
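To give a first taste of the kind of coding done in the exercises, here is a minimal sketch (not part of the official course material) of the UCB algorithm on a Bernoulli bandit. The function name, the exploration constant c, and the Bernoulli reward model are assumptions made purely for this illustration.

```python
import numpy as np

def ucb(arm_means, horizon, rng=None, c=2.0):
    """Run the UCB index policy on a Bernoulli bandit and return the total reward.

    arm_means: true success probabilities of the arms (unknown to the learner),
    horizon: number of rounds to play, c: exploration constant (an assumed choice).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_arms = len(arm_means)
    counts = np.zeros(n_arms)        # how often each arm has been pulled
    estimates = np.zeros(n_arms)     # empirical mean reward of each arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1              # pull every arm once to initialise the estimates
        else:
            bonus = np.sqrt(c * np.log(t) / counts)
            arm = int(np.argmax(estimates + bonus))   # optimism in the face of uncertainty

        reward = rng.binomial(1, arm_means[arm])      # Bernoulli reward of the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running-mean update
        total_reward += reward

    return total_reward

# Example: two arms, the better one has success probability 0.6.
print(ucb([0.4, 0.6], horizon=10_000))
```

The lecture will make precise why the logarithmic exploration bonus above leads to provably small regret; the sketch is only meant to show how short such implementations typically are.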
Why is reinforcement learning useful?
Reinforcement learning is a type of machine learning that involves training an agent to make a sequence of decisions in an environment in order to maximize a reward. It is often used to control complex, dynamic systems or to optimize performance. Some applications of reinforcement learning include:
- Robotics: Reinforcement learning can be used to teach robots how to perform tasks by rewarding successful execution and punishing mistakes.
- Financial markets: Reinforcement learning can be used to develop trading strategies by learning how to take advantage of market conditions.
- Games: Reinforcement learning has been successfully used to control computer games by learning how to play against human or other computer opponents.
- Web optimization: Reinforcement learning can be used to optimize websites by learning how to control traffic on the site in order to achieve certain goals.
Overall, reinforcement learning offers a way to optimize complex systems by learning how to act in certain situations in order to maximize rewards.
Attention: The text above was written by ChatGPT, an AI tool that is itself based on reinforcement learning (and transformer networks). I do not quite agree with ChatGPT; financial markets do not seem to be particularly well suited to ML methods. Anyway, since we will encounter RL on many occasions in our future lives, it will be useful to know how RL works.
Target group
Students from the study programs Mathematics, WiMa, WiFo, and MMDS. We will cover the mathematical background of reinforcement learning; coding (in Python) will be part of the exercises.
Team
Prof. Dr. Leif Döring, André Ferdinand, Till Freihaut, Sara Klein, Marc Pritsch, Almut Röder, Leo Vela
Weekly schedule
Lectures:
Tuesday, B2, garden house (in the garden of B6, 26)
Thursday, B2, garden house (in the garden of B6, 26)
Tutorials:
Thursday, B1, garden house (in the garden of B6, 26)
Oral exams
Exams will be oral; here are some hints.
Dates: 1st to 5th of June, 26th to 28th of July, and 26th of August to 1st of September
Exercises
Link to the introduction repository: https://github.com/aferdina/IntroductionRL/
Link to the Videos
Repository for collaborating on the programming tasks: https://github.com/aferdina/RLFiniteGames
Solutions to the programming tasks: https://github.com/aferdina/Solution_Exercise
Lecture notes
Further reading
Sutton & Barto: “Reinforcement Learning – an Introduction” is available online. This covers all major ideas but skipps essentially all details. In essence, this lecture course follows the core ideas of Sutton & Barto but tries to include as much of the missing mathematics as possible.