
Reinforcement Learning Part 1

To be completed

Exercises

Last session we discussed the Q-Function, \[Q(s,a) = \sum_{s'}P(s'|s,a)[R(s,a,s') + \gamma \max_{a'}Q(s',a')]\] and the function for the optimal policy based on the results from the Q-Function:
\[\pi^*(s) = \mathop{\text{argmax }}\limits_aQ(s,a)\] We also discussed iterative estimation of the utilities and the policies. This session, we will implement an iterative estimation algorithm for the Q-values, known as Q-learning. This is a model-free, off-policy reinforcement learning algorithm.

The exercise outline below is based partly on Eirik’s assignment in 2022 and partly on the Gymnasium tutorial on Blackjack.

Note that I have not asked you explicitly to output any diagnostics along the way. You will almost certainly have to do this yourself, so that you know what is going on.

Task 1

The goal for this session is to implement an agent that can solve the Frozen Lake problem as well as possible, using Q-learning. The skeleton for the Agent will look like this:
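Something along these lines, with assumed constructor parameters (the exact names are up to you):

    class QLearningAgent:
        def __init__(self, env, learning_rate, initial_epsilon,
                     epsilon_decay, final_epsilon, discount_factor=0.95):
            # Record the parameters and initialise the Q-table (see 1A).
            ...

        def get_action(self, state):
            # Return an action in the range 0-3, epsilon-greedily (see 1B).
            ...

        def update(self, state, action, reward, terminated, next_state):
            # Apply the Q-learning update rule (see 1C).
            ...

        def decay_epsilon(self):
            # Reduce epsilon towards its final value (see 1D).
            ...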

Thus we need four methods. The most obvious ones are the constructor, the move generator, and the model updater. The last method reduces \(\epsilon\), which is the probability of making a random move instead of the best move according to the model.

In order to run the model, you can use the following script:
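For example, assuming the agent sketch above (the hyperparameter values are placeholders):

    import gymnasium as gym

    env = gym.make("FrozenLake-v1")
    agent = QLearningAgent(env, learning_rate=0.1, initial_epsilon=1.0,
                           epsilon_decay=0.001, final_epsilon=0.05)

    for episode in range(10_000):
        state, info = env.reset()
        done = False
        while not done:
            action = agent.get_action(state)
            next_state, reward, terminated, truncated, info = env.step(action)
            agent.update(state, action, reward, terminated, next_state)
            state = next_state
            done = terminated or truncated
        agent.decay_epsilon()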

1A. Constructor

Implement the constructor. You need to record all the parameters and initialise the Q-table. You can use Eirik’s initial Q-values below, or you can use a defaultdict, as the Blackjack tutorial does.

In this format, initialQ[state][action] is the tentative value for \(Q(\text{state},\text{action})\).
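If you choose the defaultdict route instead, a minimal sketch in the style of the Blackjack tutorial could be (this assumes the environment env has already been created):

    from collections import defaultdict
    import numpy as np

    # Any state seen for the first time maps to a zero vector, one entry per action.
    initialQ = defaultdict(lambda: np.zeros(env.action_space.n))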

1B. Move generator

The move generator get_action() has to return a valid action, that is, an integer in the 0–3 range for the Frozen Lake problem. With probability \(\epsilon\) you want to return a random action (see last session for a code example), and with probability \(1-\epsilon\), the action which maximises \(Q\) according to the current estimate.

  • Implement get_action().
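A possible sketch of an epsilon-greedy get_action(); self.env, self.epsilon and self.q_table are assumed attribute names:

    import numpy as np

    def get_action(self, state):
        # Explore with probability epsilon, otherwise exploit the current estimate.
        if np.random.random() < self.epsilon:
            return self.env.action_space.sample()
        return int(np.argmax(self.q_table[state]))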

1C. Model updater

Now we need some way to update the Q-table.
Q-learning is based on one very simple update rule: \[Q(s,a) \leftarrow Q(s,a) + \alpha\left(\left[ R(s,a,s') + \gamma \max\limits_{a'}Q(s',a')\right] - Q(s,a)\right),\] where \(\alpha\) is the learning rate, which controls the speed of convergence.

  • Implement update()
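A possible sketch of update() implementing the rule above; again, the attribute names are assumptions:

    def update(self, state, action, reward, terminated, next_state):
        # No future reward is available once the episode has terminated.
        future_q = 0.0 if terminated else np.max(self.q_table[next_state])
        td_error = reward + self.discount_factor * future_q - self.q_table[state][action]
        self.q_table[state][action] += self.learning_rate * td_error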

1D. Epsilon decay

  • What does the above line do?
  • Do the attribute names match the ones you have used?
  • Implement decay_epsilon().
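One common choice, sketched here with assumed attribute names, is linear decay towards a floor:

    def decay_epsilon(self):
        # Never let epsilon drop below its final value.
        self.epsilon = max(self.final_epsilon, self.epsilon - self.epsilon_decay)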

Task 2

In this task we will create functions to update our own q-table. For now, you can make the environment deterministic by turning off the ‘is_slippery’ argument when making the environment:
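For example (in Gymnasium the argument is called is_slippery):

    import gymnasium as gym

    env = gym.make("FrozenLake-v1", is_slippery=False)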

Part A

First we need to create an empty q-table. Since the gym framework supports Frozen Lake environments of different sizes, we need to initialize it with size “state_space x action_space”.
You can get them from:
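For a Gymnasium environment with discrete state and action spaces:

    n_states = env.observation_space.n    # 16 on the default 4x4 map
    n_actions = env.action_space.n        # 4 actions: left, down, right, up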

Create a function to initialize a q-table

You can use the following ‘skeleton’:
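A possible sketch, assuming a NumPy array representation (the function name is only a suggestion):

    import numpy as np

    def initialize_q_table(env):
        # One row per state, one column per action, all values start at zero.
        return np.zeros((env.observation_space.n, env.action_space.n))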

Part B

Implement a function to calculate the value to be updated (the part on the right-hand side of the arrow).

You can use the following ‘skeleton’:
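A possible sketch; the function name and argument order are assumptions, and it simply evaluates the right-hand side of the update rule from Task 1:

    import numpy as np

    def calculate_new_q_value(q_table, state, action, reward, next_state,
                              learning_rate=0.1, gamma=0.95):
        # Target = immediate reward plus discounted best value of the next state.
        td_target = reward + gamma * np.max(q_table[next_state])
        return q_table[state, action] + learning_rate * (td_target - q_table[state, action])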

Part C

Using the functions we implemented above, we want to update the q-table by simply having the agent play the game a lot.

Implement a Q-learning function; you can use the following ‘skeleton’ and/or take inspiration from the test_performance implementation.
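A possible sketch of such a loop, assuming the helper functions above and a Gymnasium environment; note the random fallback, which is discussed in the note below:

    import numpy as np

    def q_learning(env, q_table, n_episodes=10_000, learning_rate=0.1, gamma=0.95):
        for episode in range(n_episodes):
            state, info = env.reset()
            done = False
            while not done:
                # Greedy action, but fall back to a random one while all
                # Q-values for this state are still equal (see the note below).
                if np.all(q_table[state] == q_table[state][0]):
                    action = env.action_space.sample()
                else:
                    action = int(np.argmax(q_table[state]))
                next_state, reward, terminated, truncated, info = env.step(action)
                q_table[state, action] = calculate_new_q_value(
                    q_table, state, action, reward, next_state, learning_rate, gamma)
                state = next_state
                done = terminated or truncated
        return q_table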

Note that your agent might behave oddly (or not work at all) if you use your optimal policy on an empty q-table, so you may want to edit the policy to take a random action when it cannot differentiate between actions (e.g. when all Q-values for a state are equal).

Part D

Try out your algorithm:

  • Use the updated q-table with the play function and the test_performance function.
  • Print out the q-table.

Are there any potential problems?
What if you train it again with is_slippery=True?

Task 3

(From now on, we will play with a stochastic environment: set is_slippery=True.)

We need some way to encourage exploration, to prevent the agent from only trying to repeat the first sequence that got it to the goal.

There are multiple ways to implement this:

  1. We can set a static value for \(\epsilon\), and pick a random action \(a\) if some random number \(n\) is below \(\epsilon\).
  2. We usually want to encourage exploration in earlier training phases, and encourage exploitation in the later ones. We can therefore use a similar approach to 1, but with the addition of decaying \(\epsilon\) over time.
  3. A third option (the list is not exhaustive) is to create a policy that picks an action from a probability distribution weighted by the q-values. The weighting can then change over time to encourage exploration early and exploitation later. A modified version of this function could also be used when ‘playing’ the game, if you want a policy that does not necessarily always pick the option with the highest utility.

Part A

Implement _one_ of the functions above; you can use the following ‘skeleton’:
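A possible sketch of option 2 (epsilon-greedy selection with a decaying epsilon); the names are placeholders:

    import numpy as np

    def epsilon_greedy_action(env, q_table, state, epsilon):
        # Random action with probability epsilon, otherwise the greedy one.
        if np.random.random() < epsilon:
            return env.action_space.sample()
        return int(np.argmax(q_table[state]))

    # Inside the training loop, after each episode:
    #     epsilon = max(final_epsilon, epsilon - epsilon_decay)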

Part B

Try out your algorithm:

  • Use the updated q-table with the play function and the test_performance function.
  • Print out the q-table.

Task 4

Part A

Modify your q-learning algorithm to call test_performance every n episodes. Save the results in a table and plot them using matplotlib.
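A possible sketch, assuming test_performance(env, q_table) returns a single score (the exact signature of the provided helper may differ):

    import matplotlib.pyplot as plt

    def q_learning_with_eval(env, q_table, n_episodes=10_000, eval_every=500,
                             learning_rate=0.1, gamma=0.95):
        history = []
        for episode in range(n_episodes):
            # ... run one training episode exactly as in Task 2, Part C ...
            if episode % eval_every == 0:
                history.append((episode, test_performance(env, q_table)))
        return q_table, history

    # After training:
    #     episodes, scores = zip(*history)
    #     plt.plot(episodes, scores)
    #     plt.xlabel("Training episode")
    #     plt.ylabel("Performance (e.g. success rate)")
    #     plt.show()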

We now have a way to calculate the performance over time/training episodes.

Part B

Experiment with the different hyperparameters (epsilon, learning rate, gamma, etc.) and compare them using the method from Part A.
You can also try other versions of the Frozen Lake environment (e.g. the 8x8 map); there is also a function to create random maps.
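For example (generate_random_map ships with Gymnasium’s Frozen Lake implementation):

    import gymnasium as gym
    from gymnasium.envs.toy_text.frozen_lake import generate_random_map

    # Built-in 8x8 map, or a freshly generated random 8x8 map:
    env = gym.make("FrozenLake-v1", map_name="8x8", is_slippery=True)
    env = gym.make("FrozenLake-v1", desc=generate_random_map(size=8), is_slippery=True)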

Task 5 (Extra)

Please inform me if you get to this point early, as I might change the task, but for now:

Implement SARSA, and repeat experiments similar to those in Task 4 to compare the two algorithms.
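For reference, SARSA uses almost the same update rule as Q-learning, but the future term uses the action \(a'\) that the policy actually takes in \(s'\) rather than the maximising action, which is what makes SARSA on-policy: \[Q(s,a) \leftarrow Q(s,a) + \alpha\left(\left[ R(s,a,s') + \gamma Q(s',a')\right] - Q(s,a)\right).\]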