---
title: Adversarial Search
---

# Reading

R&N Chapter 6

+ The fundamental concept is **two-player, zero-sum games**
    + The basic solution technique is **minimax search**
+ The minimax search tree grows exponentially with depth
    + heuristic searches are therefore important (Section 6.3)

# Briefing


## Two-Player, Zero-Sum Games

+ **State-machine**
    - to-move: state $\to$ player
    - actions: state $\to$ {action}
    - result: state $\times$ action $\to$ state
    - utility: state $\times$ player $\to\mathbb{R}$
+ Minimax algorithm (see the Python sketch after this list)
    - maximises utility for the player currently to move
    - **Tree Search**: optimise utility bottom-up
        - **min nodes and max nodes**
    - exhaustive search
+ Caveats and variations
    - Branching factor
    - multi-player games - alliances and trust
    - co-operative games
    - $\alpha\beta$ pruning (see the sketch after this list)
    - heuristic searches
    - Type A and Type B strategies:
        - move generation
        - move evaluation
+ **Monte Carlo Tree Search** (see the sketch after this list)
    - Monte Carlo $\sim$ Stochastic Simulation
    - Balance Exploitation and Exploration
+ More Caveats
    + Stochastic Games - **Chance Nodes** (see the expectiminimax sketch after this list)
    + Partially Observable Games 
        - the percept no longer reveals the opponent's move
    + Limitations - Section 6.7
        + Intractable
        + Approximations and Assumptions
        + Individual moves - no sight of the bigger picture
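
A minimal Python sketch of minimax over the state-machine interface above. The names `to_move`, `actions`, `result` and `utility` mirror the bullets; `is_terminal` is an extra assumption used to detect leaf states, and none of the names need match the exercise template.

```python
def minimax_decision(game, state):
    """Choose the action that maximises utility for the player to move.

    `game` is assumed to provide to_move, actions, result and utility as
    listed above, plus a hypothetical is_terminal(state) predicate.
    """
    player = game.to_move(state)

    def value(s):
        # Leaf: evaluate the terminal state from the root player's viewpoint.
        if game.is_terminal(s):
            return game.utility(s, player)
        child_values = [value(game.result(s, a)) for a in game.actions(s)]
        # Max node if the root player is to move here, otherwise min node.
        return max(child_values) if game.to_move(s) == player else min(child_values)

    # Pick the move leading to the child state with the highest minimax value.
    return max(game.actions(state), key=lambda a: value(game.result(state, a)))
```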
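
The $\alpha\beta$ pruning sketch below returns the same decision as plain minimax but skips branches that cannot affect it. It uses the same hypothetical interface and additionally assumes that the two players strictly alternate moves.

```python
import math

def alphabeta_decision(game, state):
    """Minimax with alpha-beta pruning; same assumed interface as above."""
    player = game.to_move(state)

    def max_value(s, alpha, beta):
        if game.is_terminal(s):
            return game.utility(s, player)
        v = -math.inf
        for a in game.actions(s):
            v = max(v, min_value(game.result(s, a), alpha, beta))
            if v >= beta:        # the minimising player above will avoid this branch
                return v
            alpha = max(alpha, v)
        return v

    def min_value(s, alpha, beta):
        if game.is_terminal(s):
            return game.utility(s, player)
        v = math.inf
        for a in game.actions(s):
            v = min(v, max_value(game.result(s, a), alpha, beta))
            if v <= alpha:       # the maximising player above will avoid this branch
                return v
            beta = min(beta, v)
        return v

    best_action, best_value = None, -math.inf
    for a in game.actions(state):
        v = min_value(game.result(state, a), best_value, math.inf)
        if v > best_value:
            best_action, best_value = a, v
    return best_action
```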
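
A compact sketch of Monte Carlo Tree Search with UCB1 selection, again over the same hypothetical interface. It additionally assumes that `utility` returns a reward in [0, 1] at terminal states (e.g. 1 for a win, 0.5 for a draw, 0 for a loss), so that the exploitation and exploration terms are on the same scale.

```python
import math
import random

class Node:
    """One node in the Monte Carlo search tree."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action     # the move that led from parent to this node
        self.children = []
        self.untried = None      # actions not yet expanded
        self.visits = 0
        self.reward = 0.0        # total reward for the player who moved into this node

def mcts_decision(game, state, iterations=1000, c=1.4):
    """Choose a move by Monte Carlo Tree Search with UCB1 selection."""
    root = Node(state)
    root.untried = list(game.actions(state))

    def ucb1(node):
        # Exploitation (average reward) plus exploration bonus.
        return (node.reward / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: add one unexplored child, if the node is not terminal.
        if node.untried:
            action = node.untried.pop()
            child = Node(game.result(node.state, action), parent=node, action=action)
            child.untried = list(game.actions(child.state))
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from here to a terminal state.
        playout = node.state
        while not game.is_terminal(playout):
            playout = game.result(playout, random.choice(game.actions(playout)))
        # 4. Backpropagation: credit every node on the path from the
        #    viewpoint of the player whose move created it.
        while node is not None:
            node.visits += 1
            if node.parent is not None:
                mover = game.to_move(node.parent.state)
                node.reward += game.utility(playout, mover)
            node = node.parent

    # Play the most visited move (a common, robust choice).
    return max(root.children, key=lambda n: n.visits).action
```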
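
For the chance-node bullet, a sketch of the expectiminimax value of a state. The helpers `is_chance` and `chance_outcomes` (returning `(probability, next_state)` pairs) are hypothetical additions, not part of the interface listed above.

```python
def expectiminimax(game, state, player):
    """Value of `state` for `player` when some transitions are random.

    Hypothetical helpers (not in the interface above): game.is_chance(state)
    and game.chance_outcomes(state), the latter yielding
    (probability, next_state) pairs.
    """
    if game.is_terminal(state):
        return game.utility(state, player)
    if game.is_chance(state):
        # Chance node: probability-weighted average of the outcome values.
        return sum(p * expectiminimax(game, s, player)
                   for p, s in game.chance_outcomes(state))
    values = [expectiminimax(game, game.result(state, a), player)
              for a in game.actions(state)]
    # Max node for the given player, min node for the opponent.
    return max(values) if game.to_move(state) == player else min(values)
```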


# Exercise

## Tic Tac Toe

+ [Code from github](https://github.com/hgeorgsch/pai-exercises/tree/main/TicTacToe)

I was not able to find suitable exercises on CodinGame, so instead,
I have provided a simulator for you.  You should:

1.  Clone the git repo, `git clone https://github.com/hgeorgsch/pai-exercises.git`
2.  Change to the `TicTacToe` subdirectory
3.  Modify the template to implement your intelligent agent.
    You should use the minimax algorithm as described for two-player,
    zero-sum games.
4.  Play the game using the test scripts: `python3 ttt.py`
5.  Consult the README file for details.

This assumes that you have git and python3 installed.

## Discussion

From [R&N Exercises](https://aimacode.github.io/aima-exercises/game-playing-exercises/)

+ Exercise 1
+ Exercise 3

## Monte Carlo Tree Search

+ [Can you save the forest - Episode 1](https://www.codingame.com/training/medium/can-you-save-the-forest---episode-1)