
Adversarial Search



# Reading

R&N Chapter 6

+ The fundamental model is the **two-player, zero-sum game**
    + The basic solution technique is **minimax search**
+ The minimax search tree grows exponentially with depth
    + heuristic searches are therefore important (Section 6.3)

# Briefing

# Exercise

## Two-Player, Zero-Sum Games

+ State-machine
    - to-move: state $\to$ player
    - actions: state $\to$ {action}
    - result: state $\times$ action $\to$ state
    - utility: state $\times$ player $\to\mathbb{R}$
+ Minimax algorithm:
    - maximises utility for the player currently to move
    - exhaustive search of the game tree
    - Tree search: utilities are propagated bottom-up from terminal states
+ Caveats and variations
    - multi-player games - alliances and trust
    - co-operative games
    - $\alpha\beta$ pruning
    - heuristic searches
    - Type A (brute-force) and Type B (selective) strategies:
        - move generation
        - move evaluation
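
The state-machine interface and bottom-up minimax above can be sketched in Python. The concrete game below (a Nim-like subtraction game: remove 1 or 2 counters, the player taking the last counter wins) is purely illustrative and not part of the exercise; the function names mirror the R&N formulation.

```python
# Two-player, zero-sum minimax over the state-machine interface:
# to_move, actions, result, is_terminal, utility.
# A state is (counters_left, player_to_move), players are 0 and 1.

def to_move(state):
    return state[1]

def actions(state):
    n, _ = state
    return [k for k in (1, 2) if k <= n]

def result(state, action):
    n, player = state
    return (n - action, 1 - player)

def is_terminal(state):
    return state[0] == 0

def utility(state, player):
    # The player who just moved took the last counter and wins,
    # so the player now to move has lost.
    return -1 if to_move(state) == player else 1

def minimax(state, player):
    """Value of `state` from `player`'s point of view, assuming
    optimal play by both sides (exhaustive tree search, utilities
    propagated bottom-up from terminal states)."""
    if is_terminal(state):
        return utility(state, player)
    values = [minimax(result(state, a), player) for a in actions(state)]
    # The player to move maximises their own utility; in a zero-sum
    # game the opponent's best move minimises ours.
    return max(values) if to_move(state) == player else min(values)

def best_action(state):
    player = to_move(state)
    return max(actions(state),
               key=lambda a: minimax(result(state, a), player))
```

In this toy game, positions with a multiple of 3 counters are lost for the player to move; from any other position the best action leaves the opponent a multiple of 3.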

# Exercise

## Tic Tac Toe

+ [Code from github](

I was not able to find suitable exercises on CodinGame, so instead,
I have provided a simulator for you.  You should 

1.  Clone the git repo, `git clone`
2.  Change to the `TicTacToe` subdirectory
3.  Modify the template to implement your intelligent agent.
    You should use the minimax algorithm as described for two-player,
    zero-sum games.
4.  Play the game, using the test scripts: `python3`
5.  Consult the README file for details.

This assumes that you have git and python3 installed.
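
As a starting point for step 3, a minimal minimax player for Tic Tac Toe might look like the sketch below. The board representation (a tuple of 9 cells, each `'X'`, `'O'`, or `None`, with `'X'` moving first) and the function names are my assumptions, not the template's actual interface; adapt them to match the repo.

```python
# Indices of the eight winning lines on a 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def to_move(board):
    # 'X' moves first, so 'X' is to move whenever the counts are equal.
    return 'X' if board.count('X') == board.count('O') else 'O'

def minimax(board, player):
    """Utility of `board` for `player` (+1 win, 0 draw, -1 loss)
    under optimal play by both sides."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if None not in board:
        return 0  # draw
    mover = to_move(board)
    values = [minimax(board[:i] + (mover,) + board[i + 1:], player)
              for i, cell in enumerate(board) if cell is None]
    return max(values) if mover == player else min(values)

def best_move(board):
    """Index of the optimal cell for the player to move."""
    mover = to_move(board)
    empties = [i for i, c in enumerate(board) if c is None]
    return max(empties,
               key=lambda i: minimax(board[:i] + (mover,) + board[i + 1:],
                                     mover))
```

Note that the value of the empty board is 0: Tic Tac Toe is a draw under optimal play, which is a useful sanity check for your agent.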

## Discussion

From [R&N Exercises](

+ Exercise 1
+ Exercise 3