Regret Minimization in Games with Incomplete Information (CFR)

The paper Regret Minimization in Games with Incomplete Information introduces counterfactual regret and shows that minimizing counterfactual regret through self-play converges to a Nash equilibrium. The algorithm is called Counterfactual Regret Minimization (CFR).

The paper Monte Carlo Sampling for Regret Minimization in Extensive Games introduces Monte Carlo Counterfactual Regret Minimization (MCCFR), where we sample from the game tree and estimate the regrets.

We have tried to keep our Python implementation easy to understand, like a tutorial. We run it on a very simple imperfect-information game called Kuhn poker.


Introduction

We implement Monte Carlo Counterfactual Regret Minimization (MCCFR) with chance sampling (CS). It iteratively explores part of the game tree by trying all player actions, but sampling chance events. Chance events are things like dealing cards; they are sampled once per iteration. Then it calculates, for each action, the regret of following the current strategy instead of taking that action. Then it updates the strategy based on these regrets for the next iteration, using regret matching. Finally, it computes the average of the strategies over all iterations, which is very close to a Nash equilibrium if we run enough iterations.

We will first introduce the mathematical notation and theory.

Player

A player is denoted by $i \in N$, where $N$ is the set of players.

History

History $h$ is a sequence of actions, including chance events, and $H$ is the set of all histories.

$Z \subseteq H$ is the set of terminal histories (game over).

Action

Action $a \in A(h)$, where $A(h)$ is the set of actions available at a non-terminal history $h$.

Information Set

Information set $I_i \in \mathcal{I}_i$ for player $i$ is similar to a history $h \in H$ but only contains the actions visible to player $i$. That is, the history $h$ will contain actions/events such as the cards dealt to the opposing player, while $I_i$ will not have them.

$\mathcal{I}_i$ is known as the information partition of player $i$.

An information set $I$ can be identified with the set of all histories that belong to it; i.e. all those histories look the same in the eyes of the player.

Strategy

Strategy of player $i$, $\sigma_i \in \Sigma_i$, is a distribution over actions $A(I_i)$, where $\Sigma_i$ is the set of all strategies for player $i$. The strategy on the $t$-th iteration is denoted by $\sigma^t_i$.

A strategy is defined as the probability of taking each action $a \in A(I)$ at a given information set $I$; $\sigma_i(I)(a)$ is the probability of taking action $a$.

$\sigma$ is the strategy profile, which consists of the strategies of all players, $\sigma_1, \sigma_2, \ldots$

$\sigma_{-i}$ is the set of strategies of all players except $\sigma_i$.
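
In the implementation below, a strategy at a single information set is stored simply as a dictionary from actions to probabilities. For example (the action names and numbers here are illustrative, not from the original source):

strategy = {'b': 0.3, 'p': 0.7}  # sigma_i(I)('b') = 0.3, sigma_i(I)('p') = 0.7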

Probability of History

$\pi^\sigma(h)$ is the probability of reaching the history $h$ with the strategy profile $\sigma$. $\pi^\sigma_{-i}(h)$ is the probability of reaching $h$ without player $i$'s contribution; i.e. as if player $i$ took the actions needed to follow $h$ with a probability of $1$.

$\pi^\sigma_i(h)$ is the probability of reaching $h$ with only player $i$'s contribution. That is,

$$\pi^\sigma(h) = \pi^\sigma_i(h) \, \pi^\sigma_{-i}(h)$$

The probability of reaching an information set $I$ is,

$$\pi^\sigma(I) = \sum_{h \in I} \pi^\sigma(h)$$
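
As a quick worked example (the history and numbers are illustrative, assuming the Kuhn poker deal where each ordered pair of cards has probability $\frac{1}{6}$): for a history $h$ in which chance deals the cards and player $1$ then bets with probability $\sigma_1(I)(\text{bet})$,

$$\pi^\sigma(h) = \tfrac{1}{6}\,\sigma_1(I)(\text{bet}), \qquad \pi^\sigma_1(h) = \sigma_1(I)(\text{bet}), \qquad \pi^\sigma_{-1}(h) = \tfrac{1}{6}$$

since chance's contribution is counted as part of $\pi^\sigma_{-1}$.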

Utility (Payoff)

The terminal utility is the utility (or payoff) of a player $i$ for a terminal history $h$,

$$u_i(h) \quad \text{where } h \in Z$$

$$u_i(\sigma) = \sum_{h \in Z} u_i(h) \, \pi^\sigma(h)$$

is the expected utility (payoff) for player $i$ with the strategy profile $\sigma$.

Nash Equilibrium

Nash equilibrium is a state where none of the players can increase their expected utility (or payoff) by changing their strategy alone.

For two players, a Nash equilibrium is a strategy profile $\sigma$ where

$$u_1(\sigma) \ge \max_{\sigma'_1 \in \Sigma_1} u_1(\sigma'_1, \sigma_2)$$

$$u_2(\sigma) \ge \max_{\sigma'_2 \in \Sigma_2} u_2(\sigma_1, \sigma'_2)$$

An $\epsilon$-Nash equilibrium satisfies,

$$u_1(\sigma) + \epsilon \ge \max_{\sigma'_1 \in \Sigma_1} u_1(\sigma'_1, \sigma_2)$$

$$u_2(\sigma) + \epsilon \ge \max_{\sigma'_2 \in \Sigma_2} u_2(\sigma_1, \sigma'_2)$$

Regret Minimization

Regret is the utility (or payoff) that the player didn't get because she didn't follow the optimal strategy or take the best action.

The average overall regret for player $i$ is the average regret of not following the optimal strategy over all $T$ rounds of iterations,

$$R^T_i = \frac{1}{T} \max_{\sigma^*_i \in \Sigma_i} \sum_{t=1}^T \Big( u_i(\sigma^*_i, \sigma^t_{-i}) - u_i(\sigma^t) \Big)$$

where $\sigma^t$ is the strategy profile of all players in iteration $t$, and

$(\sigma^*_i, \sigma^t_{-i})$ is the strategy profile $\sigma^t$ with player $i$'s strategy replaced with $\sigma^*_i$.

The average strategy is the average of the strategies followed in each round, for all $I \in \mathcal{I}$, $a \in A(I)$,

$$\bar{\sigma}^T_i(I)(a) = \frac{\sum_{t=1}^T \pi^{\sigma^t}_i(I) \, \sigma^t(I)(a)}{\sum_{t=1}^T \pi^{\sigma^t}_i(I)}$$

That is, $R^T_i$ is the mean regret of not having played the optimal strategy.
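
As a small numeric illustration of the average strategy formula (the numbers are made up): suppose over $T = 2$ iterations an information set $I$ is reached with $\pi^{\sigma^1}_i(I) = 1.0$ and $\pi^{\sigma^2}_i(I) = 0.5$, and the strategies assign $\sigma^1(I)(a) = 0.2$ and $\sigma^2(I)(a) = 0.8$ to some action $a$. Then

$$\bar{\sigma}^2_i(I)(a) = \frac{1.0 \cdot 0.2 + 0.5 \cdot 0.8}{1.0 + 0.5} = \frac{0.6}{1.5} = 0.4$$

so iterations where the player was more likely to reach $I$ count more towards the average.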

If $R^T_i < \epsilon$ for all players $i$, then $\bar{\sigma}^T$ is a $2\epsilon$-Nash equilibrium.

Since $u_2 = -u_1$ because it's a zero-sum game, we can add $R^T_1$ and $R^T_2$ and the second terms will cancel out,

$$2\epsilon > R^T_1 + R^T_2 = \frac{1}{T} \max_{\sigma^*_1 \in \Sigma_1} \sum_{t=1}^T u_1(\sigma^*_1, \sigma^t_2) + \frac{1}{T} \max_{\sigma^*_2 \in \Sigma_2} \sum_{t=1}^T u_2(\sigma^t_1, \sigma^*_2)$$

The average of utilities over a set of strategies is equal to the utility of the average strategy,

$$\frac{1}{T} \sum_{t=1}^T u_1(\sigma^*_1, \sigma^t_2) = u_1(\sigma^*_1, \bar{\sigma}^T_2)$$

Therefore,

$$2\epsilon > \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) + \max_{\sigma^*_2 \in \Sigma_2} u_2(\bar{\sigma}^T_1, \sigma^*_2)$$

From the definition of $\max$,

$$\max_{\sigma^*_2 \in \Sigma_2} u_2(\bar{\sigma}^T_1, \sigma^*_2) \ge u_2(\bar{\sigma}^T_1, \bar{\sigma}^T_2) = -u_1(\bar{\sigma}^T_1, \bar{\sigma}^T_2)$$

Then,

$$2\epsilon > \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2) - u_1(\bar{\sigma}^T_1, \bar{\sigma}^T_2)$$

that is,

$$u_1(\bar{\sigma}^T_1, \bar{\sigma}^T_2) + 2\epsilon > \max_{\sigma^*_1 \in \Sigma_1} u_1(\sigma^*_1, \bar{\sigma}^T_2)$$

This is the $2\epsilon$-Nash equilibrium condition for player 1; the condition for player 2 follows symmetrically. You can similarly prove it for games with more than two players.

So we need to minimize $R^T_i$ to get close to a Nash equilibrium.

Counterfactual regret

Counterfactual value $v_i(\sigma, I)$ is the expected utility for player $i$ if player $i$ tried to reach $I$ (i.e. took the actions leading to $I$ with a probability of $1$),

$$v_i(\sigma, I) = \sum_{z \in Z_I} \pi^\sigma_{-i}(z[I]) \, \pi^\sigma(z[I], z) \, u_i(z)$$

where $Z_I$ is the set of terminal histories reachable from $I$, $z[I]$ is the prefix of $z$ up to $I$, and $\pi^\sigma(z[I], z)$ is the probability of reaching $z$ from $z[I]$.
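
For instance (a hypothetical, minimal case): if only one terminal history $z$ follows $I$, reached from $z[I]$ by one of our own actions taken with probability $q$ under $\sigma$ and one opponent action taken with probability $p$, then $Z_I = \{z\}$ and

$$v_i(\sigma, I) = \pi^\sigma_{-i}(z[I]) \cdot p \, q \cdot u_i(z)$$

i.e. our own probability of reaching $z[I]$ does not appear, but our play after $I$ does.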

Immediate counterfactual regret is,

$$R^T_{i,\mathrm{imm}}(I) = \frac{1}{T} \max_{a \in A(I)} \sum_{t=1}^T \Big( v_i(\sigma^t|_{I \to a}, I) - v_i(\sigma^t, I) \Big)$$

where $\sigma|_{I \to a}$ is the strategy profile $\sigma$ with the modification of always taking action $a$ at information set $I$.

The paper proves that (Theorem 3),

$$R^T_i \le \sum_{I \in \mathcal{I}_i} R^{T,+}_{i,\mathrm{imm}}(I)$$

where

$$R^{T,+}_{i,\mathrm{imm}}(I) = \max\big(R^T_{i,\mathrm{imm}}(I), 0\big)$$

Regret Matching

The strategy is calculated using regret matching.

The regret $R^T_i(I, a)$ for each information set and action pair is maintained,

$$R^T_i(I, a) = \frac{1}{T} \sum_{t=1}^T \Big( v_i(\sigma^t|_{I \to a}, I) - v_i(\sigma^t, I) \Big)$$

and the strategy is calculated with regret matching,

$$\sigma^{T+1}_i(I)(a) =
\begin{cases}
\dfrac{R^{T,+}_i(I, a)}{\sum_{a' \in A(I)} R^{T,+}_i(I, a')} & \text{if } \sum_{a' \in A(I)} R^{T,+}_i(I, a') > 0 \\[2ex]
\dfrac{1}{|A(I)|} & \text{otherwise}
\end{cases}$$

where

$$R^{T,+}_i(I, a) = \max\big(R^T_i(I, a), 0\big)$$

The paper Regret Minimization in Games with Incomplete Information proves that if the strategy is selected according to the above equation, $R^T_i$ shrinks in proportion to $\frac{1}{\sqrt{T}}$, and the average strategy therefore reaches an $\epsilon$-Nash equilibrium.
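
Regret matching itself is only a few lines of code. Here is a standalone toy snippet (not part of the implementation below; the action names and regret values are made up) that computes a strategy from a table of regrets:

from typing import Dict

def regret_matching(regrets: Dict[str, float]) -> Dict[str, float]:
    # Clip negative regrets to zero: R^{T,+}(I, a) = max(R^T(I, a), 0)
    positive = {a: max(r, 0.0) for a, r in regrets.items()}
    total = sum(positive.values())
    if total > 0:
        # Play each action in proportion to its positive regret
        return {a: r / total for a, r in positive.items()}
    # No positive regret: fall back to the uniform strategy
    return {a: 1.0 / len(regrets) for a in regrets}

# {'b': 0.75, 'p': 0.25}: 'b' has three times the positive regret of 'p'
print(regret_matching({'b': 3.0, 'p': 1.0}))
# {'b': 1.0, 'p': 0.0}: negative regrets are clipped to zero
print(regret_matching({'b': 2.0, 'p': -1.0}))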

Monte Carlo CFR (MCCFR)

Computing $v_i(\sigma, I)$ requires expanding the full game tree on each iteration.

The paper Monte Carlo Sampling for Regret Minimization in Extensive Games shows we can sample from the game tree and estimate the regrets.

$\mathcal{Q} = \{Q_1, \ldots, Q_r\}$ is a set of subsets of $Z$ ($Q_j \subseteq Z$), and we look at only a single block $Q_j$ in an iteration. The union of all subsets spans $Z$ ($Q_1 \cup Q_2 \cup \ldots \cup Q_r = Z$). $q_j$ is the probability of picking block $Q_j$.

$q(z)$ is the probability of picking $z$ in the current iteration; i.e. $q(z) = \sum_{j : z \in Q_j} q_j$, the sum of $q_j$ over all blocks $Q_j$ that contain $z$.

Then we get the sampled counterfactual value for block $j$,

$$\tilde{v}_i(\sigma, I \mid j) = \sum_{z \in Q_j \cap Z_I} \frac{1}{q(z)} \, \pi^\sigma_{-i}(z[I]) \, \pi^\sigma(z[I], z) \, u_i(z)$$

The paper shows, with a simple proof, that

$$\mathbb{E}_{j \sim q_j} \big[ \tilde{v}_i(\sigma, I \mid j) \big] = v_i(\sigma, I)$$

Therefore we can sample a part of the game tree and calculate the regrets. We calculate an estimate of the regrets,

$$\tilde{R}^T_i(I, a) = \frac{1}{T} \sum_{t=1}^T \Big( \tilde{v}_i(\sigma^t|_{I \to a}, I) - \tilde{v}_i(\sigma^t, I) \Big)$$

and use that to update $R^T_i(I, a)$ and calculate the strategy $\sigma^{T+1}_i(I)(a)$ on each iteration. Finally, we compute the overall average strategy $\bar{\sigma}^T_i(I)(a)$.
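
In the chance-sampling variant implemented below (this concrete instantiation is standard for chance-sampled MCCFR and is stated here only for orientation), each block $Q_j$ contains the terminal histories consistent with one particular assignment of the chance outcomes, e.g. one deal of the cards, and the block is picked with the natural probability of those outcomes. Hence, for any $z$ in the sampled block,

$$q(z) = \pi_c(z)$$

where $\pi_c(z)$ denotes the product of the probabilities of the chance events along $z$.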

Here is a Kuhn poker implementation where you can try out this CFR implementation.

Let's dive into the code!

from typing import NewType, Dict, List, Callable, cast

from labml import monit, tracker, logger, experiment
from labml.configs import BaseConfigs, option

A player $i \in N$, where $N$ is the set of players.

Player = NewType('Player', int)

Action $a \in A(h)$, where $h$ is a non-terminal history.

Action = NewType('Action', str)

History

History $h$ is a sequence of actions, including chance events, and $H$ is the set of all histories.

This class should be extended with game-specific logic.

class History:

Whether it's a terminal history; i.e. game over.

    def is_terminal(self):
        raise NotImplementedError()

Utility of player $i$ for a terminal history $h$, $u_i(h)$ where $h \in Z$.

    def terminal_utility(self, i: Player) -> float:
        raise NotImplementedError()

Get the current player, denoted by $P(h)$, where $P$ is known as the player function.

If $P(h) = c$, it means the current event is a chance event, something like dealing cards or opening community cards in poker.

    def player(self) -> Player:
        raise NotImplementedError()

Whether the next step is a chance step; something like dealing a new card.

    def is_chance(self) -> bool:
        raise NotImplementedError()

Sample a chance event when $P(h) = c$.

    def sample_chance(self) -> Action:
        raise NotImplementedError()

Add an action to the history.

    def __add__(self, action: Action):
        raise NotImplementedError()

Get the information set key for the current player.

    def info_set_key(self) -> str:
        raise NotImplementedError()

Create a new information set for the current player

    def new_info_set(self) -> 'InfoSet':
        raise NotImplementedError()

Human readable representation

    def __repr__(self):
        raise NotImplementedError()

Information Set

class InfoSet:

Unique key identifying the information set

    key: str

$\sigma_i(I)(a)$, the strategy of player $i$ at this information set.

    strategy: Dict[Action, float]

Total regret of not taking each action $a \in A(I_i)$,

$$R^T_i(I, a) = \frac{1}{T} \sum_{t=1}^T \Big( v_i(\sigma^t|_{I \to a}, I) - v_i(\sigma^t, I) \Big)$$

We maintain $T R^T_i(I, a)$ instead of $R^T_i(I, a)$ since the $\frac{1}{T}$ term cancels out anyway when computing the strategy.

    regret: Dict[Action, float]

We maintain the cumulative strategy, $\sum_{t=1}^T \pi^{\sigma^t}_i(I) \sigma^t(I)(a)$, to compute the overall average strategy,

$$\bar{\sigma}^T_i(I)(a) = \frac{\sum_{t=1}^T \pi^{\sigma^t}_i(I) \, \sigma^t(I)(a)}{\sum_{t=1}^T \pi^{\sigma^t}_i(I)}$$

    cumulative_strategy: Dict[Action, float]

Initialize

    def __init__(self, key: str):
        self.key = key
        self.regret = {a: 0 for a in self.actions()}
        self.cumulative_strategy = {a: 0 for a in self.actions()}
        self.calculate_strategy()

Actions

    def actions(self) -> List[Action]:
        raise NotImplementedError()

Load information set from a saved dictionary

    @staticmethod
    def from_dict(data: Dict[str, any]) -> 'InfoSet':
        raise NotImplementedError()

Save the information set to a dictionary

    def to_dict(self):
        return {
            'key': self.key,
            'regret': self.regret,
            'average_strategy': self.cumulative_strategy,
        }

Load data from a saved dictionary

    def load_dict(self, data: Dict[str, any]):
        self.regret = data['regret']
        self.cumulative_strategy = data['average_strategy']
        self.calculate_strategy()

Calculate strategy

Calculate the current strategy using regret matching,

$$\sigma^{T+1}_i(I)(a) =
\begin{cases}
\dfrac{R^{T,+}_i(I, a)}{\sum_{a' \in A(I)} R^{T,+}_i(I, a')} & \text{if } \sum_{a' \in A(I)} R^{T,+}_i(I, a') > 0 \\[2ex]
\dfrac{1}{|A(I)|} & \text{otherwise}
\end{cases}$$

where $R^{T,+}_i(I, a) = \max\big(R^T_i(I, a), 0\big)$.

    def calculate_strategy(self):

        regret = {a: max(r, 0) for a, r in self.regret.items()}

        regret_sum = sum(regret.values())

If $\sum_{a' \in A(I)} R^{T,+}_i(I, a') > 0$,

        if regret_sum > 0:

            self.strategy = {a: r / regret_sum for a, r in regret.items()}

Otherwise,

        else:

            count = len(list(a for a in self.regret))

            self.strategy = {a: 1 / count for a, r in regret.items()}

Get average strategy

    def get_average_strategy(self):

        cum_strategy = {a: self.cumulative_strategy.get(a, 0.) for a in self.actions()}

        strategy_sum = sum(cum_strategy.values())

If $\sum_{t=1}^T \pi^{\sigma^t}_i(I) > 0$,

        if strategy_sum > 0:

            return {a: s / strategy_sum for a, s in cum_strategy.items()}

Otherwise,

        else:

            count = len(list(a for a in cum_strategy))

            return {a: 1 / count for a, r in cum_strategy.items()}

Human readable representation

    def __repr__(self):
        raise NotImplementedError()
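
Before moving on to the algorithm, here is a toy sketch (not from the original source, and not the Kuhn poker implementation linked above) of how History and InfoSet might be extended for a tiny made-up bluffing game, just to illustrate the interface. It reuses the imports at the top of this file plus random.choice. Chance flips a coin seen only by player 0; player 0 bets ('b') or checks ('p'); on a bet, player 1 calls ('b') or folds ('p') without seeing the coin.

from random import choice


class ToyInfoSet(InfoSet):
    def actions(self) -> List[Action]:
        # Both decision points have the same two actions in this toy game
        return [cast(Action, 'b'), cast(Action, 'p')]

    @staticmethod
    def from_dict(data: Dict[str, any]) -> 'ToyInfoSet':
        info_set = ToyInfoSet(data['key'])
        info_set.load_dict(data)
        return info_set

    def __repr__(self):
        return f'InfoSet({self.key}, {self.strategy})'


class ToyHistory(History):
    def __init__(self, history: str = ''):
        # e.g. 'Hbp' means: coin is heads, player 0 bets, player 1 folds
        self.history = history

    def is_terminal(self):
        return self.history[1:] in ('p', 'bb', 'bp')

    def terminal_utility(self, i: Player) -> float:
        coin, moves = self.history[0], self.history[1:]
        if moves == 'p':      # player 0 checked
            u0 = 0.
        elif moves == 'bp':   # player 1 folded to the bet
            u0 = 1.
        else:                 # player 1 called the bet
            u0 = 2. if coin == 'H' else -2.
        # Zero-sum: player 1 gets the negation of player 0's utility
        return u0 if i == 0 else -u0

    def is_chance(self) -> bool:
        return len(self.history) == 0

    def sample_chance(self) -> Action:
        return cast(Action, choice(['H', 'T']))

    def player(self) -> Player:
        return cast(Player, len(self.history) - 1)

    def __add__(self, action: Action):
        return ToyHistory(self.history + action)

    def info_set_key(self) -> str:
        # Player 0 sees the coin; player 1 only sees the betting so far
        return self.history if self.player() == 0 else self.history[1:]

    def new_info_set(self) -> 'InfoSet':
        return ToyInfoSet(self.info_set_key())

    def __repr__(self):
        return self.history

Such a game could then be passed to the CFR class below via create_new_history=lambda: ToyHistory().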

Counterfactual Regret Minimization (CFR) Algorithm

We do chance sampling (CS) where all the chance events (nodes) are sampled and all other events (nodes) are explored.

We can ignore the $\frac{1}{q(z)}$ term, since with chance sampling it is the same for all terminal histories in an iteration, and it cancels out when calculating the strategy (it is common to the numerator and the denominator of regret matching).

class CFR:

$\mathcal{I}$, the set of all information sets.

    info_sets: Dict[str, InfoSet]
  • create_new_history creates a new empty history
  • epochs is the number of iterations to train on
  • n_players is the number of players
    def __init__(self, *,
                 create_new_history: Callable[[], History],
                 epochs: int,
                 n_players: int = 2):
        self.n_players = n_players
        self.epochs = epochs
        self.create_new_history = create_new_history

A dictionary for the set of all information sets

        self.info_sets = {}

Tracker for analytics

        self.tracker = InfoSetTracker()

Returns the information set of the current player for a given history

    def _get_info_set(self, h: History):
        info_set_key = h.info_set_key()
        if info_set_key not in self.info_sets:
            self.info_sets[info_set_key] = h.new_info_set()
        return self.info_sets[info_set_key]

Walk Tree

This function walks the game tree.

  • h is the current history
  • i is the player that we are computing regrets of
  • pi_i is $\pi^{\sigma^t}_i(h)$
  • pi_neg_i is $\pi^{\sigma^t}_{-i}(h)$

It returns the expected utility for the history $h$,

$$\sum_{z \in Z_h} \pi^\sigma(h, z) \, u_i(z)$$

where $Z_h$ is the set of terminal histories with prefix $h$.

While walking the tree, it updates the total regrets $R^T_i(I, a)$.

    def walk_tree(self, h: History, i: Player, pi_i: float, pi_neg_i: float) -> float:

If it's a terminal history, return the terminal utility $u_i(h)$.

        if h.is_terminal():
            return h.terminal_utility(i)

If it's a chance event, sample a chance action and continue from the next step.

        elif h.is_chance():
            a = h.sample_chance()
            return self.walk_tree(h + a, i, pi_i, pi_neg_i)

Get the current player's information set for $h$.

        I = self._get_info_set(h)

To store $v$, the expected utility of following the current strategy from $h$,

        v = 0

To store $v_a$, the expected utility of taking action $a$ and then following the current strategy, for each action $a$,

        va = {}

Iterate through all actions

        for a in I.actions():

If the current player is $i$, multiply her contribution $\pi^{\sigma^t}_i$ by the strategy probability $\sigma^t(I)(a)$,

            if i == h.player():

                va[a] = self.walk_tree(h + a, i, pi_i * I.strategy[a], pi_neg_i)

Otherwise, multiply the opponents' contribution $\pi^{\sigma^t}_{-i}$ by $\sigma^t(I)(a)$,

            else:

                va[a] = self.walk_tree(h + a, i, pi_i, pi_neg_i * I.strategy[a])

Accumulate the expected utility, $v \mathrel{+}= \sigma^t(I)(a) \, v_a$.

            v = v + I.strategy[a] * va[a]

If the current player is $i$, update the cumulative strategies and total regrets.

        if h.player() == i:

Update the cumulative strategies by accumulating $\pi^{\sigma^t}_i(I) \, \sigma^t(I)(a)$ for each action (the numerator of the average strategy),

            for a in I.actions():
                I.cumulative_strategy[a] = I.cumulative_strategy[a] + pi_i * I.strategy[a]
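
Accumulate the total regrets, weighting each action's regret $v_a - v$ by the opponents' reach probability, $R(I, a) \mathrel{+}= \pi^{\sigma^t}_{-i} \big( v_a - v \big)$,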

            for a in I.actions():
                I.regret[a] += pi_neg_i * (va[a] - v)

Update the strategy

            I.calculate_strategy()

Return the expected utility for player $i$,

$$\sum_{z \in Z_h} \pi^\sigma(h, z) \, u_i(z)$$

        return v

Iteratively update

This updates the strategies over $T$ iterations.

    def iterate(self):

Loop for epochs iterations

        for t in monit.iterate('Train', self.epochs):

Walk tree and update regrets for each player

            for i in range(self.n_players):
                self.walk_tree(self.create_new_history(), cast(Player, i), 1, 1)

Track data for analytics

            tracker.add_global_step()
            self.tracker(self.info_sets)
            tracker.save()

Save checkpoints every $1{,}000$ iterations

            if (t + 1) % 1_000 == 0:
                experiment.save_checkpoint()

Print the information sets

        logger.inspect(self.info_sets)

Information set tracker

This is a small helper class to track data from information sets

class InfoSetTracker:

Set tracking indicators

    def __init__(self):
        tracker.set_histogram(f'strategy.*')
        tracker.set_histogram(f'average_strategy.*')
        tracker.set_histogram(f'regret.*')

Track the data from all information sets

    def __call__(self, info_sets: Dict[str, InfoSet]):
        for I in info_sets.values():
            avg_strategy = I.get_average_strategy()
            for a in I.actions():
                tracker.add({
                    f'strategy.{I.key}.{a}': I.strategy[a],
                    f'average_strategy.{I.key}.{a}': avg_strategy[a],
                    f'regret.{I.key}.{a}': I.regret[a],
                })

Configurable CFR module

class CFRConfigs(BaseConfigs):
    create_new_history: Callable[[], History]
    epochs: int = 1_00_000
    cfr: CFR = 'simple_cfr'

Initialize CFR algorithm

@option(CFRConfigs.cfr)
def simple_cfr(c: CFRConfigs):
    return CFR(create_new_history=c.create_new_history,
               epochs=c.epochs)
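
Finally, here is a rough usage sketch (not from the original source) of how these configs might be wired up for a particular game and run as a labml experiment. KuhnHistory is a hypothetical placeholder for a game-specific History subclass, such as the Kuhn poker implementation linked above, and the experiment name is arbitrary; the actual experiment script in the repository may differ.

from labml import experiment
from labml.configs import option


class KuhnCFRConfigs(CFRConfigs):
    pass


# Provide the game-specific history factory; `KuhnHistory` is a hypothetical
# History subclass (e.g. the Kuhn poker implementation linked above).
@option(KuhnCFRConfigs.create_new_history)
def _create_new_history(c: KuhnCFRConfigs):
    return lambda: KuhnHistory()


def main():
    experiment.create(name='kuhn_poker_cfr')
    conf = KuhnCFRConfigs()
    experiment.configs(conf)
    # Note: for experiment.save_checkpoint() in CFR.iterate() to persist the
    # information sets, a suitable saver would also need to be registered.
    with experiment.start():
        conf.cfr.iterate()


if __name__ == '__main__':
    main()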