## Adding Pre-trained/Rule-based Models
You can add your own pre-trained/rule-based models to the toolkit by following several steps:
1. **Develop models.** You can either design a rule-based model or save a neural network model. For each game, you need to develop agents for all the players at the same time. Wrap each agent as an `Agent` class and make sure that `use_raw` works correctly.
2. **Wrap models.** You need to inherit the `Model` class in `rlcard/models/model.py`, put all the agents into a list, and rewrite the `agents` property to return this list (a sketch follows this list).
3. **Register the model.** Register the model in `rlcard/models/__init__.py`.
4. **Load the model in the environment.** An example of loading the `leduc-holdem-nfsp` model is as follows:

```python
from rlcard import models

leduc_nfsp_model = models.load('leduc-holdem-nfsp')
```

Then use `leduc_nfsp_model.agents` to obtain all the agents for the game.
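For steps 2 and 3, a minimal sketch might look like this. `MyModel`, the `my-model` id, and the module path are hypothetical names; the `Model` base class lives in `rlcard/models/model.py`, and the registration call mirrors the entries already in `rlcard/models/__init__.py` (the exact registration API may vary across RLCard versions):

```python
from rlcard.agents import RandomAgent  # stand-in for your own Agent class
from rlcard.models.model import Model

class MyModel(Model):
    ''' Wrap one agent per player for a hypothetical two-player game. '''

    def __init__(self):
        super().__init__()
        # Replace RandomAgent with the Agent class developed in step 1
        self._agents = [RandomAgent(num_actions=4) for _ in range(2)]

    @property
    def agents(self):
        ''' Return the list of agents, one per player, in player order. '''
        return self._agents

# In rlcard/models/__init__.py, mirror the existing entries:
#
#     from rlcard.models.registration import register
#
#     register(
#         model_id='my-model',
#         entry_point='rlcard.models.my_model:MyModel')
```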
Although users may design and test their algorithms however they like, we recommend wrapping a new algorithm as an `Agent` class, following the example agents. To be compatible with the basic interfaces, the agent should have the following functions and attribute:
- `step`: Given the current state, predict the next action.
- `eval_step`: Similar to `step`, but for evaluation purposes. Reinforcement learning algorithms usually add noise during training for better exploration; in evaluation, no noise is added when making predictions.
- `use_raw`: A boolean attribute. `True` if the agent uses raw states to do reasoning; `False` if the agent uses numerical values to play (such as neural networks).
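For reference, here is a minimal rule-based agent that satisfies this interface. The class name and the rule itself are hypothetical, and the assumptions that `eval_step` returns an `(action, info)` tuple and that raw states carry `raw_obs`/`raw_legal_actions` match recent RLCard releases but may differ in older versions:

```python
class MyRuleAgent(object):
    ''' A minimal rule-based agent (hypothetical example). '''

    def __init__(self):
        self.use_raw = True  # this agent reasons over raw, human-readable states

    def step(self, state):
        ''' Predict the next action given the current state.
        This toy rule just picks the first legal action; a real rule
        would inspect the contents of state['raw_obs'].
        '''
        return state['raw_legal_actions'][0]

    def eval_step(self, state):
        ''' Evaluation-time prediction. The rule is deterministic, so this
        simply reuses step; the second element is an info dict.
        '''
        return self.step(state), {}
```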
## Adding New Environments
To add a new environment to the toolkit, generally you should take the following steps:
1. **Implement a game.** Card games usually have similar structures, so they can be implemented with classes such as `Game`, `Round`, `Dealer`, `Judger`, and `Player`, as in existing games. The easiest way is to inherit the classes in `rlcard/games/base.py` and implement the functions.
2. **Wrap the game with an environment.** The easiest way is to inherit `Env` in `rlcard/envs/env.py`. You need to implement `_extract_state`, which encodes the state; `_decode_action`, which decodes actions from the id to the text string; and `get_payoffs`, which calculates the payoffs of the players (see the sketch after this list).
3. **Register the game.** Now it is time to tell the toolkit where to locate the new environment. Go to `rlcard/envs/__init__.py` and indicate the name of the game and its entry point.
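For steps 2 and 3, a skeleton of the wrapper might look like the following. `MyCardGameEnv`, `MyGame`, the `my-card-game` id, the action list, and the observation size are all hypothetical, and the extracted-state format (an `obs` array plus an `OrderedDict` of legal action ids) follows recent RLCard releases but may differ in older versions:

```python
from collections import OrderedDict

import numpy as np

from rlcard.envs import Env

ACTION_LIST = ['call', 'raise', 'fold', 'check']  # hypothetical action space

class MyCardGameEnv(Env):
    ''' Environment wrapper for a hypothetical card game. '''

    def __init__(self, config):
        self.name = 'my-card-game'
        self.game = MyGame()  # the Game class implemented in step 1 (not shown)
        super().__init__(config)
        self.state_shape = [[10] for _ in range(self.num_players)]
        self.action_shape = [None for _ in range(self.num_players)]

    def _extract_state(self, state):
        ''' Encode the raw game state as a fixed-size numerical observation. '''
        obs = np.zeros(10, dtype=np.float32)
        # ... fill obs from the fields of the raw state dict ...
        legal_actions = OrderedDict(
            {ACTION_LIST.index(a): None for a in self.game.get_legal_actions()})
        return {'obs': obs, 'legal_actions': legal_actions}

    def _decode_action(self, action_id):
        ''' Map an action id in the full action space back to its string. '''
        return ACTION_LIST[action_id]

    def get_payoffs(self):
        ''' Return one payoff per player at the end of the game. '''
        return np.array(self.game.get_payoffs())

# In rlcard/envs/__init__.py, mirror the existing entries:
#
#     from rlcard.envs.registration import register
#
#     register(
#         env_id='my-card-game',
#         entry_point='rlcard.envs.my_card_game:MyCardGameEnv')
```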
To test whether the new environment is set up successfully:
```python
import rlcard

env = rlcard.make('my-card-game')  # substitute the id of the new environment
```
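A slightly fuller smoke test runs a complete game with the bundled random agents. This assumes the hypothetical `my-card-game` id from the sketch above; `env.num_actions`, `env.num_players`, and the `RandomAgent` constructor follow recent RLCard releases:

```python
import rlcard
from rlcard.agents import RandomAgent

env = rlcard.make('my-card-game')
# One uniformly random agent per player
env.set_agents([RandomAgent(num_actions=env.num_actions)
                for _ in range(env.num_players)])
trajectories, payoffs = env.run(is_training=False)
print(payoffs)  # one payoff per player
```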
In addition to the default state representation and action encoding, we also allow customizing an environment. In this document, we use Limit Texas Hold’em as an example to describe how to modify state representation, action encoding, reward calculation, or even the game rules.
To define our own state representation, we can modify the `_extract_state` function in `rlcard/envs/limitholdem.py`.

To define our own action encoding, we can modify the `_decode_action` function in `rlcard/envs/limitholdem.py`.

To define our own reward calculation, we can modify the `get_payoffs` function in `rlcard/envs/limitholdem.py`.
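For instance, a minimal (hypothetical) change to the reward calculation could collapse the chip-count payoffs into simple win/lose/tie signals, assuming the stock `get_payoffs` simply returns the game's payoffs:

```python
import numpy as np

# Hypothetical replacement body for LimitholdemEnv.get_payoffs
def get_payoffs(self):
    ''' Clip chip-count payoffs to {-1, 0, 1} so agents are rewarded only
    for winning, losing, or tying, not for the size of the pot. '''
    return np.sign(np.array(self.game.get_payoffs()))
```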
We can change the parameters of a game to adjust its difficulty. For example, we can change the number of players or the number of allowed raises in Limit Texas Hold'em in the `__init__` function in `rlcard/games/limitholdem/game.py`.
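Concretely, the edit might look something like this; the attribute names are assumptions based on the Limit Texas Hold'em implementation, so check the actual `__init__` before changing it:

```python
# Illustrative edit inside Game.__init__ in rlcard/games/limitholdem/game.py
def __init__(self, allow_step_back=False, num_players=2):
    ...
    self.num_players = num_players   # e.g. construct with num_players=4
    self.allowed_raise_num = 2       # e.g. lower from 4 to cap raising
```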