rlcard.models¶
rlcard.models.bridge_rule_models¶
File name: models/bridge_rule_models.py Author: William Hale Date created: 11/27/2021
Bridge rule models
- class rlcard.models.bridge_rule_models.BridgeDefenderNoviceRuleAgent¶
- Bases: object
- Agent that always passes during bidding.
- eval_step(state)¶
  - Predict the action given the current state, for evaluation.
  - Since the agent is not trained, this function is equivalent to the step function.
  - Parameters:
    - state (numpy.array) – a numpy array that represents the current state
  - Returns:
    - action_id (int) – the action_id predicted by the agent
    - probabilities (list) – the list of action probabilities
 
- static step(state) → int¶
  - Predict the action given the current state.
  - Defender Novice strategy:
    - Case during make call: always choose PassAction.
    - Case during play card: choose a random action.
 
 
  - Parameters:
    - state (numpy.array) – a numpy array that represents the current state
  - Returns:
    - the action_id predicted
  - Return type:
    - action_id (int)
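
A minimal usage sketch (not part of the original docstrings): instantiate the novice defender directly and query it for one action on a freshly reset environment. The environment id 'bridge' and the reset/eval_step interfaces are assumptions based on the standard RLCard API; verify them against your installed version.

```python
# Sketch only: assumes rlcard.make('bridge') and the usual env.reset()
# interface; eval_step is equivalent to step since the agent is untrained.
import rlcard
from rlcard.models.bridge_rule_models import BridgeDefenderNoviceRuleAgent

env = rlcard.make('bridge')
agent = BridgeDefenderNoviceRuleAgent()

state, player_id = env.reset()
action_id, _ = agent.eval_step(state)  # second value: action probabilities
print(f'Player {player_id} picks action id {action_id}')
```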
 
 
rlcard.models.doudizhu_rule_models¶
Dou Dizhu rule models
- class rlcard.models.doudizhu_rule_models.DouDizhuRuleAgentV1¶
- Bases: object
- Dou Dizhu rule agent, version 1.
- static card_str2list(hand)¶
- combine_cards(hand)¶
  - Get optimal combinations of cards in hand.
- eval_step(state)¶
  - Step for evaluation. The same as step.
- static list2card_str(hand_list)¶
- static pick_chain(hand_list, count)¶
- step(state)¶
  - Predict the action given the raw state. A naive rule.
  - Parameters:
    - state (dict) – Raw state from the game
  - Returns:
    - Predicted action
  - Return type:
    - action (str)
 
 
- class rlcard.models.doudizhu_rule_models.DouDizhuRuleModelV1¶
- Bases: Model
- Dou Dizhu Rule Model, version 1.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
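
A hedged sketch of wiring the rule model into an environment. The registered model id is assumed to be 'doudizhu-rule-v1'; check rlcard/models/__init__.py for the exact id in your version.

```python
# Sketch: load the Dou Dizhu rule model and let its rule agents play one game.
# Assumption: the model is registered under the id 'doudizhu-rule-v1'.
import rlcard
from rlcard import models

env = rlcard.make('doudizhu')
model = models.load('doudizhu-rule-v1')   # a DouDizhuRuleModelV1 instance
env.set_agents(model.agents)              # one rule agent per position

trajectories, payoffs = env.run(is_training=False)
print('Payoffs:', payoffs)
```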
 
 
rlcard.models.gin_rummy_rule_models¶
File name: models/gin_rummy_rule_models.py Author: William Hale Date created: 2/12/2020
Gin Rummy rule models
- class rlcard.models.gin_rummy_rule_models.GinRummyNoviceRuleAgent¶
- Bases: object
- Agent that always discards the card with the highest deadwood value.
- eval_step(state)¶
  - Predict the action given the current state, for evaluation.
  - Since the agent is not trained, this function is equivalent to the step function.
  - Parameters:
    - state (numpy.array) – a numpy array that represents the current state
  - Returns:
    - action (int) – the action predicted by the agent
    - probabilities (list) – the list of action probabilities
 
- static step(state)¶
  - Predict the action given the current state.
  - Novice strategy:
    - Case where the agent can gin: choose one of the gin actions.
    - Case where the agent can knock: choose one of the knock actions.
    - Case where the agent can discard: gin if possible; knock if possible; otherwise, set aside the cards in some best meld cluster, then discard one of the remaining cards with the highest deadwood value.
    - Otherwise: choose a random action.
 
 
  - Parameters:
    - state (numpy.array) – a numpy array that represents the current state
  - Returns:
    - the action predicted
  - Return type:
    - action (int)
 
 
- class rlcard.models.gin_rummy_rule_models.GinRummyNoviceRuleModel¶
- Bases: Model
- Gin Rummy Rule Model.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
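
The novice agent can also be used directly, without going through the model registry. The sketch below pits two novice agents against each other; the 'gin-rummy' environment id and the tournament helper from rlcard.utils are assumptions to verify against your installation.

```python
# Sketch: evaluate two GinRummyNoviceRuleAgent instances head to head.
# Assumptions: env id 'gin-rummy' and rlcard.utils.tournament(env, num_games).
import rlcard
from rlcard.models.gin_rummy_rule_models import GinRummyNoviceRuleAgent
from rlcard.utils import tournament

env = rlcard.make('gin-rummy')
env.set_agents([GinRummyNoviceRuleAgent(), GinRummyNoviceRuleAgent()])

avg_payoffs = tournament(env, 10)  # average payoff per position over 10 games
print('Average payoffs:', avg_payoffs)
```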
 
 
rlcard.models.leducholdem_rule_models¶
Leduc Hold'em rule models
- class rlcard.models.leducholdem_rule_models.LeducHoldemRuleAgentV1¶
- Bases: object
- Leduc Hold'em rule agent, version 1.
- eval_step(state)¶
  - Step for evaluation. The same as step.
- static step(state)¶
  - Predict the action given the raw state. A simple rule-based AI.
  - Parameters:
    - state (dict) – Raw state from the game
  - Returns:
    - Predicted action
  - Return type:
    - action (str)
 
 
- class rlcard.models.leducholdem_rule_models.LeducHoldemRuleAgentV2¶
- Bases: object
- Leduc Hold'em rule agent, version 2.
- eval_step(state)¶
  - Step for evaluation. The same as step.
- static step(state)¶
  - Predict the action given the raw state. A simple rule-based AI.
  - Parameters:
    - state (dict) – Raw state from the game
  - Returns:
    - Predicted action
  - Return type:
    - action (str)
 
 
- class rlcard.models.leducholdem_rule_models.LeducHoldemRuleModelV1¶
- Bases: Model
- Leduc Hold'em Rule Model, version 1.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
 
 
- class rlcard.models.leducholdem_rule_models.LeducHoldemRuleModelV2¶
- Bases: Model
- Leduc Hold'em Rule Model, version 2.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
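
A sketch that pits version 1 against version 2. The registered ids 'leduc-holdem-rule-v1' and 'leduc-holdem-rule-v2' are assumptions; check the model registry of your RLCard version.

```python
# Sketch: let Leduc Hold'em rule agent v1 play against rule agent v2.
# Assumptions: the model ids below and rlcard.utils.tournament.
import rlcard
from rlcard import models
from rlcard.utils import tournament

env = rlcard.make('leduc-holdem')
v1 = models.load('leduc-holdem-rule-v1')
v2 = models.load('leduc-holdem-rule-v2')
env.set_agents([v1.agents[0], v2.agents[1]])

print('Average payoffs (v1, v2):', tournament(env, 1000))
```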
 
 
rlcard.models.limitholdem_rule_models¶
Limit Hold'em rule model
- class rlcard.models.limitholdem_rule_models.LimitholdemRuleAgentV1¶
- Bases: object
- Limit Hold'em rule agent, version 1.
- eval_step(state)¶
  - Step for evaluation. The same as step.
- static step(state)¶
  - Predict the action given the raw state. A simple rule-based AI.
  - Parameters:
    - state (dict) – Raw state from the game
  - Returns:
    - Predicted action
  - Return type:
    - action (str)
 
 
- class rlcard.models.limitholdem_rule_models.LimitholdemRuleModelV1¶
- Bases: Model
- Limit Hold'em Rule Model, version 1.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
 
- property use_raw¶
  - Indicate whether the raw state and actions are used.
  - Returns:
    - True if the raw state and actions are used
  - Return type:
    - use_raw (boolean)
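
The use_raw flag distinguishes rule models, which read the human-readable raw state, from models that consume encoded observations. A sketch, assuming the registered id 'limit-holdem-rule-v1' and the RandomAgent(num_actions=...) constructor of recent RLCard versions:

```python
# Sketch: inspect use_raw and evaluate the rule agent against a random agent.
# Assumptions: model id 'limit-holdem-rule-v1', env.num_actions, RandomAgent.
import rlcard
from rlcard import models
from rlcard.agents import RandomAgent
from rlcard.utils import tournament

env = rlcard.make('limit-holdem')
model = models.load('limit-holdem-rule-v1')
print('Uses raw state and actions:', model.use_raw)  # expected: True

env.set_agents([model.agents[0], RandomAgent(num_actions=env.num_actions)])
print('Average payoffs (rule, random):', tournament(env, 1000))
```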
 
 
rlcard.models.uno_rule_models¶
UNO rule models
- class rlcard.models.uno_rule_models.UNORuleAgentV1¶
- Bases: object
- UNO rule agent, version 1.
- static count_colors(hand)¶
  - Count the number of cards of each color in hand.
  - Parameters:
    - hand (list) – A list of UNO card strings
  - Returns:
    - The number of cards of each color
  - Return type:
    - color_nums (dict)
 
- eval_step(state)¶
  - Step for evaluation. The same as step.
- static filter_wild(hand)¶
  - Filter out the wild cards. If all cards are wild, do not filter.
  - Parameters:
    - hand (list) – A list of UNO card strings
  - Returns:
    - A filtered list of UNO card strings
  - Return type:
    - filtered_hand (list)
 
- step(state)¶
  - Predict the action given the raw state. A naive rule: from the legal actions, choose the color that appears least in the hand, and try to keep wild cards as long as possible.
  - Parameters:
    - state (dict) – Raw state from the game
  - Returns:
    - Predicted action
  - Return type:
    - action (str)
 
 
- class rlcard.models.uno_rule_models.UNORuleModelV1¶
- Bases: Model
- UNO Rule Model, version 1.
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
 
- property use_raw¶
  - Indicate whether the raw state and actions are used.
  - Returns:
    - True if the raw state and actions are used
  - Return type:
    - use_raw (boolean)
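
The two static helpers can be exercised on their own. The hand below is hypothetical and assumes the 'color-trait' card-string format used by the UNO environment (e.g. 'r-3' for a red 3); verify the exact encoding against your version.

```python
# Sketch: call the static helpers on a hypothetical hand.
# Assumption: UNO cards are encoded as 'color-trait' strings such as
# 'r-3', 'g-skip', 'y-wild' (check RLCard's UNO card encoding).
from rlcard.models.uno_rule_models import UNORuleAgentV1

hand = ['r-3', 'r-7', 'g-skip', 'b-9', 'y-wild']

non_wild = UNORuleAgentV1.filter_wild(hand)      # wild cards removed
colors = UNORuleAgentV1.count_colors(non_wild)   # e.g. {'r': 2, 'g': 1, 'b': 1}
print(non_wild, colors)
```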
 
 
rlcard.models.pretrained_models¶
Wrappers of pretrained models.
- class rlcard.models.pretrained_models.LeducHoldemCFRModel¶
- Bases: Model
- A pretrained model on Leduc Hold'em trained with CFR (chance sampling).
- property agents¶
  - Get a list of agents, one for each position in the game.
  - Returns:
    - A list of agents
  - Return type:
    - agents (list)
  - Note: Each agent should behave just like an RL agent, with step and eval_step functioning well.
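
A sketch that loads the pretrained CFR policy and queries its first agent for one decision; the registered id 'leduc-holdem-cfr' is an assumption to verify against the model registry.

```python
# Sketch: load the pretrained CFR model for Leduc Hold'em and ask its
# first agent for an action. Assumption: registered id 'leduc-holdem-cfr'.
import rlcard
from rlcard import models

env = rlcard.make('leduc-holdem')
cfr_agent = models.load('leduc-holdem-cfr').agents[0]

state, player_id = env.reset()
action, _ = cfr_agent.eval_step(state)
print(f'CFR agent suggests action {action} for player {player_id}')
```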
 
 
rlcard.models.registration¶
- class rlcard.models.registration.ModelRegistry¶
- Bases: object
- Register a model by ID.
- load(model_id)¶
  - Create a model instance.
  - Parameters:
    - model_id (string) – the name of the model
 
- register(model_id, entry_point)¶
  - Register a model.
  - Parameters:
    - model_id (string) – the name of the model
    - entry_point (string) – a string that indicates the location of the model class
 
 
 
- class rlcard.models.registration.ModelSpec(model_id, entry_point=None)¶
- Bases: object
- A specification for a particular Model.
- load()¶
  - Instantiate an instance of the model.
  - Returns:
    - an instance of the Model
  - Return type:
    - Model (Model)
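
A ModelSpec can be built and loaded directly, bypassing the registry. The entry_point below points at the pretrained Leduc CFR model documented above; the 'module.path:ClassName' format is an assumption based on the built-in registrations.

```python
# Sketch: construct a ModelSpec by hand and instantiate its Model.
# Assumption: entry_point uses the 'module.path:ClassName' format.
from rlcard.models.registration import ModelSpec

spec = ModelSpec(
    model_id='leduc-holdem-cfr',
    entry_point='rlcard.models.pretrained_models:LeducHoldemCFRModel',
)
model = spec.load()          # an instance of LeducHoldemCFRModel
print(type(model).__name__)
```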
 
 
- rlcard.models.registration.load(model_id)¶
  - Create a model instance.
  - Parameters:
    - model_id (string) – the name of the model
 
- rlcard.models.registration.register(model_id, entry_point)¶
  - Register a model.
  - Parameters:
    - model_id (string) – the name of the model
    - entry_point (string) – a string that indicates the location of the model class
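
A sketch of registering a model class under a new id and loading it back. The id 'leduc-holdem-cfr-copy' is arbitrary, and the entry_point is assumed to follow the 'module.path:ClassName' convention used by the built-in registrations.

```python
# Sketch: register an existing model class under a new id, then load it.
# Assumption: the 'module.path:ClassName' entry_point format.
from rlcard.models.registration import register, load

register(
    model_id='leduc-holdem-cfr-copy',
    entry_point='rlcard.models.pretrained_models:LeducHoldemCFRModel',
)

model = load('leduc-holdem-cfr-copy')   # instantiates LeducHoldemCFRModel
print(len(model.agents), 'agents available')
```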