It’s no secret that I like board games. They’re one of my go-to time killers. Settlers of Catan, Dominion, Agricola: I’m a sucker for Eurogames. I play with a few friends from work, and over time we started learning things together. We set up a software book club, and that eventually morphed into an AI learning club. We joined The AI Games and had a blast with it, but then decided to see if we could take a crack at something similar for our private group.
We decided to start with some board games we knew, and to build an engine that allowed bots to play the game against each other. The engine would run all the rules and coordinate communication, and each bot would supply the moves. One of us mentioned a two-player game called Battle Line, designed by Reiner Knizia, and we decided to go for it.
I had written an Elixir bot for The AI Games, and wanted to try again with Battle Line. I had used metaprogramming to set up a simple engine, and it definitely let me spin up a strategy quickly.
Battle Line
Let’s go over the game real quick.
There are 60 cards (six colors, with cards numbered 1–10 in each color).

There are nine flags on the table, and players take turns playing one card at a time on different flags. The goal is to play a stronger formation than the other player (with a maximum of three cards per side). In poker terms, the formation ranking is Straight Flush > Three of a Kind > Flush > Straight > Any three cards. You may claim a flag once you can prove that your opponent cannot beat your formation, based on what is visible on the board. For instance, suppose you have a red 9-8-7 on your side of a flag and the opponent has a blue 9-8. If the blue 10 has already been played somewhere else on the board, you know they can’t complete the blue 10-9-8, so you can claim the flag. (Ties go to the player with the higher sum of cards, then to whoever finished the flag first.)
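Since claiming flags hinges on comparing formation strengths, here is a minimal Elixir sketch of that ranking. This is illustrative only, not the bot’s actual code; a formation is a list of {color, number} tuples, and comparing {rank, sum} tuples gives the sum tie-break for free.

```elixir
# Sketch of formation ranking (not the bot's actual code).
# A formation is a list of {color, number} tuples,
# e.g. [{:red, 9}, {:red, 8}, {:red, 7}].
defmodule Formation do
  # {rank, sum} tuples compare element-wise, so
  # Formation.strength(a) > Formation.strength(b) tells you who wins,
  # with the card sum as the tie-break.
  def strength(cards) do
    {rank(cards), cards |> Enum.map(fn {_color, n} -> n end) |> Enum.sum()}
  end

  defp rank(cards) do
    cond do
      flush?(cards) and straight?(cards) -> 5  # straight flush
      three_of_a_kind?(cards) -> 4
      flush?(cards) -> 3
      straight?(cards) -> 2
      true -> 1                                # any three cards
    end
  end

  defp flush?(cards), do: length(Enum.uniq(Enum.map(cards, &elem(&1, 0)))) == 1
  defp three_of_a_kind?(cards), do: length(Enum.uniq(Enum.map(cards, &elem(&1, 1)))) == 1

  defp straight?(cards) do
    # Assumes a complete, three-card formation.
    [a, b, c] = cards |> Enum.map(&elem(&1, 1)) |> Enum.sort()
    b == a + 1 and c == b + 1
  end
end
```

With this, a red 9-8-7 evaluates to {5, 24}, which beats even three 10s at {4, 30}, because the rank is compared before the sum.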
Since this is mostly a game of deduction and probability, it was an excellent first game to try out with some bots.
Bot Strategy
So let’s talk about strategy. Here’s what I’ve done (and it’s been beating my peers, so I’m good with it for now). By the way, you can see all the code here: https://github.com/pviafore/battleline-ai-bot
Let’s break it down. State is a variable that contains the game’s state: which side you are on, what is in your hand, which flags are claimed, and which cards are on each flag.
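As a rough sketch, the state might look like the map below. The field names here are my own guesses at the shape, not necessarily what the repo actually uses.

```elixir
# Illustrative shape of the game state; field names are guesses,
# not necessarily the repo's actual ones.
state = %{
  side: :north,                       # which side of the board we play
  hand: [{:red, 9}, {:blue, 3}],      # up to seven cards
  claimed: %{1 => :north},            # flag number => winner (absent if open)
  flags: %{
    1 => %{north: [{:red, 7}], south: [{:blue, 2}]}
    # ...flags 2 through 9 in the same shape
  }
}
```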
First, we get a list of plays we can make. This is the Cartesian product of each open flag and each card in the hand. We then strip out plays that would cause the other player to win a flag.
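An Elixir comprehension expresses that Cartesian product nicely. This is a sketch; open_flags/1 and loses_flag?/2 are hypothetical helpers standing in for the repo’s real functions.

```elixir
# Every (flag, card) pairing we could make, minus any play that would
# let the opponent claim a flag. open_flags/1 and loses_flag?/2 are
# hypothetical helpers, not the repo's actual names.
def candidate_plays(state) do
  for flag <- open_flags(state),
      card <- state.hand,
      not loses_flag?(state, {flag, card}),
      do: {flag, card}
end
```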
Then we find the highest possible strength the opponent could have on each flag. We use this information in multiple functions, so we pre-calculate it.
So we go through a few phases in order. get_actual_move returns a play if it finds one, and nil if not. If a play is passed into any get_actual_move call (as the first argument), it passes straight through and returns that play; otherwise the function tries to figure out the next move.
I really like Elixir’s pipe operator, as it makes function composition easy (and changes how you think when you program).
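Put together, the pass-through pattern looks roughly like this. It’s a sketch: the phase function names are stand-ins for the repo’s real ones.

```elixir
# Each phase receives the play found so far (or nil) and the state.
# Once a phase produces a play, every later phase passes it along
# untouched. The phase functions named here are illustrative.
def choose_move(state) do
  nil
  |> get_actual_move(state, &sure_win_from_hand/1)
  |> get_actual_move(state, &best_probability_play/1)
  |> get_actual_move(state, &highest_formation_play/1)
  |> get_actual_move(state, &default_play/1)
end

# Pass-through: an already-found play short-circuits the pipeline.
defp get_actual_move(play, _state, _phase) when not is_nil(play), do: play
defp get_actual_move(nil, state, phase), do: phase.(state)
```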
First, I see if I can win any flags by playing cards from my hand alone. This is a way of finding a sure win from the hand.
Then, if I find no sure wins, I run through each opponent strength (the highest possible strength if the flag isn’t completed) and calculate my probability of winning the flag, based on the chance that I draw an unplayed card.
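Sketched out, that probability is just the fraction of unseen cards that would beat the opponent’s best formation on that flag. The helpers unseen_cards/1, our_cards/2, and strength/1 here are hypothetical, and the sketch assumes one more card completes our side of the flag.

```elixir
# Chance of winning a flag: the fraction of unseen cards that complete
# a formation beating the opponent's best possible strength. The helpers
# unseen_cards/1, our_cards/2, and strength/1 are hypothetical, and we
# assume our side needs exactly one more card.
def win_probability(state, flag, opponent_best) do
  unseen = unseen_cards(state)

  winners =
    Enum.count(unseen, fn card ->
      strength(our_cards(state, flag) ++ [card]) > opponent_best
    end)

  winners / max(length(unseen), 1)
end
```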
However, this might still be a zero chance (especially in the first few rounds). Since the highest possible formation for an opponent is often a 10-9-8 straight flush in the early rounds, we need a fallback.
Our fallback is to figure out which card would produce the highest formation, with possibilities coming from the hand and the unplayed cards.
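That fallback can be sketched as a max over the candidate plays, where each play is scored by the strongest formation its flag could still reach. Again, candidate_plays/1, our_cards/2, and best_completion/2 are hypothetical helpers, not the repo’s real names.

```elixir
# Pick the play whose flag could end up with the strongest formation,
# completing it with the best still-unplayed cards. candidate_plays/1,
# our_cards/2, and best_completion/2 are hypothetical helpers.
def strongest_fallback(state) do
  state
  |> candidate_plays()
  |> Enum.max_by(fn {flag, card} ->
    best_completion(state, [card | our_cards(state, flag)])
  end)
end
```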
Finally, there is a default option in case something goes wrong.
Future Improvements
This is far from perfect, but I have some ideas of what to do next.
First, I optimize locally, trying to win each flag instead of trying to win the game. By looking at the entire game, I think I can make much better choices.
Second, I only compare probabilities against an opponent’s highest possible formation. Comparing against more formations would be more time-intensive, but in the late game it could be worthwhile if the solution space is small enough.
Lastly, my code is just an algorithm; there’s no AI or machine learning involved. I’d like to have a more learning-oriented piece of code. Right now, I scale probabilities using some fixed weights determined by the number of neighboring flags and which flag I’m looking at. I picked the weights arbitrarily, and I’d like a better algorithmic approach. Playing around with a neural network to pick the weights could be interesting.