AI in solitaire/co-op boardgames: How should it work?

I wasn’t talking about the player fully taking over for the AI, but thinking of situations where the AI rules constrain it down to a couple of possibilities, then let the player decide the least-worst among them. You can conceptualize that as representing confusion, poor leadership, or misdirection, or just say it’s a concession to convenience in terms of not having to exhaustively flow-chart out every last conceivable edge case for the AI to follow.

I don’t have a wargame example on hand (I’m only a dabbler), but here are a couple from Gloomhaven and Marvel United:

Ambiguity: If the rules ever make any monster action ambiguous because there are multiple viable hexes to which the monster could move, multiple equally viable targets to heal or attack, or multiple hexes a monster could push or pull a character into, the players must decide which option the monster will take.

Breaking Ties: If there are ever events or effects whose conditions are tied, the players decide how they should be resolved.

There’s no suggestion there that the players have to get into the head of the monster and figure out what they’d be most likely to do in real life. At every decision point, the players are trying their best to win, and can freely pick the most beneficial result, so it feels fine to me.
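To make that concrete, here’s roughly what both of those rules reduce to, procedurally. This is my own sketch, not text or code from either game, and the “closest target” rule and hero names are just placeholders for whatever the AI script actually prescribes:

```python
# Illustrative sketch only, not from any rulebook: the AI rule narrows the
# options, and only when more than one option remains do the players decide.

def choose_target(monster_pos, hero_positions, players_pick):
    """hero_positions: dict of hero name -> position on a 1D track
    (a stand-in for real hex distances)."""
    # Step 1: the AI rule does most of the work, e.g. attack the closest hero.
    distances = {name: abs(pos - monster_pos) for name, pos in hero_positions.items()}
    closest = min(distances.values())
    candidates = [name for name, d in distances.items() if d == closest]

    # Step 2: if the rule fully determines the target, the players never decide.
    if len(candidates) == 1:
        return candidates[0]

    # Step 3: genuine ambiguity -> the players decide, and are free to pick
    # whichever outcome benefits them most.
    return players_pick(candidates)


# Example: two heroes tied for distance; the players pick the safer target.
print(choose_target(0, {"Brute": 2, "Spellweaver": -2}, lambda names: names[0]))
```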

The AI isn’t playing the same game as the player, and the design should embrace that. Its purpose is to provide a somewhat unpredictable and varied tactical canvas on which the player’s decisions can succeed or fail.

I bailed on DVG after the sorry QA state UBoat Leader was released in.

Thanks on FOF. I’ve heard it’s a challenge, in part due to the rulebook. FOF2 does come with the latest rulebook… I hope. I watched half of a Blue Tweezers video on the game last night and that was helpful in getting started.

And yeah, I’m looking forward to spending time with these other three as well. I’ve heard D-Day at Omaha is rather hard to win at.

That’s a good example, but I think you’re right that it’s more of a puzzle where taking into account possible AI behaviors is part of the puzzle. It’s worth noting the core concept is that players can “use” the AI against each other, as you point out. And I don’t think it carries over into solitaire very smoothly. I’d argue that Andrew Parks’ attempt at a dungeon crawl does some clever stuff, but the solitaire mode isn’t very well thought out on a number of fronts, and the AI is one of those fronts.

Right, and that’s part of what I’m talking about. That shouldn’t happen. The system shouldn’t require me to play on its behalf.

As for your examples, I think they speak just fine for how it’s a problematic issue. I don’t know Marvel United – it’s the same fella who did Dungeon Alliance, by the way – but Gloomhaven is a great example of a game that can’t follow through with what it’s attempting. Both of your examples are the designer saying, “Look, I took the decision tree as far as I can, so if it has to go any further, eh, I guess just take over for me.” That’s your idea of a “totally valid design approach”? Wouldn’t it be a better design if the game and the AI decision tree fit each other? In other words, wouldn’t a better design be one that doesn’t fall apart and ask me to step in and adjudicate?

-Tom

Of course. We’re talking about cases where the rules are unclear.

I can see that, even if it’s not really my thing for historical games. But if winning within the system means doing things that are demonstrably “gamey” and ahistorical, I find that offputting.

It depends how you define AI, doesn’t it? If you limit it to the invader phase, then yeah. But if you count the event cards that thematically represent invader behavior, there are many times the cards make the players choose “for the invaders.” For example, after a quick look: overcrowded cities → on each board with a city, either destroy two presence or (players’ choice) place one blight in a space with a city, with the players choosing which space.
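To show the shape of that kind of event, here’s a rough sketch of how it resolves. This is my paraphrase from memory, and the names and data shapes are my own simplifications, not Spirit Island’s actual terminology; the point is that both branches are decisions the players make “for the invaders”:

```python
# Rough paraphrase-from-memory of the overcrowded cities event; names, data
# shapes, and the per-board presence count are my own simplifications.

def resolve_overcrowded_cities(boards, pick_branch, pick_space):
    for board in boards:
        city_spaces = [s for s in board["spaces"] if s["has_city"]]
        if not city_spaces:
            continue  # the event only touches boards with a city
        # The players choose which bad thing happens on this board...
        if pick_branch(board) == "destroy_presence":
            board["spirit_presence"] = max(0, board["spirit_presence"] - 2)
        else:
            # ...and, for the blight branch, which city space takes the blight.
            pick_space(city_spaces)["blight"] += 1


# Example: the players opt for blight and pick the first city space.
board = {"spirit_presence": 3,
         "spaces": [{"has_city": True, "blight": 0}, {"has_city": False, "blight": 1}]}
resolve_overcrowded_cities([board], lambda b: "place_blight", lambda spaces: spaces[0])
print(board["spaces"][0]["blight"])  # -> 1
```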

Now if your definition of AI for SI is strictly the invader phase and not necessarily the totality of systems the player faces, then you’re right that it’s strictly defined, but when I think of board game AI my definition is broader.

But that’s just for SI. As @Thraeg says, there are many games with this design approach of letting the player game the AI when things are tied. And I think it’s a valid approach in the context of a solo board game.

Another interesting approach is the opposite: the enemy makes the best move possible, as far as the player can discern it, within restrictions driven by the AI system (the system preselects a unit and maybe a target for the action, so the decision space is limited). It’s very common in wargames (John Butterfield uses it in at least a couple of systems). As has been said above, in wargames immersion and simulation are important factors, and this kind of rule helps with both. It also works great as a balancing mechanism, since the better the player gets at the game, the more effective these “best moves” are. But wargames live in a very different space than more mainstream games in terms of what they are trying to do.
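Procedurally, that approach reduces to something like the sketch below. The framing, scoring, and names are my own inventions for illustration, not taken from any specific published system:

```python
# Illustrative sketch of the "enemy makes its best possible move" approach;
# the moves and scoring below are invented for illustration.

def enemy_action(candidate_moves, score_for_enemy):
    """candidate_moves: the small set of legal options left after the AI
    chart has preselected a unit (and maybe a target).
    score_for_enemy: the player's best estimate of how good each option is
    for the enemy side; the better the player, the sharper this estimate."""
    return max(candidate_moves, key=score_for_enemy)


# Example: a preselected enemy unit can advance to one of three hexes; the
# player scores each from the enemy's point of view and executes the best.
threat = {"hex_a": 2, "hex_b": 5, "hex_c": 3}   # hypothetical evaluation
print(enemy_action(list(threat), lambda hex_id: threat[hex_id]))  # -> hex_b
```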

I’ll say! You’re talking about an event card that gives me two options. That’s not AI by any definition I’ve ever heard.

The AI in Spirit Island is 100% scripted, and it never requires the player to make decisions on its behalf. It is not in any way, shape, or form an example of what @Zilla_Blitz has to deal with when he tries to play Sherman Leader.

-Tom

I’m talking about a game mechanic that models part of the behavior of your main opposition in the game (this card and that decision very clearly model invader behavior, as do several others in the game). That definition of AI works for me. But I understand it might not work for others.

You are calling a specific system AI (the predictable mechanics in the invader phase) and everything else in the game “not AI.” If that works for you to conceptualize the discussion of the game, that’s great, although I think it limits the discussion of player choice during the execution of adversarial systems in a board game. This, I think, comes from a digital-game bias (even applying the term AI to board games is questionable) in which AI normally refers to the mechanics that drive the behaviors of individual agents. But even in digital games, where AI ends and non-AI systems begin is hard to pinpoint (the director in Left 4 Dead… AI or game system?).

Anyway, it’s a finicky term, since I don’t think most opposition systems in board games are as easily isolated as in digital games, due to the higher level of abstraction: only in certain genres like dungeon crawlers do you get an agent-by-agent system that drives behavior and that is very clearly designed to simulate the algorithms of digital game AI. You could take the opposite approach to me and say that Spirit Island has no AI at all, and that would be a defensible opinion if you really narrow the usage of the term.

Agreeing on a definition of AI in games that includes board game systems is pretty tricky (the CS definition of AI does not apply to most board games, nor does it really apply to many of the things people call AI in videogames). The one I bolded above is the best I could do that most people would agree on.

Well, we’re getting into semantic weeds here, but I’m okay down here if you are! Let’s look at your suggested definition of boardgame AI:

What’s with the “part of the behavior” phrase? Why not just the mechanic that models the behavior of your opposition? Sounds to me like you’re trying to come up with a definition specifically to include making choices on an event card. :)

Correct. I’m calling the AI the AI. I am not calling the rest of the game AI. I don’t see how this is problematic.

You used the example of an event card that gives the player multiple choices. That’s a perfectly viable design choice and I don’t see how it’s an example of the player having to make decisions on behalf of the AI. Honestly, I have no idea why you dragged Spirit Island into the discussion.

It shouldn’t be. The discussion began because @Zilla_Blitz is playing a solitaire game that can’t be arsed to have a thoroughly designed AI system. Getting from there to Spirit Island, an excellent example of a thoroughly designed AI system, is quite the leap. I’m happy to talk about the importance of decisions in game design, but I stand by my point that if you’re making a solitaire/co-op boardgame, the AI should never ask me to make decisions on its behalf. That is always* a gap that should have been fixed.

-Tom

* IMO, of course, and I’m certainly open to hearing more counter-examples like @Profanicus bringing up Dungeon Alliance

But why not? I already acknowledged that these situations are thematically muddy, because it confuses the question of exactly who or what the player is controlling. But I don’t see them as inherently problematic from a game mechanics design point of view. Do you?

Imagine that Gloomhaven’s ambiguity rule for monster attack targeting was replaced with “At any time as a free action, a hero may bang their weapons together and yell ‘Hey ugly, come and get me!’ Any time a monster is deciding between two equally valid targets for an attack, it targets the hero that has done this most recently.” Same mechanic, conceptualized with different flavoring: Would you still consider this problematic “playing on the AI’s behalf”?

Equivalently, what if a wargame said, “The player controls the Allied forces, plus one double-agent German intel officer who subtly sabotages their decision-making process from time to time”?

A bit clunky conceptually, sure, but I don’t see it as an inherently illegitimate way to design game mechanics. And I’d be intrigued to play a game that leaned into this sort of thing and took it even further: say, every turn the enemy forces draw three possible action cards that are bad for the player in distinct ways, and the player chooses which one they will execute. Conceptualize it however you like (information warfare, a traitor in their midst, bureaucratic incompetence, etc.), but I think there could be some interesting decisions for the player to make there.
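As a quick sketch of that hypothetical mechanic (all the card names and effects here are invented for illustration):

```python
import random

# Sketch of the hypothetical mechanic described above: the enemy draws three
# distinct threats and the player decides which one actually resolves.

ENEMY_DECK = [
    "Artillery barrage: 2 damage to your forward units",
    "Flanking move: enemy reinforcements arrive on your left",
    "Supply raid: discard one supply token",
    "Propaganda push: lose one morale",
]

def enemy_turn(deck, player_picks):
    drawn = random.sample(deck, 3)   # three options, each bad in a distinct way
    chosen = player_picks(drawn)     # the player chooses which one to execute
    assert chosen in drawn
    return chosen                    # flavor it however you like: intel, traitors, incompetence

# Example: a player who always dodges the artillery if they can.
print(enemy_turn(ENEMY_DECK,
                 lambda cards: next((c for c in cards if "Artillery" not in c), cards[0])))
```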

In a recent game of Marvel United, on the villain’s turn, the action card says that they move to the nearest hero, attack, and spawn two thugs. Ant-Man is two spaces clockwise, and Hulk is two spaces counterclockwise. Do I play it safe and send the villain to Ant-Man, who has shrunk and is immune to damage? On the other hand, Hulk would be really hurt, but he could retaliate by smashing all the thugs at the location, which would let us complete a mission. And what’s in the other locations nearby, where the villain might move next turn? The right answer is non-obvious and requires thinking through what might happen.

Sure, the game could have included exhaustive tiebreaker mechanics here, but in this case that would have robbed me of an interesting decision, so I’m not willing to call it a categorically better design.

I guess what I’m getting at is that every game consists of periods where the game mechanics dictate what happens, punctuated by moments where the player can make a decision. In evaluating a game, I want all of those decision points to be characterized by unity of purpose (at each one my end goal is the same). However, I’m not particularly hung up on unity of identity (it “makes sense” for the person or group that I’m nominally controlling to be making that decision). And even less so for unity of simulation (that whoever is doing the action does so in a way that matches what their historical counterpart would have done).

The definition doesn’t work without “part of.” Since the whole behavior of the invaders isn’t modeled in the invader phase (smaller parts of it are modeled through events), even the invader phase system only models “part of” the behavior. Most of it, sure, but not all of it.

The idea is that any mechanic that models part of the behavior of an opposition can feasibly be called AI. The event cards actually make the invaders more lifelike and unpredictable, and they are part of how the player perceives them as an active opponent (as any designer knows, AI is more about the player’s perception of “active” opposition than about the actual implementation, which might be a reason why you draw such a definite line).

I know programmers and designers who would say that what you call AI in Spirit Island is just a random system with no heuristics, and thus not AI at all.

Definitions are important because otherwise it’s just assigning subjective value. What would be your definition of AI in a board game other than a judge’s definition of porn?

I’m with Juan on this one – in the context of a solitaire boardgame, I don’t see a meaningful mechanical distinction between ‘AI’ and the rest of the mechanics. You have player decision points, and then you have everything else that operates by some sort of mechanism to bring you to the next player decision point.

Talking about AI makes sense for a system that exists separate from the game itself, and that tries to use information about the game state to make decisions and “play the game” in more or less the same way that a player would, obviously with mixed results.

The opposition systems in boardgames are inherently asymmetrical and not playing the same game as the player. They should be judged not on how “smart” they are, but on how effectively they result in varied and interesting decisions for the player to make.

I feel like there are two elements interwoven in the conversation here. I’m struggling with Sherman Leader because it’s both ambiguous and incomplete. I’d be fine with your example because it gives me clarity and completeness.

Likewise, if the Sherman Leader rules stated “choose the German action to your advantage,” that would remove both the ambiguity and the incompleteness. I’d know how the designer intended the game to be played and I could move forward.

If the rules said, “choose the best action for the Germans”, it’s clear but incomplete as a solitaire system designed to play against me.

As an aside, how you interpret the “you choose” cases seems to matter a lot in Sherman Leader. I’ve started another battle and rewritten the “you choose” decision trees to be more favorable to the Germans. In two turns I’ve already lost an irreplaceable and expensive Wolverine tank instead of a low-cost, expendable Stuart. That’ll have a long-term impact on how the campaign plays out. One example is only one example, for sure, but over the course of a campaign I’d bet the difficulty really forks depending on how you handle the “you choose” decisions.

I feel like I’ve wandered into crazypants territory here. I’ve got @Juan_Raigada trying to call out Spirit Island as a game in which you make decisions on the AI’s behalf. It isn’t. And then I’ve got @Thraeg calling a game’s lack of tiebreakers “a valid design approach” to AI.

I’m going to stop you right there and disagree. Yes, the entirety of the AI behavior is modeled in Spirit Island. I never have to make decisions on its behalf. And, no, event cards don’t count. Event cards are a system that was added to the game to make it less deterministic, not to force the player to assume the role of the invaders.

Pfft, easily. The AI is the autonomous system that pushes back against you winning. But you seem to have decided that AI includes anything that involves a decision, so by your circular logic, the AI necessarily involves player input. I get the feeling you’re just arguing in favor of any definition that supports your claim about Spirit Island.

Now you’re just making excuses for half-assed game design. Look, if you want to play a solitaire game full of “non-obvious answers that require thinking”, you’re going to have a ball playing chess with yourself. That’s going to give you all the non-obvious answers you ever wanted in a solitaire game!

Seriously, though, your example is terrible. You’ve convinced yourself that Andrew Parks’ loosey-goosey “enh, just let the player decide when it’s a tie” is some kind of design concept. In which case, you’re going to love Sherman Leader!

I have read this paragraph several times and I have no idea what you’re trying to say and how it applies here. But, sure, I’m all for unity of purpose. So agree to agree?

-Tom

I feel bad that you’re having to play Sherman Leader. There are so many better examples of solitaire wargames, and I know you own some of them!

-Tom

No, I’m not saying AI is anything that involves a decision. I’m saying that AI can be considered (under some definitions) the totality of systems that model opposition, and in Spirit Island that includes some of the events.

Playing the Spirits is where most decisions are, and that’s clearly not AI by any definition, but once you get into systems that model opposition avatars rather than player avatars, the distinction is fuzzy. Most informal polls I supervised in the courses I taught on this subject suggest that people fall on both sides of that line, depending on whether they focus more on what’s being modeled or on the experiential execution of the systems by the player. Those who focus on what’s being modeled consider more systems to be AI, to the extreme of saying there’s no meaningful distinction once you leave the systems that clearly model the player avatar. Those who focus on execution go as far as considering systems with little or no heuristics and limited “reading” of the board (like the invader phase in Spirit Island) “not AI,” due to the very non-intelligent, non-reactive nature of the process: it’s more of a random-rule cellular automaton, something no programmer would ever call AI.

It’s not a simple concept to define, especially when applied to games, and even more so when applied to board games.

The discussion is not dissimilar to the distinction between puzzle and game. There’s a line there, but depending on perception and underlying assumptions, people are going to place it at different points.

I was trying to come up with terminology to disambiguate separate, but related concepts about what bothers different posters about solitaire game design, so that we can avoid talking past each other. It seems that I failed! Let me see if I can expand on it a bit.

Unity of Simulation: What @TheWombat is advocating here. When making decisions about an opposing force’s actions, pick based on your best understanding of what they would have done historically. The priority is simulating reality, not winning the game for either side.

Unity of Identity: What you seem to be advocating here. If the game ever asks you to make decisions about an opposing force’s actions, throw it in the trash because that’s a mortal design flaw. It’s important that the player has a clear identity in terms of who or what they’re controlling in the game. If the player controls side X, and side X wouldn’t logically be making a particular decision, then the player shouldn’t be either.

Unity of Purpose: What I am most concerned with. When making decisions about an opposing force’s actions, pick based on what will be most helpful for me reaching my goal (achieving victory for my side). My purpose is the same, whether I’m making choices about my own forces or the enemy forces. It’s the designer’s job to constrain the system such that this doesn’t become degenerate or boring.

Note that I’m not saying that the system should include decisions that the player makes for opposing forces. I agree with you that in general it’s more thematically elegant to design to avoid that. Just that those decisions don’t particularly bother me unless they want me to pick what’s historical, or optimal for the opposing force, against my own interests.

This would be the diametric opposite of Unity of Purpose, and is exactly what I’ve been arguing against this whole time! Make one move whose purpose is “victory for white”, then another whose purpose is “victory for black”, back and forth forever.

By contrast, going back to Sherman Leader where this discussion started, if I were to make a bunch of moves of Allied units and then some constrained targeting decisions about who the Germans shoot at, then as long as my ultimate goal for both of those types of decisions is “victory for the Allied side”, Unity of Purpose is not violated.

Top post, those definitions are something I will use in future…

I got nothing. :)

I think DA is one example of a deliberate mechanic, where player choice is an intended part of the enemy AI - but I too really dislike ambiguity in the rules with AI action selection and resolution. I want to play against the game, not against myself.

I was playing Imperium recently, which is a civilization-ish deck/tableau builder that has an AI* for every nation in the game (if you count both boxes, that’s 16 AIs). It can have up to six cards in its hand. There are six stacks in the market. The game comes with one six-sided die… :)

So ultimately tiebreakers can always come down to the card indicated by the die, the lowest slot number, or the most recently acquired cards. If the AI attacks me, I still choose which of my cards to discard, but that’s how it works in multiplayer too.
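From memory, the cascade works out to something like this sketch. The field names and exact ordering are mine rather than the actual bot script, but the point stands: it always resolves to a single card without asking the player to decide for the bot:

```python
import random

# Rough sketch from memory of a deterministic tiebreaker cascade like the one
# described above; field names and ordering are mine, not the actual bot script.

def break_tie(tied_cards):
    """tied_cards: list of dicts with 'slot' (1-6 market slot) and
    'acquired_turn' keys. Always returns exactly one card."""
    roll = random.randint(1, 6)
    # 1. If the die points at one of the tied slots, take that card.
    by_roll = [c for c in tied_cards if c["slot"] == roll]
    if by_roll:
        return by_roll[0]
    # 2. Otherwise prefer the lowest-numbered slot...
    lowest = min(c["slot"] for c in tied_cards)
    in_lowest = [c for c in tied_cards if c["slot"] == lowest]
    if len(in_lowest) == 1:
        return in_lowest[0]
    # 3. ...and finally the most recently acquired card.
    return max(in_lowest, key=lambda c: c["acquired_turn"])


# Example: two tied cards; the cascade picks one without any player input.
print(break_tie([{"slot": 2, "acquired_turn": 4}, {"slot": 5, "acquired_turn": 1}]))
```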

Not sure I have a point, I just wanted to mention a cool game that does head-to-head AIs well in a genre that often doesn’t! :)

*Not the pedantic academic definition of AI, but ‘automa’, as they tend to be called in board games, i.e. systems that substitute for a human player in some way.

Great post, I like your thinking.