Strategy Games: The Next Move

Alright, alright, you guys are correct, bad example. :) Replace chess with any reasonably complicated turn-based tactical battle game that can’t easily be brute-forced.

Jon

Plus I heard they had problems with a rogue AI developer going wild on internet forums, though it appears his forum activity has since been curtailed.

AI for Bridge is pretty good now I think.

completely non-random outcomes

The best Backgammon programs are better than the best humans nowadays.

Only one move at a time, and relatively few possible moves. A game like Civilization is vastly more complex.

Even the simplest computer turn-based strategy games are way more complex than traditional board or card games though, so I agree with you here.

Well, as I’ve said before, personally I’d like to see the AI in strategy games improve quite a bit, and across the board. At the end of the day you have two metrics for whether or not an AI is good - what you feel in your gut and what will make the most money. Based on sales numbers for games released over the past decade, from the latter perspective what you see released is definitely “good enough”.

Jon

I judge AIs on how they handle playing by the same rules I do. I make an honest effort not to exploit AIs in return (i.e., I try not to use features the AI can’t handle). This is why I always automate my workers in Civ: I feel it’s an unfair advantage if I don’t. (Also, manually managing workers is annoying and unfun to me.)

What irritates me AI-wise is when the AI can ignore rules I have to follow. To use a couple of examples from games posters here have mentioned: in EU3, naval attrition and (before HTTT) the advisor system; in GalCiv2, tech trading.

I do think expansions should, as a general rule, buff the AI.

I thought the point of the harder difficulties in, say, Civ was that the AI cheated to make up for the suboptimal decisions it would make, and that as a player you could compensate only by micro-optimizing everything and playing a perfect game? Or do you generally only play on the highest setting where there is no blatant cheating? I’m just curious here, as I’m not a fan of cheating as a crutch to prop up a poor AI, so it’s a catch-22 whether you want to handily win every game or want a challenge that’s completely fabricated.

The problem with playing a perfect game is that you usually don’t get much variety out of it.

I tend to play on the hardest settings that involve no blatant cheating, usually with some sort of handicap for myself as well (no exploits, sometimes with additional rules).

Me too x100. In fact I think that you really need to decide whether you are making a multiplayer game or a single player game and go one way or the other. The AI needs to be designed from the ground up with the mechanics. Every mechanic needs to be vetted immediately for AI feasibility.

Psychology is such a huge factor. I would love to see a scientific study where players played against AIs and humans without knowing which was which, in a game system where the AI could be competent or better. Maybe somebody has already done this. But I’m convinced that just knowing you are playing against an AI changes the experience and makes it harder for AI programmers.

I’m beginning to think that this is a root problem in the approach. Maybe the AI should play by different rules. I’m not advocating cheating, mind you, but it’s got me thinking that maybe the AI should be more like the game keeper in Descent or Mansions of Madness, where it competes with a different rule set, one that’s more tailored to entertaining and challenging players. That is a big change from the design approach where you create a mechanics system that assumes humans and AIs will be pulling the same levers and pushing the same buttons.

I’d also love to see the “humans can reload” problem addressed. An ironman game is a different experience than a reload-at-any-time game. I’d love for AIs to be able to reload on the humans. I guess in a way they can, by running “simulations” of the game state that humans can’t possibly match. :)
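As a toy illustration of that idea: a flat Monte Carlo chooser that “reloads” the same position hundreds of times per candidate move before committing. This is only a sketch; the GameState methods used here (legal_moves, clone, apply_move, is_over, score_for) are hypothetical, not any particular game’s API.

```python
import random

def choose_move(state, playouts_per_move=200):
    """Pick a move by 'reloading' many hypothetical futures.

    `state` is assumed to expose legal_moves(), clone(), apply_move(),
    is_over(), score_for(player) and current_player -- an invented
    interface for illustration only.
    """
    best_move, best_score = None, float("-inf")
    for move in state.legal_moves():
        total = 0.0
        for _ in range(playouts_per_move):
            sim = state.clone()          # the AI's private "save file"
            sim.apply_move(move)
            while not sim.is_over():     # play the rest out at random
                sim.apply_move(random.choice(sim.legal_moves()))
            total += sim.score_for(state.current_player)
        avg = total / playouts_per_move
        if avg > best_score:
            best_move, best_score = move, avg
    return best_move
```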

A Turing test for game AI. This is a really cute idea. However, I think it is okay for the AI to play a fundamentally different style of game than a human as long as it provides a challenge. I am quite certain that expert chess players can tell they are playing against a computer and not another human even while their opponent thoroughly defeats them. So while creating an AI that plays indistinguishably from a human sounds like an interesting challenge, and would no doubt make for an intriguing opponent, I believe it is not necessarily in line with the goal of creating a strong AI that plays the game well.

Why is this conversation so focused on game companies making competent AIs, instead of on how game companies could open up their APIs to let the game community develop its own AIs?

The recent Google AI competition should have made this glaringly obvious, not to mention the success of the original Supreme Commander, where Sorian proved his chops even with limited scripting ability.

Think what would happen if the gaming companies got together and agreed on a standard AI API for the basic features (scouting, unit identification, pathing); the community would take care of the rest. My programming background is probably clouding my judgement, but I can foresee the day when more people would be throwing their AIs at each other instead of playing each other in real time.
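To make the idea concrete, here is a rough sketch of what such a cross-game plugin interface could look like. Every name in it (SkirmishAI, on_tick, on_unit_sighted, and so on) is an assumption made up for illustration; no engine actually exposes this API, and a real standard would have to cover far more.

```python
from abc import ABC, abstractmethod

class SkirmishAI(ABC):
    """Hypothetical cross-game AI plugin interface (illustrative only)."""

    @abstractmethod
    def on_game_start(self, map_info, my_player_id):
        """Called once, with static map data and this AI's player id."""

    @abstractmethod
    def on_unit_sighted(self, unit):
        """Scouting/unit-identification hook: an enemy unit became visible."""

    @abstractmethod
    def on_tick(self, game_state):
        """Called every turn/tick with the AI's fog-limited view of the world.
        Returns a list of orders (move, attack, build, ...); the engine is
        assumed to handle pathing for any move orders issued here."""
```

The point of keeping pathing and visibility on the engine side of the fence is that community authors would only have to write strategy.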

As for whether it’s worth it to have a decent AI: I think the recent Supreme Commander 2 AI proves that something challenging can be created. Yes, it’s technically cheating with resource bonuses, but that’s OK with me. I just notice that it’s still in the top 20 games on Steam so long after release. We all know the multiplayer is dead, but there is a reason thousands of people continue to play it daily.

So why aren’t companies creating a better API to let the community spend the resources to create great AIs? Or has this been discussed and discarded in the past?

I agree with this; letting people create and play against custom AIs is probably the best solution for the hardcore, since hardcore players will make their own. The nice part is that the AIs can be updated more frequently this way, and built using different methods. One of the problems with making a good game AI is that the game is often in constant flux; it’s not like chess or backgammon, whose rules have been set in stone for at least a hundred years.

I doubt the game industry will ever agree upon a standard API, though. It will probably never happen.

And there were actually some games like Carnage Heart, where it was all about pitting your programmed robots against your opponent’s. My brother and I totally dug it, but we both ended up being programmers in the game industry.

I suspect you’re wrong about that. My impression is that what most strategy gamers want is an AI that can challenge them in any way.

This is probably the most important lesson I’ve learned in the last decade. It doesn’t matter how cool a feature sounds, or how fun it is, if the AI cannot handle it.

I judge AIs on how they handle playing by the same rules I do.


Look at one of the dearest and greatest strategy games of all time: X-COM.

You don’t play the game on equal terms with the AI, neither in the tactical mode nor in the strategic (Geoscape) mode. And here’s the important thing: the game is a blast.

The AI has to be just good enough that the game is fun for the player; the rest is just useless talk.

Giving up on this would make me enjoy the genre a lot less. It turns it from a strategy game into a puzzle game for me.

You’re right on the reload issue. Another issue is diplomacy: the human can make the AI do things it wouldn’t want to do, yet the AI can’t do that back to the human. I think for diplomacy there needs to be at least an option where the human has to accept deals that the AI would accept in the human’s position. I know I’ve suggested that for Elemental a few times, though I’m not surprised at the balking (which is why I suggest it as an option).
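In code, that option would amount to running the AI’s own deal evaluation from the human’s seat and binding the human to the result. A minimal sketch, where every name (deal, receiver, ai_value_of) is a made-up placeholder rather than anything from Elemental:

```python
def resolve_offer(deal, receiver, ai_value_of):
    """Sketch of the 'symmetric diplomacy' option described above.

    ai_value_of(deal, player) stands in for whatever evaluation the AI
    already uses to decide whether to accept a deal. With the option on,
    a human receiver is bound by the verdict the AI would reach in their seat.
    """
    ai_would_accept = ai_value_of(deal, receiver) >= 0
    if receiver.is_human and receiver.symmetric_diplomacy_enabled:
        return ai_would_accept            # human must take what the AI would take
    if receiver.is_human:
        return receiver.ask_player(deal)  # normal behavior: the player decides
    return ai_would_accept                # AI receivers decide as usual
```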

While there are several excellent strategy games around now (AI War, SupCom 2, Retribution, and soon Shogun 2 and the revamped Elemental), I don’t share Tom’s feeling that it’s a golden age. In a golden age we would have a lot more variety and more developers/publishers working on strategy games, and that isn’t happening anymore.

Soren Johnson has a point that the mid-budget games are disappearing in favour of tried-and-true licenses (SCII, Civ, Total War) and indies (AI War, Elemental). There’s almost nothing in between. No more Big Huge Games, Petroglyph, Massive, or GPG doing their own thing with a new IP.

If the press is being honest, though, it must be said that strategy games also don’t get the coverage they deserve. I can’t imagine a strategy game getting a general GotY award, for example. These days only a shooter or action game can win that, and that’s quite hard to accept.

As for AI: team-game skirmishes in Retribution with all Expert AIs still offer a decent challenge. It’s artificial, but it’s still good practice. A poor show from Relic, of course, but they’re offering other, more interesting game modes than most competitors do.

Not to derail, but I can’t exactly PM either. :p Earlier on, someone mentioned that they didn’t pick up Star Ruler due to the current AI issues; those issues will be resolved in the next patch. In the latest patch we changed large portions of the economy related to early-game expansion, and the AI doesn’t yet understand how to work with that economy. So it tends to over-expand, kill its resource production, bankrupt its economy, and then just ‘fizzle’. There were also some problems with the ordering of structures on the AI homeworlds, which was causing it to drop and rebuild some of its structures because lines in the AS file were out of order. Just an FYI if anyone wished to know. :)

As an aside: I would also enjoy games making their AI customizable by the modding community/end user. I hope that other developers see that exposing our AI has only led to positive developments from our user community.

As for difficulty: creating a really good AI that can do everything a player can, but doesn’t become impossible to defeat, is really challenging.

The reason I don’t think a ‘standard API’ could be developed is that AIs are incredibly specific to the game they were created for. Not even the pathfinding solution may be applicable in every case. D:
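For anyone curious what fixing that “fizzle” usually looks like, the common shape is a budget guard before each new colony. The snippet below is only an illustration of that idea with invented names (empire, treasury, gross_income, and so on); it is not Star Ruler’s actual AS-script AI.

```python
def should_expand(empire, colony_cost, projected_upkeep):
    """Illustrative over-expansion guard (hypothetical interfaces).

    Before claiming another planet, check that the empire can absorb the
    one-time cost and the ongoing upkeep with some safety margin, so the
    AI doesn't bankrupt its economy and stall out.
    """
    safety_margin = 0.15  # keep 15% of gross income free (arbitrary choice)
    income = empire.gross_income()
    upkeep = empire.current_upkeep() + projected_upkeep
    return (empire.treasury() > colony_cost
            and upkeep <= income * (1.0 - safety_margin))
```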

I thought so too until AI War came along and made me really think about it. I’m now of almost the exact opposite opinion.

Symmetric TBS games like Civ and HoMM end up being puzzles, because the AI knows no strategies and cannot adapt. All it can do is pose you a gameplay puzzle, where the level of difficulty translates fairly directly into the number of fixed solutions.

Symmetric RTS games like StarCraft and SupCom don’t become puzzles, exactly. Instead they become a sort of APM trial, where what you do, and why, pales in significance next to how fast you can do it. Because again the AI doesn’t really know any strategies, and even if it does, it can’t optimise them against what you do or transition into something more useful when its plan A goes awry.

Asymmetric AI is clearly the way to go in terms of fun and challenge in strategy titles. There’s no conceivable way an AI can match a human at complex equivalent goals. It’s all about faking it, and the depth of most gameplay has surpassed the ability of any AI. Civ V’s crime isn’t so much that it can’t compete; it’s that it can’t pretend it’s competing. Even lowly managerial strategies such as Ascaron’s merchant titles or Sid Meier’s Pirates could hand you your ass on higher difficulty, simply because the ‘AI’ was given greater scope and a role it could flourish in (grand computation, the sheer effect of vast numbers of opponents) rather than attempting to mimic a lone person’s choices and strategies.

It takes a lot of expensive engineering, design, and testing time to get a good AI in place, and it virtually requires that everything else in the game be finished because new features or major design changes can fuck up all of that AI work. How many developers can sit on a finished game and let a few engineers and testers work on the AI for a year, and will that cost be earned back in sales?

It’s also a great idea to open up your API to users, but to do so is also an extra cost that may or may not be earned back. While it’s cool to see people playing your game longer, it’s actually irrelevant if you don’t get enough sales to make back your development budget. It’s terrible to have to think in these bottom-line terms, but it’s the reality for those of us who aren’t as successful as Blizzard.

The reason AI gets half-assed is that it’s hard to work on it in earnest while everything is in a constant state of flux. SupCom 2 is probably an extreme example, since we barely had over a year to make the whole thing, but some major units and systems weren’t functional until a couple of months before ship. How do you write AI to properly use nonexistent units? And because the campaign has to be finished so early for localization, voice recording, and other production reasons, it gets finished before AI work has even started. This is why campaign AI is totally different from skirmish AI in SupCom 2; there is none in the campaign, because there was no AI when the campaign was made. You also don’t necessarily want AI in a scripted campaign, because it will be less predictable and harder to control the difficulty.

The psychology is interesting. If a player loses to AI, they may feel stupid or frustrated and quit. If they lose to a human, they think they can practice and get better. How many people will practice to beat the AI?

I think most of us do, in a certain sense. That’s what reloading from saves and variable difficulty levels are all about: the ways we each find our way to a given game’s Chick Parabola.

To your preceding points, though, it strikes me as a problem of prioritization and scheduling. It seems that it’s not the norm in the industry to conceive a ruleset up front, prototype that ruleset, and test it thoroughly and repeatedly over the course of development. Why is that? Why were you guys adding in new, game-changing units that late in the process? If it screws everything up to do that, it seems to indicate either a flaw in the basic ruleset, or a disregard for balance.

Not that I have a clue, since I’ve never developed a video game. But I have developed a couple of boardgames, so there might be some overlap there.

Not necessarily true at all. If they lose to a human, they’ll blame “cheese” and try to get it nerfed in a patch. (sometimes they’re right, sometimes not)

Asymmetric AI is good for some games, in others it’s a cheap cop-out.

It would be an interesting experiment to let players who are interested in “toughening up” the AI of their favorite game have it play scenarios against itself during down time, using observations of the play style it has seen on that local installation. “Down time” here means both the time the player is actively playing (using multi-threading, something I think Stardock has already experimented with in its titles) and idle time, when simulations could run as a background process (a la Folding@home or SETI@home). Additionally, a library of uncovered strategies could be compiled by a centralized server and distributed dynamically, allowing each game AI to share its findings and pull in, for local use, the strategies other installations have discovered while playing their owners.
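A rough sketch of what that down-time loop might look like. Every interface here (ai_factory, play_match, strategy_library, is_idle) is an assumption invented for illustration, not anything from a shipping game.

```python
import random
import time

def downtime_trainer(ai_factory, play_match, observed_openings,
                     strategy_library, is_idle):
    """Illustrative background self-play loop (all names hypothetical).

    While the machine is idle, two copies of the AI play each other: one
    seeded with an opening observed from the local player, the other with
    the best strategy found so far. Results are recorded so effective
    counter-strategies accumulate; a real version would also sync the
    library with a central server, as the post suggests.
    """
    while is_idle():
        opening = random.choice(observed_openings)
        challenger = ai_factory(seed=opening)
        champion = ai_factory(seed=strategy_library.best())
        winner = play_match(challenger, champion)
        strategy_library.record(opening, winner)
        time.sleep(1.0)  # be a polite background process
```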