Stadia - Google's vision for the future of gaming

In Google Stadia, game plays you.

If they’re going to predict your inputs, what happens when a prediction is wrong? You’d have to train some machine-learning algorithm to figure out how good you are at the game, so that it hits the right buttons the right proportion of the time, and the game would basically be playing itself whenever your input agreed with what the algo thought you’d put in. But when your input disagrees with the algo, what happens? Do you start to do the right thing, then cancel the action and do the “wrong” thing? Does the stream hitch up a little bit and not feed you those frames?

It would be funny if the algo realizes that you’re just terrible, and it can’t predict what you’re going to do, so the game feels extra laggy for you.

Well, the technology already exists, and one version was made open-source and free just yesterday. It seems to have been used in Skullgirls and other fighting games, so it must work well.
I don’t particularly care to learn how it works, but the page seems to link to enough information about it.

I think it’s bullshit. Who knows what’s you and what’s the machine at that point? I want no part of it.

Okay Google, play this level for me.

This is a fundamental disconnect between a bunch of eggheads and art. They think they understand why people like games. Sweet Graphics! Blood, explosions, boobs, guns! Deep Progression! Our stats have stats, and you can incrementally improve them all! Loot Boxe-- er, Surprise Mechanics!

What makes games fun is making decisions and competing with other people who are making decisions. “Well we can’t do that over the internet reliably so Machine Learning to the rescue!” Which is exactly what someone who doesn’t understand humanity would do. Yeah, no. I want to play the game, not have it run on autopilot.

So… I’ve encountered this concept before but haven’t thought about it much. It’s worth thinking about some more.

If you think about the specific niche of fighting games, rollback networking could work to some degree. Often you’ll have players of different skill levels. The game can notice this and ‘predict’ that the other player will block or get hit, because he’s been hitting more, and better, so far. But this is also kinda shitty: weaker players are going to do worse, because the algorithm has already predicted for them that they’ll do badly.

Furthermore, fighting games are frenetic, and there’s very little time to process information. Contesting a move, or even just noticing that something fishy happened, is really hard. Also, ‘undoing’ a block that didn’t happen isn’t a huge deal – most of the time, it just means restoring a little chunk of the HP bar. Same thing for undoing a missed combo hit in a series – in that case, it’s easier to just pretend the hit did happen and keep going. The attacker will be pleased that he got the combo right, and the defender won’t know the difference.

In other words, the game is essentially designed around hiding missing and incorrect information. Whether you can do that in other games remains to be seen, but certainly in single-player games you can always err on the side of the player – assume the player made the correct move, and he’s unlikely to mind. It’s only in games with large possibility spaces (for example, where you could move in one of many directions and end up in an entirely different place) that this won’t work well.

This has crossed the line from enthusiastic marketing to plain, simple fraud. This cannot work, and will not work. I said it when this stupid idea was called ‘OnLive’, and I’m happy to say it again now that it’s called ‘Stadia’. You cannot get a better PC gaming experience than having a fucking PC a foot away from the monitor. I don’t care how clever Google thinks they are; they cannot break lightspeed, ffs.
And predicting input is such a dumb idea I assume the person who wrote it was on LSD.

Traditional streaming tech and readily available infrastructure are already extremely good. Maybe not ideal for twitchy fighting games where split-second responses are critical, but great for everything else. Shooters, platformers, strategy games, etc. all work just fine.

They’re saying they can combat some or all of that remaining latency by predicting what the user is going to do next. That’s a pretty big jump in machine learning, and I’d have to see it actually improving gameplay to believe it, but I see no reason why it isn’t possible.

Think about it with infinite resources. They could have three or even more separate instances of the game running for each user. One of those instances works traditionally, responding to the player’s actions, while the others predict what he is likely to do. Every frame, or perhaps every keyframe, you evaluate whether any of the predictive streams was correct and, if so, you switch to it. I mean, it’s possible.
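To make that concrete, here’s a toy sketch of the instance-switching idea in Python. All the names, and the one-line “simulation”, are mine – nothing Google has actually described:

```python
# Toy sketch of the "several speculative instances" idea. Each instance
# advances the game one frame with a guessed input; when the player's
# real input arrives, keep whichever instance guessed right and discard
# the rest.

def step(state: int, inp: str) -> int:
    # Stand-in for one frame of game simulation.
    return state + (1 if inp == "forward" else -1)

def speculative_frame(state: int, guesses: list, actual_input: str):
    # One speculative instance per guessed input (conceptually parallel).
    instances = {g: step(state, g) for g in guesses}
    if actual_input in instances:
        # Hit: that frame is already rendered, stream it immediately.
        return instances[actual_input], True
    # Miss: simulate the real input and eat the latency.
    return step(state, actual_input), False
```

With enough instances you cover more of the input space, but the cost scales with the number of guesses per frame per user – which is why only someone with Google-sized resources would even float the idea.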

That sounds like something I don’t actually want to be possible.

Why? Think about a shooter in a frame-by-frame 16 millisecond sort of way. This frame you’re running forward-- you’re very likely to continue running forward the next frame too. This sequence of frames you’re aiming at the alien’s head. You’re likely to shoot at it.

The service isn’t playing the game for you: if you don’t click the mouse button to shoot, you won’t see that head explode in a delightful piñata of green alien blood. But if you do, it’ll feel super responsive. That’s a good thing.
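The “you’ll probably keep doing what you’re doing” intuition is easy to check on a made-up input trace (a hypothetical illustration of mine, not anything from Stadia):

```python
# How often would the naive predictor "next frame's input equals this
# frame's input" have been right? Long runs of identical inputs (hold
# forward, keep aiming) make the hit rate very high.

def repeat_last_hit_rate(inputs: list) -> float:
    hits = sum(1 for prev, cur in zip(inputs, inputs[1:]) if prev == cur)
    return hits / (len(inputs) - 1)

# Thirty frames of running, one shot, thirty more frames of running:
# the predictor only misses at the two transitions.
trace = ["forward"] * 30 + ["shoot"] + ["forward"] * 30
print(repeat_last_hit_rate(trace))  # ~0.97
```

The misses, of course, are exactly the moments that matter most – the transitions are where the player actually made a decision.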

Also, John Carmack invented “rollback networking” with client-side prediction in QuakeWorld back in 1996.

It shouldn’t be too hard a sell for people like me who get by on old hardware. I still think it’s a waste of money, but I also thought overpaying for a phone on a subscription was dumb and Apple won, so…

I can see it working with a single binary input. But with half a dozen inputs, some of which are analog, the likelihood of a correct prediction has to be really low. To have any chance of getting a useful hit rate, they’ll have to start treating incorrect but close enough inputs as having been correct.

So maybe they predicted a click at frame X and it arrived at X+4; close enough, just pretend they clicked at X. Or even worse, they predicted the mouse would move 1.5cm in the next 50ms but it moved 1.7cm instead.
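A sketch of what that fudging might look like – both threshold values below are made up by me for illustration:

```python
# Treat a prediction as "correct" if the real input lands close enough
# to it, so the stream never has to visibly roll back. The thresholds
# are assumptions, not anything Google has published.

CLICK_WINDOW_FRAMES = 4      # a click within 4 frames of the guess counts
MOUSE_TOLERANCE_CM = 0.25    # a mouse delta within 0.25cm counts

def prediction_accepted(pred_click_frame: int, real_click_frame: int,
                        pred_mouse_cm: float, real_mouse_cm: float) -> bool:
    click_ok = abs(real_click_frame - pred_click_frame) <= CLICK_WINDOW_FRAMES
    mouse_ok = abs(real_mouse_cm - pred_mouse_cm) <= MOUSE_TOLERANCE_CM
    return click_ok and mouse_ok
```

By those made-up thresholds, the example above – a click predicted at frame X arriving at X+4, with a 1.7cm move against a predicted 1.5cm – gets quietly accepted as “correct”.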

What about the way good platformers already detect that you’re a bit too late pressing jump, and pretend you actually timed it perfectly? Somehow it feels different, but I don’t know why.
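That platformer trick is usually some mix of input buffering and “coyote time”. A minimal sketch of the latter – the grace period here is my guess, not any particular game’s:

```python
# If the jump press arrives a few frames after the player walked off a
# ledge, pretend it was timed perfectly. The 5-frame window is an
# illustrative assumption.

COYOTE_FRAMES = 5

def jump_allowed(frames_since_left_ground: int, on_ground: bool) -> bool:
    if on_ground:
        return True
    # A slightly late press still counts.
    return frames_since_left_ground <= COYOTE_FRAMES
```

Maybe the reason it feels different is that here the game is forgiving your timing, not guessing your intent – the jump only ever happens because you actually pressed the button.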

Exactly. I play games on a Shadow instance hosted in a datacenter 500 miles away. I can’t easily tell that the PC isn’t local (except that the graphics rendering is far better than anything my ancient PC could possibly muster).

I was also a Project Stream beta tester. It worked similarly well. I streamed AC:Ody to my Chromebook and to my mobile phone, and it worked.

Right, I played a couple of hours of AC:Odyssey on the Project Stream test and Tomb Raider 2017 on Nvidia GeForce Now, and both worked perfectly well. I do think third-person action games like that are particularly well suited to streaming, though.

It’s worth noting that Google probably picked Assassin’s Creed specifically BECAUSE it’s a game that is extremely forgiving of input lag, due to its inherently mushy third-person controls. It makes a good impression without seriously taxing latency.

When they can do something twitchy, like Doom Eternal, at scale, across all of the pipes they’ve said the service will work with (meaning not just people on fiber), without people being able to tell the difference, I will believe streaming has truly arrived.

And then you’re still left with the ultimate question I never got answered: why is any of this better value than xCloud, aside from letting people who hate Microsoft avoid them (in favor of the lovable little indie scamps at Google)?

LOL they’ve been using DOOM Eternal at all their demos since they announced at GDC.

“… at scale, across all of the pipes they’ve said the service will work with (meaning not just people on fiber), without people being able to tell the difference…”

Hey, totally fair to say that Google needs to prove that their product actually works. But kinda obvious, too.

With rollback networking, there’s an assumption that someone is “local”. Who or what is local when you’re playing in the cloud? As far as I know, there’s no game logic running on my end when I play Street Fighter 87 on my toaster oven – I don’t have the CPU for local computations, and don’t need it.

But rollback networking looks like you’re doing the computations locally and then correcting them… so I can’t see how you’d correct for the lag between your machine (which is just streaming video and sending inputs) and the cloud…

In rollback networking, game logic is allowed to proceed with just the inputs from the local player. If the remote inputs have not yet arrived when it’s time to execute a frame, the networking code will predict what it expects the remote players to do based on previously seen inputs. Since there’s no waiting, the game feels just as responsive as it does offline. When those inputs finally arrive over the network, they can be compared to the ones that were predicted earlier. If they differ, the game can be re-simulated from the point of divergence to the current visible frame.
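The loop described above can be sketched in miniature. Everything here is a toy of my own: an integer stands in for game state, one arithmetic op for a frame of deterministic simulation, and the predictor is the common “repeat the last seen remote input” heuristic:

```python
def simulate(state: int, local_inp: int, remote_inp: int) -> int:
    # Stand-in for one frame of deterministic game logic.
    return state + local_inp - remote_inp

def predicted_state_at(frame, local, remote, delay, initial=0):
    """State shown to the local player at `frame`, when remote inputs
    arrive `delay` frames late: use confirmed remote inputs where they
    have arrived, repeat the last confirmed one where they haven't.
    Re-running with delay=0 is the 're-simulate from the point of
    divergence' step, once every real input is in hand."""
    state, last_confirmed = initial, 0
    for f in range(frame + 1):
        if f <= frame - delay:       # this remote input has arrived
            last_confirmed = remote[f]
            r = remote[f]
        else:                        # not yet: predict "same as last"
            r = last_confirmed
        state = simulate(state, local[f], r)
    return state

local, remote = [1, 1, 1, 1], [0, 0, 2, 2]
# At frame 2 with a 2-frame delay, the remote player's new inputs (2, 2)
# haven't arrived, so the visible state (3) diverges from the true
# state (1); re-simulating with the real inputs corrects it.
assert predicted_state_at(2, local, remote, delay=2) == 3
assert predicted_state_at(2, local, remote, delay=0) == 1
```

Because the whole simulation has to be deterministic and cheap to re-run several frames at a time, this fits fighting games (tiny state, fixed timestep) far better than a streamed AAA game – which is the crux of the skepticism elsewhere in this thread.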

Maybe it’s not? This is a thread about Stadia? Seems like asking why Hulu is better than Netflix.