I would love for that to happen. I've consistently been disappointed that the first thing devs do with massive increases in power is increase texture size and pixel count. I mean, that's good, but where are the amazing boosts to AI and physics?
I mean here we are in 2018 and a demo where leaves on the ground react to the two players on screen is blowing people away. But it looks insane, so that’s fine.
PlayStation Now has been providing it for years. It has a library of 650+ games and you can subscribe for one year for $99.
Many games play fine via streaming. If you're super sensitive to lag or compression artifacts you may not be satisfied, but I'd wager it's sufficient for lots of people. The biggest challenge is the consistency of your network connection, as lag spikes from your ISP or Wi-Fi will be detrimental.
Personally, I would not expect the new Xbox generation in 2019. Phil Spencer's speech at the end of their conference essentially promised they will have a hardware power advantage in perpetuity, and that is difficult to achieve if they launch before the next PlayStation, without knowing what Sony has built.
My expectation is that both will use Ryzen-based tech, and that Microsoft will use GDDR6 in a unified memory setup. I think Sony may have a more customized GPU and use HBM2/3, possibly just for VRAM, with a large pool of DDR4 as system memory managed by AMD's HBCC. Microsoft will adjust their clock targets as late as possible, like they did with the Xbox One.
Red herring. “Ultra quality” is horseshit, a ton of computational expense for stuff most people won’t even be able to see. “High quality” is the proper benchmark. Ultra = e-peen wankery = THIS ONE GOES TO ELEVEN
Then I can tell you unequivocally that a 1080 Ti is plenty good for 4K in modern games on high settings at 60fps. There's not a lot of headroom, and you won't be getting to a constant 120fps at that res any time soon, but it fits.
None of this matters because there is no way in holy hell a 1080 Ti level of performance is shipping in any console in 2019. And 2020 is still a bit of a stretch in my book.
Next gen, most console games will employ checkerboarding, temporal injection, and even more advanced reconstruction techniques to cut the flop requirements of rendering at 4K by a large factor, just like many games already do on the Pro and XB1X.
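To put rough numbers on the savings: a minimal back-of-the-envelope sketch, assuming the commonly cited checkerboard setup where half the 4K pixels are shaded each frame (e.g. a 1920x2160 grid on PS4 Pro) and the rest are reconstructed from the previous frame. The figures are illustrative, not from any specific game.

```python
# Rough arithmetic: shading work for checkerboard rendering vs native 4K.
# Assumption: classic checkerboarding shades half the pixels per frame
# and reconstructs the other half temporally.

NATIVE_4K = 3840 * 2160        # 8,294,400 pixels shaded per frame
CHECKERBOARD = 1920 * 2160     # 4,147,200 pixels shaded per frame

print(f"native 4K:    {NATIVE_4K:>9,} px/frame")
print(f"checkerboard: {CHECKERBOARD:>9,} px/frame")
print(f"shading cost: {CHECKERBOARD / NATIVE_4K:.0%} of native")  # 50%
```

Reconstruction itself isn't free, but its cost is small and fixed compared to halving the per-pixel shading work, which is why it scales so well at 4K.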
If that’s true, and assuming Sony funneled money into development of Navi, I’d imagine they would have wanted to prohibit its use in a competing console, so the fact that Microsoft reportedly isn’t using it makes sense.
This rumor showed up in a tweet like 10 days ago. I saw it being discussed at Beyond3D. The original tweet was basically trying to blame Sony for the delay in getting Navi to production, but the other perspective is that Sony's input will likely make the released product better than it would have been without the added time.
After a couple of generations seeing their tech repurposed by partners in competing consoles, I wouldn’t be surprised if the contract with AMD this time was a bit more specific.
I don't see why not. Of course, Sony would have to pay dearly for that exclusivity. And it's not like MS could just switch to Nvidia. True, the Xbone uses x86 instructions and the feature set of AMD and Nvidia GPUs is basically identical for most purposes, but Nvidia doesn't make integrated SoCs outside of low-end ARM parts, and they don't make x86 APUs at all. So Microsoft would need a separate CPU and GPU, which increases cost and cooling requirements.
Also Nvidia, being on top of the world with their GPU business, is less willing to play ball on price than AMD, who until very recently was absolutely desperate.
Jeff: “So when you pie-in-the-sky think about what that next box is… what’s going to be the next thing in that ‘we have to have this’, what is it?”
Phil: “If you look at the Xbox stuff we are doing right now like variable framerate… I think framerate is an area where consoles can do more just in general. You look at the balance between CPU and GPU in today's consoles, they are a little bit out of whack compared to what's on the PC side, and I think there is work that we can do there.”
Well I knew I was right, but it’s always nice when those in charge agree!
I believe he was trying to reference variable refresh rates too, and I'm totally for that as well. FreeSync TVs are just starting to appear, and variable refresh makes framerate inconsistency largely irrelevant, smoothing over a lot of performance problems on underpowered hardware. No need to lock at 30fps any more; anything above 30fps is fine and feels butter smooth.
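A quick sketch of why variable refresh helps so much, under a simplified vsync model (every frame waits for the next refresh boundary; real drivers and triple buffering complicate this):

```python
import math

def displayed_ms(frame_ms: float, refresh_hz: float) -> float:
    """On a fixed-refresh display with vsync, a finished frame is held
    until the next refresh boundary, so its on-screen time rounds up to
    a whole number of refresh intervals. (Simplified model.)"""
    interval = 1000.0 / refresh_hz
    return math.ceil(frame_ms / interval) * interval

# A game rendering at 45fps produces a new frame every ~22.2ms.
frame_ms = 1000.0 / 45

fixed = displayed_ms(frame_ms, 60)  # fixed 60Hz display: 33.3ms
print(f"fixed 60Hz: frame shown for {fixed:.1f}ms (drops to 30fps pacing)")

# A variable-refresh (FreeSync) display refreshes when the frame is
# ready, so the frame is shown for its natural ~22.2ms.
print(f"VRR:        frame shown for {frame_ms:.1f}ms (true 45fps)")
```

In other words, on a fixed 60Hz panel anything between 30 and 60fps gets quantized down to vsync boundaries, which is exactly the judder that variable refresh removes.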
Why would they require that? Each dev should have the ability to trade off framerate for visual quality. 60fps doesn't mean much for a slow, deliberate exploration title, but it's huge for fast-paced esports or fighting games. And FreeSync makes a locked framerate largely unimportant anyway.
Requiring 60 FPS for everything is kind of silly, though, isn’t it? Think about what you could do with visuals if you didn’t have to do that in gameplay sections where it wasn’t really needed, especially with TVs that can accommodate variable rates.
I don't think the obsession with ever-higher framerates is actually beneficial to games creation, no. But it's a big thing amongst hardcore gamer types, console wars, etc. On PC, games are already basically expected to hit 60 or they get hammered as "crappy ports". Both Sony and Microsoft will want the marketing point of having it, and want to avoid the embarrassment of not having it when the other does.
Well, without FreeSync, 60fps does feel much better than 30fps. But developers must have the freedom to balance performance against visual quality. Mandating that every game run at a locked 4K 60fps with HDR doesn't make sense. What matters is the end result.
It doesn't really matter if the game isn't true 4K if, through tricks like checkerboard rendering at 2560x1440, you get a better image and framerate. And it doesn't matter if the framerate isn't locked at 60fps if you have FreeSync.