Would you build the OS like that? First, you'd use some type of sparse storage, where empty pages don't use space. Then you could discard as many pages as you want and let the OS's paging system restore the ones it really needs. So from 5GB of data on disk, you might load 512MB and let the rest be paged in on demand. (It may be complicated, since a page could live in either the snapshot or the swap file, but computers are good at tracking stuff like that.)
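A minimal sketch of that sparse-storage idea in plain Python (scaled down to a 1GB file, and assuming a filesystem that supports holes, as ext4 and tmpfs on Linux do):

```python
import os
import tempfile

# Sparse-storage sketch: seeking past the end of a file and writing a
# single byte creates a "hole".  The file's logical size is large, but
# only the pages that were actually written consume disk blocks.
fd, path = tempfile.mkstemp()
os.close(fd)

SNAPSHOT_SIZE = 1024 ** 3                 # pretend 1GB snapshot image
with open(path, "wb") as f:
    f.seek(SNAPSHOT_SIZE - 1)
    f.write(b"\0")                        # only the final page is real

st = os.stat(path)
logical = st.st_size                      # reported size: the full 1GB
physical = st.st_blocks * 512             # disk blocks actually allocated
print(f"logical={logical} physical={physical}")
os.remove(path)
```

On a sparse-capable filesystem the physical size comes back as a few kilobytes. The paging analogy is the same: the OS can drop clean pages and fault them back in from the snapshot only when something actually touches them.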

Durrrrr hurrrrrr… that's how I would implement the feature, but I don't design OSes.

Well, it’s either a VM snapshot or it’s not. Yes, it’ll use sparse storage, and it’ll be compressed, so you’re right that it won’t actually have to load the full 5GB from disk. But it’ll still be 2-3GB at least, and that’s not instant. Which is all beside the point, because it won’t be stored resident in memory, so it’s not a RAM constraint.

It’s reasonable to assume that the Xbox One uses some version of Hyper-V to facilitate its features, and given that presumption, Hyper-V effectively uses RAM to make snapshot saves and loads pretty much instant.

I’m no Windows admin, but a quick Google search indicates Hyper-V snapshots are stored on disk. Shrug. I guess we’ll see how it performs.

Both MS and Sony decided to offer games close to 10x the memory of the current consoles, making it 5GB. At some point, adding 1 or 2 more GB stops making any meaningful difference, so they could either save on cost and reduce the amount of memory, and Sony, if rumors are true, came close to shipping only 4GB, but that would have been too little. 6GB was probably the bare minimum, but that doesn’t save any real money over 8GB. So what to do with the extra 2GB? Sony came to the same conclusion MS did: those extra 2GB offer no real advantage to games, but they can make the system far more flexible by allowing all sorts of memory-intensive applications to run in parallel with games. And that’s what’s going to use the memory: applications, not the OS itself.

It’s possible this bet may not pay off, and that the only resource-intensive application people use on these boxes is games, with everything else handled by something simple like a Chromecast. But when building a device that gets no hardware upgrades for 5 years, it’s best to prepare for the possibility, so I think they are making the right choice with the memory split.

Stored on disk, yes, but selectively persisted in memory to enable them to save and load instantly.

Wasn’t there something about the Xbone using three simultaneous OSes (presumably operating independently) to provide the diversity of promised services? If that’s the case, then you would have to plan for as many “max-out” scenarios for memory as you have active OSes. Seems bloaty, if true.

Kinda. One of those three is a hypervisor below the other two: the game OS, and the Windows OS for the dashboard/frontend stuff.

This article on Hyper-V makes a completely virtualized 2-OS system appear not to be very game-friendly. Lots of denial of access to hardware and graphics assets to the VMs. Hopefully, there are customizations/workarounds that let the Xbone VMs play nicely when they have to share the screen.

But hell if I know; any programming I did was not too far removed from punch cards (and in one instance actually did involve punch cards). However, my recent experience playing with Oracle’s VirtualBox (WinXP under Win7; it’s amazing that Direct3D works at all) indicates that sharing access to 3D graphics hardware is not something that comes naturally to a VM environment.

That’s the advantage of a closed platform: you only need to test on a single configuration, and you can deny applications that don’t follow the rules, so there are a lot fewer edge cases to cover.

Yeah… anyone else reading this and thinking:

“So… Microsoft and Sony have developed some of the best selling consoles in gaming history. They are billion dollar companies in part because of their gaming expertise. Independent of each other, they both arrived at the same conclusion about how much of their memory they should allocate to gaming versus their operating systems. So I’m not too worried about what some of the folks here are writing.”

Could just be me.

I refuse to believe that multi-billion dollar companies are incapable of making mistakes.

For example, I can think of one company that thought it would be awesome to pin their console future on a DRM scheme that every gamer would find absolutely repugnant.

Another company thought it was a great idea to launch a year after their competitor with a $599 price.

Touché. I guess I’m saying that when it comes to this technical aspect, I’m willing to trust the engineers at these companies more than the armchair conjecture here.

Oh I know they’re full of geniuses, dude. Like the ones who came up with that ridiculous used-game DRM scheme they didn’t even talk to the publishers about, which consequently blew up in their face. Not to mention all the other 180-degree policy changes these geniuses came up with, like indie publishing and always-on. Hey, remember the Red Ring of Death? How about Sony’s $599 launch price and designing a console that developers had no clue how to code for? Nah, these guys never screw up.

I think it’s more a case of both companies having been burned somewhat by their memory reservations last generation and having the luxury of being very conservative this time around. Even on PC, games that use more than 4GB are exceptions because of the Win32 limitations still in place, so 5GB will feel plenty generous, at least in the beginning. And since nothing is set in stone, more can be released in the future.

The way I see it is: how do the developers feel about all this? What I’ve been hearing is that they have more RAM than they know what to do with. If they need more later, then MS or Sony can allow for more, but until they care, I certainly won’t.

That would be my guess, since even on the PC side, Win32 code can’t easily go over the 3GB limit as Brad notes above (although it might be different because there is the video card’s RAM to deal with as well, while the consoles use integrated pools). Even a 64-bit version of a PC game is likely to be a 32-bit port with some bells, whistles, and perhaps operational smoothness added (the 64-bit WoW client, for example, still has to deal with a system where a lot of 32-bit clients are logged in, and can’t be seen to offer the 64-bitters any edge that isn’t hardware-related). It’s a new frontier.
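To put rough numbers on the Win32 limits mentioned in the last couple of posts (a back-of-envelope sketch; these are the standard Windows address-space splits, not anything measured on the consoles):

```python
GiB = 1024 ** 3

# A 32-bit pointer can address at most 2**32 bytes = 4 GiB, total.
full_32bit = 2 ** 32
assert full_32bit == 4 * GiB

# Default Win32 split: 2 GiB user space / 2 GiB kernel per process.
default_user_space = full_32bit // 2      # 2 GiB

# With the /LARGEADDRESSAWARE link flag (plus the /3GB boot option on
# 32-bit Windows), a process can get up to 3 GiB of user space...
laa_32bit_os = 3 * GiB

# ...and a large-address-aware 32-bit process under 64-bit Windows
# (WOW64) can use close to the full 4 GiB.
laa_wow64 = 4 * GiB

print(default_user_space // GiB, laa_32bit_os // GiB, laa_wow64 // GiB)
```

So even a "64-bit" build of a ported 32-bit engine often stays within these budgets in practice, which is why a 5GB game reservation looks roomy by PC standards.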

It seems pretty naive to assume that because some product planners came up with stupid ideas around DRM and pricing in the past that the technical operating system architects are similarly stupid and/or misguided.

Possibly, but who do you think comes up with requirements like always-on recording and live streaming for everyone?

Intelligent people who know how to look past forums full of disproportionately vocal deriders and simply look at the sheer number of people who would benefit from such features, to the point where not having them would put the company at a major disadvantage that could significantly harm sales.