All it needs is feet. Are feet. Presumably they thought of that though.

It’s probably not removable because they didn’t want people taking the stand off and then setting it upright so that all the intake holes were blocked.

That would make sense, I imagine the Xbox team is deeply paranoid about heat issues.

Is the GPU actually less powerful, or is that just conjecture? I know the XSS has less memory, which explains why XB1X enhancements might not work.

Ding ding! I gotta think that’s the reality of it. Because you know that’s what people would do. They’d take that stand off, lose it, then tip it on its side and boom, dead Xbox, resulting in long CS calls and lots of crying.

The Xbox One X has 12GB of GDDR5 at 326GB/s bandwidth

XSS has 10GB of RAM, 8GB of which is at 224GB/s bandwidth

Xbox One X GPU is 6 teraflops, 40 compute units, 1.172GHz. It's basically a beefed-up version of the Xbox One GPU.

XSS GPU has 4 teraflops, 20 compute units, and 1.565GHz. But, again, it's a brand-new architecture, and a lot more advanced than the Xbone/XboneX. (XSX is 12.15 teraflops.)
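The teraflop figures above aren't magic numbers; they fall straight out of the CU counts and clocks. A quick sketch (assuming the usual 64 shaders per CU and 2 FP32 ops per shader per clock, which holds for both GCN and RDNA2):

```python
# Back-of-the-envelope FP32 throughput from published CU counts and clocks.
# Assumes 64 shaders per CU and 2 FLOPs (one FMA) per shader per clock.
def tflops(compute_units: int, clock_ghz: float) -> float:
    shaders = compute_units * 64
    return shaders * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

print(round(tflops(40, 1.172), 2))  # Xbox One X -> 6.0
print(round(tflops(20, 1.565), 2))  # Series S   -> 4.01
print(round(tflops(52, 1.825), 2))  # Series X   -> 12.15
```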

The XSS CPU is waaaaaay more powerful than the XboneX CPU. It’s not even close. Storage speeds and IO are also light years ahead.

Eh, it’s a bit squishy as a matter of declarative fact. One X is more TFLOPS, but is also the far less-efficient GCN 4th Gen architecture. Series S is fewer TFLOPS, but RDNA2 is far more efficient per FLOP.

One X had more and faster GDDR5 memory, Series S has less and slower GDDR6, but with the various enhancements of the next-gen "Velocity Architecture".

And then you’ve got the CPU, where the Series S is like a Ferrari, and the One X is like a hamster that fell asleep on its wheel.

Yeah, don’t take the RAM speed on the XboneX to mean the XSS is at a disadvantage. Remember, the entire point of the XboneX was for MS to push "4K" gaming. You probably need that bandwidth for 4K, but for 1080p/1440p, you don’t.
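For what it's worth, the bandwidth numbers in this thread are just bus width times data rate. A rough sketch, using the widely reported bus widths and memory speeds (384-bit/6.8Gbps GDDR5 on One X, a 128-bit GDDR6 bus on Series S split into a fast 8GB pool and a slow 2GB pool):

```python
# Memory bandwidth = bus width (bits) * effective data rate (Gbps per pin) / 8.
def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits / 8 * rate_gbps

print(round(bandwidth_gbs(384, 6.8), 1))   # One X GDDR5        -> 326.4 GB/s
print(round(bandwidth_gbs(128, 14.0), 1))  # Series S fast 8GB  -> 224.0 GB/s
print(round(bandwidth_gbs(32, 14.0), 1))   # Series S slow 2GB  -> 56.0 GB/s
```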

I’ve seen the sentiment around since the reveal that two spec targets are going to be a problem, and it sounds like some devs are officially voicing their concerns.

Good to know about BC on the XSS. I’ll keep my One X around for Ninja Gaiden then.

Not that I planned to upgrade because there’d be no point for me, but the thought at least crossed my mind.

The XSS GPU is substantially slower than the XboneX’s. The CPU and storage are ridiculously faster. But games primarily scale with the GPU.

Much less so at 1080p of course.

With a fast GPU, it’s less likely to be the bottleneck at lower resolutions. The XSS GPU is not fast. It’s slower than the XboneX’s, and that’s basically an RX 580 in performance.

Do we really know this? Has anyone tested it in any way?

The TFLOP count isn’t comparable across different architectures. Like when Nvidia bragged about the Ampere 3080 running 30 TFLOPs, but due to the different architecture it’s actually equal to about 21 Turing TFLOPs.

It’s extrapolation from RDNA1. 20 CUs is really weak, even if RDNA2 offers a ~30% IPC improvement.

OK, that’s a little bit funny.

This needs to be officially confirmed but I have to believe the 2 GB that’s substantially slower is set aside for the OS and isn’t for games and certainly not meant at all for graphics.

These comments from devs so far look like people who don’t actually have the machine and are making assumptions.

Gee… someone said that’s usually a problem earlier in this thread.

This is no surprise.

Apologies, I didn’t realize you had previously posted the info.

One thing I learned during the PS3 era is that developers will complain, but they’ll still sit down and do their jobs and somehow eke out impressive results, especially if they’re good programmers.