PS3 chip has been patented

Trying to bring down the Sony hype is impossible. Believe me, I tried when the Dreamcast launched, and less-informed folks simply ate it up.

The fact that the PS2 went on to sell so well sort of makes the PS3 a self-fulfilling prophecy: no matter how outrageous the claims get, people are happy with their PS2s and probably still think the things can guide missiles, and that the weapons inspectors are going to turn them up any minute in Iraq with the coordinates to downtown Tel Aviv programmed into them.

–Dave

Dude, Saddam just had bulldozers run over all his PS2s. I just watched it on the news!

Don’t sweat it, because “Buoyed by so much processing power, consumers will be able to interact with these worlds without worrying about hackers, viruses, or lost connections.”

Dean Takahashi, author of the article in question in this thread (and of the sentence above), has a long history of contributing to the ruination of our lives. What will happen upon his third offense?

But here is the thing: the PS3 only needs to be able to do half (or even much less) of what they are claiming, and 95% of its users will believe it is doing all of it. Remember when the DC came out and it was pushing 60 FPS in Soul Calibur, and people went “Wow! Look at that!”, and a year later people were saying “The PS2 will be able to do 5 times that!”? Sony was spouting out the numbers, as usual. Well, it didn’t work out that way, but the PS2 WAS a bit faster than the DC, so people bought right into it. Who the hell can tell the difference at such speeds anyway? For the average guy on the street, Sony can say anything they want and there is no way they can be challenged. You techno-guys, with your dev kits telling you the real in-motion specs, know the difference, but most people don’t. That’s why I don’t remember anyone other than the hardest of the hardcore challenging the PS2 specs when it came out.

Christ, you guys, as if the XBox specs were realistic. And Dave, you just need to get over the Dreamcast. I see my PS2 do things every day the DC couldn’t even conceive of. Sega didn’t lose the generation because of some press releases from Sony; they were outspent, outclassed, outmarketed, and always willing to make compromises in order to be first. Same reason the Saturn failed: Sega’s hardware people lack vision.

They didn’t say the PS3 would do this, only that the architecture was designed to handle it. Sony, Toshiba and IBM (well, mostly IBM) are always looking for the next big thing in server technology. Hardware-based distributed computing could be big. Look how successful Seti@Home, RC5, etc. are. You could sell your idle cycles, and spyware would steal them sometimes.

Nah, the whole idea behind the Cell architecture is that you never have to worry about managing threads. It’s supposed to be completely abstracted away from the programmer. If Tim gets a headache trying to manage threads as they execute, something has gone horribly wrong; his comments, if they are informed, assume Sony’s engineers have failed miserably at their goals. Either way it’s not what I’d call a fair assessment. I suppose he might be talking about just writing multithreaded code in general, but people should be doing that already.

Hey, maybe the PS3’s OS will be based on BeOS! I remember Sony having some interest in the platform, and one of the principal technologies of BeOS was its pervasive multithreading. You couldn’t help but write multithreaded applications, the way the OS was designed.

But, yeah, the development software is key to the architecture’s success. Presumably Sony, Toshiba and IBM are well aware of this. They really need a bitchin’ compiler.

As a refresher, here’s the full patent application.

The design is really flexible and modular, so a chip can be specialized for 3D graphics or communications depending on which varieties of Processing Elements are included. For example, there is a version where half of the Processing Elements are omitted in favor of a pixel engine, an image cache (embedded VRAM, I assume, but with a new name, like what Nintendo did) and a DAC. There is a diagram of a graphics-centric implementation which has 4 Processing Elements with 8 APUs each (each APU with four 128-bit FPUs and four 128-bit integer units), and then 4 Processing Elements with 4 APUs and 4 pixel engines/image caches. There’s little to no information on the pixel engines, though. Each Processing Element also has a Processing Unit, which is the PowerPC core; the Processing Unit directs the flow of data and manages the execution of threads on the APUs.
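To make the layout a little more concrete, here’s a rough sketch of that graphics-centric configuration as C structs. Only the counts come from the patent diagram; every type and field name here is my own invention, not anything from the patent.

    /* Rough sketch of the graphics-centric Cell configuration described
       in the patent. Counts come from the diagram; all names invented. */

    typedef struct {
        unsigned char fpu[4][16];   /* four 128-bit floating-point units */
        unsigned char iu[4][16];    /* four 128-bit integer units */
    } APU;

    typedef struct {
        void *pixel_engine;         /* little to no detail in the patent */
        void *image_cache;          /* embedded VRAM, presumably */
    } PixelPipe;

    typedef struct {
        void *pu;                   /* the PowerPC Processing Unit: directs
                                       data flow, schedules threads on APUs */
        APU apu[8];                 /* 8 APUs in the compute PEs ... */
        int num_apus;               /* ... but only 4 in the graphics PEs */
        PixelPipe pipe[4];          /* present only in the graphics PEs */
        int num_pipes;              /* 0 for compute PEs, 4 for graphics */
    } ProcessingElement;

    typedef struct {
        ProcessingElement pe[8];    /* 4 compute PEs + 4 graphics PEs */
        void *dac;                  /* the video DAC mentioned above */
    } GraphicsCell;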

The design actually is similar to Hyper-Threading in what is being attempted. Basically the idea is to keep execution units from going unused. Hyper-Threading allows multiple threads to execute simultaneously and transparently on a single CPU (which normally is not possible). The Cell design takes the same idea and builds the hardware around that concept. The problem with the PS2 was that it was essentially a multi-processor architecture on a single chip, designed with maximum multimedia/3D performance potential in mind. The Cell architecture builds on that foundation of parallelism but aims to maximize effective use of that potential.

The problem is, as many programmers will likely tell you, they often want or need control over what is executed when, and usually the programmer is smarter than some on-chip decision maker that’s trying to do it for them.

The unpredictability of when something executes, based on when a chip has cycles available, combined with the absolute need in games for specific processing at specific times, means that all this technology may not be well suited to games at all.

–Dave

As a programmer, I’m going to have to say “sometimes we care”, and it depends a lot on the type of application running. The thing I can’t figure out is why you need that many processors. More processors than threads means you end up with unused chips.

I agree with Dave. While I can think of a few uses for threads (loading stuff in the background, say), I’m hard pressed to think of 8.
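The background-loading case, for reference, is about as simple as threading gets. A minimal sketch with POSIX threads; the file name and the loading details are made up:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical background loader: the main thread keeps running the
       game loop while a worker thread streams in the next level's data. */
    static void *load_level(void *arg)
    {
        const char *path = arg;
        FILE *f = fopen(path, "rb");
        if (f) {
            /* ... read geometry/textures into memory here ... */
            fclose(f);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t loader;
        pthread_create(&loader, NULL, load_level, "level2.dat");

        /* main thread: keep running the game loop for the current level */

        pthread_join(loader, NULL);  /* block only when the data is needed */
        return 0;
    }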

That depends entirely on how much latency is caused by internal scheduling and data transfer. Like I said in an earlier thread about this topic, I think the idea is to make the whole thing as transparent and unobtrusive as the “hyperthreading” in current Intel CPUs. That is, you won’t notice any delay whatsoever as the scheduler is putting tasks on different execution units, and you won’t have to do anything special.

Now whether the much bigger PS3 structure can actually work with such a degree of transparency is another issue. I don’t think there are any existing multi-processor architectures that can do this.

This sounds like them saying it to me:

Article wrote:
Sony officials said that one key feature of the cell design is that if a device doesn’t have enough processing power itself to handle everything, it can reach out to unused processors across the Internet and tap them for help.

RC5, Seti@Home, etc. are all applications, not CPUs. That’s like calling your friend up on the phone and asking them to tape The Simpsons for you and bring it to work tomorrow.

They are talking about using the Internet for CPU level load sharing. That’s like calling your friend up on the phone and trying to ask him to come give you the Heimlich Maneuver through a series of grunts.

Look at the kind of dedicated network bandwidth and latency that CPU-level distributed computing uses now. I assure you it ain’t ADSL.

L

I think the distributed network stuff is all part of the hype machine and nothing will come of it for the PS3. They’ll talk and talk about it and reduce the fanboi to a quivering goo of anticipatory angst, but it won’t really happen in any meaningful way.

One other thing is that with any kind of abstraction layer between the programmer and the chips, you’re often going to find that if you can get right at the chips, you can make things faster. Look at Gran Turismo on the PSOne, a game that supposedly runs mostly on machine code instead of high-level languages. Now imagine programmer team A sees that they’re losing, oh, 30% of their speed by letting the chips handle the distribution of work, and that by doing it themselves (even if it takes maybe six months longer and results in code they can reuse for future projects) they can get back that 30% and more. Do you really think they’re going to let this on-chip instruction handler take over?

If you say yes, you haven’t followed console gaming for any great length of time.

A lot of this stuff looks good on paper to journalists and the fandom, but I think in the end it’s going to be a nightmare eclipsing the horror stories of PS2/Sega Saturn and programmers will bitch and moan all the way to the bank with their Sony-made money hats on.

–Dave

Agreed about distributing resources over a WAN; that’s just crazy and won’t happen. Not with games.

One other thing is that with any kind of abstraction layer between the programmer and the chips, you’re often going to find that if you can get right at the chips, you can make things faster.

Right now we don’t even know if the scheduler will be accessible to the programmer at all. Hyperthreading on Intel CPUs is out of the programmer’s control, for instance. And if the architecture is sufficiently complex it might be nearly impossible to manually create an optimal instruction sequence for any length of program code.

Look at Gran Turismo on the PSOne, a game that supposedly runs mostly on machine code instead of high-level languages. Now imagine programmer team A sees that they’re losing, oh, 30% of their speed by letting the chips handle the distribution of work, and that by doing it themselves (even if it takes maybe six months longer and results in code they can reuse for future projects) they can get back that 30% and more. Do you really think they’re going to let this on-chip instruction handler take over?

If you say yes, you haven’t followed console gaming for any great length of time.

Uh, if you say no, you’ve apparently forgotten that PC game developers were also the last holdouts to abandon assembler programming while everyone else was already coding in C or Turbo Pascal or Visual BASIC. We’re talking about the PS3 here, not the PSOne. The more complex the architecture gets, the harder it is to write assembler. At the same time, the programs themselves get longer and more complex. Eventually writing big chunks of ML/ASM code just isn’t feasible anymore, and the use of assembler gets restricted to small performance-critical spots, such as the core routines of graphics drivers on the PC. Consoles aren’t any different in this respect.
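On the PC it already looks like this: a few instructions of assembly wrapped in C for the performance-critical spot, everything else in a high-level language. A sketch using GCC-style x86 inline assembly to time a hot loop with RDTSC (the loop body is just a stand-in):

    #include <stdio.h>

    /* Read the x86 timestamp counter: a couple of instructions of
       assembly wrapped in C, rather than whole modules written in ASM. */
    static unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
        unsigned long long start = rdtsc();

        volatile double x = 0.0;
        for (int i = 0; i < 1000000; i++)  /* stand-in for the hot loop */
            x += 1.0;

        printf("cycles: %llu\n", rdtsc() - start);
        return 0;
    }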

A lot of this stuff looks good on paper to journalists and the fandom, but I think in the end it’s going to be a nightmare eclipsing the horror stories of PS2/Sega Saturn and programmers will bitch and moan all the way to the bank with their Sony-made money hats on.

From what little I know about the PS3, I don’t think it will be humanly and economically possible to adapt to the proposed architecture by hand. The compiler and/or hardware must do most of the work automatically, or nobody will ever get a game done. Yeah, someone will eventually write some super-tricky routine for some super-special effect, but Sony can hardly afford to wait for that to happen.

The fun is in debugging your program once it’s chopped up and different parts are running on different processors.

Yep, that’s another thing that Sony will have to take care of. It would be extremely annoying if you got random errors depending on which processor a particular piece of code was running on…

No problem - just call in additional resources from the Internet and the PS3 will debug your code for you! Soon all videogame development will be done remotely from the nation of ZeroOne - It’s the only choice…

The PS2 is far superior (from a design point of view) to any other console. What held it back was the hideous developer tools, which were so bad that developers now use tools they were forced to painfully grow almost from the ground up, such as the RenderWare platform. Luckily, the PS2 turned out to be a giant sales success, achieving heights not even the PSX achieved, and it continues to grow at an expanding rate, with the competitors not even pretending they stand a chance of competing directly for the mainstream market. Such amazing commercial success signs the pay-cheques of all those hungry developers, and so they had no choice but to develop quality tools to make use of the hardware as it was designed to be used, often teaching themselves exactly how, because Sony didn’t believe in quality documentation.

The PS3, honestly, is not that much different from an architectural point of view than the PS2. The PS2 could be called parallelism with huge bandwidth; the PS3 could be called massive parallelism with massive bandwidth. I assume Sony realizes what caused their developers so much grief the last time around, and so will be prepared with years of intense commercial development for the next round. That’s only logical.

And Sony does finally look like it’s interested in getting online, as they have begun offering sweetheart deals to all developers willing to build online games for the PS2. With 50,000,000+ consoles shipped, this is a deal that no one will pass up, I don’t care who it is. Clearly Sony is positioning itself to take absolute dominance of all markets, leaving nothing for Microsoft come 2006. Nintendo will probably forfeit the next round and withdraw.

A greater GUTB post I may never have seen. :)

–Dave

GUTB, agreed. That’s what I was saying before. One of the primary goals of the Cell design is to tap what would otherwise be untapped processing power. It will be absolutely impossible for any programming team to manage by hand the simultaneous execution of code across 512 execution units. The kind of maturation Dave’s talking about will likely take place almost entirely on the tools side of things: better and better compilers, and even just more efficient art creation. But you’re not writing to the metal. In fact, it will probably be pretty easy for developers to get that “percentage of power used” number the magazines all like so much. Just ask the Processing Unit how many ALUs are still idle.

This sounds like them saying it to me:
Article wrote:
Sony officials said that one key feature of the cell design is that if a device doesn’t have enough processing power itself to handle everything, it can reach out to unused processors across the Internet and tap them for help.

I don’t see the words Playstation, PS3, game system or console in there. I see “cell design” and “device”. Like I said, this probably won’t end up being a PS3 thing, but that doesn’t mean it won’t be implemented in server farms and the like.

Sony’s idea is not totally outrageous. SMP (symmetric multiprocessing) is a common and well-known technology, and as I understand the design, it sounds like an SMP architecture. Placing multiple processors on one chip doesn’t alter the basic concept of SMP (multiple processors sharing a common memory bank). The strange part is that SMP architectures work best for server systems that are given many tasks; each task can then be given to a processor. For one program to take advantage of multiple processors, the program need only be designed with multi-threading in mind.
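In its simplest form, “designed with multi-threading in mind” just means carving the work into independent slices, one thread per processor, and letting the OS scheduler spread them across the machine. A minimal sketch with POSIX threads; nothing Cell-specific here, just generic SMP:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4      /* one per processor on a 4-way SMP box */
    #define N 1000000

    static double data[N];
    static double partial[NUM_THREADS];

    /* Each thread sums its own slice; the OS scheduler is free to run
       each thread on a different processor. */
    static void *sum_slice(void *arg)
    {
        long id = (long)arg;
        long lo = id * (N / NUM_THREADS);
        long hi = lo + (N / NUM_THREADS);
        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NUM_THREADS];
        for (long i = 0; i < N; i++) data[i] = 1.0;

        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&t[i], NULL, sum_slice, (void *)i);

        double total = 0.0;
        for (long i = 0; i < NUM_THREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("sum = %f\n", total);
        return 0;
    }

On a 4-way box each slice lands on its own processor; on a uniprocessor the same code still runs, just serialized.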

The only commercially and generically successful application I know of where a single requested piece of work is automatically broken up amongst multiple processors, fully utilizing them, is the RDBMS (Relational Database Management System). I don’t see how Sony is going to automatically break up a game program and ship pieces off to the different processors. I suspect that part of the story was muddled in the reporting.

Sorts, collision detection, physics updates, potentially-visible-surface generation, vertex transformation, and even scanline rendering can all be parallelized. I don’t suspect the reporting is muddled at all; it’s called hype. And the EB salesdrones must be reprogrammed, as they were just starting to catch on that XBox games look better than PS2 games.

Having programmed both a 2-processor and a 4-processor mostly-SMP console, I’d much rather drive one Corvette than four Yugos. There are plenty of reasons why Intel won the processor war that are totally unrelated to their scurrilous business tactics.
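For what it’s worth, here’s a crude sketch of parallelizing one item off that list (collision detection) across worker threads. The Entity type and the numbers are invented for illustration; a real engine would gather contacts per thread and merge them rather than poke shared flags:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_ENTITIES 1024
    #define NUM_WORKERS  4

    typedef struct { float x, y, r; int hit; } Entity;  /* invented type */
    static Entity ent[NUM_ENTITIES];

    /* Each worker takes an interleaved slice of the outer loop, so every
       pair (i, j) with i < j is tested exactly once and the slices get
       roughly equal work. The concurrent hit-flag writes are benign here
       since every writer stores the same value. */
    static void *collide_slice(void *arg)
    {
        long id = (long)arg;
        for (int i = (int)id; i < NUM_ENTITIES; i += NUM_WORKERS)
            for (int j = i + 1; j < NUM_ENTITIES; j++) {
                float dx = ent[i].x - ent[j].x;
                float dy = ent[i].y - ent[j].y;
                float rr = ent[i].r + ent[j].r;
                if (dx * dx + dy * dy < rr * rr)
                    ent[i].hit = ent[j].hit = 1;
            }
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < NUM_ENTITIES; i++) {
            ent[i].x = (float)(i % 32);   /* lay entities out on a grid */
            ent[i].y = (float)(i / 32);
            ent[i].r = 0.75f;             /* large enough to overlap neighbors */
        }

        pthread_t w[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&w[i], NULL, collide_slice, (void *)i);
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_join(w[i], NULL);

        int hits = 0;
        for (int i = 0; i < NUM_ENTITIES; i++)
            hits += ent[i].hit;
        printf("%d of %d entities are colliding\n", hits, NUM_ENTITIES);
        return 0;
    }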

You’re correct, but those are pieces of an application, not the application. As you mentioned, it is likely just hype.

One fast computer is much better than multiple slow computers.