I can't help but wonder if Sony ever look at anything outside their own little corner of the universe.
Quick history lesson - the original PlayStation development kits sounded very similar to these "Cell workstations" Sony are blathering on about. They were essentially custom-built MIPS machines, with similar graphics and sound hardware to a PS1, supplied with hardware manuals and virtually no software support aside from a simple C compiler. Almost exactly the same setup as Sega's Saturn development kits.
The PS1 development kits that actually shipped were developed by another company. They ran on standard PC hardware, used fairly standard C compilers, and provided an extensive set of libraries and examples - far beyond what Sony had planned to provide, and far beyond what Sega provided with the Saturn.
We all know what happened to the Saturn, how few third party developers it attracted, and how badly they tended to screw things up.
With the PS2, they kept development running on commodity hardware, again using GCC as the reference compiler, so you could develop PS2 games on any platform you liked. However, because developers had been coding the PS1 as close to the bare metal as they could by the end of its life, Sony assumed that all developers would want to do the same on the PS2. So they provided a very thin operating system and left developers to do everything themselves. That meant developers had to learn every aspect of the system, and spend a stupid amount of time working on things that Sony should have dealt with in the first place.
The result - the first PS2 games were utter crap. Even the ones that Sony produced. Compare that with the Gamecube or (especially) the Xbox, whose first-generation games were better than virtually any PS2 game. Both had full development systems, useful and well-written libraries for talking to the hardware, much simpler system designs that are easier to get the most out of, and plenty of guidance on how to do so.
The PS2 didn't suffer the same fate as the Saturn, but only because of brand recognition and Sony's marketing ability. Left to their own devices, I doubt developers would have bothered with the platform. Too much hassle, not enough return.
The PS3 makes the same mistake multiplied by a factor of fifty.
The processor seems to be derived from IBM's POWER architecture, as is the PowerPC that's used in Macs, the Nintendo Gamecube, probably the Xbox 2 and whatever Nintendo are doing next. The only modification I've been able to find, among all the marketing garbage, is that the Cell is intended for "media applications", and that it's intended for use in a multiprocessor environment.
First off, that means it'll have something similar to the math coprocessors in the PS2, or the AltiVec units in PowerPC G5 chips. They work well enough for things like video encoding and decoding, but they certainly aren't going to perform as well as a dedicated hardware implementation would, no matter how much CPU power you throw at them. They work well for 3D math, but most modern video hardware deals with that even more efficiently. They could conceivably be used for 3D rendering, but again, dedicated hardware does the job more efficiently, easily, and inexpensively.
The other part that worries me is the multiprocessing part. Realistically, that is of no use whatsoever in a typical game. The main CPU should simply be receiving input from the input devices, running the game for a certain period of time, and then dispatching the appropriate data to the video and sound hardware. Programming anything for multiple processors is inherently more difficult, no matter how hard you may try.
Now, when you consider that last comment, I think they're probably going to try using the main CPU to generate sound and graphics. Which means a return to software rendering. Software rendering is a lot more flexible than typical fixed-pipeline 3D accelerators, but also requires far more raw power, and therefore costs a lot more.
Sony really do suffer from a serious dose of Not Invented Here. Everyone else is moving toward hardware rendering with programmable pixel and vertex pipelines (Vertex and Pixel Shaders in Microsoft jargon, vertex and fragment programs in OpenGL jargon). Because you can change the way each vertex and each pixel is processed, a sufficiently powerful implementation can produce virtually any effect you care to think of. Combined with a capable driver system and API (such as DirectX 9 or OpenGL 2.0), these systems are far easier to use, far easier to get decent performance out of, cheaper to manufacture, and generally more suitable for the job. They're getting cheaper all the time, they're well understood and fairly easy to grasp, and we already know how to get good performance and good results out of them.
Intel made the same mistake when designing the Itanium. They assumed they could completely change the entire architecture, and even the principles on which the CPU worked, and still get decent performance. Admittedly, a 1GHz Itanium beat a 3GHz P4 or a 3000+ Opteron at certain specific tasks, but in most cases it performed abysmally, because it was just too different.
The only way Sony could possibly pull it off is if they implement a vast software library to deal with this stuff on behalf of the developers, filling the same gap that DirectX fills on the Xbox. But they won't do that. Sony really do not seem able to create any kind of software, probably because they're a hardware company, not a software company.