In a typically snarky Register article, Chris Mellor raises a caution about the use of future many-cored chips in IT. In essence, he says that today’s apps run in a relatively small number of threads each, and modifying them to run in many threads is too difficult. Hence, most of the IT use for many-cored chips will be via hypervisors that assign apps to cores as makes sense.
Mellor has a point, but he’s overstating it. For example, he asserts that Oracle databases don’t run in a lot of threads. Actually, they routinely run today in multiple threads per core, on SMP (Symmetric MultiProcessing) machines with at least 16 cores. Large OLTP systems often have highly clustered middle tiers. And on the analytic side, Teradata, Netezza, Kognitio, and Greenplum have each run on configurations with over 100 processors or cores.* Other analytic processing (data mining, geospatial analysis, and so on) benefits from massive parallelization as well. And H-Store, a candidate next-generation OLTP DBMS architecture, could thrive on the massively multi-core chips of the future.
* And I doubt that’s a complete list. For example, Aster and DATAllegro are probably in the club too.
In one important way, I’m being glib. My examples are drawn from cases in which many different chips are used, each with its own Level 2 cache, memory bandwidth, and so on. In some cases, that’s a huge distinction. Replace the 100 chips of an MPP system with a single many-core node, and you can be right back to the I/O bandwidth problems that cripple many conventional-DBMS data warehousing installations. But if the fundamental argument is “There’s little point in putting more transistors on a chip, because there isn’t much that software can do with them anyway,” then that argument is extremely incorrect.
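The bandwidth arithmetic behind that caution can be sketched in a few lines. All of the numbers below (per-node scan rate, table size, node count) are illustrative assumptions of mine, not figures from the article; the point is only that aggregate scan bandwidth scales with the number of independent I/O subsystems, not with the number of cores.

```python
# Back-of-envelope I/O arithmetic for the MPP-vs-single-node point above.
# Every constant here is an assumed, illustrative value.

NODES = 100                          # assumed MPP node count
SCAN_GB_PER_SEC_PER_NODE = 1.0       # assumed sequential scan rate per node
TABLE_SIZE_GB = 10_000               # assumed 10 TB table to scan

# 100 MPP nodes scan in parallel, each reading its own slice
# from its own disks, so bandwidth aggregates across nodes.
mpp_seconds = TABLE_SIZE_GB / (NODES * SCAN_GB_PER_SEC_PER_NODE)

# A single node with 100 cores still funnels every byte through
# one I/O subsystem, so the cores mostly wait on the same disks.
single_node_seconds = TABLE_SIZE_GB / SCAN_GB_PER_SEC_PER_NODE

print(f"MPP full-table scan:         {mpp_seconds:,.0f} seconds")
print(f"Single-node full-table scan: {single_node_seconds:,.0f} seconds")
```

Under these made-up numbers the many-core single node is 100x slower on a scan-bound query than the MPP cluster, even with identical total core counts, which is exactly the distinction between chips and nodes drawn above.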