June 22, 2009

The TPC-H benchmark is a blight upon the industry

ParAccel has released a 30,000-gigabyte TPC-H benchmark, and no less a sage than Merv Adrian paid attention. Now, the TPCs may have had some use in the 1990s. Indeed, Merv was my analyst relations contact for a visit to my clients at Sybase around that time — 1996 or so — when I was advising Sybase on how to market against its poor benchmark results. But TPCs are worthless today.

It’s not just that TPCs are highly tuned (ParAccel’s claim of “load-and-go” is laughable. Edit: Looking at Appendix A of the full disclosure report, maybe it’s more justified than I thought.). It’s also not just that different analytic database management products perform very differently on different workloads, making the TPC-H not much of an indicator of anything real-life. The biggest problem is: most TPC benchmarks are run on absurdly unrealistic hardware configurations.

For example, if you look at some details, the ParAccel 30-terabyte benchmark ran on 43 nodes, each with 64 gigabytes of RAM and 24 terabytes of disk. That’s 961,124.9 gigabytes of disk, officially, for a 32:1 disk/data ratio. By way of contrast, real-life analytic DBMSs with good compression often have disk/data ratios well under 1:1.

Meanwhile, the RAM:data ratio is around 1:11. It’s clear that ParAccel’s early TPC-H benchmarks ran entirely in RAM; indeed, ParAccel even admits that. And so I conjecture that ParAccel’s latest TPC-H benchmark ran (almost) entirely in RAM as well. Once again, this would illustrate that the TPC-H is irrelevant to judging an analytic DBMS’s real-world performance.
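Those ratios follow directly from the disclosed configuration; here is a quick arithmetic sketch (using only the figures cited above):

```python
# Figures cited above: 43 nodes x 64 GB RAM, ~961,125 GB of disk, 30,000 GB of data.
nodes = 43
ram_per_node_gb = 64
disk_total_gb = 961_124.9
data_gb = 30_000

total_ram_gb = nodes * ram_per_node_gb        # 2,752 GB, i.e. roughly 2.7 TB

disk_to_data = disk_total_gb / data_gb        # ~32, i.e. a 32:1 disk/data ratio
data_to_ram = data_gb / total_ram_gb          # ~11, i.e. a 1:11 RAM:data ratio

print(round(disk_to_data), round(data_to_ram))   # prints: 32 11
```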

More generally — I would not advise anybody to consider ParAccel’s product, for any use, except after a proof-of-concept in which ParAccel was not given the time and opportunity to perform extensive off-site tuning. I tend to feel that way about all analytic DBMSs, but it’s a particular concern in the case of ParAccel.


96 Responses to “The TPC-H benchmark is a blight upon the industry”

  1. Justin Swanhart on June 22nd, 2009 8:21 pm

    The disk usage seems excessive, but I’m not sure the memory number is that bad. 2TB of ram for 30TB of data doesn’t seem that unreasonable to me.

    Then again, I guess the size of disks keeps going up, and it was probably cost effective to buy that many large disks in bulk for the number of spindles.

    Maybe they are planning to scale up to SF100K with the system. SF100K with 2TB of ram would be very impressive indeed.

    Then again maybe they plan on heating a small country.

    Who can say?

  2. Peter on June 22nd, 2009 9:08 pm

    Why would they use more disks than they have to?
    In the end it is also about TPC-H/$


  3. Curt Monash on June 22nd, 2009 9:27 pm

    Everybody uses a lot more disk on the TPC than they would in real life. Well, Kickfire may be an exception, but if so it’s only because they didn’t have a lot of configuration flexibility in their systems. :)

  4. Curt Monash on June 22nd, 2009 9:27 pm

    My point on the RAM is that it’s nice to have the compression/budget to run entirely in RAM. But if you do, detailed numerical comparisons to disk-based vendors are rather beside the point.

  5. Justin Swanhart on June 22nd, 2009 9:48 pm

    The reason Kickfire doesn’t have more disks isn’t because of limitations in configuration flexibility.

    We use very few disks for two reasons:
    First, we are a compressing column store which generates more sequential IO than random IO.

    Second, the QPM (query processing module) is connected via an external PCI-X cable. PCI-X DMA transfer is used to send information back and forth over the bus to and from the cache memory attached directly to the SQL chip. This constrains the bandwidth between the disks and the QPM.

    Sequential access is the most important performance aspect for us, and the eight disks included in the appliance achieve more than enough bandwidth to copy data to the QPM at the fastest possible speed of the bus.

  6. Jerome Pineau on June 22nd, 2009 11:20 pm

    I find it hard to justify investing time & resources in the TPC-H exclusively as a marketing tool. There’s a lot a startup can do more effectively with $100K IMHO. That being said, it’s true that being able to validate a certain level of SQL support is positive, even though the TPC-H queries are not particularly “difficult” in that sense. However, I have not yet heard of any customer making a decision based on that. Furthermore, I understand TPC-DS was nixed, so it’s not clear where this whole system is headed. I’m not really sure who besides the big 3 have been helped by it either. As they control the underlying hw platforms anyway, it’s kinda moot. However I do know what it takes to get there and that effort cannot be taken away from ParAccel.

  7. Curt Monash on June 23rd, 2009 12:32 am

    ParAccel has a strong preference for benchmarks they control.

    I, however, prefer benchmarks controlled by customers or independent observers.

    To each their own.

  8. Greg Rahn on June 23rd, 2009 1:05 am

    @Jerome Pineau

    TPC-DS has not been nixed. It is still under development. See the TPC Benchmark Status page.

  9. dave on June 23rd, 2009 7:01 am

    i believe that i had the distinct honor of seeing merv jam years ago in the lounge of the exec ed center at babson college (perhaps it was 01)…he rocked, people cheered, popcorn was free, no lighters were held up following his set- but it’s also a strict no-smoking place….

  10. Jerome Pineau on June 23rd, 2009 12:19 pm

    Well if you’re shelling out $100K you damn well better be “in control” – of course, IMHO it makes more sense to let prospects run the TPC-H metrics themselves if they’re so attached to it. We can help them get setup. Let _them_ run it internally on their own h/w, that makes more sense. Not as much sense as running their _own_ data of course but, if that’s what they want, they rule.
    The “standard” IMHO should be run against a certain level of SQL, connectivity and I/O throughput.

  11. Karl on June 23rd, 2009 2:09 pm

    Hi Curt,

    You make valid points about the unrealistic hardware configurations of some vendors’ TPC-H configurations. However, I would caution against throwing the baby out with the bathwater.

    While nothing replaces a customer doing a POC on their own data, industry benchmarks do serve their purpose of providing an independent yardstick (no matter how imperfect) against which to measure vendor performance.

    If we throw out industry benchmarks, customers are left with vendor benchmarks. I don’t think I need to comment on the value of the latter.


  12. Curt Monash on June 23rd, 2009 2:49 pm


    In my opinion, this independent yardstick is too warped to be worth the trouble of measuring with.


  13. Jerome Pineau on June 23rd, 2009 8:01 pm


    I guess the question beckons: would you suggest doing away with the TPC.org altogether or fixing/reworking it in a more adequate manner?
    And if you say #2, is there another standards body out there you would model them against?

  14. Curt Monash on June 23rd, 2009 9:27 pm

    I don’t think any vendor-organized benchmarking body is going to work.

  15. Merv Adrian on June 23rd, 2009 10:28 pm

    I have to jump in here, but where to start? First I must thank Curt for calling me a sage. I’ve been called lots of things… but that’s another story.
    Over the years I’ve said a lot about benchmarks and their limits, and I plan to pull that together on my blog for a refresh. One or two quick points:
    the platform question is an interesting one. It’s usually provided by the HW vendor, as it was here, and in this case, the attributes of that box were relatively fixed. But more disk is good, and so is more memory, for almost everything anyway. And the price/performance makes it steadily more so – the results here make that quite vivid. I see no value in saying vendors should use less disk and memory because that would prove something – p/p is what customers want.
    Jerome’s comment is telling: of course, putting all your money in a benchmark for marketing is a bad idea. But if you have a partner who stands up the expensive hardware for you, and you get to use a well-known one that you occupy for at least a while with dramatically better numbers, it’s pretty useful for visibility, and my point in my post was that ParAccel will likely get some attention. For my part, I can tell you that I’ve had more traffic on that post (http://bit.ly/14JpI0, and pardon the plug) than any I’ve put up.

    And finally, thanks Dave! I miss the big sessions with the jam gang now that I’m not at Forrester. I don’t do weddings or bar mitzvahs, but I’ll play for beer.

  16. Neil Raden on June 23rd, 2009 10:54 pm

    I agree with Curt (this will probably distress him because he’ll think there’s a big BUT lurking, but there isn’t. He’s right): benchmarks organized by vendors represent a weird compromise. All you can get from them, at best, is an indication of how a given hardware/software configuration can perform UNDER THE BEST CIRCUMSTANCES. Again, if you can even get that. But what ever runs under the best circumstances? Nothing. Once this platform makes it out to the field and the crosswinds of old practice, old ideas, inadequately trained keepers and, of course, over-hyped expectations, performance is nothing like the documented benchmark.

    Take Oracle databases, for example. Oracle DBAs are more common than twisters in trailer parks. Most of them are woefully unprepared to tune and maintain an analytical database. For the newer drop-and-go or schemaless analytical databases, who knows what sort of indecent acts the clients will perform on them?

    So my suggestion is to forget about benchmarks. It would be better to sample the performance of these databases in the field and generate some statistics to get a directional idea of which combinations are performing better than others. If two-thirds of database X installations are pumping mud and two-thirds of database Y are doing OK, there is your evidence, with all variables considered.

    -Neil Raden
    Hired Brains

  17. Curt Monash on June 23rd, 2009 11:02 pm


    Do you know of any users who are interested in the best performance for managing 30 TB of data on 961 TB of disk?

    I don’t.

    The system built and run in that benchmark — as in almost all TPC-Hs — is ludicrous. Hence it should be of interest only to ludicrously spendthrift organizations.

  18. ParAccel pricing | DBMS2 -- DataBase Management System Services on June 23rd, 2009 11:22 pm

    [...] I noted in connection with ParAccel’s recent TPC-H filing, I think the whole exercise is basically an expensive joke. But one slightly useful spin-off is [...]

  19. Anonymous on June 23rd, 2009 11:24 pm

    Hi Curt,

    I have never seen you use such strong words against any company on your blog. Did this benchmark or company piss you off?

    In the end, it’s just a benchmark.

  20. Curt Monash on June 23rd, 2009 11:37 pm

    TPC-Hs waste hours of my time every year. I generally am scathing whenever they come up, e.g. in http://www.dbms2.com/2008/04/18/kickfire-kicks-off/ .

    ParAccel’s Kim Stanick has in the past threatened me when the company didn’t like what I wrote, both financially and (at least obliquely) legally. That pissed me off too. (Kim’s recent pettiness in blocking me on Twitter was by comparison just a trivial black mark.) It further pisses me off when a company that, in the kindest interpretation, is middle-of-the-road in how much it misleads people, goes around bragging about its so-called integrity.

    Perfect storm.

    For a contrary opinion: The aforementioned Kim Stanick — who used to claim she was under orders from her CEO not to comment on blogs — put up a substantive comment on Merv’s post. It would be nice if what she said there is actually true and not-too-misleading. We’ll see. Maybe ParAccel is getting its straightforwardness act together, like Netezza eventually did.

  21. Anonymous on June 23rd, 2009 11:52 pm

    As a data warehouse consultant, I love your blog and want (as I have experienced in the past) to read good coverage. I was definitely impressed with the benchmark numbers, so I would give credit to ParAccel for doing that. At the same time, I agree with you that no one should mislead; in fact, no one can mislead customers these days. Customers are smart.

    Looking forward to more information on your excellent blog.

  22. richard gostanian on June 24th, 2009 7:34 am



    Perusing your website, I detect a certain hostility towards ParAccel.

    I have no quarrels with that; after all you may have personal reasons which I’m not aware of.

    However when you make outrageous statements like

    “Managing 30 TB quickly with 961 TB of disk and 2 1/2 TB of RAM isn’t much of an accomplishment, especially if your compression is good enough to fit a whole lot of data into RAM”

    you display a profound ignorance about TPC-H in general and ParAccel in particular, that just makes you look foolish.

    Indeed you do more to harm your own credibility than raise doubts about ParAccel.

    Again from perusing your website, I see you’re not a great fan of TPC-H. But, for all its flaws, TPC-H is the only industry standard, objective, benchmark that attempts to measure the performance, and price-performance, of combined hardware and software solutions for data warehousing.

    You may dismiss it, but more than a dozen hardware and software vendors take it very seriously. And contrary to your assertion, many customers do factor TPC-H results into short-list decisions.

    It’s true that the rate of submissions over the past 2 years has slowed from that of previous years, but I think that is a consequence of the much higher level of performance that the newer results are achieving. This is primarily due to the innovations that systems like ParAccel are introducing.

    As for your speculations about the difficulty of ParAccel’s accomplishment, the following are some details that would be useful for you to understand, so you don’t say silly things in the future.

    By way of full disclosure, I was deeply involved in all aspects of producing ParAccel’s result. So obviously I’m somewhat biased. However, I should also point out that over the past 7 years, I have personally run hundreds of TPC-H tests on numerous servers, using several different DBMS products. More than a dozen of the currently listed TPC-H results are due either to my sole effort or to joint efforts with colleagues. Hence it’s safe to assume that I know something about TPC-H in general and about ParAccel in particular.

    First let’s look at your fantasies about compression. You want your readers to believe that aggressive compression enabled ParAccel to store the entire 30TB (raw ASCII data) in 2.5 TB of main memory. That simply did not, and could not, have happened!

    In order to accomplish what you’re suggesting, a compression factor of more than 100 to 1 would be required.

    To see why, consider the following.

    Of the 64 GB of memory per node (there were 43 nodes, for a total of about 2.7 TB), about 8 GB was used for the OS and the ParAccel text and (non-shared) data segments. About 50 GB of shared memory was used for (1) storing query processing intermediate results, e.g., the workspaces required for sorting and processing aggregations and the hash tables used for hash joins, and (2) the memory required for versioning (ParAccel uses a multi-versioning concurrency control protocol). So at best, you could argue that 6 GB of memory per node (or about 0.3 TB in total) could be made available for caching data. The other 58 GB per node, or about 2.4 TB in total, was used for other purposes.

    Now if you want to compress 30TB down to .3 TB you’re going to need a compression factor of 100 to 1. But such a compression factor, for TPC-H data, is more than two orders of magnitude beyond anything possible.

    Yes, you can cook up highly artificial data, with sufficient redundancy, so as to achieve a 100 to 1 compression factor. But TPC-H data, which is also artificial, does not possess anywhere near the amount of redundancy required for a 100 to 1 compression factor. Indeed, the best compression I’ve ever seen for TPC-H data is on the order of 5 to 1. This was from a database vendor (which shall remain nameless) who, to the best of my knowledge, has done more with compression than any other vendor. Interestingly, 5 to 1 is very close to the theoretical limit for TPC-H data, based on my own information-theoretic computations using Shannon’s entropy. So to cache the entire 30 TB with the most aggressively compressed product would require 6 TB on top of the already-accounted-for 2.4 TB.

    Since ParAccel’s TPC-H compression is more on the order of 3 to 1, ParAccel would require more than 12 TB for a fully cached database — a luxury the benchmark configuration did not possess.

    Moreover, assuming a best-case compression factor of 5 to 1, no vendor could possibly fully cache a 30 TB database with less than 8.4 TB of RAM. So let’s dispense with the idea that ParAccel achieved its magic by caching the entire database, or even a significant portion of it, in memory. No, ParAccel used virtually all of the 2.7 TB of available memory for necessary overheads, not to cache TPC-H data.
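    The memory and compression arithmetic above can be checked mechanically; a quick sketch, using only the per-node figures stated in this comment:

```python
# Per-node figures from this comment: 64 GB RAM, of which ~8 GB (OS + process
# segments) and ~50 GB (shared memory) are overhead, leaving ~6 GB for caching.
nodes = 43
cache_per_node_gb = 64 - 8 - 50                     # 6 GB/node for data cache
cache_total_tb = nodes * cache_per_node_gb / 1000   # ~0.26 TB in total

data_tb = 30
overhead_tb = 2.4                    # the "other 58 GB per node," summed

# Compression factor needed to fit 30 TB into the available cache:
needed_ratio = data_tb / cache_total_tb   # >100:1, far beyond the ~5:1 observed limit

# RAM needed to cache everything at 5:1 (best observed) and ~3:1 (ParAccel):
best_case_tb = data_tb / 5 + overhead_tb  # 8.4 TB
paraccel_tb = data_tb / 3 + overhead_tb   # 12.4 TB, vs. ~2.7 TB actually available

print(round(needed_ratio), best_case_tb, paraccel_tb)
```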

    This brings us to your next conjecture:

    Why did ParAccel require 961 TB of disk space?

    The answer is they didn’t.

    They really only needed about 20TB (10TB compressed times 2 for mirroring).

    But the servers they used, which allowed for the superb price-performance they achieved, came standard with 961 TB of storage; there simply was no way to configure less storage.

    These Sun servers, SunFire X4540s, are like no other servers on the market. They combine (1) reasonable processing power (two quad-core 2.3 GHz AMD Opteron processors), (2) large memory (64 GB), (3) very high storage capacity (24 or 48 TB, based on 48 x 500 GB or 1 TB SATA disks), and (4) exceptional I/O bandwidth (2-3 GB/sec, depending on whether you’re working near the outer or inner cylinders), all in a small (4 RU), low-cost (~$40K), power-efficient (less than 1000 watts) package.

    What any DBMS needs to run a disk-based TPC-H benchmark is high I/O throughput. The only economical way to achieve high disk throughput today is with a large number of spindles. But the spindles that ship with today’s storage are much larger than what they were, say, 5 years ago. So any benchmark, or application, requiring high disk throughput is going to waste a lot of capacity. This will change over the next few years as solid-state disks become larger, cheaper and more widely used. But for now, wasted storage is the reality, and should not be viewed in a negative light.

    Finally, I need to point out that your allegations of “heroic tuning” by Barry Zane are complete fabrications — at least for TPC-H. There were of course a few ParAccel configuration parameters that needed to be set properly for the given hardware configuration, but outside of that, there was virtually no tuning at the ParAccel level.

    However, since this was the first time a major ParAccel benchmark was attempted on hardware running Solaris (actually OpenSolaris), there were a number of Solaris-based optimizations that were inserted into the code, as well as some Solaris tunings that required experimentation. These additions to the product significantly improved the TPC-H performance. But these were one-time improvements that are now built into the Solaris-ParAccel product, so that anyone who runs ParAccel in the Solaris environment will automatically take advantage of them.

    So no, ParAccel’s outstanding result was not due to running in memory, using 961 TB of storage capacity or Barry Zane heroics.

    The real reasons are quite simple:

    (1) ParAccel has implemented algorithms, within the context of a highly scalable architecture, that make it the highest-performing data warehousing product on the market.
    (2) The low-cost, high-I/O-throughput SunFire X4540 is the perfect companion to ParAccel’s software.
    (3) Sun’s collaborations with ParAccel, introducing Solaris-based optimizations into the product, made a great product even better.

    So Curt, pray tell, if ParAccel’s 30 TB result wasn’t “much of an accomplishment”, how is it that no other vendor has published anything even remotely close?

  23. Anonymous on June 24th, 2009 10:44 am

    A very insightful explanation by Richard. Glad to read it from someone who sounds technically savvy.

  24. anonmyous on June 24th, 2009 12:58 pm

    After reading Curt’s post about ParAccel and Kim, this is obviously personal. But the last time I saw Curt at TDWI he did look older than 7, so I wonder why the little fella didn’t have a fit over Oracle’s 1 TB TPC-H?

  25. anonymous on June 24th, 2009 1:22 pm

    Check his bio.

    He consults for Oracle.

  26. Curt Monash on June 24th, 2009 1:35 pm

    To copy a response I made to the same comment on Merv’s blog:

    100:1 is “more than two orders of magnitude” better than the best compression possible? You obviously made a typo there.

    Anyhow, thanks for the clarification on RAM use.

    I’m highly suspicious of benchmark-specific R&D. But as Merv points out, that kind of thing is not ALWAYS valueless.

    As for the disk — clearly, there are also some commercial implementations where, for better MPP performance, enterprises keep their disks pretty empty. But 96% empty disks? That’s pretty unrealistic even for those cases. If an enterprise has a 30 TB data warehouse, I doubt ParAccel or many other vendors would recommend a 43 node SATA system.

  27. Curt Monash on June 24th, 2009 1:36 pm

    Anonymous Troll,

    Actually, I consult for a large number of head-to-head competitors in the analytic DBMS market.

  28. Curt Monash on June 24th, 2009 1:38 pm

    Thanks for the misplaced compliment on my youthful looks, Other Anonymous Troll!

    How strongly I react to a benchmark is affected by how much marketing noise it gets.

    But did you actually put “Oracle TPC” into the search box and look at what came up?

  29. Curt Monash on June 24th, 2009 1:44 pm


    Various people on Twitter think you work for Sun. Is that accurate?



  30. Curt Monash on June 24th, 2009 1:53 pm


    As for your question as to why other vendors don’t do TPC-Hs — perhaps they’re too busy doing POCs for real customers and prospects to bother.

  31. Anonymous on June 24th, 2009 2:20 pm


    Why does it matter if Richard is from Sun or the Moon??? He made a perfect clarification about TPC-H and offered great insight into how the test was conducted, and that’s what makes the blog impressive, rather than just saying something based on personal agendas.

  32. Curt Monash on June 24th, 2009 9:14 pm

    At the end, Richard stated his views about Sun equipment, which are extremely positive. That alone makes it relevant whether he works for or recently has worked for Sun.

  33. Jerome Pineau on June 25th, 2009 12:59 am

    I have to say, whether or not one disagrees with Curt’s take on either TPC or ParAccel, or any other company or industry topic for that matter, that I find it disturbing that anyone would be compelled to address this discussion under an “anonymous” identity. I find statements such as “so I wonder why the little fella” particularly ironic from someone small enough to address such a polemic under anonymous cover. Richard should also address his relationship (or lack thereof) to Sun. There’s nothing wrong with challenging Curt on the issues, but as he has the balls to speak up clearly identifying himself (either here or in other online venues), so should the people who take the time to oppose his views.

  34. Hriyadesh Sharma on June 25th, 2009 1:16 am

    The problem is that Curt has agreed in this blog that he has a personal agenda against this company because he is pissed off for whatever reason. That’s most concerning to me. I work in the DW industry as a consultant and read analysts’ blogs, but with this kind of hostility, I have lost faith in Curt’s blog.

    Let me know if you need my phone number too to discuss this further. :)

  35. Curt Monash on June 25th, 2009 1:32 am

    Thanks for going non-anonymous.

    But no, I don’t agree that my views are heavily influenced by my feelings. At most my tone is.

    In my line of work, it is necessary to be diplomatic in how one phrases things. I think that, on the whole, I am less diplomatic than most. But I still feel like blowing off frustration at that tact from time to time — and the target is likely to be somebody who has particularly abused that tact in the past.

    Nobody in recent years has more abused my past courtesy than ParAccel.

  36. richard gostanian on June 25th, 2009 1:55 am


    Responses to your comments:

    100:1 is “more than two orders of magnitude” better than the best compression possible? You obviously made a typo there.

    Yes I did. I should have said “more than ONE order of magnitude … “. Thanks for pointing it out.

    But 96% empty disks? That’s pretty unrealistic even for those cases. If an enterprise has a 30 TB data warehouse, I doubt ParAccel or many other vendors would recommend a 43 node SATA system.

    You’re mixing two independent things here (1) number of nodes and (2) total disk capacity.

    First, would ParAccel recommend a 43-node SATA system? If that’s what it takes to meet the customer’s performance requirements, then yes!

    The point of the TPC-H publication was to demonstrate the exceptional scalability of the ParAccel product. Not everyone would need the equivalent of 1 million QphH, so for those customers, fewer nodes would be recommended.

    Concerning disk capacity, as I stated before, if ParAccel could have obtained smaller disks, in the configuration that was used, ParAccel would have used them — without any loss of performance. But the servers they used were so inexpensive, it didn’t matter that 96% of the disk capacity was wasted.

    By comparison, take a look at IBM’s 32-node DB2 cluster, which was used for IBM’s 10 TB TPC-H submitted 10/15/07. IBM used smaller disks (36 GB) vs. ParAccel’s (500 GB), but required about 1000 more of them. IBM’s hardware cost was about 7 times ParAccel’s. And still their disks were 80% empty.

    To reiterate what I said previously, when solid state disks become more economical, wasted disk capacity will become a relic of the past. Until then, large numbers of spindles are the only way to achieve the kind of disk throughputs needed for high performance data warehousing on large databases.

    CURT: Do I work for Sun?

    RICHARD: Yes.

    I did not mention this in my previous post, not because I had anything to hide, but rather because (a) I felt my professional bona fides were far more relevant to my comments than my professional affiliation and (b) my comments represented my own thoughts and not necessarily any official position of Sun.

    As for your question as to why other vendors don’t do TPC-Hs — perhaps they’re too busy doing POCs for real customers and prospects to bother.

    I did not ask why other vendors don’t do TPC-Hs, because clearly they do.

    Indeed, within the last year, with the exception of IBM, there has been at least one submission from each of the major hardware vendors. And within the last two years all of the major hardware vendors have made multiple submissions. (Note: it’s always the hardware vendor, not the DBMS vendor, who makes the submission.)

    In total, there are currently more than 140 submissions posted on the TPC website.

    What I was asking you is: if, as you claim, ParAccel’s result is “not much of an accomplishment,” why hasn’t anyone else been able to match or exceed it?

  37. Hriyadesh Sharma on June 25th, 2009 1:58 am

    Curt, whatever might be the case, I suggest that as an analyst you have a responsibility to be fair to the DW vendors. I was hoping you would in fact applaud a small company (with probably not so many resources) for stellar benchmarks. Again, your statement in your last post that “Nobody in recent years has more abused my past courtesy than ParAccel” is a clear indication of your personal issue. I do not want to get started on your technical reasoning for dismissing the benchmark, as the points you make are extremely weak (trust me, I work on DBMSs every day, and the TPC-H is not easy shit). How can you even claim it was run in memory? The math does not work. 3x compression only gets you to 10 TB. Maybe, somehow, you can fit 10 TB of data in 2 TB of memory. Please get the facts before dismissing or endorsing something. At one point you were endorsing everything DatAllegro said, and we all found out that they never had the revenue or customers mentioned on your blog.

    How can professionals trust your blog?

  38. Curt Monash on June 25th, 2009 5:58 am


    I was criticizing ParAccel’s TPC-Hs back when they were still a client of mine, so will you just PLEASE STOP with your insinuations that my views on the idiotic TPC-H benchmark have anything to do with my relationship with ParAccel.


    Thank you.

  39. Curt Monash on June 25th, 2009 6:00 am


    Your last answer is a silly semantic game. You’re asking why other software vendors aren’t “able” to match ParAccel’s results. I’m suggesting that they aren’t trying. Your premise is wrong. And your pretense of thinking I wasn’t responding directly to your point about software vendors does you no credit.

  40. Curt Monash on June 25th, 2009 6:21 am


    You wrote “The point of the TPC-H publication was to demonstrate the exceptional scalability of the ParAccel product.”

    I doubt that.

    The point of a TPC-H publication is to exhibit a supposedly authoritative number that is superior to other vendors’ supposedly authoritative numbers. Period. What the number means in real life is secondary.

    But you’re right to say that the main issue is number of nodes, or more precisely amount and power of silicon. And there certainly are some cases where a whole lot of silicon gets thrown at a relatively small amount of data — a typical Kognitio installation, a typical Exasol installation, a few Netezza installations, and so on, unless I’m mistaken.

    But there’s still something VERY contrived about a benchmark in which all the data is on the very outer, thinnest edge of the disk, or wherever it was put. And $60K of hardware (before a modest quantity discount) for each TB of data is pretty steep these days. Nobody I can think of is paying $160K per terabyte (hardware + software) before quantity discount for a system these days, unless maybe it has the kind of advanced functionality that Teradata and Oracle might offer, but Netezza, Vertica, Greenplum et al. — let alone ParAccel — do not.

  41. Hriyadesh Sharma on June 25th, 2009 10:56 am

    Hi Curt,

    If you doubt the TPC-H so much, why don’t you call the council, try to make your point, and have it shut down? :)
    Or better, try to run the TPC-H on your smartphone and see if it works, because you believe it’s so easy and straightforward, and maybe smartphone memory can hold all 30 TB. :)

  42. Neil Raden on June 25th, 2009 4:41 pm

    I think it’s interesting that Curt called out a few points in Richard’s original comment that Richard has had to retract. 1) 1 order of magnitude, not 100. 2) His affiliation with Sun. No matter how he (Richard) tries to spin it, that was just an egregious breach of etiquette. Funny that Colin White and others have a problem with Curt’s tone, an innocuous situation, but completely overlook Richard’s hidden identity which, to me, is on a level with plagiarism (no, I did not just accuse him of plagiarism, just drew a parallel).

    I don’t know what Curt’s motivation really was for going after ParAccel’s TPC-H results, and I don’t really care. I remain fond of both parties. But this whole affair has served to air out a knotty topic in a way that it hasn’t been before, so we all benefit from it.

    Let’s not forget that polite diplomacy led to Hitler’s invasion of Europe. There is a time and place for it, but sometimes you just have to call a spade a spade. And then bear the consequences.


  43. Colin White on June 25th, 2009 4:47 pm

    You missed my point. I wasn’t objecting to Curt’s tone, but to his attack on Kim Stanick. It was unnecessary and it didn’t add any value. The fact that Richard works for Sun is irrelevant. TPC benchmarks have quite strict auditing requirements.

  44. Curt Monash on June 25th, 2009 5:07 pm


    I was answering a question about my perceived bias.

    I also, honestly, was motivated by the public personal attacks she’d started on me.

  45. Colin White on June 25th, 2009 6:23 pm

    My comments equally apply to vendors. They should also act professionally and not react at a personal level to reviews they don’t like. I guess we live in an imperfect world! The interaction, however, is still valuable. It gets the issues on the table.

  46. Curt Monash on June 25th, 2009 6:55 pm

    Right, Colin.

    But when a vendor DOES engage in major attempts at intimidation, I’m inclined to call them out. For the sake of users, analysts, and everybody else in the community, I want to help ensure that attempts at intimidating forthright analysis backfire badly.

    If they just whine privately, it’s one thing. But when the attempted intimidation is more explicit, it’s a different can of worms.

    It’s pretty rare that a vendor makes explicit threats against me in connection with what I write. I can stand the heat that stems from calling them out each time it happens.

    I’m sorry that some of my compatriots seem to think it’s an analyst’s obligation to conceal the threats made against them. I really don’t understand why they feel that way, and I think they’re doing the industry a disservice.

  47. Curt Monash on June 25th, 2009 7:27 pm


    If you think it’s irrelevant that Richard concealed his Sun affiliation — his “full disclosure” was anything but — while expressing opinions about Sun hardware and the significance of Solaris-related technology tweaks …

    … well, let’s just say I disagree. A lot.

  48. Curt Monash on June 25th, 2009 7:29 pm


    You’ve said one should never “attack” a vendor. You’ve said it’s OK for a vendor employee to conceal his affiliation when commenting on the vendor’s products.

    Do you have areas in mind where you do NOT think vendors should be sacrosanct from criticism or requirements of honest behavior?

  49. Colin White on June 25th, 2009 7:58 pm

    It depends on what you mean by attack. It’s fair game to critique vendors and their products. I feel this can be done without being mean spirited or making personal attacks.

    If a vendor is attacking you personally, then I think it’s best to address this one-on-one with the vendor and not conduct the battle in a public forum. I think this is more professional. Conducting the conversation in a public forum makes you no better than they are. I also think it adds no value to the reader. It just makes you feel better by venting steam. As someone pointed out to me – the way to get rid of trolls is to stop feeding them! BTW, I am not accusing anyone of being a troll!

    I agree with you that Richard should have disclosed his Sun affiliation. I always believe in full disclosure. He was pretty up front about his involvement with the ParAccel benchmark, and I must admit I was focused on the TPC results rather than his comments about Sun hardware. I think his comments on TPC were quite valid given his prior involvement in TPC benchmarks. For some reason I got it into my head that he was the auditor. This was my error.

    However, I think the criticism about full disclosure should apply equally to analysts. How many analysts disclose who they consult for?

    The bottom line is that all my comments apply equally to both analysts and vendors. Hope this qualifies some of my comments.


  50. Curt Monash on June 25th, 2009 8:14 pm


    I disclose which vendors I consult for, fairly completely, although I don’t publish a precise win/loss status at all times. How about you?

  51. Todd Fin on June 25th, 2009 11:26 pm

    TPC-H benchmarks are important to customers who care about performance and tuning. Do you know that the easiest way to find out how to optimize a database is to read the TPC results? The answers are all there. No need to hire expensive ACEs to do the same.

  52. Frank Reight on June 25th, 2009 11:43 pm

    Todd, quick tell Curt who you work for before you get personally attacked.

    Frank Reight

  53. Todd Fin on June 25th, 2009 11:51 pm

    Frank, don’t have to be so sensitive.

  54. Curt Monash on June 26th, 2009 12:08 am


    If one comments on one’s employer’s products, it’s appropriate to disclose who one works for.

    Beyond that, I do hold people to a higher standard if they themselves attack my integrity or credibility — as Richard very explicitly did.

  55. Angelika K. on June 26th, 2009 2:31 am


    As an occasional reader, I would like to offer some constructive criticism: I believe this post has (at least) two major flaws.

    First, it is laced with emotion right from the title. Probably not a good thing when you seem to be expressing your position…a position that you want others to respect. Drama rarely adds value in technical assessments.

    Second, it has numerous technical assumptions that demonstrate the level of your (mis)understanding. Databases are technical. Database benchmarks even more so. Recognize you are an analyst, not an engineer. Quick example: You latched onto the fact that the configuration contained 961,124.9 GB of raw space, yet you failed to read in the FDR (page 22) that only 50 GB of the 500 GB per drive was allocated for the tablespace area for the database. I would say that 50 TB of mirrored space for 30 TB of raw data is not absurdly unreasonable. Let’s also not overlook the fact that if one has to generate 30 TB of flat-file data to load into a database, it needs to go somewhere as well. My guess is that this benchmark could have used 72 GB drives and housed both the flat-file data and the database, had that been an option. As Richard Gostanian stated, 500 GB was the smallest available, so significant space/capacity was left unused. Really not that uncommon with today’s large-capacity drives in high-performance VLDBs. Capacity means very little compared to drive spindle count.

    Curt, are you familiar with auto or motorcycle racing? Any idea how much a Ducati track bike or a Formula One car costs? Any idea how much of the engineering that goes into that race bike/car ends up back in the consumer model/products making them better? TPC benchmarks are very similar; they may use costly configurations and almost always push the performance envelope further than any consumer would.

    Perhaps you should ask your list of clients who uses TPC-H internally to test and evaluate performance on very large configurations. My guess is you will be very surprised how many do so. Clearly it has value.

    I hope you have learned from your mistakes and have better analysis going forward.


  56. Curt Monash on June 26th, 2009 3:09 am


    Thank you for your comments. But I’m going to stick by my guns on this one.

    In my opinion, if there were no such thing as a TPC-H, every small vendor that has submitted a TPC-H would be better off today. That said, in fairness, I think TPC-fixation is more of a symptom than a cause of troubles, and I further note that Kickfire has had a change of leadership since its TPC-fixated days. Still, the general point remains.

    Users are poorly served by the TPC-H as well.

    Industry-controlled regulation has failed in a lot of places, as recent news teaches us, and I think it has failed in the TPCs as well.

  57. Jos van Dongen on June 26th, 2009 11:22 am


    After all is said and done, there’s one thing I’m still missing from the equation, since apparently no one seems to care: power consumption. Using 43 boxes with a typical consumption of 1,200 watts each (according to Sun specs) makes 51,600 watts. Not a very ‘green’ way of computing, imho.

  58. Curt Monash on June 26th, 2009 11:38 am


    Just more of the same. The TPC-H has little to do with real-world computing. TPC-H fans say that doesn’t matter, and make analogies to car/motorcycle racing.


    Where those analogies break down, by the way, is that car/motorcycle racing is generally HARDER than real life highway/city driving. But TPC computing, with (for example) low concurrency, may be easier than the real thing.

  59. Jerome Pineau on June 26th, 2009 12:35 pm

    “Curt, are you familiar with auto or motorcycle racing? Any idea how much a Ducati track bike or a Formula One car costs? Any idea how much of the engineering that goes into that race bike/car ends up back in the consumer model/products making them better?”

    YES! It’s described in:

    I know because Richard pointed me to it on Merv’s blog :)

  60. Jos van Dongen on June 26th, 2009 12:52 pm


    Been doing some more digging into other recent results, which started me wondering what specifically triggered you to start this discussion over this particular submission. On June 3, Oracle/HP published 1 TB TPC-H results on a configuration that looks quite ludicrous compared to the Sun/ParAccel combo. For starters, it costs more than twice as much while running only 1/30 of the data volume. Second, it consumes even MORE power (57,600 watts, if I calculated correctly). And third, the amount of RAM is 2 TB, double the size of the data volume. I cannot even begin to imagine what configuration they would need to run a 30 TB TPC-H, so compared to Oracle/HP’s 1 TB configuration, Sun/ParAccel’s 30 TB config looks pretty modest to me. Which leads me to ask why you attack ParAccel/Sun now and didn’t start this discussion 3 weeks ago when Oracle/HP submitted the 1 TB results (which would have been more than justified). Could it be that you’re perhaps a little bit biased and prejudiced here? Just a teeny little bit?

  61. Curt Monash on June 26th, 2009 12:53 pm


    It’s a lousy analogy, and doesn’t get better just because a bunch of guys wish it made sense, because they think fast cars and motorcycles are cool. :)

  62. Curt Monash on June 26th, 2009 1:28 pm


    I’m having great trouble answering that while keeping my temper. The idea that I’m obligated to write on every fracking piece of “news” in the industry, real and imaginary, including the ones I was never aware of in the first place, is bizarre at best.

    I responded directly to Merv’s post, which I regarded as misguided. Please point me to a similar post that I should have seen (but didn’t) about Oracle’s benchmark.

  63. Jos van Dongen on June 26th, 2009 3:20 pm


    You did a lot more than just respond to Merv’s post: you went to the TPC site, read the report, and then started your comment without checking on the same TPC site whether it was really such an outrageous configuration compared to the other (recent) submissions. In fact, that’s what I did after my initial comment on power consumption. I simply don’t believe that you didn’t see (or know about) the Oracle 1 TB results (they probably sent them to you under NDA prior to the publication), but that’s just guessing.

    Well, time for a beer now. Have one yourself and enjoy your weekend!

  64. Curt Monash on June 26th, 2009 3:42 pm


    That’s a stupid and offensive accusation.

  65. Curt Monash on June 26th, 2009 3:43 pm


    As for the part about other configurations being stupid too, OF COURSE THEY ARE!!!! That’s what I’ve been saying!!!!!!!!!!!!!!!!

    I did not say ParAccel’s TPC was specifically misleading. I said TPCs overall are bad.

  66. Jos van Dongen on June 26th, 2009 6:13 pm


    This might surprise you, but I largely agree with you on the point of the stupid configurations. I don’t say the TPC-H is bad overall (the benchmarks have their place, and are kind of a hobby for me — see my blog), but it has become more of a beauty contest that companies who can throw a lot of money at it ‘win’, rather than a realistic and objective means of comparing products. And a truly objective comparison, of course, will never happen.

    By the way, I wasn’t accusing you of anything. As I said: just guessing. Obviously I guessed wrong ;-)

  67. Curt Monash on June 26th, 2009 6:44 pm


    I ignore TPC-Hs as hard as I can. I’m only aware of them when somebody puts marketing emphasis on them. That emphasis is ALWAYS too much in my opinion.

    If you’d asked me to guess, I would have guessed that Oracle, IBM, Microsoft, and Sybase have submitted lots of TPC-Hs. But I wouldn’t recall a single detail without going back and looking for them.

  68. Danny on June 27th, 2009 2:51 am

    You guys need to calm down. This discussion is starting to look/feel more like a tabloid than a place of serious discussion and thought. I’ve purchased plenty of products and drawn many conclusions based on their ability to solve MY particular problem. Gee, I don’t know, nor do I care, if it’s IBM, Oracle, MS, etc. If the price is right and it solves MY problem, then I’m happy. I feel like a guy looking at a bunch of TVs while the sales guys argue over which is better. All this bickering is pretty much noise at this point.

  69. Curt Monash on June 27th, 2009 3:30 am


    Right on!

    I think technology is cool when it solves awesome real-world problems. I’m not nearly as impressed, however, with artificial speeds-and-feeds exhibitions.

  70. Daniel Weinreb on June 29th, 2009 12:36 am

    I’ve been involved in many benchmark wars, especially the “Gabriel benchmarks” for Lisp and the OO7 benchmark for object-oriented DBMS’s. Benchmarking is fraught with peril in many, many ways. However, in my experience, the ultimate problem is the consumer of the information. People want simple answers. They want to be told a single number, N, and to be told “product A is N times as fast as product B”, with the idea that they can then know how fast their own software would run, were it ported from A to B.

    You can explain over and over, until you are blue in the face, why different benchmarks will produce different values of N, and a dozen other complications that make it impossible for such a magic N to exist. They’ll listen to all that, and then say, OK, but tell me what N is. It’s amazingly frustrating. It’s almost enough to make you feel sorry for the vendors of whom these benchmarks are demanded.

  71. Curt Monash on June 29th, 2009 12:53 am


    The Gabriel Benchmarks were the first reason I developed disdain for benchmarks. LISP machine vendors openly put in accelerators for the functions covered in the benchmarks, so that they’d get good scores — but you know much more about that than I do, of course, and may even have been the person who first told me that …

    So far as I can tell, TPCs are not badly subject to that particular criticism. But they seem to have all the other traditional benchmark flaws.

  72. Todd Fin on June 29th, 2009 3:03 am

    Just heard the rumor: ParAccel has closed $22 million of Series C funding.

  73. anonymous on June 29th, 2009 3:07 am

    chill down, folks! ;-)

  74. Curt Monash on June 29th, 2009 3:21 am

    Congrats to them if true. There will surely be a press release soon if it is.

  75. John Galloway on June 29th, 2009 11:38 am

    Angelika K. writes: “Curt, are you familiar with auto or motorcycle racing? Any idea how much a Ducati track bike or a Formula One car costs? Any idea how much of the engineering that goes into that race bike/car ends up back in the consumer model/products making them better?”

    Well, I’m not sure about Curt, but I have a rough idea, and this analogy, while delightful from a tech-head perspective, is terribly flawed. First, AMA Superbike racing is wildly different from F1, precisely because the bikes start life right on the showroom floor. The frame, engine block, transmission, and various other bits remain the same, so in the end the bikes on the track are maybe 50-70% stock and thus not insanely expensive. A stock bike achieves perhaps 80% of the performance of a race bike (which is why so many are wrecked in their first months of ownership). It’s perhaps 100x less expensive (uh, that would be 2, no wait, 3.2 orders … no no, 2 was it, yeah, 2 orders of magnitude :-) less). Superbike tech does get back to the showroom very fast, often within just a year or two, because of this commonality, but that’s also why the vastly-expensive side of the analogy does not hold.

    F1, on the other hand, is where expenditures are through the roof (Toyota alone is believed to have spent more than $300 million on its 2008 F1 effort). But alas, this side of the analogy also fails, as F1 technology does not trickle down to consumer cars; the two just have nothing in common. In order to drive an F1 car your spine is practically bolted into the car, as otherwise you cannot tolerate the 4-5g cornering and braking loads. Even the manufacturers will admit F1 is not the proving ground for consumer technology. The article offered by a later poster (http://www.racerchicks.com/motor/formulaonetechnology.html) points out all the F1 tech that was in fact NOT invented for F1 (yet then concludes that the trickle-down does occur; perhaps this was supposed to be sarcastic, or maybe I haven’t had enough coffee yet this morning).

    So the motorcycle tech does trickle down (I’d even up that to a serious dribble), but the race bikes are very similar to the consumer models, not exotic, wildly expensive machines. The F1 cars do have lunatic price tags, but that tech doesn’t trickle down, as they are so far from consumer autos that they have nothing in common besides having 4 wheels (in 2009 F1 added hybrid tech (KERS), about a decade after its consumer introduction).

    The analogy that holds is that bike/car manufacturers field race teams for the advertising and bragging rights, the same reason hw/sw vendors publish TPCs.

    By way of full disclosure I do not work for any motorsports manufacturer!

  76. Curt Monash on June 29th, 2009 12:04 pm


    Great post. Thanks!


  77. Jerome Pineau on June 29th, 2009 12:43 pm

    I guess the best way to resolve it would be to look at the past and ask if, indeed, any previous TPC-H metrics ever happened to “trickle” into the customers’ hands. If that’s the case, then the analogy is valid.

  78. John Treadway on June 29th, 2009 1:05 pm

    Dan – I remember the OO7 benchmarks well. They really were an amazing time sink for little value. I remember that at least one of our competitors turned off locking and a few other important core features to run faster.

    But your point is 100% right on. Vendors would be happy to never perform another TPC benchmark. But failing to put up reasonable TPC numbers when your competitors are publishing theirs makes it hard to get business. Taking the stand that “benchmarks are bad so we don’t do them” is not what IT buyers want to hear. So, we all beg hardware from our friends and spend otherwise valuable time performing unnatural acts on otherwise quite reasonable software.

  79. Is TPC-H really a blight upon the industry? « Hype Cycles on June 29th, 2009 8:50 pm

    [...] TPC-H really a blight upon the industry? By ak On June 22, Curt Monash posted an interesting entry on his blog about TPC-H in the wake of an announcement by ParAccel. On the same day, Merv Adrian posted another [...]

  80. Amrith on June 29th, 2009 9:06 pm

    Merv, Curt,

    Great discussion. I have a slightly different take on the issue and I’ve posted it at



  81. More on TPC-H comparisons « Hype Cycles on July 1st, 2009 10:43 am

    [...] the numbers that Curt and I have been referring to in our posts. Curt’s original post was, my post [...]

  82. Amrith on July 1st, 2009 10:45 am


    I’ve posted a graphical comparison of the disk to data, memory to data and load time results from TPC-H at http://hypecycles.wordpress.com/2009/07/01/tpch-chart-comparisons/

    Your thoughts please.


  83. Notes on columnar/TPC-H compression | DBMS2 -- DataBase Management System Services on July 2nd, 2009 2:54 pm

    [...] was chatting with Omer Trajman of Vertica, and he said that a 70% compression figure for ParAccel’s recent TPC-H filing sounded about right.*  When I noted that seemed kind of low, Omer pointed out that TPC-H data is [...]

  84. The TPC-H schema | DBMS2 -- DataBase Management System Services on July 2nd, 2009 2:59 pm

    [...] anybody recommend in real life running the TPC-H schema for that data? (I.e., fully normalized, no materialized views.) If so — why???? [...]

  85. Historical significance of TPC benchmarks | Software Memories on July 2nd, 2009 4:07 pm

    [...] case you missed it, I’ve had a couple of recent conversations about the TPC-H benchmark.  Some people suggest that, while almost untethered from real-world computing, TPC-Hs [...]

  86. Daniel Abadi has a theory about ParAccel | DBMS2 -- DataBase Management System Services on July 7th, 2009 6:46 pm

    [...] most of the mentions were by competitors and/or Vertica-affiliated academics, and since my own unflattering ParAccel-related comments were rather fresh at the [...]

  87. ParAccel and their puzzling TPC-H results | Bookmarks on July 25th, 2009 2:46 am

    [...] it. First, there was Merv Adrian’s positive post on the subject, and then there was Curt Monash’s negative post. Monash’s negativity stemmed largely from the seemingly unrealistic configuration of having nearly [...]

  88. VectorWise, Ingres, and MonetDB | DBMS2 -- DataBase Management System Services on August 4th, 2009 6:14 am

    [...] achieve speed. That said, VectorWise claims 3-4X compression on TPC-H data, which is no worse than what ParAccel reported, and enjoys higher compression rates on other kinds of [...]

  89. Tom Williams on June 10th, 2010 5:50 pm

    Does anyone know what file system ParAccel uses? Is it b-tree? They are claiming linear scalability. Are they stating that only for the case when there is enough memory to run hash joins in RAM? What happens with high concurrency and not enough memory for hash joins? Anyone know?

  90. Links and observations | DBMS2 -- DataBase Management System Services on August 9th, 2010 10:40 pm

    [...] replaced their CEO, replaced their marketing chief, and stopped the worst of the marketing nonsense I used to complain about. ParAccel has some interesting plans for ParAccel 3.0 which are, [...]

  91. Exadata TPC Benchmarks | The Pythian Blog on August 12th, 2010 9:37 pm

    [...] fastest 1,000GB TPC-H result at the time, but has since been overtaken by a ParAccel result that makes a mockery of the benchmark according to Curt Monash by running almost entirely in [...]

  92. Is TPC-H really a blight upon the industry? | Pizza And Code on October 7th, 2011 4:27 am

    [...] (typeof(addthis_share) == "undefined"){ addthis_share = [];}On June 22, Curt Monash posted an interesting entry on his blog about TPC-H in the wake of an announcement by ParAccel. On the same day, Merv Adrian posted another [...]

  93. More on TPC-H comparisons | Pizza And Code on October 7th, 2011 4:27 am

    [...] the numbers that Curt and I have been referring to in our posts. Curt’s original post was, my post [...]

  94. YCSB benchmark notes | DBMS 2 : DataBase Management System Services on January 17th, 2013 11:01 pm

    [...] once again, I stand by my position that benchmark marketing is an annoying waste of everybody’s time. Categories: Aerospike, [...]

  95. Some notes on new-era data management, March 31, 2013 | DBMS 2 : DataBase Management System Services on April 2nd, 2013 3:41 am

    [...] particular, benchmarks such as the YCSB or TPC-H aren’t very [...]

