August 20, 2008

Kevin Closson doesn’t like MPP

Kevin Closson of Oracle offers a long criticism of the popularity of MPP. Key takeaways include:

Comments

20 Responses to “Kevin Closson doesn’t like MPP”

  1. Daniel Abadi on August 20th, 2008 12:56 pm

    Hi Curt,

    I’m definitely in the MPP camp, but in Kevin’s defense, that article was written more than a year ago (so the TPC-H benchmarks where MPP wins big didn’t exist yet).

    I wonder if Kevin still stands by his opinion today.

  2. Aniruddha Mitra on August 20th, 2008 12:59 pm

    Well, anybody who administers an MPP dislikes it for its administrative overhead. However, my question is why he is not talking about the cost advantage you get by using MPP. Oracle gives up 2-5% per node added, much more than MPPs do, and SMPs have a 4:1 cost disadvantage compared to MPPs.

    I guess that Oracle is looking for an MPP vendor to buy. I do not know when that will happen; otherwise, they will have to say goodbye to >5 TB DWs, which are everywhere.

  3. Jonathan Moore on August 20th, 2008 1:01 pm

    So one of the main points here seems to be that data shipping is bad, but I don’t understand how shipping data across a SAN is worse than shipping data between hosts when both have the same interconnect. It may even be that shipping data between hosts is better, since the host can prune the data down before shipping it.

  4. Steve Wooledge on August 21st, 2008 1:18 pm

    Re: Aniruddha’s comment on administering an MPP database — there are some innovations around recovery-oriented computing making their way from research into commercial practice. The point is to reduce mean time to recovery (MTTR) in large distributed systems rather than worrying about mean time to failure (MTTF). More here: http://www.asterdata.com/blog/index.php/category/manageability/

  5. Serge Rielau on August 21st, 2008 2:09 pm

    Hmm, isn’t Oracle the company that tells us to replace big, expensive SMP boxes with cheap pizza boxes using RAC? So perhaps a single big box with 1,000 CPUs may scale better than 250 4-way boxes. But how do I grow it? Where do I find that 1,000-way SMP box, and at what price?

    Cheers
    Serge

  6. Curt Monash on August 21st, 2008 2:21 pm

    Daniel,

    Thanks so much for the catch. I thought I’d checked the date on his post and it WAS current, but I evidently screwed up.

    I’m traveling, so I’m glad somebody else was on the ball for me.

    Best,

    CAM

  7. Greg Rahn on August 21st, 2008 5:35 pm

    @Aniruddha Mitra

    Where do you get your magic number of >5TB from? Is this something you personally experienced or something you read on the Internet?

    Bottom line: there is no such limitation of Oracle for data warehousing. If you want to make such a claim, please be sure to support it with technical facts explaining what limitation you think you are experiencing. That way there is the possibility of something meaningful, as opposed to yet another unfounded, factless claim.

    @Serge

    You are correct. Oracle has been promoting Oracle RAC on Linux using commodity hardware for a number of years. I’m not quite sure why the SMP argument keeps coming up, as there is another option.

  8. Curt Monash on August 21st, 2008 8:26 pm

    Greg,

    Whenever I find a case of an Oracle data warehouse with >5 TB of user data (perhaps that threshold is increasing toward 10 now), there’s always a story of huge amounts of painful work to get it running and meeting user needs.

    The same generalization is not true of the new data warehouse startups.

    CAM

  9. Greg Rahn on August 22nd, 2008 5:36 am

    @Curt

    I’m quite certain that your generalizations about Oracle data warehouses are related to the management of the database and not any limitations of the Oracle database software. If you have some technical evidence and details to support otherwise, do share.

    When a data warehouse gets to 5-10 TB and one manages it like an OLTP database, it will certainly be painful. Then again, applying the same OLTP management principles to any other data warehouse database software would result in the same pain. Also notable: the same principles that make an MPP DW perform well make an Oracle DW perform well, specifically an appropriate choice of partitioning key and type, leveraging parallelism, and using compression.

    The other common problem that hinders many Oracle data warehouses is inappropriate storage bandwidth provisioning. If the storage is sized appropriately for the workload, it makes a world of difference. Again, if one manages Oracle data warehouse storage as if it were an OLTP database, it will certainly be painful. If one follows the MPP model of adding storage, I/O bandwidth, and CPU in building-block increments, it again performs well.

    The issue I see is that many Oracle shops brought Oracle in years ago as an OLTP database, gained experience with it in that form, and then one day built a data warehouse. That warehouse was just another Oracle database, so the same DBAs managed it the same way they managed all their other databases. It started small, then grew, and today it is slow, not because of any software limitations, but because the fundamental principles that make DW database software work well were not applied, or were applied inappropriately.
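    To make that concrete, here is a minimal sketch of the kind of DDL I mean (Oracle-style syntax; the table, column, and staging names are invented for illustration):

        -- Hypothetical fact table: range-partitioned by month, compressed,
        -- and declared with a parallel degree so scans and loads fan out.
        CREATE TABLE sales_fact (
          sale_date   DATE         NOT NULL,
          customer_id NUMBER       NOT NULL,
          amount      NUMBER(12,2)
        )
        COMPRESS
        PARTITION BY RANGE (sale_date) (
          PARTITION p2008_08 VALUES LESS THAN (DATE '2008-09-01'),
          PARTITION p2008_09 VALUES LESS THAN (DATE '2008-10-01')
        )
        PARALLEL 16;

        -- Bulk loads should go direct-path, bypassing the buffer cache and
        -- writing compressed blocks straight into the partitions:
        ALTER SESSION ENABLE PARALLEL DML;
        INSERT /*+ APPEND PARALLEL(sales_fact, 16) */ INTO sales_fact
        SELECT sale_date, customer_id, amount FROM staging_sales;

    None of that is exotic; it is the same partition-align-and-parallelize thinking an MPP DW forces on you, just spelled differently.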

    There are also shops out there that thought an MPP DW database (say, Teradata) would give them perfect scalability, as the marketing literature says, and that they could just add more hardware as the database grew and keep the same performance. The only problem is that the ratio of hardware to performance keeps growing, and it gets more and more costly to keep the same performance as the DW grows. How could this be? Well, if good partition keys are not chosen, much of the data ends up being shipped between nodes; I’ve seen MPP DWs come to a near halt because of this. Again, if the fundamental principles of data warehousing are not applied, performance will suffer. Period. BTW, this also applies to the new data warehouse startups.
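    To illustrate the distribution-key point (a simplified sketch using Greenplum-style DISTRIBUTED BY syntax; Teradata expresses the same idea with a PRIMARY INDEX, and all names here are invented):

        -- Rows are hash-distributed across the nodes on customer_id.
        CREATE TABLE customers (
          customer_id BIGINT,
          region      TEXT
        ) DISTRIBUTED BY (customer_id);

        CREATE TABLE orders (
          order_id    BIGINT,
          customer_id BIGINT,
          store_id    INT
        ) DISTRIBUTED BY (customer_id);

        CREATE TABLE stores (
          store_id INT,
          city     TEXT
        ) DISTRIBUTED BY (store_id);

        -- Co-located join: matching rows live on the same node, so each node
        -- joins only its own slice and nothing crosses the interconnect.
        SELECT COUNT(*)
        FROM orders o JOIN customers c ON o.customer_id = c.customer_id;

        -- Non-co-located join: orders must first be re-hashed on store_id and
        -- shipped between nodes. Pick distribution keys badly and this
        -- redistribution happens on every big query.
        SELECT COUNT(*)
        FROM orders o JOIN stores s ON o.store_id = s.store_id;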

  10. Curt Monash on August 22nd, 2008 10:38 am

    Greg,

    Re: “If you have some technical evidence and details to support otherwise, do share.”

    In writing that, you’re dismissing a large fraction of the posts in this blog as if they didn’t exist at all, from customer examples to hardcore technical arguments.

    I really don’t have a guess as to what else I could say that you would regard as suitable.

    CAM

  11. Jeff Moss on August 22nd, 2008 5:19 pm

    Curt

    It’s not really fair to say to someone that they “are dismissing a large fraction of the posts…” – I understand your point, but you’re not really expecting everyone who reads each post you’ve written to then go and read a large proportion of your other posts to get all your supporting evidence, are you?

    We debated this “Oracle doesn’t work with 5 TB+” issue briefly a few months back, and I don’t think we came to any significant conclusions then; it’s one of those debates that rumbles on, I guess, with many differing viewpoints. However, I certainly didn’t leave that debate thinking I’d need to recommend to my client that they plan for a move to MPP.

    As I said in the comments on that post we discussed a while back, I’ve no knowledge of MPP systems per se, but I just don’t understand how they offer more than a properly architected RAC or SMP solution. And yes, Greg is absolutely right that architecting the thing properly is the key: it’s the design principles that matter most here, not so much the hardware architecture, or the software vendor for that matter. The individuals involved in the warehouse I work on are used to thinking about things in a warehouse way rather than an OLTP way; perhaps that’s why it works for us.

    If you fail to use appropriate database features such as partitioning, then you should probably prepare to fail. I mentioned before that Tim Gorman wrote an excellent paper on this.

    As Greg says, compression and parallelism are also very effective features for improving the chances of success. Bitmap indexes and materialized views can aid performance too.
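    For example (simplified Oracle-style DDL; the object names are made up):

        -- Bitmap index on a low-cardinality dimension column; the optimizer
        -- can AND/OR several of these together for multi-predicate filters.
        CREATE BITMAP INDEX sales_region_bx ON sales (region_id);

        -- Pre-aggregated summary that the optimizer can transparently
        -- rewrite matching queries against, instead of scanning the detail.
        CREATE MATERIALIZED VIEW sales_by_region_mv
          ENABLE QUERY REWRITE
        AS
          SELECT region_id, SUM(amount) AS total_amount
          FROM sales
          GROUP BY region_id;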

    The data warehouse I deal with is currently 4.7 TB of real data (not indexes, temp, or staging); I’m quoting it properly this time 😉 It’s likely to exceed 5 TB very shortly, and we’re still not planning to move off Oracle any time soon.

    Cheers
    Jeff

  12. Kevin Closson on August 22nd, 2008 6:24 pm

    I still think you guys are missing the entire point (devil’s advocate here). Curt found an old post where I was discussing shared-nothing versus shared-disk database architecture and titled it “Kevin Closson doesn’t like MPP.” MPP is a hardware architecture, and Oracle has (and never has) had a problem working with MPP hardware (ever heard of the Pyramid R1000, nCube, or IBM SP2?).

    I am a huge fan of MPP…especially with shared disk architecture. The two are not mutually exclusive.

    I’ve been very short on blogging time lately. This should have turned into a post or two over at my blog.

  13. Kevin Closson on August 22nd, 2008 6:25 pm

    Oops, I meant: …Oracle has no problem (and never has had one) working with MPP…

  14. Curt Monash on August 24th, 2008 10:06 pm

    Kevin,

    I apologize again for the error re your posting date. Thanks for taking it so calmly.

    As for whether MPP equates to shared-nothing: in theory, of course, it doesn’t. But in practice, the benefit of MPP is going to be lost unless you have a whole lot of channels communicating data from disks to processors at once. And so far, that equates to shared-nothing.

    The problem is even more acute when you’re talking about RAM caches, where the speed of light comes seriously into play. To quote an old T-shirt: 3 x 10^10 cm/sec isn’t just a good idea. It’s the law.
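    To put rough numbers on that: light covers only about 30 cm per nanosecond, so a cache-coherency round trip between processors even a couple of meters apart costs over 10 nanoseconds in propagation alone, which is dozens of CPU cycles at current clock speeds, before any switching or protocol overhead is counted.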

    Best regards,

    CAM

  15. Greg Rahn on August 25th, 2008 1:37 pm

    @Curt

    I apologize in advance for asking, but I have not found any “hardcore technical arguments” on your blog as to why Oracle cannot run well at these scales. Can you provide a few specific links?

    I do realize this is a database analyst blog and not a database engineering blog, but some of your customer examples do not really seem well researched to me.

    Let me give you one example of such a report:

    “after extensive engineering…Oracle couldn’t get the load time for 100 million call detail records (CDRs) below 24 hours…DATAllegro and Netezza both handled it in 2-3 minutes.”

    I find it completely bizarre that you do not even seem to question why there is a 480x-720x difference in load times (over 1,440 minutes vs. 2-3 minutes). Do you think that DATAllegro/Netezza do something so unique and so revolutionary in loading data that it warrants a 480x-720x difference, without even wondering why? I don’t have a Ph.D. in mathematics, but something certainly does not add up. I, for one, think this has absolutely nothing to do with the database software. Do you agree, or is this simply a case of “this is what I was told, and it fits my position on Oracle, so I print it”?

  16. Curt Monash on August 25th, 2008 2:15 pm

    Greg,

    I agree that the TEOCO figure makes no sense on the surface. And indeed, in the interest of time, I often pass along claims without carefully vetting each one, being careful instead about how I attribute them and about the evidence (such as it is) for them. That said, the actual discussion suggested there were some extreme acts going on to massage the data to make it rapidly queryable later.

    CAM

  17. Kevin Closson on August 25th, 2008 5:47 pm

    “But in practice the benefit of MPP is going to be lost unless you have a whole lot of channels communicating data from disks to processors at once. And so far, that equates to shared-nothing.”

    Curt,

    I couldn’t agree with you more about balancing the pipes from I/O to CPU. And if you read as much of my blog as I do yours, you’d know my position on trying to feed a huge, hungry compute tier (be it a huge monolithic SMP, 50 RAC nodes, or whatever) with tiny little I/O pipes.

    So, Curt, when it comes down to it, I suspect you are more biased toward BALANCED system configurations than toward shared-nothing or MPP… and it is for that reason that the fangs remain concealed.

  18. Kevin Closson on August 25th, 2008 6:15 pm

    “That said, the actual discussion suggested there were some extreme acts going on to massage the data to make it rapidly queryable later.”

    Curt,

    That isn’t going to cut it. You can tell us old fuddy-duddy Oracle guys get peeved when a claim is made that Oracle took longer than 24 hours while DATAllegro and Netezza both took 2-3 minutes, and you come back with some vague talk about how they massaged the data for queries. Let’s stick to the point.

    The claim is that Oracle couldn’t ingest 100 million call detail records (CDRs) in 24 hours. Even if they tested Oracle on a laptop and were limited to an ingest rate of 1 MB/sec, that would still be 84 GB worth of load bandwidth over 24 hours. That budget, by the way, works out to about 900 bytes for each of 100 million CDRs, which would be a HUGE CDR.
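    Spelled out: 1 MB/sec x 86,400 seconds in a day is 86,400 MB, call it 84 GB of daily load budget. Divide that 84 GB by 100 million records and you get roughly 900 bytes per CDR. So the 24-hour claim implies Oracle was ingesting at roughly laptop-disk speed, which says nothing about the database software and everything about the configuration.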

    Please, for your readers’ sake, let’s see a little more intellectual curiosity applied to these claims. If you think Oracle on any platform is limited to a bulk data load rate of less than 1 MB/s, then by all means come out and say it, and then tell us what that has to do with an Oracle-to-Netezza/DATAllegro comparison.

    If you can get something done in 1/700th the time, you’re either not doing the same thing (and I mean an apples-to-mucous comparison) or using 700-fold the resources. It’s as simple as that.

    When it comes to getting records into a database, elegance can only buy you so much, and much, much less than 700-fold. If not elegance, then brute force, which can deliver a 700-fold performance increase; but do us a favor and cite the configuration details.

    OK, I’m back to my work performance tuning a CDR loading exercise on my Pentium II laptop running Oracle Lite on Knoppix.

  19. Kevin Closson on August 23rd, 2012 3:31 pm

    Hi Curt,

    I’ve been getting an odd amount of traffic to my site from this post over the last few days. If you’ll allow, I’d like to refer to a post where I shared a bit about Exadata “offload processing”:

    Quote: If, over time, “everything”—or even nearly “everything”—is offloaded to the Exadata Storage Servers there may be two problems. First, offloading more and more to the cells means the query-processing responsibility in the database grid is systematically reduced. What does that do to the architecture? Second, if the goal is to pursue offloading more and more, the eventual outcome gets dangerously close to “total offload processing.” But, is that really dangerous?

    So let me ask: in this hypothetical state of “total offload processing” to Exadata Storage Servers (which, by the way, do not share data), isn’t the result a shared-nothing MPP? Some time back I asked myself that very question, and the answer I came up with set in motion a series of events leading to a significant change in my professional career.

    http://kevinclosson.wordpress.com/2011/03/20/will-oracle-exadata-database-machine-eventually-support-offload-processing-for-everything/

  20. Curt Monash on August 23rd, 2012 4:19 pm

    Hi Kevin,

    Thanks for showing up! I also noticed your interview in http://kevinclosson.files.wordpress.com/2012/08/nocoug_journal_201208.pdf, and look forward to following both links more closely.
