January 28, 2009

More Oracle notes

When I went to Oracle in October, the main purpose of the visit was to discuss Exadata. And so my initial post based on the visit was focused accordingly. But there were a number of other interesting points I’ve never gotten around to writing up. Let me now remedy that, at least in part.

There were a couple of notes about text analytics as well, but I’ll blog about those separately.

Comments

9 Responses to “More Oracle notes”

  1. Sanjay on January 28th, 2009 4:32 pm

    I recently saw Exadata in action and also attended a technical session on Exadata. I can validate a couple of things.

    Compression – In 11g compression is greatly enhanced and works very differently from prior versions. For one thing, the compression takes place in the background; according to Oracle, the overall impact on a very active system is about 3%. The background process (if it runs all the time because you are actively inserting/updating data) can have a 3% impact. Writes see no impact because nothing is compressed when data is written – it is compressed in the background later. Reads see improved performance because they have to scan fewer blocks and more blocks fit in memory. Plus the compression algorithm can compress data across columns. The same algorithm is used for compressing backup files. Compression can be enabled at the tablespace level, the table level, or even the partition level.
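    To make those three levels concrete, here is a sketch. The table, tablespace, partition, and file names are made up for illustration, but the clause is the 11g Release 1 syntax for Oracle's Advanced Compression option (COMPRESS FOR ALL OPERATIONS, later renamed COMPRESS FOR OLTP in 11.2):

```sql
-- Table level: rows are compressed by the background process, not on write
CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
COMPRESS FOR ALL OPERATIONS
PARTITION BY RANGE (sale_date) (
  PARTITION p_2008 VALUES LESS THAN (DATE '2009-01-01'),
  PARTITION p_2009 VALUES LESS THAN (MAXVALUE)
);

-- Partition level: compress only the older, colder partition
ALTER TABLE sales_hist MODIFY PARTITION p_2008 COMPRESS FOR ALL OPERATIONS;

-- Tablespace level: a default inherited by segments created in it
CREATE TABLESPACE dw_data
  DATAFILE '/u01/oradata/dw_data01.dbf' SIZE 10G
  DEFAULT COMPRESS FOR ALL OPERATIONS;
```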

    Exadata – From what I have seen, Exadata has massive I/O capabilities. Each database machine (8 DB nodes + 14 Exadata storage servers) can sustain 10-14 GB of I/O per second. As a result you don’t need many indices (which Oracle traditionally required to get decent performance); if you do massive sequential I/O, the need for random I/O goes down. I saw a few queries on multi-terabyte tables come back in less than 5 minutes, which is unheard of in regular Oracle environments. The thing with Oracle is that they have excellent functionality (terrific PL/SQL engine, many SQL extensions, a truly great concurrency model – a huge weakness of Teradata – many partitioning options, excellent compression in 11g, many high availability options, etc.), and with the new I/O capabilities of Exadata they have a winner in the high-end DW area. A lot of Oracle shops that were forced to go to Teradata now have a real alternative.

  2. Curt Monash on January 28th, 2009 7:04 pm

    Sanjay,

    This was all demo, right? No production use?

    Thanks for sharing!

    CAM

  3. Timur Akhmadeev on January 29th, 2009 5:13 am

    Could you please clarify this statement:

    However, Oracle’s optimizer is sufficiently self-aware to notice when a query runs long and try to do things differently “next time.” For example, it might do more sampling if the statistics proved unreliable, or might take the time to search a bigger solution space.

    What is the particular feature you are talking about? IMO Oracle’s CBO cannot gather or regather stats on the fly unless the user requests that it do so.

  4. Curt Monash on January 29th, 2009 6:31 am

    I’m sorry, Timur, but I was taking Oracle’s word for that one. It didn’t seem like a terribly big deal, so I didn’t push for clarity as to the exact amount of DBA intervention required.

    Perhaps somebody from Oracle can comment more precisely …

    CAM

  5. Sanjay on January 29th, 2009 9:41 am

    Curt,
    Yes, this was a demo and not a production database. I did speak to a couple of customers (who either did a POC or are in the process of one) who were at the demo, and they confirmed some of the things I heard there.

    As to the question from Timur – yes, Oracle can perform dynamic sampling of statistics through the use of a parameter called OPTIMIZER_DYNAMIC_SAMPLING. Zero indicates no dynamic sampling and 10 indicates very aggressive dynamic sampling. Oracle can do dynamic sampling if stats are missing, out of date, or likely to lead to bad estimates. Of course, dynamic sampling is only useful if the time spent sampling is small compared to the query execution time.

    Through dynamic sampling the optimizer may come up with a different plan.
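    The "set it once" mechanics above can be sketched as follows. The parameter and hint syntax are standard Oracle; the table name and level values are chosen purely for illustration:

```sql
-- Instance-wide, done once: levels run from 0 (off) to 10 (most aggressive)
ALTER SYSTEM SET optimizer_dynamic_sampling = 4;

-- Session-level override, if one workload needs different behavior
ALTER SESSION SET optimizer_dynamic_sampling = 2;

-- Per-query override via hint, without touching instance settings
SELECT /*+ dynamic_sampling(t 6) */ COUNT(*)
FROM   big_fact_table t
WHERE  t.load_date > SYSDATE - 7;
```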

  6. Timur Akhmadeev on January 30th, 2009 5:41 am

    Sanjay,

    I know about dynamic sampling. I said that the “CBO can not gather or regather stats on the fly unless user requests CBO to do it.” But dynamic_sampling is an instruction to the CBO, so it is not the same thing as in the quote. That is my point: it is not self-awareness, just a user request.

  7. Sanjay on January 30th, 2009 11:02 am

    It may just be semantics, but I believe it is self-awareness. Once you tell the optimizer (ONE time, at the instance level, not per user or per query) to use sampling, the optimizer makes the decision on when to sample from that point on, until you turn it off again.

    So while I agree that you have to inform the optimizer, you only do that ONCE.

  8. Timur Akhmadeev on February 2nd, 2009 5:53 am

    Sanjay,

    I think it’s a kind of perception… In my view it’s just a configuration issue.

    And BTW, dynamic sampling (when it is set) is always applied by the CBO. It has some rules to follow (you can find them in the docs). There is no condition like “when *a query runs long* and try to do things differently ‘next time.’” So I don’t think dynamic sampling is the feature that fits this definition.

  9. Oracle Exadata article — up at last | DBMS2 -- DataBase Management System Services on February 2nd, 2009 2:15 pm

    [...] reasons, it was a lot more work than one might at first guess.  One such reason is that it spawned four related blog posts over the past few [...]
