September 28, 2008

Oracle Database Machine performance and compression

Greg Rahn was kind enough to recount on his blog what Oracle has disclosed about the first Exadata testers. I don’t track hardware model details, so I don’t know how the testers’ respective current hardware environments compare to that of the Oracle Database Machine.

Each of the customers cited below received “half” an Oracle Database Machine. As I previously noted, a full Oracle Database Machine holds either 14.0 or 46.2 terabytes of uncompressed data, so half a machine tops out at roughly 23.1 terabytes. This suggests the 220 TB customer listed below, LGR Telecommunications, got compression of a little under 10:1 for a CDR (Call Detail Record) database. By comparison, Vertica claims 8:1 compression on CDRs.

Greg also writes of POS (Point Of Sale) data being used for the demo. If you do the arithmetic on the throughput figures (13.5 vs. a little over 3), compression was a little under 4.5:1. I don’t know what other vendors claim for POS compression.
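
For the record, here is that back-of-the-envelope arithmetic as a quick Python sketch. The 23.1 TB half-machine figure is simply half the 46.2 TB capacity cited above, and treating “a little over 3” as 3.1 is purely illustrative:

    # Back-of-the-envelope compression ratios from the figures quoted above.
    # Assumptions: the "half" machine is credited with half of the 46.2 TB
    # capacity, and "a little over 3" is taken as roughly 3.1 for illustration.

    half_machine_tb = 46.2 / 2     # ~23.1 TB of uncompressed user data
    lgr_cdr_tb = 220.0             # LGR's stated CDR database size
    print(round(lgr_cdr_tb / half_machine_tb, 1))   # 9.5 -- a little under 10:1

    pos_figure_high = 13.5         # the two POS throughput figures from Greg's
    pos_figure_low = 3.1           # post, in whatever units he cites them
    print(round(pos_figure_high / pos_figure_low, 1))  # 4.4 -- a little under 4.5:1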

Here are the details Greg posted about the four most openly discussed Oracle Database Machine tests:

M-Tel

  • Currently runs on two IBM P570s with EMC CX-30 storage
  • 4.5TB of Call Data Records
  • Exadata speedup: 10x to 72x (average 28x)
  • “Every query was faster on Exadata compared to our current systems. The smallest performance improvement was 10x and the biggest one was 72x.”

LGR Telecommunications

  • Currently runs on HP Superdome and XP24000 storage
  • 220TB of Call Data Records
  • “Call Data Records queries that used to run over 30 minutes now complete in under 1 minute. That’s extreme performance.”

CME Group

  • “Oracle Exadata outperforms anything we’ve tested to date by 10 to 15 times. This product flat out screams.”

Giant Eagle

  • Currently runs on IBM P570 (13 CPUs) and EMC CLARiiON and DMX storage
  • 5TB of retail data
  • Exadata speedup: 3x to 50x (average 16x)

Comments

9 Responses to “Oracle Database Machine performance and compression”

  1. Kevin Closson on September 29th, 2008 10:46 am

    Even though most of us on the Exadata team have routinely stated that the Beta participants received “a half rack”, doing so is not precise. The Beta participants received a configuration with 4 database servers and 6 Exadata Storage Servers. A production HP Oracle Database Machine has 8 database servers and 14 Exadata Storage Servers. Likewise, the Exadata Storage Server Software was executing on Proliant DL185 hardware, which is significantly less powerful than the production DL180 G5 hardware. So, less and less powerful. Just FYI.

  2. Curt Monash on September 29th, 2008 11:08 am

    Thanks Kevin!

    So was LGR Telecom really putting 220 TB of user data on 6 Exadata Storage Servers, each of which accommodates 3.3 TB of uncompressed data? Or is there some kind of apples/oranges issue with those figures?

  3. Kevin Closson on September 29th, 2008 11:48 am

    Curt,

    Eek, first off I have to apologize. I had wires crossed between Beta1 and Beta2, specifically regarding the host hardware for the Exadata Storage Server. So, while we did only ship them 6 Exadata cells (as opposed to the 7 of a true half rack) in the Beta2 program, we did not ship them Proliant DL185 hardware, as that was the Beta1 platform. Sorry. Having said that, customers should be glad that the platform is the DL180 G5, because it is significantly more capable of handling Exadata Storage Server Software as a workload.

    While LGR was not my account, I should add that 6 SAS Exadata Cells actually have room for just under 10TB of mirrored space for user data (using the entirety of each platter). We recommend folks use the short-stroke regions of the platters (e.g., the outer 60%), but they can fill in more if they need/want to. That still wouldn’t accommodate the entirety of LGR’s 220TB production dataset, of course.

    The metrics differed between accounts, but each measurement is apples to apples. Allow me to explain. If, for instance, LGR were to transport only a partition of a table from their 220TB production side to a 6-Cell Exadata environment and run a query that accesses only that partition on both systems, then both runs do indeed access precisely the same content and amount of data.

  4. Curt Monash on September 29th, 2008 9:32 pm

    Thanks, Kevin.

    Let me try to reflect this back to you.

    A. The figures of 1 TB and 3.3 TB respectively for the 12 x 300 GB and 12 x 1 TB disk options understate the case in at least one regard. Namely, they are based on a recommendation that only the outer 60% of the disk be used, EVEN FOR MIRRORING. Thus, it is possible to increase capacity 1.6X over the quoted amounts, albeit at some unspecified performance penalty.

    B. The 220 TB figure for the LGR telecommunications database is not relevant for imputing any kind of compression metric, because the whole 220 TB of user data was never loaded onto the test system (and probably wouldn’t have fit onto it).

    Do I have it right?

    Thanks,

    CAM

  5. Kevin Closson on September 30th, 2008 6:00 pm

    (12 x 300GB) / 2 [for mirroring] == 1800GB
    1800GB * 0.6 == 1080GB

    So using the outer 60% of the drives for mirrored user space comes in at roughly 1TB with the SAS option. I think marketing is trying to keep the capacities to nice, simple numbers like 1TB when possible. The remaining space is, of course, usable for colder data, mirrored or un-mirrored.

    The math is quite similar for the SATA option.
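
    In code, the same per-cell arithmetic looks like this; only the SAS numbers are spelled out above, so the SATA line rests on the assumption of 12 x 1 TB drives per cell:

        # Mirrored, outer-60% user capacity per Exadata cell, per the math above.
        def usable_gb(drives, drive_gb, outer_fraction=0.6):
            # Halve for mirroring, then keep only the outer (short-stroke) region.
            return drives * drive_gb / 2 * outer_fraction

        print(usable_gb(12, 300))    # 1080.0 GB -- the "roughly 1TB" SAS figure
        print(usable_gb(12, 1000))   # 3600.0 GB in decimal units; binary (TiB)
                                     # accounting lands near the 3.3 TB cited earlier

        # Using the whole platter instead of the outer 60% buys back 1/0.6, i.e.
        # the roughly 1.6X capacity increase Curt mentions above.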

    LGR’s production database size was cited, but only a partition (e.g., 1 day of CDRs or some such) was loaded into the Exadata (“half”-rack) system. The queries used partition elimination on both the PROD and the Exadata side, so each query touches precisely the same data on both.

  6. Curt Monash on September 30th, 2008 6:43 pm

    Kevin,

    Why use the outer part of the disk for everything? I.e., why not use the slower part for mirroring?

    Thanks,

    CAM

  7. Kevin Closson on October 3rd, 2008 10:55 am

    Curt,

    That is actually a good question. We mirror the contents of our logical management unit (a disk group) and do not mirror between disk groups. If we supported a RAID 0+1 (with BCL) style approach between disk groups, then I would put a hot-mirror-side disk group on the sweet sectors, put a cold-mirror-side disk group closer to the spindles, and only read the cold mirror side in the event of a failure. But we don’t do that. Instead, we do a quasi-RAID 1+0 **within** the disk group. As such, we consume sweet disk for both the primary and secondary extents. In the event of a loss, I’d say our current approach is better, because the application will continue to be serviced with I/O from sweet sectors. On the other hand, if there is never a loss, we are consuming sweet disk for naught. It is a trade-off.

    The other problem with a RAID 0+1 (hot-to-cold) striped-then-mirrored approach is suffered by OLTP workloads: with the typical read/write ratio of OLTP, we’d be wildly flailing between the sweet and sour sectors to satisfy the writes. Remember, we are not a one-trick pony.
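
    To make the trade-off concrete, here is a purely illustrative sketch in Python, with made-up names rather than anything from Oracle or ASM, contrasting the two placements: quasi-RAID 1+0 within a disk group, where both copies occupy the fast “sweet” outer sectors, versus a RAID 0+1-style hot/cold split, where the mirror copy sits on the slower “sour” inner sectors and is read only after a failure:

        # Purely illustrative model -- hypothetical names, not Oracle/ASM code.
        # "sweet" = fast outer sectors, "sour" = slower inner sectors.
        from dataclasses import dataclass

        @dataclass
        class Extent:
            primary_region: str   # copy that normally serves reads
            mirror_region: str    # redundant copy, read only after a failure

        def within_disk_group(n):
            # Both copies land on sweet sectors: reads stay fast after a loss,
            # at the cost of spending sweet capacity on the mirrors.
            return [Extent("sweet", "sweet") for _ in range(n)]

        def hot_cold_mirror(n):
            # Mirror copies sit on sour sectors: sweet capacity is saved, but
            # every write (and any post-failure read) touches the slow region.
            return [Extent("sweet", "sour") for _ in range(n)]

        def regions_read(extents, after_failure=False):
            return {e.mirror_region if after_failure else e.primary_region
                    for e in extents}

        print(regions_read(within_disk_group(4), after_failure=True))  # {'sweet'}
        print(regions_read(hot_cold_mirror(4), after_failure=True))    # {'sour'}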

  8. Kevin Closson on October 3rd, 2008 11:25 am

    I should add that I purposefully omitted the fact that Automatic Storage Management does offer a way to do what Curt asked about (using Failure Groups). I tend not to mention this because the origin of that feature was to equip administrators with the necessary tools to do the hard work of ensuring ASM mirrors between separate controllers and/or cabinets, etc. Exadata is much more aware of the underlying storage, so this level of admin effort is not needed.

  9. Aster Data 4.0 and the evolution of “advanced analytic(s) servers” | DBMS2 -- DataBase Management System Services on November 23rd, 2009 6:34 pm

    […] instead doing mirroring in its own software. (Other examples of this strategy would be Vertica, Oracle Exadata/ASM, and Teradata Fallback.) Prior to nCluster 4.0, this caused a problem, in that the block sizes for […]
