July 30, 2009

Netezza is changing its hardware architecture and slashing prices accordingly

Netezza is about to make its biggest product announcement in years. In particular:

Allow me to explain.

For months, it has been an increasingly open secret that Netezza was planning a major refresh of its product line. As signaled by a blog post from Netezza’s product marketing VP Phil Francisco, many of the details are finally fit to post.*

*A couple more will be revealed next week, and a longer-term roadmap will be laid out during Netezza’s conference tour in September. (By the way, yours truly will be keynoting the Boston, Chicago, San Francisco, Washington, London, and Milan iterations of same. Come by and say hi!)

Basics include:

Beyond the switcheroo in components, Netezza is making substantial changes to its hardware architecture. In current Netezza products, the FPGA plays the role of a disk controller on steroids — it receives data, does some SQL or other analytic operations on it, and then throws it over the wall to the CPU for the rest of the processing. Netezza TwinFin, however, adds an actual disk controller. More important, it adds fast interconnects between the FPGAs, the disk controller, and RAM — specifically, as Phil Francisco put it in an email,

using multiple parallel channels of PCIe with much faster interconnection rates and lower contention between the blade server and the “DB accelerator card” with the FPGAs.

DMA (Direct Memory Access) technology also fits into the picture somehow.

Given faster interconnects, as well as faster CPUs, Netezza has changed its basic data path. Previously, data went from disk to the FPGAs (where it was filtered) to RAM, and from that point perhaps to the CPU for more processing. Now, however, data goes from disk to disk controller to RAM, and only then to the FPGAs.
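
To make the change concrete, here is a minimal sketch of the two data paths; the stage labels are my shorthand for the steps described above, not Netezza’s terminology:

    # Toy model of the two data paths described above. Stage names are
    # illustrative shorthand, not Netezza internals.
    OLD_PATH = ["disk", "FPGA (filter, project)", "RAM",
                "CPU (remaining processing)"]
    NEW_PATH = ["disk", "disk controller", "RAM (cache)",
                "FPGA (filter, project)", "CPU (remaining processing)"]

    for label, path in [("Current NPS", OLD_PATH), ("TwinFin", NEW_PATH)]:
        print(label + ": " + " -> ".join(path))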

The big win here — beyond the usual benefits of standard CPUs — is that Netezza now has a viable cache. Apparently, Netezza’s current product line doesn’t even cache the most heavily reused tables, such as those storing small dimensions or Netezza’s zone map information. Netezza TwinFin will be able to cache those and more, with a default of 256 megabytes of cache per core, and the ability to grow up to 1 gigabyte if needed or desired.
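
For a sense of scale, those per-core figures translate into per-blade cache as follows; the cores-per-blade count below is purely hypothetical, for illustration, since it isn’t stated here:

    # Cache sizing per the post: 256 MB per core by default, growable to
    # 1 GB per core. The cores-per-blade count is an assumption for
    # illustration only, not a Netezza spec.
    DEFAULT_MB, MAX_MB = 256, 1024
    cores_per_blade = 8  # hypothetical

    print(f"default cache per blade: {DEFAULT_MB * cores_per_blade / 1024:.0f} GB")
    print(f"maximum cache per blade: {MAX_MB * cores_per_blade / 1024:.0f} GB")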

Comments

35 Responses to “Netezza is changing its hardware architecture and slashing prices accordingly”

  1. “The Netezza price point” | DBMS2 -- DataBase Management System Services on July 30th, 2009 10:13 pm

    […] just changed. Netezza is cutting its pricing to the $20K/terabyte range imminently, with further cuts to come. So where does that leave […]

  2. Charles Wardell on July 30th, 2009 10:52 pm

    Great move by Netezza. I would like to get some of the HW details regarding the Intel blades. Are they still proprietary, or are they using actual third-party Intel-based blades?

    I would assume that they are keeping the concept of the SPU since the FPGA is still in play and just replacing the DBOS and CPU?

    Charlie
    http://www.bcsolution.com

  3. Phil Francisco on July 31st, 2009 1:58 am

    Good posting, Curt – very informative, IMO.

    I did try to provide a little more clarity regarding your comments on changes to the architecture and data flows back on my blog, here: http://www.enzeecommunity.com/blogs/nzblog/2009/07/31/change-but-no-change. I hope that helps.

  4. Netezza launches new data warehouse appliance family | Between the Lines | ZDNet.com on July 31st, 2009 10:37 am

    […] is some debate over whether Netezza is changing its hardware architecture. Monash Research writes that Netezza is substantially changing its hardware architecture: Netezza has now decided that conventional Intel-based boards are a better companion to the FPGAs […]

  5. Anurag on July 31st, 2009 1:55 pm

    This, in part, shows the tremendous strides Intel’s made with Nehalem and Sandy Bridge. It’s really made a dramatic difference to the balance point between memory and CPU, much closer to Amdahl’s balance law.

    I’m very surprised they’re only placing 16GB RAM per blade. That seems to imply there’s still a chokepoint with the disks. Anyone know more about this?

  6. Greg Rahn on July 31st, 2009 3:32 pm

    @Anurag

    I question whether Netezza is using the Intel Nehalem processors in the blades. It does not seem that the IBM BladeCenter HS21 supports the Nehalem processors. The newer IBM BladeCenter HS22 does, however, use the Intel Xeon 5500 series Nehalem processors. See each product’s data sheet for the details.

  7. Analytics Team » Blog Archive » Netezza news on July 31st, 2009 9:06 pm

    […] Netezza is Changing its Hardware Architecture and Slashing Prices Accordingly The Netezza Price Point […]

  8. Dave Anderson on August 4th, 2009 10:37 am

    “In some cases, analytic performance will be greatly improved (Netezza says 100X with a straight face, although that’s far from being an across-the-board claim).”

    Yesterday:
    1. Buy million$ in SAS licenses
    2. Spend days iterating the correct hundred GB data transfer from the DB to your SAS-local server
    3. Run SAS analytics (rinse and repeat until correct)
    4. Report done 8 days after “go” date

    Tomorrow:
    1. Iterate pull and analysis locally on Netezza server
    2. Report done same day as “go” date

    -Dave

  9. Bence Arató on August 4th, 2009 4:42 pm

    There is a quite harsh opinion on Oracle’s Data Warehouse Insider blog:

    A not so fabulous new release
    http://blogs.oracle.com/datawarehousing/2009/08/a_not_so_fabulous_new_release_1.html

  10. Tim Nickason on August 4th, 2009 5:02 pm

    Hi all,

    From what I’ve read in Curt’s post and the slides I saw, it looks like Netezza is moving the FPGA a step further from the actual disk (and introducing the disk controller). I wonder how the FPGAs map back to the disks. Before, it was fairly clear that each SPU had its own storage. Do the FPGAs somehow still map back to specific disks, but now go through a common disk array?

    With the switch to Intel blade servers, they seem to be putting more resources in place for compute-intensive situations and not strictly raw disk performance. It will be interesting to see these operate in the real world in comparison to the current architecture.

    Cheers

    Tim

  11. Curt Monash on August 4th, 2009 6:00 pm

    Bence,

    Oracle claims slightly better performance than Netezza in that post. Do you happen to have refreshed the comparison of relative prices you helped me with last year? 🙂

    http://www.dbms2.com/2008/09/30/oracle-database-machine-exadata-pricing-part-2/

  12. Phil Francisco on August 4th, 2009 6:21 pm

    @Bence – I provided a rebuttal of sorts to Oracle’s statements in this morning’s blog, here: http://www.netezzacommunity.com/blogs/nzblog/2009/08/04/a-fud-machine-in-overdrive.

  13. FinTwin on August 4th, 2009 9:38 pm

    @Curt – By what calculation do you get $20,000/TB?

    Is this 72TB @ 2.25x compression = $20,000/TB? That would make the uncompressed TB $45,000, so a TwinFin 12 with 32TB of uncompressed capacity would list at $1,440,000?

    I’m curious to know the uncompressed $/TB number.

  14. Curt Monash on August 4th, 2009 11:01 pm

    It’s not just compression. There’s also mirroring and so on. I have a prior post on the subject using the example of Greenplum, although IIRC that was atypical in that they mirrored 4X vs. a more typical 2X. Anyhow, I’m pretty sure you can find some Netezza slides that contain both raw disk specs and user data ratings for the same system, which would allow you to work out the ratio, since you’re interested.

  15. Greg Rahn on August 5th, 2009 1:40 am

    @Curt

    I was wondering about this calculation as well, and even more so after your last comment. What exactly does mirroring have to do with how the price per TB of user data is calculated, given that Netezza clearly uses a raw-space-to-user-data-space ratio of 3:1 (primary/mirror/temp)? You clearly state “$20K/terabyte of user data,” so it seems irrelevant to me whether mirroring is 2x, 4x or 10x, given that the physical space for user data has been clearly documented (in the case of the TwinFin 12, 32TB of user data uncompressed).

    If it is unclear, I would just use the larger of the numbers:
    $20K/terabyte of user data at a capacity of 128TB of user data (assumes 4x compression) = $2,560,000 for a TwinFin 12.

    I have to say I really despise silly games when it comes to calculations. What is so wrong with showing the math, or at least giving a straightforward answer? It’s not like Netezza pricing is top secret. Just look it up on GSA Schedule 70.

  16. Curt Monash on August 5th, 2009 10:54 am

    Greg,

    My last comment was in response to somebody who, for some reason, was asking about UNCOMPRESSED, TOTAL disk space. If that’s not a concern to you, please ignore it.

    As for the other, Netezza’s CURRENT TwinFin products are being rated at 2.25X compression. That “4X” figure is something they should have taken out from the slide … and that I should have caught when they didn’t. So the box pictured there should sell for a little less than 128/4 * 2.25 * $20,000 = $720,000. E.g., try rounding down to $700,000 as a good working figure.

    List price, of course.

    Good find on the GSA schedule.

  17. Steve Wooledge on August 5th, 2009 1:02 pm

    Netezza’s 1/2-step toward true commodity hardware is interesting. Good technical analysis here:
    http://www.asterdata.com/blog/index.php/2009/08/03/netezzas-change-in-architecture-move-towards-commodity/

  18. Curt Monash on August 5th, 2009 2:17 pm

    Steve,

    I love you and Mayank to death, but parts of that post are pretty bizarre. Do you have any evidence that Netezza can’t throttle a runaway query? Can you explain what cache consistency issues arise in a shared-nothing, query-optimized system?

    On the plus side, it’s good to see you making the case “We’re better at parallelism than the other guys and hence we run just fine on cheaper hardware.”

    Also on the plus side, that post isn’t nearly as bizarre as the Oracle one Phil Francisco eviscerated on his blog.

    Thanks,

    CAM

  19. Steve Wooledge on August 6th, 2009 2:28 am

    Curt,

    Please let us know which portions are incorrect – it is a technical post, so we should be able to reason about correctness/incorrectness.

    I think Mayank wrote that Netezza needs to demonstrate that their software can handle the richness and limitations of commodity hardware. The listed points (e.g., runaway queries, cache consistency) are all points that the database software must now explicitly handle, since Linux does not provide these solutions out of the box. And none of these were issues in the resource-starved, FPGA-centric hardware.

    The point is that a move to Intel CPUs, 16GB RAM, Linux, and commodity RAID controllers is not a free lunch or a trivial architectural change. It will require careful software engineering to make the system as robust as the previous product line, which had nine years of engineering behind it. And Netezza must demonstrate to its customers and prospects that it has considered these factors.

    On cache consistency: a shared-nothing distributed system that uses RAM for caching will have data cached from remote servers too (e.g., when a relation has been redistributed for a non-partition-key join). When data is updated on the primary servers, the distributed cache needs to be updated for consistency of the query results that follow. Hence the overhead.
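
    As a minimal sketch of that failure mode (hypothetical classes, not anyone’s actual code): a node that caches rows redistributed from a peer will serve stale results unless updates at the owning node explicitly invalidate the remote copies.

        # Toy illustration only. A shared-nothing node caching redistributed
        # rows must be invalidated when the owning node updates them,
        # or later queries read stale data.
        class Node:
            def __init__(self, name):
                self.name = name
                self.local_rows = {}    # rows this node owns
                self.remote_cache = {}  # (owner, key) -> row cached after redistribution

            def update(self, key, value, peers):
                self.local_rows[key] = value
                for peer in peers:  # the consistency overhead described above
                    peer.remote_cache.pop((self.name, key), None)

        a, b = Node("A"), Node("B")
        a.local_rows["k1"] = "v1"
        b.remote_cache[("A", "k1")] = "v1"  # cached during a non-partition-key join
        a.update("k1", "v2", [b])           # must invalidate B's copy
        assert ("A", "k1") not in b.remote_cache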

  20. Geno Valente on August 6th, 2009 12:25 pm

    @ Curt: “So the box pictured there should sell for a little less than 128/4 * 2.25 * $20,000 = $720,000. E.g., try rounding down to $700,000 as a good working figure”

    TwinFin12:
    128/4 = 32.
    Thus, 32*2.25*20K = $1.44M ??? I think there is a math problem with the above. A TwinFin12 is most likely $1.44M.
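
    Spelled out as a quick check, using only the list figures quoted in this thread:

        # Checking the arithmetic with the thread's own figures.
        uncompressed_tb = 128 / 4   # the slide's 128 TB assumed 4x compression
        compression = 2.25          # the ratio Curt says TwinFin is rated at
        per_user_tb = 20_000        # the ~$20K/TB list figure

        user_tb = uncompressed_tb * compression   # 72 TB of user data
        list_price = user_tb * per_user_tb        # $1,440,000
        print(f"${list_price:,.0f} list, "
              f"${list_price / uncompressed_tb:,.0f}/TB uncompressed")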

    Also, does that include the first year of maintenance? The NPS MSRP didn’t include it; I’m wondering if TwinFin does.

    Our dbX appliance is clearly priced: $20K/TB User data, 1 year maintenance/support included.

  21. Curt Monash on August 6th, 2009 12:30 pm

    Geno,

    You’re right. I was off by a factor of 2. Sorry.

    If you want detailed Netezza pricing, I suggest you ask them.

    To the nearest 10 or 20% — at least — list pricing is irrelevant anyway, due to negotiated discounts.

  22. Bence Arató on August 6th, 2009 4:54 pm

    @Curt

    Estimating Oracle’s pricing is much easier now, because there is an official Exadata price list on oracle.com.

    My current estimate for the full-rack Oracle Database Machine mentioned in the blog post
    – 8 DB servers at 2×4 cores each, 14 Exadata units at 12×1000 GB SATA disks each, uncompressed user data capacity of 46 TB – is the following:
    – Hardware price: 650,000 USD
    – Exadata software license: 1,680,000 USD

    So the total list price for the ODM is $2,330,000,
    which gives you about $50,000 per user TB.

    Of course this calc assumes that you already have the necessary DB software licenses, like when you use the ODM as an add-on to an existing Oracle DW environment in order to gain speed without changing the architecture or the vendor.

    If you start a new DW and have to buy all the necessary DB software as well, then the picture is quite different:
    For the 64 cores of the ODM, a simple package of Oracle DB EE+RAC+Partitioning will cost you about $2.6 million, effectively doubling the total price of the system and the $/TB ratio.
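
    Or, as a quick calculation (all list prices, per the estimates above):

        # Bence's Exadata estimate as arithmetic (list prices, USD).
        hardware, exadata_sw, user_tb = 650_000, 1_680_000, 46

        odm = hardware + exadata_sw   # $2,330,000
        print(f"ODM alone: ${odm:,} -> ${odm / user_tb:,.0f}/TB")

        db_licenses = 2_600_000       # EE + RAC + Partitioning for 64 cores
        total = odm + db_licenses     # roughly doubles the system price
        print(f"With DB licenses: ${total:,} -> ${total / user_tb:,.0f}/TB")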

  23. Curt Monash on August 6th, 2009 7:10 pm

    Bence,

    The higher figure makes a lot more sense, because a lot of the processing is being done on the DBMS tier.

    Indeed, the hardware on that tier is also a consideration.

    So basically, we’re back in the territory of my post late last September. 🙂

    Thanks,

    CAM

  24. Bence Arató on August 7th, 2009 2:56 am

    Curt,

    The hardware of the DB tier is included in the $650,000. The Database Machine is a full hardware package, including Exadata units, DB processing units, interconnects, etc.

    As I see it, Oracle’s pricing strategy makes sense for the first scenario (upgrading an existing DW, when the DB licenses have already been purchased). The $50,000/TB price point is in line with other vendors’ pricing – or was before last week 🙂 – and the customer can keep the familiar, all-Oracle DW environment. No migrations, no new vendor, quite a small technology change, etc.

    Competing for and winning new DW business against the other MPP appliance vendors is another matter. I don’t really think Oracle actively targets this sector as of now. Do you hear news from the other vendors about Oracle competing with them for new accounts?

  25. Curt Monash on August 7th, 2009 3:10 am

    Bence,

    I don’t hear much about anybody targeting non-Oracle accounts, except for web-only businesses. That’s because there aren’t all that many large non-Oracle accounts, again except for web-only businesses. 🙂

  26. Greg Rahn on August 7th, 2009 3:23 am

    The $50K/TB number Bence quotes for Exadata is for uncompressed user data, whereas the $20K/TB Netezza number assumes 2.25x compression. The Netezza price per TB of uncompressed user data would be $45K (2.25 * $20K).

    Seemingly the price per Netezza rack has changed only minimally. The TwinFin 12 price is around $1.44M, and the NPS 10100 price with the compress engine was around $1.54M (based on GSA Schedule 70 pricing). The major influence on the reduction of Netezza’s price per TB is the move from 400GB drives to 1000GB drives, a factor of 2.5x. The performance gains from the new hardware are also being quoted around the 2.5x mark, so the TwinFin seems to yield just slightly more performance per TB than the NPS lineup, with the exception of compute-intensive workloads, which are quoted at 10x.

    I think it is important to talk not only in terms of capacity ($ per TB) but in terms of performance per TB as well. It’s quite easy to halve the price per TB simply by doubling the drive capacity, but doing so yields zero gains in performance.

    Not to go into a pricing deep dive, but I think the other notable point is that Oracle gives perpetual software licenses and Netezza does not. This means that hardware upgrades will cost the Oracle customer much less. A current Netezza NPS 10100 customer upgrading to a TwinFin 12 pays $1.54M + $1.44M = $2.98M in total. A current Oracle Database Machine customer upgrading to the next generation of hardware incurs only the hardware cost (assuming the same CPU and HDD count), which seems to me to make the cost more favorable toward Oracle, even with the cost of the DB licenses.
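
    The same points as quick arithmetic, using the figures cited above:

        # Greg's comparison in numbers (GSA-list prices he cites).
        twinfin12, nps10100 = 1_440_000, 1_540_000
        drive_factor = 1000 / 400   # 400 GB -> 1000 GB drives

        print(f"capacity per rack: {drive_factor:.1f}x")                 # 2.5x
        print(f"NPS 10100 then TwinFin 12: ${nps10100 + twinfin12:,}")   # $2.98M total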

  27. Curt Monash on August 7th, 2009 4:31 am

    Last I checked, Bence had Oracle at somewhere north of $100K/TB uncompressed, while Netezza is sitting just under $45K/TB uncompressed. Oracle’s compression seems somewhat more mature than Netezza’s, so compression probably narrows the gap somewhat. On the other hand, Netezza uses less space for anything resembling indexes than Oracle does, so that widens it again.

    Greg is correct that if you keep your installation for, say, 6 years, Oracle’s perpetual licenses could save you from buying the software twice. On the other hand, if you go with Oracle, you will over that time be paying higher total maintenance than you do with Netezza.

    Also, your DBA costs are higher with Oracle than Netezza. That’s another factor affecting TCO.

  28. Curt Monash on August 7th, 2009 5:03 am

    Meanwhile, I’ll be more convinced that Oracle’s performance is competitive with Netezza’s when I hear some stories of Oracle outperforming Netezza in head-to-head onsite POCs.

    Nobody should blame Oracle for the business practice of focusing their sales efforts on the deals that are easiest to win — but people also shouldn’t assume that Oracle would win competitions it doesn’t actually bother to enter.

  29. Curt Monash on August 7th, 2009 5:14 am

    By the way — if I did the math right, both Netezza and Oracle have around a 1:1 ratio between cores and spindles. Netezza seems to be right at 2 gigs of RAM per spindle, while Oracle is slightly higher. Netezza also puts in a whole lot of FPGAs. Oracle probably does invest more in networking.

    So assuming Netezza is putting those FPGAs in there for a substantial purpose, to a first approximation one would assume its performance/TB was nicely higher than Oracle’s.

  30. What does Netezza do in the FPGAs anyway, and other questions | DBMS2 -- DataBase Management System Services on August 8th, 2009 5:17 am

    […] news of Netezza’s new TwinFin product family has generated a lot of comments and questions, some pretty reasonable, some quite silly. E.g., […]

  31. Sorting out Netezza and Oracle Exadata data warehouse appliance pricing | DBMS2 -- DataBase Management System Services on August 8th, 2009 5:19 am

    […] Bence Arato estimates that the Oracle Exadata price is right around $4 million for 46 uncompressed terabytes of user data. I found Bence’s estimates excellent when he helped me work out then-current Exadata pricing last September. That’s a little under $100K/terabyte uncompressed, vs. Netezza’s figure of a little under $45K uncompressed. I would guess Oracle’s compression is a little better than Netezza’s, but only a little. I hope those Oracle figures take indexes into account (Netezza has no indexes, and the zone maps it substitutes for indexes take little space), but even if they do, there’s a considerable price difference now between Exadata and Netezza. Also, Netezza TwinFin seems to offer more processing power per terabyte of data than Oracle Exadata does — specifically via its FPGAs — giving hope it does more work as well. […]

  32. Interesting trends in database and analytic technology | DBMS2 -- DataBase Management System Services on February 1st, 2010 6:12 pm

    […] ditched that in its latest generation […]

  33. A partial overview of Netezza database software technology | DBMS2 -- DataBase Management System Services on June 22nd, 2010 5:58 pm

    […] significantly. Because this was anticipated, this upgrade was planned for in the design of the systems Netezza started introducing last summer. Consequently, the reduction in I/O produced by compression translates almost directly into better […]

  35. Netezza TwinFin - Simba Technologies on July 30th, 2019 7:14 am

    […] blog about Netezza’s new TwinFin family of data warehouse appliances.  Worth a quick read: http://www.dbms2.com/2009/07/30/netezza-new-product-family/.  Interesting that Netezza moves to Intel chips and a Linux […]
