June 29, 2009

Aster Data enters the appliance game

Aster Data is rolling out a line of nCluster appliances today.  Highlights include:

I don’t have a lot more to add right now, mainly because I wrote at some length about Aster’s non-appliance-specific, non-MapReduce technology and positioning a couple of weeks ago.


16 Responses to “Aster Data enters the appliance game”

  1. Greg Rahn on June 29th, 2009 3:08 am


    Last week you wrote:

    “Most TPC benchmarks are run on absurdly unrealistic hardware configurations.”

    In particular you were discussing a 30TB TPC-H that used 43 nodes, each containing 2 quad-core processors and 64GB memory (344 total cores, 2,752GB memory).

    As I look at the Aster MapReduce Data Warehouse Appliance Data Sheet I see that their nodes contain 8 cores and 24GB memory. The Aster 25TB solution contains 32 worker nodes (256 total cores and 768GB memory) and the 50TB solution contains 63 worker nodes (504 total cores and 1.5TB memory). Given that information, the ParAccel configuration (primarily cpu cores) does not seem that “absurdly unrealistic” now does it?

    Aster even highlights their high number of CPU cores per usable space.
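
    The totals above follow from simple arithmetic; a minimal sketch in Python, using only the node counts and per-node specs quoted in this comment:

    ```python
    # Node counts and per-node specs as quoted in the comment above.
    configs = {
        "ParAccel 30TB TPC-H": {"nodes": 43, "cores_per_node": 8, "ram_gb_per_node": 64},
        "Aster 25TB appliance": {"nodes": 32, "cores_per_node": 8, "ram_gb_per_node": 24},
        "Aster 50TB appliance": {"nodes": 63, "cores_per_node": 8, "ram_gb_per_node": 24},
    }

    for name, c in configs.items():
        total_cores = c["nodes"] * c["cores_per_node"]
        total_ram_gb = c["nodes"] * c["ram_gb_per_node"]
        print(f"{name}: {total_cores} cores, {total_ram_gb} GB RAM")
    ```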

  2. Curt Monash on June 29th, 2009 3:35 am


    You have a point. Good catch.

    At least the disk capacity is only 2.5X user data or so, rather than 32X. The node:data ratio is very similar, however. There’s 3/8 as much RAM per node.

    3.75X compression is more aggressive than most vendor estimates. I don’t immediately know whether that makes it as conservative as other vendors’ claims in that regard. (Single-number compression estimates are one of the few areas of consistently conservative vendor claim-making I’ve ever seen.) If that’s indeed conservative, then the disparity is a bit higher.

    I wonder what the wattage is per node in the two configurations. I don’t have a guess one way or the other.

    Absent more information, I’m not in love with this product line. One of Aster’s strengths is that nCluster is supposedly easy to manage; an appliance doesn’t add much for them. Nor do I love the “We’re better because we plug more hardware into your power outlets” aspect of the positioning.

  3. Morning C# news – and more « C# Hacker – The Rambling Coder on June 29th, 2009 9:07 am

    […] Another EDW appliance comes into the fold with Aster Data. […]

  4. Steve Wooledge on June 29th, 2009 12:40 pm

    Since the Aster Appliance uses commodity servers from Dell, we also benefit from the latest advances in energy-saving technology. Dell’s latest line of PowerEdge R710 servers adds new energy optimizations over the previous-generation PowerEdge 2950.

    To quote from Dell: http://www.dell.com/downloads/global/products/pedge/en/server-poweredge-r710-specs-en.pdf

    “Energy-Optimized Technologies:
    Using the latest Energy Smart technologies, the R710 reduces power consumption while increasing performance capacity versus the previous generation servers. Enhancements include efficient power supply units right-sized for system requirements, improved system-level design efficiency, policy-driven power and thermal management, and highly efficient standards-based Energy Smart components.

    Dell’s advanced thermal control delivers optimal performance at minimal system and fan power consumption resulting in our quietest 2U servers to date. These enhancements maximize energy efficiency across our latest core data center servers without compromising enterprise performance.”

    Specifically, Aster’s system uses close to 125 watts per TB of raw disk. Compare that with something like a Teradata system that’s rated at 8,300 watts for 41.4 TB of raw disk (= 200 watts per raw TB).
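
    A quick sanity check of that per-TB arithmetic, using only the figures quoted in this comment:

    ```python
    # Watts per raw TB, from the figures quoted in the comment above.
    teradata_watts = 8300      # rated power draw
    teradata_raw_tb = 41.4     # raw disk capacity in TB
    watts_per_tb = teradata_watts / teradata_raw_tb
    print(round(watts_per_tb))  # -> 200

    aster_watts_per_tb = 125   # Aster's figure, as stated above
    print(f"Aster draws about {aster_watts_per_tb / watts_per_tb:.0%} of Teradata's power per raw TB")
    ```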


    In general, our new appliance offering is targeted at enterprises looking for high performance for their analytical workloads and provides the best price/performance in the market. For customers that want to use other hardware that may be even more power efficient, we offer our software-only solution – something that proprietary appliance vendors can’t offer.

  5. Curt Monash on June 29th, 2009 1:19 pm


    That would be the Teradata 2550 line, which is a fair comparison for Aster feature-wise.

    The spinning disk:user data ratio listed for Teradata is much higher than Aster’s, but that seems largely due to much less aggressive compression claims.

    Or are your sub-1-petabyte figures uncompressed? If so, your product suddenly looks a lot better …

  6. Greg Rahn on June 29th, 2009 2:30 pm


    The argument around the ratio of disk capacity to user data is really a poor one. Drive capacities keep increasing, but the rate at which data can be transferred off them has not kept pace. This has caused the (raw space):(user data) ratio to grow very large in many cases. A much better comparison is how many disk drives are being used (and at what rotational speed), regardless of capacity. Many servers used for disk-intensive operations are moving from LFF (large form factor) to SFF (small form factor) drives simply because it allows for more disk scan capacity within the same server footprint.

  7. Steve Wooledge on June 29th, 2009 2:55 pm


    The configurations for the Aster MapReduce DW Appliance product line are all for *uncompressed* data, except for the 1 petabyte appliance: http://www.asterdata.com/resources/downloads/datasheets/appliance_ds.pdf

  8. Curt Monash on June 29th, 2009 3:51 pm


    If you’d rather talk about spindles because you think TB/spindle is irrelevant, fine. The ParAccel configuration was way out of whack in number of spindles as well.

  9. Karl on June 29th, 2009 4:04 pm

    Interesting thread. We applaud Aster’s move to appliances. It is a proven delivery model, particularly for the mass market. This is where Kickfire is focused: http://tinyurl.com/ljg2sx. With appliances starting at $32k for 1TB capacity and the world’s #1 price/performance per independent industry benchmarks (http://tinyurl.com/52zyno), the Kickfire appliance opens up the possibility of high-end data warehousing for customers who previously could not afford this capability.

  10. Steve Wooledge on June 29th, 2009 7:18 pm

    As a point of clarification, we don’t view the data warehousing world as “either-or” / “black-white”. Some customers want software, others want appliances, still others want a cloud-based DBMS. It is our responsibility to give each customer the packaging option that works best for them. The importance of today’s announcement is that we are bringing the most cost-effective MPP data warehouse appliance to customers.

    Additionally, we’ve had lots of requests from customers for a “starter kit”, including people evaluating Hadoop who want enterprise-class features before bringing MapReduce into their environment. This is why a $50k price point made sense. More thoughts here: http://www.asterdata.com/blog/index.php/2009/06/29/enterprise-ready-mapreduce-data-warehouse-appliance/

  11. Greg Rahn on June 29th, 2009 9:11 pm


    Based on your response, I would say you missed my point. It also seems that, with you, things are either relevant or irrelevant, which is itself a misunderstanding. Let me also be clear that I think most TPC hardware configurations (the ones that focus purely on performance rather than price/performance) are significantly different from what production customers would run, so there is no point in trying to argue with me there either.

    I think that mentioning (disk capacity)/(user data) without including the number of drives used is just plucking numbers out of context: simply increasing or decreasing this ratio while holding the spindle count constant would have minimal impact on a given system’s performance. However, if the spindle count changes significantly while total capacity stays constant, then performance will likely change significantly as well.
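
    The spindles-versus-capacity point can be illustrated with a toy calculation. The per-drive transfer rate below is an assumed round number for illustration, not a figure from this thread:

    ```python
    # Illustrative sketch: aggregate sequential-scan bandwidth scales with
    # spindle count, not raw capacity. Per-drive rate is a hypothetical
    # round number, not a figure from this discussion.
    PER_DISK_MB_S = 100  # assumed sustained sequential read rate per drive

    def scan_bandwidth_mb_s(spindles: int) -> int:
        return spindles * PER_DISK_MB_S

    # Two configurations with identical 48 TB raw capacity:
    lff = 48   # 48 x 1 TB LFF drives
    sff = 96   # 96 x 0.5 TB SFF drives
    print(scan_bandwidth_mb_s(lff), scan_bandwidth_mb_s(sff))  # -> 4800 9600
    ```

    Same raw capacity, but the many-spindle configuration scans twice as fast, which is why counting drives tells you more about scan performance than counting terabytes.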

    Hopefully that clarifies the point I was trying to make. If not, it will have to suffice…as Karl from Kickfire cited a TPC-H result and I’m running for cover. 🙂

  12. Curt Monash on June 29th, 2009 9:16 pm


    You SLIGHTLY rephrased and improved upon my reasoning for demonstrating that the ParAccel TPC-H filing is “absurdly unrealistic” as a guide to DBMS selection, and a blight upon the industry. Thank you for your assistance.

    That you phrased that as if you were being critical of and disagreeing with me is baffling — unless, of course, that wasn’t your point at all. But if so, you either haven’t made your point clear or else haven’t done much in the way of substantiating it. 😉



  13. Dave Menninger on June 30th, 2009 12:51 pm

    Steve: we agree wholeheartedly that it’s good to deliver different deployment options to customers. Vertica was the first to offer four deployment options: software-only from the beginning in Q1 2007; an appliance on HP hardware in December 2007; the Amazon cloud in May 2008; and a VMware appliance in February 2009. But I’m not sure I agree that you offer the most cost-effective MPP data warehouse appliance. Your positioning graphic omits all the columnar appliances from Vertica, ParAccel, Sybase, et al., which typically offer faster performance on less hardware due to compression, columnar storage, and other I/O-reducing advantages.

  14. Steve Wooledge on July 1st, 2009 1:11 am


    We focused our comparison on large-scale MPP DW appliances that like to point out the “amount” of data they can store in marketing claims (peta-, exa-, … “Xtreme-ginormous” 🙂) to make a strong contrast. Our point is to focus the discussion on processing power per TB of storage, which is a better measure of the value someone can get from their data. Having processing power plus SQL/MapReduce in-database for rich analytic expressions is where we see the future of data warehousing: http://www.asterdata.com/mapreduce

  15. Curt Monash on July 1st, 2009 2:03 am


    Of course, that assumes equal or better throughput per unit of processing power. 🙂

  16. Correction to a recent quote | DBMS2 -- DataBase Management System Services on July 1st, 2009 2:19 am

    […] quoted in a recent article around Aster’s appliance announcement as saying data warehouse appliances are more suitable for small workgroups of analysts crunching […]
