June 29, 2009

Xtreme Data readies a different kind of FPGA-based data warehouse appliance

Xtreme Data called me to talk about its plans in the data warehouse appliance business, almost all details of which are currently embargoed. Still, a few points may be worth noting ahead of more precise information, namely:

So far as I can tell, Xtreme Data’s 1.0 product will — like most other 1.0 analytic database management products — be focused on price/performance, with little or no positive differentiation in the way of features.


6 Responses to “Xtreme Data readies a different kind of FPGA-based data warehouse appliance”

  1. Patented SQL on a chip! « EveryDay EssBase on June 29th, 2009 1:48 pm

    […] Patented SQL on a chip! Xtreme Data readies a different kind of FPGA-based data warehouse appliance […]

  2. Zman on June 29th, 2009 2:45 pm

    Can you provide any info on how this differs from Kickfire’s SQL Chip? (NDA permitting) Thanks!

  3. Curt Monash on June 29th, 2009 4:35 pm

    I’m sorry, but I think any more detail would be covered by the embargo.

  4. Justin Swanhart on June 30th, 2009 7:45 pm

    It is a little unclear what capabilities the “Xtremedata” box will actually have, given Xtremedata’s partial stealth mode.

    Kickfire is selling a product today, and it is performing extremely well for our customers. We feature a small form factor and we don’t consume a lot of power. We can run queries in seconds that MySQL takes DAYS to process.

    For example, we have one customer who bought our appliance. On their old MySQL servers, one of their most important queries ran for more than 4 DAYS. Before engaging in a POC with Kickfire, they tried out Infobright and got pretty good results: their 4-day query ran in four hours.

    In the Kickfire POC, this query ran in four minutes. This very important query can now be run any time they want, it can be changed to look at patterns, and it doesn’t lock their database for hours or days at a time.

    Kickfire has shown real, proven performance numbers on benchmarks like TPC-H(tm) and in customer POCs. We’ve proven our solution works. We’ll just have to wait and see whether Xtremedata can match the very high bar we have set.

    Xtremedata claims to run the TPC-H(tm) queries, but they have not submitted audited results, and they aren’t allowed to use the TPC-H(tm) name as far as I understand.

    The architectural differences are manifold. The Kickfire SQL chip is a true dataflow engine with patented in-memory compression and SQL execution technology. Just having an FPGA doesn’t mean you can run “SQL on a chip”.

    These are my own opinions and I am not speaking directly for Kickfire.

  5. Hriyadesh Sharma on July 1st, 2009 9:11 am

    Hi Justin,

    Please help me understand the benefit of a SQL-on-chip engine as compared to ParAccel and Vertica, who can provide stellar performance on any data volume with software only? They can also package it as an appliance, scaling from a very small number of nodes to a very large number.

  6. Justin Swanhart on July 1st, 2009 1:23 pm

    The big difference between the MPP column stores and Kickfire is the price/performance ratio. If you look at the benchmarks you will see that Kickfire provides great performance in a small form factor for a lot less money. If benchmarks are good at only one thing, gauging price/performance is probably it. The SQL chip allows us to literally do more with less.

    But beyond that, Kickfire decided to go in a different direction from the other guys. In order to get performance at very large data volumes, there are tradeoffs. MPP databases often don’t feature UNIQUE keys, FOREIGN KEY constraints and other indexes which can be very useful.

    There are technical challenges to implementing such keys in an MPP system, since the constraints must be maintained across the entire cluster. When you have volumes of data in the tens of terabytes or higher, it makes more sense to enforce integrity in the ETL process instead.
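    [Editor's illustration] The cluster-wide constraint problem described above can be sketched in a few lines of Python. This is a toy model — all names are hypothetical and nothing here comes from Kickfire, Xtremedata, or any real MPP engine — but it shows why a declared UNIQUE constraint forces a round trip to every shard when rows are distributed by some other column:

    ```python
    # Toy sketch: enforcing a UNIQUE constraint across shards when rows
    # are distributed by a different column (here, row_id). Checking a
    # candidate value means consulting every shard -- a cluster-wide
    # round trip per insert, which is why many MPP systems skip declared
    # UNIQUE constraints and rely on ETL-time deduplication instead.

    class Shard:
        def __init__(self):
            self.rows = []  # list of (row_id, email) tuples

        def contains_email(self, email):
            return any(e == email for _, e in self.rows)

    class ShardedTable:
        def __init__(self, num_shards):
            self.shards = [Shard() for _ in range(num_shards)]

        def insert(self, row_id, email):
            # Uniqueness check: every shard must be asked.
            if any(s.contains_email(email) for s in self.shards):
                raise ValueError("duplicate email: %s" % email)
            # Rows are distributed by row_id, not by the unique column,
            # so no single shard "owns" a given email value.
            self.shards[hash(row_id) % len(self.shards)].rows.append((row_id, email))

    table = ShardedTable(num_shards=4)
    table.insert(1, "a@example.com")
    try:
        table.insert(2, "a@example.com")
    except ValueError:
        print("duplicate rejected")
    ```

    In a real cluster each `contains_email` call is a network hop, and the check must also be made safe against concurrent inserts on other nodes, which is where the cost and complexity come from.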

    Big companies with vast amounts of time and money will be happy to invest in an ETL process to accompany their new multi-TB warehouse. Smaller shops with less data are not so willing to spend effort and hard-earned capital on such processes.

    Small companies want fast results without spending a lot of money. They might only have a hundred gigabytes of data, but they want answers on it now, and they don’t want to spend a million dollars to get them.

    There are a lot more small companies with tens or hundreds of gigabytes than there are large companies with tens or hundreds of terabytes.
