September 28, 2015

The potential significance of Cloudera Kudu

This is part of a three-post series on Kudu, a new data storage system from Cloudera.

Combined with Impala, Kudu is (among other things) an attempt to build a no-apologies analytic DBMS (DataBase Management System) into Hadoop. My reactions to that start:

I’ll expand on that last point. Analytics is no longer just about fast queries on raw or simply-aggregated data. Data transformation is getting ever more complex — that’s true in general, and it’s specifically true in the case of transformations that need to happen in human real time. Predictive models now often get rescored on every click. Sometimes, they even get retrained at short intervals. And while data reduction in the sense of “event extraction from high-volume streams” isn’t that big a deal yet in commercial apps featuring machine-generated data — if growth trends continue as many of us expect, it’s only a matter of time before that changes.

Of course, this is all a bullish argument for Spark (or Flink, if I’m wrong to dismiss its chances as a Spark competitor). But it also all requires strong low-latency analytic data underpinnings, and I suspect that several kinds of data subsystem will prosper. I expect Kudu-supported Hadoop/Spark to be a strong contender for that role, along with the best of the old-school analytic RDBMS, Tachyon-supported Spark, one or more contenders from the Hana/MemSQL crowd (i.e., memory-centric RDBMS that purport to be good at analytics and transactions alike), and of course also whatever Cloudera’s strongest competitor(s) choose to back.

Comments

6 Responses to “The potential significance of Cloudera Kudu”

  1. Cloudera Kudu deep dive | DBMS 2 : DataBase Management System Services on September 28th, 2015 3:55 am

    […] Part 3 is a brief speculation as to Kudu’s eventual market significance. […]

  2. Adam F on September 30th, 2015 1:42 pm

    What’s your take on the relative merits of Parquet-in-HDFS versus Kudu? (I know, apples and oranges – but it still seems like a real decision that application architects will face.) It seems like the key additional capability of Kudu versus Parquet is the ability to update existing records rather than just append. But I wonder about the relative value of this, compared with the cost of introducing a whole new storage system into the already complex Hadoop environment, if the main goal is analytics. After all, lots of stuff can already talk to HDFS and Parquet, and Parquet doesn’t require any additional running services, just some client libraries.

    And it seems like analytics have long been orienting more and more toward appends versus updates – for example, log-based systems like Kafka that model everything as appends/messages, but even traditional dimensional data warehouse systems, where maintaining accurate history means implementing modeling approaches like type 2 slowly changing dimensions, so you only add to history instead of overwriting it. Given all this gravity toward append-based data, I can’t help but wonder if a lighter-weight, library-based columnar storage framework like Parquet might have better Darwinian odds than another active service added to the Hadoop stack. Unless perhaps real OLTP support comes, and the argument becomes “all in one / OLTP + analytics.”
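    To make the type 2 pattern concrete, here is a minimal Python sketch of append-only dimension history; the table layout and field names are illustrative only, not from any particular system:

        from datetime import date

        # Type 2 SCD in a purely append-only store: every change is a new
        # row, and the "current" version is derived at query time as the
        # latest valid_from per key. Nothing is ever overwritten.
        history = [
            # (customer_id, address, valid_from)
            (42, "12 Oak St", date(2014, 1, 1)),
        ]

        def record_change(rows, customer_id, new_address, change_date):
            """An 'update' is just an append."""
            return rows + [(customer_id, new_address, change_date)]

        def address_as_of(rows, customer_id, as_of):
            """Scan history for the newest version at or before as_of."""
            versions = [r for r in rows if r[0] == customer_id and r[2] <= as_of]
            return max(versions, key=lambda r: r[2])[1] if versions else None

        history = record_change(history, 42, "99 Elm Ave", date(2015, 9, 1))
        print(address_as_of(history, 42, date(2015, 1, 1)))   # 12 Oak St
        print(address_as_of(history, 42, date(2015, 10, 1)))  # 99 Elm Ave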

  3. Curt Monash on September 30th, 2015 2:12 pm

    Hi Adam,

    I have concerns about append-only. For example:

    1. You’re streaming data, but you’re not putting everything into the analytic store — just what you’ve identified as “anomalies” or “events”.

    2. Data arrives out of time sequence.

    3. You’re enhancing the streaming data with information from a tabular store (e.g. customer records). Those get updated from time to time.

    4. Heck, you want to replicate your whole business transaction data store into the place where you stream your web logs.

    #1 is more IoT. #3-4 are more internet marketing. #2 is both.
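    As an illustration of #2, here is a hypothetical Python sketch in which an in-memory dict stands in for an updatable, keyed table. Out-of-order arrivals are absorbed by a conditional upsert, whereas a pure append log would have to re-sort by event time at query time to recover the latest state:

        # Scenario #2: events arrive out of event-time order. A keyed,
        # updatable store can absorb each arrival with a conditional
        # upsert that keeps the row with the greatest event time.
        latest_state = {}  # key -> (event_time, value)

        def upsert(key, event_time, value):
            """Apply an arrival; late (older) events do not regress state."""
            current = latest_state.get(key)
            if current is None or event_time > current[0]:
                latest_state[key] = (event_time, value)

        upsert("sensor-7", event_time=105, value="alert")
        upsert("sensor-7", event_time=101, value="ok")  # late arrival; ignored
        print(latest_state["sensor-7"])  # (105, 'alert')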

  4. Patrick Angeles on October 5th, 2015 3:29 pm

    @Adam

    Very valid question. In exchange for a marginally more complex deployment architecture (it does add another component to the Hadoop zoo), I believe it will greatly simplify the data architecture.

    Consider the Type 2 case, which you brought up. To get that working right in HDFS, you’d need to deal with incremental ingest, merge the new data with the historical, and compact files in order to maintain decent scan performance.

    This is a common pattern that’s been implemented in a number of places, but it’s non-trivial and fragile, and the latency between receiving the data and that data showing up in queries is measured in minutes, not sub-second intervals. Plus, merge/compact involves shuffling tables and views around, so the query consistency guarantees aren’t great. (Mostly because the Hive metastore currently does not support versioned metadata.)
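    A toy Python sketch of that merge/compact cycle, with in-memory dicts standing in for Parquet file sets; the real pattern adds partitioning, atomic view swaps, and job scheduling, which is where the fragility and the minutes of latency come from:

        # The HDFS-era pattern reduced to its essentials: new records land
        # in an incremental batch, and a periodic job folds them into the
        # historical snapshot and rewrites it wholesale (the "compaction").
        base = {  # historical snapshot, keyed by record id
            1: ("alice", "2015-01-01"),
            2: ("bob", "2015-03-01"),
        }
        incremental = [  # newly ingested batch, possibly containing updates
            (2, ("bob", "2015-09-30")),    # update to an existing key
            (3, ("carol", "2015-09-30")),  # brand-new key
        ]

        def merge_and_compact(snapshot, batch):
            """Rewrite the whole snapshot with the new batch folded in."""
            merged = dict(snapshot)
            for key, row in batch:
                merged[key] = row  # latest version per key wins
            return merged  # on HDFS: write new files, then swap them in

        base = merge_and_compact(base, incremental)
        print(base)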

    Kudu takes care of all this. And again, given that several production clusters are already running a combination of ZK, Hive, MR, Spark, HDFS, HBase, Oozie, etc., and that we have cluster management tools that make it easy to manage these services, adding another service like Kudu is a small price to pay for a much simpler data architecture.
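    For comparison, the same change with Kudu reduces to a single keyed upsert. A minimal sketch using the kudu-python client; the master host and the customer_dim table are hypothetical:

        import kudu

        # Connect to a (hypothetical) Kudu master and upsert one row.
        # Kudu resolves the update in place; no merge/compact job needed.
        client = kudu.connect(host="kudu-master.example.com", port=7051)
        table = client.table("customer_dim")

        session = client.new_session()
        op = table.new_upsert({"customer_id": 42, "address": "99 Elm Ave"})
        session.apply(op)
        session.flush()  # make the change durable and visible to scans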

  5. Introduction to Cloudera Kudu | DBMS 2 : DataBase Management System Services on November 19th, 2015 6:36 am

    […] Part 3 is a brief speculation as to Kudu’s eventual market significance. […]

  6. Luis Claudio Silveira on February 24th, 2016 6:51 pm

    Kudu is a great alternative to the Hadoop HDFS Parquet file structure and to HBase. We will see this project take off this year, especially if it is joined with the Apache Flink streaming engine. Will that be possible?
