July 23, 2014

Teradata bought Hadapt and Revelytix

My client Teradata bought my (former) clients Revelytix and Hadapt.* Obviously, I’m in confidentiality up to my eyeballs. That said — Teradata truly doesn’t know what it’s going to do with those acquisitions yet. Indeed, the acquisitions are too new for Teradata to have fully reviewed the code and so on, let alone made strategic decisions informed by that review. So while this is just a guess, I conjecture Teradata won’t say anything concrete until at least September, although I do expect some kind of stated direction in time for its October user conference.

*I love my business, but it does have one distressing aspect, namely the combination of subscription pricing and customer churn. When your customers transform really quickly, or even go out of existence, so sometimes does their reliance on you.

I’ve written extensively about Hadapt, but to review:

As for what Teradata should do with Hadapt:

I herewith apologize to Aster co-founder and Hadapt skeptic Tasso Argyros (who by the way has moved on from Teradata) for even suggesting such heresy. :)

Complicating the story further:

I was less involved with Revelytix than with Hadapt (although I’m told I served as the “catalyst” for the original Teradata/Revelytix partnership). That said, Teradata — like Oracle — is always building out a data integration suite to cover a limited universe of data stores. And Revelytix’s dataset management technology is a nice piece toward an integrated data catalog.

6 Responses to “Teradata bought Hadapt and Revelytix”

  1. Daniel Abadi on July 23rd, 2014 9:27 am

    Hi Curt,

    Just one small correction: with our latest release, Hadapt no longer makes you decide whether the data is in HDFS or the SQL DBMS. Our IQ engine can now be pointed to data directly in HDFS.

  2. Daniel Abadi on July 23rd, 2014 9:33 am

    Also, just to clarify what you mean by “Hadapt pivoted” — our messaging/positioning did indeed pivot as you mentioned earlier in the paragraph (thanks in part to your advice). However, we continued to invest in our SQL-on-Hadoop solution, significantly improving the SQL support over the past year. I suspect that our core SQL-on-Hadoop product was a major component of Teradata’s decision-making process.

  3. Notes from a visit to Teradata | DBMS 2 : DataBase Management System Services on August 31st, 2014 5:18 am

    […] the acquisition of Hadapt, Teradata gets some attention from Dan Abadi. Also, they’re retaining Justin […]

  4. Teradata embraces the big data ecosystem, buys Think Big Analytics — Tech News and Analysis on September 3rd, 2014 9:00 am

    […] appears to have been a fire sale) the assets of SQL-on-Hadoop pioneer Hadapt. (Analyst Curt Monash published some good insights on both deals at the time.) Randy Lea, president of Teradata’s big data practice, said the Revelytix deal […]

  5. Ranko Mosic on September 9th, 2014 9:51 am

Now that Teradata has bought Hadapt, it would be interesting to hear what Prof. Abadi thinks now, i.e. are there any changes in the position or major premise of his article from 2012 (that Hadoop needs to compete head on with RDBMSs)? It is quite ironic that the first potential Hadoop target (Teradata) is firing back and buying one of the Hadoop products that was perhaps supposed to replace it. Or not — it is actually quite common (Oracle buying MySQL, for example).

    http://hadapt.com/blog/2012/07/24/why-database-to-hadoop-connectors-are-flawed/

  6. Fred Holahan on November 2nd, 2014 7:12 am

    @Ranko Teradata’s acquisition of Hadapt actually validates the premise of the article you cite in your post. Connector-based strategies for integrating data between Hadoop and analytic databases are, in general, slow and inefficient. Importantly, they completely miss the opportunity to combine Hadoop’s resilient, shared-nothing parallelism with the node-level throughput of analytic databases. And there is a growing population of analytic applications that could benefit from just such a combination.

    As Curt pointed out, HadoopDB (and Hadapt, its commercial successor) use PostgreSQL for node-level storage and query processing, all while exposing a SQL interface to the application tier. Imagine the possible benefits of replacing PostgreSQL with high performance columnar database nodes. The result would blend the best of Hadoop’s parallel communications framework with the proven query throughput of a column store (or read-optimized row store), without the need for clunky ETL-style connectors.
