July 31, 2010

Teradata, Xkoto Gridscale (RIP), and active-active clustering

Having gotten a number of questions about Teradata’s acquisition of Xkoto, I leaned on Teradata for an update, and eventually connected with Scott Gnau. Takeaways included:

Frankly, I’m disappointed at the struggles of clustering efforts such as Xkoto Gridscale or Continuent’s pre-Tungsten products, but if the DBMS vendors meet the same needs themselves, that’s OK too.

The logic behind active-active database implementations actually seems pretty compelling: 

Analytic DBMS vendors pretty much all need to offer this. (Possible exception: If they have a data-mart-only positioning so extreme that customers will never care about any form of failover.) That said, I must confess to not having done a good job of tracking who does or doesn’t have which features in this area to date; informative comments to this post in that regard would be much appreciated!

Comments

9 Responses to “Teradata, Xkoto Gridscale (RIP), and active-active clustering”

  1. Michael McIntire on July 31st, 2010 1:06 pm

    Having tried to implement Active-Active in several shops, the last being several efforts at eBay, I think we need to find a different architectural framework.

    Dual-Active, or what I later termed Multi-Active, puts the onus on the application or transformation-integration team to define the standards, processes, and tooling needed to implement a very difficult and at times less-than-transparent set of database state conditions.

    In this framework the onus for synchronization is put on the application, not on the database infrastructure. That is not such a bad thing when you have a strict hierarchy of transforms, or a small number of them. But when you have tens of thousands of event-driven transforms and hundreds of separate concurrent chains of logic, it is simply not practical.

    Other high-level problems that tend to force the app to deal with it are the high variability of a unit of work (UoW) in typical EDW systems, combined with highly mixed workloads. OLTP systems do not typically have UoWs that run for hours mixed right in with sub-second queries.

    Teradata’s efforts to find tooling that simplifies this process are to be applauded, but I think the root cause of our problems is that we’re putting the responsibility for transaction synchronization in the wrong part of the infrastructure.

    I am not much for complaining without a solution, but in this case we’re chasing an extremely high-cost solve without rationalizing the human and time-to-market costs against the benefits. Yeah, there are some really notable implementations. Now find me one that scaled to tens of thousands of production entities and hundreds of terabytes of user data.

    Bluntly: The Dual Active method does not scale.

  2. Curt Monash on July 31st, 2010 6:54 pm

    Michael,

    If I understand you correctly, you’re saying that databases which are in sync for the purposes of one kind of query might not be in sync for another — and by the time you’ve dealt with that at the application level, the imagined benefits of active-active are out the window.

    If so, my answer would be that it’s the DBMS vendors’ job to make sure that a database looks logically like ONE database to the application, whatever that takes. Then the scalability question is reduced to “Are we really sending enough of the work to what otherwise would be an idle system to justify all the cost and hassle of the tooling?”
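
    A minimal sketch of what that middleware-layer “one logical database” might look like, written against generic Python DB-API connections. The class name, connection handling, and round-robin read policy are illustrative assumptions, not a description of how Gridscale actually worked; real middleware must also handle statement ordering, failures, and resynchronization:

    ```python
    class DualActiveFacade:
        """Present two synchronized database copies as one logical database."""

        def __init__(self, primary, secondary):
            # primary/secondary: any two DB-API 2.0 connections (hypothetical).
            self.copies = [primary, secondary]
            self._next_reader = 0

        def execute_write(self, sql, params=()):
            # Fan every write out to both copies so they stay in step.
            for conn in self.copies:
                conn.cursor().execute(sql, params)
                conn.commit()

        def execute_read(self, sql, params=()):
            # Round-robin reads across the copies -- this is the "otherwise
            # idle system" whose capacity the tooling is meant to harvest.
            conn = self.copies[self._next_reader]
            self._next_reader = (self._next_reader + 1) % len(self.copies)
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
    ```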

    But anyway, you sure have put in perspective why third-party offerings find it very hard to get traction. Thanks!

    CAM

  3. Paul Johnson on August 5th, 2010 7:52 am

    The dual-active vision for Teradata was announced ~5 years ago at a Teradata Partners event in the US (can’t remember which one). As ever, Stephen Brobst gave a very entertaining pitch on the subject.

    In the years that followed there were several notable attempts to deploy this vision. The issues are many and complex, as the early adopters found out.

    The real question for me here is: what problem are we trying to solve by subscribing to the dual-active vision? Forget the “how is it delivered” question and ask “why do it?”

    For every very large Teradata user such as eBay that needs dual-active for resilience and probably load balancing, there must be 100 other sites that just want/need/have KPI reporting, customer insight, etc., and are happy with the very high uptime and workload management capabilities that Teradata users enjoy.

    I don’t see dual-active as being of wide interest to the Teradata community, at least not yet. Of the dozens of Teradata sites where I’ve consulted over 20 years, it is only in play in 2-3 places. Cost is clearly an issue, but the DW is rarely *genuinely* ‘mission critical’.

    Anyway, Michael’s insights shine a light on the challenge at a complex deployment like eBay.

    For the record, I don’t think dual-active is achievable at the application level in anything other than a simple DW deployment.

  4. Teradata’s future product strategy | DBMS2 -- DataBase Management System Services on August 12th, 2010 6:37 am

    […] thinks Michael McIntire’s condemnation of Active-Active architectures is overstated. That […]

  5. Dan Cutler on August 31st, 2010 10:36 am

    I’m pursuing a relatively simple A/B solution. An A/B architecture is one in which one copy of a database is off-line being loaded while the second copy is used for BI queries. My other requirement is to have one system that is used for ad-hoc queries and another that is constantly generating reports in a batch-style process. So that’s 2 TD boxes, each with A/B tables, for a total of 4 copies of the same databases.
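
    In sketch form, that role flip can be tracked with something as simple as the following (Python; the database names are hypothetical, and on Teradata the “pointer” might in practice be a view or synonym repointed at the freshly loaded side):

    ```python
    class ABPair:
        """One side serves BI queries while the other is off-line being loaded."""

        def __init__(self):
            self.live, self.offline = "sales_a", "sales_b"  # hypothetical names

        def query_target(self):
            return self.live     # BI tools always read the live side

        def load_target(self):
            return self.offline  # ETL always writes the off-line side

        def swap(self):
            # Flip roles once a load completes; subsequent queries see fresh data.
            self.live, self.offline = self.offline, self.live
    ```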

    The question now is how to sync one side of this system (either the A or the B side) in the simplest way possible.

    I have scripting that pipes arcmain dumps into arcmain loads over a named pipe, which I believe is likely the simplest solution, but I’m wondering if there is a better way. It seems I might be able to take advantage of the fact that I will always have a pair of databases that are off-line: if I know exactly which tables changed, I could apply some sort of transaction log to the secondary half, rather than literally dropping and re-creating an entire table when perhaps only 10% of it changed during the last ETL load.
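
    In sketch form, the named-pipe dump/load chain might look like this (Python; the arcmain invocations are hypothetical placeholders rather than real syntax):

    ```python
    import os
    import subprocess
    import tempfile

    # Placeholder commands -- hypothetical stand-ins for the real arcmain
    # "dump the live side" and "load the off-line side" invocations.
    DUMP_CMD = ["arcmain_dump", "--tables", "sales.*", "--to"]
    LOAD_CMD = ["arcmain_load", "--tables", "sales.*", "--from"]

    workdir = tempfile.mkdtemp()
    fifo = os.path.join(workdir, "ab_copy.fifo")
    os.mkfifo(fifo)  # POSIX named pipe: no intermediate dump file on disk

    # Both processes run concurrently; each blocks on opening the pipe until
    # the other side does, so the dump streams straight into the load.
    load = subprocess.Popen(LOAD_CMD + [fifo])
    dump = subprocess.Popen(DUMP_CMD + [fifo])

    dump.wait()
    load.wait()
    os.remove(fifo)
    os.rmdir(workdir)
    ```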

    Perhaps I could use some sort of proxy that duplicates the transactions (inserts, primarily) to the secondary system so that it is updated in near real time.

    Any thoughts or suggestions?

  6. Teradata’s Future Product Strategy – The Intelligent Enterprise Blog on September 20th, 2010 4:43 am

    […] thinks Michael McIntire’s condemnation of Active-Active architectures is overstated. That […]

  7. Scaling Up or Scaling Out? Part Two | Brent Ozar - Too Much Information on March 9th, 2011 9:17 am

    […] just so darned difficult.  Xkoto Gridscale sounded like the brightest hope in a while, but that was discontinued when Teradata bought Xkoto.  I don’t know of any other reliable technology that I’d trust to pull it off, and […]

  8. Robert Wagner on June 14th, 2011 9:48 pm

  9. Curt Monash on June 15th, 2011 12:40 am

    Thanks, Robert!

    CAM
