I think Teradata’s future product strategy is coming into focus. I’ll start by outlining some particulars, and then show how I think they all tie together.
The immediate hook here is that I had a short conversation with Scott Gnau of Teradata yesterday, triggered by Teradata’s acquisition of Kickfire’s assets. Takeaways from that part included:
- The acquisition is all about Kickfire’s data pipelining technology.
- Scott (in my opinion rightly) thinks the technology isn’t especially tied to Kickfire’s particular DBMS architecture (fairly vanilla columnar).
- No decision has been made about whether the right vehicle for this technology is an FPGA (Field Programmable Gate Array), conventional Intel CPU, RAM, etc.
If you want to handicap Teradata’s future data pipelining strategy, you might note that:
- Kickfire’s own choice – and hence its existing implementation – is an FPGA.
- VectorWise’s approach to pipelining is Intel-based, apparently at the cost of being closely tied to specific generations of Intel CPUs.
- XtremeData’s approach to pipelining is FPGA-based.
- Teradata has a lot more development resources than any of those other companies, as well as important existing products, and hence has both means and motive to shoehorn new technology into older system designs.
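For readers who haven’t seen the term in this generic sense: data pipelining means streaming rows through a chain of operators without ever materializing the intermediate result sets, and whether those operators run on an FPGA or a conventional CPU is an implementation choice. A minimal sketch of the idea in Python, using generators — purely illustrative, with no connection to Kickfire’s or Teradata’s actual internals:

```python
# Illustrative only: a generator-based operator pipeline.
# Rows flow one at a time through scan -> filter -> aggregate,
# so no intermediate result set is ever materialized in full.

def scan(table):
    for row in table:          # source operator: emit rows lazily
        yield row

def filter_op(rows, predicate):
    for row in rows:           # pass through only the matching rows
        if predicate(row):
            yield row

def sum_op(rows, column):
    total = 0
    for row in rows:           # sink operator: consume the stream
        total += row[column]
    return total

sales = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 250},
    {"region": "east", "amount": 75},
]

pipeline = filter_op(scan(sales), lambda r: r["region"] == "east")
print(sum_op(pipeline, "amount"))  # 175
```

The performance argument for doing this in specialized hardware is that each operator stage can run concurrently on data as it streams past, rather than taking turns on a general-purpose CPU.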
While I had Scott on the phone, I brought up a few other subjects too. Highlights included:
- Teradata’s Flash-based appliance is doing just fine in beta test and customer POCs (Proofs of Concept).
- Other kinds of Teradata appliance are not inconceivable.
- Scott thinks Michael McIntire’s condemnation of Active-Active architectures is overstated. That said,
- Scott does acknowledge a need for greater Active-Active scalability, and suggests that the reason Xkoto’s current products are being discontinued is their lack of scaling.
- Scott seems quietly confident the scaling will get done.
- Scott is emphatic that Teradata is not going to go to a two-tier architecture. In particular, the point of splitting storage/lightweight database processing and heavyweight database processing on separate tiers is generally to save bandwidth, and Teradata’s BYNET is typically less than 10% loaded.
- Scott didn’t dispute my claim that this all suggests Teradata Virtual Storage is the future, at the expense of a rigid delineation among specific use-case-focused product lines.
- Unlike Netezza or Aster, Teradata doesn’t seem to plan analytic capability that works outside the UDF (User Defined Function) framework. However, Scott noted that Teradata has long had the capability that Aster and Netezza now also have of letting you run analytic code either in “protected mode” (if the process fails the whole database doesn’t crash) or in the database kernel (best performance, if you’re sufficiently confident in the code’s stability to take the risk). Scott also spoke of the release later this quarter of Teradata FastPath, which will offer yet better performance (however, there’s a gotcha to Teradata FastPath that’s still NDA).
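The protected-mode tradeoff Scott describes is the classic in-process vs. out-of-process one: code running inside the database kernel is fast but a crash there is a crash of the database, while code fenced off in its own process can fail without taking the host down. A hedged conceptual sketch in Python, using a child process as the “protected” sandbox — this illustrates the general idea only and has nothing to do with Teradata’s actual UDF API:

```python
# Conceptual sketch: run untrusted analytic code out-of-process so a
# hard crash can't take down the host. Not Teradata's API; just the idea.
import os
from concurrent.futures import ProcessPoolExecutor

def run_protected(func, arg):
    """Run func(arg) in a child process, like a 'protected mode' UDF.

    A hard crash in the child breaks only the worker pool; the caller
    (standing in for the database) survives and gets None back.
    """
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, arg)
        try:
            return future.result()
        except Exception:       # includes BrokenProcessPool on a crash
            return None

def safe_udf(x):
    return x * 2

def crashing_udf(x):
    os._exit(1)  # simulate a hard crash, e.g. a wild pointer in C code

if __name__ == "__main__":
    print(run_protected(safe_udf, 21))       # 42
    print(run_protected(crashing_udf, 21))   # None: crash was contained
```

Running the function in-process instead would save the interprocess round trip — the analogue of kernel-mode execution — at the price of sharing the host’s fate with the UDF.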
Putting all that together with the rest of what we know about Teradata, I’m going to call out three pillars of Teradata’s long-term product strategy:
- Same fundamentals as always. Teradata’s core product strategy is:
- Single DBMS, capable of meeting all analytic needs while running in a single instance, usually running on …
- … proprietary hardware …
- … built from conservatively-chosen parts.
- Selective vertical application stack. No matter how horizontally-oriented they are, many companies that have been in the analytic technology business for a while wind up with some vertical applications. It sort of just happens. Teradata is no exception. Teradata also likes to sell services to its product customers, and some of those are quite vertical-aware.
- Mutable, modular platform. This is what I highlighted above. Note that it’s philosophically attuned with the one-system-does-everything approach Teradata prefers. More subtly, please also note that it goes well with customer-by-customer price customization, which is almost a must for Teradata given the Innovator’s Dilemma kind of pricing box it finds itself in.
So far, that’s not too exciting, except in the details of how Teradata’s engineers make it all work. But there’s a fourth pillar to Teradata’s technical strategy as well, and it’s a wild card: tight partnerships. Every time I talk with Teradata hardware chief Carson Schmidt, he seems excited about some particular version of a part or other – sometimes from a reasonably established vendor (once it was LSI Logic), sometimes from a tiny one (notably the “stealth” start-up on which Teradata bet its first solid-state product). In the future, I expect tight business intelligence partnerships as well. Cognos BI will be increasingly integrated with IBM’s DBMS and hardware; Business Objects’ BI will increasingly be integrated with SAP’s applications; and Oracle’s BI will eventually be integrated with everything. How do you compete with that if you’re Microstrategy? Well, you try to have a superior product, of course – but you also partner as closely with DBMS vendors as you can, an approach Microstrategy has already begun. Predictive analytics stalwart SAS, of course, is on a partnership binge as well.
Teradata has a larger installed base than almost all its competitors, and enjoys richer third-party software and service support as a result. But I suspect that going forward, for Teradata to remain a leading competitor at price points it is willing to accept, Teradata’s “ecosystem” advantages will need to ratchet up one or several notches.