October 27, 2009

Teradata’s nebulous cloud strategy

As the pun goes, Teradata’s cloud strategy is, well, somewhat nebulous. More precisely, for the foreseeable future, Teradata’s cloud strategy is a collection of rather disjointed parts, discussed below.

Teradata openly admits that its direction is heavily influenced by Oliver Ratzesberger at eBay. Like Teradata, Oliver and eBay favor virtual data marts over physical ones. That is, Oliver and eBay believe the ideal scenario is one in which every piece of data is stored only once, in an integrated Teradata warehouse. But eBay believes, and Teradata increasingly agrees, that users need a great deal of control over their use of this data, including the ability to import additional data into private sandboxes and join it to the warehouse data already there.

The Teradata Elastic Mart(s) Builder Viewpoint portlet automates the inclusion of outside data. If you’re already an authorized Teradata data warehouse user, you can fill in a very short form (three or so fields) to get authorization to import outside data, e.g. from a .CSV file. No fuss, little bother. Trivial as that sounds, when you combine it with Teradata’s pre-existing robust workload management tools, it creates a pretty good virtual data mart story.
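The workflow described here, loading outside data into a private sandbox table and then joining it to warehouse data in place, can be sketched in a few lines of SQL. The sketch below uses Python’s built-in sqlite3 as a stand-in for a Teradata warehouse; the table and column names are invented for illustration and are not part of any Teradata product.

```python
import csv
import io
import sqlite3

# An in-memory database stands in for the warehouse. In the Teradata
# scenario, warehouse_sales would live in the shared, integrated
# warehouse and sandbox_forecast in a user's private sandbox.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE warehouse_sales (sku TEXT, units INTEGER)")
conn.executemany("INSERT INTO warehouse_sales VALUES (?, ?)",
                 [("A1", 10), ("B2", 5)])

# Outside data arriving as a .CSV file (here, an in-memory string).
outside_csv = "sku,forecast\nA1,12\nB2,4\n"
rows = list(csv.DictReader(io.StringIO(outside_csv)))

# Load the CSV into a sandbox table, then join it to the warehouse data
# already there -- the "virtual data mart" pattern in miniature.
conn.execute("CREATE TABLE sandbox_forecast (sku TEXT, forecast INTEGER)")
conn.executemany("INSERT INTO sandbox_forecast VALUES (?, ?)",
                 [(r["sku"], int(r["forecast"])) for r in rows])

joined = conn.execute("""
    SELECT w.sku, w.units, s.forecast
    FROM warehouse_sales w
    JOIN sandbox_forecast s ON w.sku = s.sku
    ORDER BY w.sku
""").fetchall()
print(joined)  # [('A1', 10, 12), ('B2', 5, 4)]
```

The point of the pattern is that the imported data never leaves the warehouse environment: the sandbox table sits next to the integrated data, and workload management keeps the ad hoc joins from disrupting production queries.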

Spinning out and maintaining consistency with physical data marts is a different matter. Teradata doesn’t seem too sure it believes in those. And while Teradata is obviously planning to increase its capability in that regard anyway, I didn’t get a lot of detail beyond the reference to Data Mover 2.0.

5 Responses to “Teradata’s nebulous cloud strategy”

  1. John Sequeira on October 27th, 2009 3:50 pm

    Curt – you have an important typo. Teradata express is limited to a rather lame 4GB, not ‘1 terabyte only’.

    Not sure if that’s compressed or not, but this page implies uncompressed.

  2. John Sequeira on October 27th, 2009 3:57 pm

    My mistake … 4GB was for version 12.

  3. Curt Monash on October 27th, 2009 4:31 pm

    That’s right. They’ve increased it to 1 TB.

    Actually, I’m not sure whether that increase is only for the EC2 and/or VMware editions. But I don’t much care. 😉

  4. Michael McIntire on October 28th, 2009 10:05 am

    The most realistic reason for “Teradata Express” is so that you don’t have to have a bona fide Teradata platform to do development on. If you’re a consultant or developer, this is particularly important, since you likely cannot afford even a single node, much less the data center environment required to run one. It’s simply an enabler; it has nothing to do with cloud, and it’s been out there for several major versions, back to V2R5 if I remember right.

  5. Curt Monash on October 28th, 2009 4:09 pm

    Fair enough. Free test/dev software is important. And the public cloud is used mainly for test/dev these days, at least by enterprises.

    But it still contributes nothing to a cloud deployment strategy beyond a “get your feet wet” experiment (for vendor and user alike).
