DBMS product categories

Analysis of database management technology in specific product categories.

July 24, 2006

Firebird, née Interbase

Apparently, Interbase has morphed into Firebird. Interbase was an early RDBMS, owned by Borland, occasionally touted as the next great DBMS contender, and early to be open-sourced. That’s about as much as I remember about it. There were a couple of features on which it was earlier than the big boys — BLOBs, maybe? — but I imagine that’s very old news by now. And indeed the product doesn’t seem to be terribly up to date at this point.

So are there any Firebird partisans out there who’d like to tell me what’s so great about Firebird? Thanks in advance, and I’m especially grateful for the flame-free nature of your expected contribution.

July 12, 2006

Ingres’s questionable target market

Eric Lai of Computerworld interviewed Roger Burkhardt, new CEO of Ingres, and obviously did a bang-up job of asking him the tough “Who really are your target customers, and why would they buy from you?” questions. The answer, so far as I can tell, is “Large financial institutions writing new RDBMS apps that don’t need up-to-date functionality and don’t want to pay Oracle’s license fees.” Up to a point, that makes sense. Except for the “financial institutions” qualifier, it’s actually pretty obvious. I can’t imagine why any other new users would buy Ingres, which has been ever the bridesmaid, never the bride for the past 20 years.
Read more

July 9, 2006

OS-DBMS integration

A Slashdot thread tonight on the possibility of Oracle directly supporting Linux got me thinking – integration of DBMS and OS is much more common than one might at first realize, especially in high-end data warehousing.

Think about it.

This trend isn’t quite universal, of course. Open systems DB2 and Sybase and Progress and MySQL and so on are quite OS-independent, and of course you could dispute my characterization of Oracle as being “integrated” with the underlying OS. But in performance-critical environments, DBMS are often intensely OS-aware.

And of course this dovetails with a point I noted in another thread – DBMS are (or need to become) increasingly aware of chip architecture details as well.

July 3, 2006

DATallegro’s technical strategy

Few areas of technology boast more architectural diversity than data warehousing. Mainframe DB2 is different from Teradata, which is different from the leading full-spectrum RDBMS, which are different from disk-based appliances, which are different from memory-centric solutions, which are different from disk-based MOLAP systems, and so on. What’s more, no two members of the same group are architected the same way; even the market-leading general purpose DBMS have important differences in their data warehousing features.

The hot new vendor on the block is DATallegro, which is stealing much of the limelight formerly enjoyed by data warehouse appliance pioneer Netezza. (After some good early discussions, Netezza abruptly reneged on a promise a year ago to explain more about the workings of its technology to me, and I’ve hardly heard from them since. Yes, they’re still much bigger than DATallegro, but I suspect they’ve hit some technical roadblocks, and their star is fading.)

Read more

June 28, 2006

Good DATallegro/Intel white paper

I really like this short white paper, which carries the personal byline of Stuart Frost. Stuart is DATallegro’s CEO, and also the guy who does analyst relations for them (at least in my case). Part of it just does a concise job of spelling out some of the DATallegro story. But the rest is about the comparison between Intel’s new dual-core “Woodcrest” Xeons and their single-core predecessors. Not only does it give credible statistics; it also explains the reasons behind them.

Read more

May 22, 2006

Data warehouse appliances

If we define a “data warehouse appliance” as “a special-purpose computer system, with appliance administrability, that manages a data warehouse,” then there are two major contenders: Netezza and DATAllegro, both startups, both with a small number of disclosed customers. Past contenders would include Teradata and White Cross (which seems to have just merged into Kognitio), but neither would admit to being in that market today. (I suspect this is a mistake on Teradata’s part, but so be it.) IBM with DB2 on the z-Series wouldn’t be properly regarded as an appliance player either, although IBM is certainly conscious of appliance competition. And SAP’s BI Accelerator does not persist data at this time.

In principle, the Netezza and DATAllegro stories are similar — take an established open source RDBMS*, build optimized hardware to run it, and optimize the software configuration as well. Much of the optimization is focused on getting data on and off disk sequentially, minimizing any random accesses. This is why I often refer to data warehouse appliances as being the best alternative to memory-centric data management. Beyond that, the optimizations by the two vendors differ considerably.
*Netezza uses PostgreSQL; DATAllegro uses Ingres.
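The sequential-vs-random tradeoff these appliances are built around is easy to quantify. Here is a back-of-envelope sketch in Python; the disk figures (60 MB/sec sustained transfer, 5 ms per random access, 8 KB pages) are my own illustrative assumptions for mid-2000s hardware, not numbers from either vendor:

```python
# Back-of-envelope: why appliances favor sequential scans over random I/O.
# All disk parameters below are illustrative assumptions, not vendor specs.

SEQ_MB_PER_SEC = 60   # assumed sustained sequential transfer rate per disk
SEEK_SEC = 0.005      # assumed seek + rotational latency per random read
PAGE_KB = 8           # assumed page size fetched per random access

def seq_scan_seconds(gigabytes: float) -> float:
    """Time to read a table of the given size in one sequential pass."""
    return gigabytes * 1024 / SEQ_MB_PER_SEC

def random_read_seconds(gigabytes: float) -> float:
    """Time to fetch the same data one 8 KB page at a time, randomly."""
    pages = gigabytes * 1024 * 1024 / PAGE_KB
    return pages * SEEK_SEC

size_gb = 100
print(f"Sequential scan of {size_gb} GB: {seq_scan_seconds(size_gb) / 60:.1f} minutes")
print(f"Random page reads of {size_gb} GB: {random_read_seconds(size_gb) / 3600:.1f} hours")
```

Even with these rough assumptions, a full sequential pass over 100 GB takes under half an hour, while fetching the same data page-at-a-time takes the better part of a day — which is why these products lay data out for streaming scans.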

Hmm. I don’t feel like writing more on this subject at this very moment, yet I want to post something urgently because there’s an IOU in my Computerworld column today for it. OK. More later.

May 15, 2006

Philip Howard likes Viper

Philip Howard likes DB2’s Viper release. Truth be told, Philip Howard seems to like most products, whether they deserve it or not. But in this case, I think his analysis is spot-on.

May 13, 2006

Hot times at InterSystems

About a year ago, I wrote a very favorable column focusing on InterSystems’ OODBMS Caché. Caché appears to be the one OODBMS product that has good performance even in a standard disk-centric configuration, notwithstanding that random pointer access seems to be antithetical to good disk performance.

InterSystems also has a hot new Caché-based integration product, Ensemble. They attempted to brief me on it (somewhat belatedly, truth be told) last Wednesday. Through no fault of the product, however, the briefing didn’t go so well. I still look forward to learning more about Ensemble.

May 10, 2006

White paper on memory-centric data management — excerpt

Here’s an excerpt from the introduction to my new white paper on memory-centric data management. I don’t know why WordPress insists on showing the table gridlines, but I won’t try to fix that now. Anyhow, if you’re interested enough to read most of this excerpt, I strongly suggest downloading the full paper.

Introduction

Conventional DBMS don’t always perform adequately.

Ideally, IT managers would never need to think about the details of data management technology. Market-leading, general-purpose DBMS (DataBase Management Systems) would do a great job of meeting all information management needs. But we don’t live in an ideal world. Even after decades of great technical advances, conventional DBMS still can’t give your users all the information they need, when and where they need it, at acceptable cost. As a result, specialty data management products continue to be needed, filling the gaps where more general DBMS don’t do an adequate job.

Memory-centric technology is a powerful alternative.

One category on the upswing is memory-centric data management technology. While conventional DBMS are designed to get data on and off disk quickly, memory-centric products (which may or may not be full DBMS) assume all the data is in RAM in the first place. The implications of this design choice can be profound. RAM access speeds are up to 1,000,000 times faster than random reads on disk. Consequently, whole new classes of data access methods can be used when the disk speed bottleneck is ignored. Sequential access is much faster in RAM, too, allowing yet another group of efficient data access approaches to be implemented.
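To make that latency ratio concrete, here is a one-line calculation using assumed round numbers — roughly 10 nanoseconds for a RAM access versus roughly 10 milliseconds for a random disk read. These are my illustrative figures, not measurements:

```python
# Illustrative latencies, chosen as round numbers; real hardware varies widely.
RAM_ACCESS_SEC = 10e-9    # ~10 ns per RAM access (assumed)
DISK_RANDOM_SEC = 10e-3   # ~10 ms per random disk read, seek + rotation (assumed)

ratio = DISK_RANDOM_SEC / RAM_ACCESS_SEC
print(f"A random disk read is ~{ratio:,.0f}x slower than a RAM access")
```

Under these assumptions the gap is a factor of a million, which is the source of the "up to 1,000,000 times faster" figure above.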

It does things disk-based systems can’t.

If you want to query a used-book database a million times a minute, that’s hard to do in a standard relational DBMS. But Progress’ ObjectStore gets it done for Amazon. If you want to recalculate a set of OLAP (OnLine Analytic Processing) cubes in real-time, don’t look to a disk-based system of any kind. But Applix’s TM1 can do just that. And if you want to stick DBMS instances on 99 nodes of a telecom network, all persisting data to a 100th node, a disk-centric system isn’t your best choice – but Solid’s BoostEngine should get the job done.

Memory-centric data managers fill the gap, in various guises.

Those products are some leading examples of a diverse group of specialist memory-centric data management products. Such products can be optimized for OLAP or OLTP (OnLine Transaction Processing) or event-stream processing. They may be positioned as DBMS, quasi-DBMS, BI (Business Intelligence) features, or some utterly new kind of middleware. They may come from top-tier software vendors or from the rawest of startups. But they all share a common design philosophy: Optimize the use of ever-faster semiconductors, rather than focusing on (relatively) slow-spinning disks.

They have a rich variety of benefits.

For any technology that radically improves price/performance (or any other measure of IT efficiency), the benefits can be found in three main categories:

  • Doing the same things you did before, only more cheaply;
  • Doing the same things you did before, only better and/or faster;
  • Doing things that weren’t technically or economically feasible before at all.

For memory-centric data management, the “things that you couldn’t do before at all” are concentrated in areas that are highly real-time or that use non-relational data structures. Conversely, for many relational and/or OLTP apps, memory-centric technology is essentially a much cheaper/better/faster way of doing what you were already struggling through all along.

Memory-centric technology has many applications.

Through both OEM and direct purchases, many enterprises have already adopted memory-centric technology. For example:

  • Financial services vendors use memory-centric data management throughout their trading systems.
  • Telecom service vendors use memory-centric data management in multiple provisioning, billing, and routing applications.
  • Memory-centric data management is used to accelerate web transactions, including in what may be the most demanding OLTP app of all — Amazon.com’s online bookstore.
  • Memory-centric data management technology is OEMed in a variety of major enterprise network management products, including HP OpenView.
  • Memory-centric data management is used to accelerate analytics across a broad variety of industries, especially in such areas as planning, scenarios, customer analytics, and profitability analysis.

May 8, 2006

Memory-centric data management whitepaper

I have finally finished and uploaded the long-awaited white paper on memory-centric data management.

This is the project for which I originally coined the term “memory-centric data management,” after realizing that the prevalent term “in-memory DBMS” creates all sorts of confusion about how and whether data persists on disk. The white paper clarifies and updates points I have been making about memory-centric data management since last summer. Sponsors included:

If there’s one area in my research I’m not 100% satisfied with, it may be the question of where the true hardware bottlenecks to memory-centric data management lie (it’s obvious that the bottleneck to disk-centric data management is random disk access). Is it processor interconnect (around 1 GB/sec)? Is it processor-to-cache connections (around 5 GB/sec)? My prior pronouncements, the main body of the white paper, and the Intel Q&A appendix to the white paper may actually have slightly different spins on these points.
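The candidate bottlenecks can at least be compared arithmetically. Here is a sketch using the rough bandwidth figures quoted above, applied to an assumed 64 GB working set (my example size, not one from the white paper):

```python
# How long would one pass over a RAM-resident working set take through each
# candidate bottleneck? Bandwidths are the rough figures quoted in the post.

INTERCONNECT_GB_S = 1.0   # processor interconnect (~1 GB/sec, as quoted)
CACHE_LINK_GB_S = 5.0     # processor-to-cache connection (~5 GB/sec, as quoted)

def transfer_seconds(gigabytes: float, gb_per_sec: float) -> float:
    """Time to stream the working set once through a link of given bandwidth."""
    return gigabytes / gb_per_sec

working_set_gb = 64  # assumed example size
for name, bw in [("interconnect", INTERCONNECT_GB_S), ("cache link", CACHE_LINK_GB_S)]:
    secs = transfer_seconds(working_set_gb, bw)
    print(f"One pass over {working_set_gb} GB via {name}: {secs:.1f} sec")
```

The factor-of-five gap between the two links is exactly why it matters which one binds first: a memory-centric scan spends most of its time on whichever path is slower.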

And by the way — the current hard limit on RAM/board isn’t 2^64 bytes, but a “mere” 2^40. But don’t worry; it will be up to 2^48 long before anybody actually puts 256 gigabytes under the control of a single processor.
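For readers who don’t think in powers of two, the arithmetic works out as follows — 2^40 bytes is one tebibyte, 2^48 bytes is 256 tebibytes, and 256 gigabytes is only 2^38 bytes:

```python
# Address-space arithmetic for the limits mentioned above.
TIB = 2 ** 40  # one tebibyte in bytes

print(f"2^40 bytes = {2**40 // TIB} TiB")                     # the current hard limit
print(f"2^48 bytes = {2**48 // TIB} TiB")                     # the next step up
print(f"256 GiB = 2^{(256 * 2**30).bit_length() - 1} bytes")  # well under both limits
```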
