More and more, I’m hearing about reliability, resilience, and uptime as criteria for choosing among data warehouse appliances and analytic DBMS. Possible reasons include:
- More data warehouses are mission-critical now, with strong requirements for uptime.
- Maybe reliability is a bit of a luxury, but the products are otherwise good enough now that users can afford to be a bit pickier.
- Vendor marketing departments are blowing the whole subject out of proportion.
The truth probably lies in a combination of all these factors.
Making the most fuss on the subject is probably Aster Data, who like to talk at length both about mission-critical data warehouse applications and Aster’s approach to making them robust. But I’m also hearing from multiple vendors that proofs-of-concept now regularly include stress tests against failure, in what can be – and indeed has been – called the “baseball bat” test. Prospects are encouraged to go on a rampage, pulling out boards, disk drives, switches, power cables, and almost anything else their devious minds can come up with to cause computer carnage. The goal is to see whether systems keep running, how well they perform when impaired, or how fast they recover if they indeed do go down.
Teradata benchmark honcho Gene Erickson definitely encourages this kind of behavior. Kognitio – which sees “resilience” as a competitive advantage, and reports that prospects increasingly care about it – is friendly to such behavior as well. And that’s surely not a complete list.
Of course, some data warehouse users have cared about robustness for years, even up to the point of replication and hot standbys. Even so, I think there’s recently been a change in the market. For example, when Vertica – and before that its research predecessor C-Store – was rolled out, Mike Stonebraker repeatedly called attention to the potential for implementing them with high redundancy, but I and most other observers basically yawned.
Selecting analytic DBMS products is even more fun than it used to be.