February 19, 2008

The architectural assumptions of H-Store

I wrote yesterday about the H-Store project, the latest from the team of researchers who also brought us C-Store and its commercialization Vertica. H-Store is designed to drastically improve efficiency in OLTP database processing, in two ways. First, it puts everything in RAM. Second, it tries to gain an additional order of magnitude on in-memory performance versus today’s DBMS designs by, for example, taking a very different approach to ensuring ACID compliance.

Today I had the chance to talk with two more of the H-Store researchers, Sam Madden and Daniel Abadi. Our call focused on the parts I didn’t think I’d understood well before: how H-Store partitions transactions across nodes, how it handles multi-site transactions and concurrency, and whether it needs any persistent storage at all.

It’s too early in the research process for those questions to have fully definitive answers. Indeed, the guesses of the individual researchers seem in some cases to differ a bit. That said, here’s how I understand and evaluate the core H-Store assumptions and design issues at this point.

In H-Store, a “single-site” transaction is one that runs solely against data that has been partitioned to a single node in a grid. Most transactions are single-site. In the TPC-C benchmark, the 99% of inventory lookups that stay within the warehouse/district/customer hierarchy are single-site; the other 1% are not. In a real-life application that partitions solely by customer, any single-customer transaction will be single-site.
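To make the partitioning idea concrete, here’s a tiny Python sketch of how a transaction’s keys determine whether it is single-site. The four-node grid and the modulo hash are assumptions of mine, not H-Store specifics:

```python
# Hypothetical routing logic: partition by TPC-C warehouse ID across 4 nodes.
NUM_NODES = 4

def node_for(warehouse_id: int) -> int:
    """Map a partition key (here, a warehouse ID) to a grid node."""
    return warehouse_id % NUM_NODES

def is_single_site(warehouse_ids) -> bool:
    """A transaction is single-site iff every key it touches maps to one node."""
    return len({node_for(w) for w in warehouse_ids}) == 1

print(is_single_site([7]))     # True: stays within one warehouse hierarchy
print(is_single_site([7, 8]))  # False: warehouse 7 -> node 3, warehouse 8 -> node 0
```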

Many other transactions can be shoehorned into a single-site paradigm, and indeed are today. Suppose a customer wants to buy two flight segments as a unit, with the outward and return parts coming from different airlines. That’s inherently multi-site. But if you set up “escrow” pools of seat allocations reserved by the various vendors – pools that can be drawn down without a synchronous check back to the airline before each sale – you can get back to a single-site paradigm. The H-Store researchers claim such hacks are already standard in high-end OLTP, and I find that rather plausible. Reads can certainly be more complex – think of Amazon’s search across hundreds of thousands of used book store inventories – but updating data is a pretty separable task.
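Here’s a minimal sketch of the escrow trick, with invented pool sizes and names. The point is that both decrements are purely local; replenishing the pools from the airlines happens asynchronously, outside the transaction:

```python
# Hypothetical escrow pool: seats an airline has ceded to this node in advance,
# so they can be sold without a synchronous cross-site check.
class EscrowPool:
    def __init__(self, seats: int):
        self.available = seats

    def try_sell(self, n: int) -> bool:
        if self.available >= n:
            self.available -= n   # purely local update: single-site
            return True
        return False              # pool exhausted; fall back to a multi-site path

outbound = EscrowPool(seats=10)    # e.g. ceded by airline A
return_leg = EscrowPool(seats=10)  # e.g. ceded by airline B

# Selling both segments as a unit touches only this node's state.
if outbound.try_sell(1):
    if return_leg.try_sell(1):
        print("booked both legs single-site")
    else:
        outbound.available += 1   # undo the first sale if the second leg fails
```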

H-Store is focused on speeding up single-site transactions. The simplest explanation of the H-Store design philosophy is:

  1. Radically speed up single-site transactions.

  2. Don’t make multi-site transactions much slower than they are today.

Single-site transactions have predictable, very short processing times; single-threading is a great idea for them. Single-threading may not be so great for multi-site transactions, but so be it; the H-Store researchers think that trade-off is well worth making. If the project runs into trouble, it will likely be because multi-site transactions are slowed down more than the researchers now anticipate.
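To illustrate why single-threading suits that workload, here’s a hedged sketch – the data, queue, and transaction shapes are my inventions – in which one thread runs short transactions to completion against in-memory state, with no locks or latches anywhere:

```python
from collections import deque

inventory = {"item-1": 100, "item-2": 50}   # this node's partition, held in RAM

def decrement_stock(item: str, qty: int) -> bool:
    """One short, predictable single-site transaction."""
    if inventory.get(item, 0) >= qty:
        inventory[item] -= qty
        return True
    return False

# The single thread drains the queue; each transaction sees a consistent
# state, so no concurrency control is needed at all.
queue = deque([("item-1", 3), ("item-2", 5), ("item-1", 2)])
while queue:
    item, qty = queue.popleft()
    decrement_stock(item, qty)

print(inventory)   # {'item-1': 95, 'item-2': 45}
```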

You can’t get away from multi-site transactions altogether. H-Store will have a concurrency control mechanism for them; it just will be an optimistic one rather than conventional locking. As for threading, there seem to be two schools of thought right now. One says “Hey, even if going across the network takes 1 millisecond rather than 50 microseconds, that’s no big deal. Doing everything in a single thread is fine.” The other recalls Murphy’s Law. I’m in the Murphy school, and believe that single-site and multi-site transactions will wind up in different threads as the H-Store design evolves.
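For flavor, here’s a sketch of optimistic concurrency control as a general technique – not H-Store’s actual implementation, and the version-stamped store is my assumption. Read versions without locking, do the work, then validate at commit time and make the caller retry on conflict:

```python
# key -> (version, value); versions let commit-time validation detect conflicts.
store = {"acct-A": (1, 100), "acct-B": (1, 200)}

def transfer_optimistic(src: str, dst: str, amount: int) -> bool:
    # Read phase: snapshot versions and values, taking no locks.
    v_src, bal_src = store[src]
    v_dst, bal_dst = store[dst]
    if bal_src < amount:
        return False
    # Validate-and-write phase (atomic in a real engine): commit only if
    # no concurrent transaction bumped either version since the read phase.
    if store[src][0] != v_src or store[dst][0] != v_dst:
        return False                      # conflict detected: caller retries
    store[src] = (v_src + 1, bal_src - amount)
    store[dst] = (v_dst + 1, bal_dst + amount)
    return True

print(transfer_optimistic("acct-A", "acct-B", 30))   # True
print(store)   # {'acct-A': (2, 70), 'acct-B': (2, 230)}
```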

In H-Store, transactions are all stored procedures. The H-Store researchers assert that in high-performance OLTP systems today, most transactions already are stored procedures, so H-Store’s reliance on them is no big deal. I don’t completely buy that. But it is obviously true that a transaction is a natural unit of program modularity. So forcing each transaction into a stored procedure doesn’t seem like a great hardship – although some programmers will surely rant about it.
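A toy sketch of the stored-procedure-only model, with a hypothetical registry: clients can only name a predeclared procedure and pass parameters; they never ship ad-hoc statements mid-transaction:

```python
PROCEDURES = {}

def stored_procedure(fn):
    """Register a function as an invocable, predeclared transaction."""
    PROCEDURES[fn.__name__] = fn
    return fn

balances = {"cust-1": 500}

@stored_procedure
def debit(customer: str, amount: int) -> bool:
    """One complete transaction, known to the engine in advance."""
    if balances.get(customer, 0) >= amount:
        balances[customer] -= amount
        return True
    return False

def invoke(name: str, *args):
    """The whole client interface: a procedure name plus parameters."""
    return PROCEDURES[name](*args)

print(invoke("debit", "cust-1", 120))   # True
print(balances)                         # {'cust-1': 380}
```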

H-Store will probably wind up with persistent storage. I kind of see the theoretical justification for having data reside solely in a set of production RAM copies, some of which are separated by “more than the width of a hurricane.” Still, I think persistent storage is needed, for two reasons.

First, storing data solely in your run-time systems leaves you with too many single points of logical failure. If some logical error is introduced – whether via an attack or just human error – you can lose data no matter how many times it’s replicated, because every replica will faithfully apply the same mistake. So it is safer to also copy the data and store it in a disconnected way. Even logic aside, eliminating persistent storage would introduce a major fear factor – and few markets are more paranoid and safety-conscious than high-end transaction processors.

Just to be clear: I’m not saying transaction rollback or even crash recovery will ever touch disk. I’m just saying that something will get periodically persisted, probably on a checkpoint/snapshot basis.
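Something like the following is what I have in mind – a hedged sketch with invented file names and a JSON format – where the in-memory state is periodically serialized to disk as an atomic snapshot, disconnected from the running replicas:

```python
import json
import os
import tempfile
import time

state = {"cust-1": 380, "cust-2": 975}   # the node's in-memory data

def checkpoint(data: dict, path: str) -> None:
    """Write a snapshot atomically: temp file first, then rename into place."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"taken_at": time.time(), "data": data}, f)
    os.replace(tmp, path)   # atomic rename: readers never see a half-written file

def restore(path: str) -> dict:
    """Reload the last snapshot, e.g. after a logical error wipes the replicas."""
    with open(path) as f:
        return json.load(f)["data"]

checkpoint(state, "snapshot.json")
print(restore("snapshot.json"))   # {'cust-1': 380, 'cust-2': 975}
```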

