December 29, 2008

Ordinary OLTP DBMS vs. memory-centric processing

A correspondent from China wrote in to ask about products that matched the following application scenario:

… a real-time inventory control system which has the following requirements — basically it needs to provide high write-through-rate, and it needs little if any indexing functionality.

1) a central control system records/updates the inventory data (number/weight, etc.) at each room/rack — there are thousands of racks/rooms

2) sensors (at different racks) also report/update the temperature to the central control system, at a rate of approximately 1~2 updates/min — as there are thousands of sensors, the update throughput needs to be high given the scalability requirement.

3) When some problems happen, we need to roll back the logs (to replay the events and diagnose the root cause).
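The log-and-replay idea in requirement 3 can be sketched in a few lines — keep current state in memory, append every update to an ordered event log, and rebuild state from the log when diagnosing a problem. This is plain Python with hypothetical names, a sketch of the pattern rather than any vendor's API:

```python
import time

class InventoryStore:
    """Sketch: in-memory current state plus an append-only event log.
    A real system would write the log to disk for durability."""

    def __init__(self):
        self.state = {}   # rack_id -> latest reading
        self.log = []     # ordered event history, kept for replay

    def update(self, rack_id, reading):
        # Log first, then apply to the in-memory "table".
        event = {"ts": time.time(), "rack": rack_id, "reading": reading}
        self.log.append(event)
        self.state[rack_id] = reading

    def replay(self, upto_ts=None):
        """Rebuild state from the log, optionally only up to a point
        in time, to reconstruct what the system looked like then."""
        state = {}
        for ev in self.log:
            if upto_ts is not None and ev["ts"] > upto_ts:
                break
            state[ev["rack"]] = ev["reading"]
        return state

store = InventoryStore()
store.update("rack-17", {"weight_kg": 120, "temp_c": 4.5})
store.update("rack-17", {"weight_kg": 118, "temp_c": 4.7})
assert store.replay() == store.state  # full replay reproduces current state
```

Replaying the full log should always reproduce the current in-memory state; replaying up to an earlier timestamp reconstructs the state at that moment, which is what root-cause diagnosis needs.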

His questions included:

Memory-centric DBMS or complex event/stream processing (CEP)? Given that the purpose is to record data, and that he wants to record all data rather than engage in immediate data reduction, true DBMS seems like the way to go.

What are the good in-memory DBMS alternatives anyway? He thought McObject’s eXtremeDB was dominant in the market, which surprised me (although McObject does seem to have made a push in Asia). I know long-time leaders TimesTen and solidDB have, since their acquisitions by Oracle and IBM respectively, pulled back from the standalone market. (Their new owners are more interested in front-end caching for Oracle and DB2 respectively.) But I didn’t think the pullback had been that — as it were — extreme.

And while my correspondent didn’t ask this, I’ll add — should he maybe just go with an ordinary DBMS anyway? A couple thousand updates per minute isn’t that forbidding. On the other hand, it might be hard to achieve with conventional DBMS on super-cheap hardware. And a memory-centric alternative that only logs to disk in near-real-time might be plenty good enough for any analytics they want to do.
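For concreteness, here is the back-of-envelope arithmetic behind "a couple thousand updates per minute isn't that forbidding," assuming 5,000 sensors at the upper end of the stated 1~2 updates/min (the specific sensor count is my assumption, consistent with "thousands"):

```python
sensors = 5000            # "thousands of sensors" -- assumed figure
updates_per_min = 2       # upper end of the stated 1~2 updates/min

total_per_min = sensors * updates_per_min   # sensor updates per minute
per_second = total_per_min / 60             # average updates per second

print(total_per_min, round(per_second, 1))  # prints: 10000 166.7
```

On the order of a couple hundred writes per second on average — well within reach of a conventional disk-based DBMS on reasonable hardware, though burstiness and cheap hardware could change the picture.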

He was quite frank about wanting to get experience with leading-edge technology, with an eye to deploying it in other use cases.  So it’s reasonable to be pretty general in this whole discussion. With that as background — well, I’ve already given some of my thoughts.  So what are yours?

Comments

7 Responses to “Ordinary OLTP DBMS vs. memory-centric processing”

  1. jam02 on December 29th, 2008 9:12 pm

    Do you know ‘ALTIBASE’? I don’t think it is well known yet in the States or among other Western engineers, but it is a well-known DBMS in the Far East.
    It has been widely adopted in South Korea, Japan, and China.

    Originally, Altibase was an in-memory DBMS. (It has also supported disk-based tables since 2005.) I know their major customers use it for high performance in specific areas such as telecommunications and stock trading systems.

    I’d argue that Altibase is another good alternative, because it is the number-one company in the in-memory DBMS market in Korea.

  2. Curt Monash on December 29th, 2008 9:33 pm

    I’ve never heard of Altibase before. Thank you for telling us about them!

    Reading about them on the Web is, on first blush, something of an adventure …

  3. Sandeep on January 3rd, 2009 8:56 am

    I agree that an in-memory DBMS should be part of the solution, since the application requires a high update throughput.

    I am also looking for an in-memory DB solution to address sub-second response time needs. The dilemma for me is that there are no benchmarks reported in millisecond/microsecond units, and transactions per second sounds like the Stone Age.

    Also, how do we choose between McObject, SolidDB, and TimesTen, other than by vendor alignment?

  4. Curt Monash on January 3rd, 2009 9:54 am

    In case of doubt, do proofs-of-concept (POCs).

    Those products have very different architectures, so I would expect their performance to differ across specific use cases.

    I’ve started to consult on that kind of thing, by the way, and if desired I have resources to run the whole project.

  5. Chris Mureen on January 6th, 2009 10:00 pm

    Hi Curt – not to step on your toes, but keep in mind that one benefit of working with a smaller, nimbler vendor is they will often work with you to develop a proof-of-concept involving their product. McObject does that http://www.mcobject.com/performance_proof.

  6. Curt Monash on January 7th, 2009 6:48 am

    Chris,

    If you can win your POCs by putting your best technical talent on them, while the competition puts ordinary field engineers on the job — more power to you!

    Just please don’t follow that strategy, brag about your POC wins, and neglect to mention that they don’t turn into actual sales. 😉

  7. Ari Valtanen on May 9th, 2009 6:27 pm

    To Sandeep and others: TATP is a database benchmark that is focused on measuring sub-millisecond response times: http://en.wikipedia.org/wiki/TATP_Benchmark

    IBM solidDB continues to be available as a ‘standalone’ version, but it can now also be used as a relational in-memory cache in front of most enterprise databases (not just IBM brands).
