October 19, 2010

Introduction to Kaminario

At its core, the Kaminario story is simple:

In other words, Kaminario pitches a value proposition something like (my words, not theirs) “A shortcut around your performance bottlenecks.”

*1 million or so on the smallest Kaminario K2 appliance.

Kaminario asserts that both analytics and OLTP (OnLine Transaction Processing) are represented in its user base. Even so, the use cases Kaminario mentioned seemed to be concentrated on the analytic side. I suspect there are two main reasons:

*Somebody can think up a new analytic query overnight that takes 10 times the processing of anything they’ve ever run before. Or they can get the urge to run the same queries 10 times as often as before. Both of those kinds of things happen less often in the OLTP world.

Accordingly, Kaminario likes to sell against the alternative of getting a better analytic DBMS, stressing that you can get a Kaminario K2 appliance into production a lot faster than you can move your processing to even the simplest data warehouse appliance. Kaminario is probably technically correct in saying that; even so, I suspect it would often make more sense to view Kaminario K2 appliances as a transition technology, by which I mean:

On that basis, I could see Kaminario-like devices eventually getting to the point that every sufficiently large enterprise should have some of them, whether or not that enterprise has an application it believes should run permanently against DRAM block storage. 

*Indeed, if you look at the four actual production examples on Slide 7 of an abridged Kaminario slide deck, at least three look like ones that really don’t need to live in RAM, the one possible exception being simulation. The same goes for other production use cases Kaminario shared.

In a bit of an oversight, I forgot to ask Kaminario about pricing.

Highlights of the Kaminario technical story include:

Kaminario company highlights include:

One last thing — it seems that DRAM often is classified as “solid-state” memory or storage. I’m OK with that.

Comments

7 Responses to “Introduction to Kaminario”

  1. Camuel Gilyadov on October 19th, 2010 3:37 pm

    I feel proud seeing Kaminario covered on DBMS2 🙂

    Just wanted to mention that DRAM SSD is a well-known niche market. Its pioneer, TMS (Texas Memory Systems), has been in business for more than 30 years. TMS in particular is known for having a lot of expertise in using SSDs (DRAM- or NAND-flash-based) for database acceleration. I would go so far as to claim that TMS is the single most concentrated point of conventional DBMS-acceleration-by-SSD knowledge. Violin Memory is the other one offering DRAM-based rack-mounted SSDs. There are others…

    Kaminario’s claim to fame is that they have no SPOF (single point of failure), while they claim everyone else in the DRAM rack-mounted SSD market does have one, and they seem able to withstand the critique and prove the claim, at least in one case that I remember. Here it is at Robin Harris’s blog (http://storagemojo.com/2010/06/09/room-at-the-top/).

    Another claim to fame is, well, the usual and hypocritical “we have no stinking proprietary hardware here.” Heh? Did I miss the open-sourcing announcement of the “revolutionary OS” part 😉

    Kaminario aside, I’m very skeptical that the SAN model in general is appropriate for analytics in any form, be it disk, DRAM or flash. Moreover, in my humble opinion, if the volume of data and the budget allow purchasing another SAN (especially a DRAM-based one), one is always better off spending on a RAM upgrade for the existing servers/nodes and configuring more cache there in one’s analytic DBMS of choice. And in case an enterprise SAN must be used for political/management reasons, why not upgrade its DRAM cache to ridiculous amounts? It will do wonders for performance holistically, for the whole storage infrastructure and across all loads.

  2. Eyal Markovich on October 22nd, 2010 3:18 pm

    Disclaimer, I work for Kaminario.
    I’d like to add a few comments to Curt’s write up:

    1. Is a DRAM-based SSD suitable for OLTP? Absolutely. I agree that the I/O requirements of analytic applications (DW, BI, OLAP, etc.) differ from those of an OLTP application. As a rule of thumb, analytic applications demand IOPS and throughput, while for OLTP applications the name of the game is latency. As DRAM offers superior latencies (compared to flash and HDD), a DRAM-based SSD appliance is suitable for OLTP: it offers excellent latencies coupled with high availability. I totally agree with Curt’s analysis of OLTP vs. analytics, but would add that many people think DRAM-based SSD is suitable only for analytics simply because they are not aware of the potential improvements in OLTP. When you have queries that run for 2 hours, everyone looks for solutions to speed up the response time (be it an SSD appliance, a data warehouse appliance, etc.); however, when your queries complete in several milliseconds (and they are all tuned with optimal indexes and execution plans), people tend to assume that they have reached the maximum throughput of their system. In many cases, they are not aware that a DRAM-based SSD can double their transactions per second and, in real-world business terms, reduce checkout time for online retailers, etc.
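    A back-of-envelope way to see why lower I/O latency alone can multiply OLTP throughput is Little's law (throughput = concurrency / time per transaction). The transaction mix and latency figures below are illustrative assumptions, not Kaminario numbers:

```python
def tps(concurrency, ios_per_txn, io_latency_s, cpu_time_s=0.001):
    """Little's law: transactions/sec = concurrent txns / txn duration.
    Each transaction does `ios_per_txn` serialized reads plus ~1 ms of CPU."""
    txn_time_s = ios_per_txn * io_latency_s + cpu_time_s
    return concurrency / txn_time_s

# Hypothetical workload: 32 concurrent transactions, 10 random reads each.
hdd_tps  = tps(32, 10, 5e-3)    # ~5 ms per random HDD read
dram_tps = tps(32, 10, 30e-6)   # ~30 us per read over the SAN to DRAM
# Nothing else changes; the latency drop alone multiplies throughput.
```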

    2. One of Curt’s comments demonstrates the unique Kaminario solution: “Unlike Schooner, Kaminario makes no exceptions for transaction logs and the like. Kaminario K2 is just a block device. Of course, you can choose to put just your most bottlenecking data on Kaminario K2 – the hot stuff, your temp space, your logs, etc.” Because Kaminario K2 uses DRAM, there is no need to worry about tuning, let alone wear-leveling, so you can obviously locate the transaction logs in DRAM without any penalty. (I have seen cases where just locating the transaction logs on Kaminario K2 solved performance problems – as you can imagine in OLTP.)

    3. Based upon Kaminario’s unique Scale-Out Performance Architecture™ (SPEAR), the Kaminario K2 can scale according to business needs. The number of I/O directors (which provide the IOPS and throughput) varies based on the application’s needs, and the number of data nodes varies based on the required capacity. So the total capacity in a single enclosure is determined by the number of I/O directors (two minimum) and the number of data nodes.
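    The scaling rule described above can be sketched as simple arithmetic: performance tracks the I/O-director count, capacity tracks the data-node count. The per-director IOPS and per-node capacity below are made-up placeholders, not published K2 specifications:

```python
def k2_config(io_directors, data_nodes,
              iops_per_director=150_000, gb_per_node=96):
    """SPEAR-style scaling sketch. Both rate constants are hypothetical."""
    assert io_directors >= 2, "two I/O directors minimum"
    return {"iops": io_directors * iops_per_director,
            "capacity_gb": data_nodes * gb_per_node}

small = k2_config(2, 4)    # minimum directors, modest capacity
big   = k2_config(4, 16)   # IOPS and capacity scale independently
```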

  3. Camuel Gilyadov on October 22nd, 2010 9:07 pm

    Well, rack-mounted SSDs are great for high-throughput OLTP. It is also funny that the Kaminario presentation mentioned in the post pitches exclusively analytics (four examples out of four) and doesn’t mention latency, where it allegedly really shines 🙂

    Regarding DRAM vs. flash: in any setup where DRAM is accessed over a network, it can safely be substituted by flash, because network time dominates the latency anyway. Proof? Transferring the smallest 4 KB block across a 4 Gb Fibre Channel network takes roughly 15 microseconds including overhead. In the Kaminario setup two crossings are necessary, so it becomes ~30 microseconds. Flash latency is ~25 microseconds.
    That, in my opinion, is the reason for TMS’s and Violin’s increasing focus on flash.
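    The ~15-microsecond figure can be sanity-checked with simple arithmetic. The 3.2 Gb/s payload rate reflects 4 Gb Fibre Channel's 8b/10b encoding; the protocol-overhead allowance is my own assumption:

```python
block_bits = 4096 * 8               # smallest 4 KB block
payload_bps = 3.2e9                 # 4 Gb/s line rate after 8b/10b encoding
wire_time_us = block_bits / payload_bps * 1e6   # ~10.2 us on the wire
per_crossing_us = wire_time_us + 5              # allow ~5 us protocol overhead
round_trip_us = 2 * per_crossing_us             # two crossings: ~30 us total
flash_read_us = 25                              # typical flash read latency
# The network round trip already exceeds a flash read, so swapping
# DRAM for flash behind the SAN barely changes end-to-end latency.
```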

    Regarding worrying about wear-leveling: well, let your SSD vendor worry about that. An SLC SSD provides more than 10 years of non-stop rewriting.

    Now let’s see why SSDs (no matter whether DRAM or flash) are in general great for OLTP.

    OLTP requires storage systems to implement the following two features simultaneously:

    1. Coherency – all readers must see the most recent version of a block the instant it is written. This complicates caching, making it impractical.

    2. Low-latency random pinpoint reads – with caching off the table, fully random reads are problematic on mechanical disks, and arraying won’t help with latency.

    The only option for implementing coherency and low latency simultaneously is a very large centrally accessed cache, which is reminiscent of rack-mounted SSDs and particularly the Kaminario K2.
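    A quick sketch of why point 2 rules out mechanical disks: each random read pays seek plus rotational delay, and striping across more spindles raises aggregate IOPS without lowering any single read's latency. The seek and rotational figures are typical assumed values, not numbers from the post:

```python
avg_seek_ms = 4.0                  # typical enterprise-drive average seek
rotational_ms = 2.0                # half a rotation at 15,000 RPM
hdd_random_read_ms = avg_seek_ms + rotational_ms   # ~6 ms per random read
hdd_iops_per_spindle = 1000 / hdd_random_read_ms   # ~167 IOPS per disk
flash_read_ms = 0.025              # ~25 us; DRAM is lower still
# Striping over N spindles multiplies IOPS by N, but every individual
# read still waits ~6 ms - "arraying won't help with latency."
```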

    Random writes and transaction logs are a non-issue, because any storage vendor today has a small NVRAM (DRAM backed by battery or supercap) for writes and then destages it to disk asynchronously. You can even achieve this on a regular server with an NVRAM card and a modern file system (open-source ZFS will do).

  4. Paul Johnson on November 11th, 2010 12:04 pm

    How does this differ from White Cross/Kognitio?

  5. Curt Monash on November 11th, 2010 1:11 pm

    For starters, Kognitio is a DBMS product and Kaminario isn’t.

  6. Ian Posner on August 14th, 2011 9:40 am

    The author’s claim that analytic databases are more likely than OLTP systems to suffer sudden performance problems is not true. When using a locking database, performance may be acceptable as contention rises, but once a certain locking threshold is surpassed, the symptom is a sudden “lock-up”: no queries referencing the contended resources get serviced until the locks on those resources are released. And if there’s a considerable amount of CPU activity and a large number of connections, the entire system may seize up, requiring DBA intervention.

  7. Curt Monash on August 14th, 2011 11:28 am

    Fair enough, but I think the case of rapidly outgrowing your system because somebody thinks of a new business requirement is more common than that of outgrowing it due to a happy increase in volume.

    And the analytic case is the one where you can program something in negligible time that brings your system to its knees.
