September 13, 2008

How will SSDs get incorporated into data warehousing?

SSDs (Solid-State Drives) have gotten a lot of recent attention as an eventual replacement for spinning disk. I haven’t researched expected timelines in detail, but George Crump offered a plausible scenario recently in a highly visible Information Week blog post. After the great recent (and still ongoing!) discussion in the SAN vs. DAS comment thread, I’d like to throw some questions out for discussion, including:

  1. Just how much faster than disk will SSDs be for random reads?
  2. Will SSDs be faster or slower than disk for sequential reads, and by how much?
  3. What will the speed comparison be on SSDs between sequential and random reads?
  4. How many times will it be possible to write to an SSD? Will this be a problem?
  5. Will DBMS — which today invariably assume that storage is homogeneous — need to take account of storage heterogeneity?
  6. What are the implications of SSDs for database and DBMS architecture?

I commented on some of these issues a year ago. Now it’s your turn. 🙂

Comments

5 Responses to “How will SSDs get incorporated into data warehousing?”

  1. Chuck Hollis on September 13th, 2008 8:31 am

    Hi — I was planning a future post on this topic, but I thought I’d answer some of your SSD questions.

    EMC has been shipping this stuff since the beginning of the year, so this is not theoretical.

    First, it’s important to differentiate between the consumer-grade stuff, and enterprise flash drives (EFDs) which are purpose built for the requirement.

    You might as well forget about longevity concerns. Experience so far shows them more reliable — and lasting longer — than rotating disks.

    You might assume that the characteristics of the flash device (e.g. STEC) might carry over to the application. Well, there’s usually a storage array in the middle, and it can either hurt or help with performance, depending on how it was designed.

    When talking performance, it’s more useful to think in terms of IOPs (I/O operations per second). EFDs don’t differentiate between random and sequential access. Although reads are faster than writes, any decent caching array will mask this from applications.

    The headline we use is that, at an application level, you should expect 30x the IOPs.

    How this translates into actual application performance is the usual “it depends” — there’s about a half-dozen factors that impact the discussion.

    I will say this — so far we haven’t seen any application run slower 🙂

    Seriously, though, it’s a question of how much faster, and is it worth it.

    Any workload with a significant random component appears to scream — substantial and noticeable performance bumps that the business is sometimes willing to happily pay for.

    The other interesting use case seems to be analytics — use disk for the primary database, but run the intense data grinding on something like enterprise flash.

    As we speak, we’re running some before-and-after exercises in engineering to more fully characterize the performance impacts in DW and BI environments, using the CLARiiON CX4.

    I’ll be sure to post the results as I get them.

    If you can’t wait, you can go over and ask Barry Burke (thestorageanarchist.typepad.com), who appears to be one of the de facto industry experts on this technology.

    Cheers!

  2. Chuck Hollis on September 13th, 2008 8:35 am

    Ooops, me again, I forgot to address the last — and perhaps most important — question, i.e. “what’s the impact?”

    My answer may be a bit off the reservation, but as I see it, this is a 180 degree shift in database design philosophy.

    Historically, the way that database architects have addressed performance problems was to spread the I/O workload across as many spindles as possible.

    Well, with EFDs, due to the expense, you probably want to do just the opposite: concentrate the I/O load in as small a storage region as possible.

    The more I/O density you can drive per MB or GB, the more cost-effective your use of flash technology.
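
    A rough way to see the economics (purely illustrative prices, not vendor figures; Python used here just as scratch paper):

        # Back-of-envelope sketch of the point above: with expensive flash, dollars
        # per IOPS improve as you concentrate the hot I/O into fewer gigabytes.
        # The price and the I/O rates are illustrative assumptions.

        FLASH_COST_PER_GB = 30.0   # assumed enterprise flash price, $/GB

        def cost_per_iops(hot_gb, iops_served, cost_per_gb):
            """Dollars of storage bought per I/O-per-second actually delivered."""
            return (hot_gb * cost_per_gb) / iops_served

        # The same 50,000 IOPS of hot traffic, under two layouts:
        spread_out   = cost_per_iops(hot_gb=2000, iops_served=50_000, cost_per_gb=FLASH_COST_PER_GB)
        concentrated = cost_per_iops(hot_gb=100,  iops_served=50_000, cost_per_gb=FLASH_COST_PER_GB)

        print(f"hot data spread over 2 TB of flash: ${spread_out:.2f} per IOPS")      # $1.20
        print(f"hot data packed into 100 GB of flash: ${concentrated:.2f} per IOPS")  # $0.06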

    And that’s gonna be a hard concept for a lot of people to wrap their head around.

  3. Curt Monash on September 13th, 2008 9:29 am

    Chuck,

    Based on your assessment of SSD technology, it seems that the DBMS design challenges would run something like:

    1. Federate queries among separate databases stored on media with very different performance characteristics.

    2. Eventually, be smart about automagically assigning the data to heterogeneous media.

    A quick-and-dirty approach would be to say “Hmm, SSD is just another level of RAM cache, except we refresh it much more rarely” and proceed from there. Somehow, I doubt that will turn out to be ideal.

    So assuming one gets serious, one requirement is to have the optimizer be able to account for data on heterogeneous media, and then later on to be smart about suggesting where data goes. Hopefully, some Stanford grad students are working on that even as we speak. If not, they should be.
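
    To make “account for data on heterogeneous media” concrete, here is a minimal sketch of what a medium-aware I/O cost estimate might look like; the device characteristics and the plan shapes are assumptions for illustration, not measurements:

        # Minimal sketch of a medium-aware I/O cost model for a query optimizer.
        # All device numbers are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Medium:
            name: str
            random_io_ms: float     # average latency per random page read
            seq_mb_per_sec: float   # sustained sequential read bandwidth

        DISK = Medium("15K disk", random_io_ms=5.0, seq_mb_per_sec=70.0)
        SSD  = Medium("flash SSD", random_io_ms=0.2, seq_mb_per_sec=100.0)

        PAGE_KB = 8

        def scan_cost_ms(medium, pages, random_fraction):
            """Estimated milliseconds to read `pages`, mixing random and sequential I/O."""
            random_pages = pages * random_fraction
            seq_pages = pages - random_pages
            seq_ms = (seq_pages * PAGE_KB / 1024) / medium.seq_mb_per_sec * 1000
            return random_pages * medium.random_io_ms + seq_ms

        # An index-driven plan (mostly random I/O) vs. a full scan (sequential),
        # costed on each medium the table might live on. The plan choice flips.
        for m in (DISK, SSD):
            index_plan = scan_cost_ms(m, pages=10_000, random_fraction=0.9)
            full_scan  = scan_cost_ms(m, pages=200_000, random_fraction=0.0)
            print(f"{m.name}: index plan ~{index_plan:,.0f} ms, full scan ~{full_scan:,.0f} ms")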

    Best,

    CAM

  4. Woody Hutsell on September 16th, 2008 2:38 pm

    Curt,

    Some additional perspective:

    1. Just how much faster than disk will SSDs be for random reads?

    This depends on the SSD that you buy. There are alternatives on the market ranging from Flash drives made for notebooks to RAM or Flash SSDs designed for enterprise database applications. Even within the enterprise space the architectures are quite different. I would compare devices based on three criteria: random IOPS, latency, and bandwidth.

    I am with Texas Memory Systems, a manufacturer of RAM and Flash SSDs. Our RAM SSDs can provide 600,000 random read or write IOPS (2000x faster than a hard drive), 4.5GB/second of random IO, and 15 microsecond latency. Our Cached Flash solution provides 100,000 random read IOPS (333x faster than a hard drive), 25,000 random write IOPS (83x faster than a hard drive), or 2GB/second sustained read or write bandwidth, with 200 microsecond latency. The RAM-based systems are 4U and provide 512GB of capacity, while the Flash systems are 4U and provide 2TB of capacity (after RAID).

    A hard disk (just a plain old hard disk) provides about 300 random IOPS and 60-70MB/second at 4-5 milliseconds of latency. Once you put a hard disk into an array with cache, the performance characteristics change dramatically, as the array vendors can scale the number of spindles to improve IOPS and bandwidth (but notice this does not include scaling latency, and you should also be aware that not all RAID controllers can scale IOPS and bandwidth). In this case, the main benefit of SSD is that it retains amazing latency even with high IOPS and high bandwidth. Picking the right solution for your application is then a matter of determining which device is the right fit based on 1) your sensitivity to latency, 2) the access patterns, and 3) the cost of the solution.
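
    (For readers checking the multiples above: they are just the quoted IOPS figures divided by roughly 300 random IOPS for a single plain disk. A quick scratch calculation, using only the numbers from this comment:)

        # Each device's quoted IOPS divided by a single hard disk's ~300 random IOPS.

        DISK_RANDOM_IOPS = 300

        devices = {
            "RAM SSD (random read or write)": 600_000,
            "Cached Flash SSD (random read)": 100_000,
            "Cached Flash SSD (random write)": 25_000,
        }

        for name, iops in devices.items():
            print(f"{name}: {iops:,} IOPS, about {iops // DISK_RANDOM_IOPS}x a single disk")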

    2. Will SSDs be faster or slower than disk for sequential reads, and by how much?

    Enterprise SSDs are faster, but not always the right solution for making sequential data move at higher bandwidths. We have many customers who use SSD to accelerate sequential accesses, but those customers are latency-sensitive.

    3. What will the speed comparison be on SSDs between sequential and random reads?

    Very little difference between sequential and random reads. This is one of the core benefits of SSDs: they excel at random data access.

    4. How many times will it be possible to write to an SSD? Will this be a problem?

    Capabilities vary depending on the solution you buy. The solutions marketed for the enterprise that use Flash memory are designed to handle enterprise workloads. Solutions that are designed with RAM memory have virtually no issues with write endurance. Systems designed for the enterprise have many layers of architecture designed to minimize the write endurance issues of Flash SSD. You cannot wish away write endurance issues of Flash SSD, but you can design to minimize them.

    5. Will DBMS — which today invariably assume that storage is homogeneous — need to take account of storage heterogeneity?

    This question suggests that the DBMS can be a bottleneck, and to some extent this could be true. I would say that in most cases an SSD can be installed today and deliver performance immediately, and the system will bottleneck at the CPU before it bottlenecks at the DBMS. We have sold SSDs for 10 years, and most of those sales are for database applications. We have also developed a tuning application (www.statspackanalyzer.com) for Oracle database applications.

    6. What are the implications of SSDs for database and DBMS architecture?

    The main suggestion is to put the files that you access frequently on SSD and leave the rest on slower disk-based storage. In database environments it is usually pretty easy to determine the frequently accessed files, and the patterns of access to those files do not change unless the queries are modified. In some cases, we suggest decreasing server cache, especially in RAC installations, and just forcing direct IO to our SSDs, but this advice is on a case-by-case basis.
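
    A minimal sketch of that placement policy, assuming per-file access counts are already available (every file name, size, and count below is hypothetical):

        # Greedy sketch of "hot files on SSD, the rest on disk": rank files by
        # I/O per gigabyte and fill the flash budget in that order.

        files = [
            # (name, size_gb, ios_per_day) -- all hypothetical
            ("fact_sales_idx",   40, 9_000_000),
            ("fact_sales_data", 400, 6_000_000),
            ("dim_customer",      5, 2_000_000),
            ("staging_archive", 800,    50_000),
        ]

        SSD_BUDGET_GB = 100

        def place(files, budget_gb):
            placement, remaining = {}, budget_gb
            # Hottest data per GB first -- that is where flash buys the most relief.
            for name, size_gb, ios in sorted(files, key=lambda f: f[2] / f[1], reverse=True):
                if size_gb <= remaining:
                    placement[name], remaining = "SSD", remaining - size_gb
                else:
                    placement[name] = "disk"
            return placement

        for name, tier in place(files, SSD_BUDGET_GB).items():
            print(name, "->", tier)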

    In response to your comments to Chuck’s post:

    1. I think the key is to have the data you access the most on the fastest storage. As Flash SSD gets less expensive it becomes more practical to have more of your database on SSD. I don’t think you need to separate by queries; rather, separate at the file level.

    2. Automagic migration sounds nice but in practice will add latency and may decrease the value proposition of SSD. Some innovative solutions to these problems appear to be on the horizon. DBAs probably know what data is accessed the most already. The big challenge for data warehouses is that in some of these systems the answer is that everything needs to be fast. In these cases, we usually look at some method to partition data (by date, geography, etc.) so that we can narrow in on the most accessed files, as sketched below.
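
    For instance, with date-range partitioning the “most accessed” set is easy to pin down mechanically; a toy sketch, where the 90-day cutoff and the monthly partition layout are assumptions:

        # Toy date-based tiering: recent partitions, which many warehouse queries
        # hit hardest, go to SSD; older ones stay on disk.

        from datetime import date, timedelta

        HOT_WINDOW_DAYS = 90
        today = date(2008, 9, 16)

        partitions = [date(2008, m, 1) for m in range(1, 10)]  # monthly partitions for 2008

        for start in partitions:
            tier = "SSD" if (today - start) <= timedelta(days=HOT_WINDOW_DAYS) else "disk"
            print(f"sales_{start:%Y_%m} -> {tier}")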

    Hope this helps add to the discussion,

    Woody

  5. Jonathan Moore on September 19th, 2008 1:16 pm

    My answers here are mostly based on research I did last year (about a month of reading and calling vendors), but I have tried to keep up with things as they evolve. Also, I am going to assume you are talking about NAND Flash SSDs here.

    1. Just how much faster than disk will SSDs be for random reads?

    For SSDs you would use with a DB, about 100 times faster.

    2. Will SSDs be faster or slower than disk for sequential reads, and by how much?

    Most of the high-end SSDs do sequential reads between 80 and 120 MB/s.

    3. What will the speed comparison be on SSDs between sequential and random reads?

    You can think of this the same way you do for disk: there is a seek time and a read rate.

    4. How many times will it be possible to write to an SSD? Will this be a problem?

    If you don’t consider the nature of the SSD it will be an issue, but if you do wear leveling then the SSD should outlast a spinning disk even if you write to it 24/7.
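
    (For anyone unfamiliar with the term, wear leveling just spreads erases evenly so that no single block hits its endurance limit early. A toy sketch of the idea, not any vendor’s actual algorithm:)

        # Toy wear-leveling sketch: always write into the least-erased block,
        # so erase counts stay roughly even across the device. Real controllers
        # are far more elaborate; this only illustrates the principle.

        NUM_BLOCKS = 8
        erase_counts = [0] * NUM_BLOCKS

        def write_page():
            # Pick the block that has been erased the fewest times so far.
            victim = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
            erase_counts[victim] += 1   # erase before programming the new data
            return victim               # where the page logically landed

        for _ in range(10_000):
            write_page()

        print("erase counts per block:", erase_counts)  # all equal at 1250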

    5. Will DBMS — which today invariably assume that storage is homogeneous — need to take account of storage heterogeneity?

    Yes, I think they will have to care about what they are accessing. SSDs are not disks and need to be treated differently to get optimal performance.

    6. What are the implications of SSDs for database and DBMS architecture?

    With disks the biggest cost is seeks; on SSDs these are 100 times faster, but SSDs have their own weakness. Unlike disk, SSD cells have a ternary state: unset, 1, or 0. The issue is that to go from 1 to 0, or 0 to 1, you must go through the unset state first (0 -> unset -> 1). Further, you can’t just unset one bit at a time; you must unset an entire “erase block”, which is in the tens of KB. This means that small random writes on SSDs can hurt. Log-structured data structures are great on SSDs, though.
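
    A toy model of why small random writes hurt while log-structured writes do not (the block and page sizes are assumptions, and garbage collection is ignored):

        # Toy erase-traffic comparison: in-place random page updates versus
        # appending the same updates log-style.

        NUM_UPDATES = 10_000
        ERASE_BLOCK_KB = 128
        PAGE_KB = 4
        PAGES_PER_BLOCK = ERASE_BLOCK_KB // PAGE_KB   # 32 pages per erase block

        # Worst case for in-place updates: each random 4 KB write lands in a
        # different erase block, so every update costs a full block erase.
        in_place_erases = NUM_UPDATES

        # A log-structured writer appends pages and only erases a block once it
        # has been completely filled.
        log_erases = NUM_UPDATES // PAGES_PER_BLOCK

        print("in-place erases:", in_place_erases)   # 10000
        print("log-structured erases:", log_erases)  # 312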

    In terms of when to deploy SSD vs. disk, we can appeal to the work Jim Gray and Franco Putzolu did on the five-minute rule way back in ’86. It needs to be updated to account for SSDs, but Goetz Graefe already did that for us with his “The five-minute rule twenty years later” paper. I have links to both here for anyone who wants to read them.

    http://0x0000.org/2007/12/the_5_minute_rule.html
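
    For reference, the rule’s break-even interval falls out of a simple formula; a sketch with guessed 2008-ish prices (only the formula is Gray and Putzolu’s, the dollar and IOPS figures are my assumptions):

        # Gray & Putzolu's break-even interval: keep a page cached in RAM if it
        # is re-referenced at least this often (in seconds). Prices below are
        # rough guesses for illustration, not figures from the papers.

        def break_even_seconds(page_kb, device_price, device_iops, ram_price_per_mb):
            pages_per_mb_of_ram = 1024 / page_kb
            price_per_access_per_second = device_price / device_iops
            return pages_per_mb_of_ram * price_per_access_per_second / ram_price_per_mb

        # Classic disk-versus-RAM trade-off with assumed prices...
        print("disk vs RAM:",
              round(break_even_seconds(page_kb=8, device_price=80,
                                       device_iops=200, ram_price_per_mb=0.05)),
              "seconds")

        # ...and the same formula with flash in the disk role, the substitution
        # Graefe's "twenty years later" paper explores.
        print("flash vs RAM:",
              round(break_even_seconds(page_kb=8, device_price=1000,
                                       device_iops=10_000, ram_price_per_mb=0.05)),
              "seconds")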

    One other point of interest from my research on SSDs: at the time I did the work, I found that basically all storage devices were priced linearly with respect to IOPs. That is, if you buy Texas Memory Systems RAM storage or you buy a 7200 RPM SATA drive, you pay about the same per IO.
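
    (A rough way to sanity-check that claim, with hypothetical 2008-era prices just to show the shape of the comparison:)

        # Rough dollars-per-IOPS comparison using hypothetical prices, to
        # illustrate the "priced roughly linearly with IOPS" observation above.

        devices = [
            # (name, assumed price in $, assumed random IOPS)
            ("7200 RPM SATA drive",      100,     150),
            ("15K RPM FC drive",         400,     300),
            ("enterprise flash SSD",  15_000,  25_000),
            ("RAM SSD appliance",    300_000, 600_000),
        ]

        for name, price, iops in devices:
            print(f"{name}: about ${price / iops:.2f} per IOPS")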
