June 25, 2010

Flash is coming, well …

I really, really wanted to title this post “Flash is coming in a flash.” That seems a little exaggerated — but only a little.

Uptake of solid-state memory (i.e. flash) for analytic database processing will probably stay pretty low in 2010, but in 2011 it should be a notable (b)leading-edge technology, and it should get mainstreamed pretty quickly after that. 

So far as I can tell, that’s one of the two significant roadmap changes between the 2009 and 2010 editions of Enzee Universe. The other one is that the robust form of appliance-to-appliance replication technology is coming out later than Netezza had originally planned and hoped.

There also is increasing reason to think that concerns about flash memory wearing out are overwrought. And by the way, the entire history of enterprise solid-state memory use is basically shorter than the time in which these products supposedly will wear out, so it’s not as if there have been a lot of real-life failures out there.

Comments

4 Responses to “Flash is coming, well …”

  1. Daniel Lemire on June 25th, 2010 1:27 pm

    analytic DBMS are pretty much an ideal use case for Flash reliability

    Absolutely. Except that flash allows fast random read access which most analytic DBMS don’t leverage.

  2. Curt Monash on June 25th, 2010 2:06 pm

    Well, the way I phrased it is correct. :)

    But yes — a decade of performance innovation revolving around sequential-vs.-random I/O will soon be a lot less important.

  3. Eric Kraemer on July 6th, 2010 2:29 pm

    “Soon” is the real question…I think it may be a few years yet.

    per RAID1 LUN:
    10k drives: ~115MB/s
    15k drives: ~140MB/s

    You don’t need to stack many spindles to saturate one of the many bottlenecks in play for a scan-based stack: PCI, CPU, storage processor, database engine, etc.

    At the same time, you have to stack a lot more spindles to achieve peak IOPS. SSDs deliver far more *realizable* IOPS than they do scan rate, because other components in the stack bottleneck scan throughput at much lower values (in the form factors SSD throughput can currently be delivered in).

    It’s not that hard to build a model that shows where the price point has to move to make SSDs price-competitive for scan-based systems. That looks to be a ways out yet to me.

    With that said, there’s great potential for targeted operations (like temp space) that inherently induce random I/O. Tiered storage looks ideal for scan-based workloads.
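    Eric’s back-of-envelope argument can be sketched as a toy model. All numbers below are illustrative assumptions, not measured vendor figures; only the ~140 MB/s 15k-drive scan rate comes from the comment itself.

    ```python
    # Toy model: once the rest of the stack caps scan throughput, adding faster
    # devices stops helping, and what matters is $ per MB/s of *realizable* scan.
    # All constants are illustrative assumptions for this sketch.

    HDD_SCAN_MBPS = 140            # ~15k RPM drive, per the comment above
    SSD_SCAN_MBPS = 250            # assumed per-device SSD scan rate
    STACK_CEILING_MBPS = 1000      # assumed PCI/CPU/storage-processor ceiling

    HDD_PRICE = 300                # assumed $ per drive
    SSD_PRICE = 2000               # assumed $ per device (2010-era pricing)

    def devices_to_saturate(per_device_mbps, ceiling_mbps=STACK_CEILING_MBPS):
        """How many devices it takes before the stack, not the media, is the bottleneck."""
        return -(-ceiling_mbps // per_device_mbps)  # ceiling division

    def dollars_per_mbps(price, per_device_mbps, ceiling_mbps=STACK_CEILING_MBPS):
        """Cost per MB/s of realizable scan throughput, capped by the stack ceiling."""
        n = devices_to_saturate(per_device_mbps, ceiling_mbps)
        realizable = min(n * per_device_mbps, ceiling_mbps)
        return n * price / realizable

    hdd_cost = dollars_per_mbps(HDD_PRICE, HDD_SCAN_MBPS)   # 8 drives  -> $2.40/MB/s
    ssd_cost = dollars_per_mbps(SSD_PRICE, SSD_SCAN_MBPS)   # 4 devices -> $8.00/MB/s
    ```

    Under these assumed numbers, SSD costs roughly 3x more per MB/s of realizable scan throughput, which is the shape of the argument that the price point is "a ways out yet" for scan-heavy systems.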

  4. Some thoughts on the announcement that IBM is buying Netezza | DBMS 2 : DataBase Management System Services on September 21st, 2010 5:30 pm

    […] been getting some DB2 briefings, which is why I’ve blogged about some specialized technical points from same. But I can’t yet say why the theoretically great-sounding data warehousing […]
