Netezza Skimmer
As I previously complained, last week wasn’t a very convenient time for me to have briefings. So when Netezza emailed to say it would release its new entry-level Skimmer appliance this morning, I asked for and got a Friday afternoon briefing, but kept it quick and basic.
That said, highlights of my Netezza Skimmer briefing included:
- In essence, Netezza Skimmer is 1/3 of Netezza’s previously smallest appliance, for 1/3 the price.
- I.e., Netezza Skimmer has 1 S-blade and 9 disks, vs. 3 S-blades and 24 disks on the Netezza TwinFin 3.
- With 1 disk reserved as a hot spare, that boils down to a 1:1:1 ratio among CPU cores, FPGA cores, and 1-terabyte disks on Netezza Skimmer. The same could pretty much be said of Netezza TwinFin, the occasional hot-spare disk notwithstanding.
- Netezza Skimmer costs $125K.
- With 2.8 or so TB of space for user data before compression, that’s right in line with the Netezza price point of slightly <$20K/terabyte of user data (see the quick arithmetic after this list).
- That assumes Netezza’s usual 2.25X compression. I forgot to ask when 4X compression would actually ship.
- I forgot to ask, but it seems obvious that Netezza Skimmer uses components identical or substantially similar to Netezza TwinFin’s.
- Netezza Skimmer is 7 rack units high.
- In place of the SMP hosts on TwinFin systems, Netezza Skimmer has a host blade.
- Netezza (specifically Phil Francisco) mentioned that when Kalido uses Netezza Skimmer for its appliance, there will be an additional host computer, but when it uses TwinFin for the same software, the built-in host will suffice. (Even so, I suspect it might be too strong to say that Skimmer’s built-in host computer is underpowered.)
- Netezza also suggested that more appliance OEMs are coming down the pike, specifically focused on the affordable Skimmer.
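Since the pricing bullet above is just arithmetic, here’s a quick sanity check as a Python sketch. The $125K price, the 2.8 TB of user data space, and the 2.25X compression ratio are the figures from the briefing; treating the full 2.8 TB as compressible user data is my simplifying assumption.

```python
# Sanity check of the Skimmer price-per-terabyte claim.
# Briefing figures: $125K list price, ~2.8 TB of user data space
# before compression, and Netezza's usual 2.25X compression.
# Assumption (mine): the full 2.8 TB holds compressible user data.

list_price = 125_000        # dollars
raw_user_space_tb = 2.8     # TB, before compression
compression_ratio = 2.25    # Netezza's usual figure

effective_user_data_tb = raw_user_space_tb * compression_ratio
price_per_tb = list_price / effective_user_data_tb

print(f"Effective user data: {effective_user_data_tb:.1f} TB")  # 6.3 TB
print(f"Price per TB of user data: ${price_per_tb:,.0f}")       # $19,841
```

Which indeed works out to slightly under $20K per terabyte of user data.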
Categories: Data mart outsourcing, Data warehouse appliances, Data warehousing, Netezza, Pricing | 2 Comments |
Two cornerstones of Oracle’s database hardware strategy
After several months of careful optimization, Oracle managed to pick the most inconvenient* day possible for me to get an Exadata update from Juan Loaiza. But the call itself was long and fascinating, with the two main takeaways being:
- Oracle thinks flash memory is the most important hardware technology of the decade, one that could lead to Oracle being “bumped off” if they don’t get it right.
- Juan believes the “bulk” of Oracle’s business will move over to Exadata-like technology over the next 5-10 years. Numbers-wise, this seems to be based more on Exadata being a platform for consolidating an enterprise’s many Oracle databases than it is on Exadata running a few Especially Big Honking Database management tasks.
And by the way, Oracle doesn’t make its storage-tier software available to run on anything other than Oracle-designed boxes. At the moment, that means Exadata Versions 1 and 2. Since Exadata is by far Oracle’s best DBMS offering (at least in theory), that means Oracle’s best database offering only runs on specific Oracle-sold hardware platforms. Read more
Three broad categories of data
People often try to draw a distinction between:
- Traditional data of the sort that’s stored in relational databases, aka “structured.”
- Everything else, aka “unstructured” or “semi-structured” or “complex.”
There are plenty of problems with these formulations, not the least of which is that the supposedly “unstructured” data is the kind that actually tends to have interesting internal structures. But of the many reasons why these distinctions don’t tend to work very well, I think the most important one is that:
Data shouldn’t be divided into just two categories. Even as a rough-cut approximation, it should be divided into three, namely:
- Human/Tabular data — i.e., human-generated data that fits well into relational tables or arrays
- Human/Nontabular data — i.e., all other data generated by humans
- Machine-generated data
Even that trichotomy is grossly oversimplified, for reasons such as:
- These categories overlap.
- There are kinds of data that get into fuzzy border zones.
- Not all data in each category has all the same properties.
But at least as a starting point, I think this basic categorization has some value. Read more
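To make the trichotomy concrete, here’s a minimal sketch of how one might tag data sources with it. The example sources and their assignments are my own illustrative assumptions, not anything from the post.

```python
from enum import Enum

class DataCategory(Enum):
    HUMAN_TABULAR = "human-generated, fits well into relational tables or arrays"
    HUMAN_NONTABULAR = "all other human-generated data"
    MACHINE_GENERATED = "machine-generated data"

# Illustrative assignments (my assumptions). Note the fuzzy border
# zones mentioned above -- e.g., a web log is machine-written, but
# it records human actions.
examples = {
    "order-entry records": DataCategory.HUMAN_TABULAR,
    "emails and office documents": DataCategory.HUMAN_NONTABULAR,
    "web server logs": DataCategory.MACHINE_GENERATED,
    "telecom call detail records": DataCategory.MACHINE_GENERATED,
}

for source, category in examples.items():
    print(f"{source}: {category.name}")
```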
Categories: Database diversity, Investment research and trading, Log analysis, Telecommunications, Web analytics | 19 Comments |
Vertica slaughters Sybase in patent litigation
Back in August, 2008, I pooh-poohed Sybase’s patent lawsuit against Vertica. Filed in the notoriously patent-holder-friendly East Texas courts, the suit basically claimed patent rights over the whole idea of a columnar RDBMS. It was pretty clear that this suit was meant to be a model for claims against other columnar RDBMS vendors as well, should they ever achieve material marketplace success.
If a recent Vertica press release is to be believed, Sybase got clobbered. The meat is:
… Sybase has admitted that under the claim construction order issued by the Court on November 9, 2009, “Vertica does not infringe Claims 1-15 of U.S. Patent No. 5,794,229.” Sybase further acknowledged that because the Court ruled that all the remaining claims in the patent (claims 16-24) were invalid, “Sybase cannot prevail on those claims.”
For those counting along at home — the patent only has 24 claims in total.
I have no idea whether Sybase can still cobble together grounds for appeal, or claims under some other patent. But for now, this sounds like a total victory for Vertica.
Edit: I’ve now seen a PDF of a filing suggesting the grounds under which Sybase will appeal. Basically, it alleges that the judge erred in defining a “page” of data too narrowly. Note that if Sybase prevails on appeal on that point, Vertica has a bunch of other defenses that haven’t been litigated yet. It further seems that Sybase may have recently filed another patent case against Vertica, in a different venue, based on a different patent.
One annoying blog troll excepted, is anybody surprised at this outcome?
Categories: Columnar database management, Data warehousing, Sybase, Vertica Systems | 6 Comments |
Intersystems Caché highlights
I talked with Robert Nagle of Intersystems last week, and it went better than at least one other Intersystems briefing I’ve had. Intersystems’ main product is Caché, an object-oriented DBMS introduced in 1997 (before that, Intersystems was focused on the fourth-generation programming language M, renamed from MUMPS). Unlike most other OODBMS, Caché is used for a lot of stuff one would think an RDBMS would be used for, across all sorts of industries. That said, there’s a distinct health-care focus to Intersystems, in that:
- MUMPS, the original Intersystems technology, was focused on health care.
- The reasons Intersystems went object-oriented have a lot to do with the structure of health-care records.
- Intersystems’ biggest and most visible ISVs are in the health-care area.
- Intersystems is actually beginning to sell an electronic health records system called TrakCare around the world (but not in the US, where it has lots of large competitive VARs).
Note: Intersystems Caché is sold mainly through VARs (Value-Added Resellers), aka ISVs/OEMs. I.e., it’s sold by people who write applications on top of it.
So far as I understand – and this is still pretty vague and apt to be partially erroneous – the Intersystems Caché technical story goes something like this: Read more
Categories: Data models and architecture, Emulation, transparency, portability, Health care, Intersystems and Cache', Mid-range, Object, OLTP, Sybase, Theory and architecture | 8 Comments |
There sure seem to be a lot of inaccuracies on ParAccel’s website
In what is actually an interesting post on database compression, ParAccel CTO Barry Zane threw in
Anyone who has met with us knows ParAccel shies away from hype.
But like many things ParAccel says, that is not true.
Edit (October, 2010): Like other posts I’ve linked to from Barry Zane’s blog, that one seems to be gone, with the URL redirecting elsewhere on ParAccel’s website.
The latest whoppers came in the form of several customers ParAccel listed on its website who hadn’t actually bought ParAccel’s DBMS, nor even decided to do so. It is fairly common to claim a customer win, then retract the claim due to lack of permission to disclose. But that’s not what happened in these cases. Based on emails helpfully shared by a ParAccel competitor competing in some of those accounts, it seems clear that ParAccel actually posted fabricated claims of customer wins. Read more
Categories: Columnar database management, Data warehousing, Database compression, Market share and customer counts, ParAccel, Telecommunications | 24 Comments |
Research agenda for 2010
As you may have noticed, I’ve been posting less research/analysis in November and December than during some other periods. In no particular order, reasons have included: Read more
More miscellany
Adding to yesterday’s varied quick comments: Read more
Categories: Continuent, Infobright, Rainstor, Software as a Service (SaaS) | 2 Comments |
Clearing up MapReduce confusion, yet again
I’m frustrated by a constant need — or at least urge 🙂 — to correct myths and errors about MapReduce. Let’s try one more time: Read more
Categories: Analytic technologies, Aster Data, Cloudera, Data warehousing, Google, Hadoop, MapReduce, SenSage, Splunk | 8 Comments |
This and that
I have various subjects backed up that I don’t really want to write about at traditional blog-post length. Here are a few of them. Read more