June 8, 2009

Per-terabyte pricing

Software-only DBMS vendors sometimes price per terabyte of user data.  Vertica’s list price is $100K/TB. Greenplum’s list price is $70K/TB. In practice, both offer substantial discounts, especially at higher volumes.  In both cases, this means raw data, uncompressed, without counting indexes or temp space.

Client experience teaches me that this definition is easy to forget, so let me reemphasize the key point:

Per-terabyte pricing is based on a calculated figure.  Per-terabyte pricing is not based on the current disk space used by your database when managed by the DBMS you are replacing.

There’s at least one important difference in how Vertica and Greenplum calculate database size.  No matter how many times you copy the data, Vertica only charges you for it once.* But if you spin out data marts and recopy data into them — as Greenplum rightly encourages you to do — Greenplum wants to be paid for each copy.  Similarly, Vertica charges only for deployment, and not for test or development; I didn’t remember to ask what Greenplum’s policies are in those regards. (Edit: Greenplum says in a comment below that it doesn’t charge for test or development data either.)

*That policy is a great fit with Vertica’s performance recommendation that you should store columns in different sort orders, perhaps an average of two copies per column.
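To make the arithmetic concrete, here is a minimal sketch of how the two policies can diverge. The list prices are the ones quoted above; the data volume, on-disk size, and copy count are invented purely for illustration, and real quotes would of course reflect the discounts mentioned earlier.

```python
# Hypothetical per-terabyte licensing arithmetic. List prices are from the post above;
# the data volumes and copy count are made-up illustrative numbers.

VERTICA_LIST_PER_TB = 100_000    # $/TB of raw, uncompressed user data
GREENPLUM_LIST_PER_TB = 70_000   # $/TB of raw, uncompressed user data

raw_data_tb = 10    # raw, uncompressed user data -- the figure pricing is based on
on_disk_tb = 4      # what the data might occupy after compression (NOT the pricing basis)
copies = 2          # e.g., data recopied into spun-out marts, or multiple sort orders

# Vertica: charged once per terabyte of raw data, however many copies are stored.
vertica_license = raw_data_tb * VERTICA_LIST_PER_TB

# Greenplum: each copy of the data in a spun-out mart counts toward the licensed volume.
greenplum_license = raw_data_tb * copies * GREENPLUM_LIST_PER_TB

print(f"Pricing basis: {raw_data_tb} TB raw (not the {on_disk_tb} TB on disk)")
print(f"Vertica list:   ${vertica_license:,}")     # $1,000,000
print(f"Greenplum list: ${greenplum_license:,}")   # $1,400,000
```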

June 8, 2009

Greenplum blogs about some customers

I’ve written some about Greenplum’s customers at eBay and Fox Interactive Media.  But as I recently grumped, I’m not in the mood right now to write much about other Greenplum customers.  Fortunately, Greenplum has filled the gap itself.  Marketing chief Paul Salazar just blogged about a number of other big Greenplum customers. And last month Paul blogged in considerable detail about what he characterizes as an enterprise data warehouse (EDW) conversion — Oracle replacement — at a large pharmaceutical company.

June 8, 2009

The future of data marts

Greenplum is announcing today a long-term vision, under the name Enterprise Data Cloud (EDC). Key observations around the concept — mixing mine and Greenplum’s together — include:

In essence, Greenplum is pitching the story:

When put that starkly, it’s overstated, not least because

Specialized Analytic DBMS != Data Warehouse Appliance

But basically it makes sense, for two main reasons:

Read more

June 8, 2009

More on Fox Interactive Media’s use of Greenplum

Greenplum’s most important reference is probably its energetic advocate Fox Interactive Media, even ahead of much larger Greenplum user eBay, and notwithstanding Aster Data’s large presence in Fox subsidiary MySpace. I just ran across a “review” of Greenplum by FIM’s Brian Dolan, neatly summarizing his views about Greenplum’s strengths, weaknesses, and uses inside Fox.  Highlights include: Read more

June 7, 2009

Merv Adrian on SAND Technology

Merv Adrian blogged about SAND Technology, casting significant doubt on SAND’s business prospects.  At this point, I can’t say I disagree. On the other hand, SAND does have public, audited financial statements showing it generating more revenue than a lot of other analytic DBMS or archiving vendors probably make. Columnar DBMS vendors doing better than SAND are Sybase, Vertica, maybe Infobright — and who else?

June 7, 2009

Daniel Abadi on Kickfire and related subjects

Daniel Abadi has a new blog, whose first post centers around Kickfire.  The money quote is (emphasis mine):

In order for me to get excited about Kickfire, I have to ignore Mike Stonebraker’s voice in my head telling me that DBMS hardware companies have been launched many times in the past and ALWAYS fail (the main reasoning is that Moore’s law allows for commodity hardware to catch up in performance, eventually making the proprietary hardware overpriced and irrelevant). But given that Moore’s law is transforming into increased parallelism rather than increased raw speed, maybe hardware DBMS companies can succeed now where they have failed in the past.

Good point.

More generally, Abadi speculates about the market for MySQL-compatible data warehousing.  My responses include:

Anyhow, as previously noted, I’m a big Daniel Abadi fan. I look forward to seeing what else he posts in his blog, and am optimistic he’ll live up to or exceed its stated goals.

June 5, 2009

Greenplum update — Release 3.3 and so on

I visited Greenplum in early April, and talked with them again last night. As I noted in a separate post, there are a couple of subjects I won’t write about today. But that still leaves me free to cover a number of other points about Greenplum, including: Read more

June 5, 2009

Greenplum will be announcing some stuff

Greenplum is having a webinar Monday to announce “The Next Big Leap in Data Warehousing” (capitalization theirs). The idea they’ll be talking about is a genuinely good one. And off the top of my head I can only think of a few vendors who implemented it before Greenplum, and even fewer who emphasize it explicitly. So if you like webinars, you might want to listen in. I plan to blog about the general concept soon after the 12:01 am Monday embargo lifts. (Uh, guys, it is Monday rather than Tuesday, right?) Read more

June 3, 2009

What statistics texts and other analytics books should we recommend to people?

On a message board I frequent, two different guys have asked for recommendations for statistics textbooks, in a kind of general knowledge vein.  One phrases it as:

I’m looking for a general purpose statistics textbook for reference purposes.

giving his background as

I took Calculus-level Statistics in college. (i.e. 2 semesters of Calc was a prerequisite; this was the stats class that stat majors took.)

He was a computer science major and is now a professional programmer. (And if somebody can use a tournament-chess-smart programmer with outstandingly clear communication skills in the Buffalo area, I’m pretty sure he’d be glad to know about the opportunity. But I digress …)

The other is a law student with a more general need, which he phrases as

I want to use them for work to help identify trends; do multiple regressions; put values on things that aren’t easy to quantify, etc.

Economics I already know most of the basics from my undergrad studies, but I need more advanced economic theory and such.

He’s interested in what I’d call “pop” analytics books as well as hardcore stuff; e.g., the one book he’s identified already is “Competing on Analytics.” I’m thinking some good vendor white papers might be just as useful for him as that class of books. But he obviously wants to learn the hardcore stuff as well.

I haven’t attended or taught a college course since 1981, and I tend to find the business books on analytics too simple for my tastes, so I’m not the right guy to answer from my own experience.

Does anybody have any helpful thoughts? Thanks!

May 30, 2009

Reinventing business intelligence

I’ve felt for quite a while that business intelligence tools are due for a revolution. But I’ve found the subject daunting to write about because — well, because it’s so multifaceted and big. So to break that logjam, here are some thoughts on the reinvention of business intelligence technology, with no pretense of being in any way comprehensive.

Natural language and classic science fiction

Actually, there’s a pretty well-known example of BI near-perfection — the Star Trek computers, usually voiced by the late Majel Barrett Roddenberry. They didn’t have a big role in the recent movie, which was so fast-paced nobody had time to analyze very much, but were a big part of the Star Trek universe overall. Star Trek’s computers integrated analytics, operations, and authentication, all with a great natural language/voice interface and visual displays. That example is at the heart of a 1998 article on natural language recognition I just re-posted.

As for reality: For decades, dating back at least to Artificial Intelligence Corporation’s Intellect, there have been offerings that provided “natural language” command, control, and query against otherwise fairly ordinary analytic tools. Such efforts have generally fizzled, for reasons outlined at the link above. Wolfram Alpha is the latest try; fortunately for its prospects, natural language is really only a small part of the Wolfram Alpha story.

A second theme has more recently emerged — using text indexing to get at data more flexibly than a relational schema would normally allow, either by searching on data values themselves (stressed by Attivio) or more by searching on the definitions of pre-built reports (the Google OneBox story). SAP’s Explorer is the latest such view, but I find Doug Henschen’s skepticism about SAP Explorer more persuasive than Cindi Howson’s cautiously favorable view. Partly that’s because I know SAP (and Business Objects); partly it’s because of difficulties such as those I already noted.
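To make that distinction a bit more concrete, here is a minimal sketch of the two indexing approaches just described. It is purely illustrative: the table, report names, and query terms are invented, and no particular vendor’s implementation or API is implied.

```python
from collections import defaultdict

# Approach 1: index the data values themselves, so a keyword search can land
# on matching rows without the user knowing the schema up front.
rows = [
    {"customer": "Acme Corp", "region": "Northeast", "revenue": 1_200_000},
    {"customer": "Globex",    "region": "Midwest",   "revenue": 800_000},
]
value_index = defaultdict(set)
for i, row in enumerate(rows):
    for value in row.values():
        for token in str(value).lower().split():
            value_index[token].add(i)

# Approach 2: index the definitions of pre-built reports, so a keyword search
# returns the report most likely to answer the question.
reports = {
    "Quarterly revenue by region": "revenue by region, quarter over quarter",
    "Top customers by revenue": "customer ranking by total revenue",
}
report_index = defaultdict(set)
for name, description in reports.items():
    for token in (name + " " + description).lower().replace(",", " ").split():
        report_index[token].add(name)

print(value_index["northeast"])          # {0} -- rows whose values match the term
print(sorted(report_index["revenue"]))   # both report names
```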

Flexibility and data exploration

It’s a truism that each generation of dashboard-like technology fails because it’s too inflexible. Users are shown the information that will provide them with the most insight. They appreciate it at first. But eventually it’s old hat, and when they want to do something new, the baked-in data model doesn’t support it.

The latest attempts to overcome this problem lie in two overlapping trends — cool data exploration/visualization tools, and in-memory analytics. Read more
