An interesting claim regarding BI openness
Analyst conference calls about merger announcements are generally pretty boring. Indeed, the companies involved tend to feel they are legally barred from saying anything interesting, by mandate of both the antitrust regulators and the SEC.
Still, such calls are joyful events, full of strategic happy talk. If one is really lucky, there may be a virtuoso tap-dancing exhibition as well. On today's IBM/Cognos call, Cognos CEO Rob Ashe was asked whether he thought Cognos' independence or lack thereof was as important today as he said it was after SAP announced its BOBJ takeover. Without missing a beat, he responded that there were two kinds of openness:
- Database openness (not important)
- ERP/business process openness (indeed important)
Hmm. I'm not so sure I agree. To begin with, there aren't just two major points of potential integration. There's also a whole lot of middleware: obviously data integration, but also app servers, portals, and query execution acceleration.
IBM is buying Cognos – quick reactions
Some quick thoughts in connection with IBM’s just-announced plans to acquire Cognos.
1. Ironically, IBM just put out a press release describing a strong-sounding reseller partnership with Business Objects. The deal specified that
Business Objects will begin distributing and reselling IBM DB2 Warehouse with Business Objects XI and CFO Performance Management solutions. In addition, IBM will include a starter edition of Business Objects XI with DB2 and DB2 Warehouse.
Jeff Jones of IBM told me that they also had a partnership with Cognos, but with different details. I guess Cognos will eventually take over that deal, which is an obvious negative for Business Objects.
2. More generally, I can see where Cognos will now likely gain share at DB2 sites, and IBM/Ascential at Cognos sites. I can’t as easily see why Cognos would now lose share at Oracle or Teradata or Netezza sites, or why Ascential would lose share at SAP/BOBJ sites. So there seem to be some genuine synergies here, albeit perhaps modest ones.
3. Thus, I think the negatives in this deal for the remaining independents (Microstrategy, Information Builders, Informatica, etc.) will somewhat outweigh the positives.
4. I’m not a big fan of Cognos’ management, former CEO Ron Zambonini and a few other freethinkers excepted. So from that standpoint I don’t think they have a lot to lose being taken over by Big Blue.
5. Obviously, with most of the dominoes now fallen, the big question is about the future of BI as it – potentially – gets integrated into much larger enterprise technology suites. And I think the answer to that depends a lot more on technology than most people seem to realize. More on that subject later, but here’s one hint:
I think fixing the disappointment that is dashboards will involve taking query volumes up by at least 2 to 3 orders of magnitude. So as great as recent innovations in analytic query performance have been, I hope and trust that so far we’ve only seen the tip of the iceberg.
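To put rough numbers on that hint, here's a back-of-envelope sketch in Python. Every workload figure in it is an assumption I invented for illustration, not a measurement from any vendor:

```python
# Back-of-envelope sketch: how interactive, per-user dashboards could
# multiply query volumes by 2-3 orders of magnitude. All numbers below
# are invented assumptions, purely for illustration.

# Today: a shared dashboard built from canned queries, refreshed hourly,
# with results cached and served to everyone.
canned_queries = 20
refreshes_per_day = 24
static_queries_per_day = canned_queries * refreshes_per_day   # 480

# Tomorrow: personalized, interactive dashboards that hit the database
# per user -- drill-downs, ad hoc filters, near-real-time refresh.
users = 1_000
widgets_per_dashboard = 10
interactions_per_user_per_day = 30
interactive_queries_per_day = (
    users * widgets_per_dashboard * interactions_per_user_per_day
)   # 300,000

ratio = interactive_queries_per_day / static_queries_per_day
print(f"static:      {static_queries_per_day:,} queries/day")
print(f"interactive: {interactive_queries_per_day:,} queries/day")
print(f"ratio:       {ratio:,.0f}x")   # ~625x, i.e. between 10^2 and 10^3
```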
Links:
1. eWeek on the IBM/Business Objects deal.
2. Press release on the IBM/Business Objects deal.
3. Press release on the IBM/Cognos deal.
Vertica update – HP appliance deal, customer information, and more
Vertica quietly announced an appliance bundling deal with HP and Red Hat today. That got me quickly onto the phone with Vertica’s Andy Ellicott, to discuss a few different subjects. Most interesting was the part about Vertica’s customer base, highlights of which included:
- Vertica’s claim to have “50” customers includes a bunch of unpaid licenses, many of them in academia.
- Vertica has about 15 paying customers.
- Based on conversations with mutual prospects, Vertica believes that’s more customers than DATAllegro has. (Of course, each DATAllegro sale is bigger than one of Vertica’s. Even so, I hope Vertica is wrong in its estimate, since DATAllegro told me its customer count was “double digit” quite a while ago.)
- Most Vertica customers manage over 1 terabyte of user data. A couple have bought licenses showing they intend to manage 20 terabytes or so.
- Vertica’s biggest customer/application category – existing customers and sales pipelines alike – is call detail records for telecommunications companies. (Other data warehouse specialists also have activity in the CDR area.) Major applications are billing assurance (getting the inter-carrier charges right; a toy sketch follows this list) and marketing analysis. Call center uses are still in the future.
- Vertica’s other big market to date is investment research/tick history. Surely not coincidentally, this is a big area of focus for Mike Stonebraker, evidently at both companies for which he’s CTO. (The other, of course, is StreamBase.)
- Runners-up in market activity are clickstream analysis and general consumer analytics. These seem to be present in Vertica’s pipeline more than in the actual customer base.
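For readers who haven't run into billing assurance before, here's a toy sketch of the core reconciliation. The record layouts and the rate are invented for illustration; real systems do this over billions of CDRs, which is where the data warehouse comes in:

```python
# Toy sketch of CDR-based billing assurance: re-rate our own call detail
# records and reconcile them against what the interconnect partner billed.
# Record layouts and the rate are invented for illustration.

RATE_PER_MINUTE = 0.02   # assumed inter-carrier rate

# Our own CDRs: call_id -> duration in minutes
our_cdrs = {"c1": 10, "c2": 3, "c3": 42}

# Charges on the partner carrier's invoice: call_id -> amount billed
partner_invoice = {"c1": 0.20, "c2": 0.09, "c4": 1.00}

for call_id, minutes in our_cdrs.items():
    expected = round(minutes * RATE_PER_MINUTE, 2)
    billed = partner_invoice.get(call_id)
    if billed is None:
        print(f"{call_id}: we have a CDR but the partner billed nothing")
    elif billed != expected:
        print(f"{call_id}: expected {expected}, partner billed {billed}")

# Calls the partner billed that we have no record of at all:
for call_id in partner_invoice.keys() - our_cdrs.keys():
    print(f"{call_id}: partner billed a call we have no CDR for")
```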
Clarifying SAS-in-the-DBMS, and other SAS tidbits
I followed up with Keith Collins of SAS today about SAS-in-the-database, expanding on what I learned or thought I did when we talked last month. Here’s the scoop:
SAS users do a lot of data filtering, aka data preparation, in SAS. These filters have WHERE clauses, just like SQL. However, only some of them map directly to actual SQL WHERE clauses. SAS is now implementing many of the rest as UDFs (User-Defined Functions), one DBMS at a time, starting with Teradata. In addition, SAS users can write custom filters that get registered as UDFs. This capability will be released with SAS 9.2. (The timing on SAS 9.2 is in line with the comment thread to my prior post on SAS-in-the-DBMS.)
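To illustrate the pattern (and only the pattern; the function and UDF names below are hypothetical, not SAS's or Teradata's actual interfaces), the point of pushdown is that the predicate travels to the database instead of the data traveling to SAS:

```python
# Minimal sketch of the filter-pushdown pattern. The function and UDF
# names are hypothetical illustrations, not SAS's or Teradata's APIs.

def build_query(table, filter_expr, udf_name=None):
    """Return SQL that evaluates a filter inside the DBMS.

    If the filter maps to plain SQL, it becomes an ordinary WHERE clause.
    Otherwise we assume its logic has already been registered in the
    database as a UDF, and we invoke that UDF in the WHERE clause instead.
    """
    if udf_name is None:
        # Case 1: the filter translates directly to SQL.
        return f"SELECT * FROM {table} WHERE {filter_expr}"
    # Case 2: the filter runs as a registered UDF, so only rows that
    # pass it ever leave the database.
    return f"SELECT * FROM {table} WHERE {udf_name}({filter_expr}) = 1"

# A filter that is just SQL:
print(build_query("claims", "amount > 10000 AND state = 'CA'"))

# A filter too complex for plain SQL, assumed registered in the DBMS as
# the hypothetical UDF sas_risk_filter(), so it runs next to the data:
print(build_query("claims", "amount, state, history_code",
                  udf_name="sas_risk_filter"))
```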
Netezza cites three warehouses over 50 terabytes
Netezza is finally making it clear that they run some largish warehouses. Their latest press release cites Catalina Marketing, Epsilon, and NYSE Euronext as having 50+ terabytes each. I checked with Netezza’s Marketing VP Ellen Rubin, and she confirmed that those are clean figures — user data, single warehouses, etc. Ellen further tells me that Netezza’s total count of warehouses that big is “significantly more” than the 3 named in the release.
Of course, this makes sense, given that Netezza’s largest box, the NPS 10800, handles up to 100 terabytes. And Catalina was named as having bought a 10800 in a press release back in December 2006.
ParAccel opens the kimono slightly
Please do not rely on the parts of this post that draw a distinction between in-memory and disk-based operation. See our February 18, 2008 post about ParAccel instead. It turns out that communication with ParAccel was yet worse than I had realized.
Officially launched today at the TDWI conference, ParAccel is out to compete with Netezza. Right out of the chute, ParAccel may have surpassed Netezza in at least one area: pointlessly annoying secrecy. (In other regards I love them dearly, but that paranoia can be a real pain.) As best I can remember, here are some things about ParAccel that I both am allowed to say and find interesting:
- ParAccel offers a columnar, MPP data warehouse DBMS, called the ParAccel Analytic Database. (A toy illustration of why columnar layouts matter for analytic scans follows this list.)
- ParAccel’s product runs in two main modes. “Maverick” is normal, stand-alone mode. “Amigo” mode amounts to a plug-compatible accelerator for Oracle or Microsoft SQL Server. Early sales and marketing were concentrated on SQL Server Amigo mode.
- ParAccel’s product also runs in another pair of modes – in-memory and disk-based. Early sales and marketing were concentrated on in-memory mode. Hybrid memory-centric processing sounds like something for a future release.
- Sun has a reseller partnership with ParAccel, focused on in-memory mode.
- Sun and ParAccel published record-shattering 100 gigabyte, 300 gigabyte, and 1 terabyte TPC-H benchmarks today, based on in-memory mode. (If you’d like to throw 13 terabytes of disk at 1 terabyte of user data, running simple and repetitive queries, that benchmark might be a useful guide to your own experience. But hey – that’s a big improvement on the prior champion, which used 40 terabytes of disk. To ParAccel’s credit, they’re not pretending that this is a bigger deal than it is.)
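As promised in the first bullet above, here's a toy calculation of why columnar layouts pay off on scan-heavy analytic queries. The table shape and query are made-up assumptions:

```python
# Toy arithmetic: bytes scanned by a row store vs. a column store for an
# analytic query touching only a few columns. All shapes are made up.

rows = 1_000_000_000       # a billion-row fact table
columns = 20
bytes_per_value = 8        # assume fixed-width values, no compression
columns_in_query = 2       # e.g. SELECT SUM(revenue) ... GROUP BY region

# A row store reads whole rows even when the query uses 2 of 20 columns.
row_store_bytes = rows * columns * bytes_per_value

# A column store reads only the columns the query references.
column_store_bytes = rows * columns_in_query * bytes_per_value

print(f"row store:    {row_store_bytes / 1e9:,.0f} GB scanned")
print(f"column store: {column_store_bytes / 1e9:,.0f} GB scanned")
print(f"advantage:    {row_store_bytes / column_store_bytes:.0f}x less I/O")
# Column stores also compress well, which typically widens the gap.
```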
Infobright responds
An Infobright employee posted something quite reasonable-looking in response to my inaugural post about BrightHouse. Even so, Infobright asked if they could substitute something with a slightly different tone. I agreed. Here's what they sent in.
Curt, thanks for the write-up and the opportunity to talk about our customer success stories. As you say, our customer story is definitely “more than zero.” We are addressing a number of critical customer issues with our unique approach to data warehousing.
Infobright currently has 5 customers – customers that have bucked the trend of throwing hardware at the problem. To be perfectly braggadocio about this, we have never lost a competitive proof of concept in which we’ve been engaged. This is accomplished with the horsepower of one box (though for redundancy customers may deploy multiple boxes with a load balancer).
Dude, you stole my joke!
October 15: We know what BEA is — now it is just a matter of negotiating the price
October 25: We’ve already established what you are, now we’re just working out a price
The news in the latter is that BEA has admitted it.
Note: Of course, the original joke is so old that it is variously attributed to George Bernard Shaw (most credibly), Winston Churchill, and Oscar Wilde.
DATAllegro discloses a few numbers
Privately held DATAllegro just announced a few tidbits about financial results and suchlike for the fiscal year ended June 2007. I sent over a few clarifying questions yesterday. Responses included:
- Yes, the company experienced 330% year-over-year revenue growth.
- The majority of DATAllegro customers have bought systems in the 25-100 terabyte range.
- One system over 250 terabytes has been in production for months (surely the one I previously wrote about); a second is being installed.
- DATAllegro has “about 100” employees. By way of comparison, Netezza reported 225 full-time employees for the year ended January 2007 – which probably means as of January 31, 2007.
All told, it sounds as if DATAllegro is more than 1/3 the size of Netezza (100/225 works out to about 44% on the employee-count metric), although given its higher system size and price points I’d guess it has well under 1/3 as many customers.
Here’s a link. I’ll likely edit that to something more permanent-seeming later, and generally spruce this up when I’m not so rushed.
Vertica — just star and snowflake schemas?
One of the longest-running technotheological disputes I know of is the one pitting flat/normalized data warehouse architectures against cubes, stars, and snowflake schemas. Teradata, for example, is a flag-waver for the former camp; Microstrategy is firmly in the latter. (However, that doesn’t keep lots of retailers from running Microstrategy on Teradata boxes.) Attensity (a good Teradata partner) is in the former camp; text mining rival Clarabridge (sort of a Microstrategy spinoff) is in the latter. And so on.
Vertica is clearly in the star/snowflake camp as well. I asked them about this, and Vertica’s CTO Mike Stonebraker emailed a response. I’m reproducing it below, with light edits; the emphasis is also mine. Key points include:
- Almost everybody (that Vertica sees) wants stars and snowflakes, so that’s what Vertica optimizes for.
- Replicating small dimension tables across nodes is great for performance. (A sketch follows this list.)
- Even so, Vertica is broadening its support for more general schemas as well.
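Before Mike's note, here's the promised sketch of the replicated-dimension point, with invented data and no claim to match Vertica's actual implementation. If every node holds a full copy of a small dimension table while the big fact table is hash-partitioned, a star join runs entirely node-locally, and only small partial aggregates cross the network:

```python
# Minimal sketch of why replicating small dimension tables helps in an
# MPP star schema. Invented data; not Vertica's actual implementation.

NODES = 4

# Small dimension table: replicated in full on every node.
dim_store = {1: "Boston", 2: "Ottawa", 3: "Austin"}

# Big fact table: hash-partitioned across nodes by store_id.
fact_sales = [(1, 19.99), (2, 5.00), (3, 7.50), (1, 12.00), (3, 3.25)]
partitions = [[] for _ in range(NODES)]
for store_id, amount in fact_sales:
    partitions[hash(store_id) % NODES].append((store_id, amount))

# Each node joins its fact slice against its LOCAL dimension copy --
# no fact rows cross the network -- then ships back partial aggregates.
partials = []
for node_rows in partitions:
    local = {}
    for store_id, amount in node_rows:
        city = dim_store[store_id]          # local lookup, no shuffle
        local[city] = local.get(city, 0.0) + amount
    partials.append(local)

# The coordinator merges the small per-node results.
totals = {}
for partial in partials:
    for city, amount in partial.items():
        totals[city] = totals.get(city, 0.0) + amount
print(totals)   # {'Boston': 31.99, 'Ottawa': 5.0, 'Austin': 10.75}
```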
Great question. This is something that we’ve thought a lot about and have done significant research on with large enterprise customers. … short answer is as follows:
Vertica supports star and snowflake schemas because that is the desired data structure for data warehousing. The overwhelming majority of the schemas we see are of this form, and we have highly optimized for this case.
