Business intelligence notes and trends
I keep not finding the time to write as much about business intelligence as I’d like to. So I’m going to do one omnibus post here covering a lot of companies and trends, then circle back in more detail when I can. Top-level highlights include:
- Jaspersoft has a new v3.5 product release. Highlights include multi-tenancy-for-SaaS and another in-memory OLAP option. Otherwise, things sound qualitatively much as I wrote last September.
- Inforsense has a cool composite-analytical-applications story. More precisely, they said my phrase “analytics-oriented EAI” was an “exceptionally good” way to describe their focus. Inforsense’s biggest target market seems to be health care, research and clinical alike. Financial services is next in line.
- Tableau Software “gets it” a little bit more than other BI vendors about the need to decide for yourself how to define metrics. (Of course, it’s possible that other “exploration”-oriented new-style vendors are just as clued-in, but I haven’t asked in the right way.)
- Jerome Pineau’s favorable view of Gooddata and unfavorable view of Birst are in line with other input I trust. I’ve never actually spoken with the Gooddata folks, however.
- Seth Grimes suggests the qualitative differences between open-source and closed-source BI are no longer significant. He has a point, although I’d frame it more as being about the difference between the largest (but acquisition-built) BI product portfolios and the smaller (but more home-grown) ones, counting open source in the latter group.
- I’ve discovered about five different in-memory OLAP efforts recently, and no doubt that’s just the tip of the iceberg.
- I’m hearing ever more about public-facing/extranet BI. Information Builders is a leader here, but other vendors are talking about it too.
A little more detail
Read more
| Categories: Application areas, Business intelligence, Information Builders, Inforsense, Jaspersoft, QlikTech and QlikView, Scientific research, Tableau Software | 8 Comments |
Lots of analytic DBMS vendors are hiring
After writing about a Twitter jobs page, it occurred to me to check out whether analytic DBMS vendors are still hiring. Based on the Careers pages on their websites, I determined that Aster, Greenplum, Kickfire, and ParAccel all evidently are, in various mixes of (mainly) technical and field positions. At that point I got bored and stopped.
I didn’t choose those vendors entirely at random. If I had to name three vendors who are said to have had small layoffs at some point over the past few quarters, it would be ParAccel, Greenplum, and Kickfire. So if even they are hiring, the analytic DBMS sector is still pretty healthy … or at least thinks it is. 😉
| Categories: Aster Data, Data warehousing, Greenplum, Kickfire, ParAccel | 5 Comments |
Somebody is spreading Teradata acquisition rumors again
A mass email from Tom Coffing was forwarded to me today that starts:
I have heard from reliable sources that both HP and SAP have purchased more than 5% of Teradata stock. My sources tell me that both companies appear to be positioning themselves for a bid.
I got my version of the same email from Coffing yesterday with a different introduction but otherwise the same substance (he’s pushing a new product of his). It also had a different From address.
Possible explanations include but are not limited to:
- Coffing knows something (seems unlikely, but I haven’t actually checked www.sec.gov to confirm or disconfirm; a sketch of how one could check follows this list)
- Coffing thinks he knows something
- Coffing just made this up (I hope not)
- There’s an April Fool’s Day prank going on (not by me — after my bizarre March, I’m recusing myself from April Fool’s pranks this year)
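For what it’s worth, the rumor is checkable in principle: a stake of more than 5% in a public company generally has to be disclosed to the SEC on a Schedule 13D or 13G. Below is a rough sketch of how one might poll EDGAR’s standard browse-edgar interface for such filings against Teradata. I haven’t run it; it’s illustrative only, and the User-Agent value is a placeholder.

```python
# Rough sketch: look for Schedule 13D/13G filings naming Teradata on EDGAR.
# A purchase of more than 5% of a public company's stock generally must be
# disclosed on a Schedule 13D or 13G, so the absence of such filings would
# tend to disconfirm the rumor. Query parameters follow EDGAR's standard
# browse-edgar interface; adjust the company name as needed.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "action": "getcompany",
    "company": "Teradata",
    "type": "SC 13",      # matches both SC 13D and SC 13G filings
    "owner": "include",
    "count": "40",
})
url = "https://www.sec.gov/cgi-bin/browse-edgar?" + params

# EDGAR asks automated clients to identify themselves in the User-Agent.
req = urllib.request.Request(url, headers={"User-Agent": "research script you@example.com"})
with urllib.request.urlopen(req) as resp:
    page = resp.read().decode("utf-8", errors="replace")

for form in ("SC 13D", "SC 13G"):
    print(form, "appears" if form in page else "does not appear", "in the filing list")
```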
| Categories: Data warehousing, HP and Neoview, SAP AG, Teradata | 4 Comments |
Twitter is considering using MapReduce
From a Twitter job listing (formatting mine). The most interesting section is “Additional preferred experience.” Read more
| Categories: Analytic technologies, Data warehousing, MapReduce, Specific users, Web analytics | 6 Comments |
What you learn in statistics class
xkcd does it again. Previous links to xkcd here and here.
| Categories: Analytic technologies, Fun stuff, Humor | 2 Comments |
Aleri update
My skeptical remarks on the Aleri/Coral8 merger generated some pushback. Today I actually got around to talking with John Morell, who was marketing chief at Coral8 and has remained with the combined company. First, some quick metrics:
- The combined Aleri has around 100 employees, split roughly 60-40 between Aleri and Coral8.
- The combined Aleri has around 80 customers. All of Aleri’s, with one sort-of exception at Banks.com, were in financial services. A large minority of Coral8’s were in financial services too.
- However, half of Aleri’s marketing spend going forward is budgeted outside the financial services markets. Not unreasonably, John presents this as a proof point that Aleri is serious about selling to other markets.
- Aleri had 12-14 people in the UK pre-merger. Coral8 had none in Europe.
- Coral8 had 15 OEMs pre-merger, some actually generating revenue. Aleri had substantially none.
- Coral8 had been closing a “couple” of customers/quarter in online commerce. But recently, that rate ramped up to a “few.”
- Aleri’s engine is used to handle “many” hundreds of thousands of messages per second. Coral8’s highest-throughput user processes 100-150,000 messages/second.
John is sticking by the company line that there will be an integrated Aleri/Coral8 engine in around 12 months, with all the performance optimization of Aleri and the flexibility of Coral8, that compiles and runs code from any of the development tools either Aleri or Coral8 now has. While that’s a much faster timeline than, say, the Informix/Illustra or Oracle/IRI Express integrations managed, John insists that integrating CEP engines is a lot easier. We’ll see.
I focused most of the conversation on Aleri’s forthcoming efforts outside the financial services market. John sees these as being focused around Coral8’s old “Continuous (Business) Intelligence” message, enhanced by Aleri’s Live OLAP. Aleri Live OLAP is an in-memory OLAP engine, real-time/event-driven, fed by CEP. Queries can be submitted via ODBO/MDX today. XMLA is coming. John reports that quite a few Coral8 customers are interested in Live OLAP, and positions the capability as one Coral8 would have had to develop had the company remained independent. Read more
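To make the Live OLAP query story a little more concrete, here is a minimal sketch of what submitting an MDX query over XMLA generally looks like. Aleri hasn’t given me endpoint details, so the URL, cube, measure, and dimension names below are all invented for illustration; the envelope itself is just the standard XMLA Execute call.

```python
# Minimal sketch of an MDX query submitted over XMLA, the interface John says
# is coming (ODBO/MDX works today). The endpoint URL, cube name, measure, and
# dimension are invented for illustration -- Aleri hasn't published specifics.
import urllib.request

XMLA_URL = "http://aleri-server.example.com/xmla"  # hypothetical endpoint

mdx = """
SELECT {[Measures].[Message Count]} ON COLUMNS,
       NON EMPTY [Symbol].Members ON ROWS
FROM [LiveTrades]
"""

envelope = f"""<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command><Statement>{mdx}</Statement></Command>
      <Properties><PropertyList>
        <Format>Multidimensional</Format>
      </PropertyList></Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    XMLA_URL,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml",
        "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # raw XMLA cellset response
```

ODBO would accomplish much the same thing from Excel or another Windows client; XMLA just makes the same MDX reachable over plain HTTP.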
Kickfire update
I talked recently with my clients at Kickfire, especially newish CEO Bruce Armstrong. I also visited the Kickfire blog, which among other virtues features a fairly clear overview of Kickfire technology. (I did my own Kickfire overview in October.) Highlights of the current Kickfire story include:
- Kickfire is initially focused on three heavily overlapping markets — network event analysis, the general Web 2.0/clickstream/online marketing analytics area, and MySQL/LAMP data warehousing.
- Kickfire has blogged about a few sales to unnamed customers in those markets.
- I think network management is a market that’s potentially friendly to five-figure-cost appliances. After all, networking equipment is generally sold in appliance form. Kickfire doesn’t dispute this analysis.
- Kickfire’s sales so far are for databases in the sub-terabyte range, although both Kickfire and its customers intend to run bigger databases soon. (Kickfire describes the range as 300 GB – 1 TB.) Not coincidentally, Kickfire believes that MySQL doesn’t scale very well past 100 GB without a lot of partitioning effort (in the case of data warehouses) or sharding (in the case of OLTP); there’s a sketch of what that sharding amounts to after this list.
- When Bruce became CEO, he let go some sales, marketing, and/or business development folks. He likes to call this a restructuring of Kickfire rather than a reduction-in-force, but anyhow — that’s what happened. There are now about 50 employees, and Kickfire still has most of the $20 million it raised last August in the bank. Edit: The company clarifies that it actually wound up with more sales and marketing people than before.
- Kickfire has thankfully deemphasized various marketing themes I found annoying, such as ascribing great weight to TPC-H benchmarks or explaining why John von Neumann originally made bad choices in his principles of computer design.
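Since “sharding” gets thrown around loosely, here is a minimal sketch of what it usually amounts to on the OLTP side: the application hashes a key to pick among several independent MySQL servers, and takes on all the routing and cross-shard work the database would otherwise do. Host names and the key below are invented for illustration.

```python
# A minimal sketch of the sharding Kickfire says MySQL installations resort to
# past ~100 GB: rows are spread across several MySQL servers by hashing a key,
# and the application (not the database) does the routing.
import hashlib

SHARDS = [
    "mysql://shard0.example.com/app",
    "mysql://shard1.example.com/app",
    "mysql://shard2.example.com/app",
    "mysql://shard3.example.com/app",
]

def shard_for(user_id: int) -> str:
    """Pick the shard holding a given user's rows."""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for this user must go to the same shard, and queries that span
# users (typical of data warehousing) have to be fanned out and merged by
# hand -- which is exactly the effort Kickfire is pointing at.
print(shard_for(42))
```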
| Categories: Data warehouse appliances, Data warehousing, Kickfire, MySQL, Open source, Web analytics | 1 Comment |
SAS in its own cloud
The Register has a fairly detailed article about SAS expanding its cloud/SaaS offerings. I disagree with one part, namely:
SAS may not have a choice but to build its own cloud. Given the sensitive nature of the data its customers analyze, moving that data out to a public cloud such as the Amazon EC2 and S3 combo is just not going to happen.
And even if rugged security could make customers comfortable with that idea, moving large data sets into clouds (as Sun Microsystems discovered with the Sun Grid) is problematic. Even if you can parallelize the uploads of large data sets, it takes time.
But if you run the applications locally in the SAS cloud, then doing further analysis on that data is no big deal. It’s all on the same SAN anyway, locked down locally just as you would do in your own data center.
I fail to see why SAS’s campus would be better than leading hosting companies’ data centers on either data privacy/security or data upload speed. Rather, I think the major reasons for SAS building its own data center for cloud computing probably focus on: Read more
| Categories: SAS Institute, Software as a Service (SaaS) | 15 Comments |
Why should anybody worry about Oracle’s tweaks to Red Hat Enterprise Linux (RHEL)?
Internet News offers an overview of how Oracle’s own version of Red Hat Enterprise Linux does or doesn’t differ from generic RHEL. The defining example appears to be an alternate file system that Oracle finds useful, but Red Hat doesn’t want to bother offering. (Oracle says it donates all extensions back to the community, putting the onus on the community to decide whether or not to use them in Linux versions other than Oracle’s.) The question is:
Does this count as an Oracle fork of (Red Hat Enterprise) Linux or doesn’t it?
My answer is:
Who cares? Read more
| Categories: Open source, Oracle | 1 Comment |
Oracle introduces a half-rack version of Exadata
Oracle has introduced what amounts to a half-rack Exadata machine. My thoughts on this basically boil down to “makes sense” and “no big deal.” Specifically:
- The new Baby Exadata still holds 10 terabytes or more.
- Most specialty analytic DBMS purchases are still for databases of 10 terabytes or smaller.
- Large enterprise data warehouse projects are often being deferred or cut back due to the economic crunch, but smaller projects with credible, quick ROIs are doing fine.
- Exadata is evidently being sold overwhelmingly to Oracle loyalists. Other analytic DBMS vendors aren’t telling me of serious Exadata competition yet. If the market for Exadata is primarily “happy Oracle data warehouse users”, that’s mainly folks who have <5-10 terabytes of user data today.
- Oracle Exadata beta tests were done on a kind of half-rack configuration anyway.
| Categories: Data warehouse appliances, Data warehousing, Exadata, Oracle | Leave a Comment |
