Analytic technologies
Discussion of technologies related to information query and analysis. Related subjects include:
- Business intelligence
- Data warehousing
- (in Text Technologies) Text mining
- (in The Monash Report) Data mining
- (in The Monash Report) General issues in analytic technology
Kickfire update
I talked recently with my clients at Kickfire, especially newish CEO Bruce Armstrong. I also visited the Kickfire blog, which among other virtues features a fairly clear overview of Kickfire technology. (I did my own Kickfire overview in October.) Highlights of the current Kickfire story include:
- Kickfire is initially focused on three heavily overlapping markets — network event analysis, the general Web 2.0/clickstream/online marketing analytics area, and MySQL/LAMP data warehousing.
- Kickfire has blogged about a few sales to unnamed customers in those markets.
- I think network management is a market that’s potentially friendly to five-figure-cost appliances. After all, networking equipment is generally sold in appliance form. Kickfire doesn’t dispute this analysis.
- Kickfire’s sales so far are for databases in the sub-terabyte range, although both Kickfire and its customers intend to run bigger databases soon. (Kickfire describes the range as 300 GB – 1 TB.) Not coincidentally, Kickfire believes that MySQL doesn’t scale very well past 100 GB without a lot of partitioning effort (in the case of data warehouses) or sharding (in the case of OLTP); a minimal sketch of what such sharding entails appears after this list.
- When Bruce became CEO, he let go some sales, marketing, and/or business development folks. He likes to call this a restructuring of Kickfire rather than a reduction-in-force, but anyhow — that’s what happened. There are now about 50 employees, and Kickfire still has most of the $20 million it raised last August in the bank. Edit: The company clarifies that it actually wound up with more sales and marketing people than before.
- Kickfire has thankfully deemphasized various marketing themes I found annoying, such as ascribing great weight to TPC-H benchmarks or explaining why John von Neumann originally made bad choices in his principles of computer design.
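To make that sharding point concrete, here is a minimal, hypothetical sketch of application-level hash sharding, the kind of thing MySQL OLTP shops end up building themselves. The shard names and routing scheme are invented for illustration; this is not Kickfire’s or MySQL’s own machinery.

```python
# Hypothetical illustration of application-level hash sharding for MySQL OLTP.
# Shard names and the routing scheme are invented for this example; real
# deployments also need rebalancing, replication, and cross-shard queries,
# which is exactly the "sharding effort" referred to above.

import hashlib

SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(user_id: int) -> str:
    """Route a row to a shard by hashing its key."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query touching user 12345 must be sent to this one shard -- and any
# query that can't be keyed this way must fan out to all shards.
print(shard_for(12345))
```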
| Categories: Data warehouse appliances, Data warehousing, Kickfire, MySQL, Open source, Web analytics | 1 Comment |
Oracle introduces a half-rack version of Exadata
Oracle has introduced what amounts to a half-rack Exadata machine. My thoughts on this basically boil down to “makes sense” and “no big deal.” Specifically:
- The new Baby Exadata still holds 10 terabytes or more.
- Most specialty analytic DBMS purchases are still for databases of 10 terabytes or smaller.
- Large enterprise data warehouse projects are often being deferred or cut back due to the economic crunch, but smaller projects with credible, quick ROIs are doing fine.
- Exadata is evidently being sold overwhelmingly to Oracle loyalists. Other analytic DBMS vendors aren’t telling me of serious Exadata competition yet. If the market for Exadata is primarily “happy Oracle data warehouse users”, that’s mainly folks who have under 5-10 terabytes of user data today.
- Oracle Exadata beta tests were done on a kind of half-rack configuration anyway.
| Categories: Data warehouse appliances, Data warehousing, Exadata, Oracle | Leave a Comment |
Greenplum claims very fast load speeds, and Fox still throws away most of its MySpace data
Data warehouse load speeds are a contentious issue. Vertica contrived a benchmark with a 5 1/2 terabyte/hour load rate. Oracle has gotten dinged for very low load speeds, which are then hotly debated. I was told recently of a Greenplum partner’s salesman steering a prospect who needed rapid load speeds away from Greenplum, which seemed odd to me.
Now Greenplum has come out swinging, claiming “consistent” load speeds of 4 terabytes/hour at its Fox Interactive Media account, and armed with a customer quote saying just that. Note however that load speeds tend to be proportional to the number of disks, and there are a LOT of disks at that installation.
One way to think about load speeds is to ask: how long would it take to load the entire database? It seems as if the Fox database could be loaded, perhaps not in one week, but certainly in less than two. Flipping that around, the Fox site only has enough capacity to hold less than two weeks of detailed data. (This is not uncommon in network-event kinds of databases.) And a corollary of that is that worldwide storage sales are still constrained by cost, not by absolute limits on the amount of data enterprises would like to store.
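To put numbers on that reasoning, here is the back-of-envelope arithmetic in runnable form. The 4 terabyte/hour rate is Greenplum’s claim; the database size is my assumption for a “multi-hundred terabyte” warehouse, not a disclosed figure.

```python
# Back-of-envelope load-time arithmetic. The 4 TB/hour rate is Greenplum's
# claim; the database size is an assumed figure for a "multi-hundred
# terabyte" warehouse, not a number Fox or Greenplum has disclosed.

load_rate_tb_per_hour = 4.0
assumed_db_size_tb = 800.0  # assumption: somewhere in the multi-hundred-TB range

hours = assumed_db_size_tb / load_rate_tb_per_hour
days = hours / 24
print(f"Full reload: {hours:.0f} hours = {days:.1f} days")  # 200 hours = 8.3 days

# Flipping it around: loading flat out, the site could fill its entire
# capacity in that same time -- hence under two weeks of detailed data.
```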
| Categories: Data warehousing, EAI, EII, ETL, ELT, ETLT, Fox and MySpace, Greenplum, Theory and architecture, Web analytics | 3 Comments |
Database implications if IBM acquires Sun
Reported or rumored merger discussions between IBM and Sun are generating huge amounts of discussion today (some links below). Here are some quick thoughts around the subject of how the IBM/Sun deal — if it happens — might affect the database management system industry. Read more
Pervasive DataRush today
In my first post-fire briefing, I had a long-scheduled dinner with the Pervasive DataRush folks. Much of DataRush’s positioning, feature evolution, and so on remain To Be Determined. Most existing customers and applications remain To Be Disclosed. What’s more, DataRush is a technology to accelerate applications that
- Need to be parallelized
- Should run on SMP rather than shared-nothing hardware
and Pervasive hasn’t done a great job of explaining where the second condition applies.
That said, there’s at least one use case for which DataRush should clearly be considered today. Suppose you have a messy ETL/data transformation task that requires custom code. Then I see three main choices:
- Write the code within the confines of an off-the-shelf ETL tool.
- Write the code to run on an analytic DBMS platform, ideally an MPP/shared-nothing one.
- Use something like DataRush (and I’m not familiar with any good alternatives to DataRush).
In some cases, DataRush may be the best of those three choices.
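For the avoidance of doubt about what “SMP rather than shared-nothing” buys you: below is a generic sketch of fanning a custom transformation across the cores of a single box. It is plain Python multiprocessing, not DataRush’s actual API (DataRush is a Java dataflow library), and the transformation is a placeholder.

```python
# A generic sketch of SMP-parallel data transformation -- the kind of job
# DataRush targets. This is plain Python multiprocessing, NOT DataRush's
# actual (Java, dataflow-graph) API; the transform itself is a placeholder.

from multiprocessing import Pool

def transform(record: str) -> str:
    """Placeholder for the messy custom transformation logic."""
    return record.strip().lower()

def run(records, workers=8):
    # Fan the records out across the cores of one SMP box; no cluster,
    # no shared-nothing partitioning step required.
    with Pool(processes=workers) as pool:
        return pool.map(transform, records, chunksize=10_000)

if __name__ == "__main__":
    print(run(["  Alpha ", " BETA"], workers=2))
```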
| Categories: Analytic technologies, Data integration and middleware, Data warehousing, EAI, EII, ETL, ELT, ETLT, Parallelization, Pervasive Software | 1 Comment |
Independent CEP vendors continue to flounder
Independent CEP (Complex/Event Processing) vendors continue to flounder, at least outside the financial services and national intelligence markets.
- StreamBase once planned to conquer the world, making an impact as big as that of database management itself. Now it has retreated into niche markets.
- Progress Software, a decent-sized company, put a large fraction of its energy into Apama. Little has happened outside the financial services sector.
- Coral8 has some great-sounding ideas. But Coral8 has now merged into Aleri, basically a financial-markets specialist.
- Mike Franklin says some ambitious things on behalf of Truviso, but I haven’t noticed much traction there either.
CEP’s penetration outside of its classical markets isn’t quite zero. Customers include several transportation companies (various vendors), Sallie Mae (Coral8), a game vendor or two (StreamBase, if I recall correctly), Verizon (Aleri, I think), and more. But I just wrote that list from memory — based mainly on not-so-recent deals — and a quick tour of the vendors’ web sites hasn’t turned up much I overlooked. (Truviso does have a recent deal with Technorati, but that’s not exactly a blue chip customer these days.)
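For readers unfamiliar with the category, the core CEP abstraction is a continuous query over a sliding window of events. Here is a minimal, vendor-neutral sketch of the idea; no actual engine’s API looks like this, and real products express such queries declaratively at far higher throughput.

```python
# Minimal, vendor-neutral illustration of the core CEP idea: a continuous
# aggregate over a sliding time window of events. Real engines (StreamBase,
# Apama, Coral8/Aleri, Truviso) express this declaratively and run it at
# far higher throughput; this is just the concept.

from collections import deque
import time

class SlidingWindowCount:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events = deque()  # timestamps of events still in the window

    def on_event(self, ts: float) -> int:
        self.events.append(ts)
        # Expire events that have aged out of the window.
        while self.events and self.events[0] < ts - self.window:
            self.events.popleft()
        return len(self.events)  # e.g., fire an alert if this exceeds a threshold

counter = SlidingWindowCount(window_seconds=60)
print(counter.on_event(time.time()))
```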
So far as I can tell, this is a new version of a repeated story. Read more
| Categories: Aleri and Coral8, Analytic technologies, Business intelligence, Progress, Apama, and DataDirect, StreamBase, Streaming and complex event processing (CEP), Truviso | 12 Comments |
Three Greenplum customers’ applications of MapReduce
Greenplum (and Truviso) advisor Joseph Hellerstein offers a few examples of MapReduce applications (specifically Greenplum MapReduce), namely:
The big aha moment occurred for me during our panel discussion, which included Luke Lonergan from Greenplum, Roger Magoulas from O’Reilly, and Brian Dolan from Fox Interactive Media (which runs MySpace among other web properties).
Roger talked about using MapReduce to extract structured entities from text for doing tech trend analyses from billions of rows of online job postings. Brian (who is a mathematician by training) was talking about implementing conjugate gradient and Support Vector Machines in parallel SQL to support “hypertargeting” for advertisers. I mentioned how Jonathan Goldman at LinkedIn was using SQL and MapReduce to do graph algorithms for social network analysis.
Incidentally: While it’s been some months since I asked, my sense is that the O’Reilly text extraction is home-grown, and primitive compared to what one could do via commercial products. That said, if the specific application is examining job postings, I’m not sure how much value more sophisticated products would add. After all, tech job listings are generally written in a style explicitly designed to ensure that most or all of their meaning is conveyed simply by a bag of keywords. And by the way, this effort has been underway for quite some time.
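To illustrate the bag-of-keywords point, here is a toy MapReduce-style skill count over job postings. It is generic Python rather than Greenplum’s MapReduce interface, and the postings and skill lexicon are invented for the example.

```python
# Toy MapReduce-style keyword extraction over job postings, illustrating
# why a bag of keywords captures most of a tech listing's meaning. This is
# generic Python, not Greenplum MapReduce; the postings and skill lexicon
# are invented for the example.

from collections import Counter
from itertools import chain

SKILLS = {"java", "sql", "hadoop", "mysql", "python"}

def mapper(posting: str):
    # Emit (skill, 1) for each known keyword in the posting -- the "map" step.
    for word in posting.lower().split():
        token = word.strip(".,;()")
        if token in SKILLS:
            yield (token, 1)

def reduce_counts(pairs):
    # Sum counts per skill -- the "reduce" step.
    totals = Counter()
    for skill, n in pairs:
        totals[skill] += n
    return totals

postings = ["Senior Java engineer; SQL and Hadoop a plus.",
            "MySQL DBA wanted. SQL required."]
print(reduce_counts(chain.from_iterable(mapper(p) for p in postings)))
```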
Related link
- Greenplum has a page on the O’Reilly relationship. However, the part that isn’t behind a registration barrier is trivial — and I wouldn’t know one way or the other about the registration-required part.
| Categories: Analytic technologies, Data warehousing, Fox and MySpace, Greenplum, MapReduce, Specific users, Web analytics | 3 Comments |
Greenplum discloses a bit of pricing
Getting information about Greenplum pricing is not always easy. However, a bit was disclosed in a recent Greenplum blog post, which said:
… roughly $200k … For that amount you get the hardware, software and services to stand up around a 4TB (usable) Greenplum DW …
That works out to roughly $50K per usable terabyte, hardware and services included. No doubt there are large quantity discounts for much bigger systems.
| Categories: Data warehousing, Greenplum, Pricing | Leave a Comment |
Fox Interactive Media’s multi-hundred terabyte database running on Greenplum
Greenplum’s largest named account is Fox Interactive Media — the parent organization of MySpace — which has a multi-hundred terabyte database that it uses for hardcore data mining/analytics. Greenplum has been engaging in regrettable business practices, claiming that it is in the process of supplanting Aster Data at Fox/MySpace. In fact, MySpace’s use of Aster is more mission-critical than Fox’s use of Greenplum, and is increasing significantly.
Still, as Greenplum’s gushing customer video with Fox Interactive Media* illustrates, the Fox/Greenplum database is impressive on its own merits. Read more
| Categories: Analytic technologies, Aster Data, Data warehousing, Fox and MySpace, Greenplum, Specific users, Theory and architecture, Web analytics | 3 Comments |
MySpace’s multi-hundred terabyte database running on Aster Data
Aster Data has put up a blog post embedding and summarizing a video about its MySpace account. Basic metrics include:
The combined Aster deployment now has 200+ commodity hardware servers working together to manage 200+ TB of data that is growing at 2-3TB per day by collecting 7-10B events that happen on one of the world[’s largest websites].
I’m pretty sure that’s counting correctly (i.e., user data).* Read more
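For scale context, here is simple arithmetic on the metrics Aster quotes, using my midpoint assumption for the growth rate; nothing below is additional disclosed information.

```python
# Simple arithmetic on the metrics Aster quotes; the growth rate is my
# midpoint assumption, and nothing here is additional disclosed information.

servers = 200            # "200+ commodity hardware servers"
total_tb = 200.0         # "200+ TB of data"
growth_tb_per_day = 2.5  # assumed midpoint of "2-3 TB per day"

print(f"~{total_tb / servers:.1f} TB per server")   # ~1.0 TB per server
print(f"~{total_tb / growth_tb_per_day:.0f} days "
      f"to accumulate the current total at today's rate")  # ~80 days
```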
