Hadoop marketing themes that deserve to be ignored
This is part of a four-post series, covering:
- Annoying Hadoop marketing themes that should be ignored (this post).
- Hadoop versions and distributions, and their readiness or lack thereof for production.
- In general, how “enterprise-ready” is Hadoop?
- HBase 0.92.
The posts depend on each other in various ways.
I am subjected to much Hadoop marketing. Indeed, I even help various clients inflict Hadoop marketing upon the world. But a guy’s got to draw a line somewhere, and there are certain Hadoop marketing themes that I just refuse to take seriously.
1. Big data. I think the term “big data” long ago jumped the shark. If a firm uses the term “big data”, I teeth-grittingly let it pass. But if they send me PR email offering to “explain” the benefits or “real meaning” of “big data”, my response is apt to be unkind.
2. Conference-timed news. I’ve never liked the custom of multiple vendors piling announcements into the same conference week. It seems like a calculated strategy to ensure getting the least possible mind share and attention — unless, of course, your announcement is so lame that brief mentions in conference-week roundups are the most visibility you can hope to get. Even so, many vendors make the marketing choice to pile on. Fine. But I’ll write in response if and when I feel like it.
3. Contribution Olympics. The Urinary Olympics as to who contributed more lines of code, patches, whatever to various Hadoop sub-projects got pretty silly; and although it peaked last year, elements of it are with us still. I do see two scenarios where the whole discussion might have genuine value, namely:
- When two vendors — typically Hortonworks and Cloudera — differ about a particular Hadoop sub-project, I’m inclined to believe the one who asserts “Well, we built most of, and then extensively tested, the last release of this, and it does what we say it does.”
- If you have a specialized desire to see a particular aspect of Hadoop hacked, there are a limited number of developers who are best-suited to do it for you, and you might be best served to deal with the company that employs (most of) them.
Otherwise, however, I pay little attention to claims like “We thought this scheme up 2 years ago, and hence we’re the experts on whether it’s now ready for production.”
Introduction to MemSQL
I talked with MemSQL shortly before today’s launch. MemSQL technology basics are:
- In-memory relational DBMS.
- Being released single-box only. Transparent sharding is under development for release in the fall. Basic replication is under development too.
- Subset of SQL-92.
- MySQL wire-compatible (SQL coverage issues excepted).
MemSQL’s performance claims include:
- Read performance 10% or so worse than memcached.
- Write performance 20% or so better than memcached.
- 1.2 million inserts/second on a 64-core machine with 1/2 TB of RAM.
- Similarly, 1/2 billion records loaded in under 20 minutes.
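Since MemSQL speaks the MySQL wire protocol, any stock MySQL client library should be able to talk to it. Here is a minimal sketch in Python, assuming the PyMySQL driver; the host name, credentials, and table are made up for illustration:

```python
# Hypothetical example: loading rows into MemSQL through a standard MySQL
# driver, since MemSQL is MySQL wire-compatible (SQL coverage caveats aside).
# Host, credentials, and table are made up.
import pymysql

conn = pymysql.connect(host="memsql.example.com", port=3306,
                       user="app", password="secret", database="events")
cur = conn.cursor()

# Plain SQL DDL and DML; nothing MemSQL-specific in the client code.
cur.execute("CREATE TABLE IF NOT EXISTS clicks "
            "(ts BIGINT, user_id BIGINT, url VARCHAR(200))")

rows = [(1340000000 + i, i % 1000, "http://example.com/%d" % i)
        for i in range(10000)]

# Batched inserts (and, in a real test, many parallel connections) are how
# you would chase the insert-rate figures quoted above.
cur.executemany("INSERT INTO clicks (ts, user_id, url) VALUES (%s, %s, %s)",
                rows)
conn.commit()
conn.close()
```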
MemSQL company basics include: Read more
Metamarkets’ back-end technology
This is part of a three-post series:
- Introduction to Metamarkets and Druid
- Druid overview
- Metamarkets’ back-end technology (this post)
The canonical Metamarkets batch ingest pipeline is a bit complicated.
- Data lands on Amazon S3 (uploaded or because it was there all along).
- Metamarkets processes it, primarily via Hadoop and Pig, to summarize and denormalize it, and then puts it back into S3.
- Metamarkets then pulls the data into Hadoop a second time, to get it ready to be put into Druid.
- Druid is notified, and pulls the data from Hadoop at its convenience.
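To make the “summarize and denormalize” step a bit more concrete, here is a hedged sketch of the kind of roll-up involved, in plain Python rather than Pig, with invented field names and a made-up one-minute granularity:

```python
# Hedged sketch of the "summarize and denormalize" step, in plain Python
# rather than Pig; field names and the one-minute granularity are invented.
from collections import defaultdict

# Dimension data is folded into each event up front, because Druid queries
# run against a single denormalized table; there are no joins later.
campaigns = {17: {"advertiser": "Acme", "channel": "display"}}

raw_events = [
    {"ts": 1340000000, "campaign_id": 17, "impressions": 1, "clicks": 0},
    {"ts": 1340000001, "campaign_id": 17, "impressions": 1, "clicks": 1},
]

def minute_bucket(ts):
    return ts - (ts % 60)          # summarize to one-minute granularity

rollup = defaultdict(lambda: {"impressions": 0, "clicks": 0})
for e in raw_events:
    dims = campaigns[e["campaign_id"]]
    key = (minute_bucket(e["ts"]), dims["advertiser"], dims["channel"])
    rollup[key]["impressions"] += e["impressions"]
    rollup[key]["clicks"] += e["clicks"]

# Each rolled-up, denormalized row is what eventually lands in a Druid segment.
for (minute, advertiser, channel), metrics in sorted(rollup.items()):
    print(minute, advertiser, channel, metrics)
```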
By “get data ready to be put into Druid” I mean:
- Build the data segments (recall that Druid manages data in rather large segments).
- Note metadata about the segments.
That metadata is what goes into the MySQL database, which also retains data about shards that have been invalidated. (That part is needed because of Druid’s MVCC.)
By “build the data segments” I mean:
- Make the sharding decisions.
- Arrange data columnarly within each shard.
- Build a compressed bitmap for each shard.
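As a rough illustration of the bitmap piece (my own sketch, not Metamarkets’ code): for each distinct value of a dimension within a shard, you keep a bitmap of which rows carry that value, then compress it. Production systems use compressed formats along the lines of CONCISE; a naive run-length encoding stands in for that here:

```python
# Hedged sketch: per-shard bitmap indexes over a columnar dimension.
# Production systems use compressed formats along the lines of CONCISE;
# a naive run-length encoding stands in for that here.

rows = ["Acme", "Zenith", "Acme", "Acme", "Zenith"]   # one dimension column in one shard

# One bitmap per distinct value: bit i is set if row i carries that value.
bitmaps = {}
for i, v in enumerate(rows):
    bitmaps.setdefault(v, [0] * len(rows))[i] = 1

def run_length_encode(bits):
    """Compress a bitmap into (bit, run_length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

compressed = {v: run_length_encode(bm) for v, bm in bitmaps.items()}
print(compressed["Acme"])   # [(1, 1), (0, 1), (1, 2), (0, 1)]
```

Filters such as advertiser = 'Acme' then reduce to scanning one bitmap, and combinations of filters reduce to bitwise ANDs and ORs.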
When things are being done that way, Druid may be regarded as comprising three kinds of servers: Read more
Metamarkets Druid overview
This is part of a three-post series:
- Introduction to Metamarkets and Druid
- Druid overview (this post)
- Metamarkets’ back-end technology
My clients at Metamarkets are planning to open source part of their technology, called Druid, which is described in the Druid section of Metamarkets’ blog. The timing of when this will happen is a bit unclear; I know the target date under NDA, but it’s not set in stone. But if you care, you can probably contact the company to get involved earlier than the official unveiling.
I imagine that open-source Druid will be pretty bare-bones in its early days. Code was first checked in early in 2011, and Druid seems to have averaged around 1 full-time developer since then. What’s more, it’s not obvious that all the features I’m citing here will be open-sourced; indeed, some of the ones I’m describing probably won’t be.
In essence, Druid is a distributed analytic DBMS. Druid’s design choices are best understood when you recall that it was invented to support Metamarkets’ large-scale, RAM-speed, internet marketing/personalization SaaS (Software as a Service) offering. In particular:
- Druid tries to use RAM well.
- Druid tries to stay up all the time.
- Druid has multi-valued fields. (Numeric, but of course you can use encoding tricks to be effectively more general.)
- Druid’s big limitation is to assume that there’s literally only one (denormalized) table per query; you can’t even join to dimension tables.
- SQL is a bit of an afterthought; I would expect Druid’s SQL functionality to be pretty stripped-down out of the gate.
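To illustrate the multi-valued-field point above, and the sort of encoding trick it invites (this is my own sketch, not Druid code): a nominally numeric multi-valued field can carry categorical data if you dictionary-encode the strings, with a filter matching a row whenever any of its values matches.

```python
# My own sketch, not Druid code: a multi-valued, nominally numeric field
# carrying categorical data via dictionary encoding.

dictionary = {"sports": 0, "news": 1, "finance": 2}   # string -> numeric code

# Each event's "tags" field holds multiple encoded values.
events = [
    {"user": "a", "tags": [dictionary["sports"], dictionary["news"]]},
    {"user": "b", "tags": [dictionary["finance"]]},
]

def matches(event, tag_name):
    # A filter on a multi-valued field matches if ANY of the values matches.
    return dictionary[tag_name] in event["tags"]

print([e["user"] for e in events if matches(e, "news")])   # -> ['a']
```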
Interestingly, the single-table/multi-valued choice is echoed at WibiData, which deals with similar data sets. However, WibiData’s use cases are different from Metamarkets’, and in most respects the WibiData architecture is quite different from that of Metamarkets/Druid.
Introduction to Metamarkets and Druid
I previously dropped a few hints about my clients at Metamarkets, mentioning that they:
- Have built vertical-market analytic platform technology.
- Use a lot of Hadoop.
- Throw good parties. (That’s where the background photo on my Twitter page comes from.)
But while they’re a joy to talk with, writing about Metamarkets has been frustrating, with many hours and pages of wasted effort. Even so, I’m trying again, in a three-post series:
- Introduction to Metamarkets and Druid (this post)
- Druid overview
- Metamarkets’ back-end technology
Much like Workday, Inc., Metamarkets is a SaaS (Software as a Service) company, with numerous tiers of servers and an affinity for doing things in RAM. That’s where most of the similarities end, however, as Metamarkets is a much smaller company than Workday, doing very different things.
Metamarkets’ business is SaaS (Software as a Service) business intelligence, on large data sets, with low latency in both senses (fresh data can be queried on, and the queries happen at RAM speed). As you might imagine, Metamarkets is used by digital marketers and other kinds of internet companies, whose data typically wants to be in the cloud anyway. Approximate metrics for Metamarkets (and it may well have exceeded these by now) include 10 customers, 100,000 queries/day, 80 billion 100-byte events/month (before summarization), 20 employees, 1 popular CEO, and a metric ton of venture capital.
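For a sense of scale, simple arithmetic on those figures (the figures are from above; the arithmetic is mine) works out to roughly 8 TB of raw event data per month and an average ingest rate on the order of 30,000 events/second:

```python
# Back-of-envelope on the stated figures: 80 billion 100-byte events/month.
events_per_month = 80e9
bytes_per_event = 100

raw_tb_per_month = events_per_month * bytes_per_event / 1e12
events_per_second = events_per_month / (30 * 24 * 3600)

print(round(raw_tb_per_month), "TB/month raw, before summarization")   # ~8 TB
print(int(events_per_second), "events/second on average")              # ~30,000
```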
To understand how Metamarkets’ technology works, it probably helps to start by realizing: Read more
Workday update
In August 2010, I wrote about Workday’s interesting technical architecture, highlights of which included:
- Lots of small Java objects in memory.
- A very simple MySQL backing store (append-only, <10 tables).
- Some modernistic approaches to application navigation.
- A faceted approach to BI.
I caught up with Workday recently, and things have naturally evolved. Most of what we talked about (by my choice) dealt with data management, business intelligence, and the overlap between the two.
It is now reasonable to say that Workday’s servers fall into at least seven tiers, although we talked mainly about five that work together as a kind of giant app/database server amalgamation. The three that do noteworthy data management can be described as:
- In-memory objects and transactions. This is similar to what Workday had before.
- Persistent MySQL. Part of this is similar to what Workday had before. In addition, Workday is now storing certain data in tables in the ordinary relational way.
- In-memory caching and indexing. This has three aspects:
- Indexes for the ordinary relational tables, organized in interesting ways.
- Indexes for Workday’s search-box navigation (as per my original Workday technical post, you can search across objects, task-names, etc.).
- Compressed copies of the Java objects, used to instantiate other servers as needed. The most obvious uses of this are:
- Recovery for the object/transaction tier.
- Launch for the elastic compute tier. (Described below.)
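As a hedged illustration of the append-only backing-store pattern (the general idea only; the table and helper functions here are hypothetical, not Workday’s actual schema): every change to an object is written as a new row keyed by object ID and version, nothing is updated in place, and recovery or cache warm-up means reading back the newest version of each object.

```python
# Hedged illustration of an append-only object store over a very small number
# of relational tables; the general pattern only, not Workday's actual schema.
import json, time

OBJECT_LOG_DDL = """
CREATE TABLE object_log (
  object_id  BIGINT   NOT NULL,
  version    BIGINT   NOT NULL,
  written_at BIGINT   NOT NULL,
  payload    LONGBLOB NOT NULL,    -- serialized object (JSON here, for illustration)
  PRIMARY KEY (object_id, version)
)
"""

def append_version(cursor, object_id, version, obj):
    # Every change is a new row; nothing is ever updated in place.
    cursor.execute(
        "INSERT INTO object_log (object_id, version, written_at, payload) "
        "VALUES (%s, %s, %s, %s)",
        (object_id, version, int(time.time()), json.dumps(obj)),
    )

def load_latest(cursor, object_id):
    # Recovery or cache warm-up: read back the newest version of an object.
    cursor.execute(
        "SELECT payload FROM object_log WHERE object_id = %s "
        "ORDER BY version DESC LIMIT 1",
        (object_id,),
    )
    row = cursor.fetchone()
    return json.loads(row[0]) if row else None
```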
Two other Workday server tiers may be described as: Read more
QlikTech bought Expressor
QlikTech has bought Expressor. Notes on that include:
- Expressor wanted to offer data integration/ETL (Extract/Transform/Load) that was all things to all people — great parallel performance, great UI, great price, etc.
- In practice, Expressor seemed to focus on cheap/easy ETL in the Microsoft Windows (I mean server) market.
- Expressor never got much traction. This seems confirmed by the “more than 20” figure for headcount mentioned in the acquisition press release.
- Both the press release and some tweets by QlikTech’s Donald Farmer seem to confirm that Expressor is being taken off the market for “boil the ocean” ETL. It will be companion technology to/integrated technology in QlikView.
- Unsurprisingly, Donald indicated that Expressor technology would expand past its Microsoft focus. (Edit: “If needed”)
Introduction to Cloudant
Cloudant is one of the few NoSQL companies with >100 paying subscription customers. For starters:
- Cloudant’s core software is a fork of CouchDB.
- Cloudant only sells you software as a service.
- More precisely, whether Cloudant offers DBaaS (DataBase as a Service) or PaaS (Platform as a Service) or a “data layer” (Cloudant’s preferred terminology) depends on your taste in buzzwords.
- I gather that Cloudant (the company) wants to handle pretty much all your data management needs. But Cloudant (the product) isn’t there yet, especially on the analytic side.
- Before CouchOne (the company behind CouchDB) and Membase merged to form Couchbase, Cloudant was positioned as the big(ger) data version of CouchDB.
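Since Cloudant’s core is a CouchDB fork, the basic document interface is CouchDB’s HTTP/JSON API. A minimal sketch follows, using the Python requests library; the account URL, credentials, database, and document are all made up:

```python
# Minimal sketch of CouchDB-style document access, which Cloudant inherits.
# The account URL, credentials, database, and document below are all made up.
import requests

base = "https://example-account.cloudant.com"
auth = ("example-user", "example-password")

requests.put(base + "/orders", auth=auth)                 # create a database
requests.put(base + "/orders/order-1001", auth=auth,      # create a document
             json={"customer": "acme", "total": 42.50})

doc = requests.get(base + "/orders/order-1001", auth=auth).json()
print(doc["total"], doc["_rev"])    # CouchDB-style revision ID on every document
```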
Company demographics include:
- Cloudant is based in Boston.
- Cloudant started out as a Y Combinator company in 2008, and “got serious” in 2009.
- Cloudant now has ~20 employees.
- Management hires include a couple of former Vertica guys.
The Cloudant guys gave me some customer counts in May that weren’t much higher than those they had given me in February, and they never seem to have gotten around to explaining the discrepancy. Oh well. The latter (probably understated) figures included ~160 paying customers, of which:
- ~100 were multitenant.
- ~60 were single tenant.
- 1 was on-premise (but still managed by Cloudant) because of privacy concerns.
The largest Cloudant deployments seem to be in the 10s of terabytes, across a very low double-digit number of servers.
Quick-turnaround predictive modeling
Last November, I wrote two posts on agile predictive analytics. It’s time to return to the subject. I’m used to KXEN talking about the ability to do predictive modeling, very quickly, perhaps without professional statisticians; that’s the core of what KXEN does. But I was surprised when Revolution Analytics told me a similar story, based on a different approach, because ordinarily that’s not how R is used at all.
Ultimately, there seem to be three reasons why you’d want quick turnaround on your predictive modeling: Read more
Kognitio’s story today
I had dinner tonight with the Kognitio folks. So far as I can tell:
- Branding has been mercifully simplified. Everything is now called “Kognitio” (as opposed to, for example, “WX2”).
- Notwithstanding its long history of selling disk-based DBMS and denigrating memory-only configurations, Kognitio now says that in fact it’s always been an in-memory DBMS vendor.
- Notwithstanding its long history of selling (or attempting to sell) analytic DBMS, Kognitio wants to be viewed as an accelerator to your existing DBMS. This is apparently inspired in part by SAP HANA, notwithstanding that HANA’s direction is to evolve into a hybrid OLTP/analytic general-purpose DBMS.
- Notwithstanding its lack of analytic platform features, Kognitio wants to be viewed as selling an analytic platform.
- Notwithstanding its memory-centric focus, Kognitio doesn’t want to compress data. Kognitio’s opinion — which to my knowledge is shared by few people outside Kognitio — seems to be that the CPU cost of compression/decompression isn’t justified by the RAM savings from compression.
- Kognitio still is pushing a cloud/SaaS (Software as a Service) story. Even if you want to use Kognitio (the product) on-premises, Kognitio (the company) calls that “private cloud” and offers to let you pay annually.
Kognitio believes that this story is appealing, especially to smaller venture-capital-backed companies, and backs that up with some frieNDA pipeline figures.
Between that success claim and SAP’s HANA figures, it seems that the idea of using an in-memory DBMS to accelerate analytics has legs. This makes sense, as the BI vendors — QlikTech excepted — don’t seem to be accomplishing much with their proprietary in-memory alternatives. But I’m not sure that Kognitio would be my first choice to fill that role. Rather, if I wanted to buy an unsuccessful analytic RDBMS to use as an in-memory accelerator, I might consider ParAccel, which is columnar, has an associated compression story, has always had a hybrid memory-centric flavor much as Kognitio has, and is well ahead of Kognitio in the analytic platform derby. That said, I’ll confess to not having talked with or heard much about ParAccel for a while, so I don’t know whether they’ve been able to maintain technical momentum any more than Kognitio has.