Notes on the ClearStory Data launch, including an inaccurate quote from me
ClearStory Data launched, with nice coverage in the New York Times, Computerworld, and elsewhere. But from my standpoint, there were some serious problems:
- (Bad.) I was planning to cover the launch as well, in a split exclusive, but that plan was changed, which cost me a considerable amount of wasted work.
- (Worse.) I wasn’t told of the change as soon as it was known. Indeed, I wasn’t told at all; I was left to infer it from the fact that I was now being asked to talk with other reporters.
- (Horrific.) I was quoted in the ClearStory launch press release, but while the sentiments were reasonably in line with my own, the quote was incorrect.*
I’m utterly disgusted with this whole mess, although after talking with CEO Sharmila Mulligan at length, I’m fine with her part in it, which is to say with ClearStory’s part in general.
*I avoid the term “platform” as much as possible; indeed, I still don’t really know what the “new platforms” part was supposed to refer to. The Frankenquote wound up with some odd grammar as well.
Actually, in principle I’m a pretty close adviser to ClearStory (for starters, they’re one of my stealth-mode clients). That hasn’t really ramped up yet; in particular, I haven’t had a technical deep dive. So for now I’ll just say:
Categories: Business intelligence, ClearStory Data, Data integration and middleware, Data mart outsourcing | 1 Comment |
DataStax Enterprise 2.0
Edit: Multiple errors in the post below have been corrected in a follow-on post about DataStax Enterprise and Cassandra.
My client DataStax is announcing DataStax Enterprise 2.0. The big point of the release is that there’s a bunch of stuff integrated together, including at least:
- Cassandra — the NoSQL DBMS, which DataStax sometimes calls “DataStax Server”. Edit: That’s not really a fair criticism of DataStax’s messaging.
- Hadoop MapReduce, which DataStax sometimes calls “Hadoop”. Edit: That is indeed fair. 🙂
- Sqoop — the standard Apache tool for bulk data transfer between relational DBMS and Hadoop, which DataStax sometimes calls “RDBMS integration”.
- Solr — the search-centric Apache project, or big parts of it, which DataStax generally calls either “Solr” or “Solr compatibility”.
- log4j — the Apache logging framework for Java, or parts of it, which DataStax sometimes calls “log file integration”.
- DataStax OpsCenter — some management tools and so on around Cassandra and the rest of the product line.
DataStax stresses that all this runs on the same cluster, with the same administrative tools and so on. For example, on a single cluster:
- You can manage the interactive data for a web site.
- You can store the logs for that web site.
- You can analyze all of the above in Hadoop.
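For concreteness, here is a minimal sketch of that pattern using the open-source Python driver for Cassandra (cassandra-driver). The keyspace, table names, and localhost address are my own illustrative assumptions, not anything DataStax ships; the point is simply that interactive data and log data can live side by side on one cluster, where Hadoop jobs can later get at them.

```python
# Minimal sketch, not DataStax's own tooling: one Cassandra cluster holding
# both interactive web-site data and that site's logs.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # assumed local test node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Interactive data for the web site (user profiles, session state, etc.)
session.execute(
    "CREATE TABLE demo.users (user_id text PRIMARY KEY, name text, email text)"
)

# The same site's logs, stored on the same cluster
session.execute("""
    CREATE TABLE demo.page_views (
        user_id text, view_time timestamp, url text,
        PRIMARY KEY (user_id, view_time)
    )
""")

session.execute(
    "INSERT INTO demo.users (user_id, name, email) VALUES (%s, %s, %s)",
    ("u1", "Alice", "alice@example.com"),
)
cluster.shutdown()
```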
Comments on Oracle’s third quarter 2012 earnings call
Various reporters have asked me about Oracle’s third quarter 2012 earnings conference call. Specific Q&A includes:
What did Oracle do to have its earnings beat Wall Street’s estimates?
Have a bad second quarter and then set Wall Street’s expectations too low for Q3. This isn’t about strong results; it’s about modest expectations.
Can Oracle be a leader in both hardware and software?
- It’s not inconceivable.
- The observation that Oracle, IBM, and Teradata all are pushing hardware-software combinations has been intriguing ever since IBM bought Netezza. (SAP really isn’t, however; ditto Microsoft.)
- I do think Oracle may be somewhat overoptimistic as to how cooperative the Sun user base will be in buying more high-end product and in paying more in maintenance for the gear they already have.
Beyond that, please see below.
What about Oracle in the cloud?
MySQL is an important piece of cloud technology. But Oracle overall hasn’t demonstrated much understanding of what cloud technology and business are all about. An expensive SaaS acquisition here or there could indeed help somewhat, but it seems as if Oracle still has a very long way to go.
Other comments
Other comments on the call, whose transcript is available, include: Read more
Categories: Cloud computing, Exadata, Humor, In-memory DBMS, Oracle, SAP AG, Software as a Service (SaaS) | 5 Comments |
Akiban update
I have a bunch of backlogged post subjects in or around short-request processing, based on ongoing conversations with my clients at Akiban, Cloudant, Code Futures (dbShards), DataStax (Cassandra) and others. Let’s start with Akiban. When I posted about Akiban two years ago, it was reasonable to say:
- Akiban is in the short-request DBMS business.
- MySQL compatibility is one way to access Akiban, but it’s not the whole story.
- Akiban’s main point of technical differentiation is to arrange data hierarchically on disk so that many joins are “zero-cost” (a toy sketch of the idea appears below).
- Walking the hierarchy isn’t a great way to get at data for every possible query; Akiban recognizes the need for other access techniques as well.
All of the above are still true. But unsurprisingly, plenty of the supporting details have changed. Read more
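To make the hierarchical-storage idea concrete, here is a toy sketch in Python. It is my own illustration of interleaving child rows under their parents in key order, not Akiban’s actual implementation or API.

```python
# Toy illustration (not Akiban code): keep each customer's orders physically
# adjacent to the customer row by sorting on (customer_id, child_flag, order_id).
# Fetching a customer together with all of its orders then becomes one
# contiguous range scan, i.e. the parent-child join costs no extra lookups.
import bisect

rows = []   # sorted list of (key, value) pairs standing in for on-disk order

def put_customer(cust_id, data):
    bisect.insort(rows, ((cust_id, 0, 0), ("customer", data)))

def put_order(cust_id, order_id, data):
    bisect.insort(rows, ((cust_id, 1, order_id), ("order", data)))

def customer_with_orders(cust_id):
    """One range scan returns the parent row and all of its children."""
    lo = bisect.bisect_left(rows, ((cust_id, 0, 0),))
    hi = bisect.bisect_left(rows, ((cust_id, 2, 0),))
    return [value for _, value in rows[lo:hi]]

put_customer(1, {"name": "Acme"})
put_order(1, 101, {"total": 40})
put_order(1, 102, {"total": 15})
put_customer(2, {"name": "Globex"})

print(customer_with_orders(1))   # customer 1 followed by both of its orders
```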
Categories: Akiban, Data models and architecture, MySQL, NewSQL, Object | 9 Comments |
Juggling analytic databases
I’d like to survey a few related ideas:
- Enterprises should each have a variety of different analytic data stores.
- Vendors — especially but not only IBM and Teradata — are acknowledging and marketing around the point that enterprises should each have a number of different analytic data stores.
- In addition to having multiple analytic data management technology stacks, it is also desirable to have an agile way to spin out multiple virtual or physical relational data marts using a single RDBMS. Vendors are addressing that need.
- Some observers think that the real essence of analytic data management will be in data integration, not the actual data management.
Here goes. Read more
Kinds of data integration and movement
“Data integration” can mean many different things, to an extent that’s impeding me from writing about the area. So I’ll start by simply laying out some of the myriad ways that data can be brought to where it is needed, and worry about other subjects later. Yes, this is a massive wall of text, and incomplete even so — but that in itself is my central point.
There are two main paradigms for data integration:
- Movement or replication — you take data from one place and copy it to another.
- Federation — you treat data in multiple different places logically as if it were all in one database.
Data movement and replication typically take one of three forms:
- Logical, transactional, or trigger-based — sending data across the wire every time an update happens, or as the result of a large-result-set query/extract, or in response to a specific request.
- Log-based — like logical replication, but driven by the transaction/update log rather than the core data management mechanism itself, so as to avoid directly overstressing the DBMS.
- Block/file-based — sending chunks of data, and expecting the target system to store them first and only make sense of them afterward.
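As a toy illustration of the log-based flavor (my own sketch, not any particular vendor’s replication product): a replicator tails the source’s append-only update log and re-applies each entry to the target copy, rather than querying the source tables directly.

```python
# Toy sketch of log-based replication: replay the source's update log
# against a target store, tracking how far replication has gotten.
update_log = [                  # stand-in for the source DBMS's transaction log
    ("insert", "customers", {"id": 1, "name": "Acme"}),
    ("update", "customers", {"id": 1, "name": "Acme Corp"}),
    ("delete", "customers", {"id": 1}),
]

target = {}                     # stand-in for the target data store
applied_through = 0             # replication position (log offset)

def apply_log(log, start):
    """Replay log entries from position `start`; return the new position."""
    for op, table, row in log[start:]:
        key = (table, row["id"])
        if op == "delete":
            target.pop(key, None)
        else:                   # insert and update both overwrite the row
            target[key] = row
    return len(log)

applied_through = apply_log(update_log, applied_through)
print(target)   # empty: the row was inserted, updated, then deleted
```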
Beyond the core functions of movement, replication, and/or federation, there are other concerns closely connected to data integration. These include:
- Transparency and emulation, e.g. via a layer of software that makes data in one format look like it’s in another. (If memory serves, this is the use case for which Larry DeBoever coined the term “middleware.”)
- Cleaning and quality — with new uses of data can come new requirements for accuracy.
- Master, reference, or canonical data — agreeing on a single authoritative copy of key entities (customers, products, and so on), so that different systems don’t contradict each other.
- Archiving and information preservation — part of keeping data safe is ensuring that there are copies at various physical locations. Another part can be making it logically tamper-proof, or at least highly auditable.
In particular, the following are largely different from each other. Read more
Categories: Clustering, Data integration and middleware, EAI, EII, ETL, ELT, ETLT, eBay, Hadoop, MapReduce | 10 Comments |
Hardware and components — lessons from Teradata
I love talking with Carson Schmidt, chief of Teradata’s hardware engineering (among other things), even if I don’t always understand the details of what he’s talking about. It had been way too long since our last chat, so I requested another one. We were joined by Keith Muller. Takeaways included:
- Teradata performance growth was slow in the early 2000s, but has accelerated since then; Intel gets a lot of the credit (and blame) for that.
- Carson hopes for a performance “discontinuity” with Intel Ivy Bridge.
- Teradata is not afraid to use niche special-purpose chips.
- Teradata’s views can be taken as well-informed endorsements of InfiniBand and SAS 2.0.
Categories: Data warehouse appliances, Data warehousing, Database compression, Solid-state memory, Storage, Teradata | 13 Comments |
Where the privacy discussion needs to head
An Atlantic article suggests that the digital advertising industry is coalescing around the position “restrict data use if you must, but go easy on data collection and retention.”
There is a fascinating scrum over what “Do Not Track” tools should do and what orders websites will have to respect from users. The Digital Advertising Alliance (of which the NAI is a part), the Federal Trade Commission, W3C, the Internet Advertising Bureau (also part of the DAA), and privacy researchers at academic institutions are all involved. In November, the DAA put out a new set of principles that contain some good ideas like the prohibition of “collection, use or transfer of Internet surfing data across Websites for determination of a consumer’s eligibility for employment, credit standing, healthcare treatment and insurance.”
This week, the White House seemed to side with privacy advocates who want to limit collection, not just uses. Its Consumer Privacy Bill of Rights pushes companies to allow users to “exercise control over what personal data companies collect from them and how they use it.” The DAA heralded its own participation in the White House process, though even it noted this is the beginning of a long journey.
There has been a clear and real philosophical difference between the advertisers and regulators representing web users. On the one hand, as Stanford privacy researcher Jonathan Mayer put it, “Many stakeholders on online privacy, including U.S. and EU regulators, have repeatedly emphasized that effective consumer control necessitates restrictions on the collection of information, not just prohibitions on specific uses of information.” But advertisers want to keep collecting as much data as they can as long as they promise not to use it to target advertising. That’s why the NAI opt-out program works like it does.
That’s a drum I’ve been beating for years, so to a first approximation I’m pleased. However:
- I don’t think currently proposed protections go nearly far enough, for reasons I previously stated plus others that keep coming to me. (For example, substantially all consumer privacy protections could be nuked simply by user agreements that compel you to “voluntarily” renounce most privacy rights in return for unfettered use of the internet.)
- If current trends are followed, it could end up that data use restrictions are too mild and data collection restrictions are too severe — and maybe that will all work out in a rough balance, at least for a while.
- In the not-so-near term, however, these rough political compromises may not work so well. That’s why I think next-generation digital advertising ecosystem design should start yesterday, or perhaps sooner.
So to sum up my views on consumer privacy:
- Focusing on data use is basically good.
- It is important to also focus on data collection, at least for a transitional period.
- For the whole thing to work out well, a major rethinking of systems is needed.
That’s the good news. The bad news is on the side of government data collection and use. As I wrote last year: Read more
Categories: Surveillance and privacy | 11 Comments |
Translucent modeling, and the future of internet marketing
There’s a growing consensus that consumers require limits on the predictive modeling that is done about them. That’s a theme of the Obama Administration’s recent work on consumer data privacy; it’s central to other countries’ data retention regulations; and it’s specifically borne out by the recent Target-pursues-pregnant-women example. Whatever happens legally, I believe this also calls for a technical response, namely:
Consumers should be shown key factual and psychographic aspects of how they are modeled, and be given the chance to insist that marketers disregard any or all of those aspects.
I further believe that the resulting technology should be extended so that
information holders can collaborate by exchanging estimates for such key factors, rather than exchanging the underlying data itself.
To some extent this happens today, for example with attribution/de-anonymization or with credit scores; but I think it should be taken to another level of granularity.
My name for all this is translucent modeling, rather than “transparent”, the idea being that key points must be visible, but the finer details can be safely obscured.
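To make that concrete, here is a hypothetical sketch of what “translucent” scoring could look like: the model’s key factors are visible to the consumer, and any factor the consumer suppresses is excluded before a score is computed. The factor names and weights are invented for illustration.

```python
# Hypothetical sketch of translucent modeling: the marketer's model exposes
# its key factors, and a consumer can insist that any of them be disregarded.
MODEL_FACTORS = {                       # factor -> weight in a simple linear score
    "recent_baby_product_purchases": 0.6,
    "estimated_household_income":    0.3,
    "frequent_store_visits":         0.1,
}

def visible_factors():
    """The key aspects of the model that a consumer is shown."""
    return sorted(MODEL_FACTORS)

def score(consumer_features, opted_out):
    """Score using only the factors the consumer has not suppressed."""
    return sum(
        weight * consumer_features.get(factor, 0.0)
        for factor, weight in MODEL_FACTORS.items()
        if factor not in opted_out
    )

features = {"recent_baby_product_purchases": 1.0, "frequent_store_visits": 1.0}
print(visible_factors())
print(score(features, opted_out=set()))                               # full model
print(score(features, opted_out={"recent_baby_product_purchases"}))   # after opt-out
```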
Examples of dialog I think marketers should have with consumers include: Read more
Categories: Predictive modeling and advanced analytics, Surveillance and privacy, Web analytics | Leave a Comment |
The latest privacy example — pregnant potential Target shoppers
Charles Duhigg of the New York Times wrote a very interesting article, based on a forthcoming book of his, on two related subjects:
- The force of habit on our lives, and how we can/do deal with it. (That’s the fascinating part.)
- A specific case of predictive modeling. (That’s the part that’s getting all the attention. It’s interesting too.)
The predictive modeling part is that Target determined:
- People only change their shopping habits occasionally
- One of those occasions is when they get pregnant
- Hence, it would be a Really Good Idea to market aggressively to pregnant women
and then built a marketing strategy around early indicators of a woman’s pregnancy. Read more
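For readers who want to picture the mechanics, here is a made-up sketch of the general technique: fit a classifier on purchase indicators and act when the predicted probability crosses a marketing threshold. The features, training data, and threshold are all invented; this is not Target’s actual model.

```python
# Invented example of indicator-based predictive modeling (not Target's model).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: bought unscented lotion, bought supplements, bought oversized bags
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
])
y = np.array([1, 1, 0, 0, 0, 1])   # 1 = shopper later turned out to be pregnant (made up)

model = LogisticRegression().fit(X, y)

new_shopper = np.array([[1, 1, 0]])
prob = model.predict_proba(new_shopper)[0, 1]
if prob > 0.7:                      # arbitrary marketing threshold
    print(f"Send baby-related offers (score {prob:.2f})")
```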