April 21, 2011

In-memory, parallel, not-in-database: SAS HPA does make sense after all

I talked with SAS about its new approach to parallel modeling. The two key points are:

The whole thing is called SAS HPA (High-Performance Analytics), in an obvious reference to HPC (High-Performance Computing). It will run initially on RAM-heavy appliances from Teradata and EMC Greenplum.

A lot of what’s going on here is that SAS found it annoyingly difficult to parallelize modeling within the framework of a massively parallel DBMS such as Teradata. Notes on that aspect include:

Read more
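
To make the in-memory-parallel contrast concrete: the usual pattern — and, as far as I can tell, the general shape of what SAS is doing, though the sketch below is my illustration and nothing like SAS HPA's actual API — is for each node to keep its slice of the data in RAM, make a local pass that produces partial sufficient statistics, and let a coordinator combine those once. A minimal Python sketch, for least squares:

```python
import numpy as np

def partial_stats(X, y):
    # One in-memory pass over a worker's partition: return X'X and X'y.
    return X.T @ X, X.T @ y

def fit_least_squares(partitions):
    # Coordinator: sum the partial statistics, then solve the normal
    # equations once. No raw data moves -- only small matrices.
    stats = [partial_stats(X, y) for X, y in partitions]
    xtx = sum(s[0] for s in stats)
    xty = sum(s[1] for s in stats)
    return np.linalg.solve(xtx, xty)

# Toy run: three "workers", each holding a RAM-resident slice.
rng = np.random.default_rng(0)
X = rng.normal(size=(9000, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=9000)
partitions = [(X[i::3], y[i::3]) for i in range(3)]
print(fit_least_squares(partitions))   # ~ [ 1. -2.  0.5  3. ]
```

Repeatedly iterating over RAM-resident partitions like this is exactly what's awkward to express inside a SQL DBMS's execution framework — which, presumably, is the difficulty SAS ran into.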

April 19, 2011

Notes on short-request scale-out MySQL

A press person recently asked about:

… start-ups that are building technologies to enable MySQL and other SQL databases to get over some of the problems they have in scaling past a certain size. … I’d like to get a sense as to whether or not the problems are as severe and widespread as these companies are telling me? If so, why wouldn’t a customer just move to a new database?

While that sounds as if he was asking about scale-out relational DBMS in general, MySQL or otherwise, short-request or analytic, it turned out that he was asking just about short-request scale-out MySQL. My thoughts and comments on that narrower subject include(d) but are not limited to:  Read more
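
For context on what those start-ups are competing with: the incumbent "solution" is usually hand-rolled sharding in the application tier, along the lines of the sketch below (the shard map and DSNs are invented for illustration):

```python
# Minimal sketch of application-level MySQL sharding -- the sort of
# hand-rolled routing the scale-out start-ups aim to replace.
import hashlib

SHARDS = [
    "mysql://db0.example.com/app",
    "mysql://db1.example.com/app",
    "mysql://db2.example.com/app",
    "mysql://db3.example.com/app",
]

def shard_for(user_id: int) -> str:
    # Hash rather than take modulo on the raw id, so sequential ids
    # don't all land on the same shard.
    h = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(h, 16) % len(SHARDS)]

# Every query touching one user goes to that user's shard;
# cross-shard joins and transactions are where the pain starts.
print(shard_for(42))
```

The catch, of course, is everything the router doesn't handle — resharding when you add nodes, cross-shard joins, cross-shard transactions — which is both why the problem is real and why "just move to a new database" is easier said than done.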

April 18, 2011

Endeca topics

I visited my then-clients at Endeca in January. We focused on underpinnings (and strategic counsel) more than on coolness in what the product actually does. But going over my notes I think there’s enough to write up now.

Before saying much else about Endeca, there’s one confusion to dispose of: What’s the relationship between Endeca’s efforts in e-commerce (helping shoppers navigate websites) and business intelligence (helping people navigate their own data)? As Endeca tells it:

Endeca’s positioning in the business intelligence market boils down to “investigative analytics for people who aren’t hardcore analysts.” Endeca’s technological support for that stresses:  Read more
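
For readers who haven't used Endeca: the common thread between the e-commerce and BI stories is faceted ("guided") navigation — given whatever the user has selected so far, show the remaining values of every other attribute, with counts. A toy illustration of the idea (data and field names made up, and of course nothing like Endeca's actual engine):

```python
# Given the records matching the current selections, count the
# remaining values of every other attribute, so the user sees which
# refinements exist and how many records each would leave.
from collections import Counter, defaultdict

records = [
    {"type": "laptop", "brand": "Acme", "price_band": "$500-1000"},
    {"type": "laptop", "brand": "Bolt", "price_band": "$1000+"},
    {"type": "tablet", "brand": "Acme", "price_band": "$0-500"},
]

def refinements(records, selections):
    matching = [r for r in records
                if all(r.get(k) == v for k, v in selections.items())]
    facets = defaultdict(Counter)
    for r in matching:
        for field, value in r.items():
            if field not in selections:
                facets[field][value] += 1
    return matching, facets

matching, facets = refinements(records, {"brand": "Acme"})
print(dict(facets))   # {'type': Counter({'laptop': 1, 'tablet': 1}), ...}
```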

April 17, 2011

Netezza TwinFin i-Class overview

I have long complained about difficulties in discussing Netezza’s TwinFin i-Class analytic platform. But I’m ready now, and in the grand sweep of the product’s history I’m not even all that late. The Netezza i-Class timing story goes something like this:

My advice to Netezza as to how it should describe TwinFin i-Class boils down to:  Read more

April 16, 2011

Unpacking the EMC Greenplum Q1 sales disaster rumors

A well-connected tipster believes:

In the past I might have called Greenplum for clarification, but they’re not knocking themselves out to inform me these days, nor to inspire me with confidence in what they say.  Read more

April 14, 2011

Attensity update

I talked with Michelle de Haaff and Ian Hersey of Attensity back in February. We covered a lot of ground, so let’s start with a very high-level view.

The four most interesting technical points were probably:

Some more specific notes include:  Read more

April 13, 2011

What Starcounter may be up to

Starcounter seems to be offering an in-memory object-based/object-oriented/whatever short-request DBMS that also talks SQL. I haven’t been briefed at this point, and hence don’t have detail beyond what’s on their rather breathless web site. I’m guessing this isn’t an H-Store/VoltDB architecture, but rather something more like what Workday runs.

Most of the crunch I found on the Starcounter website (emphasis mine) is:

Let’s say that it is possible to make a database that is 10,000 times faster than what you use today. It would then be possible for your computer language objects to live inside the database from the very beginning. From the first { Customer a = new Customer(); }. The objects could live in the database, not as a copy, but as both database object and a Java or C# object at the same time. The database would transparently be your heap. The time it would take to save your object to the database would be reduced to nothing.

If such a database existed, you could say goodbye to caches and the duality of business objects, the database objects/rows and the complexity that follows. The speed would be amazing. Goodbye to time consuming scale-out solutions. Actually, you would be able to say good bye to the databases as you know them. You only need your simple objects.

Such a technology would be the ultimate NoSQL database. But what if the ultimate NoSQL database had SQL support, ACID, checkpoints and recovery and other enterprise features? Your pure, clean objects would then become the fastest and most powerful database in the world.

Besides that, other clues to what Starcounter is doing include references to Hibernate and to the declining cost of RAM.
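
To make the quoted claim concrete, here's the "your heap is the database" idea reduced to a toy — in Python rather than the Java/C# Starcounter targets, and reflecting nothing about Starcounter's actual implementation. Every object enrolls itself in the store the moment it's constructed, so there's no separate save step and no object/relational mapping:

```python
# Purely illustrative sketch of transparent persistence.
_STORE = {}   # object id -> live object; doubles as "the database"

class DbObject:
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        _STORE[id(obj)] = obj   # the object IS the database record
        return obj

class Customer(DbObject):
    def __init__(self, name):
        self.name = name

a = Customer("Alice")   # no session.save(), no ORM mapping step
print(len(_STORE))      # 1 -- "persisted" the moment it exists
```

Everything hard — durability via checkpoints and logging, transactions, SQL over the live heap — is what the marketing copy promises on top. The Hibernate reference fits, because object/relational mapping is precisely the "duality" such a design would eliminate.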

April 10, 2011

Use cases for low-latency analytics

At various times I’ve noted the varying latency requirements of different analytic use cases, which can be as different as the speed of a turtle is from the speed of light. In particular, back when I wrote more about CEP (Complex Event Processing), I listed some applications for super-low-latency and not-so-low-latency CEP alike. Even better were some longish lists of “active data warehousing” use cases I got from Teradata in August, 2009, generally focused on interactive customer response (e.g. personalization, churn prevention, upsell, antifraud) or in some cases logistics.

In the slide deck for the Teradata 6680/solid-state drive announcement, however, Teradata went in a slightly different direction. In its list of “hot data use case examples”, Teradata suggested:  Read more
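
As an illustration of the super-low-latency end of that spectrum, here's the antifraud flavor of CEP reduced to a toy rule — flag a card used too many times within a short sliding window. The thresholds and field names are invented:

```python
# Toy CEP-style rule: alert when one card appears in THRESHOLD or
# more transactions within a WINDOW_SECONDS sliding window.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 3
recent = defaultdict(deque)   # card id -> timestamps inside the window

def on_event(card_id, ts):
    q = recent[card_id]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:   # expire old events
        q.popleft()
    if len(q) >= THRESHOLD:
        return f"ALERT: {card_id} hit {len(q)} transactions in {WINDOW_SECONDS}s"
    return None

for t, card in [(0, "c1"), (10, "c1"), (20, "c1"), (90, "c1")]:
    alert = on_event(card, t)
    if alert:
        print(alert)   # fires at t=20, but not at t=90
```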

April 10, 2011

Teradata integrates in solid-state storage

For once, I think Teradata’s annual hardware refresh is pretty interesting, because of the integration of flash storage into its high-end “active enterprise data warehouse” product line. The essence of the announcement is:

Read more
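
The interesting engineering question in any hybrid flash/disk warehouse is placement — which data earns the expensive fast tier. The general idea, and I stress this is a generic sketch rather than Teradata's actual algorithm, is temperature-based: track access frequency per extent, keep the hottest extents on flash up to its capacity, and demote the rest to spinning disk:

```python
# Generic temperature-based placement for a hybrid SSD/HDD store.
def place_extents(access_counts, ssd_capacity):
    """access_counts: extent id -> recent read count."""
    by_heat = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(by_heat[:ssd_capacity])   # hottest extents, up to capacity
    return {ext: ("ssd" if ext in hot else "hdd") for ext in by_heat}

counts = {"orders_2011q1": 950, "orders_2010": 120, "orders_2009": 4}
print(place_extents(counts, ssd_capacity=1))
# {'orders_2011q1': 'ssd', 'orders_2010': 'hdd', 'orders_2009': 'hdd'}
```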

April 8, 2011

Revolution Analytics update

I wasn’t too impressed when I spoke with Revolution Analytics at the time of its relaunch last year. But a conversation Thursday evening was much clearer. And I even learned some cool stuff about general predictive modeling trends (see the bottom of this post).

Revolution Analytics business and business model highlights include:

Read more
