May 30, 2009

Reinventing business intelligence

I’ve felt for quite a while that business intelligence tools are due for a revolution. But I’ve found the subject daunting to write about because — well, because it’s so multifaceted and big. So to break that logjam, here are some thoughts on the reinvention of business intelligence technology, with no pretense of being in any way comprehensive.

Natural language and classic science fiction

Actually, there’s a pretty well-known example of BI near-perfection — the Star Trek computers, usually voiced by the late Majel Barrett Roddenberry. They didn’t have a big role in the recent movie, which was so fast-paced nobody had time to analyze very much, but were a big part of the Star Trek universe overall. Star Trek’s computers integrated analytics, operations, and authentication, all with a great natural language/voice interface and visual displays. That example is at the heart of a 1998 article on natural language recognition I just re-posted.

As for reality: For decades, dating back at least to Artificial Intelligence Corporation’s Intellect, there have been offerings that provided “natural language” command, control, and query against otherwise fairly ordinary analytic tools. Such efforts have generally fizzled, for reasons outlined at the link above. Wolfram Alpha is the latest try; fortunately for its prospects, natural language is really only a small part of the Wolfram Alpha story.

A second theme has more recently emerged — using text indexing to get at data more flexibly than a relational schema would normally allow, either by searching on data values themselves (stressed by Attivio) or by searching on the definitions of pre-built reports (the Google OneBox story). SAP’s Explorer is the latest such offering, but I find Doug Henschen’s skepticism about SAP Explorer more persuasive than Cindi Howson’s cautiously favorable view. Partly that’s because I know SAP (and Business Objects); partly it’s because of difficulties such as those I already noted.
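To make the first of those two approaches concrete, here is a minimal sketch — not any vendor’s actual implementation, with made-up data — of indexing every cell value of a table as text, so that a search term finds rows no matter which column it lives in, with no schema knowledge required of the user:

```python
# Sketch of value-level text indexing over tabular data (illustrative only).
from collections import defaultdict

rows = [
    {"product": "Blue Steel Widget", "color": "red",   "region": "West"},
    {"product": "Gadget",            "color": "blue",  "region": "East"},
    {"product": "Sprocket",          "color": "green", "region": "Blue Ridge"},
]

def build_index(rows):
    """Map each lowercased token to the (row, column) pairs where it appears."""
    index = defaultdict(set)
    for i, row in enumerate(rows):
        for column, value in row.items():
            for token in str(value).lower().split():
                index[token].add((i, column))
    return index

index = build_index(rows)

# One search on "blue" hits a product name, a color attribute, and a region,
# without the user knowing in advance which column to query.
hits = sorted(index["blue"])
```

The contrast with a relational query is that SQL would require the user (or the BI tool) to know, column by column, everywhere “blue” might appear.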

Flexibility and data exploration

It’s a truism that each generation of dashboard-like technology fails because it’s too inflexible. Users are shown the information that will provide them with the most insight. They appreciate it at first. But eventually it’s old hat, and when they want to do something new, the baked-in data model doesn’t support it.

The latest attempts to overcome this problem lie in two overlapping trends — cool data exploration/visualization tools, and in-memory analytics. Tableau and Spotfire are known more for the former; hot BI vendor QlikTech is known for both. And many vendors — established or otherwise — are going to in-memory OLAP.

Collaboration and communication

The reason I’m finally buckling down and posting on this subject is the announcement of Google Wave, which I think foreshadows a revolution in communication and collaboration technology. Google Wave augurs two primary advances. First, it shows how to make email, instant messaging, microblogging, and so on much more useful. Second, Google Wave could evolve in a way that — finally — makes it truly practical for end-users to set up ad-hoc mini-portals that combine arbitrary URL-possessing resources, exposed to arbitrary workgroups of people.

If and when both of those promises are fulfilled, it will become vastly easier for people to reason together about analytic questions. That may take a little while, as Google Wave obviously wasn’t designed with business intelligence in mind. But whether from Google or from a frightened Microsoft redoubling its SharePoint efforts, there’s hope that we’ll see a leap forward in general collaboration technology. And since BI vendors are doing a generally decent job of exposing queries, charts and so on as portlets, it seems likely that business intelligence will benefit from the collaboration arms race.

That’s important. The first time I heard that reporting was as important for communication as for analytics was from Pilot Software a quarter-century or so ago, and it’s just as true now as it was then. In its first incarnations, collaborative BI probably will be a little too dumb for my tastes, focusing more on mindless reporting and same-old KPIs than on deeper analysis. Still, it’s a move in a good direction.

Other directions

As I said at the beginning, I find it too daunting to try to cover all facets of this subject in one post. So I’ll leave out, at a minimum, several other directions, plus some hobby horses you probably don’t want to hear about anyway until I work out a better way of articulating my opinions.

But by all means please comment on what I’ve left out just as vigorously as on what I’ve included. This post is just the first of many to come.

Comments

19 Responses to “Reinventing business intelligence”

  1. Reinventing business intelligence | DBMS2 — DataBase Management … | on May 30th, 2009 10:16 am

    [...] Original post: Reinventing business intelligence | DBMS2 — DataBase Management … [...]

  2. Robert Young on May 30th, 2009 3:20 pm

    >> using text indexing to get at data more flexibly than a relational schema would normally allow

    There is nothing more flexible than a 5NF datastore, no matter what the various vendors pontificate. What has been missing is the will to do the brain work to understand it (if I had a nickel for every time I’ve heard a MySQL zealot crow about his wonderful flat-file “database”, I’d be rich), and the hardware to make implementation trivial. The SSD/multi-core machine takes care of the latter.

    Such a datastore will remove the need to keep OLTP/OLAP/BI data in separate, maintained, and synced databases. Both TimesTen and SolidDB (and their ilk) should be considered, too. The real power of the SSD machine is that one needs only the logical minimum of data, and that data can satisfy all needs with equal speed.

    The XML folk have spent a lot of effort recreating the relational model in their files: schema + XQuery + XPath and such. But it’s still a document-centric datastore, a non-starter, unless earning your bread depends on it.

    >> when they want to do something new, the baked-in data model doesn’t support it.

    Which is why the tool should be based on the real-world data model (xNF for the OLTP application) rather than on some alleged “user friendly” data specification. I spent some (unhappy) time with BO, before SAP, and the Universe stuff was just a mess. How they managed to sell it was a mystery. It was presented as a way that a user could specify the data, but it never worked that way, and the DBAs and developers had to replicate schemas in BO syntax. Bah.

    So, return to the roots of the database model, let the hardware do the heavy lifting, and consolidate on SQL.

  3. Curt Monash on May 30th, 2009 9:35 pm

    Robert,

    If “blue” is sometimes an attribute and sometimes a substring of the product name, all the normalization in the world won’t give the same practical flexibility as a text search.
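To make that disconnect concrete, here is a small illustration (table, column names, and data are all made up, not from any real schema) of what the relational query has to look like once you already know both places “blue” can hide:

```python
# Illustrative only: with a normalized schema, a query for "blue" must
# enumerate every column where it might appear; the user or tool needs
# schema knowledge that a plain text search does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, color TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [
    ("Blue Steel Widget", "red"),   # "blue" as a substring of the name
    ("Gadget", "blue"),             # "blue" as an attribute value
])

# Works only because we know in advance both columns to look in:
found = conn.execute(
    "SELECT name FROM products "
    "WHERE color = 'blue' OR name LIKE '%Blue%' "
    "ORDER BY name"
).fetchall()
```

Add a third place “blue” can appear (a region, a comment field) and the query must grow again, whereas a value-level text search would pick it up automatically.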

  4. Robert Young on May 31st, 2009 11:05 am

    Curt,

    Not to extend this beyond what’s appropriate to a blog thread, but if “blue” is of interest as an attribute yet also of interest as an embedded substring at the same time, then it would seem to me that there is a large semantic disconnect in play in the data model.

    Mixing apples and oranges (attributes and substrings) is never a good path to follow. Naive users do that all the time since they just don’t know any better (“I just want anything that has ‘blue’ in it.”), but empowering them to shoot themselves in the foot is not in our long term interest.

  5. Curt Monash on May 31st, 2009 7:49 pm

    Robert,

    Let’s just say I think you’re being hopelessly idealistic, especially in a world where we don’t create all the data we later query.

    If my application looks at data initially created by customers, prospects, suppliers, webmasters, or whatever, it’s not very helpful to say “Well, I’m doing my own users a disservice unless I enforce strict naming conventions on the whole bloody world.”

  6. Google Wave – The New Face of BI « The Death of Business Intelligence on June 1st, 2009 8:13 am

    [...] DBMS2 – Reinventing business intelligence [...]

  7. Michael McIntire on June 1st, 2009 11:43 am

    Curt,

    I think you hit on the problem: it’s the complexity of the underlying data stream and its transformation to presentation. The basic issue can best be seen in the investment required to make every Analytics/DW/ETL environment work. My view of the root of this problem is that all of the definition and connectivity of elements and transformations is 100% custom. We all get sold the next greatest tool or structure, but the reality is that our industry has not invested in the methods, processes, and tools to codify information so that it can be bound at runtime.

    More simply put, we do not identify and classify data, when it is defined, in a way that lets it be used later. Businesses seem to lack the discipline to make data generators publish their data, or to maintain consistency in the data definitions on which analytics depend. OLTP data is one such example; unstructured textual data is another. As an industry, we do not build tools that exploit the movement of structure with data in a manner efficient enough to actually use.

    But more to the point – few if any tools can deal with the dynamic nature of data change in any of the major dimensions: structure, temporal, codification and value distribution. Dynamically cascading entity/attribute changes into the analytic systems is just one example. Exposing Models to data, training data, changing data demographics, the list goes on. Point is, in order to materially change the BI Process, the methods we use today will have to eliminate “custom” and ALL the tools in the chain will need to understand the data, structure, and rules which make that possible.

    It’s either that, or the industry will need “Context Binding” methods, which select and bind logic depending on the context of the data element, to come into commercial use in order for current BI methods to materially change. This seems to be where we’re heading, but I am reminded of the optimism surrounding the AI movement before we found the areas where it could be useful, particularly since our development methods and processing systems are still deterministic…

    BTW – Normal Forms do not solve this BI problem. 5NF provides an abstract structure which avoids the need for DDL changes. The logic to decode this form, to navigate and make meaning of the data content still requires an application or some embodiment of logic/rules to derive an analytic, and to make/take action on that analytic. Which is the exact same problem as a flat file.

  8. Adi on June 2nd, 2009 1:25 am

    Robert, I agree with you on that matter.

    Regarding Google Wave, there is a lot of buzz lately, but as always: Google delivers a very basic application, everybody talks about it, and then very few people use it.

    The only success Google has is its search engine and, in some ways, Gmail.

  9. John Evans on June 3rd, 2009 1:36 pm

    Hi Curt,

    >>It’s a truism that each generation of dashboard-like technology fails because it’s too inflexible. Users are shown the information that will provide them with the most insight. They appreciate it at first. But eventually it’s old hat, and when they want to do something new, the baked-in data model doesn’t support it.

    Baked-in, static data models are precisely the fundamental problem we see when initially engaging with our customers. In many cases the problem is exacerbated by an underlying warehouse that cannot easily be changed to accommodate new business requirements. Warehouses built using traditional methods become too brittle over time and companies are unwilling to risk (or unable to justify the expense of) a major overhaul to their warehouse. Kalido customers have found a top-down model-driven approach with an automatically generated data model allows them to quickly respond to new requirements. Our partner QlikTech provides a great solution to the visualization problem because there is no pre-defined model, and combined with a flexible data warehouse as a source, its ability to dynamically deliver more comprehensive information to meet user demands is further enhanced.

  10. The future of data marts | DBMS2 -- DataBase Management System Services on June 8th, 2009 7:28 am

    [...] My recent post on reinventing business intelligence [...]

  11. Google Fusion Tables | DBMS2 -- DataBase Management System Services on June 15th, 2009 7:10 am

    [...] Fusion Tables bears some vague resemblance to what I’m thinking about for the future of both business intelligence and data marts, it sounds as if it has a long way to go before it’s something most [...]

  12. An example of what’s wrong with big vendors’ approaches to BI (SAP in this case) | DBMS2 -- DataBase Management System Services on June 15th, 2009 9:28 am

    [...] Business intelligence and the associated data management processes need to be reimagined, and I’m increasingly coming to suspect that the big BI conglomerates aren’t up to the task. Categories: Analytic technologies, Business intelligence, SAP AG, Specific users, Theory and architecture  Subscribe to our complete feed! [...]

  13. Consumer Mailing Lists on November 19th, 2009 2:49 pm

    Great read, thanks for sharing. I found your thoughts on text indexing very intriguing. I would agree that this is definitely a more flexible method and makes finding information a lot easier for the user. I don’t know too much about this topic, but I do know that any program or software that makes researching and data mining easier on the user, is something that I can get behind. Thanks for your thoughts on this subject, I found it very helpful.

  14. Notes and cautions about new analytic technology | DBMS2 -- DataBase Management System Services on May 7th, 2010 11:05 pm

    [...] Comments I made at various points were foreshadowed in a post on reinventing business intelligence. [...]

  15. What matters in mobile business intelligence | DBMS2 -- DataBase Management System Services on July 15th, 2010 6:45 am

    [...] up to the potential of decision support. Some (not all) of those criticisms are being addressed by more recent dashboard technology developments. But with one exception, those criticisms are of little direct relevance to the mobile [...]

  16. Social technology in the enterprise | Text Technologies on September 14th, 2011 1:04 am

    [...] before in analytic contexts; it’s an important concept on the monitoring-oriented side of business intelligence and — if Oliver Ratzesberger is to be believed — in investigative analytics as well. [...]

  17. “Disruption” in the software industry | DBMS 2 : DataBase Management System Services on July 31st, 2013 9:02 pm

    [...] haven’t entirely forgotten the land-and-expand model. Yes, I’ve predicted a much-needed reinvention of/revolution in BI — but better technology alone rarely a disruption [...]

  18. Qwalytics: Self-Service Business Intelligence and Natural Language Reporting on October 27th, 2013 1:07 pm

    Very nice article, I used some of your useful concepts to write my own article: Self-Service Business Intelligence and Natural Language Reporting. I’d love to get your feedback. Don’t you think that Natural Language Business Intelligence is the next step of Business Intelligence’s evolution?

  19. Curt Monash on October 27th, 2013 9:42 pm
