April 19, 2014

Necessary complexity

When I’m asked to talk to academics, the requested subject is usually a version of “What should we know about what’s happening in the actual market/real world?” I then try to figure out what the scholars could stand to hear that they perhaps don’t already know.

In the current case (Berkeley next Tuesday), I’m using the title “Necessary complexity”. I actually mean three different but related things by that, namely:

  1. No matter how cool an improvement you have in some particular area of technology, it’s not very useful until you add a whole bunch of me-too features and capabilities as well.
  2. Even beyond that, however, the simple(r) stuff has already been built. Most new opportunities are in the creation of complex integrated stacks, in part because …
  3. … users are doing ever more complex things.

While everybody on some level already knows all this, I think it bears calling out even so.

I previously encapsulated the first point in the cardinal rules of DBMS development:

Rule 1: Developing a good DBMS requires 5-7 years and tens of millions of dollars.

That’s if things go extremely well.

Rule 2: You aren’t an exception to Rule 1. 

In particular:

  • Concurrent workloads benchmarked in the lab are poor predictors of concurrent performance in real life.
  • Mixed workload management is harder than you’re assuming it is.
  • Those minor edge cases in which your Version 1 product works poorly aren’t minor after all.

My recent post about MongoDB is just one example of same.

Examples of the second point include but are hardly limited to:

BDAS and Spark make a splendid example as well. 🙂

As to the third point:

Bottom line: Serious software has been built for over 50 years. Very little of it is simple any more.

Comments

3 Responses to “Necessary complexity”

  1. Brian Wood on April 21st, 2014 1:52 pm

    I understand your premise, and if we accept the premise I also accept your conclusions. However, necessary complexity and apparent complexity are not the same thing. There is a lot of complexity that goes into the process of knowledge acquisition for a system like IBM’s Watson, or any of the Artificial Intelligence (AI) models, particularly General AI, currently being researched and developed, but for them to be truly useful they need to have very low apparent complexity. You ask it a question and it gives you an answer. No mention of Neural Networks, Markov models, etc., simply the answer. For Data Warehouses and Analytics platforms in general, this could be the next big thing. Business users don’t need to know, and in many cases don’t care, how a problem is solved; they just need the answer in enough detail to assess their options and make (or accept) a choice. Anyway, just something to think about when you discuss complexity. You could talk about strange attractors and other obscure aspects of complexity theory, or you could have a system that determines how best to solve a problem, and trust its logic, processes, and methods. BTW, this is only my personal opinion, and should not be taken to imply anything about my employer’s strategy.

  2. Curt Monash on April 21st, 2014 2:06 pm

    Brian,

    I’m fine with the magic black box for something like natural language recognition, in which the computer truly should make an autonomous decision.

    But if the computer is just providing decision support, then the bases for its opinions need to be more transparent.

    Yes, it’s possible that, for example, marketing efficiency could be slightly better served by a black box than by an understandable model. But short-term sales success isn’t the only goal of online marketing. I’d rather know who I’m giving what messages to, and why, than have slightly better short-term numbers.

  3. Heiko Korndorf on April 22nd, 2014 9:59 am

    Curt,

    To summarize what you are saying (probably over-simplified): it is getting harder and harder to create and sell an innovative product (in this case a DBMS), because these things have become very complicated thanks to the huge development efforts invested in them over the course of many years.

    That sounds like a rather static view to me. In my perception, innovations can originate in very odd places (Nutch!?), then develop, and suddenly threaten the incumbents.

    I’m not sure whether you feel that many NoSQL/Hadoop/etc. startups underestimate that, but I was at a major Hadoop event a couple of weeks ago and was surprised to hear repeatedly that “… is not a replacement of your existing DBMS/DWH infrastructure but nicely complements it.” (I doubt you will hear the same statement in two or three years.)

    So whilst I believe that smaller innovations can potentially compete with established offerings that have enjoyed huge investments, I would think the bigger (probably philosophical) question is how much innovation is left. And in that respect your blog echoes the spirit of a quote by Larry Ellison from 2003: “There’s this bizarre notion in the computer industry that we’ll never be a mature industry … [it] is as large as it’s going to be.”

    I sincerely hope that is wrong.

    Regards,
    Heiko
