I first became an analyst in 1981. And so I was around for the early days of the movement from batch to interactive computing, as exemplified by:
- The rise of minicomputers as mainframe alternatives (first VAXen, then the ’nix systems that did largely supplant mainframes).
- The move from batch to interactive computing even on mainframes, a key theme of 1980s application software industry competition.
Of course, wherever there is interactive computing, there is a desire for interaction so fast that users don’t notice any wait time. Dan Fylstra, when he was pitching me the early windowing system VisiOn, characterized this as response so fast that the user didn’t tap his fingers waiting.* And so, with the move to any kind of interactive computing at all came a desire that the interaction be quick-response/low-latency.
*That was well put. Unfortunately, VisiOn didn’t meet Dan’s standard, which is a big part of why VisiCorp wound up on the ash heap of history.
Once again, we’re in an era that features:
- A move from batch to interactive computing.
- Users’ desire for zero-wait interactions.
The two big examples I have in mind for a batch-to-interactive trend are:
- The replacement of batch data warehouse loading by continuous feeds.
- More generally, a movement to integrate short-request and analytic processing.
My top examples for zero-wait interactions are:
- “Speed of thought” business intelligence.
- Anything to do with consumer web page response times.
Let me confess something up front: I’m conflating two different kinds of low latency, namely database freshness and user-interface response time. My two main reasons are:
- If you want to make decisions based on fresh data, you probably don’t want to take a long time making them.
- If you care enough about an analytic problem to repeatedly query a database, then you probably would like the database to be as fresh as possible.
I’ve been conflating those two things at least since I first came up with the speed of a turtle vs. speed of light analogy.
But how should we refer to more-or-less-immediate computing? The term “interactive” has long since been played out. “Real-time” has definitional issues, as captured in this Wikipedia passage:
> Real-time programs must guarantee response within strict time constraints. Often real-time response times are understood to be in the order of milliseconds and sometimes microseconds. In contrast, a non-real-time system is one that cannot guarantee a response time in any situation, even if a fast response is the usual result.
>
> The use of this word should not be confused with the two other legitimate uses of real-time, which in the domain of simulations, means real-clock synchronous, and in the domain of data transfer, media processing and enterprise systems, the term is intended to mean without perceivable delay.
Similar definitional problems attach to a term I nonetheless sometimes use, namely “quasi real-time”.
The Sumo Logic guys propose an interesting alternative: human real-time. Billy Bosworth recently emailed me with a similar idea, from a conference panel that obviously struck a nerve. I like it, because it conveys the impression:
- Effectively real-time from a human perspective …
- … but not necessarily from a machine standpoint.
So am I overlooking some drawback to the term? If not, I’m going to start using “human real-time” to mean something like “fast enough that humans don’t perceive an annoying lag”.