Complex event processing (CEP)

Discussion of complex event processing (CEP), aka event processing or stream processing – i.e., of technology that executes queries before data is ever stored on disk.

January 28, 2009

More Oracle notes

When I went to Oracle in October, the main purpose of the visit was to discuss Exadata. And so my initial post based on the visit was focused accordingly. But there were a number of other interesting points I’ve never gotten around to writing up. Let me now remedy that, at least in part. Read more

October 20, 2008

Coral8 proposes CEP as a BI data platform

It used to be that Coral8 and StreamBase were the two complex event/stream processing (CEP) vendors most committed to branching out beyond the super-low-latency algorithmic trading market. But StreamBase seems to have pulled in its horns after a management change, focusing much more on the financial market (and perhaps the defense/intelligence market as well). Aleri, Truviso, and Progress Apama, while each showing signs of branching out, don’t seem to have gone as far as Coral8 yet. And so, though it’s a small company with not all that many dozens of customers, my client Coral8 seems to be the one to look at when assessing whether CEP really is relevant to a broad range of mainstream – no pun intended – applications.

Coral8 today unveiled a new product release – the not-so-concisely named “Coral8 Engine and Portal Release 5.5” – and a new buzzphrase — “Continuous Intelligence.” The interesting part boils down to this:

Coral8 is proposing CEP — excuse me, “Continuous Intelligence” — as a data-store-equivalent for business intelligence.

This includes both operational BI (the current sweet spot) and dashboards (the part with cool, real-time-visualization demos). Read more
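To make that concrete, here is a toy sketch (mine, not Coral8's; the class, keys, and numbers are all invented) of what "CEP as a BI data store" amounts to: the engine keeps running aggregates in memory as events flow through, and dashboard widgets read those aggregates instead of querying a table that was persisted first.

```python
# Illustrative only: a stream processor maintains running aggregates in
# memory, and a dashboard polls those aggregates rather than a stored table.
from collections import defaultdict

class RunningAggregates:
    """In-memory per-key counts and sums, updated as each event arrives."""
    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def on_event(self, key, value):
        self.count[key] += 1
        self.total[key] += value

    def snapshot(self, key):
        # What a dashboard widget would read, in lieu of a query
        # against a persisted fact table.
        n = self.count[key]
        return {"count": n, "avg": self.total[key] / n if n else None}

agg = RunningAggregates()
for region, amount in [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 210.0)]:
    agg.on_event(region, amount)
print(agg.snapshot("EMEA"))   # {'count': 2, 'avg': 165.0}
```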

September 22, 2008

Web analytics — clickstream and network event data

It should surprise nobody that web analytics – and specifically clickstream data — is one of the biggest areas for high-end data warehousing. For example:

Read more

September 19, 2008

When BI, CEP, BAM, and Gartner meet together

Doug Henschen has two good articles based on Gartner’s Event Processing conference, on the theme of BI/event processing integration — an overview, and a detailed interview with Roy Schulte. And as I note elsewhere, Seth Grimes has a good article based on the conference too.

I have my own thoughts on these subjects, but I’m not ready to post them at the moment. In the meantime, I recommend the articles linked above.

July 2, 2008

Event processing vs. data-driven processing

Marco Seiriö offers a distinction between event processing and data-driven processing. Specifically, he says that if an event has an ID, then it’s true event processing; if it doesn’t, and what you’re doing looks somewhat like event processing anyway, then you’re doing data-driven processing. Read more
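Here's a toy rendering of the distinction as I've summarized it (the class and field names are mine, not Seiriö's): an event carries its own identity and timestamp, while data-driven processing works on anonymous rows whose identity, if any, comes from the data itself.

```python
# Illustrative only: an "event" with its own ID vs. an anonymous data row.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    event_id: str           # the ID that, in this framing, marks true event processing
    occurred_at: datetime
    payload: dict

# Data-driven processing: the same payload, but no event identity -- just a row.
row = {"sensor": "pump-7", "temp_c": 81.4}

evt = Event(event_id="e-0001",
            occurred_at=datetime.now(timezone.utc),
            payload=row)
```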

April 29, 2008

Truviso and EnterpriseDB blend event processing with ordinary database management

Truviso and EnterpriseDB announced today that there’s a Truviso “blade” for Postgres Plus. By email, EnterpriseDB’s Bob Zurek endorsed my tentative summary of what this means technically, namely:

  • There’s data being managed transactionally by EnterpriseDB.

  • Truviso’s DML has all along included ways to talk to a persistent Postgres data store.

  • If, in addition, one wants to do stream processing things on the same data, that’s now possible, using Truviso’s usual DML.

Read more
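To illustrate the shape of that combination, here's a sketch that is strictly mine: sqlite3 stands in for a Postgres data store, and plain Python stands in for Truviso's DML. The point is just that the same rows get written transactionally and also fed through a continuous, stream-style computation.

```python
# Hypothetical sketch of the architecture in the bullets above; it does not
# use Truviso's or EnterpriseDB's actual APIs.
import sqlite3
from collections import deque

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trades (symbol TEXT, price REAL)")

window = deque(maxlen=5)   # a 5-row sliding window for the "streaming" side

def on_trade(symbol, price):
    # (a) ordinary transactional persistence
    with db:
        db.execute("INSERT INTO trades VALUES (?, ?)", (symbol, price))
    # (b) stream processing over the very same data as it arrives
    window.append(price)
    return sum(window) / len(window)   # e.g., a moving average

for tick in [("ORCL", 20.1), ("ORCL", 20.3), ("ORCL", 19.9)]:
    moving_avg = on_trade(*tick)

print(db.execute("SELECT COUNT(*) FROM trades").fetchone())  # (3,)
print(round(moving_avg, 2))                                  # 20.1
```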

March 19, 2008

CEP is entering BI

I talked with both Coral8 and Truviso this afternoon. They both have their financial services efforts, of course. Coral8 also continues to get business doing data reduction for sensor networks — mainly RFID and utilities, I think. Coral8 is working on some really cool and confidential other stuff as well.

But my biggest takeaway from this pair of calls was that Coral8 and Truviso are penetrating general BI. Read more

March 19, 2008

What to call CEP

It seems that the CEP folks are still concerned about what to call themselves. There really are only three choices:

“Stream processing” might once have been on the list, but it has too many other meanings, and “streaming” adds more meanings yet.

“Complex” has the virtue of inertia; CEP is the closest thing the category has to an agreed-upon name. But few people want to buy technology that describes itself as being “complex.” And in any case it’s not clear how complex many of those events are.

“Event stream processing” isn’t terribly well established, and to some extent it runs afoul of the same ambiguities as “stream processing.” What’s worse, those names lead to four-word product category names. Who really wants to market or hear about “complex event processing engines” or “event stream processing platforms”?

So let’s just call the category “event processing” and have done with it, OK? Products can, if they want, be “event processing somethings.” Names like that wouldn’t be any more of a mouthful than “data warehouse appliance,” and the latter category is doing pretty well for itself.


January 16, 2008

Fixing Twitter in three letters: CEP

There’s a lot of agitation today because Twitter broke under the message volume generated during Steve Jobs’ Macworld keynote. I don’t know what that volume was, but I just checked the lower volume of tweets (i.e., updates) going through the “public timeline” (i.e., everything) twice, and both times it was under 200 messages per minute. So let’s say there’s a much higher volume at peak times, hypothesize also that Twitter would like to grow a lot, and posit that Twitter would like to handle 10,000-100,000 messages/minute – i.e., up to 1,000+/second – as soon as possible.

That’s easy using CEP (Complex Event Processing). A Twitter update is just a string of 140 or fewer characters. It is associated with three pieces of metadata – author, time, and mode of posting. It should be visible in real time to any of the author’s “followers,” as well as in a single public timeline; perhaps there will be other kinds of Twitter channels in the future. In most cases, these updates are only visible to a user upon page refresh. Almost no Twitter user seems to have more than about 7,000 followers, even Robert Scoble or Evan Williams.* The average number of followers, at least among active updaters, is probably in the low hundreds now. So basically, this is all a heckuva lot easier than the tick-monitoring systems Wall Street firms are using today.

*I believe there’s a hard cap of 7,500, but nobody seems to have bumped against it yet. Twitterholic gives a different figure than Twitter does for Scoble. And it correctly shows Dave Troy with a little over 10,000.

Here’s how to implement that. Read more
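The actual implementation discussion is behind the "Read more" link. Purely as an illustration of the fan-out described above (one append per follower plus one to the public timeline, with author, time, and mode of posting carried as metadata), here's a minimal in-memory sketch; every name and number in it is made up.

```python
# Illustrative fan-out of short updates to follower timelines and a public
# timeline; in-memory only, with arbitrary sizes.
from collections import defaultdict, deque

TIMELINE_LEN = 20
followers = defaultdict(set)                              # author -> follower ids
timelines = defaultdict(lambda: deque(maxlen=TIMELINE_LEN))
public_timeline = deque(maxlen=TIMELINE_LEN)

def follow(follower, author):
    followers[author].add(follower)

def post(author, text, posted_at, via="web"):
    update = {"author": author, "text": text[:140],
              "time": posted_at, "via": via}               # the three pieces of metadata
    public_timeline.append(update)
    for f in followers[author]:                            # fan-out: one append per follower
        timelines[f].append(update)
    return update

follow("alice", "scoble")
follow("bob", "scoble")
post("scoble", "Jobs is on stage", posted_at="2008-01-15T18:05Z")
print(len(timelines["alice"]), len(public_timeline))       # 1 1
```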

November 13, 2007

Coral8 highlights some key issues with dashboards

Coral8 today is rolling out the Coral8 Portal, offering some BI basics for CEP (Complex Event Processing) filters and queries. In Release 1, this is primitive compared with other BI portals, and of direct interest only to organizations that have already decided they’re using CEP technology. Even so, it serves as a useful illustration of several important issues in dashboarding.

The simplest is that real-time dashboards require different visualizations than other kinds of dashboards do. Most obvious is the ever-popular graph marching from right to left across the screen as time advances along the x-axis. There also are differences in style between reports and tables that you actually read, vs. read-outs that you merely watch for flickers of change. (Of course those two examples hardly make for a complete list.)
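One way to picture the "marching" graph: the chart only ever shows a bounded window of the most recent readings, so the buffer behind it can simply drop the oldest point as each new one arrives. A tiny sketch, with an arbitrary window size, illustrative only:

```python
# Fixed-length buffer of timestamped readings backing a right-to-left chart.
from collections import deque

WINDOW_POINTS = 60                       # e.g., the last 60 readings on screen
chart_buffer = deque(maxlen=WINDOW_POINTS)

def on_reading(timestamp, value):
    chart_buffer.append((timestamp, value))   # oldest point falls off the left
    return list(chart_buffer)                 # what the chart would redraw

for t, v in enumerate([3.1, 3.4, 2.9, 3.8]):
    points = on_reading(t, v)
print(points[-1])   # (3, 3.8)
```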

More interesting are the flexibility and parameterization. While Coral8 sells to multiple markets, the design point for the portal is clearly financial trading. So, for example, a query may be registered with one ticker symbol, and an end user can easily customize it to slot in another one instead. In a way, this is a step toward the much greater flexibility that dashboards need overall.
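Here's a hedged sketch of that parameterization idea (generic pseudo-logic, not Coral8's CCL or portal API), in which a query is registered once as a template and an end user binds a different ticker symbol without touching the query itself:

```python
# Illustrative only: registered query templates with user-supplied parameters.
registered_queries = {}

def register_query(name, predicate):
    registered_queries[name] = predicate

def run_over_stream(name, param, stream):
    predicate = registered_queries[name]
    return [event for event in stream if predicate(event, param)]

# Registered once by the query's author; the ticker is left as a parameter.
register_query("big_trades",
               lambda e, symbol: e["symbol"] == symbol and e["size"] >= 1000)

trades = [{"symbol": "ORCL", "size": 1500}, {"symbol": "IBM", "size": 2000}]
print(run_over_stream("big_trades", "IBM", trades))   # the user slots in "IBM"
```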

Truth be told, if you put all such Coral8 flexibility features together, they’re not yet very impressive. So what’s even more interesting is the overall architecture that could support much greater flexibility in the future. If dashboards gain the flexibility they need, and queries continue to be done in the conventional manner, query volumes will increase enormously. If, further, those queries are also updated in some near-real-time manner, that’s another huge increase.

How huge? Well, I can make a case that it could be well over three orders of magnitude: Read more
