Truviso and EnterpriseDB announced today that there’s a Truviso “blade” for Postgres Plus. By email, EnterpriseDB’s Bob Zurek endorsed my tentative summary of what this means technically, namely:
- There’s data being managed transactionally by EnterpriseDB.
- Truviso’s DML has all along included ways to talk to a persistent Postgres data store.
- If, in addition, one wants to do stream processing things on the same data, that’s now possible, using Truviso’s usual DML.
Note: Extended-relational DBMS like Postgres, Oracle, DB2, and Informix/Illustra have long offered the ability to add blades/cartridges. It’s easy to understand what these do when they simply add native management for a new datatype, and extend the parser, optimizer, and data access methods accordingly. But blades are used in other ways as well, and I’ve always found that somewhat confusing. A little bit of that appears to be going on in this case.
Bob added that there have been a lot of inquiries about the announcement today, without specifying from whom. Truviso marketing chief Roman Bukary, late of SAP, sent over some generic use cases, which pretty much boil down to my first two bullet points above. (More precisely, they agree if you replace “transactionally” with “persistently”; Roman also foresees data warehousing uses.)
I like this announcement. With one probable exception, it’s a good fit for every major use of event processing; the exception is super-low-latency apps, where no extraneous overhead is tolerable. (Those are found mainly in algorithmic trading, but could arise in security and network management as well.) But then, Truviso is being positioned away from its initial currency trading focus anyway.
Super-low-latency aside, the other big current use case for event processing is data reduction. I.e., you have a lot of incoming data – e.g., via satellite telemetry or intelligence intercepts or network monitoring sensors, or monitoring character movement in an MMO (Massively Multiplayer Online) game. You try to grab all the “interesting” stuff, while disregarding or even throwing away the rest. But the “throwing away” part is a little worrisome. So if instead you can seamlessly persist everything, even for a short period of time (e.g., measured in days), that’s goodness. And even if you can’t keep it all for that short while – well, if the point of data reduction is to retain only a fraction of the incoming data, this scheme could make it easier to persist the keepers.
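That data-reduction pattern can be sketched in a few lines. This is a toy illustration, not Truviso’s actual mechanism: the event shape, the `is_interesting` predicate, and the retention window are all hypothetical stand-ins for whatever the real stream query would specify.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical sketch of the data-reduction pattern: persist everything
# briefly, but retain only the "interesting" events for the long haul.

RETENTION = timedelta(days=3)   # assumed short-term window, "measured in days"

short_term = deque()            # (timestamp, event) pairs -- ALL traffic
long_term = []                  # the "keepers" only

def is_interesting(event):
    # Placeholder predicate -- in practice this is the stream query's filter.
    return event.get("severity", 0) >= 7

def ingest(event, now):
    # 1. Persist everything into the short-term store.
    short_term.append((now, event))
    # 2. Evict whatever has aged out of the retention window.
    while short_term and now - short_term[0][0] > RETENTION:
        short_term.popleft()
    # 3. Independently retain the interesting fraction indefinitely.
    if is_interesting(event):
        long_term.append(event)

# Feed a toy stream: five events, only the high-severity ones are "keepers".
t0 = datetime(2008, 1, 1)
for i, sev in enumerate([1, 8, 3, 9, 2]):
    ingest({"id": i, "severity": sev}, t0 + timedelta(hours=i))

print(len(short_term), len(long_term))  # 5 2
```

The point of the sketch is that steps 1 and 3 are decoupled: even if the short-term store can’t hold everything, the keepers still get persisted.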
Another current use case for event processing is rules engines. Progress Apama has a rules paradigm all the way down, while Coral8 happily tells of a customer who uses event processing for all kinds of rules-based real-time CRM. But the Coral8 example is closely integrated with conventional persistent data stores, and the same is likely for other similar applications. Business activity monitoring (BAM) would be a special case of this.
As you know, my ultimate dream for business intelligence/analytic uses of event processing goes beyond BAM. I think many individuals in an enterprise should each track many different (but related) KPIs (Key Performance Indicators). Current query loads for reporting, dashboards, ad hoc query, etc. could easily go up by 2-3 orders of magnitude. When that happens, you want to consider different ways of doing things, specifically memory-centric ones. Normal memory-centric data processing might get the job done, but I have a suspicion that the right architecture will wind up looking a lot like event processing.
Once again, that’s a use for event processing that naturally integrates tightly with a persistent database.
An earlier press release declaring Truviso’s love for PostgreSQL