Progress, Apama, and DataDirect

Analysis of Progress Software and its various product lines, including Apama, DataDirect, and OpenEdge.

August 10, 2007

Coral8 versus StreamBase

Besides talking about what Coral8 and StreamBase (and other CEP vendors) have in common, Mark Tsimelzon and I talked quite a bit about what he sees as some of the important differences. There were a lot, of course, but three in particular stood out.

1. Mark believes Coral8 has significantly lower latency than StreamBase. E.g., the Wombat/Coral8 combo achieves sub-millisecond latency, with Coral8 itself consuming less than a tenth of that. The best comparable figures from StreamBase that I currently know of are almost an order of magnitude slower.

2. Top-end speed aside, Mark believes that Coral8 is fundamentally better suited for complex queries and pattern recognition, while StreamBase works well with simpler queries. For example, his other performance claims notwithstanding, he concedes that StreamBase is at least comparable to Coral8 in its throughput for huge numbers of simple queries. (The number he mentioned was ½ million queries/second.) Indeed, while we barely talked about customer/marketing issues, Mark asserts that the companies’ respective customer bases reflect this complex/simple distinction.*
Read more

August 3, 2007

Competitive claims in CEP

For the most part, the vendors I talk with in complex event/stream processing like and speak well of each other (most of the exceptions seem to involve StreamBase). Even so, there are a lot of interesting competitive claims and counterclaims in this market. Prior posts and comment threads have covered Apama/StreamBase jousting on the subjects of who has more business and how many financial data feeds StreamBase supports. Other areas that generate interesting sparks are performance, parallelism, and determinism. Read more

August 3, 2007

A deeper dive into Apama

My recent non-technical Apama briefing has now had a much more technical sequel, with charming founder and former Cambridge professor John Bates. He still didn’t fully open the kimono – trade secrets and all that — but here’s the essence of what’s going on.

Complex event/stream processing (CEP) is all about looking for many patterns at once. Reality – the stream(s) of data – is checked against these patterns for matches. In Apama, these patterns are kept in a kind of tree – they call it a hypertree — and John says the work to check them is only logarithmic in the number of patterns.

Since patterns commonly have multiple parts — and usually also take time to unfold — what really goes on is that partial matches are found, after which what’s being matched against is the REMAINDER of the pattern. Thus, there’s constant pruning and rebalancing of the tree. What’s more, a large fraction of all patterns – at least in the financial trading market — involve a short time window, which again creates a need for ongoing, rapid tree modification. Read more
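
To make that concrete: the sketch below is emphatically not Apama’s hypertree (a flat list of in-flight matches checks in linear time, not logarithmic), just a minimal Python illustration, with names of my own invention, of the two ideas John described. Once a pattern is partially matched, the engine only needs to track the remainder of that pattern, and anything whose time window has lapsed gets pruned as new events arrive.

```python
import time

class Pattern:
    """A multi-part pattern: an ordered list of predicates plus a time window."""
    def __init__(self, name, steps, window_seconds):
        self.name = name
        self.steps = steps              # one boolean test per part of the pattern
        self.window = window_seconds    # short windows are common in trading patterns

class PartialMatch:
    """One in-flight match: what REMAINS of the pattern, and when it started."""
    def __init__(self, pattern, started_at):
        self.pattern = pattern
        self.remaining = pattern.steps[1:]   # the first part has already matched
        self.started_at = started_at

class ToyMatcher:
    def __init__(self, patterns):
        self.patterns = patterns
        self.partials = []              # a real engine would index these in a tree

    def on_event(self, event, now=None):
        now = time.time() if now is None else now
        completed = []
        # Prune partial matches whose time window has closed.
        self.partials = [p for p in self.partials
                         if now - p.started_at <= p.pattern.window]
        # Advance surviving partial matches against the remainder of their pattern.
        for pm in list(self.partials):
            if pm.remaining[0](event):
                pm.remaining = pm.remaining[1:]
                if not pm.remaining:
                    completed.append(pm.pattern.name)
                    self.partials.remove(pm)
        # See whether this event begins a new partial match of any pattern.
        for pat in self.patterns:
            if pat.steps[0](event):
                if len(pat.steps) == 1:
                    completed.append(pat.name)
                else:
                    self.partials.append(PartialMatch(pat, now))
        return completed

# Toy usage: "two buy events within 5 seconds."
is_buy = lambda e: e.get("side") == "buy"
matcher = ToyMatcher([Pattern("double_buy", [is_buy, is_buy], window_seconds=5)])
matcher.on_event({"side": "buy"}, now=0.0)          # starts a partial match
print(matcher.on_event({"side": "buy"}, now=2.0))   # ['double_buy']
```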

July 26, 2007

An era of easier database portability?

More and more, I find myself addressing questions of database portability and transparency, most particularly in the cases of EnterpriseDB, Ants Software, and now also Dataupia. None of those three efforts is very large yet, but so far I’d rate their respective buzzes to be very encouraging in the case of EnterpriseDB, non-discouraging or better in the case of Ants, and too early to judge for Dataupia. On the whole, it definitely seems like a matter worthy of attention.

With that as backdrop, where is all this compatibility/portability/transparency stuff going to lead? Read more

July 18, 2007

StreamBase rebuts

In my post Monday about Apama, I complained that StreamBase hadn’t offered a rebuttal to some of Apama’s claims. This has now been fixed. 🙂 Bill Hobbib, StreamBase’s VP of Marketing, wrote in. Part of what he had to say was the following.

Adapters to Data Feeds

Your blog comment that adapters don’t seem like a key competitive differentiator is accurate, and since adapters are so straightforward to develop with StreamBase as part of a customer engagement, we’ve never found adapters to be a key competitive differentiator. The comment by a competitor that their advantage over StreamBase comes from their having developed more adapters suggests they cannot distinguish themselves based on the other functional capabilities that are important to customers. In reality, our speed/performance and scalability are orders of magnitude superior to competitors, as is the speed with which StreamBase applications are developed, deployed, and modified when business needs change. (If it were easy to develop applications with certain competitive systems, then one might assume they would make free evaluation versions of their product available for download from their websites!)

That being said, StreamBase offers adapters to a broad array of data feeds. Most of these are offered out-of-the-box by StreamBase, including the following:
* Financial Market Data: processes data from Reuters® RMDS™ and Reuters Triarch™.
* TIBCO® Rendezvous™: converts Rendezvous messages into StreamBase tuples and vice versa.
* StreamBase Adapter for JDBC: connects StreamBase to enterprise databases, allowing submission of SQL queries to external resources such as IBM® DB2™, Oracle®, Microsoft® SQLServer™, and Sybase®.
* StreamBase Adapter for JMS: integrates StreamBase with any JMS-compliant message bus.
* StreamBase Adapter for Microsoft Excel™: allows applications to publish data to Excel or read data from Excel.
* StreamBase CSV Adapters: allow applications to read data from, and write data to, comma-separated value (CSV) files.
* StreamBase SMTP Adapter: taps into the IP stack on a running system to process live data, converts the IP packets into a TCP data stream, or reads IP packets from captured files.
* StreamBase XML Adapter: streams XML-formatted data records into and out of StreamBase applications.

We also can connect to financial exchanges either using our own adapters or through a third-party partnership. Below you’ll find a listing of those.

Read more
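
As an aside for readers who haven’t built one: an input adapter’s job is conceptually simple, namely turning an external format or feed into the engine’s tuples and pushing them into a stream. Here’s a rough, hypothetical Python sketch of the CSV case. This is not StreamBase’s adapter API; the engine.submit hook in the usage comment is invented for illustration.

```python
import csv
from typing import Callable, Dict

EngineTuple = Dict[str, str]   # stand-in for a typed engine tuple/schema

def csv_input_adapter(path: str, emit: Callable[[EngineTuple], None]) -> None:
    """Read a CSV file and push each row into the engine as a tuple.

    Conceptual only: a production adapter also handles type conversion,
    bad rows, and live (tailing) input rather than a one-shot file read.
    """
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            emit(row)   # hand the tuple to the stream-processing engine

# Usage sketch -- engine.submit is a hypothetical input hook:
# csv_input_adapter("ticks.csv", engine.submit)
```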

July 16, 2007

Progress Apama

I finally got my promised briefing with Progress Apama. Unfortunately, nobody particularly technical was able to attend, but I came away with a better understanding even so.

Unlike StreamBase or Truviso, Apama has a rules-based architecture. In essence, the rules engine maintains state of various kinds, and matches that state against desired patterns, called “scenarios.” They can handle 100s or possibly even 1000s of scenarios at once. Read more
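
To give a rough feel for what “maintains state and matches it against scenarios” means in practice, here is a deliberately naive Python sketch of my own (not Apama’s engine or API): each registered scenario keeps its own state and is offered every incoming event, so hundreds of scenarios can be live at once.

```python
from typing import Any, Callable, Dict, List

# A scenario here is just: a name, some mutable state, and a step function
# that updates the state and reports whether the scenario has fired.
Scenario = Dict[str, Any]

def make_scenario(name: str, initial_state: Dict[str, Any],
                  step: Callable[[Dict[str, Any], Any], bool]) -> Scenario:
    return {"name": name, "state": dict(initial_state), "step": step}

class ToyRuleEngine:
    """Every registered scenario sees every event and keeps its own state."""
    def __init__(self) -> None:
        self.scenarios: List[Scenario] = []

    def register(self, scenario: Scenario) -> None:
        self.scenarios.append(scenario)

    def on_event(self, event: Any) -> List[str]:
        fired = []
        for s in self.scenarios:          # a real engine would index, not scan
            if s["step"](s["state"], event):
                fired.append(s["name"])
        return fired

# Example scenario: fire once cumulative traded size crosses a threshold.
def volume_step(state, event):
    state["volume"] += event.get("size", 0)
    return state["volume"] >= 10_000

engine = ToyRuleEngine()
engine.register(make_scenario("big_volume", {"volume": 0}, volume_step))
print(engine.on_event({"size": 6_000}))   # []
print(engine.on_event({"size": 5_000}))   # ['big_volume']
```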

June 14, 2007

Native XML engine short list

I’ve been implying that the short list for native XML database engine vendors should be MarkLogic, IBM, and maybe Microsoft, on the theory that Progress and Intersystems tried the market and pulled back. Well, add Intersystems to the list, and not necessarily in last place. They’ve long had a very fast nonrelational engine in Caché. Perhaps building Ensemble on it has induced them to sharpen up the XML capabilities again.

Anyhow, while I’m not at liberty to explain more of my reasoning (i.e., to disclose my evidence), Caché should be taken seriously as an XML DBMS alternative … even if I never can seem to get a proper DBMS briefing from them (which is far from entirely being their fault).

April 28, 2007

Progress Software progress report

For the past 20+ years – all the way back to when it was still privately held — I’ve periodically gotten up to speed on Progress Software. I’m trying again now, and to that end dropped by yesterday for a chat with Jeff Stamen. I’ll give a brief overview now – which is probably all I’m qualified to do right now anyway – and then loop back with more detailed info after I get it.

After a reorganization at the beginning of this (November) fiscal year, the vast majority of Progress’ products fall into one of five buckets, which I shall glibly refer to in decreasing order of size as “Progress Classic,” “SOA,” “drivers,” “memory-centric,” and “EasyAsk.” Here’s a quick overview of each. Read more

April 18, 2007

Naming the DBMS disruptors

Edit: This post has largely been superseded by this more recent one defining mid-range relational DBMS.

I find myself defining a new product category – midrange OLTP/multipurpose DBMS. (Or just midrange DBMS for brevity.) Nothing earthshaking here; I’m simply referring to those products that: Read more

March 25, 2007

Oracle, Tangosol, objects, caching, and disruption

Oracle made a slick move in picking up Tangosol, a leader in object/data caching for all sorts of major OLTP apps. They do financial trading, telecom operations, big web sites (Fedex, Geico), and other good stuff. This is a reminder that the list of important memory-centric data handling technologies is getting fairly long, including:

And that’s just for OLTP; there’s a whole other set of memory-centric technologies for analytics as well.

When one connects the dots, I think three major points jump out:

  1. There’s a lot more to high-end OLTP than relational database management.
  2. Oracle is determined to be the leader in as many of those areas as possible.
  3. This all fits the market disruption narrative.

I write about Point #1 all the time. So this time around let me expand a little more on #2 and #3.
Read more
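
For anyone who hasn’t run into object/data caching before, the core idea is a read-through cache sitting between the application and the database: ask memory first, go to the database only on a miss. The Python below is a generic sketch of that idea, not Tangosol’s actual API; real products add partitioning across nodes, eviction, coherency, and write-behind.

```python
from typing import Any, Callable, Dict, Hashable

class ReadThroughCache:
    """Generic read-through object cache: check memory first, load on miss."""
    def __init__(self, loader: Callable[[Hashable], Any]) -> None:
        self.loader = loader            # e.g. a function that queries the database
        self.store: Dict[Hashable, Any] = {}

    def get(self, key: Hashable) -> Any:
        if key not in self.store:       # miss: fall through to the backing store
            self.store[key] = self.loader(key)
        return self.store[key]          # hit: served from memory, no DB round trip

    def put(self, key: Hashable, value: Any) -> None:
        self.store[key] = value         # a real cache would also write through/behind

# Usage (load_account is a hypothetical database lookup function):
# cache = ReadThroughCache(load_account)
# acct = cache.get("ACCT-42")    # first call hits the database
# acct = cache.get("ACCT-42")    # second call is served from memory
```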
