A deeper dive into Apama
My recent non-technical Apama briefing has now had a much more technical sequel, with charming founder and former Cambridge professor John Bates. He still didn’t fully open the kimono – trade secrets and all that — but here’s the essence of what’s going on.
Complex event/stream processing (CEP) is all about looking for many patterns at once. Reality – the stream(s) of data – is checked against these patterns for matches. In Apama, these patterns are kept in a kind of tree – they call it a hypertree — and John says the work to check them is only logarithmic in the number of patterns.
Since patterns commonly have multiple parts — and usually also take time to unfold — what really goes on is that partial matches are found, after which what’s being matched against is the REMAINDER of the pattern. Thus, there’s constant pruning and rebalancing of the tree. What’s more, a large fraction of all patterns – at least in the financial trading market — involve a short time window, which again creates a need for ongoing, rapid tree modification. Read more
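The partial-match-plus-pruning idea above can be sketched in a few lines of Python. This is a toy illustration only: it scans patterns linearly rather than doing the logarithmic hypertree lookup Bates describes, and all names and the API are invented. Each pattern is a sequence of per-event predicates that must fire in order within a time window; the matcher keeps partial matches and prunes those whose window has expired.

```python
class Pattern:
    """A toy multi-step pattern: a sequence of predicates that must
    match successive events within a time window (hypothetical API;
    Apama's actual hypertree machinery is proprietary)."""
    def __init__(self, steps, window):
        self.steps = steps    # one predicate per step, matched in order
        self.window = window  # seconds allowed for the full match

class Matcher:
    def __init__(self, patterns):
        self.patterns = patterns
        self.partial = []     # (pattern, next_step_index, start_time)

    def feed(self, event, now):
        matches = []
        # Prune partial matches whose time window has expired --
        # the "constant pruning" described above.
        self.partial = [(p, i, t0) for (p, i, t0) in self.partial
                        if now - t0 <= p.window]
        # Advance partial matches: once a prefix has matched, only the
        # REMAINDER of the pattern is checked against new events.
        advanced = []
        for (p, i, t0) in self.partial:
            if p.steps[i](event):
                if i + 1 == len(p.steps):
                    matches.append(p)
                else:
                    advanced.append((p, i + 1, t0))
            else:
                advanced.append((p, i, t0))
        self.partial = advanced
        # Try to start new partial matches at step 0.
        for p in self.patterns:
            if p.steps[0](event):
                if len(p.steps) == 1:
                    matches.append(p)
                else:
                    self.partial.append((p, 1, now))
        return matches
```

For example, a two-step pattern "price rises above 100, then falls below 90 within 10 seconds" completes only if the second event arrives before the window closes; otherwise the partial match is silently pruned.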
| Categories: Memory-centric data management, Progress, Apama, and DataDirect, Streaming and complex event processing (CEP) | 4 Comments |
The Coral8 story
Complex event/stream processing vendor Coral8 raised its hand and offered a briefing – non-technical, alas, but at least it was a start. Here are some of the highlights: Read more
An era of easier database portability?
More and more, I find myself addressing questions of database portability and transparency, most particularly in the cases of EnterpriseDB, Ants Software, and now also Dataupia. None of those three efforts is very large yet, but so far I’d rate their respective buzzes to be very encouraging in the case of EnterpriseDB, non-discouraging or better in the case of Ants, and too early to judge for Dataupia. On the whole, it definitely seems like a matter worthy of attention.
With that as backdrop, where is all this compatibility/portability/transparency stuff going to lead? Read more
| Categories: ANTs Software, Dataupia, Emulation, transparency, portability, EnterpriseDB and Postgres Plus, Progress, Apama, and DataDirect | 2 Comments |
Filemaker for composite application development
It’s not accurate to judge a product by its most obnoxious or least clueful partisans. Hence, even though some insult-spewers take umbrage at an accurate description of FileMaker’s capabilities,* it wouldn’t be fair to write the product off entirely.
*Mercifully, none of said insult-spewers seems to actually work at the company. I must confess that this makes it easier for me to take the (somewhat) high road here.
Possibly due to an actual understanding of enterprise technology, Tim Dietrich has weighed in on the discussion from a different angle. Here’s a quote in which he gives an example of very successful FileMaker use:
Read more
| Categories: EAI, EII, ETL, ELT, ETLT, FileMaker | 2 Comments |
Dataupia – low-end data warehouse appliances
It’s unfortunate that Dataupia has concepts like “Utopia” and “Satori” in its marketing, as those serve to obscure what the company really offers – data warehouse appliances designed for the market’s low end. Indeed, it seems that they’re currently very low-end, because they were just rolled out in May and are correspondingly immature.
Basic aspects include:
- Type 1 appliances, which most other data warehouse appliance vendors (Teradata excepted) have moved away from. And there actually seems to be very little special about the hardware design to take advantage of the proprietary opportunity.
- Apparently limited redistribution of intermediate query result sets – i.e., the “fat head” architecture most competitors have moved away from. But it’s not pure fat-head; there’s some data redistribution.
- General lack of partnerships with the obvious software players (but they’re working on that).
- Low price point ($19,500 per 2-terabyte module).
Beyond price, Dataupia’s one big positive differentiation vs. alternative products is that you don’t write SQL directly to a Dataupia appliance. Rather, you talk to it through the federation capability in your big-brand DBMS, such as Oracle or SQL Server. Benefits of this approach include: Read more
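The routing involved can be illustrated with a minimal Python sketch. All names here are invented: the point is simply that the application only ever talks to its primary DBMS, and requests for appliance-resident tables get forwarded behind the scenes by the federation layer.

```python
class Engine:
    """Stands in for any query engine: the primary DBMS or the appliance."""
    def __init__(self, tables):
        self.tables = tables            # {table_name: list of rows}

    def scan(self, table):
        return self.tables[table]

class FederatingDBMS:
    """A toy stand-in for the big-brand DBMS's federation capability
    (hypothetical interface, not any vendor's actual API)."""
    def __init__(self, local, remote, remote_tables):
        self.local = local              # the primary engine
        self.remote = remote            # the appliance
        self.remote_tables = set(remote_tables)

    def scan(self, table):
        # The application only ever calls the primary DBMS; routing to
        # the appliance happens here, invisibly to the caller.
        engine = self.remote if table in self.remote_tables else self.local
        return engine.scan(table)
```

So a query against `sales_history` would be served by the appliance, and one against `customers` by the primary DBMS, without the application distinguishing between the two.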
| Categories: Data warehouse appliances, Data warehousing, Dataupia, Emulation, transparency, portability | 3 Comments |
DATAllegro heads for the high end
DATAllegro CEO Stuart Frost called in for a prebriefing/feedback/consulting session. (I love advising my DBMS vendor clients on how to beat each other’s brains in. This was even more fun in the 1990s, when combat was generally more aggressive. Those were also the days when somebody would change jobs to an arch-rival and immediately explain how everything they’d told me before was utterly false …)
While I had Stuart on the phone, I did manage to extract some stuff I’m at liberty to use immediately. Here are the highlights: Read more
| Categories: Data warehouse appliances, Data warehousing, Database compression, DATAllegro, Greenplum, Netezza, Teradata | 4 Comments |
EnterpriseDB has a huge partisan in FTD
The Register has a rip-roaring story on a (currently partial) conversion from Oracle to EnterpriseDB. Basically, FTD is royally pissed off at Oracle, and EnterpriseDB stepped in with a very fast conversion.
Apparently, FTD decided they needed to Do Something after a Valentine’s Day meltdown, and the project was completed on EnterpriseDB in time for Mother’s Day.
One note of caution: When a user supports a vendor’s marketing this emphatically, that user usually has gotten nice breaks on price and/or service. Your mileage may vary. On the other hand, EnterpriseDB is still a small enough company that, if you want them to love you to death, you can be pretty well assured that you’re important enough to them that they’ll do so.
| Categories: Emulation, transparency, portability, EnterpriseDB and Postgres Plus, Mid-range, OLTP, Oracle | 2 Comments |
StreamBase rebuts
In my post Monday about Apama, I complained that StreamBase hadn’t offered a rebuttal to some of Apama’s claims. This has now been fixed. 🙂 Bill Hobbib, StreamBase’s VP of Marketing, wrote in. Part of what he had to say was the following.
Adapters to Data Feeds
Your blog comment that adapters don’t seem like a key competitive differentiator is accurate, and since adapters are so straightforward to develop with StreamBase as part of a customer engagement, we’ve never found them to be a key competitive differentiator. The comment by a competitor that their advantage over StreamBase comes from their having developed more adapters suggests they cannot distinguish themselves based on the other functional capabilities that are important to customers. In reality, our speed/performance and scalability are orders of magnitude superior to competitors, as is the speed with which StreamBase applications are developed, deployed, and modified when business needs change. (If it were easy to develop applications with certain competitive systems, then one might assume they would make free evaluation versions of their product available for download from their websites!)
That being said, StreamBase offers adapters to a broad array of data feeds. Most of these are offered out-of-the-box by StreamBase, including the following:
* Financial Market Data: processes data from Reuters® RMDS™ and Reuters Triarch™
* TIBCO® Rendezvous™: converts Rendezvous messages into StreamBase tuples and vice versa.
* StreamBase Adapter for JDBC: connects StreamBase to enterprise databases, allowing submission of SQL queries to external resources such as IBM® DB2™, Oracle®, Microsoft® SQL Server™, and Sybase®.
* StreamBase Adapter for JMS: integrates StreamBase with any JMS-compliant message bus.
* StreamBase Adapter for Microsoft Excel™: allows applications to publish data to Excel or read data from Excel.
* StreamBase CSV Adapters: allow applications to read data from, and write data to, comma-separated value (CSV) files.
* StreamBase SMTP adapter: taps into the IP stack on a running system to process live data, converts the IP packets into a TCP data stream, or reads IP packets from captured files.
* StreamBase XML Adapter: streams XML-formatted data records into and out of StreamBase applications.
We also can connect to financial exchanges either using our own adapters or through a third-party partnership. Below you’ll find a listing of those.
| Categories: Memory-centric data management, Progress, Apama, and DataDirect, StreamBase, Streaming and complex event processing (CEP) | Leave a Comment |
Progress Apama
I finally got my promised briefing with Progress Apama. Unfortunately, nobody particularly technical was able to attend, but I came away with a better understanding even so.
Unlike StreamBase or Truviso, Apama has a rules-based architecture. In essence, the rules engine maintains state of various kinds, and matches that state against desired patterns, called “scenarios.” They can handle 100s or possibly even 1000s of scenarios at once. Read more
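The maintain-state-then-match loop described above can be sketched as follows. This is a rough illustration under invented names (the state fields and scenario here are hypothetical, not Apama's actual scenario language): the engine folds each event into its state, then checks that state against every registered scenario.

```python
class RulesEngine:
    """A toy rules-based CEP engine: accumulates state from the event
    stream and matches it against named "scenarios" after each event."""
    def __init__(self):
        self.state = {"last_price": None, "volume_today": 0}
        self.scenarios = {}        # name -> predicate over state

    def add_scenario(self, name, predicate):
        self.scenarios[name] = predicate

    def on_event(self, price, volume):
        # Update the state the rules engine maintains...
        self.state["last_price"] = price
        self.state["volume_today"] += volume
        # ...then match that state against every registered scenario,
        # returning the names of those that fire.
        return [name for name, pred in self.scenarios.items()
                if pred(self.state)]
```

With hundreds or thousands of scenarios registered, the interesting engineering is in making that matching step cheap; this sketch just loops over them.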
| Categories: Memory-centric data management, Progress, Apama, and DataDirect, Streaming and complex event processing (CEP) | 2 Comments |
www.monashadvantage.com is down
The Monash Advantage site is down. It should be back up in a few days.
I had been intending to send an email blast to registered site-accessers anyway. Now I’ll do that for sure. If you’re a Monash Advantage member, details and explanations will be in that email.
I apologize for the inconvenience.
| Categories: About this blog | Leave a Comment |
