EnterpriseDB unveils Postgres Plus
EnterpriseDB is making a series of moves and announcements. Highlights include:
- Renaming/repositioning the product as “Postgres Plus.” The free product is now Postgres Plus, while the version you pay EnterpriseDB for is now Postgres Plus Advanced Server.
- Repackaging the products, so that Postgres Plus Advanced Server is a strict superset of Postgres Plus.
- New features added to Postgres Plus Advanced Server.
- Features newly migrated from Advanced Server down to Postgres Plus.
- A strategic investment by IBM.
- Stressing Postgres in EnterpriseDB marketing, and dropping the tag-line defining themselves as “the Oracle-compatible database company.”
So far as I can tell, most of the technical differences between Advanced Server and regular Postgres Plus lie in three areas: Read more
| Categories: Cache, Emulation, transparency, portability, EnterpriseDB and Postgres Plus, Mid-range, MySQL, OLTP, Open source, PostgreSQL |
Cast Iron Systems focuses on SaaS data integration
When I wrote about data integration vendor Cast Iron Systems a year ago, its core message was “simplicity, simplicity, simplicity.” Supporting points included:
- An appliance delivery format.
- Lots of heuristics for automatic mapping and quick set-up. E.g., Cast Iron claims that 70% of a typical SAP-Salesforce.com connection can be done straight out of the box.
- The absence of data cleaning/transformation features that might complicate things.
Cast Iron still believes in all that.
Even so, its messaging has changed a bit. Cast Iron now bills itself, in the first sentence of its press release boilerplate, as “the fastest growing SaaS integration appliance vendor.” And when I talked with marketing chief Simon Peel today, the only use cases we discussed were connections between SaaS and on-premises apps. Read more
| Categories: Cast Iron Systems, Cloud computing, EAI, EII, ETL, ELT, ETLT, Informatica, Software as a Service (SaaS) |
CEP is entering BI
I talked with both Coral8 and Truviso this afternoon. They both have their financial services efforts, of course. Coral8 also continues to get business doing data reduction for sensor networks — mainly RFID and utilities, I think. Coral8 is working on some really cool and confidential other stuff as well.
But my biggest takeaway from this pair of calls was that Coral8 and Truviso are penetrating general BI. Read more
| Categories: Aleri and Coral8, Analytic technologies, Business intelligence, Memory-centric data management, Streaming and complex event processing (CEP), Truviso |
What to call CEP
It seems that the CEP folks are still concerned about what to call themselves. There really are only three choices:
- Complex event processing
- Event processing
- Event stream processing
“Stream processing” might once have been on the list, but it has too many other meanings, and “streaming” adds more meanings yet.
“Complex” has the virtue of inertia; CEP is the closest thing the category has to an agreed-upon name. But few people want to buy technology that describes itself as being “complex.” And in any case it’s not clear how complex many of those events are. “Event stream processing” isn’t terribly well established, and to some extent it runs afoul of the same ambiguities as “stream processing.” What’s worse, those names lead to four-word product category names. Who really wants to market or hear about “complex event processing engines” or “event stream processing platforms”?
So let’s just call the category “event processing” and have done with it, OK? Products can, if they want, be “event processing somethings.” Names like that wouldn’t be any more of a mouthful than “data warehouse appliance,” and the latter category is doing pretty well for itself.
Data warehousing with paper clips and duct tape
An interesting part of my conversation with Dataupia’s CTO John O’Brien came when we talked about data warehousing in general. On the one hand, he endorsed the view that using Oracle probably isn’t a good idea for data warehouses larger than 10 terabytes, with SQL Server’s limit being well below that. On the other hand, he said he’d helped build 50-60 terabyte warehouses in Oracle years ago.
The point is that to build warehouses that big in Oracle or other traditional DBMS, you have to pull out a large bag of tricks. Read more
| Categories: Analytic technologies, Data warehouse appliances, Data warehousing, Microsoft and SQL*Server, Oracle |
Dataupia catch-up
I had a catch-up phone meeting with Dataupia, since I hadn't spoken with the company since the middle of last year. Like several other companies in the data warehouse specialist market, Dataupia can be annoyingly secretive. On the plus side (and this is very refreshing), Dataupia doesn't seem to expect credit for accomplishments beyond those it's willing to provide actual evidence for.
What I’ve gleaned about Dataupia’s customer activity to date amounts to: Read more
| Categories: Analytic technologies, Data warehouse appliances, Data warehousing, Dataupia, Emulation, transparency, portability |
The core challenges of OLTP are changing
I wrote a few weeks ago about the H-Store project, which rejects a variety of assumptions underlying traditional OLTP database design. One of these is long transactions over open database connections. The idea is that the most demanding OLTP applications run on the Web, where abandonment is common, and hence the only sensible option is to break things up into simple chunks. Read more
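The chunking idea can be illustrated with a toy shopping-cart sketch (hypothetical code; Python with SQLite standing in for a real OLTP stack, and the cart schema is my own invention). Each user action is its own short, self-contained transaction, so nothing stays open while the database waits out the user's think time, and an abandoned session holds no locks:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cart (user_id TEXT, item TEXT)")

def add_item(user_id, item):
    # One self-contained transaction per user action: begin, write, commit.
    # The sqlite3 connection context manager commits on success,
    # rolls back on error.
    with conn:
        conn.execute("INSERT INTO cart VALUES (?, ?)", (user_id, item))

def checkout(user_id):
    # Another single-shot transaction: read the accumulated cart
    # state, then clear it, all inside one short commit scope.
    with conn:
        rows = conn.execute(
            "SELECT item FROM cart WHERE user_id = ?", (user_id,)
        ).fetchall()
        conn.execute("DELETE FROM cart WHERE user_id = ?", (user_id,))
    return [r[0] for r in rows]

add_item("alice", "book")   # transaction 1
add_item("alice", "lamp")   # transaction 2 -- the user could vanish here
items = checkout("alice")   # transaction 3
print(items)
```

If the user abandons the session after transaction 2, the only cost is a couple of stale cart rows to garbage-collect later, rather than an open connection and its locks.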
| Categories: Application areas, OLTP |
More Twitter weirdness
Twitter commonly has the problem of duplicate tweets. That is, if you post a message, it shows up twice. After a little while, the dupe disappears, but if you delete the dupe manually, the original is gone too.
I presume what’s going on is that tweets are cached, eventually batched to disk, and not always deleted from the cache until some time after they’re persisted. If you happen to check the page of your recent tweets in between, boom, you get two hits. But what I don’t understand is why the two versions have different timestamps.
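A minimal sketch of how such a write-behind cache could surface duplicates (a toy model of my guess above, not Twitter's actual architecture; the class and field names are invented):

```python
import time

class WriteBehindTimeline:
    """Toy model: tweets land in a cache, are batched to disk later,
    and are evicted from the cache some time after being persisted."""

    def __init__(self):
        self.cache = []   # recent, not-yet-evicted tweets
        self.disk = []    # persisted tweets

    def post(self, text):
        self.cache.append({"text": text, "ts": time.time()})

    def flush(self):
        # Batch cached tweets to disk. Note the persisted copy gets a
        # fresh timestamp, which would explain the mismatch observed.
        for t in self.cache:
            self.disk.append({"text": t["text"], "ts": time.time()})
        # Eviction is deferred, so the cache copies linger for a while.

    def evict(self):
        self.cache.clear()

    def recent(self):
        # A read between flush() and evict() sees both copies.
        return self.disk + self.cache

tl = WriteBehindTimeline()
tl.post("hello world")
tl.flush()                           # persisted, cache not yet evicted
texts = [t["text"] for t in tl.recent()]
print(texts)                         # ['hello world', 'hello world']
tl.evict()
print([t["text"] for t in tl.recent()])  # ['hello world']
```

Deleting the "dupe" in this model really would delete the one persisted row, which matches the behavior where removing the duplicate takes the original with it.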
Presumably, this could be explained at a MySQL User Conference session next month, one of whose topics will be “Intelligent caching strategies using a hybrid MemCache / MySQL approach.” I’m so glad they don’t use stupid strategies to do this … Read more
| Categories: Cache, MySQL, OLTP, Specific users |
IBM discontinues the solidDB MySQL engine
Last year, I thought that solidDB could at least potentially be an outstanding MySQL engine. But as per news posted on SourceForge last week, that’s not going to happen. At least, it’s not going to happen via any development efforts from IBM.
| Categories: IBM and DB2, Mid-range, MySQL, Open source, solidDB |
LISP humor
- Cartoon
- Song (previously posted)
- Poem
- Another cartoon (not particularly funny on its own) that appears to come between two of the above
