SAS gets close to the database
One of the big announcements at the Teradata user conference this week (confusingly named “Partners”) is SAS integration. Now, SAS is integrating with other MPP data warehouse appliance vendors as well, but it’s likely that the Teradata integration is the most advanced. For example, one customer proof point offered was an insurer that used this capability to reevaluate its risk profile at high speed after Hurricane Katrina. I doubt any of the other SAS/DBMS integrations I know of were in customer hands a year ago.
There are three still-open questions that I hope to address over the next couple of days.
| Categories: Analytic technologies, Data warehouse appliances, Data warehousing, Predictive modeling and advanced analytics, SAS Institute, Teradata | Leave a Comment |
The era of memory-centric BI may have finally started
SAP is acquiring Business Objects, a deal that could put SAP’s memory-centric BI Accelerator in front of a much larger BI customer base. There’s nothing inherent in BI Accelerator’s design that ties it to NetWeaver, SAP star schema InfoCubes, or any other particular current implementation detail. So BI Accelerator could become a lot more than an afterthought.
Combine that with Cognos’s acquisition of Applix and the continued success of upstart QlikView, and we could finally see a general memory-centric BI boom.
Maybe. There have been a lot of false alarms before.
| Categories: Analytic technologies, Business intelligence, Business Objects, Cognos, Memory-centric data management, QlikTech and QlikView, SAP AG | 3 Comments |
The four horsemen of data warehousing
I’ve been talking a lot to text mining vendors this week, as per a series of posts over on Text Technologies. Specifically, I’ve focused on the two with exhaustive extraction strategies, namely Attensity and Clarabridge. (Exhaustive extraction is Attensity’s term for separating the linguistic-analysis part of text mining from the DBMS-based BI/analytics part.)
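For concreteness, here’s a toy Python sketch of what that separation can look like. The regex is a crude stand-in for real linguistic analysis (Attensity and Clarabridge obviously do vastly more than this), and the table layout is my invention, not either vendor’s:

```python
import re
import sqlite3

# Step 1: the linguistic-analysis side. A toy regex stands in for a real
# NLP pipeline here; actual extraction engines are far more sophisticated.
reviews = [
    "The checkout process was slow and the support staff was rude.",
    "Shipping was fast; the support staff was helpful.",
]
facts = []
for doc_id, text in enumerate(reviews):
    for m in re.finditer(
            r"(checkout process|support staff|shipping) was (\w+)", text, re.I):
        facts.append((doc_id, m.group(1).lower(), m.group(2).lower()))

# Step 2: the DBMS-based BI/analytics side. The extracted facts land in an
# ordinary relational table, where plain SQL (and any BI tool) takes over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE extracted (doc_id INTEGER, topic TEXT, opinion TEXT)")
conn.executemany("INSERT INTO extracted VALUES (?, ?, ?)", facts)
for row in conn.execute(
        "SELECT topic, opinion, COUNT(*) FROM extracted GROUP BY topic, opinion"):
    print(row)
```

The point is simply that once the extracted facts are rows in a relational table, ordinary BI and analytics tooling can take over from there.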
So I asked each of Attensity and Clarabridge the side question of which data warehouse software or appliances they were seeing. The answers were almost identical: Oracle, Microsoft SQL Server, Teradata, and Netezza. One also mentioned MySQL and two HP prospects, but the HP sites were running NonStop SQL, not NeoView. Amazingly, there were no mentions of DB2. There also weren’t any mentions of the smaller specialist startups, such as DATAllegro, Greenplum, or Vertica.
SAP takes back MaxDB from MySQL
Way back in January 2006, I wrote that MaxDB was not getting merged into MySQL. Given that, it makes sense for SAP to take back control of the product. As The Reg reports, that’s exactly what’s happening.
The bigger question is — how’s MySQL’s SAP certification coming along? Whether or not MySQL gets SAP-certified and included in the SAP product catalog will be a huge indicator of whether it’s ready for OLTP prime time.
Anybody want to place bets on which midrange OLTP DBMS gets certified for SAP first, MySQL or EnterpriseDB? MySQL has a large head start, but if my clients at EnterpriseDB have their priorities straight, they might wind up lapping MySQL even so.
| Categories: EnterpriseDB and Postgres Plus, Mid-range, MySQL, OLTP, SAP AG | 4 Comments |
Alpha Five claims to clobber FileMaker 9 on SQL performance
The Alpha Five guys decided to test the performance of their software vs. FileMaker on queries to a foreign database, and published the results. Given that Alpha Five designed and performed the tests, I bet you can guess who won.
From a quick read, it seems all the tests were single-table queries, and that some or all were designed to highlight a specific design choice in FileMaker: doing certain work itself when it would be more efficient to push that work down to the foreign DBMS.
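To make the distinction concrete, here’s a minimal Python sketch, with SQLite standing in for the foreign database; the table and queries are hypothetical, not taken from Alpha’s actual tests:

```python
import sqlite3

# Stand-in for the "foreign" database: one table with plenty of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, "EAST" if i % 2 else "WEST", i * 1.5) for i in range(100_000)],
)

# FileMaker-style approach (as described): pull all the rows over the wire,
# then do the filtering and aggregation in the client itself.
rows = conn.execute("SELECT region, amount FROM orders").fetchall()
client_total = sum(amount for region, amount in rows if region == "EAST")

# Push-down approach: let the foreign DBMS do the work and ship back
# a single row instead of 100,000.
(pushed_total,) = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'EAST'"
).fetchone()

assert abs(client_total - pushed_total) < 1e-6
```

Shipping one aggregated row instead of 100,000 raw ones is the whole ballgame, and presumably roughly what Alpha Five’s tests were engineered to showcase.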
| Categories: Alpha Five, FileMaker | 6 Comments |
Calpont finally has a multipage website
Calpont’s website is finally more or less real. It still doesn’t say much, except that the company is in alpha test with a Type II appliance, and that the product has a columnar DBMS architecture and Oracle transparency, with DB2 transparency promised. Oh yes; it has 32 employees. The “Customer” tab doesn’t list any customers, but I guess they saved site design money by having it all ready to go when that situation changes.
Philip Howard’s recent article has a lot more meat than that, including the perplexing bit of info that Calpont is starting out with a shared-everything architecture. Based on that, as well as the company’s prior technical efforts, we can probably conclude they’re focused on rather small warehouses.
| Categories: Analytic technologies, Calpont, Data warehouse appliances, Data warehousing, Emulation, transparency, portability | Leave a Comment |
Oracle sincerely flatters DATAllegro
Actually, I’m kidding with the post title; I doubt that Oracle’s new deal with DATAllegro partners Dell and EMC has much to do with DATAllegro at all. Rather, I think it’s an example of a trend I’m also sensing* from other major hardware vendors — doing deals with multiple data warehouse software suppliers to cover different hardware size ranges. This just happens to be the first one to be announced.
*How’s that for a nice, vague euphemism?
DATAllegro is targeted at warehouses sized, at a minimum, in the tens of terabytes of user data. Oracle’s technology works well enough up into at least the multi-terabyte range (unless you’re looking to get the best possible price and/or performance on your system), but beyond that things start getting dicey. So there isn’t a lot of overlap between the two Dell/EMC offerings.
| Categories: Analytic technologies, Data warehouse appliances, Data warehousing, DATAllegro, EMC, Oracle | 1 Comment |
Database management system architecture implications of an eventual move to solid-state memory
I’ve pointed out in the past that solid-state/Flash memory could be a good alternative to hard disks in PCs and enterprise systems alike. Well, when that happy day arrives, what will be some of the implications for database management software architecture?
- Compression will be even more important. Cost per terabyte will spike for whatever storage moves from disk to solid-state.
- The sequential-rather-than-random reading strategy of data warehouse appliance makers may become less relevant. One sure way to get rid of the disk-speed bottleneck is to get rid of the disks.
- DBMSs will need to write data as rarely as possible, because solid-state memory tends to wear out if you keep writing over it. Assuming that problem gets better over time but isn’t totally solved (if it doesn’t get better, this whole discussion is moot), architectures with fewer writes are on the whole better; see the sketch after this list.
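To illustrate that third point, here’s a hypothetical Python sketch of write coalescing: buffer updates in RAM, then flush them as occasional appended batches instead of rewriting storage in place on every update. Real storage engines and flash translation layers are vastly more sophisticated; this just shows the arithmetic of logical versus physical writes:

```python
import json
import os
import tempfile

class CoalescingWriter:
    """Buffer updates in RAM; flush them as one appended batch.

    Hypothetical sketch only. A real engine would use a write-ahead log,
    and the flash translation layer would handle wear-leveling.
    """

    def __init__(self, path, flush_every=1000):
        self.path = path
        self.flush_every = flush_every
        self.pending = {}          # key -> latest value; overwrites coalesce
        self.physical_writes = 0   # batches actually sent to "flash"

    def put(self, key, value):
        self.pending[key] = value  # updating a hot key costs no extra writes
        if len(self.pending) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        with open(self.path, "a") as f:   # append; never rewrite in place
            f.write(json.dumps(self.pending) + "\n")
        self.physical_writes += 1
        self.pending.clear()

path = os.path.join(tempfile.mkdtemp(), "log.jsonl")
w = CoalescingWriter(path)
for i in range(10_000):
    w.put(i % 500, i)   # 500 hot keys, updated over and over in RAM
w.flush()
print("logical updates: 10000 / physical write batches:", w.physical_writes)
```

Ten thousand logical updates collapse into a single physical batch here, and that kind of ratio is exactly what a flash-friendly DBMS architecture would aim for.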
| Categories: Data warehouse appliances, Data warehousing, Database compression, Netezza, Solid-state memory, Theory and architecture | Leave a Comment |
A negative take on QlikView
Apparently, one user isn’t happy with QlikView at all. The main problem seems to be, in effect, frequently repeated bulk loads from disk into the in-memory structures. (Obviously, at least absent more information, that could be an artifact of a badly botched installation rather than a fundamental problem with the technology itself.) He’s also not at all enamored of QlikView’s app dev tools.
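I don’t know what that installation actually does, but the complaint sounds like full reloads being used where incremental ones would do. Here’s a hypothetical Python sketch of the difference, with SQLite standing in for the source system and a plain dict standing in for the in-memory structures:

```python
import sqlite3

# Stand-in source database with 50,000 rows already in it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(50_000)])

cache = {}        # the in-memory structure the BI layer actually queries
high_water = -1   # highest id pulled so far (for the incremental path)

def full_reload():
    """The complained-about pattern: re-copy everything on every refresh."""
    cache.clear()
    cache.update(conn.execute("SELECT id, payload FROM events").fetchall())

def incremental_refresh():
    """Pull only the rows added since the last refresh."""
    global high_water
    for rid, payload in conn.execute(
            "SELECT id, payload FROM events WHERE id > ?", (high_water,)):
        cache[rid] = payload
        high_water = max(high_water, rid)

full_reload()              # every refresh like this moves all 50,000 rows
high_water = max(cache)    # switch strategies: remember where we left off
conn.execute("INSERT INTO events VALUES (50000, 'row-50000')")
incremental_refresh()      # this refresh moves exactly one row
print(len(cache), high_water)   # -> 50001 50000
```

If every refresh looks like full_reload(), the unhappiness is easy to understand.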
| Categories: Analytic technologies, Business intelligence, Memory-centric data management, QlikTech and QlikView | 2 Comments |
Four anonymous Netezza fans
I just found a blog post asking about Netezza that elicited quite a few responses, including at least four that purported to be from people whose companies had selected Netezza in a POC (Proof Of Concept) bake-off. One says Netezza was super-fast, even compared to DATAllegro, and that DATAllegro’s professional services were lacking. One says Netezza is 50X faster than traditional alternatives on some queries, but up to 2X slower on some others. Two others just expressed love (or at least commitment) without giving details.
I haven’t yet looked through the rest of the responses in the thread.
