Unreliable web MySQL application (Technorati/WordPress)
Technorati yesterday exposed an application error, to wit (in what presumably should be a blog content region): Read more
| Categories: MySQL | 6 Comments |
Response to Rita Sallam of Oracle
In a comment thread on Seth Grimes’ blog, Rita Sallam of Oracle engaged in a passionate defense of her data warehousing software. I’d like to take it upon myself to respond to a few of her points here. Read more
| Categories: Benchmarks and POCs, Clustering, Data warehousing, Oracle, Parallelization | 10 Comments |
Oracle Optimized Warehouse Initiative
Oracle’s response to data warehouse appliances — and to IBM’s BCUs (Balanced Configuration Units) — so far is the Oracle Optimized Warehouse Initiative (OOW, not to be confused with Oracle Open World). A small amount of information about Oracle Optimized Warehouse can be found on Oracle’s website. Another small amount can be found in this recent long and breathless TDWI article, full of such brilliancies as attributing to the data warehouse appliance vendors the “claim that relational databases simply aren’t cut out for analytic workloads.” (Uh, what does he think they’re running — CODASYL DBMS?)
So far as I can tell, what Oracle Optimized Warehouse — much like IBM’s BCU — boils down to is the same old Oracle DBMS, but with recommended hardware configurations and tuning parameters. Thus, a lot of the hassle is taken out of ordering and installing an Oracle data warehouse, which is surely a good thing. But I doubt it does much to solve Oracle’s problems with price, price/performance, or the inevitable DBA hassles that come with a poorly-performing DBMS.
| Categories: Data warehouse appliances, Data warehousing, Oracle | 3 Comments |
Who is doing what in XML data management these days?
A comment thread to a post on a different subject has opened up a discussion of XML storage. Frankly, I haven’t kept up with my briefings on the subject, in part because XML support hasn’t proved to be very important yet to the big DBMS vendors, somewhat to my surprise. When last I looked, the situation wasn’t much different from what it was back in November, 2005. Unless I’ve missed something (and please tell me if I have!), here’s what’s going on: Read more
| Categories: IBM and DB2, Intersystems and Cache', MarkLogic, Microsoft and SQL*Server, Oracle, Structured documents | 7 Comments |
Oracle’s hefty price increases
Jeff Jones of IBM wrote in to point out that Oracle is slathering on the price increases. I quote: Read more
| Categories: Dataupia, Emulation, transparency, portability, EnterpriseDB and Postgres Plus, Oracle | 5 Comments |
Derek Rodner blasts ANTs Software
Derek Rodner got snarky and blasted ANTs Software. Highlights include (emphasis mine):
I have never seen more thinly veiled attempts to make themselves bigger than they are. … In 2005, they did almost a half million dollars in revenue. That’s right, I said a half million, or $467,000 to be exact. In 2006, it got worse at $288,000 in revenue and last year they did $360,000. Yet, they continue to drone on about their “consortium” which, from the outside simply looks like a beta program. Its no consortium. … And, they continue to mention a major deal with IBM that COULD be worth millions over time. You can read about it in every SEC filing. But, it has never materialized. … They announced a major Oracle partnership, but Oracle never acknowledges their existence. I think they simply signed up for the partner program at oracle and paid the $1500. … Sybase is paying them $1.4 million to do whatever they want with the entire product line from ANTs. … This means that Sybase can do whatever they want with the product, including reselling it without paying another dime to ANTs.
| Categories: ANTs Software | 6 Comments |
Detailed analysis of Perst and other in-memory object-oriented DBMS
Dan Weinreb — inspired by but not linking to my recent short post on McObject’s object-oriented in-memory DBMS Perst — has posted a detailed discussion of Perst on his own blog. For context, he compares it briefly to analogous products, most especially Progress’s ObjectStore, of which Dan was the chief architect.
Dan’s analysis was based on documentation and general sleuthing (he figured out whom McObject got Perst from), rather than hands-on experience, so performance figures and the like aren’t validated. Still, if you’re interested in such technology, it’s a fascinating post.
| Categories: In-memory DBMS, McObject, Memory-centric data management, Object | Leave a Comment |
Open source in-memory DBMS
I’ve gotten email about two different open source in-memory DBMS products/projects. I don’t know much about either, but in case you care, here are some pointers to more info.
First, the McObject guys — who also sell a relational in-memory product — have an object-oriented, apparently Java-centric product called Perst. They’ve sent over various press releases about same, the details of which didn’t make much of an impression on me. (Upon review, I see that one of the main improvements they cite in Perst 3.0 is that they added 38 pages of documentation.)
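If you haven’t seen this style of product, here’s a minimal sketch of what using an embedded, Java-centric object database generally looks like: application objects go straight into an in-process store, no SQL involved. To be clear, the class and method names below (EmbeddedStore, persist, queryByKey) are my own illustration, not Perst’s actual API.

```java
// Illustrative sketch of an embedded, Java-centric object store.
// All names here are invented for the example, not taken from Perst.
import java.util.Map;
import java.util.TreeMap;

class Account {                        // an ordinary application object
    final String id;
    long balanceCents;
    Account(String id, long balanceCents) {
        this.id = id;
        this.balanceCents = balanceCents;
    }
}

class EmbeddedStore {
    // Data lives in the JVM heap; a real product would add persistence
    // (snapshots or a log) and richer indexing than this simple map.
    private final Map<String, Account> byId = new TreeMap<>();

    void persist(Account a) { byId.put(a.id, a); }
    Account queryByKey(String id) { return byId.get(id); }
}

public class PerstStyleDemo {
    public static void main(String[] args) {
        EmbeddedStore store = new EmbeddedStore();
        store.persist(new Account("acct-1", 12_500));
        System.out.println(store.queryByKey("acct-1").balanceCents);
    }
}
```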
Second, I just got email about something called CSQL Cache. You can read more about CSQL Cache here, if you’re willing to navigate some fractured English. CSQL’s SourceForge page is here. My impression is that CSQL Cache is an in-memory DBMS focused on, you guessed it, caching. It definitely seems to talk SQL, but possibly its native data model is of some other kind (there are references both to “file-based” and “network”.)
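If CSQL Cache is what its name suggests, the core idea would be the usual read-through caching pattern: serve lookups from RAM when possible, fall back to the backend DBMS on a miss, and remember the result. Here’s a minimal sketch of that pattern; the names are mine, and nothing below is taken from CSQL’s actual interfaces.

```java
// Minimal sketch of the read-through caching pattern a SQL-speaking cache
// provides in front of a disk-based DBMS. Names are illustrative only.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class ReadThroughCache {
    private final Map<String, String> inMemory = new HashMap<>();
    private final Function<String, String> backendLookup; // stands in for the real DBMS

    ReadThroughCache(Function<String, String> backendLookup) {
        this.backendLookup = backendLookup;
    }

    String get(String key) {
        // Serve from RAM when possible; otherwise fetch from the backend
        // and remember the result for subsequent reads.
        return inMemory.computeIfAbsent(key, backendLookup);
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        ReadThroughCache cache =
            new ReadThroughCache(key -> "row-for-" + key); // fake backend query
        System.out.println(cache.get("customer:42")); // miss: goes to the backend
        System.out.println(cache.get("customer:42")); // hit: served from memory
    }
}
```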
| Categories: Cache, DBMS product categories, In-memory DBMS, McObject, Memory-centric data management, Object, OLTP, Open source | 5 Comments |
ANTs bails out of the DBMS market
ANTs Data Server — i.e., the ANTs DBMS — has been sold off to a company called 4Js. It is now to be called Genero DB. Actually, 4Js has been selling or working on a version of the product called Genero DB since 2006, specifically an Informix-compatible one.
I’m not totally clear on why an Informix-compatible DBMS is needed in a world that already has Informix SE, but maybe IBM is overcharging for maintenance even on the low-end version of the product.
Meanwhile, ANTs, which had originally tried to get enterprises to migrate away from Oracle, is now focused on middleware called the ANTs Compatibility Server to help them migrate to Oracle, specifically/initially from Sybase.
| Categories: ANTs Software, Emulation, transparency, portability, IBM and DB2, Oracle, Sybase | 2 Comments |
Yahoo scales its web analytics database to petabyte range
Information Week has an article with details on what sounds like Yahoo’s core web analytics database. Highlights include:
- The Yahoo web analytics database is over 1 petabyte. They claim it will be in the tens of petabytes by 2009.
- The Yahoo web analytics database is based on PostgreSQL. So much for MySQL fanboys’ claims of Yahoo validation for their beloved toy … uh, let me rephrase that. The highly-regarded MySQL, although doing a great job for some demanding and impressive applications at Yahoo, evidently wasn’t selected for this one in particular. OK. That’s much better now.
- But the Yahoo web analytics database doesn’t actually use PostgreSQL’s storage engine. Rather, Yahoo wrote something custom and columnar.
- Yahoo is processing 24 billion “events” per day. The article doesn’t clarify whether these are sent straight to the analytics store, or whether there’s an intermediate storage engine. Most likely the system fills blocks in RAM and then just appends them to the single persistent store (a toy sketch of that pattern follows this list). If commodity boxes occasionally crash and lose a few megs of data — well, in this application, that’s not a big deal at all.
- Yahoo thinks commercial column stores aren’t ready yet for more than 100 terabytes of data.
- Yahoo says it got great performance advantages from a custom system by optimizing for its specific application. I don’t know exactly what that would be, but I do know that database architectures for high-volume web analytics are still in pretty bad shape. In particular, there’s no good way yet to analyze the specific, variable-length paths users take through websites.
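To illustrate the ingest pattern I’m guessing at in the 24-billion-events bullet above, here’s a toy sketch: events accumulate in an in-memory block, and full blocks are appended to a single persistent store, with the occasional lost block shrugged off. This is speculation about the general pattern, not a description of Yahoo’s actual system.

```java
// Toy sketch of the speculated ingest path: buffer events in a RAM block,
// append full blocks to a single persistent store with one sequential write.
// If a box dies before a block is flushed, those events are simply lost,
// which this kind of analytics application can tolerate.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class AppendOnlyEventWriter {
    private static final int BLOCK_SIZE = 4 * 1024 * 1024;    // 4 MB blocks
    private final ByteBuffer block = ByteBuffer.allocate(BLOCK_SIZE);
    private final FileOutputStream store;

    public AppendOnlyEventWriter(String path) throws IOException {
        this.store = new FileOutputStream(path, true);         // append-only file
    }

    // Events are assumed to be much smaller than a block.
    public void record(byte[] event) throws IOException {
        if (block.remaining() < event.length) {
            flush();                                            // block full: append to store
        }
        block.put(event);                                       // otherwise just buffer in RAM
    }

    public void flush() throws IOException {
        store.write(block.array(), 0, block.position());
        store.getFD().sync();                                   // one sequential write per block
        block.clear();
    }

    public static void main(String[] args) throws IOException {
        AppendOnlyEventWriter writer = new AppendOnlyEventWriter("events.dat");
        writer.record("page_view\tuser=42\turl=/home\n".getBytes());
        writer.flush();
    }
}
```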
