June 12, 2007

Thoughts on database management in role-playing games

I’ve just started a research project on the IT-like technology of games and virtual worlds, especially MMORPGs. My three recent posts on Guild Wars attracted considerable attention in GW’s community and elicited some interesting commentary, especially regarding the revelation of Guild Wars’ very simple database architecture. Specifically, pretty much all character information is banged into a BLOB or two and stored as a string of tokens, with little of the record-level detail one might expect. By way of contrast, Everquest is run on Oracle (and is being transitioned to EnterpriseDB), at least one console-based game maker uses StreamBase, and so on.
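To make the token-string idea concrete, here’s a minimal sketch in Python of what such a scheme might look like. The field names, delimiter, and token layout are all invented for illustration — this is not ArenaNet’s actual format.

```python
# Hypothetical "bag of tokens in a BLOB" serialization, as described above.
# The delimiter and field names are invented; ArenaNet's real format differs.

def serialize_character(char: dict) -> bytes:
    """Flatten character state into one token string for BLOB storage."""
    tokens = [f"{key}={value}" for key, value in sorted(char.items())]
    return ";".join(tokens).encode("utf-8")

def deserialize_character(blob: bytes) -> dict:
    """Parse the token string back into a dict. Note that the DBMS itself
    never sees individual fields, only the opaque BLOB."""
    char = {}
    for token in blob.decode("utf-8").split(";"):
        key, _, value = token.partition("=")
        char[key] = value
    return char

hero = {"name": "Devona", "level": 20, "gold": 5000}
blob = serialize_character(hero)        # what gets stored in the database
restored = deserialize_character(blob)  # what the game client gets back
```

The flip side, of course, is that any query spanning many characters first has to crack open every BLOB.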

Much of the attention has focused on the implications for the in-game economy – how can players buy and sell to their hearts’ content if there’s no transactional back-end? Frankly, I think that’s the least of the issues. For one thing, without a nice forms-based UI you probably won’t create enough transactions to matter, and integrating one into the game client isn’t trivial. For another, virtual items can be literally created and destroyed by the computer, with no negative effect on game play, which drastically reduces the integrity burdens the game would otherwise face.

Rather, where I think the Guild Wars developers at ArenaNet may be greatly missing out is in the areas of business intelligence, data mining, and associated game control. Here are some examples of analyses they surely would find it helpful to do. Read more
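One example of the kind of analysis I have in mind, sketched in Python over invented data: flagging accounts whose trading volume is wildly out of line with the norm – a crude bot/gold-farmer heuristic. The account names, counts, and threshold are all hypothetical.

```python
# Crude outlier detection over (invented) per-account trade counts --
# the sort of data-mining-driven game control discussed above.
from statistics import median

trades_per_account = {
    "acct_1001": 12,
    "acct_1002": 9,
    "acct_1003": 11,
    "acct_1004": 240,  # suspiciously busy -- possible bot or gold farmer
    "acct_1005": 8,
}

med = median(trades_per_account.values())

# Flag anyone trading more than 10x the median account (arbitrary threshold).
flagged = [acct for acct, n in trades_per_account.items() if n > 10 * med]
```

With everything locked up in token-string BLOBs, even an analysis this simple would require a parsing pass over the character data first.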

June 12, 2007

DBMS plug-compatibility gaining steam

ANTs Software’s primary focus isn’t really even on DBMS any more. Even so, it just announced a deal to replace Informix in a large retail chain’s in-store systems. (In its 1990s heyday, Informix wound up running in-store systems at an impressive list of major retailers. Of course, Informix was long ago acquired by IBM.)

EnterpriseDB has probably passed ANTs in the DBMS plug-compatibility business, and taken together they’re still pretty small. Even so, plug-compatible DBMS replacement has to be taken seriously as a (possibly) emerging trend. Economically, it makes all the sense in the world.

June 9, 2007

The database technology of Guild Wars

I have the enviable task of researching online game and virtual world technology. My first interview, quite naturally, was with the lead developers of a game I actually play – Guild Wars. The overview is in another post, which may provide context for this one, which focuses on the database technology. (I also did a short post just on the implications for Guild Wars players.) The overview also briefly describes what Guild Wars is – namely, an MMORPG (Massively Multiplayer Online Role-Playing Game) with the unusual feature that most of the game world is instanced rather than utterly shared.

First, some scope. ArenaNet (Guild Wars’ developer, now a subsidiary of NCsoft) runs Microsoft SQL Server, mainly Enterprise Edition, having switched to SQL Server 2005 four months ago. They run 1,500–2,500 transactions per second all day, spiking to 5,000 in their busiest periods. They have no full-time DBA, and when the developers started this project they didn’t know SQL. They’ve had only one major SQL Server failure in the 2+ years the game has been running, and that was (like most of their bugs) a network driver problem more than an issue with the core system.

As for what’s going on — there are a few different kinds of database things that happen in an instanced MMORPG. Read more

June 8, 2007

Large DB2 data warehouses on Linux (and AIX)

I was consulting recently to a client that needs to build really big relational data warehouses, and also is attracted to native XML. Naturally, I suggested they consider DB2. They immediately shot back that they were Linux-based, and didn’t think DB2 ran (or ran well) on Linux. Since IBM often leads with AIX-based offerings in its marketing and customer success stories, that wasn’t a ridiculous opinion. On the other hand, it also was very far from what I believed.

So I fired some questions at IBM. Read more

June 8, 2007

Transparent scalability

I’ve been a DBMS industry analyst, in one guise or another, since 1981. So by now I’ve witnessed a whole lot of claims and debates about scalability. And there’s one observation I’d like to call out.

What matters most isn’t what kind of capacity or throughput you can get with heroic efforts. Rather, what matters most is the capacity and throughput you get without any kind of special programming or database administration.

Of course, when taken to extremes, that point could become silly. DBMS are used by professionals, and requiring a bit of care and tuning is par for the course. But if you have a choice between two systems that can get the job done for you, one of which requires you to perform unnatural acts and one doesn’t – go for the one that works straightforwardly. Your overall costs will wind up being much lower, and you’ll probably get a lot more useful work done. A system that has to strain even to meet known requirements will probably fail altogether at meeting the as-yet-unknown ones that are sure to arise down the road.


June 7, 2007

StreamBase and Truviso

StreamBase is a decently-established startup, possibly the largest company in its area. Truviso, in the process of changing its name from Amalgamated Insight, has a dozen employees, one referenceable customer, and a product not yet in general availability. Both have ambitious plans for conquering the world, based on similar stories. And the stories make a considerable amount of sense.

Both companies’ core product is a memory-centric SQL engine designed to execute queries without ever writing data to disk. Of course, they both have persistence stories too — Truviso by being tightly integrated into open-source PostgreSQL, StreamBase more via “yeah, we can hand the data off to a conventional DBMS.” But the basic idea is to route data through a whole lot of different in-memory filters, to see what queries it satisfies, rather than executing many queries in sequence against disk-based data. Read more
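Here’s a toy illustration, in Python, of that inverted model – each incoming event is routed through a set of standing queries held in memory. The predicates stand in for compiled SQL, and the query names and event fields are invented; neither vendor’s engine actually looks like this.

```python
# Standing queries kept in memory; each event is checked against all of
# them as it streams by, instead of queries scanning disk-based data.
# Query names and event fields are invented for illustration.

standing_queries = {
    "big_trade": lambda e: e["type"] == "trade" and e["amount"] > 10_000,
    "any_login": lambda e: e["type"] == "login",
}

def process(event: dict) -> list:
    """Return the names of every standing query this event satisfies."""
    return [name for name, pred in standing_queries.items() if pred(event)]

stream = [
    {"type": "login", "user": "alice", "amount": 0},
    {"type": "trade", "user": "bob", "amount": 25_000},
]
matches = [process(event) for event in stream]
```

The payoff is that one pass over the data answers every registered query at once, with no disk I/O on the critical path.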

June 6, 2007

The FileMaker story

Unfortunately, the first draft of this post got eaten. I’m now trying again.

In response to its small but vocal constituency, I got myself briefed on the FileMaker story. My conclusion, in a nutshell, is that FileMaker sometimes is a good alternative to low-end use of a standard relational DBMS. If you do feel able to use more standard-style products, you often should, for all sorts of obvious flexibility and future-proofing reasons. But if you can’t, or if you’re really confident the project won’t grow past a certain level, the FileMaker class of products can be a very appealing alternative.

Make no mistake; FileMaker is very different from conventional DBMS/app dev tool combos (and that’s the right comparison, as it combines aspects of both product categories into one). Read more

May 29, 2007

The petabyte machine

EMC has announced a machine — a virtual tape library — that supposedly stores 1.8 petabytes of data. Even though that’s only 584 terabytes uncompressed, it shows that the 1 petabyte barrier will be broken soon no matter how unhyped the measurement.
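For the record, the arithmetic behind those two figures implies roughly 3:1 compression (assuming decimal units):

```python
# Implied compression ratio behind EMC's announced numbers
# (assuming decimal units, i.e., 1 petabyte = 1000 terabytes).
claimed_tb = 1800  # the announced 1.8 petabytes
raw_tb = 584       # uncompressed capacity

implied_ratio = claimed_tb / raw_tb  # roughly 3:1
```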

I just recently encountered some old notes in which Sybase proudly announced a “1 gigabyte challenge.” The idea was that 1 gig was a breakthrough size for business databases.

Time flies.

May 26, 2007

Whether or not to use MySQL

CIO Magazine has a pretty superficial back-and-forth about whether or not to use MySQL in enterprises. For example, one of the strongest claims in the pro-MySQL article is the not-so-staggering observation (italics theirs)

One way MySQL achieves this scalability is through a popular feature called stored procedures, mini, precompiled routines that reside outside of the application.

And the anti-MySQL article doesn’t have much in the way of crunchiness except for the fairly well-reasoned

Most of the required features for an RDBMS are firmly in place with the release of MySQL 5.0, but we can legitimately consider the maturity of some of these features as a possible reason to shy away from MySQL. For example, the lack of views, triggers and stored procedures has historically been the major criticism of MySQL. These have all been supported by MySQL for a year or so now, but by comparison, they have been features for about 10 years in most competing RDBMSes.

This article pair got Slashdotted, and some interesting byplay ensued. The general theme was along the lines of

“MySQL is terribly deficient out of the box.”
“Yes, but if you use this new, lightly-documented add-in, that specific problem is now solved.”

May 10, 2007

Another short white paper on MPP data warehouse appliances

Following up on an earlier piece, DATAllegro has sponsored a second white paper on MPP data warehouse appliances. This one focuses specifically on DATAllegro’s move from Type 1 to Type 2 (i.e., virtual) appliances, via its new V3 product line. The basic tradeoffs of this move include:

Actually, I didn’t make that last point explicitly in the paper, but it quite possibly trumps any performance disadvantages from the switch. And Moore’s Law itself certainly far outweighs any other performance-affecting factors.
