Sybase recently came up with Adaptive Server Enterprise 15.7, which is essentially the “Make SAP happy” release. Features that were slated for 2012 release, but which SAP wanted, were accelerated into 2011. Features that weren’t slated for 2012, but which SAP wanted, were also brought into 2011. Not coincidentally, SAP Business Suite will soon run on Sybase Adaptive Server Enterprise 15.7.
15.7 turns out to be the first release of Sybase ASE with data compression. Sybase fondly believes that it is matching DB2 and leapfrogging Oracle in compression rate with a single compression scheme, namely page-level tokenization. More precisely, SAP and Sybase seem to believe that about compression rates for actual SAP application databases, based on some degree of testing.
While Sybase ASE is unambiguously a row store, I’d be OK with calling that “columnar compression”. However, I wouldn’t expect compression ratios as strong as, say, Vertica’s, even in scenarios where Vertica was limited to dictionary compression only.
This is the second time I’ve heard recently about token compression being done one small block or page at a time (Sybase’s options for page size are 2/4/8/16K). As I noted in connection with Teradata’s similar strategy,
One benefit versus having a more global dictionary is that, since you compress fewer items, compression tokens can each be shorter. (The length of a typical token is a lot like the log of the cardinality of the dictionary.) Another benefit is that smaller dictionaries are faster to search. The obvious offsetting drawback is that a larger and more global dictionary has the potential to compress various items that wind up being left uncompressed in this smaller-scale scheme.
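To make the token-length point concrete, here is a minimal sketch (my own illustration, not Sybase’s actual on-disk format) of per-page dictionary tokenization. The token width needed is roughly the log of the dictionary’s cardinality, which is why a small per-page dictionary gets away with shorter tokens than a global one would:

```python
import math

def compress_page(values):
    """Build a per-page dictionary and replace each value with a short token.

    Token width tracks the log of the dictionary's cardinality:
    a page with at most 256 distinct values needs only 1-byte tokens.
    """
    dictionary = {}          # value -> token, local to this one page
    tokens = []
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        tokens.append(dictionary[v])
    # bits per token ~ log2(number of distinct values on this page)
    bits = max(1, math.ceil(math.log2(max(len(dictionary), 2))))
    return dictionary, tokens, bits

def decompress_page(dictionary, tokens):
    """Invert the per-page dictionary to recover the original values."""
    reverse = {t: v for v, t in dictionary.items()}
    return [reverse[t] for t in tokens]

# A page with 3 distinct values needs only 2-bit tokens; a global
# dictionary spanning many pages would force wider tokens everywhere.
page = ["NY", "CA", "NY", "NY", "TX", "CA"]
d, toks, bits = compress_page(page)
# toks == [0, 1, 0, 0, 2, 1]; bits == 2
```

The offsetting drawback shows up in the same sketch: a value that repeats across many pages gets a fresh dictionary entry on each page, whereas a global dictionary would pay for it once.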
I could also have added:
- It is straightforward to do join operations on globally-tokenized data.
- It is forbiddingly difficult to do joins on locally-tokenized data; you need to decompress it before joining.
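The join asymmetry can be sketched in a few lines (again my own toy illustration, with made-up column names). With one global dictionary, equal tokens imply equal values, so a join can compare raw tokens; with per-page dictionaries, the same token number means different things on different pages, so the data must be decoded first:

```python
# One dictionary shared by every page and table: equal token => equal value,
# so the join can compare tokens directly without decompressing.
GLOBAL_DICT = {"NY": 0, "CA": 1, "TX": 2}
orders_state = [0, 1, 0]                   # globally tokenized join column
regions_state = [1, 2]                     # globally tokenized join column
global_matches = [t for t in orders_state if t in set(regions_state)]

# Per-page dictionaries: token 0 is "NY" on one page and "CA" on another,
# so tokens must be decoded back to values before the join can compare them.
page_a = ({"NY": 0, "CA": 1}, [0, 1, 0])   # (per-page dict, tokens)
page_b = ({"CA": 0, "TX": 1}, [0, 1])

def decode(page):
    """Expand a locally tokenized page back to its values."""
    dictionary, tokens = page
    reverse = {t: v for v, t in dictionary.items()}
    return [reverse[t] for t in tokens]

local_matches = [v for v in decode(page_a) if v in set(decode(page_b))]
# global_matches == [1] (i.e. "CA"); local_matches == ["CA"]
```

Note that naively comparing the raw local tokens here would wrongly match page_a’s 0 (“NY”) against page_b’s 0 (“CA”), which is exactly why the decompression step is unavoidable.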
However, Sybase ASE does buffer data in compressed form, so it enjoys at least some benefits of in-memory compression.