Alternatives for Hadoop/MapReduce data storage and management
There’s been a flurry of announcements recently in the Hadoop world. Much of it has been concentrated on Hadoop data storage and management. This is understandable, since HDFS (Hadoop Distributed File System) is quite a young (i.e. immature) system, with much strengthening and Bottleneck Whack-A-Mole remaining in its future.
Known HDFS and Hadoop data storage and management issues include but are not limited to:
- Hadoop is run by a master node, specifically a namenode, which is a single point of failure.
- HDFS compression could be better.
- HDFS likes to store three copies of everything, whereas many DBMS and file systems are satisfied with two. (A back-of-envelope sketch of the storage impact follows this list.)
- Hive (the canonical way to do SQL joins and so on in Hadoop) is slow.
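To put the three-copies point in perspective, here is a back-of-envelope sketch (mine, not from any vendor) of the raw disk implied by HDFS's default replication factor of 3 versus a two-copy scheme; the 100 TB figure is purely illustrative:

```python
# Back-of-envelope: raw disk needed for a given amount of user data,
# ignoring compression, temp space, and other overhead.
def raw_storage_tb(user_data_tb, replication_factor):
    return user_data_tb * replication_factor

user_data = 100.0  # TB of user data -- purely illustrative
print("HDFS default (3 copies): %.0f TB raw" % raw_storage_tb(user_data, 3))
print("Two-copy system:         %.0f TB raw" % raw_storage_tb(user_data, 2))
# -> 300 TB vs. 200 TB, i.e. 50% more raw disk for the same user data
```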
Different entities have different ideas about how such deficiencies should be addressed. Read more
Categories: Aster Data, Cassandra, Cloudera, Data warehouse appliances, DataStax, EMC, Greenplum, Hadapt, Hadoop, IBM and DB2, MapReduce, MongoDB, Netezza, Parallelization | 22 Comments |
Introduction to SnapLogic
I talked with the SnapLogic team last week, in connection with their SnapReduce Hadoop-oriented offering. This gave me an opportunity to catch up on what SnapLogic is up to overall. SnapLogic is a data integration/ETL (Extract/Transform/Load) company with a good pedigree: Informatica founder Gaurav Dhillon invested in and now runs SnapLogic, and VC Ben Horowitz is involved. SnapLogic company basics include:
- SnapLogic has raised about $18 million from Gaurav Dhillon and Andreessen Horowitz.
- SnapLogic has almost 60 people.
- SnapLogic has around 150 customers.
- Based in San Mateo, SnapLogic has an office in the UK and is growing its European business.
- SnapLogic has both SaaS (Software as a Service) and on-premise availability, but either way you pay on a subscription basis.
- Typical SnapLogic deal size is under $20K/year. Accordingly, SnapLogic sells over the telephone.
- SnapReduce is in beta with about a dozen customers, and slated for release by year-end.
SnapLogic’s core/hub product is called SnapCenter. In addition, for any particular kind of data one might want to connect, there are “snaps” which connect to — i.e. snap into — SnapCenter.
SnapLogic’s market position(ing) sounds like Cast Iron’s, by which I mean: Read more
Categories: Cloud computing, Data integration and middleware, EAI, EII, ETL, ELT, ETLT, SnapLogic, Software as a Service (SaaS) | 1 Comment |
Data integration vendors and Hadoop
There have been many recent announcements about how data integration/ETL (Extract/Transform/Load) vendors are going to work with MapReduce. Most of what they say boils down to one or more of a few things:
- Hadoop generally stores data in HDFS (Hadoop Distributed File System). ETL vendors want to be able to extract data from or load it into HDFS.
- ETL vendors have development environments that let you specify/script/whatever ETL jobs. ETL vendors want those same tools to generate ETL processes that execute via MapReduce/Hadoop (a minimal sketch of the idea follows this list).
- In particular, this allows ETL vendors to exploit the parallel-processing capabilities of MapReduce.
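As a generic illustration of the second and third points, here is a minimal Hadoop Streaming sketch of my own (not any vendor's generated code; the CSV layout is hypothetical). A simple cleanse-and-aggregate step becomes a mapper and a reducer that read and write plain text, and Hadoop takes care of running many copies of each in parallel:

```python
#!/usr/bin/env python
# mapper.py -- hypothetical Hadoop Streaming mapper: parse raw CSV order
# records (customer_id, order_id, amount) and emit customer_id<TAB>amount.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if len(fields) < 3:
        continue  # crude data cleansing: skip malformed records
    customer_id, amount = fields[0], fields[2]
    print("%s\t%s" % (customer_id, amount))
```

```python
#!/usr/bin/env python
# reducer.py -- hypothetical reducer: sum order amounts per customer.
# Hadoop Streaming delivers the mapper output to the reducer sorted by key.
import sys

current_key, total = None, 0.0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key and current_key is not None:
        print("%s\t%.2f" % (current_key, total))
        total = 0.0
    current_key = key
    total += float(value)
if current_key is not None:
    print("%s\t%.2f" % (current_key, total))
```

The job would be launched with something like hadoop jar hadoop-streaming.jar -input /raw/orders -output /clean/orders -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py, with the paths being made up for the example. The parallelization across nodes comes for free, which is exactly the appeal for ETL vendors.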
Some additional twists include:
- Pentaho announced business intelligence and ETL for Hadoop last year.
- Syncsort thinks different sort algorithms should be usable with Hadoop. Consequently, it plans to contribute technology to the community to make sort pluggable into Hadoop. (However, Syncsort is keeping its own sort technology proprietary.)
- Syncsort is considering replicating some Hive functionality, starting with joins, hopefully running much faster. (However, Syncsort’s basic Hadoop support is a quarter or three away, so any more advanced functionality would probably come out in 2012 or beyond.)
- SnapLogic fondly thinks that its generation of MapReduce jobs is particularly intelligent.
Finally, my former clients at Pervasive, who haven’t briefed me for a while, seem to have told Doug Henschen that they have pointed DataRush at MapReduce.* However, I couldn’t find evidence of same on the Pervasive DataRush website beyond some help in using all the cores on any one Hadoop node.
*Also see that article because it names a bunch of ETL vendors doing Hadoop-related things.
Categories: Data integration and middleware, EAI, EII, ETL, ELT, ETLT, Hadoop, MapReduce, Parallelization, Pentaho, Pervasive Software, SnapLogic, Syncsort | 1 Comment |
Elastra sinks into the dead pool
Elastra is an ex-company. I’m not surprised, except by the fact that it took so long.
Categories: Elastra | 2 Comments |
DB2 OLTP scale-out: pureScale
Tim Vincent of IBM talked me through DB2 pureScale Monday. IBM DB2 pureScale is a kind of shared-disk scale-out parallel OLTP DBMS, with some interesting twists. IBM's scalability claims for pureScale, on a 90% read/10% write workload, include:
- 95% scalability up to 64 machines
- 90% scalability up to 88 machines
- 89% scalability up to 112 machines
- 84% scalability up to 128 machines
More precisely, those are counts of cluster “members,” but the recommended configuration is one member per operating system instance — i.e. one member per machine — for reasons of availability. In an 80% read/20% write workload, scalability is less — perhaps 90% scalability over 16 members.
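A quick way to read those figures (my interpretation, assuming "X% scalability at N members" means roughly X% of N members' worth of single-member throughput):

```python
# Rough interpretation of the pureScale scalability claims (90/10 workload):
# efficiency * member_count = effective single-member equivalents of throughput.
claims = [(64, 0.95), (88, 0.90), (112, 0.89), (128, 0.84)]
for members, efficiency in claims:
    print("%3d members -> ~%.0f members' worth of work" % (members, efficiency * members))
# e.g. 128 members at 84% efficiency is roughly 108 members' worth
```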
Several elements of IBM's DB2 pureScale architecture are pretty straightforward:
- There are multiple pureScale members (machines), each with its own instance of DB2.
- There’s an RDMA (Remote Direct Memory Access) interconnect, perhaps InfiniBand. (The point of InfiniBand and other RDMA is that moving data doesn’t require interrupts, and hence doesn’t cost many CPU cycles.)
- The DB2 pureScale members share access to the database on a disk array.
- Each DB2 pureScale member has its own log, also on the disk array.
Something called GPFS (General Parallel File System), which comes bundled with DB2, sits underneath all this. It's all based on the mainframe technology IBM Parallel Sysplex.
The weirdest part (to me) of DB2 pureScale is something called the Global Cluster Facility, which runs on its own set of boxes. (Edit: Actually, see Tim Vincent’s comment below.) Read more
Categories: Cache, Clustering, IBM and DB2, OLTP, Oracle | 15 Comments |
IBM InfoSphere Warehouse pricing, packaging, compression and more
IBM InfoSphere Warehouse 9.7.3 has been announced, and is planned for general availability late this month. IBM InfoSphere Warehouse is, in essence, DB2-plus, where the “plus” comprises:
- DPF (Database Partitioning Feature) — i.e., the ability to do shared-nothing scale-out.
- Unimportant add-ons — e.g., a mere 5 seats of the Cognos BI tool.
The main news in this release of InfoSphere Warehouse is probably pricing. While IBM has long had a funky server-power-based pricing scheme, it is now adding per-terabyte pricing, with a twist: IBM InfoSphere Warehouse can now be bought per terabyte of compressed user data (a quick worked example follows the list below). Specifically:
- IBM InfoSphere Warehouse 9.7.3 Enterprise Edition can be bought for production for $70K or so per terabyte of compressed user data.
- IBM InfoSphere Warehouse 9.7.3 Departmental Edition can be bought for production for $35K or so per terabyte of compressed user data.
- Development/test seats of IBM InfoSphere Warehouse cost about $2K per user.
- High availability/disaster recovery instances are priced as if they were managing 1 TB each — unless, of course, you have an active-active configuration, in which case they’re priced according to their full amount of data.
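To make the per-terabyte figures concrete, here is a worked example using the approximate list prices above; the database size and seat count are hypothetical:

```python
# Illustrative InfoSphere Warehouse Enterprise Edition cost sketch,
# using the approximate list prices cited above.
price_per_tb = 70000            # Enterprise Edition, per TB of compressed user data
compressed_user_data_tb = 10    # hypothetical warehouse size
dev_test_seats = 20             # hypothetical head count, at ~$2K per seat

production = price_per_tb * compressed_user_data_tb
hadr       = price_per_tb * 1   # HA/DR standby priced as if it held 1 TB
dev_test   = 2000 * dev_test_seats

for label, cost in [("Production", production), ("HA/DR standby", hadr),
                    ("Dev/test", dev_test),
                    ("Total", production + hadr + dev_test)]:
    print("%-14s $%s" % (label, format(cost, ",")))
```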
Per-terabyte pricing is generally a good way to think about analytic DBMS costs, for at least two reasons: Read more
Categories: Data warehousing, Database compression, IBM and DB2, Pricing | 1 Comment |
Oracle on active-active replication
I am beginning to understand better some of the reasons that Oracle likes to review analyst publications before they go out. Notwithstanding what an Oracle executive told me Friday, I received an email from Irem Radzik of Oracle which said in part:
I am the product marketing director for Oracle GoldenGate product. We have noticed your blog post on Exadata covering a description for Active Data Guard. It refers to ADG being the “preferred way of Active-Active Oracle replication”.
I’d like to request correction on this comment as ADG does not have bidirectional replication capabilities which is required for Active-Active replication. GoldenGate is a complementary product to Active Data Guard with its bidirectional replication capabilities (as well as heterogeneous database support) and it is the preferred solution for Active-Active database replication.
Please also note the correction to the product name's spelling, notwithstanding that at least one Oracle person read the post beforehand, requested a different change, and yet didn't notice that error.
Categories: Oracle | 6 Comments |
Oracle and IBM workload management
When last night’s Oracle/Exadata post got too long — and before I knew Oracle would request a different section be cut — I set aside my comments on Oracle’s workload management story to post separately. Elements of Oracle’s workload management story include:
- Oracle’s workload management product is called Oracle Database Resource Manager.
- Oracle Database Resource Manager has long managed CPU. For Exadata, Oracle added in management of I/O. Management of RAM is coming.
- Another aspect of Oracle workload management is “instance caging.” If you’re running multiple instances of Oracle on the same box – e.g. one with 128 cores and thus 256 threads – instance caging can keep an instance confined to a specific number of threads.
- Policies can let some classes of user get access to more threads in Oracle Parallel Query than others do.*
- Oracle offers a QoS (Quality of Service) layer, at least on Exadata, that tries to use Oracle’s workload management capabilities to enforce SLAs (Service Level Agreements). For example, if you want a certain query to always be answered in no more than 0.3 seconds, it tries to make that happen. However, this technology is new in the current Oracle release, and will be enhanced going forward.
*Recall that “degrees of parallelism” in Oracle Parallel Query can now be set automagically.
One reason I split out this discussion of workload management is that I also talked with IBM’s Tim Vincent yesterday, who added some insight to what I already wrote last August about DB2/InfoSphere Warehouse workload management. Specifically:
- DB2/InfoSphere Warehouse workload management has multiple ways to manage use of CPU resources.
- DB2/InfoSphere Warehouse workload management doesn’t directly manage consumption of I/O or RAM resources. However, it can influence usage of I/O or RAM by:
- Limiting the number of rows read or returned (a conceptual sketch follows this list).
- Adjusting priorities as to which queries get to prefetch the most records.
- DB2/InfoSphere Warehouse workload management doesn’t allow you to directly set an SLA mandating query response time. However, if query response times exceed a target SLA, DB2/InfoSphere Warehouse workload management can cause a statistics dump that might help you tune your way out of the problem.
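As a conceptual illustration only (this is not DB2's actual interface, just a sketch of the general idea of a returned-rows threshold):

```python
# Conceptual sketch: a workload manager can indirectly bound a query's I/O
# by cancelling it once it has returned more than a threshold number of rows.
class RowLimitExceeded(Exception):
    pass

def limit_rows(result_rows, max_rows):
    """Yield rows from a query result, aborting at the threshold."""
    for count, row in enumerate(result_rows, start=1):
        if count > max_rows:
            raise RowLimitExceeded("query exceeded %d returned rows" % max_rows)
        yield row

# Hypothetical usage: wrap a query's result set in the limiter.
try:
    for row in limit_rows(range(1000), max_rows=5):
        print(row)
except RowLimitExceeded as err:
    print("workload manager cancelled the query:", err)
```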
Categories: Data warehousing, IBM and DB2, Oracle, Workload management | Leave a Comment |
Oracle and Exadata: Business and technical notes
Last Friday I stopped by Oracle for my first conversation since January, 2010, in this case for a chat with Andy Mendelsohn, Mark Townsend, Tim Shetler, and George Lumpkin, covering Exadata and the Oracle DBMS. Key points included: Read more
Application areas for SAS HPA
When I talked with SAS about its forthcoming in-memory parallel SAS HPA offering, we talked briefly about application areas. The three areas SAS cited were:
- Consumer financial services. The idea here is to combine information about customers’ use of all kinds of services — banking, credit cards, loans, etc. SAS believes this serves both marketing and risk analysis purposes.
- Insurance. We didn’t go into detail.
- Mobile communications. SAS’ customers aren’t giving it details, but they’re excited about geocoding/geospatial data.
Meanwhile, in another interview I heard about, SAS emphasized retailers. Indeed, that’s what spawned my recent post about logistic regression.
The mobile communications one is a bit scary. Your cell phone — and hence your cellular company — know where you are, pretty much from moment to moment. Even without advanced analytic technology applied to it, that’s a pretty direct privacy threat. Throw in some analytics, and your cell company might know, for example, who you hang out with (in person), where you shop, and how those things predict your future behavior. And so the government — or just your employer — might know those things too.