How immediate consistency works
This post started as a minor paragraph in another one I’m drafting. But it grew. Please also see the comment thread below.
Increasingly many data management systems store data in a cluster, putting several copies of data — i.e. “replicas” — onto different nodes, for safety and reliable accessibility. (The number of copies is called the “replication factor”.) But how do they know that the different copies of the data really have the same values? It seems there are three main approaches to immediate consistency, which may be called:
- Two-phase commit (2PC)
- Read-your-writes (RYW) consistency
- Prudent optimism 🙂
I shall explain.
Two-phase commit has been around for decades. Its core idea is:
- One node commands other nodes (and perhaps itself) to write data.
- The other nodes all reply “Aye, aye; we are ready and able to do that.”
- The first node broadcasts “Make it so!”
Unless a piece of the system malfunctions at exactly the wrong time, you’ll get your consistent write. And if there indeed is an unfortunate glitch — well, that’s what recovery is for.
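For concreteness, here's a minimal sketch of the protocol in Python. Every name in it — Participant, prepare(), commit(), abort() — is illustrative rather than any particular product's API, and real implementations add durable logging and timeouts.

```python
# Minimal two-phase commit sketch. Participant, prepare(), commit(), and
# abort() are illustrative names, not any real product's API.

class Participant:
    def __init__(self, name):
        self.name = name
        self.staged = None

    def prepare(self, data):
        """Phase 1: stage the write and vote. A real node would log the
        staged data and its vote durably before replying."""
        self.staged = data
        return True  # "Aye, aye; we are ready and able to do that."

    def commit(self):
        """Phase 2: make the staged write permanent."""
        print(f"{self.name}: committed {self.staged!r}")

    def abort(self):
        self.staged = None
        print(f"{self.name}: aborted")


def two_phase_commit(participants, data):
    # Phase 1: the coordinator asks every replica to prepare.
    votes = [p.prepare(data) for p in participants]
    if all(votes):
        # Phase 2: unanimous yes -- "Make it so!"
        for p in participants:
            p.commit()
        return True
    # Any "no" vote (or, in a real system, any timeout) aborts everywhere.
    for p in participants:
        p.abort()
    return False


two_phase_commit([Participant("node1"), Participant("node2")], {"key": "value"})
```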
But 2PC has a flaw: If a node is inaccessible or down, then the write is blocked, even if other parts of the system were able to accept the data safely. So the NoSQL world sometimes chooses RYW consistency, which in essence is a loose form of 2PC: Read more
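The usual concrete form of RYW consistency is Dynamo-style quorum replication: with N replicas, a write must be acknowledged by W of them, a read consults R of them, and requiring R + W > N guarantees that every read quorum overlaps the latest successful write quorum. A toy sketch, with all names made up:

```python
# Quorum-based read-your-writes sketch: R + W > N guarantees that any
# read quorum overlaps any write quorum. All names are made up.

N, W, R = 3, 2, 2
assert R + W > N  # the read-your-writes condition

replicas = [{} for _ in range(N)]  # each replica maps key -> (version, value)

def write(key, value, version):
    acks = 0
    for rep in replicas[:W]:       # a real system picks any W live replicas
        rep[key] = (version, value)
        acks += 1
    return acks >= W               # success only with a full write quorum

def read(key):
    # Consult R replicas and keep the newest version seen; because
    # R + W > N, at least one of them holds the latest committed write.
    answers = [rep[key] for rep in replicas[:R] if key in rep]
    return max(answers)[1] if answers else None

write("color", "blue", version=1)
write("color", "green", version=2)
print(read("color"))  # -> 'green'
```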
Hadoop notes: Informatica, Splunk, and IBM
Informatica, Splunk, and IBM are all public companies, and correspondingly reticent to talk about product futures. Hence, anything I might suggest about product futures from any of them won’t be terribly detailed, and even the vague generalities are “the Good Lord willin’ an’ the creek don’ rise”.
Never let a rising creek overflow your safe harbor.
Anyhow:
1. Hadoop can be an awesome ETL (Extract/Transform/Load) execution engine; it can handle huge jobs and perform a great variety of transformations. (Indeed, MapReduce was invented to run giant ETL jobs.) Thus, if one offers a development-plus-execution stack for ETL processes, it might seem appealing to make Hadoop an ETL execution option; a toy sketch of the programming model follows below. And so:
- I’ve already posted that BI-plus-light-ETL vendors Pentaho and Datameer are using Hadoop in that way.
- Informatica will be using Hadoop as an execution option too.
Informatica told me about other interesting Hadoop-related plans as well, but I’m not sure my frieNDA allows me to mention them at all.
IBM, however, is standing aside. Specifically, IBM told me that it doesn’t see the point of doing the same thing, as its ETL engine — presumably derived from the old Ascential product line — is already parallel and performant enough.
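As promised above, here's a toy illustration of the map/reduce programming model applied to an ETL-style transformation — parse raw records, then aggregate per key. It's a sketch of the model, not of any vendor's product; Hadoop's value is running the same two functions in parallel over huge inputs, where this sketch runs them serially over made-up data.

```python
# Toy map/reduce-style ETL: the map step parses raw records and the
# reduce step aggregates them per key. Hadoop runs the same two
# functions in parallel across many nodes; this sketch runs serially.
from collections import defaultdict

raw_lines = [
    "2012-10-01,web,200",
    "2012-10-01,web,500",
    "2012-10-01,mobile,200",
]

def map_fn(line):
    """Extract/transform: parse a CSV log line into (key, value) pairs."""
    date, channel, status = line.split(",")
    yield (channel, 1)

def reduce_fn(key, values):
    """Aggregate all values that share a key."""
    return key, sum(values)

# "Shuffle": group mapped pairs by key, as the framework would.
groups = defaultdict(list)
for line in raw_lines:
    for key, value in map_fn(line):
        groups[key].append(value)

print([reduce_fn(k, vs) for k, vs in groups.items()])
# -> [('web', 2), ('mobile', 1)]
```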
2. Last year, I suggested that Splunk and Hadoop are competitors in managing machine-generated data. That’s still true, but Splunk is also preparing a Hadoop co-opetition strategy. To a first approximation, it’s just Hadoop import/export. However, suppose you view Splunk as offering a three-layer stack: Read more
In-memory, (hybrid) memory-centric DBMS — three analytic glossary draft entries
These are three closely-related draft entries for the DBMS2 analytic glossary. Please comment with any ideas you have for their improvement!
1. We coined the term memory-centric data management to cover several kinds of technology that manage data in RAM (Random Access Memory), including:
- In-memory DBMS (DataBase Management Systems).
- Hybrid memory-centric DBMS.
- Other kinds of in-memory data stores, such as:
  - Caching layers.
  - In-memory data stores that are tightly tied to specific analytic tools, for example the in-memory data management part of QlikView.
- Complex event/stream processing.
Related link
- Many examples of memory-centric data management (April, 2012)
2. An in-memory DBMS is a DBMS designed under the assumption that substantially all database operations will be performed in RAM (Random Access Memory). Thus, in-memory DBMS form a subcategory of memory-centric data management systems.
Ways in which in-memory DBMS are commonly different from those that query and update persistent storage include: Read more
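As a trivially accessible illustration of the concept — though the specialized products this entry covers go much further — SQLite can run a database entirely in RAM:

```python
# SQLite as an in-memory database: the whole database lives in RAM and
# vanishes when the connection closes. This illustrates the concept only;
# specialized in-memory DBMS go much further (e.g., RAM-optimized data
# structures and various persistence strategies).
import sqlite3

conn = sqlite3.connect(":memory:")  # no file, no disk I/O for data pages
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("a", 1.5), ("a", 2.5), ("b", 10.0)],
)
for row in conn.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor"
):
    print(row)  # ('a', 2.0) then ('b', 10.0)
conn.close()   # the database is gone
```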
In-database analytics — analytic glossary draft entry
This is a draft entry for the DBMS2 analytic glossary. Please comment with any ideas you have for its improvement!
Note: Words and phrases in italics will be linked to other entries when the glossary is complete.
“In-database analytics” is a catch-all term for analytic capabilities, beyond standard SQL, running on the same machine as and under the management of an analytic DBMS. These can run in one or both of two modes:
- In-process or unfenced, i.e. in the same process as the DBMS itself. This option gives maximum performance, but any defects in the analytic code may crash the whole DBMS. Also, it generally requires that the code be in the same language as the DBMS, i.e. C++.
- Out-of-process or fenced, i.e. in a separate process. This option sacrifices performance, in favor of reliability and language flexibility.
In-database analytics may offer great performance and scalability advantages versus the alternative of extracting data and having it be processed on a separate server. This is particularly likely to be the case in MPP (Massively Parallel Processing) analytic DBMS environments.
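For a minimal illustration of both points — custom logic inside the engine, and not moving the data — here's a user-defined function registered with SQLite, standing in for a real analytic DBMS's extensibility mechanisms. The scoring function and all names are made up:

```python
# A user-defined function registered with the engine is a miniature
# version of in-database analytics: custom logic (here, a toy scoring
# function) runs inside the DBMS, next to the data, instead of the data
# being extracted to a separate analytic server. sqlite3 runs the UDF
# in-process; a "fenced" architecture would isolate it in another process.
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, spend REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, 120.0), (2, 15.0), (3, 640.0)])

def score(spend):
    """A stand-in predictive-scoring function, beyond standard SQL."""
    return 1.0 / (1.0 + math.exp(-(spend - 100.0) / 100.0))

conn.create_function("score", 1, score)  # register the UDF with the engine

for row in conn.execute("SELECT id, score(spend) FROM customers"):
    print(row)
```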
Examples of in-database analytics include:
- Creating temporary data structures that persist past the life of a query.
- Creating temporary data structures that are non-tabular.
- Predictive modeling that runs in parallel on the same MPP cluster nodes where the data resides.
- Predictive analytics (scoring only).
Other common domains for in-database analytics include sessionization, time series analysis, and relationship analytics.
Notable products offering in-database analytics include:
- Teradata Aster SQL/MR.
- Multiple other analytic platforms, such as Sybase IQ, Vertica, or IBM Netezza. Indeed, in-database analytics are a defining feature of analytic platforms.
- Fuzzy Logix (for predictive analytics).
Analytic platform — analytic glossary draft entry
This is a draft entry for the DBMS2 analytic glossary. Please comment with any ideas you have for its improvement!
Note: Words and phrases in italics will be linked to other entries when the glossary is complete.
In our usage, an “analytic platform” is an analytic DBMS with well-integrated in-database analytics, or a data warehouse appliance that includes one. The term is also sometimes used to refer to:
- Any analytic DBMS or data warehouse appliance.
- Other kinds of software, or software/hardware combinations, that support broad analytic capabilities.
To varying extents, most major vendors of analytic DBMS or data warehouse appliances have extended their products into analytic platforms; see, for example, our original coverage of the analytic platform versions of Aster, Netezza, and Vertica.
Related posts
- Our original definition of “analytic platform” (February, 2011)
- Our original feature list for analytic platforms (January, 2011)
Data warehouse appliance — analytic glossary draft entry
This is a draft entry for the DBMS2 analytic glossary. Please comment with any ideas you have for its improvement!
Note: Words and phrases in italics will be linked to other entries when the glossary is complete.
A data warehouse appliance is a combination of hardware and software that includes an analytic DBMS (DataBase Management System). However, some observers incorrectly apply the term “data warehouse appliance” to any analytic DBMS.
The paradigmatic vendors of data warehouse appliances are:
- Teradata, which embraced the term “data warehouse appliance” in 2008.
- Netezza — now an IBM company — which popularized the term “data warehouse appliance” in the 2000s.
Further, vendors of analytic DBMS commonly offer — directly or through partnerships — optional data warehouse appliance configurations; examples include:
- Greenplum, now part of EMC.
- Vertica, now an HP company.
- IBM DB2, under the brand “Smart Analytic System”.
- Microsoft (Parallel Data Warehouse).
Oracle Exadata is sometimes regarded as a data warehouse appliance as well, despite not being solely focused on analytic use cases.
Data warehouse appliances inherit marketing claims from the category of analytic DBMS, such as: Read more
DBMS2 analytic glossary — a new project
Enterprise software terminology is too often mired in confusion. I hope to lessen that by publishing a series of web pages that define and describe various industry terms, with one or several paragraphs per subject, and plenty of internal and external links.
Absent a better name, I’ll refer to this as an “analytic glossary”. I want users of the analytic glossary to learn or confirm:
- What terms mean.
- Which products exemplify which product categories.
- Which additional subjects to look into.
I will do or closely direct the core writing myself. I may hire outside help for ancillary tasks, such as adding links, or for various kinds of wordsmithing.
All this presupposes a site redesign, which hasn’t begun. But I’ve started to draft the content. As I do, I’ll post it here. And I very much want you to comment.
If you think I got something wrong or left anything out — even if it’s just a nuance — please speak up! Later, when the glossary pages are live, I’ll link them back to the original blog post discussions. Thus, your comments will be part of the permanent glossary record. Read more
What kinds of metadata are important anyway?
In today’s post about HCatalog, I noted that the Hadoop/HCatalog community didn’t necessarily understand all the kinds of metadata that enterprises need and want, especially in the context of data integration and ETL and ELT (Extract/Transform/Load and Extract/Load/Transform). That raises a natural question — what kinds of metadata do users need or want? In the hope of spurring discussion, from vendors and users alike, I’m splitting this question out into a separate post.
Please comment with your thoughts about ETL-related metadata needs. The conversation needs to advance.
In the relational world, there are at least three kinds of metadata:
- Definitional information about data structures, without which you can’t have a relational database at all. That area seems binary; either you have enough to make sense of your data or you don’t.
- Statistics about columns and tables, such as the most frequent values and how often they occur, which are kept for the purpose of optimization. Those seem to be nice-to-haves more than must-haves. The more information of this kind you have, the more chances you have to save resources. (A toy computation of such statistics appears after this list.)
- Historical and security information about data. This is where things get really complicated. It’s also where Hadoop is still in the “So what exactly should we build?” stage of design.
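As an example of the second kind, here's the sort of per-column statistic an optimizer keeps, computed in a few lines of Python over made-up data:

```python
# The kind of per-column statistics a relational optimizer keeps:
# distinct-value counts and most frequent values, which it uses to
# estimate how many rows a predicate will match. The data is made up.
from collections import Counter

column_values = ["NY", "CA", "NY", "TX", "NY", "CA", "NY"]

stats = {
    "row_count": len(column_values),
    "distinct_values": len(set(column_values)),
    "most_frequent": Counter(column_values).most_common(2),
}
print(stats)
# {'row_count': 7, 'distinct_values': 3, 'most_frequent': [('NY', 4), ('CA', 2)]}

# Example use: estimate the selectivity of "WHERE state = 'NY'".
selectivity = dict(stats["most_frequent"]).get("NY", 1) / stats["row_count"]
print(f"estimated selectivity: {selectivity:.2f}")  # 0.57
```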
As I see it:
- Historical information about data answers questions in the realm of “Who did what to which data when?”
- Security information about data answers questions around “Who may do what to which data in the future?”
- They overlap because:
  - They rely on closely related schemes for assessing roles and identity.
  - Audit trails, a key aspect of security and compliance, could logically be viewed as falling in the realm of “history”.
HCatalog — yes, it matters
To a first approximation, HCatalog is the thing that will make Hadoop play nicely with all kinds of ETLT (Extract/Transform/Load/Transform). However, HCatalog is both less and more than that:
- Less, because some of the basic features people expect in HCatalog may well wind up in other projects instead.
- More, in that HCatalog could (also) be regarded as a manifestation of efforts to make Hadoop/Hive more like a DBMS.
The base use case for HCatalog is:
- You have an HDFS file, and know its record format.
- You write DDL (Data Description Language) to create a virtual table whose columns look a lot like the HDFS file’s record fields. The language for this is HiveQL, Hive’s SQL-like language. (A sketch of such DDL appears after this list.)
- You interact with that table in any of a variety of ways (batch operations only), including but not limited to:
  - Hive.
  - Pig.
  - JDBC/ODBC.
  - ETL tools such as Talend.
  - DBMS integrations such as Teradata Aster’s SQL-H.
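For concreteness, here's a sketch of that base use case: HiveQL DDL that lays a virtual table over an existing HDFS file, submitted here by shelling out to the Hive CLI. The table name, columns, and HDFS path are all made up.

```python
# The HCatalog/Hive base use case: HiveQL DDL that imposes a table
# structure on an existing HDFS file, so Hive, Pig, ETL tools, etc. can
# all read it through one schema. Table name, columns, and path are
# made up for illustration.
import subprocess

ddl = """
CREATE EXTERNAL TABLE web_logs (
    ts STRING,
    user_id BIGINT,
    url STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
STORED AS TEXTFILE
LOCATION '/data/raw/web_logs'
"""

# One way to submit it: the Hive CLI (assumes `hive` is on the PATH of a
# configured Hadoop client machine).
subprocess.run(["hive", "-e", ddl], check=True)
```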
Major variants on that include: Read more
Notes on some basic database terminology
In a call Monday with a prominent company, I was told:
- Teradata, Netezza, Greenplum and Vertica aren’t relational.
- Teradata, Netezza, Greenplum and Vertica are all data warehouse appliances.
That, to put it mildly, is not accurate. So I shall try, yet again, to set the record straight.
In an industry where people often call a DBMS just a “database” — so that a database is something that manages a database! — one may wonder why I bother. Anyhow …
1. The products commonly known as Oracle, Exadata, DB2, Sybase, SQL Server, Teradata, Sybase IQ, Netezza, Vertica, Greenplum, Aster, Infobright, SAND, ParAccel, Exasol, Kognitio et al. all either are or incorporate relational database management systems, aka RDBMS or relational DBMS.
2. In principle, there can be difficulties in judging whether or not a DBMS is “relational”. In practice, those difficulties don’t arise — yet. Every significant DBMS still falls into one of two categories:
- Relational:
  - Was designed to do relational stuff* from the get-go, even if it now does other things too.
  - Supports a lot of SQL.
- Non-relational:
  - Was designed primarily to do non-relational things.*
  - Doesn’t support all that much SQL.
*I expect the distinction to get more confusing soon, at which point I’ll adopt terms more precise than “relational things” and “relational stuff”.
3. There are two chief kinds of relational DBMS: Read more