August 30, 2008

Are analytic DBMS vendors overcomplicating their interconnect architectures?

I don’t usually spend a lot of time researching Ethernet switches. But I do think a lot about high-end data warehousing, and as I noted back in July, networking performance is a big challenge there. Among the very-large-scale MPP data warehouse software vendors, Greenplum is unusual in that its interconnect of choice is (sufficiently many) cheap 1 gigabit Ethernet switches.

A recent Network World story suggested that Greenplum wasn't alone in this preference; other people also feel that clusters of commodity 1 gigabit Ethernet switches can be superior to pricier, higher-end alternatives. So I pinged CTO Luke Lonergan of Greenplum for more comment. His response, which I got permission to publish, was:

It turns out that non-blocking bandwidth at large scale is very cheap now due to switch vendors using fat trees internally for switches from 48 ports to 672 (see the Force10 ES1200 and others). Also, as the SDSC authors point out, you can build larger non-blocking networks that scale to huge size based on fat-tree or Clos topologies. Some of us built them for supercomputers in 1998, scaling even low-latency supercomputers up to thousands of nodes.

So – bandwidth is cheap, latency is expensive. Data analysis is a bandwidth problem, not a latency problem.
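To put some numbers behind the "cheap non-blocking bandwidth at scale" claim, here is a small sketch of the standard three-tier fat-tree construction (as in the Clos-style designs Lonergan alludes to). This is the textbook topology, not a description of any particular vendor's product; the formulas assume identical k-port switches.

```python
def fat_tree_capacity(k):
    """Standard 3-tier fat tree built from identical k-port switches (k even):
    k pods, each with k/2 edge and k/2 aggregation switches, plus k^2/4 core
    switches, giving full bisection bandwidth to every host."""
    assert k % 2 == 0, "fat-tree construction requires an even port count"
    hosts = k ** 3 // 4            # k pods * (k/2 edge switches) * (k/2 hosts each)
    switches = 5 * k ** 2 // 4     # k^2/4 core + k^2/2 aggregation + k^2/2 edge
    return hosts, switches

# With commodity 48-port gigabit switches:
hosts, switches = fat_tree_capacity(48)
print(hosts, switches)  # 27648 hosts, 2880 switches, all at full line rate
```

In other words, nothing exotic is needed to wire tens of thousands of nodes together with non-blocking gigabit bandwidth; the cost is simply a large pile of cheap switches, which supports the "bandwidth is cheap" half of the argument.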

To put it mildly, Ethernet switching is not one of my core areas of expertise. So I’m just throwing this subject out for discussion. Thoughts, anybody?


4 Responses to “Are analytic DBMS vendors overcomplicating their interconnect architectures?”

  1. Nair on September 3rd, 2008 12:56 pm
  2. Steve Wooledge on September 3rd, 2008 2:07 pm

    David Cheriton, the lead of the distributed systems group at Stanford University, wrote a nice post about this topic:

  3. Curt Monash on September 3rd, 2008 6:24 pm


    Thanks for the catches!!

    There actually was a THIRD broken link. Grrr. That’s what happens when one forgets to test …


  4. Shawn Fox on September 5th, 2008 12:01 am

    Netezza also uses gigabit ethernet to connect the various components of the database together.

Every set of 14 SPUs connects to a SPA via gigabit Ethernet. A SPU is basically a node consisting of RAM, CPU, disk drive, and the FPGA. The SPA is a custom gigabit Ethernet switch with additional functionality. The SPA then connects to a Cisco gigabit Ethernet switch. The Linux host is connected to the Cisco switch via either a single or dual gigabit Ethernet, depending on the size of the system. This yields 180MB/second or so throughput between the host and the SPUs.
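The ~180 MB/second figure is consistent with dual gigabit links after protocol overhead; a quick back-of-the-envelope sketch. The efficiency factor here is an assumption back-solved from the quoted number, not a measured Netezza characteristic:

```python
GBE_RAW_MB_PER_SEC = 1_000_000_000 / 8 / 1e6  # 125 MB/s raw per 1 GbE link

def host_throughput_mb(links, efficiency=0.72):
    """Rough host<->SPU throughput. The 0.72 efficiency factor is a
    hypothetical value chosen to match the quoted ~180 MB/s over dual
    gigabit; real TCP/Ethernet overhead varies with frame size and stack."""
    return links * GBE_RAW_MB_PER_SEC * efficiency

print(host_throughput_mb(2))  # 180.0 MB/s with dual gigabit
```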
