“You gotta get a gimmick if you wanna get ahead” sing the strippers in the classic musical Gypsy. The same rule seems to apply to analytical databases: each vendor has its own little twist that makes it unique, if not necessarily better than the competition. This applies even, or maybe especially, to the non-columnar systems that use a massively parallel (“shared-nothing”) architecture to handle very large data volumes.
You’ll note I didn’t refer to these systems as “appliances”. Most indeed follow the appliance path pioneered by industry leader Netezza, but not all of them do. I’ve been contacted by Aster Data, DATAllegro (since acquired by Microsoft), Dataupia, and Kognitio. A review of my notes shows that no two are quite alike.
Let’s start with Dataupia. CEO and founder Foster Hinshaw was also a founder at Netezza, which he left in 2005. Hinshaw still considers Netezza the “killer machine” for large analytical workloads, but positions Dataupia as a more flexible product that can handle conventional reporting in addition to ad hoc analytics. “A data warehouse for the rest of us” is how he puts it.
As it happens, all the vendors in this group stress their ability to handle “mixed workloads”. It’s not clear they all mean the same thing, although the phrase may indicate that data can be stored in structures other than star/snowflake schemas. In any event, the overlap is large enough that I don’t think we can classify Dataupia as unique on that particular dimension. What does set the system apart is its ability to manage “dynamic aggregation” of inputs into the data cubes required by many business intelligence and reporting applications. Cube building is notoriously time-consuming for conventional databases, and although any MPP database can presumably maintain cubes, it appears that Dataupia is especially good at it. This would indeed support Dataupia’s positioning as more reporting-oriented than its competitors.
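To make “dynamic aggregation” a little more concrete, here is a toy sketch in Python of what incremental cube maintenance looks like. This is purely illustrative and is not Dataupia’s actual mechanism; the point is simply that the aggregate is updated as each fact row arrives, rather than rebuilt from scratch:

    from collections import defaultdict

    # A "cube" here is just a dictionary of running totals keyed by a
    # tuple of dimension values (region, product, month).
    cube = defaultdict(float)

    def load_fact(region, product, month, revenue):
        # Incrementally update the aggregate as each fact row arrives,
        # instead of re-scanning the fact table to rebuild the cube.
        cube[(region, product, month)] += revenue

    # Illustrative rows; in a real warehouse these would stream from the loader.
    load_fact("East", "Widget", "2008-07", 1200.0)
    load_fact("East", "Widget", "2008-07", 800.0)
    load_fact("West", "Gadget", "2008-08", 450.0)

    print(cube[("East", "Widget", "2008-07")])   # 2000.0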
The other apparently unique feature of Dataupia is its ability to connect with applications through common relational databases such as Oracle and DB2. None of the other vendors made a similar claim, but I say this is “apparently” unique because Hinshaw said the connection is made via the federation layer built into the common databases, and I don’t know whether other systems could also connect in the same way. In any case, Hinshaw said this approach makes Dataupia look to Oracle like nothing more than some additional table space. So integration with existing applications can’t get much simpler.
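For readers who haven’t worked with federation: from the application’s side, the remote system really does look like just another table. Here is a minimal sketch using Oracle’s database link syntax, with the link name, table, and credentials all invented for illustration; the post above only says the connection goes through the database’s built-in federation layer, so treat the specifics as my assumption:

    import oracledb  # python-oracledb; any Oracle client library would do

    # Connect to the ordinary Oracle instance the application already uses.
    # Credentials and DSN are invented for illustration.
    conn = oracledb.connect(user="app", password="secret", dsn="orahost/orclpdb")
    cur = conn.cursor()

    # "sales@dataupia_link" reads like any other Oracle table, but the
    # (hypothetical) database link forwards the query to the external system.
    cur.execute(
        "SELECT region, SUM(revenue) FROM sales@dataupia_link GROUP BY region"
    )
    for region, total in cur:
        print(region, total)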
One final point about Dataupia is pricing. A 2 terabyte blade costs $19,500, which includes both hardware and software. (Dataupia is a true appliance.) This is a much lower cost than any competitor.
The other true appliance in this group is DATAllegro. When we spoke in April, it was building its nodes with a combination of EMC storage, Cisco networking, Dell servers, Ingres database and the Linux operating system. Presumably the Microsoft acquisition will change those last two. DATAllegro’s contribution was the software to distribute data across and within the hardware nodes and to manage queries against that data. In my world, this falls under the heading of intelligent partitioning, which is not itself unique: in fact, three of the four vendors listed here do it. Of course, the details vary and DATAllegro’s version no doubt has some features that no one else shares. DATAllegro was also unique in requiring a large (12 terabyte) initial configuration, for close to $500,000. This will also probably change under Microsoft management.
Aster Data lets users select and assemble their own hardware rather than providing an appliance. Otherwise, it generally resembles the Dataupia and DATAllegro appliances in that it uses intelligent partitioning to distribute its data. Aster assigns separate nodes to the tasks of data loading, query management, and data storage/query execution. The vendor says this makes it easy to support different types of workloads by adding the appropriate types of nodes. But DATAllegro also has separate loader nodes, and I’m not sure about the other systems. So I’m not going to call that one unique. Aster pricing starts at $100,000 for the first terabyte.
Kognitio resembles Aster in its ability to use any type of hardware: in fact, a single network can combine dissimilar nodes. A more intriguing difference is that Kognitio is the only one of these systems that distributes incoming data in round-robin fashion, instead of attempting to put related data on the same node. It can do this without creating excessive inter-node traffic because it loads data into memory during query execution, which is itself unique among this group. (The trick is that related data is sent to the same node when it's loaded into memory. See the comments on this post for details.)
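To show the difference between the two placement strategies, here is a toy Python sketch, which is my illustration rather than anything from Kognitio's implementation: rows land on disk in round-robin order, then get re-bucketed by key when pulled into memory for a query.

    from itertools import cycle

    NUM_NODES = 4
    disk = [[] for _ in range(NUM_NODES)]   # rows on disk, one list per node
    next_node = cycle(range(NUM_NODES))

    def load_row(row):
        # Round-robin placement: the loader never inspects the row's keys.
        disk[next(next_node)].append(row)

    def pull_into_memory(key_func):
        # At query time, rows are re-bucketed by key so that related rows
        # end up in the same node's memory (the trick described above).
        memory = [[] for _ in range(NUM_NODES)]
        for node_rows in disk:
            for row in node_rows:
                memory[hash(key_func(row)) % NUM_NODES].append(row)
        return memory

    for cust, amt in [("acme", 10), ("zeta", 5), ("acme", 7), ("zeta", 3)]:
        load_row({"customer": cust, "amount": amt})

    mem = pull_into_memory(lambda r: r["customer"])
    # Every "acme" row now sits in a single node's memory partition.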
Kognitio also wins the prize for the oldest (or, as they would probably prefer, most mature) technology in this group, tracing its WX2 product to the WhiteCross analytical database of the early 1980s. (WX2…WhiteCross…get it?) It also has by far the highest list price, at $180,000 per terabyte. But this is clearly negotiable, especially in the U.S. market, which Kognitio entered just this year. (Note: after this post was originally published, Kognitio called to remind me that a. they will build an appliance for you with commodity hardware if you wish and b. they also offer a hosted solution they call Data Warehousing as a Service. They also note that the price per terabyte drops when you buy more than one.)
Whew. I should probably offer a prize for anybody who can correctly infer which vendors have which features from the above. But I’ll make it easy for you (with apologies that I still haven’t figured out how to do a proper table within Blogger).
                          Dataupia   DATAllegro   Aster Data   Kognitio
Mixed Workload            Yes        Yes          Yes          Yes
Intelligent Partitioning  Yes        Yes          Yes          No
Appliance                 Yes        Yes          No           No
Dynamic Aggregation       Yes        No           No           No
Federated Access          Yes        No           No           No
In-Memory Execution       No         No           No           Yes
Entry Cost per TB         $10K(1)    ~$40K(2)     $100K        $180K
(1) $19.5K for 2TB
(2) under $500K for 12TB; pre-acquisition pricing
As I noted earlier, some of these differences may not really matter in general or for your application in particular. In other cases, the real impact depends on the implementation details not captured in such a simplistic list. So don’t take this list for anything more than it is: an interesting overview of the different choices made by analytical database developers.
Wednesday, August 27, 2008
11 comments:
David:
One quick point, if I may: Kognitio's offering is called Data Warehousing as a Service (DaaS). In addition, please understand that Kognitio allows its WX2 database to be deployed across multiple locations at no additional charge, since its licensing is based on the amount of data being analyzed, not seats or sites.
Thanks.
David,
You made a statement I didn't quite understand:
"It can do this [distribute incoming data in a round-robin fashion] without creating excessive inter-node traffic because it loads data into memory during query execution"
Can you explain this in a little more detail?
Sorry to have been obscure. Most MPP databases place related data on the same node. For example, if most of your queries look at data by customer, you might put all data for the same customer on the same node. This means that most query processing can occur without moving data from one node to another, which improves performance because such movement could easily become a bottleneck. This allocation process is what I referred to as "intelligent partitioning", and, as I said, is a very common strategy. Kognitio doesn't bother with this, apparently because loading data into memory before the query is executed allows it to reduce the volume of inter-node traffic. Quite honestly, I don't see why that is so, but I'll take their word for it.
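To see why co-location avoids inter-node traffic, consider this little Python sketch (mine, not any vendor's code): when rows are partitioned by customer, each node can total its own customers locally, and only the small per-node results ever cross the network.

    NUM_NODES = 4

    def node_of(customer):
        # Intelligent partitioning: the same customer always maps to the
        # same node, so all of a customer's rows are co-located.
        return hash(customer) % NUM_NODES

    nodes = [[] for _ in range(NUM_NODES)]
    for cust, amt in [("acme", 10), ("zeta", 5), ("acme", 7)]:
        nodes[node_of(cust)].append((cust, amt))

    # Each node aggregates locally; no raw rows cross the network.
    partials = []
    for rows in nodes:
        local = {}
        for cust, amt in rows:
            local[cust] = local.get(cust, 0) + amt
        partials.append(local)

    # The coordinator merges only the tiny per-node results. Because each
    # customer lives on exactly one node, the merge needs no further math.
    totals = {}
    for partial in partials:
        totals.update(partial)
    print(totals)   # acme: 17, zeta: 5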
Agreed -- I don't see the connection between loading data into memory and reducing inter-node traffic either :)
I shall ask them to clarify...
David - Thanks for writing about Aster. From our experience, intelligent partitioning is actually far superior to round-robin (also known as 'random') scattering of data across nodes. With random scattering, the system cannot exploit the benefit of optimal organization of data on different nodes and needs to pull data from all nodes. This actually forces systems that do random scattering to keep data in memory, because pulling all data from disks is infeasible. Since memory per node is limited, in-memory systems require a lot of hardware (e.g. 100 nodes for 5 TB), increasing total cost significantly. Aster goes beyond simple hash partitioning and has patent-pending partitioning algorithms for optimally placing data across nodes (POD Partitioning). In addition, Aster nodes optimize query performance by combining compression and aggressive buffering to optimally shuffle data segments at query run-time (POD Transport). With Aster, one could set up a 5 TB system with 3-4 nodes. With 100 nodes, the capacity would be 150-200 TB, giving far superior price/performance results.
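Taking the commenter's figures at face value, the arithmetic behind the hardware claim is easy to check (illustrative numbers only):

    # The commenter's figures: an in-memory system needs ~100 nodes for 5 TB.
    data_gb = 5 * 1024
    in_memory_nodes = 100
    print(data_gb / in_memory_nodes)   # 51.2 GB of data per node's memory

    # Versus the disk-based system claimed to handle 5 TB on 4 nodes:
    disk_nodes = 4
    print(data_gb / disk_nodes)        # 1280.0 GB of data per node on disk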
I did receive a clarification from Kognitio about how WX2 avoids excessive inter-node traffic. The one-sentence answer is that when data is loaded from disk to memory, the system redistributes it so related data is in the same node’s memory.
The long answer explains how they do this efficiently. One critical point is that WX2 converts multi-step SQL statements into a single stream of machine language. This extracts only the necessary data and determines how it should be organized, so the volume of data is minimized and the data makes just one trip through the network before reaching the correct node. It also lets the system process all intermediate results in memory, rather than writing the results of each step to disk as a conventional SQL engine might. Another critical point is that the system can store some data permanently in memory, so it does not need to reload it from disk for each new query. This data is selected by the database administrator, not automatically, so there is some user skill involved. The system does provide some help via reports on past results.
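The "single stream" idea is easier to see with a toy contrast, using Python generators as a stand-in for compiled machine code (which is obviously not how WX2 actually works): a pipelined plan pushes each row through every step in memory, where a naive engine materializes each intermediate result.

    # Naive multi-step execution: each step materializes its full output
    # (a conventional engine might even spill these lists to disk).
    def naive(rows):
        step1 = [r for r in rows if r["amount"] > 0]          # filter
        step2 = [{"cust": r["cust"], "amt": r["amount"] * 1.1}
                 for r in step1]                               # transform
        return sum(r["amt"] for r in step2)                    # aggregate

    # Pipelined execution: generators chain the same steps so each row
    # flows through all of them without intermediate materialization.
    def pipelined(rows):
        step1 = (r for r in rows if r["amount"] > 0)
        step2 = ({"cust": r["cust"], "amt": r["amount"] * 1.1}
                 for r in step1)
        return sum(r["amt"] for r in step2)

    rows = [{"cust": "acme", "amount": 10}, {"cust": "zeta", "amount": -2}]
    assert naive(rows) == pipelined(rows)   # same answer, less materialization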
The Kognitio Web site offers several white papers with some additional information. Start with the one written by Bloor Research, whose work I greatly respect. But even that paper doesn’t make the critical point about data being redistributed when loaded into memory--you read it here first.
David:
Two other points (apologies for multiple posts): Kognitio's database is WX2, not WS2...and to clarify, WX2 has been repeatedly and significantly upgraded during its time on the market. Yes, the database is the "most mature" on the market today (to use your words), but it's also in its sixth iteration, with significant investment in time and money being made to get WX2 to the point where it is today. It's so stable, in fact, that several companies are using WX2 in a BC/DR implementation. You may be less likely to do that with databases where the code is not as proven over time.
Thanks, again...and have a great weekend.
http://www.asterdata.com/blog/index.php/2008/10/06/aster-ncluster-30-aligning-product-with-vision/
http://www.asterdata.com/blog/index.php/2008/10/06/growing-your-business-with-frontline-data-warehouses/
http://www.asterdata.com/blog/index.php/2008/10/06/conversation-with-lenin-gali-director-of-bi-for-sharethis/
This is a really useful posting.
How are these tools different from vendors like Vertica and ParAccel that boast the use of column-based DBMSs?
Hi Andy,
Thanks for your question. I've addressed that topic in some detail in a recent DM Review column, which you can find in my archive at
http://archive.raabassociatesinc.com/2008/07/analytical-database-options/
Basically, the MPP systems use a row-oriented structure and rely on partitioning to spread the work of querying across multiple processors and memory stores. The columnar systems use a column-oriented structure that lets them reduce the work of querying by only reading the columns of data needed for a particular query. The one other salient point is that the columnar systems generally have not been deployed at the 100TB scales common to the MPP systems. (Sybase IQ is a major exception--it has proven itself at that level.) Of course, the vendors swear scalability is no problem, but I always want to see some production installations first.
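A schematic illustration of that difference, in Python rather than any real storage engine: to sum one column, the row store touches every field of every row, while the column store reads only the one array it needs.

    # Row-oriented storage: each record is stored (and read) as a whole.
    row_store = [
        {"cust": "acme", "region": "East", "amount": 10, "date": "2008-08-01"},
        {"cust": "zeta", "region": "West", "amount": 5,  "date": "2008-08-02"},
    ]
    total = sum(r["amount"] for r in row_store)   # reads all four fields per row

    # Column-oriented storage: one array per column; a query touches only
    # the columns it actually references.
    col_store = {
        "cust":   ["acme", "zeta"],
        "region": ["East", "West"],
        "amount": [10, 5],
        "date":   ["2008-08-01", "2008-08-02"],
    }
    total_col = sum(col_store["amount"])   # reads only the "amount" column

    assert total == total_col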