Blog: Mark Madsen

Open source is becoming a required option for consideration in many enterprise software evaluations, and business intelligence (BI) isn't exempt. This blog is the interactive part of my Open Source expert channel for the Business Intelligence Network where you can suggest and discuss news and events. The focus is on open source as it relates to analytics, business intelligence, data integration and data warehousing. If you would like to suggest an article or link, send an e-mail to me at open_source_links@ThirdNature.net.

About the author

Mark, President of Third Nature, is a former CTO and CIO with experience working on both the IT and vendor sides, including a stint at a company used as a Harvard Business School case study. Over the past decade, Mark has received awards for his work in data warehousing, business intelligence and data integration from the American Productivity & Quality Center, the Smithsonian Institute and TDWI. He is co-author of Clickstream Data Warehousing and lectures and writes about data integration, business intelligence and emerging technology.

 

Last week I presented at the Big Data Summit and attended Hadoop World in New York. Both events focused on the use of Hadoop and MapReduce for processing and analyzing very large amounts of data.

The Big Data Summit was organized by Aster Data and sponsored by Informatica and MicroStrategy. Given that the summit was held in the same hotel as Hadoop World the following day, it would be reasonable to expect that most of the attendees would be attending both events. This was not entirely the case. Many of the summit attendees came from enterprise IT backgrounds, and these folks were clearly interested in the role of Hadoop in enterprise systems. While many of them were knowledgeable about Hadoop, an equal number were not.

The message coming out of the event was that Hadoop is a powerful tool for the batch processing of huge quantities of data, but coexistence with existing enterprise systems is fundamental to success. This is why Aster Data decided to use the event to launch their Hadoop Data Connector, which uses Aster's SQL-MapReduce (SQL-MR) capabilities to support the bi-directional exchange of data between Aster's analytical database system and the Hadoop Distributed File System (HDFS). One important use of Hadoop is to preprocess, filter, and transform vast quantities of semi-structured and unstructured data for loading into a data warehouse. This can be thought of as Hadoop ETL. Good load performance in this environment is critical.
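
To make the "Hadoop ETL" idea concrete, here is a minimal sketch of the preprocessing pattern: a map-only MapReduce job that filters malformed rows out of raw log data and reshapes the survivors into delimited records a warehouse bulk loader can consume. This is not Aster's connector, just the general technique, and the input layout and delimiter are hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogEtl {

    // Map-only job: parse each raw log line, drop malformed rows, and emit
    // a pipe-delimited record ready for a warehouse bulk loader.
    public static class CleanMapper extends Mapper<Object, Text, NullWritable, Text> {
        private final Text out = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Hypothetical input layout: tab-separated timestamp, user id, URL.
            String[] fields = value.toString().split("\t");
            if (fields.length < 3) {
                return; // skip malformed rows instead of failing the job
            }
            out.set(fields[0] + "|" + fields[1] + "|" + fields[2].toLowerCase());
            context.write(NullWritable.get(), out);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log etl");
        job.setJarByClass(LogEtl.class);
        job.setMapperClass(CleanMapper.class);
        job.setNumReduceTasks(0); // pure filter/transform, no aggregation needed
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // raw logs in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // loader-ready output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Because there is no reduce phase, the cluster simply scans, filters and rewrites the data in parallel, which is exactly the kind of work that precedes a high-performance warehouse load.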

Case studies from Comscore and LinkedIn demonstrated the power of MapReduce in processing petabytes of data. Comscore is aiming to manage and analyze three months of detailed records (160 billion records) using Aster SQL-MR. LinkedIn, on the other hand, is using a combination of Hadoop and Aster's MapReduce capabilities and moving data between the two environments. Performance and parallel processing are important for efficiently managing this exchange of data. This latter message was repeated in several other case studies at both events.

Hadoop World had a much more open source and developer feel to it. It was organized by Cloudera and had about 500 attendees. About half the audience was using Amazon Web Services and clearly experienced with Hadoop. Sponsors included Amazon Web Services, IBM, Facebook and Yahoo, all of whom gave keynotes. These keynotes were great for big numbers. Yahoo, for example, has 25,000 nodes running Hadoop (the biggest cluster has 4,000 nodes). Floor space and power consumption become major issues when deploying this level of commodity hardware. Yahoo processes 490 terabytes of data to construct its web index, which takes 73 hours to build and has grown 50% in a year. This highlights the issues facing many web-based companies today, and potentially other organizations in the future.

Although the event was clearly designed to evangelize the benefits of Hadoop, all of the keynotes emphasized interoperability with, rather than replacement of, existing systems. Two relational DBMS connectors were presented at the event: Sqoop from Cloudera, and Vertica's support for Cloudera's DBInputFormat interface. Cloudera also took the opportunity to announce that it is evolving from a Hadoop services company into a developer of Hadoop software.
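
For a sense of what the DBInputFormat side of these connectors looks like to a developer, here is a hedged sketch using the generic DBInputFormat classes that ship with Hadoop (vendor connectors like Vertica's supply their own optimized implementations). The JDBC URL, credentials, table and column names are all invented for illustration.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// A record type the framework populates from a JDBC ResultSet.
public class OrderRecord implements Writable, DBWritable {
    long orderId;
    double amount;

    // Called by DBInputFormat with a row from the query result.
    public void readFields(ResultSet rs) throws SQLException {
        orderId = rs.getLong("order_id");
        amount = rs.getDouble("amount");
    }

    public void write(PreparedStatement ps) throws SQLException {
        ps.setLong(1, orderId);
        ps.setDouble(2, amount);
    }

    // Hadoop's own serialization, used when records move between tasks.
    public void readFields(DataInput in) throws IOException {
        orderId = in.readLong();
        amount = in.readDouble();
    }

    public void write(DataOutput out) throws IOException {
        out.writeLong(orderId);
        out.writeDouble(amount);
    }

    // Wire a job to read a relational table instead of HDFS files.
    public static void configureInput(Job job) {
        DBConfiguration.configureDB(job.getConfiguration(),
                "com.mysql.jdbc.Driver",     // driver class
                "jdbc:mysql://dbhost/sales", // placeholder URL
                "etl_user", "secret");       // placeholder login
        DBInputFormat.setInput(job, OrderRecord.class,
                "orders",               // table
                null,                   // optional WHERE conditions
                "order_id",             // ORDER BY column, used to split the table
                "order_id", "amount");  // columns to read
        job.setInputFormatClass(DBInputFormat.class);
    }
}
```

Each map task then receives a split of rows from the table as OrderRecord instances, so a relational table can feed a MapReduce job just as HDFS files do.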

The track sessions were grassroots Hadoop-related presentations. There was a strong focus on improving the usability of Hadoop and adding database and SQL query features to the system. On several occasions I felt people were reinventing the wheel, trying to solve problems that have already been solved by both open source and commercial database products. There is a clear danger in trying to expand Hadoop and MapReduce from an excellent system for the batch processing of vast quantities of information into a more generalized DBMS.

The only real attack on existing database systems came, surprisingly, from the financial services company J.P. Morgan. The presentation started off by denigrating current systems and presenting Hadoop as an open source solution that solved everyone's problems at a much lower cost. When it came to use cases, however, the speakers positioned Hadoop as suitable for processing large amounts of unstructured data with high data latency. They also listed a number of "must have" features for using Hadoop in traditional enterprise situations: improved SQL interfaces, enhanced security, support for a relational container, reduced data latency, better management and monitoring tools, and an easier-to-use programming model for developers. Sounds like a relational DBMS to me. Somehow the rhetoric at the beginning of the session didn't match the more practical perspectives in the latter part of the presentation.

In summary, it is clear that Hadoop and MapReduce have an important role to play in data warehousing and analytical processing. They will not replace existing environments, but will interoperate with them when traditional systems are incapable of processing big data and when certain sectors of an organization use Hadoop to mine and explore the vast data mountain that exists both inside and outside of organizations. This makes the current trend toward hybrid RDBMS SQL and MR solutions from companies such as Aster Data, Greenplum and Vertica an interesting proposition. It is important to point out, however, that each of these vendors takes a different approach to providing this hybrid support and it is essential that potential users match the hybrid solution to application requirements and developer skills. It is also important to note that Hadoop is more than simply MapReduce.   

If you want to get up to speed on all things Hadoop, read some case studies, and gain an understanding of its pros and cons versus existing systems, then get Tom White's (I am not related!) excellent new book, "Hadoop: The Definitive Guide," published by O'Reilly.



Posted October 6, 2009 1:37 PM

Open source master data management got a boost on Monday when Talend announced that it had acquired Xtentis MDM from Amalto. The product is geared toward creating repository-style MDM applications, for example a product master data repository or a customer key cross-reference hub.


Xtentis is a Java- and XML-based product with an Eclipse UI, so it's a reasonably good technical fit with Talend's tools. While the product information links have been removed from Amalto's web site, you can still access the Xtentis product data sheet if you're interested in the functionality and user interface.

 

Talend's goal is to provide a generic MDM application that can be used for different subject areas. They will take over the application from Amalto and are already working on open-sourcing the base code, with a planned product release in January 2010. It's not clear yet what the differences will be between the community edition and the subscription version. If their ETL tools are an indication, the differences will likely be in ease of use for multiple developers, manageability and more complete product line integration.

 

The development plan Talend described involves integration with their ETL and real-time integration tools. This is typically a weak point with MDM products on the market. Most MDM software, whether transaction-oriented or analytical, still requires the use of an ETL or real-time data integration product.

 

Talend claims this is, or will be, the first open source MDM product. That depends on how you define MDM, as the Sun Mural MDM project was announced in May 2008. I lean toward Talend's claim of "first" because the Mural project was more of a data interchange and index system aimed at Java developers. Most IT people think of master data management as something broader and deeper, with more functionality.

 

Mural is also unlikely to see much adoption. The project is still in an early state and the last official Mural announcement was over a year ago, showing how little has been going on internally. With Oracle owning multiple data integration and MDM products, it's hard to imagine that Mural will see any budget or staff dedicated to maintenance.

Posted October 1, 2009 7:30 AM

I hear fairly often that consulting firms and systems integrators are more likely than IT to use open source tools because it allows them to be more competitive. They gain an edge by saving customers money on software licenses, or by having more customizable tools for projects, thus pricing themselves under competitors or providing a better fit with client needs. The other hope is that freeing project budget from software licenses translates into more money spent on work with the consultants.

Figure: Open source tool usage by respondent role

While these points are all valid, the survey data on adoption seems to disprove the belief. An interesting pattern in the data is that consultants are generally less likely than IT professionals to use open source tools in this space (10% for consultants versus 36% for IT). The usage by respondent role is shown in the chart.

It is notable that 49% of the consultants and systems integrators are evaluating open source software today, signaling a possible shift. What this also says is that, far from leading the technology market, SIs and consultants seem to trail it, following the money rather than leading their customers.

Even with the sudden rise in evaluation, consultants and SIs significantly trail IT departments. If you are in an IT organization that relies heavily on consultants for project work, then using open source tools will require finding qualified consultants ahead of time. Given these statistics, they are likely to be rarer than you expect.

 

We'd love to have your input on open source BI/DW software you're using and the challenges you faced. If you have 10 minutes, take our online survey. It will be open through September 22.


Posted September 16, 2009 4:30 AM

The MySQL conference keynote videos and presentation files are all posted, so you can download the ones you're interested in now. Embedded below are the video and slide deck for my keynote on Thursday.

The gist of this presentation: business intelligence and analytics are the #1 IT spending priority, BI technology is becoming a commodity, and open source BI and DW tools are maturing - backed by stats on open source BI and DW adoption.




If you want to look at the slides at your own pace, they're embedded below:


The open source stats are from a survey on open source BI adoption I've been running for a couple months, sponsored by Infobright and Jaspersoft. You can see a recap of this keynote plus some more stats and short talks by the CEOs of Infobright and Jaspersoft in "The State of Open Source BI and Data Warehousing" webcast at the MySQL web site.

We'll have a paper discussing the results of the adoption survey available for download soon. Look for it some time next month.

Links (includes case studies from Monolith Software and Consorte Media):
Keynote video
Slides (PDF available via Slideshare)


Posted May 22, 2009 12:37 PM

I thought it would be nice to share some data on database size from the open source business intelligence / data warehouse adoption survey we've been running. Database size is a popular topic so some real data on size might be helpful if you're planning a deployment.

The question we asked was "How much raw data (in gigabytes) is being stored or accessed?" The chart below shows the results (with some annotation).

Figure: Raw data sizes reported by survey respondents (with annotation)

The databases in use are not all open source; the sizes shown cover every database type. The restriction is that respondents are using open source in some part of the data warehouse stack, so an open source BI tool accessing an Oracle database would be included. Even so, the bulk of the respondents are using open source databases like MySQL and Postgres.

The general pattern follows what we see in the commercial data warehouse market, with the bulk of installations (82%) less than a terabyte in size. Open source deployments do skew smaller overall - in the purely commercial market, the comparable number is roughly 65%.

The truth is that for many organizations, size is not a critical factor relative to other concerns. At the same time, query performance is still a challenge for most. The difficulty of getting good query performance is one of the major factors driving people to look at appliances, columnar databases and other data warehouse platforms.

In the open source market there are quite a few options, some of which I listed a while ago. Two notable companies in the MySQL market are Infobright, makers of a columnar storage engine, and Kickfire, maker of a hardware-based MySQL-compatible appliance. Both target the largest part of the market - the under-10-terabyte space - with significantly lower costs than one expects in the data warehouse platform market.
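
The practical meaning of "MySQL-compatible" is that existing clients connect as if to stock MySQL, so reporting tools and code need no changes. A small illustrative sketch, with an invented host, schema and credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CompatCheck {
    public static void main(String[] args) throws Exception {
        // The stock MySQL JDBC driver and wire protocol are used as-is; only
        // the host differs (imagine an Infobright or Kickfire instance here).
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://analytics-host:3306/warehouse", "report_user", "secret");
        Statement stmt = conn.createStatement();
        // A typical reporting aggregate; the table and columns are invented.
        ResultSet rs = stmt.executeQuery(
                "SELECT region, SUM(revenue) FROM sales GROUP BY region");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
        }
        conn.close();
    }
}
```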

I'll be doing a live webcast to preview some of the other data from the survey on Wednesday, April 29 at 10:00 AM Pacific. Also speaking will be Miriam Tuerk, CEO of Infobright, and Brian Gentile, CEO of Jaspersoft. After our respective talks we'll be taking questions online.

Also, the survey is running through May, so you can still add your stats to the picture.


Posted April 28, 2009 6:03 PM

I uploaded the slides from last week's webcast on operational data integration and open source. They're embedded below for online viewing.

This is an overview of the difference between application integration and data integration, the differences in use and requirements for DI between business intelligence and OLTP, some integration architecture discussion, and why open source is an even better fit in the operational DI arena than it is for BI projects.

If you want to download a PDF of the slides or listen to a replay, you can find this talk under "How to Use the Right Tools for Operational Data Integration" on Talend's webcast page. There's no direct link to the presentation page so you have to click through.


More detailed description of the webcast

Data integration tools were once used solely in support of data warehousing, but that has been changing over the past few years. The fastest growing area today for data integration is outside the data warehouse, whether it's one-time data movement for migrations and consolidations or real-time data synchronization for master data management projects.


Data integration tools have proven to be faster, more flexible and more cost effective for operational data integration than the common practice of hand-coding or using application integration technologies. The developer focus of these technologies also makes them a prime target for open source commoditization.



During the presentation you will learn about the differences between analytical and operational data integration, technology patterns and options, and recommendations for how to begin using tools for operational data integration.



Key points:

  • How to map common project scenarios to integration architectures and tools
  • The technology and market changes that favor use of tools for operational data integration
  • The differing requirements for operational vs. analytic data integration
  • Advantages of open source for data integration tasks


Posted March 23, 2009 5:00 AM

I'm doing a research survey on open source data warehouse and BI adoption that takes about 5 minutes to fill out. There's an almost complete lack of data specific to the business intelligence and data warehouse market - all the open source studies I read are generic, and at best they extrapolate what's happening from the general IT market. I want to change that.

If you have evaluated open source tools in any area of the business intelligence stack - databases, ETL tools, reporting, visualization - please consider filling out the survey whether or not the tools passed your evaluation, so we can begin to understand where and how people are using open source. It's as important to understand what's wrong with open source tools as what's right.

The results of this research will be summarized in a keynote at the MySQL Conference on April 23 but we'll be extending it throughout this year.

This first survey we're running is about adoption so we can answer some basic questions:

  • What industries, departments or functional areas are using open source?
  • What countries are leading the adoption?
  • What software categories are being used: reporting, OLAP, ETL, data mining, databases?
  • Why are people choosing or deciding against open source in this segment of the market?

Thanks also to Infobright (open source columnar database) and Jaspersoft (open source BI stack), who are kind enough to donate a TomTom One XL portable GPS for a prize drawing after the survey is done. If you complete the survey and provide your information, you'll be entered to win. We'll do the drawing and announce the winner at the MySQL conference.

Whether your open source evaluation led you to think of it as the holy grail or the devil's chalice, please take 5 minutes to fill out the survey.


Posted March 20, 2009 10:18 AM

A useful attribute of all open source tools is the ability to download and start evaluating the software immediately to see if it fits the requirements. There is no vendor involvement slowing the process of evaluation.

Bloor Research published a report comparing the costs of various data integration products, but one of the more interesting items isn't about cost - it's the average time required for a company to evaluate the products. Open source is the clear winner here.

Figure: Person-weeks required for evaluation. Source: Bloor Research

When working with proprietary software vendors, trials and proofs of concept require management involvement and multiple levels of approval. The legal department is often involved since there's usually a trial license agreement. The process is not under the developer's control: the schedule is governed by vendor terms, and the evaluation requires extra work.

Beyond the ease of evaluation, it's easier to get started with a project. With open source, time spent evaluating tools that might never be used can instead be spent on a proof of concept that is reusable in production.

If the proof of concept fails, no more time has been spent than would have been with any other software. If it succeeds, the work can be moved directly into production without the up-front commitments that traditional vendors require.

Speed to deliver is one of the open source advantages I described in more detail in the open source data integration paper I wrote for Talend. Also, if you'd like to read the Bloor report, you can download a full copy from the Pervasive web site.




Posted March 3, 2009 11:55 AM
Below are the slides from the presentation I gave yesterday on open source BI adoption. The talk is a brief overview of the rationale and benefits, some of the situations appropriate for use, and a few thoughts on internal barriers to use.

This is part of a webcast done jointly with Actuate on the Business Intelligence Network. You can listen to the archived presentation, as well as see Actuate's presentation on BIRT, by going to the webcast registration page.

Posted February 20, 2009 4:20 AM

Open source database adoption for BI and data warehousing appears to lag adoption of open source BI and ETL tools. There are lots of reasons for this, documented elsewhere, but one reason that is becoming less valid is performance.

An IDC survey of data warehouse size reported that ~60% of data warehouses are less than a terabyte in size. Several other surveys over the past few years reported similar findings. This tells us that the industry focus on scale-out options is overkill for the majority of people deploying data warehouses. What's needed is cost-effective performance at a scale of less than a terabyte. There are interesting vendors of both closed and open source databases and appliances that work well in this size range.

Gartner recently gave some recommendations on open source databases and data warehousing that I think are inappropriate. They suggest MySQL as the only viable option. Part of their rationale is sound: commercial support and company viability. Most of the open source databases come from smaller vendors, or the projects are community supported rather than commercial, making them less suitable for enterprise use.

Where Gartner goes wrong is that MySQL isn't as good for BI workloads. It's easy to find information on basic MySQL performance, but not for data warehouse workloads. Maybe that's why Gartner overlooked this MySQL performance test. MySQL couldn't complete the 100GB scale tests, and part of the reason is obvious: it lacks features needed for large-scale queries, such as hash joins and intra-query parallelism.

These are some of the reasons companies have stepped in to offer new storage engines and appliances that are MySQL compatible. Infobright is delivering a MySQL-compatible, BI-focused product - it's hard to get proper scaling and performance with standard MySQL as a data warehouse database. Kickfire offers a different option for performance in an appliance package. Postgres and Ingres offer better features for both querying and manageability with data warehouse workloads. EnterpriseDB delivers commercial support for Postgres as well as a scale-out option, removing another of Gartner's criticisms.

Jos van Dongen did a small-scale TPC-H benchmark with a group of open source databases and one of the major vendors (name withheld, since they don't allow third-party publication of benchmarks). What's most interesting is how well the (relatively new) MySQL 5.1 release performed. Even more impressive is how well MonetDB and LucidDB performed relative to the others. Maybe that shouldn't be a surprise, since we're talking about columnar engines, query workloads and a small-scale test. He's got a nice chart showing the BI-related features in these open source databases.
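
For readers curious about the mechanics of a small-scale benchmark like this, here is a simplified sketch of a JDBC timing harness that runs the same query against several databases. It is not Jos's actual setup; a real TPC-H run also controls caching, warm-up and result validation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryTimer {
    // Usage: QueryTimer "<query>" <jdbc-url-1> [<jdbc-url-2> ...]
    public static void main(String[] args) throws Exception {
        String query = args[0]; // e.g. a TPC-H style aggregate query
        for (int i = 1; i < args.length; i++) {
            try (Connection conn = DriverManager.getConnection(args[i]);
                 Statement stmt = conn.createStatement()) {
                long start = System.nanoTime();
                long rows = 0;
                try (ResultSet rs = stmt.executeQuery(query)) {
                    while (rs.next()) {
                        rows++; // drain the full result so the timing is honest
                    }
                }
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("%s: %d rows in %.2f s%n", args[i], rows, secs);
            }
        }
    }
}
```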

When the dataset grows to the 10GB and 100GB scales (which Jos is doing), the results will surely change. The maturity of a database really shows when you have to do three things: manage larger volumes of data, optimize complex queries on that volume, and deal with concurrent users querying the data. I suspect there will be a reshuffling of his benchmark results at larger sizes.

Other interesting performance information comes from a Postgres benchmark run at Sun that Josh Berkus wrote about in his blog, where he notes that Postgres is almost as fast as Oracle on equivalent hardware, at significantly lower cost. (I know this is old info for those of you who follow Postgres more closely.) While not a DW-specific benchmark, it does demonstrate equivalent performance levels - the key point. A similar benchmark was done with MySQL, DB2, Oracle and Microsoft a few years ago and showed similar results.


Posted February 8, 2009 12:19 PM