Thursday, 14 February 2008

What's New at DataFlux? I Thought You'd Never Ask.

What with it being Valentine’s Day and all, you probably didn’t wake up this morning asking yourself, “I wonder what’s new with DataFlux?” That, my friend, is where you and I differ. Except that I actually asked myself that question a couple of weeks ago, and by now have had time to get an answer. Which turns out to be rather interesting.
DataFlux, as anyone still reading this probably knew already, is a developer of data quality software and is owned by SAS. DataFlux’s original core technology was a statistical matching engine that automatically analyzes input files and generates sophisticated keys which are similar for similar records. This has now been supplemented by a variety of capabilities for data profiling, analysis, standardization and verification, using reference data and rules in addition to statistical methods. The original matching engine is now just one component within a much larger set of solutions.
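To make that a bit more concrete, here is a toy sketch of what match-key generation looks like in general: normalize the fields, reduce each token to a rough phonetic form, and string the results together so similar records end up with similar keys. This is generic illustration in Python, emphatically not DataFlux’s actual algorithm, and the field names are made up for the example.

```python
# Toy sketch of match-key generation for record matching -- a generic
# illustration only, not DataFlux's proprietary method. The goal is a
# compact key that comes out similar for records that describe the same
# real-world entity despite typos and formatting differences.
import re

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9 ]", "", text.lower())
    return " ".join(text.split())

def phonetic_token(word):
    """Crude phonetic reduction: keep the first character, drop vowels,
    and collapse repeated letters (Soundex-like, much simplified)."""
    if not word:
        return ""
    head, rest = word[0], re.sub(r"[aeiouyhw]", "", word[1:])
    collapsed = re.sub(r"(.)\1+", r"\1", rest)
    return (head + collapsed)[:4]

def match_key(name, street):
    """Build a single key from name and street address fields."""
    tokens = normalize(name).split() + normalize(street).split()
    return "-".join(phonetic_token(t) for t in tokens)

# Similar records yield similar (not necessarily identical) keys, which a
# downstream step can then group or compare fuzzily.
print(match_key("Robert Smith", "123 Main Street"))  # rbrt-smt-123-mn-strt
print(match_key("Robt. Smyth",  "123 Main St."))     # rbt-smt-123-mn-st
```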
In fact, and this is what I find interesting, much of DataFlux’s focus is now on the larger issue of data governance. This has more to do with monitoring data quality than simple matching. DataFlux tells me the change has been driven by organizations that face increasing pressures to prove they are doing a good job with managing their data, for reasons such as financial reporting and compliance with government regulations.
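For readers who haven’t lived in this world: “monitoring data quality” means things like profiling tables and tracking rule violations over time, rather than just linking duplicate records. A minimal, hypothetical sketch of that kind of check (generic Python, not a DataFlux interface; the rule names and threshold are invented for the example) might look like this:

```python
# Hypothetical data-quality monitoring check -- a generic sketch, not a
# DataFlux interface. Profile a batch of customer records against simple
# rules and report any rule whose failure rate exceeds a threshold.
import re

RULES = {
    "email_present":    lambda rec: bool(rec.get("email")),
    "email_wellformed": lambda rec: bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$",
                                                  rec.get("email", ""))),
    "zip_is_5_digits":  lambda rec: bool(re.fullmatch(r"\d{5}", rec.get("zip", ""))),
}

def profile(records, threshold=0.05):
    """Return the rules whose failure rate exceeds the threshold."""
    exceptions = {}
    for name, rule in RULES.items():
        failures = sum(1 for rec in records if not rule(rec))
        rate = failures / len(records)
        if rate > threshold:
            exceptions[name] = rate
    return exceptions

batch = [
    {"email": "jane@example.com", "zip": "27513"},
    {"email": "bad-address",      "zip": "2751"},
]
print(profile(batch))  # {'email_wellformed': 0.5, 'zip_is_5_digits': 0.5}
```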
The new developments also encompass product information and other types of non-name-and-address data, usually labeled “master data management”. DataFlux reports that non-customer data is the fastest-growing portion of its business. DataFlux is well suited for non-traditional matching applications because the statistical approach does not rely on topic-specific rules and reference databases. Of course, DataFlux does use rules and reference information when appropriate.
The other recent development at DataFlux has been the creation of “accelerators”, which are prepackaged rules, processes and reports for specific tasks. DataFlux started offering these in 2007 and now lists one each for customer data quality, product data quality, and watchlist compliance. More are apparently on the way. Applications like this are a common development in a maturing industry, as companies that started by providing tools gain enough experience to understand the applications commonly built with those tools. The next step, which DataFlux hasn’t reached yet, is to become even more specific by developing packages for particular industries. The benefit of these applications is that they save clients work and allow quicker deployments.
Back to governance. DataFlux’s movement in that direction is an interesting strategy because it offers a possible escape from the commoditization of its core data quality functions. Major data quality vendors including Firstlogic and Group 1 Software, plus several smaller ones, have been acquired in recent years, and matching functions are now embedded within many enterprise software products. Even though there have been some intriguing new technical approaches from vendors like Netrics and Zoomix, this is a hard market to penetrate on better technology alone. It seems that DataFlux moved into governance more in response to customer requests than proactive strategic planning. Even so, they have done well to recognize and seize the opportunity when it presented itself. Not everyone is so responsive. The question now is whether other data quality vendors will take a similar approach or whether this will remain a long-term point of differentiation for DataFlux.