Every now and then you read something along the lines of “more words will be published in the next 30 minutes than in the whole of human history up to the battle of Lepanto”. Whether this is literally true or not (and who could know?), there is certainly more data sloshing around than ever. This is partly because there are so many more transactions, partly because each transaction generates so much more detail, and partly because so much previously hidden data is now exposed on the Internet. The frustration for marketers is that they know this data could be immensely useful, if only they could employ it effectively.
This is the fundamental reason I spend so much time on database technology (to manage all that data), analytics (to make sense of it) and marketing automation (to do something useful with it). Gaining value from data is the greatest new challenge facing marketers today—as opposed to old but still important challenges like managing your brand and understanding your customers. Since the answers are still being discovered, it’s worth lots of attention.
One subject I haven’t explored very often is mining data from the public Internet (as opposed to data captured on your own Web site—the “private” Internet, if you will). Marketers don’t seem terribly concerned with this, apart from specialized efforts to track comments in the press, blogs, social networks and similar forums. Technologists do find the subject quite fascinating, since it offers the intriguing challenge of converting unstructured data into something more useful, preferably using cool semantic technology. It doesn’t hurt that tons of funding is available from government agencies that want to spy on us (for our own protection, of course). The main marketing application of this work has been building business and consumer profiles with information from public Web sources. Zoominfo may be the best known vendor in this field, although there are many others.
But plenty of other interesting work has been going on. I recently spoke with Denodo, which specializes in what are called “enterprise data mashups”. This turns out to be a whole industry (maybe you already knew this—I admit that I need to get out more). See blog posts by Dion Hinchcliffe here and here for more than I’ll ever know about the topic. What seems to distinguish enterprise mashups from the more familiar widget-based Web mashups is that the enterprise versions let developers take data from sources they choose, rather than sources that have already been formatted for use.
Since Denodo is the only mashup software I’ve examined, I can’t compare it with its competitors. But I was quite impressed with what Denodo showed me. Basically, its approach is to build specialized connectors, called “wrappers,” that (a) extract specified information from databases, Web sites and unstructured text sources, (b) put it into a queryable structure, and (c) publish it to other applications in whatever format is needed. Each of these is easier said than done.
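To make the three-step pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative: the class names, the sample data, and the format choices are my own assumptions, not Denodo's actual API.

```python
# Illustrative sketch of the wrapper pattern: (a) extract records from
# a source, (b) hold them in a queryable structure, (c) publish them in
# whatever format a consumer asks for. Names are hypothetical.

import csv
import io
import json


class Wrapper:
    """Three-step wrapper: extract -> structure -> publish."""

    def __init__(self, extract_fn):
        self.extract_fn = extract_fn  # source-specific extraction logic
        self.rows = []                # the structured, queryable result

    def extract(self):
        # Step (a): pull raw records from the underlying source.
        self.rows = list(self.extract_fn())
        return self

    def query(self, predicate):
        # Step (b): the structured rows can be filtered like a table.
        return [r for r in self.rows if predicate(r)]

    def publish(self, fmt="json"):
        # Step (c): expose the result in the format the consumer needs.
        if fmt == "json":
            return json.dumps(self.rows)
        if fmt == "csv":
            buf = io.StringIO()
            writer = csv.DictWriter(buf, fieldnames=self.rows[0].keys())
            writer.writeheader()
            writer.writerows(self.rows)
            return buf.getvalue()
        raise ValueError("unsupported format: " + fmt)


# Hypothetical source: a list standing in for a scraped rate-plan table.
plans = [{"carrier": "A", "monthly": 39}, {"carrier": "B", "monthly": 49}]
wrapper = Wrapper(lambda: plans).extract()
cheap = wrapper.query(lambda r: r["monthly"] < 45)
print(wrapper.publish("json"))
```

The point of the pattern is that step (a) is the only part that varies by source; once the data is in rows, querying and publishing are generic.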
Denodo showed me how it would build a wrapper to access competitive data exposed on a Web site—in this case, mobile phone rate plans. This was a matter of manually accessing the competitor’s site, entering the necessary parameter (a Zip code), and highlighting the result. Denodo recorded this process, read the source code of the underlying Web page, and developed appropriate code to repeat the steps automatically. This code was embedded in a process template that included the rest of the process (restructuring the data and exposing it). According to Denodo, the wrapper can automatically adjust itself if the target Web page changes: this is a major advantage, since wrappers would otherwise break whenever a page was redesigned. If the Web page changes more than Denodo can handle, it will alert the user.
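The mechanics of that recorded wrapper can be sketched roughly as follows. This is a hedged illustration, not Denodo's actual code: the HTML, the pattern, and the function names are all invented, and a real wrapper would replay a live HTTP request rather than return a canned page.

```python
# Hedged sketch of the scraping step: parameterize a recorded request
# (here, a Zip code), extract the highlighted value from the page
# source, and alert the user when the page no longer matches. The page
# markup and selector pattern are hypothetical.

import re

RATE_PATTERN = re.compile(r'<span class="rate">\$(\d+\.\d{2})</span>')


def fetch_page(zip_code):
    # Stand-in for the recorded HTTP request; a real wrapper would
    # replay the original form submission with the given Zip code.
    return '<html><body><span class="rate">$39.99</span></body></html>'


def extract_rate(zip_code):
    html = fetch_page(zip_code)
    match = RATE_PATTERN.search(html)
    if match is None:
        # The page changed more than the wrapper can handle: alert.
        raise RuntimeError("page structure changed for zip " + zip_code)
    return float(match.group(1))


print(extract_rate("10001"))
```

The self-adjustment Denodo claims would sit between these two extremes: instead of failing as soon as the exact pattern breaks, the wrapper would try to relocate the target value in the changed page, and raise the alert only when that fails too.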
As I mentioned, Denodo will place the data it retrieves into a queryable format—essentially, an in-memory database table. It could also copy the data into a physical database if desired, although this is an exception. The data can be sorted and otherwise manipulated, and joined with data from other wrappers using normal database queries. Results can be posted back to the original sources or be presented to external systems in pretty much any format or interface: HTML, XML, CSV, ODBC, JDBC, HTTP, Web service, and the rest of the usual alphabet soup. Denodo can join data using inexact as well as exact matches, allowing it to overcome common differences in spelling and format.
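The in-memory join step is easy to picture with standard tools. In this sketch, rows retrieved by two hypothetical wrappers land in a SQLite in-memory database and are combined with an ordinary SQL join; a difflib comparison stands in for the inexact matching, since Denodo's actual matching logic isn't documented here.

```python
# Sketch of joining wrapper output in memory: two hypothetical result
# sets loaded into SQLite's :memory: database, combined with a normal
# SQL join, plus a stand-in fuzzy comparison for inexact matches.

import difflib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plans (carrier TEXT, monthly REAL)")
conn.execute("CREATE TABLE ratings (carrier TEXT, score INTEGER)")
conn.executemany("INSERT INTO plans VALUES (?, ?)",
                 [("Acme Mobile", 39.99), ("BetaTel", 49.99)])
conn.executemany("INSERT INTO ratings VALUES (?, ?)",
                 [("Acme Mobile", 4), ("BetaTel", 3)])

# Exact join, as with any database query.
joined = conn.execute(
    "SELECT p.carrier, p.monthly, r.score "
    "FROM plans p JOIN ratings r ON p.carrier = r.carrier "
    "ORDER BY p.monthly").fetchall()


def fuzzy_equal(a, b, threshold=0.8):
    # Tolerate spelling and format differences between sources.
    return difflib.SequenceMatcher(
        None, a.lower(), b.lower()).ratio() >= threshold


print(joined)
print(fuzzy_equal("Acme Mobile", "Acme  Mobile"))
```

Once the data is queryable like this, publishing it as HTML, XML, CSV or through ODBC/JDBC is just a matter of serializing the joined rows.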
The technicians among you may find this terribly exciting, but to most marketers it is pure gobbledygook. What really matters to them is the applications Denodo makes possible. The company cites several major areas, including gathering business and competitive intelligence; merging customer data across systems; and integrating business processes with Web sites.
Some of these applications resemble the integration-enabled interaction management offered by eglue (click here for my post). The difference is Denodo’s greater ability to access external data sources, and what I believe is a significantly more sophisticated approach to data extraction. On the other hand, eglue offers richer features for presenting information to call center agents. It does appear that Denodo significantly lowers the barriers to many kinds of data integration, which should open up all sorts of new possibilities.
The price seems reasonable, given the productivity benefits that Denodo should provide: $27,000 to $150,000 per CPU based on the number of data sources and other application details. An initial application can usually be developed in about two weeks.
Denodo was founded in Spain in 1999. The company has recently expanded outside of Europe and now has nearly 100 customers worldwide.
Wednesday, May 28, 2008