Friday, April 24, 2015

Bombora Feeds B2B Data to Everyone

One of the little patterns that caught my attention at last week’s Marketo conference was that several vendors mentioned using data from the same source: Madison Logic Data, which recently renamed itself Bombora*. The company was already familiar to me through clients who deal with it. But I had never gotten a clear picture of exactly what they do. Those additional mentions finally pushed me to explore further.

A couple of emails later and I was on the phone with Erik Matlick, Madison Logic founder and Bombora CEO. We went through a bit of the back story: Madison Logic was founded as a B2B media company six years ago. It built a network of B2B publishers to sell ads and gather data about their site visitors. Both businesses grew nicely, but the company found that selling media conflicted with finding partners to gather data. So last November it spun off the data piece as Madison Logic Data, keeping Madison Logic as the media business. The change from Madison Logic Data to Bombora was announced on April 13.

To which you probably say, who cares? Fair enough. What really matters is what Bombora does today and, more pointedly, what it can do for you. Turns out, that’s quite a bit.

Bombora’s core business is assembling data about B2B companies and individuals. It does this through a network of publishers who put a Bombora pixel on their Web pages, which lets Bombora track activities including article and video viewing, white paper downloads, Webinar attendance, on-site search, and participation in online communities. The company tags its publishers' content with a 2,300-topic taxonomy, allowing it to associate visitors with intent based on the topics they consume. It identifies visitors based on IP address, domain, and registration forms on the publisher sites. It also attaches demographic information drawn from what visitors enter on those registration forms and from the profiles the publishers maintain. The volumes are huge: 4 billion transactions per month, more than 250 million business decision makers, and 85 million email addresses collected in a year.
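To make the mechanics concrete, here is a rough sketch of how a publisher-side pixel and the downstream aggregation might work. It is purely illustrative: the collector host, field names, and topic labels are my own inventions, not Bombora's actual tag or schema.

```python
# A rough sketch (not Bombora's actual tag or data model) of how a publisher-side
# pixel might report content consumption, and how those events could roll up into
# company-level intent. All hosts, fields, and topic names here are hypothetical.

from collections import defaultdict
from urllib.parse import urlencode

def pixel_url(collector_host, publisher_id, topic, event_type, visitor_ip):
    """Build the tracking-pixel URL a publisher page might request."""
    params = {
        "pub": publisher_id,    # which co-op publisher fired the event
        "topic": topic,         # taxonomy topic tagged on the content
        "event": event_type,    # e.g. article_view, whitepaper_download
        "ip": visitor_ip,       # used later to resolve the visitor's company
    }
    return f"https://{collector_host}/p.gif?{urlencode(params)}"

def company_intent(events, resolve_company):
    """Aggregate raw events into {company: {topic: event_count}}."""
    scores = defaultdict(lambda: defaultdict(int))
    for e in events:
        company = resolve_company(e["ip"])  # IP-to-domain lookup, registration data, etc.
        if company:
            scores[company][e["topic"]] += 1
    return scores
```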

All that information has many uses: [feel free to insert your favorite cliché about the importance of data]. Like meat packers who use every part of the pig but the squeal, Bombora is determined to squeeze the most value possible from the data it assembles. This means selling intent and demographic audience segments for display advertising, marketing automation and email segmentation, Web audience analytics, data enhancement, content personalization, media purchasing, and predictive modeling. Different users get different data: sometimes keyed to cookies, device IDs, or email addresses, and sometimes aggregated by company, individual, or segment. Publishers who contribute data are treated as part of a co-op and get access to all 2,300 intent topics. Others can only select from around 60 summary categories.

If you’re a B2B marketer, you’re probably drooling at the thought of all that data. So why haven’t you heard of Madison Logic and Bombora before? Well, like those thrifty meat packers, Bombora sells only at wholesale. In each channel, partners embed the Bombora data within their own products. Sometimes it’s baked into the price and sometimes you pay extra. It’s a “Bombora inside” strategy and it makes perfect sense: everything’s better with data.

At the risk of beating a dead pig, I'll also point out that Bombora illustrates a point I've made before: that public sources of data will increasingly supplement and to some degree may even replace privately gathered data.  This is a key part of the "madtech" vision that says the data layer of your customer management infrastructure will increasingly reside outside of your company's control.  The risk to companies who use this data is that their competitors can access it just as easily, so there's still a need to build proprietary data sources in addition to adding value in other areas such as better analytics or customer experience.

Enough of that.  I'm hungry.

Friday, April 17, 2015

Marketo Adds Custom Objects. It's a Big Deal. Trust Me.

My first question when Marketo announced its new mobile app connector this week wasn’t, “What cool new things can marketers do?” but “Where is the data stored?”

It's not that I'm obsessed with data.  (Well, maybe a little.)  But one of Marketo’s biggest technical weaknesses has always been an inflexible data model. Specifically, it hasn’t let users set up custom objects (although they’ve been able to import custom objects from Salesforce.com or Microsoft Dynamics CRM). This was a common limitation among early B2B marketing automation products but many have removed it over the years. Indeed, even $300 per month Ontraport is about to add custom objects (and does a good job of explaining the concept in a typically wry video).

Sure enough, when I finally connected with Marketo SVP Products and Engineering Steve Sloan, he revealed that the mobile data is being managed through a new custom objects capability – one that Marketo didn’t announce prominently because they felt Marketing Nation attendees wouldn’t be interested. I suspect that underestimates the technical savvy of Marketo users, but no matter.

For people who understand such things, the importance is clear: custom objects open the path to Marketo supporting new channels and interactions, removing a major roadblock to competing as the core decision engine of an enterprise-grade customer management system. This will be even more true once Marketo finishes its planned migration of activity data to a combination of Hadoop and HBase, which will give vastly greater scale and flexibility than the current relational database (MySQL). Sloan said that even before this happens, data in the custom objects will be fully available to Marketo rules for list building and campaign flows.
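For the curious, here is a minimal sketch of the concept: a structured record that hangs off a lead instead of being flattened into lead fields, plus a toy stand-in for a smart-list rule that filters on it. The object and field names are hypothetical, not Marketo's actual API schema.

```python
# A minimal sketch of the custom-object idea: structured records linked to a lead
# rather than squeezed into lead fields. Object and field names are hypothetical.

mobile_session = {
    "objectName": "mobileAppSession",   # hypothetical custom object
    "linkField": "email",               # links the record back to a lead
    "fields": {
        "email": "jane@example.com",
        "appVersion": "2.4.1",
        "screenViewed": "pricing",
        "sessionSeconds": 184,
    },
}

def leads_matching(sessions, screen, min_seconds):
    """Toy stand-in for a list rule: leads with a qualifying app session."""
    return {
        s["fields"]["email"]
        for s in sessions
        if s["fields"]["screenViewed"] == screen
        and s["fields"]["sessionSeconds"] >= min_seconds
    }
```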

The strategic importance of this development to Marketo is high. Marketo is increasingly squeezed between enterprise marketing suites and smaller, cheaper B2B marketing automation specialists. Its limited data structure and scale were primary obstacles to competing in the B2C market, where custom data models have always been standard. Even in B2B, Marketo’s ability to serve the largest enterprises was limited without custom objects. While this one change won’t magically make Marketo a success in those markets, its prospects without the change were bleak.

All that being said, the immediate impact of Marketo’s new mobile and ad integration features is modest. The mobile features let Marketo capture actions within a mobile app and push out messages in response. This is pretty standard functionality, although Marketo users will benefit from coordinating the in-app messages with messages in other channels. Similarly, the advertising features make it simpler to export audiences to receive ads in Facebook, LinkedIn, and Google and to find similar audiences in ad platforms Turn, MediaMath, and Rocketfuel. Again, this is pretty standard retargeting and look-alike targeting, with the advantage of tailoring messages to people in different stages in Marketo campaigns. The actual matching of Marketo contacts to the advertising audiences will rely on whatever methods the ad platform has available, not on anything unique to the Marketo integration.
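For readers who haven't seen audience exports up close, here is a generic illustration of the usual mechanics: normalize and hash each email address, then upload the hashes for the ad platform to match against its own users. This is a common industry pattern, not a description of Marketo's specific connector.

```python
# A generic illustration (not Marketo's actual connector) of how audiences are
# commonly pushed to ad platforms: hash normalized email addresses and upload the
# hashes for the platform to match against its own user base.

import hashlib

def hashed_audience(emails):
    """Return SHA-256 hashes of normalized email addresses."""
    hashes = []
    for email in emails:
        normalized = email.strip().lower()
        hashes.append(hashlib.sha256(normalized.encode("utf-8")).hexdigest())
    return hashes

# Example: leads in a hypothetical "late stage" segment become a retargeting audience.
late_stage = [" Pat@Example.com ", "lee@example.org"]
upload_payload = {
    "audience_name": "late_stage_retargeting",
    "hashed_emails": hashed_audience(late_stage),
}
```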

In fact, I’d say the audience reaction to the announcement of these features during the Marketing Nation keynote was pretty subdued. (They were probably more excited that they can now manage their email campaigns from their mobile devices.) So maybe next time, Marketo should make the technical announcements during the big speech: at least the martech geeks will be on their chairs cheering, even if everybody else just keeps looking at their email or cat videos or whatever it is they do to amuse themselves during these things.

Note: for an excellent in-depth review of what Marketo announced, look at this post from Perkuto.

Wednesday, April 15, 2015

Marketo Conference: Is Predictive Modeling The Future of Marketing Automation?

Marketo held its annual Marketing Nation Summit this week, hosting 4,000+ clients and partners. The event seemed relatively subdued for Marketo – I didn’t spot one costumed character – but the over-all atmosphere was positive. The company made two major product announcements, expanding the reach of Marketo campaigns into mobile apps and display ad retargeting. Those struck me as strategically valuable, helping to secure Marketo’s place at the center of its users’ customer management infrastructure. Unfortunately, I wasn’t able to gather enough technical detail to understand how they work. I’ll try to write about them once that happens.

As usual for me, I spent much of the conference prowling the exhibit hall checking out old and new vendors. Marketo has attracted a respectable array of partners who extend its capabilities. By far the most notable presence was predictive modeling vendors – Leadspace, Mintigo, Lattice Engines, Infer, Fliptop, SalesPredict, 6Sense, Everstring plus maybe some others I’m forgetting. I’ve written about each of these individually in the past, but seeing them in the same place brought home the very crowded nature of this market. It also prompted many interesting discussions with the vendors themselves, who, not surprisingly, are an especially analytical and thoughtful group.

Many of those conversations started with the large number of vendors now in the space and how many would ultimately survive. I actually found this concern a bit overwrought – there are other segments, most obviously B2B marketing automation itself, that support many dozens of similar vendors. By that standard, predictive analytics is still far from overcrowded. At the risk of some unfair (and unjustifiably condescending) stereotyping, I’ll propose that part of their concern comes from a sort of Spock-like rationality that says only a few different products are really needed in any given segment. That may indeed be logically correct, but real markets often support more players than anyone needs. I see nothing inherent in the predictive marketing industry that will limit it to a few survivors.

In fact, almost immediately after wondering whether there were too many choices, many vendors observed that they were already sorting themselves into specialists serving different customer types or applications. Some products sell mostly to smaller companies, some to companies with many different products, some to customers who want new prospect names, some to those who want to incorporate external behaviors, and so on. Here, the vendors’ perception is more nuanced than my own; they see differences that I hadn’t noticed. Despite these distinctions, I still expect that most vendors will broaden rather than narrow their scope over time. But maybe that’s my own inner Spock looking for more simplicity than really exists.

One factor simplifying buyers' selection decision was that nearly all clients test multiple systems before making a purchase. This contrasts sharply with marketing automation, where many companies still buy the first system they consider and few conduct an extensive pre-purchase trial. The main reason for this anomaly is that modeling systems are highly testable: buyers give each competitor a set of data, let them build a model, and can easily see whose scores do a better job of identifying the right people. It also probably helps that people buying predictive systems are generally more sophisticated marketers. There's some danger to relying extensively on test results, since they obscure other factors such as time to build a model and how well models retain their performance over time. I was also a bit puzzled that nearly every vendor reported winning nearly every test. I don't think that's mathematically possible.
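If you're wondering what such a test looks like in practice, here is a sketch of one simple comparison a buyer might run on a holdout file: lift in each vendor's top decile. It's a toy version, not how any particular vendor structures its trials.

```python
# A sketch of one simple way to compare competing score files on the same holdout
# list: how concentrated the actual converters are in each vendor's top decile.

def top_decile_lift(scored_leads):
    """scored_leads: list of (score, converted) tuples for one vendor's model."""
    ranked = sorted(scored_leads, key=lambda x: x[0], reverse=True)
    cutoff = max(1, len(ranked) // 10)
    top = ranked[:cutoff]
    top_rate = sum(converted for _, converted in top) / len(top)
    base_rate = sum(converted for _, converted in ranked) / len(ranked)
    return top_rate / base_rate if base_rate else float("nan")

# Run the same holdout through each vendor's model and compare the lifts.
# (Time to build each model and how well scores hold up over time still need
# separate checks, as noted above.)
```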

Probably the most interesting set of discussions revolved around the long-term relation of predictive functions to the rest of the customer management infrastructure. This was sometimes framed as whether predictive modeling will be a "feature" embedded in other systems or a "product" sold on its own. My intuition is it's a feature: marketers simply want to select on model scores the same way they’d select any other customer attribute, so scoring should be baked into whatever marketing system they’re using. But the counter argument starts with the empirical observation that marketing automation vendors haven’t done this, and speculates that maybe there’s a sound reason: not just that they don’t know how or it’s too hard, but that modeling systems need data that is stored outside of marketing automation, or need to connect with multiple execution systems that marketing automation does not reach. The data argument makes some sense to me, although I think marketing automation itself should also connect with those external sources. I don’t buy the execution system argument. Marketing automation should select customer treatments for all execution systems; scores should be an input to the marketing automation selections.

But there’s a deeper version of this question that asks about the role of predictive analytics within the customer management process itself. Marketo CEO Phil Fernandez touched on this indirectly during his keynote, when he observed that literally mapping the customer journey as an elaborate flow chart is inherently unrealistic, because customers follow many more paths than any manageable chart could contain. He also came back to it with the image of a “self-driving” marketing automation system that, like a self-driving car, would let the user specify a goal and then handle all the details independently. Both examples suggest replacing marketer-created rules to guide customer treatments with predictive systems that select the best action in each situation. As several of the predictive vendors pointed out to me (with what sounded like the voice of painful experience), this requires marketers to give up more control than they may find comfortable – either because machines really can’t do this or because it would put marketers out of a job if the machines could. Personally, I'll bet on the machines in this contest, although with many caveats about how long it will take before humans are fully or even largely replaced.

However, and here’s the key point that came up in the most interesting discussions: predictive models can’t do this alone. At the most abstract, marketing involves picking the best customer treatment in each situation.  But models can only pick from the set of treatments that are available. In other words, someone (or some thing) has to create those treatments and, prior to that, decide what treatments to create. In current marketing practice, those decisions are made with a content map that plots available content against customer life stages and personas.  This makes sure that appropriate content is available for each situation. Proper value measurement – which means estimating the incremental impact on lifetime value of each marketing message – also relies on personas and life stages as a framework. So any machine-based approach to customer management has to generate personas, life stages, and content to be complete.*
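To make the content map idea concrete, here is a toy version: plot available content against personas and life stages, then list the empty cells. The personas, stages, and content items are invented examples.

```python
# A toy version of the content map described above: available content plotted
# against personas and life stages, with the uncovered cells listed as gaps.
# All personas, stages, and content items are invented for illustration.

personas = ["economic buyer", "technical evaluator"]
stages = ["awareness", "consideration", "decision"]

content_map = {
    ("economic buyer", "awareness"): ["ROI overview blog post"],
    ("economic buyer", "decision"): ["pricing one-pager"],
    ("technical evaluator", "consideration"): ["architecture white paper", "demo video"],
}

gaps = [
    (persona, stage)
    for persona in personas
    for stage in stages
    if not content_map.get((persona, stage))
]
print("Missing content for:", gaps)
```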

I see no inherent reason that machines couldn’t ultimately do the persona and life stage definition. None of the vendors do it today, although several appear to have given it some thought. Automated content creation is already available to a surprising degree and will only get better. But, to get back to my point: the technologies to do these things are very different from predictive modeling. So if new technology is to replace marketing automation as the controller of customer treatments, that technology will include much more than predictive modeling by itself.
 _________________________________________________________________________
* Yes, it has occurred to me that a fully machine driven system might not need personas and lifestages, which are aggregations needed because humans can't deal with each person separately.  But marketers won't adopt that approach until (a) machines can also create content without the persona / lifestage framework and (b) humans are willing to trust the black box so completely they don't need personas and lifestages to help understand what the machines are up to. On the other hand, you could argue that content recommendation engines like BrightInfo (also at the Marketo show)  already work without personas and lifestages...although I think they usually focus on a near-term action like conversion rather than long-term impact like incremental lifetime value. 

Thursday, April 02, 2015

MarTech Conference: Some Random Impressions and Interesting New Vendors

I spent last Tuesday and Wednesday at the MarTech Conference in San Francisco, where close to 1,000 marketing technologists heard sage advice, shared experiences, and otherwise frolicked with their peers.

It might just be me, but this MarTech felt a bit less intense – less tribal – than the first, held last August in Boston. It might have been the larger crowd, the more laid-back West Coast audience, or the fact that talking to so many exhibitors reduced the time attendees spent interacting with each other. Or maybe the second MarTech experience is inherently less ground-breaking than the first.

That said, the conference was still tremendously valuable, with a wide range of speakers covering the organizational, operational, and technical issues surrounding marketing technology management. With so much wisdom flying through the air, I won’t pretend to have captured everything. (You can download the slide decks from the MarTech Slideshare channel.) Here are a few points that, for whatever reason, struck me in particular:

- Thomas Stubbs of Coca-Cola describing the lessons from several huge Web site launches tied to the World Cup (and therefore with fixed deadlines). What stood out was that the smoothest deployment  happened when the creative and technical work was done by two independent agencies.  This gave the technical team the leverage to push back against unrealistic timelines. By contrast, when the company used a combined creative-technical agency, the technical team’s objections were often overruled by agency management.

- Several presenters diagrammed the actual marketing systems in place at their companies. This was an important reminder of how complicated marketing technology infrastructures are in the real-world, as opposed to the simple diagrams drawn by people like me.

- A panel with Oracle, Salesforce.com and Marketo, where incisive questions from Chief Martech Himself Scott Brinker highlighted the different philosophies underlying each vendor’s marketing cloud and how few companies really deploy any cloud in full.

- A panel of venture capitalists where at least one member argued that marketing technology is the one sector where best of breed systems can outcompete integrated suites, because the small performance advantage of best of breed systems translates into large financial value. I’d say the only place that is clearly true is among large enterprises, which have the skills to take advantage of the most sophisticated features and where small percentage benefits translate to large dollar amounts. If you want a (pseudo) equation: integration for best of breed costs pretty much the same regardless of company size, while the benefits increase with company size. So there’s a point beneath which the costs outweigh the benefit. The real question is whether SaaS technologies will reduce integration costs or make them volume-related. If that happens – and the jury is very much out – then best of breed would make sense for many more companies, especially if best of breed vendors can build in enough automation that high user skills are also not required to benefit from their systems.
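Here is that pseudo-equation worked through with invented numbers, just to show the shape of the argument: a roughly fixed integration cost against a benefit that scales with company size implies a breakeven size below which the suite wins.

```python
# The pseudo-equation above, worked through with invented numbers: best-of-breed
# integration costs roughly a fixed amount, while its incremental benefit scales
# with company size, so there is a breakeven size below which the suite wins.

INTEGRATION_COST = 250_000          # hypothetical fixed cost of stitching systems together
BENEFIT_PER_MILLION_REVENUE = 500   # hypothetical incremental value of best of breed

def best_of_breed_pays_off(revenue_millions):
    return BENEFIT_PER_MILLION_REVENUE * revenue_millions > INTEGRATION_COST

breakeven_millions = INTEGRATION_COST / BENEFIT_PER_MILLION_REVENUE  # 500, i.e. $500M in revenue
```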

As I mentioned earlier, the biggest difference between this MarTech and the previous edition was a large exhibition hall. There were about seventy vendors covering an extraordinary range of functions. In fact, the very variety was intriguing: everything from content management to reporting to predictive modeling to social intelligence to data enhancement to plain old marketing automation. Many firms were familiar to me but others were new. Some of the unfamiliar ones I found most intriguing were (in alphabetical order):

- Acrolinx: very sophisticated scoring of content to match its tone, complexity, subject, and other attributes to target audiences. It doesn’t write copy but does analyze the copy you give it.

- Advanse: rule-based ad customization drawing on an advertiser’s individual-level data as matched to audience cookies. The system won’t create new ads by itself but will test alternatives designed by the user and will automatically deploy the winner.

- brandAnalyzer by Global Science Research: uses a large, projectable consumer survey and social media traffic to build detailed psychological profiles of brand customers. These are used to help guide creative development and media placements, but not to target individual consumers.

- Amplero by Globys: self-directed machine-learning to find the best message for each customer for each situation. Definitely very cool but we spoke twice and I still don’t understand exactly what it’s doing. I know the system generates messages with random combinations of offers, incentives, and prices and then refines its choices over time based on which perform best in which situations. But that leaves a lot of questions unanswered. I’ll get a briefing and let you know what I find.

- Insightpool: identifies social media influencers and manages campaigns to build relationships with them. What’s interesting is that the system looks for different types of influencers and designs different campaigns based on different user goals (traffic, followers, reviews, etc.) Campaigns contain different types of social media interactions (mentions, follows, direct messages, etc.) at different intervals or sequences.

- Persado: generates and tests optimal language for emails, landing pages, social media, and other messages. Essentially, it does multivariate testing with the test cases chosen by the system based on a huge database of marketing language and sophisticated semantic understanding. Just to be clear, it’s finding the best version for an entire audience or segment, not creating different versions for each individual. (Persado wasn't really new to me, but I've yet to review them formally.  Will do that soon.)

- Unmetric: social intelligence for brands. This is social media monitoring on steroids, ranking performance of individual content items, classified by topic and brand, and then offering a variety of reports and alerts. It also tracks response time to social media customer service requests.

- Whatrunswhere: tracks ad placements in online media to give competitive intelligence. World-wide coverage with details about reach and frequency.

Of course, there were many other interesting exhibitors; apologies to those I've left out. The full list is available here on the MarTech Conference site.

Thursday, March 26, 2015

Terminus Offers Targeted Display Ads for B2B

Tuesday’s post on the Adobe Marketing Cloud illustrated the complexity of solutions that combine many marketing and advertising components. Despite my best efforts, and much cooperation from Adobe, I’m sure it still misses many nuances of how Adobe components do or don’t work together. Nor does it address the challenges that users face in making sense of it all. Coincidentally, yesterday's FierceCMO blog indirectly quotes Lenovo’s Michael Ballard on this very point: “Adobe is not a solution for everyone because it requires a lot of expertise and attention. To help the system run smoothly, Lenovo has both dedicated internal Adobe teams, as well as full-time support people on Adobe's payroll, which means the company spends equal amounts on both products and support services.” And Lenovo uses only two of Adobe’s eight solutions.

For many smaller companies, a suite like Adobe, Oracle, IBM, Salesforce.com, SAS, or Teradata isn’t an option and would be overkill if it were. Those firms will generally do better with a simpler solution that was built as an integrated whole, such as StrongView (recently reviewed here) or SiteCore (reviewed here not so recently).

But such integrated solutions are rarely comprehensive. This means that meeting a full range of needs requires connecting with external systems.  Some vendors do this better than others. A good foundational system also needs a robust customer database that can integrate data from all the channels, whether it supports them directly or via partners.

One area where partners are especially common is ad management. Although martech and adtech are on the path to merging into madtech, they’re still largely separate, especially outside the big enterprise suites. This means that integration has largely meant sharing audiences defined in a marketing automation system with demand side platforms (DSPs), so the DSPs can bid on audience members when they appear on ad networks. For B2B marketers, this has happened mostly through Demandbase and what was formerly Bizo (now part of LinkedIn). Vendemore (reviewed here) is another option.

Terminus, which officially launched in February, is a new alternative. The system imports lists of target accounts from a company’s CRM or marketing automation system, or lets clients build their own lists from Terminus’ own B2B company database. Users can then specify the corporate roles they want to target. Terminus uses third party data partners to identify cookies belonging to people in the specified roles at the selected companies and connects to ad exchanges, Facebook, and mobile apps to bid on them. The system automatically monitors response and optimizes its bids to get the best results within a company’s campaign budget.
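Before getting into the feature list, here is a rough sketch of that flow, with all function names and data structures invented for illustration rather than taken from Terminus itself.

```python
# A rough sketch (not Terminus' actual implementation) of the account-based flow
# described above: take target accounts and roles, keep only people with matched
# cookies, and bid on impressions while the campaign budget lasts.

def build_target_pool(accounts, target_roles, cookie_lookup):
    """cookie_lookup(account, role) -> list of matched cookie IDs (via data partners)."""
    pool = []
    for account in accounts:
        for role in target_roles:
            pool.extend(cookie_lookup(account, role))
    return set(pool)

def should_bid(cookie_id, target_pool, spent, daily_budget, bid_price):
    """Bid only on impressions from targeted cookies, and only within budget."""
    return cookie_id in target_pool and spent + bid_price <= daily_budget
```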

Here are a few features to consider when comparing Terminus with other B2B advertising solutions:

- self-service. The system provides users with a multi-step process to import their data, select target segments, assign creative materials, set daily and/or total campaign budgets, define campaign end dates, and start executing campaigns. During campaign set-up, the system shows how many companies and contacts match the segmentation criteria within the imported CRM or marketing automation data and how many Terminus can find in its cookie pool.

- email-based targeting. Terminus selects cookies by using LiveRamp and other data partners to match them against email addresses from the CRM or marketing automation system. The company says this is more accurate than targeting based on IP address. (The company can also use its own database to reach people not in the clients' systems.)  Integration is currently available for Salesforce.com CRM, Marketo, and HubSpot, with plans to add Salesforce.com Pardot and Oracle Eloqua shortly.

- ad inventory. Terminus currently integrates with well over fifty ad networks and exchanges. It is adding programmatic premium inventory and direct deals to the base programmatic impressions.

- sales stages. Terminus imports opportunity stage from the CRM system and can use this in segmentation. This lets it build separate campaigns to serve different creative to companies in different stages. But the system isn’t tracking messages to specific individuals, so it can’t avoid showing the same message multiple times to the same person.

- automated optimization. Terminus adjusts bids based on twenty variables such as ad size and time of day without asking marketers to make any decisions. Like most real time bidding systems, it typically aims to optimize click through rate or cost per click. The system will automatically serve alternative creatives and pick the best one; true a/b testing and reporting are planned for a future release.

- reporting: the system shows spend, reach, impressions, and clicks for each account, giving marketers a precise view of how each account has been treated. Opportunity value can be imported to add return on investment analysis. Reports can be filtered by opportunity status, for example to show only closed deals. Reports can also summarize results by device, ad format, creative version, and other variables. Marketers can change creative from within the reporting screen if they spot something that isn’t performing well.

- pricing. Terminus charges a fixed platform fee and passes through the advertising expenses at cost. The company considers this among its most important innovations, compared with other firms that make their profit by marking up advertising expenses. The Terminus approach provides greater clarity and lets the company optimize its clients’ ad purchases without affecting its own profits. The platform fee is currently about $500 per month, although Terminus expects to offer several tiers ranging from $725 to $1,000 per month later this year.

Tuesday, March 24, 2015

Adobe Marketing Cloud Marches Towards Martech and Adtech Integration


At pretty much the same moment I was publishing my post on the merger of martech and adtech into madtech, Adobe was announcing its latest marketing products, including a press release on uniting “Data-driven Marketing and Ad Tech”. Naturally, this caught my attention.

As you might expect, Adobe’s reality is considerably more complicated than the simplicity of the “madtech” vision. Like the other enterprise software vendors who offer broad martech and adtech solutions, Adobe has built its marketing cloud by buying specialist systems. And, again like its competitors, it has only integrated them to a limited degree.

In Adobe’s case, the various products remain as distinct “solutions” served by a common set of “core services”. The current set of eight solutions includes Analytics (Web, video and mobile analytics, née Omniture Site Catalyst), Social (social publishing, based on Context Optional), Target (Web optimization and personalization, derived from Omniture Test & Target/Offermatica), Experience Manager (Web content management, originally Day Software), Media Optimizer (based on Efficient Frontier and Demdex), Campaign (formerly Neolane), Primetime (addressable TV), and Audience Manager (data management platform, formerly Demdex). Of course, the products have all been modified to some degree since their acquisitions.  But each still has its own data store, business logic and execution components.

Rather than replacing these components with common systems, Adobe has enabled a certain amount of sharing through its core services. In the case of customer data, the “profiles and audiences” core service maintains a common ID that is mapped to identities in the different solutions. This means that even though most customer data stays in the solutions’ own databases, the core service can use that data to build audience segments. There's also an option to load some attributes into the core services profiles themselves.   Audiences, which are lists of IDs, can either be defined in solutions and sent to the core service or built within the core service itself.  Either way, they can then be shared with other solutions. Data from external systems can also be imported to the core service in batch processes and used in segmentation.
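A toy data structure makes the idea easier to see: one shared profile ID mapped to each solution's own identity, with audiences stored as lists of shared IDs that any integrated solution can translate into its own terms. The IDs and key names below are invented, not Adobe's actual schema.

```python
# A toy illustration (not Adobe's actual schema) of the "profiles and audiences"
# idea: a shared ID mapped to each solution's own identity, with audiences kept
# as lists of shared IDs so any integrated solution can use them.

profile_index = {
    "core-id-123": {
        "analytics_visitor_id": "av-98765",
        "campaign_recipient_id": "cr-55501",
        "audience_manager_uuid": "am-1f3c",
    },
}

audiences = {
    "cart_abandoners_30d": ["core-id-123", "core-id-456"],  # built in one solution...
}

def resolve_for_solution(audience_name, solution_key):
    """...then translated into another solution's own IDs when the audience is shared."""
    return [
        profile_index[core_id].get(solution_key)
        for core_id in audiences.get(audience_name, [])
        if core_id in profile_index
    ]
```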

Adobe says that data stored in the solutions can be accessed in real time.  I'm skeptical about performance of such queries, but the ability to store key attributes within the core service profiles should give marketers direct access when necessary.  There’s certainly a case to be made that digital volumes are so huge and change so quickly that it would be impractical to copy data from the solutions to a central database. Where external data is concerned, marketers will increasingly have no choice but to rely on distributed data access.

But here’s the catch: Adobe's approach only works if all your systems are actually tied into the central system. Adobe recognizes this and is working on it, but so far has only integrated five of its solutions with the profiles and audiences core service. These are Analytics, Target, Campaign, Audience Manager, and Media Optimizer. The rest will be added over time.

The second big limit to Adobe’s current approach is sharing with external systems. Only Adobe solutions can access other solutions’ data through core services. This makes it difficult to substitute an external product if you already have one in place for a particular function or don’t like Adobe’s solution.

Adobe does connect with non-Adobe systems through Audience Manager, its data management platform, which can exchange data with a company’s own CRM or operational databases, business partners, and external data pools and ad networks. Audience Manager can hold vast amounts of detailed data, but does not store personally identifiable information such as names or email addresses. Audience Manager can also copy Web behavior information directly from Analytics, the one instance (so far as I know) where detailed data is shared between Adobe solutions.


So far, I’ve only been discussing data integration. The various Adobe components also have their own tools for segmentation, decision logic, content creation, and other functions. These are also slowly converging across products: for example, there is an “assets” core service that provides a central asset library whose components can be uploaded to at least some of the individual solutions. The segmentation interface is also being standardized product-by-product. There’s no point in trying to list exactly what is and isn't standard today, since this will only change over time.

The lesson here is that suites are not simple. Marketers considering Adobe or any other Marketing Cloud need to examine the details of the architectures, integration, and consistency among the components they plan on using. The differences can be subtle and the vendors often don’t explain them very clearly. But it pays to dig in: the answers have a big impact on whether the system you choose will deliver the results you expect.

Thursday, March 19, 2015

Everstring Offers Fast, Flexible, Account-based Predictive Models for B2B Sales and Marketing

Remember how much simpler life was back in 2010? Among our quaint notions, we thought that B2B companies couldn’t build predictive models because they didn’t have enough data about their customers and prospects. The Internet has changed that, providing oceans of relevant detail from company Web sites, social media, job boards, and other sources. Today, at least a dozen vendors are offering predictive models for B2B lead scoring, sales intelligence, and customer success management.

Many of the original scoring vendors specialized in a single application.  But today, most are broadening their products to serve multiple purposes. This lets the vendor charge more to each client and blocks out potential competitors. Marketers also benefit since they only have to buy, learn, and integrate a single system.

Everstring is a relative newcomer to the B2B predictive modeling arena, founded in 2012 but only seriously entering the market after it received $12 million in Series A funding last August. The late start has let it adopt a broader scope from the beginning, offering both lead scoring and new prospect identification. The company plans to extend its offerings later this year to include real-time treatment recommendations.

But what really sets Everstring apart are two other factors: it works at the account rather than individual level, and it builds models really, really fast – as in, six minutes for a new model once data connections are in place. The two factors are related: Everstring can work quickly because it only imports a client’s account list and sales activities, saving complicated data mapping and analysis, and because it has preclassified its master database of six million U.S. businesses into clusters based on similarities in products, technologies used, hiring patterns, news events, social data, and other factors. This means that building a new model only requires using activity history to identify the client’s responsive accounts and finding which segments have the highest concentrations of those accounts.

That’s pretty light work compared with loading individual level data and identifying which attributes are most predictive for each client’s business. Matching against six million companies rather than 100 million individuals speeds things up too. The approach also lets clients score anonymous leads if IP address or similar information can identify their company. Models for different products can be built by selecting only accounts that purchased that product.

Once a model is built, Everstring can score any new leads just by identifying the segment their company belongs to and applying that segment’s score. Lists of new prospects require simply taking names from the highest-scoring segments.
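Here is a deliberately simplified sketch of that account-level logic, with invented function names and none of the real clustering work: score each segment by the concentration of a client's responsive accounts, then let any account inherit its segment's score.

```python
# A simplified sketch of the account-level scoring described above, not Everstring's
# actual method: measure how concentrated a client's responsive accounts are in each
# pre-built segment, then score any account by looking up its segment.

from collections import Counter

def segment_scores(responsive_accounts, account_to_segment, segment_sizes):
    """Score = share of a segment's accounts that have responded for this client."""
    hits = Counter(
        account_to_segment[a] for a in responsive_accounts if a in account_to_segment
    )
    return {seg: hits[seg] / segment_sizes[seg] for seg in segment_sizes}

def score_account(account, account_to_segment, scores):
    """Any account, even one resolved only to a company, inherits its segment's score."""
    segment = account_to_segment.get(account)
    return scores.get(segment, 0.0)
```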

Sounds pretty simple, eh? That’s because I’ve over-simplified. The data gathering and actual math are actually quite complicated. Beyond that, Everstring does more than provide segment scores, which measure the fit between a new account and the client’s previously responsive prospects. Specifically, it also measures purchase intent based on more than 1 billion clicks per day on third party Web sites and emails. And it measures engagement by analyzing visitor behaviors on the client’s own Web site, gathered through a tracking pixel, plus other data imported from marketing automation. The combination of fit, intent, and engagement will guide the real-time treatment recommendations and can support additional scoring applications. Fit scores alone are much more limited.

So, how do you deploy all this? Everstring has standard integrations with Salesforce.com, Marketo and Oracle Eloqua, which can send data for the initial model building and score new accounts as they are added to those systems. A real-time API can integrate with other CRM and marketing automation systems.

Pricing for Everstring is based on the types of models and volume. Lead scoring usually runs from $60,000 to $100,000 per year. New prospect names are an additional charge. Pricing for real-time message selection isn’t yet set. The system currently has about 25 clients, nearly all added since last August.