Showing posts with label marketing process. Show all posts

Friday, March 30, 2012

Survey of Surveys: Budgets and Process are Main Barriers to Marketing Technology Success

I recently gave a Web presentation consisting almost entirely of slides from different surveys. This was a bit of an experiment and, sad to say, it didn’t seem terribly successful. I did weave the slides into a nice little story line – marketers know they need better technology, poor data is the root of their problem, and we know how to solve this – but even that wasn’t enough. Pity.

Still, preparing the slides gave me a chance to scan the surveys in my archives, which was entertaining in its own little way. Many surveys ask similar questions, which gave me some choices during my preparation. But I didn’t look carefully at how they compare.

Today I’ll do that. I’ve chosen one of the most popular questions: what are the barriers to marketing technology adoption? I have versions of this from seven different surveys within the past year.

Of course, each survey uses different terms. To make the comparison, I collapsed the various answers into a few reasonably distinct categories, committing a certain amount of shoe-horning along the way. I then recorded where each answer ranked in each survey, compiled the results, and did a crude ranking with a combination of mathematical wizardry and body English. (Multiple answers for the same survey indicate I placed several questions into the same category.)
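
For what it's worth, that compilation step can be sketched in a few lines of code: average each barrier's rank across the surveys that mention it, and sort by the average. This is just an illustration of the method – the survey names and rank values below are invented, not the actual data.

```python
# Crude rank aggregation, as described above: average each barrier's
# rank across the surveys that mention it (lower average = bigger barrier).
# Survey names and ranks are illustrative, not the actual survey data.

def aggregate_ranks(rankings):
    """rankings: {survey_name: {barrier: rank_in_that_survey}}.
    Returns barriers sorted by mean rank across the surveys citing them."""
    totals = {}
    for survey in rankings.values():
        for barrier, rank in survey.items():
            totals.setdefault(barrier, []).append(rank)
    return sorted(totals, key=lambda b: sum(totals[b]) / len(totals[b]))

sample = {
    "Survey A": {"budget": 1, "process": 2, "metrics": 5},
    "Survey B": {"budget": 2, "process": 1},
    "Survey C": {"budget": 1, "process": 3, "metrics": 4},
}
print(aggregate_ranks(sample))  # ['budget', 'process', 'metrics']
```

The "body English" part – nudging the results when the arithmetic felt wrong – is, of course, not reproducible in code.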

Results are below.  I've shaded the first ranked answers in orange and the second and third ranked answers in yellow.


My first observation was the sheer inconsistency of the answers. Budget issues emerged as a clear number one, but they reached that rank on just four of the seven surveys and ranked quite low on the other two that included them. The second-ranked item (marketing process) was never listed first; it ended where it did because it had the most twos and threes. No other item was ranked first more than once or in the top three more than twice.

Things made a bit more sense when I looked at the survey audiences. Winterberry and Forrester were specifically about online marketing, Gleanster and Marketing Sherpa were B2B surveys, and IBM and the two CMO Council studies were of general marketers. Since most B2B marketing is also online, it makes sense to look at the first four as one group and the other three as another.

Now we see some interesting consistencies:

• Budget isn’t much of an issue for the online and B2B marketers, but dominant for the mixed marketers.

• Marketing process and marketing staff skills are major concerns for online and B2B but rarely mentioned by the mixed marketers.

• Senior management support, and to a lesser extent IT support and technology capabilities, are significant barriers for mixed marketers but don’t slow down the online and B2B groups.

• Metrics, organizational silos, and the economy are cited occasionally by both groups but don’t seem to be major issues for either.

So there’s a fairly coherent picture after all.

• Online and B2B marketers are struggling to keep up with a rapidly changing marketplace, meaning their biggest problems are people and process. The importance of their work is obvious enough that budgets and senior management support are generally available. They have the technical savvy and independence to avoid issues with IT support and organizational silos.

• Mixed marketers, working in traditional channels, still struggle with budgets, metrics, and senior management. They have mature marketing organizations, so process and skills are in place, at least for traditional programs. They do struggle more with IT, technology, and organizational silos, because they lack their own technical skills and have limited clout in the organization.

• Everybody says they care about metrics but it's rarely a top priority.


Or at least that’s my take. I’ve displayed the actual surveys below – if you reach other conclusions or spot any other patterns, let me know.

Thursday, February 09, 2012

NurtureHQ Offers "Dead Easy Marketing Automation". Is That Enough?

I don’t know whether to laugh or cry.

New-ish marketing automation vendor NurtureHQ showed me its product recently. It’s really nice. Clean interface, easy to use, all the standard marketing automation features. Particular strengths in:

• split testing (separately for email subject lines and versions)
• lead scoring (automatically reduces scores from older events)
• CRM integration (Highrise, Capsule CRM, Sugar CRM, Salesforce.com, and an open API)
• marketing analysis (users can track multiple outcomes per campaign)
• content management (nice email/form/page builder, user-defined variables can be shared across messages to make changes easy)
• selection, segmentation, and campaign flow based on list tags (highly intuitive)
• low price ($495 per month for up to 20,000 contacts and 10 users with unlimited emails and landing pages and no annual contract)
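
The score-decay feature is worth a quick illustration. The sketch below shows one common way such decay is implemented – exponential decay with a half-life – but the half-life and point values are my own assumptions, not NurtureHQ's actual formula.

```python
from datetime import datetime, timedelta

# Sketch of score decay like that described above: each event's points
# shrink exponentially with age, so older activity counts for less.
# The half-life and point values are illustrative assumptions only.

def lead_score(events, now, half_life_days=30):
    """events: list of (event_time, points). Returns the decayed total."""
    score = 0.0
    for when, points in events:
        age_days = (now - when).days
        score += points * 0.5 ** (age_days / half_life_days)
    return score

now = datetime(2012, 2, 9)
events = [(now - timedelta(days=60), 10),  # old webinar: decayed to 2.5
          (now, 10)]                       # today's page visit: full 10
print(round(lead_score(events, now), 1))   # 12.5
```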

In other words, this is a worthy alternative to Act-On, SalesFUSION, Net-Results, MakesBridge, Genoo, and other small business systems. Definitely take a look if you’re in the market for that sort of product.

But there’s my problem: “that sort of product” is widely available already. NurtureHQ hopes to differentiate itself as “dead easy marketing automation”, which it arguably is. But is it so much easier than the other products I just listed? I think not.  Regardless of whether it’s the easiest of them all, the difference isn’t likely to be large enough to matter.

Naturally, NurtureHQ disagrees. Before building its system, the company spent six months talking to current marketing automation users.  Many said they never progressed beyond email because the next step was too hard. This led NurtureHQ to believe that an even easier system could find a broader market than current products.

Other vendors have asked the same questions and reached the same conclusions.  But I’m beginning to think those marketers were really saying something else. What they found “too hard” wasn’t the software, but the planning and content creation needed for serious marketing automation. Without a clear understanding of what they wanted to do, they couldn’t figure out how to get the system to do it. That’s not a software problem.  Even a system that could build programs just by reading marketers' minds wouldn’t work if those minds didn't know what they wanted in the first place.

This isn’t a new insight. Marketing automation gurus have long argued that companies need to define their processes in advance of deploying a new system. The case was made yet again this week in an excellent blog post by Joby Blume, describing his former company’s struggles with marketing automation. (Be sure to read the comments). Howard Sewell of Spear Marketing Group made a similar point on his own blog. I reached the same conclusion myself in a post using data from a Gleanster report on marketing automation.

None of this means that “ease of use” is a bad strategy for NurtureHQ and others. On a practical level, ease of use helps sell systems to people who otherwise wouldn’t buy them. But vendors can't succeed if their clients fail – especially if they rely on revenue from subscription renewals. So it only makes sense to sell marketing automation to companies without adequate processes, content, and other resources if those companies understand they’ll need to add those resources later. To really ensure success, vendors must actively help their clients through training and, in some cases, services to do the work for them. Vendors including LeadLife, Genoo, and MakesBridge already offer low-cost service packages for their clients. Other vendors also have service arms and agency partners to help out on a project basis. Third-party training resources such as the Marketing Automation Institute (where I’m a board member) can also help to fill the gap, and benefit from substantial vendor funding.

What does this mean for the software vendors themselves? If the real keys to success are marketers’ skills and processes, does it really matter what’s in the software? To put it another way, is marketing automation software already a commodity?

I hate to say it, but, to some degree, yes. There are certainly differences among products, both in capabilities and ease of use. But most marketers can find several systems that will meet their needs. This means vendors are increasingly competing on other dimensions including their own marketing and sales skills, cost structures, supporting services, pricing, and financial resources. As I pointed out last week,  it’s no coincidence that four of the five largest vendors are venture-funded (six of seven, if you include Infusionsoft and HubSpot). Another factoid that makes the point even more clearly: three of the four fastest growing received major new funding in the past year.

Still, I’m not entirely ready to give up on technology as a major differentiator. What’s needed is more radical innovation than a better interface. If the real barriers to success are creating content and identifying appropriate programs, then technology must address those directly. There are already some tools to help generate content, such as systems for news curation and video posting. I can't think of any products that recommend the right marketing programs, but proper analytics can identify patterns that reveal opportunities, and it’s perfectly conceivable that a rule-based system could check for known issues and make recommendations. HubSpot’s marketing grader does something like this, although it only examines externally-available information.

Of course, marketers will still have to create content and design programs. But better technology could dramatically reduce the necessary effort and move marketers past the deer-in-the-headlights paralysis of not knowing where to start. Vendors who really want to expand the market beyond the resource-rich few should look in this direction.


Tuesday, August 10, 2010

Don't Fix Your Marketing Process

Summary: In a constantly changing world, flexibility is more important than optimization. Marketers need people, processes and technology that allow them to react quickly to new opportunities.

The always-insightful Adam Needles is running a series of blog posts this week that summarize the “real state” of B2B demand generation. So far, his main points have been that the role of B2B marketing has expanded to cover the entire buying cycle from initial lead generation through closed deals and that new technology must be accompanied by changes in people, process and content to have an impact. Tomorrow’s post will apparently discuss the need to tie marketing efforts to revenue.

This is good stuff and well articulated, but industry gurus have been making similar points for a long time. The real question is what to do about it. HOW can marketers adjust their staffing and processes, given the practical constraints of time and budget? And can systems provide specific capabilities that will make the adjustment easier?

The conventional wisdom is that marketers need to become more efficient, more attuned to individual buyers’ movement through the purchase cycle, and better coordinated with sales departments. But although these are certainly valid goals, I think they understate the problem.

Specifically, they make an implicit assumption that marketers are facing a stable situation. This is what allows them to design a new set of processes and techniques optimized for that situation.

I’d argue that the situation is highly unstable. Marketers face continued rapid change in the methods and media they have available. In this situation, any optimized process will rapidly become obsolete. So, the key requirement is flexibility itself. The most successful organizations will be those whose people, processes and technology can most effectively exploit new opportunities as they appear.

(The classic example of the conflict between stability and flexibility is the competition between Ford and General Motors in the 1920’s. Henry Ford relentlessly, even obsessively, optimized his company to make Model T’s more efficiently. But even though Ford kept driving down his costs, he ultimately lost to a General Motors that was able to change its products more quickly. Just thought I’d throw that in there.)

What does an organization optimized for flexibility look like? I think it keeps its processes simple, so they can be easily adjusted. This may mean they’re broken down into many small, connected processes that can be changed individually without affecting the other processes around them. (“Modular” and “loosely coupled” are better terms for this but sound too geeky.)

It certainly means that results are measured closely and frequently, so successes and failures are identified quickly and exploited or discarded as appropriate. It also means the organization makes experimentation easy, in terms of funding, staff time and tolerance for mistakes. It probably suggests that staff members should be more generalists than specialists, which implies greater willingness to pay for training and perhaps wider use of outside resources to provide particular skills on demand.

From a technology standpoint, flexibility implies ease of integration with new data sources, marketing methods and external systems. That’s very different from one vendor trying to include as many functions as possible. (On the other hand, multi-function suites always do seem to win in the market, precisely because they require less integration. Perhaps this will change if integration itself becomes easy enough.)

Flexibility also implies greater ease of use, particularly in terms of setting up and modifying marketing programs and processes. The need for many small, loosely connected processes has some specific implications for interface design. The need for measurement also implies better reporting technologies – a topic that several marketing automation vendors have recently begun to address.

Circling back for a moment to staff skills, all this integration, process coupling and analysis seems to mean that those "generalists" are going to be more technically adept than today's marketers, even if they are not as specialized in terms of the particular media. I'd like to believe that really great technology and interfaces can reduce the level of technical skill required, but suspect that won't happen any time soon.

I’ll admit these are somewhat half-baked notions, since they were largely triggered by Adam’s posts this week. On the other hand, I’ve been thinking for quite some time that we need to move beyond just telling marketers to nail down their processes. Perhaps a recognition that we must manage in a period of continuous change is a good next step.

Tuesday, December 09, 2008

Measuring Usability: A Task-Based Approach

I think we all know that the simplest practical measure of intelligence is how often someone agrees with you. On that scale, University of Ottawa Professor Timothy Lethbridge must be some kind of genius, because his course notes on Software Usability express my opinions on the topic even better and in more detail than I’ve yet done myself. Specifically, he lists the following basic process for measuring usability:

- understand your users, and recognize that they fall into different classes
- understand the tasks that users will perform with the system
- pick a representative set of tasks
- pick a representative set of users
- define the questions you want to answer about usability
- pick the metrics that answer those questions
- have the users perform the tasks and measure their performance

This is very much the approach that I’ve been writing about, in pretty much the same words. Happily, Lethbridge provides additional refinement of the concepts. Just paging through his notes, some of his suggestions include:

- classifying users in several dimensions, including the job type, experience with the tasks, general computer experience, personality type, and general abilities (e.g. language skills, physical disabilities, etc.). I’d be more specific and add skills such as analytical or technical knowledge.

- defining tasks based on use cases (I tend to call these business processes, but it’s pretty much the same); understanding how often each task is performed, how much time it takes, and how important it is; and testing different tasks for different types of users. “THIS STEP CAN BE A LOT OF WORK” the notes warn us, and, indeed, building the proper task list is probably the hardest step in the whole process.

- a list of metrics:

- proficiency, defined as the time to complete the chosen tasks. That strikes me as an odd label, since I usually think of proficiency as an attribute of a user not a system. The obvious alternative is efficiency, but as we’ll see in a moment, he uses that for something else. Maybe “productivity” would be better; I think this comes close to the standard definition of labor productivity as output per hour.

- learnability, defined as time to reach a specified level of proficiency.

- efficiency, defined as proficiency of an expert. There’s no corresponding term for “proficiency of a novice”, which I think there should be. So maybe what you really need is “expert efficiency” and “novice efficiency”, or “expert productivity” and “novice productivity”, and discard “proficiency” altogether.

- memorability, defined as proficiency after a period of non-use. If you discard proficiency, this could be “efficiency (or productivity) after a period of non-use”, which makes just as much sense.

- error handling, defined as number or time spent on deviations from the ideal way to perform a task. I’m not so sure about this one. After all, time spent on deviations is part of total time spent, which is already captured in proficiency or efficiency or whatever you call it. I’d rather see a measure of error rate, which would be defined as number or percentage of tasks performed correctly (by users with a certain level of training). Now that I think about it, none of Lethbridge’s measures incorporate any notion of output quality—a rather curious and important omission.

- satisfaction, defined subjectively by users on a scale of 1 to 5.
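
These definitions translate directly into simple arithmetic on task timings. Here's a minimal sketch; the task times and outcomes are invented, and the error-rate function reflects my suggested addition rather than anything in Lethbridge's notes.

```python
# Minimal sketch of timing-based usability metrics, using made-up data.
# "Proficiency" = time to complete the chosen tasks; "error rate" is the
# measure I suggest above: share of tasks performed incorrectly.

def proficiency(task_times):
    """Total minutes to complete the task set."""
    return sum(task_times)

def learnability(proficiency_by_session, target):
    """Practice sessions until total task time first drops to the target."""
    for session, total in enumerate(proficiency_by_session, start=1):
        if total <= target:
            return session
    return None  # never reached the target proficiency

def error_rate(outcomes):
    """Fraction of tasks performed incorrectly (True = done correctly)."""
    return sum(1 for ok in outcomes if not ok) / len(outcomes)

print(proficiency([12, 8, 15]))                   # 35 (minutes)
print(learnability([40, 28, 22, 18], target=20))  # 4 (sessions)
print(error_rate([True, True, False, True]))      # 0.25
```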

- plot a “learning curve” on the two dimensions of proficiency and training/practice time; the shape of the curve provides useful insights into novice productivity (what new users can do without any training), learnability (a steep early curve means people learn the system quickly), and eventual efficiency (the level of proficiency where the curve flattens out).
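
The same curve can be read numerically without plotting anything: the first point is novice productivity, and the level where improvement stops is eventual efficiency. A small sketch, using an invented series of per-session task times and an arbitrary flatness tolerance:

```python
# Reading a learning curve numerically: first point = novice productivity,
# the flattening level = eventual efficiency. The data and the 5%
# flatness tolerance below are invented for illustration.

def curve_summary(proficiency_by_session, flat_tolerance=0.05):
    """proficiency_by_session: task time per practice session (declining)."""
    novice = proficiency_by_session[0]
    expert = proficiency_by_session[-1]
    # find the session where improvement first drops below the tolerance
    for i in range(1, len(proficiency_by_session)):
        prev, cur = proficiency_by_session[i - 1], proficiency_by_session[i]
        if (prev - cur) / prev < flat_tolerance:
            return {"novice": novice, "expert": expert, "flattens_at": i + 1}
    return {"novice": novice, "expert": expert, "flattens_at": None}

print(curve_summary([40, 28, 22, 20, 19.5]))
# {'novice': 40, 'expert': 19.5, 'flattens_at': 5}
```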

- even expert users may not make the best use of the system if they stop learning before they master all its features. So the system should lead them to explore new features by offering tips or making contextual suggestions.

At this point, we’re about half way through the notes. The second half provides specific suggestions on:

- measuring learnability (e.g. by looking at features that make systems easy to learn);

- causes of efficiency problems (e.g. slow response time, lack of an easy step-by-step route to perform a task);

- choosing experts and what to do when experts are unavailable (basically, plot the learning curve of new users);

- measuring memorability (which may involve different retention periods for different types of tasks; and should also distinguish between frequently and infrequently used tasks, with special attention to handling emergencies)

- classifying errors (based on whether they were caused by user accidents or confusion [Lethbridge says that accidents are not the system’s fault while confusion is; this is not a distinction I find convincing]; also based on whether the user discovers them immediately or after some delay, the system points them out, or they are never made known to the user)

- measuring satisfaction (surveys should be based on real and varied work rather than just a few small tasks, should be limited to 10-15 questions, should use a “Likert Scale” of strongly agree to strongly disagree, and should vary the sequence and wording of questions)

- measuring different classes of users (consider their experience with computers, the application domain and the system being tested; best way to measure proficiency differences is to compare the bottom 25% of users with the 3rd best 25%, since this will eliminate outliers)

This is all good stuff. Of course, my own interest is applying it to measuring usability for demand generation systems. My main take-aways for that are:

1. defining user types and tasks to measure are really important. But I knew that already.

2. choosing the actual metrics takes more thought than I’ve previously given it. Time to complete the chosen tasks (I think I’ll settle on calling it productivity) is clearly the most important. But learnability (which I think comes down to time to reach a specified level of expertise) and error rate matter too.

For marketing automation systems in particular, I think it’s reasonable to assume that all users will be trained in the tasks they perform. (This isn’t the case for other systems, e.g. ATM machines and most consumer Web sites, which are used by wholly untrained users.) The key to this assumption is that different tasks will be the responsibility of different users; otherwise, I’d be assuming that all users are trained in everything. So it does require determining which users will do which tasks in different systems.

On the other hand, assuming that all tasks are performed by experts in those tasks does mean that someone who is expert in all tasks (e.g., a vendor sales engineer) can actually provide a good measure of system productivity. I know this is a very convenient conclusion for me to reach, but I swear I didn’t start out aiming for it. Still, I do think it’s sound, and it may provide a huge shortcut in developing usability comparisons for the Raab Guide. What it does require is a separate focus on learnability, so we don’t lose sight of that one. I’m not sure what to do about error rate, but I do know it has to be measured for experts, not novices. Perhaps when we set up the test tasks, we can involve specific content that can later be checked for errors. Interesting project, this is.

3. the role of surveys is limited. This is another convenient conclusion, since statistically meaningful surveys would require finding a large number of demand generation system users and gathering detailed information about their levels of expertise. It would still be interesting to do some preliminary surveys of marketers to help understand the tasks they find important and, to the degree possible, to understand the system features they like or dislike. But the classic usability surveys that ask users how they feel about their systems are probably not necessary or even very helpful in this situation.

This matters because much of the literature I’ve seen treats surveys as the primary tool of usability measurement. This is why I am relieved to find an alternative.

As an aside: many usability surveys, such as SUMI (Software Usability Measurement Inventory), are proprietary. My research did turn up what looks like a good public version: “Measuring Usability with the USE Questionnaire” by Arnold M. Lund, from the Society for Technical Communication (STC) Usability SIG Newsletter of October 2001. The acronym USE stands for the three main categories: Usefulness, Satisfaction and Ease of Use/Ease of Learning. The article provides a good explanation of the logic behind the survey, and is well worth reading if you’re interested in the topic. The questions, which would be asked on a 7-point Likert Scale, are:

Usefulness
- It helps me be more effective.
- It helps me be more productive.
- It is useful.
- It gives me more control over the activities in my life.
- It makes the things I want to accomplish easier to get done.
- It saves me time when I use it.
- It meets my needs.
- It does everything I would expect it to do.

Ease of Use
- It is easy to use.
- It is simple to use.
- It is user friendly.
- It requires the fewest steps possible to accomplish what I want to do with it.
- It is flexible.
- Using it is effortless.
- I can use it without written instructions.
- I don't notice any inconsistencies as I use it.
- Both occasional and regular users would like it.
- I can recover from mistakes quickly and easily.
- I can use it successfully every time.

Ease of Learning
- I learned to use it quickly.
- I easily remember how to use it.
- It is easy to learn to use it.
- I quickly became skillful with it.

Satisfaction
- I am satisfied with it.
- I would recommend it to a friend.
- It is fun to use.
- It works the way I want it to work.
- It is wonderful.
- I feel I need to have it.
- It is pleasant to use.
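
Scoring this kind of questionnaire is straightforward: average the 1-to-7 Likert answers within each category. A minimal sketch, with invented responses (and only two categories, for brevity):

```python
# Scoring a USE-style questionnaire: each item is answered on a 7-point
# Likert scale, and items roll up into per-category averages.
# The responses below are invented for illustration.

def score_use(responses):
    """responses: {category: [1..7 Likert answers]}. Returns category means."""
    return {cat: round(sum(vals) / len(vals), 2)
            for cat, vals in responses.items()}

sample = {"Usefulness": [6, 5, 7, 4, 6, 6, 5, 5],
          "Ease of Learning": [7, 6, 6, 7]}
print(score_use(sample))  # {'Usefulness': 5.5, 'Ease of Learning': 6.5}
```

As the article discusses, Lund used this kind of category-level rollup to compare products; the per-item answers are where the diagnostic detail lives.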

Apart from the difficulties of recruiting and analyzing a large enough number of respondents, this type of survey only gives a general view of the product in question. In the case of demand generation, this wouldn’t allow us to understand the specific strengths and weaknesses of different products, which is a key objective of any comparative research. Any results from this sort of survey would be interesting in their own right, but couldn’t themselves provide a substitute for the more detailed task-based research.