I finished my detailed analysis of the demand generation deployment survey results yesterday and posted it to the resource library of the Raab Guide site. This turned out to be a major project (the analysis, not the posting) because I revisited the data from a company perspective. The analysis in my earlier blog posts looked at average deployment rates by feature, without relating those to particular companies.
As often happens, the averages were misleading. For example, one of the original factoids that most impressed me was that 80% of features ever deployed are deployed by the second month. This seems to suggest that people deploy quickly and then are largely done. But when I analyzed the data by company, I found a very wide divergence in behavior: some companies deploy nearly all features immediately, while others start very slowly.
Specifically, I grouped the companies into four quartiles, ranked by the number of features they deployed during the first week. This figure itself varied hugely, from 0.6 features per company in the lowest quartile to 8.8 features per company in the highest. What I found is that the companies that start with very few features add them steadily over time, while the ones that deploy many features at once reach their maximum quickly. So that average of 80% deployment by the second month is really a blend of rates ranging from 57% to 97% across the quartiles.
table 10 [table numbers refer to tables in the paper]
% of final features deployed by time period (companies stay in original quartile over time)

| quartile | first week | first month | second month | third month | later |
|---|---|---|---|---|---|
| 1 (tortoise) | 0.05 | 0.32 | 0.57 | 0.72 | 1.00 |
| 2 | 0.28 | 0.55 | 0.70 | 0.77 | 1.00 |
| 3 | 0.49 | 0.76 | 0.84 | 0.86 | 1.00 |
| 4 (hare) | 0.68 | 0.93 | 0.97 | 0.97 | 1.00 |
| average | 0.39 | 0.66 | 0.80 | 0.84 | 1.00 |
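For anyone who wants to try the same quartile grouping on their own data, here is a minimal sketch in Python/pandas. It assumes a hypothetical file `survey_deployments.csv` with one row per company/feature deployment and a `period` column recording the interval in which that feature first went live; the file name and column names are my own placeholders, not part of the survey.

```python
import pandas as pd

# Hypothetical input: one row per (company, feature) deployment, with the
# interval in which that feature first went live. File and column names are
# placeholders, not the actual survey data.
deployments = pd.read_csv("survey_deployments.csv")  # columns: company, feature, period

period_order = ["first week", "first month", "second month", "third month", "later"]
deployments["period"] = pd.Categorical(deployments["period"], period_order, ordered=True)

# Rank companies into quartiles by first-week feature count.
# rank(method="first") breaks ties so qcut can form four equal-size groups;
# label 1 = tortoises (fewest first-week features), 4 = hares.
first_week = (deployments[deployments["period"] == "first week"]
              .groupby("company")["feature"].count()
              .reindex(deployments["company"].unique(), fill_value=0))
quartile = pd.qcut(first_week.rank(method="first"), 4, labels=[1, 2, 3, 4])

# Features deployed per company in each interval, accumulated across intervals,
# then averaged within each quartile.
cumulative = (deployments.groupby(["company", "period"], observed=False)["feature"]
              .count().unstack(fill_value=0).cumsum(axis=1))
avg_features = cumulative.groupby(quartile, observed=False).mean()      # cf. table 9
pct_of_final = avg_features.div(avg_features["later"], axis=0)          # cf. table 10
```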
In other words, we have a classic tortoise-and-hare race: some companies start fast, while others move slowly but steadily. Looking at the number of features per company rather than percentages, the tortoises (quartile 1) never quite catch up, but they do narrow the gap considerably.
table 9
average features per company by quartile (companies stay in original quartile over time)

| quartile | first week | first month | second month | third month | later |
|---|---|---|---|---|---|
| 1 (tortoise) | 0.6 | 3.4 | 6.1 | 7.7 | 10.7 |
| 2 | 3.0 | 6.0 | 7.7 | 8.3 | 10.9 |
| 3 | 5.6 | 8.7 | 9.7 | 9.9 | 11.5 |
| 4 (hare) | 8.8 | 12.0 | 12.6 | 12.6 | 12.9 |
| average | 4.5 | 7.6 | 9.2 | 9.6 | 11.5 |
My fundamental interpretation is that the companies that deploy many features immediately have done their homework and are ready to go from day one, while those that start slowly did little advance preparation. The figures above suggest it takes the tortoises about three months to approach the hares' initial deployment levels - so roughly three months is the delay that lack of preparation seems to cost.
I actually tightened the analysis even more by looking separately at deployment rates for basic, advanced and optional features within each quartile. (Basic features are needed for simple email campaigns; advanced and optional features are more complex and less common. The paper describes the definitions in detail.) Looking just at the basic features, you'll see they're deployed sooner than average, and that even the tortoises finish implementing them by the second or third month. (You'll also note that, even among basic features, the tortoises never quite deploy as many as the hares.)
table E-1 (basic features only)
cumulative features deployed by quartile (based on first-period rank); period columns show % of final features deployed

| quartile | feature category | first week | first month | second month | third month | later | average features deployed |
|---|---|---|---|---|---|---|---|
| 1 (tortoise) | basic | 0.10 | 0.53 | 0.85 | 0.93 | 1.00 | 4.4 |
| 2 | basic | 0.52 | 0.79 | 0.88 | 0.90 | 1.00 | 4.7 |
| 3 | basic | 0.71 | 0.88 | 0.92 | 0.92 | 1.00 | 4.8 |
| 4 (hare) | basic | 0.84 | 0.98 | 1.00 | 1.00 | 1.00 | 5.0 |
| average | basic | 0.56 | 0.80 | 0.91 | 0.94 | 1.00 | 4.7 |
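Continuing the sketch above, splitting the same calculation by feature category is just an extra grouping key. This assumes the hypothetical input also carries a `category` column (basic / advanced / optional), which is again my own placeholder rather than anything defined by the survey.

```python
# Assumed extra column: category (basic / advanced / optional).
by_cat = (deployments.groupby(["company", "category", "period"], observed=False)["feature"]
          .count().unstack("period", fill_value=0).cumsum(axis=1))

# Attach each company's quartile, then average within (quartile, category).
quart = by_cat.index.get_level_values("company").map(quartile)
cat = by_cat.index.get_level_values("category")
avg_by_cat = by_cat.groupby([quart, cat], observed=False).mean()
pct_by_cat = avg_by_cat.div(avg_by_cat["later"], axis=0)  # the 'basic' rows correspond to table E-1
```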
The paper draws a number of other conclusions from the data and makes some helpful if generic recommendations (select the right system, prepare in advance, plan for expansion, test and measure). That's all good stuff but far from world-changing. What's really interesting are the details themselves - go ahead and dig into the paper and see what you find.