Sunday, October 22, 2017

When to Use a Proof of Concept in Marketing Software Selection -- And When Not

“I used to hate POCs (Proof of Concepts) but now I love them,” a Customer Data Platform vendor told me recently. “We do POCs all the time,” another said when I raised the possibility on behalf of a client.

Two comments could be a coincidence.  (Three make a Trend.)  But, as the first vendor indicated, POCs have traditionally been something vendors really disliked. So even the possibility that they’ve become more tolerable is worth exploring.

We should start by defining the term.  Proof of Concept is a demonstration that something is possible. In technology in general, the POC is usually an experimental system that performs a critical function that had not previously been achieved.  A similar definition applies to software development. In the context of marketing systems, though, a POC is usually not so much an experiment as a partial implementation of an existing product.  What's being proven is the system's ability to execute key functions on the buyer's own data and/or systems. The distinction is subtle but important because it puts the focus on meeting the client's needs.  

Of course, software buyers have always watched system demonstrations.  Savvy buyers have insisted that demonstrations execute scenarios based on their own business processes.  A carefully crafted set of scenarios can give a clear picture of how well a system does what the client wants.  Scenarios are especially instructive if the user can operate the system herself instead of just watching a salesperson.  What scenarios don’t illustrate is loading a buyer’s data into the system or the preparation needed to make that data usable. That’s where the POC comes in.

The cost of loading client data was the reason most vendors disliked POCs. Back in the day, it required detailed analysis of the source data and hand-tuning of the transformation processes to put the data into the vendor’s database.  Today this is much easier because source systems are usually more accessible and marketing systems – at least if they’re Customer Data Platforms – have features that make transformation and mapping much more efficient.

The ultimate example of easier data loads is the one-click connection between many marketing automation and CRM “platforms” and applications that are pre-integrated with those platforms. The simplicity is possible because the platforms and the apps are cloud-based, Software as a Service products.  This means there are no custom implementations or client-run systems to connect. Effortless connections let many vendors offer free trials, since little or no vendor labor is involved in loading a client’s data. 

In fact, free trials are problematic precisely because so little work goes into setting them up. Some buyers are diligent about testing their free trial system and get real value from the experience. But many set up a free trial and then don't use it, or use it briefly without putting in the effort to learn how the system works.  This means that all but the simplest products don’t get a meaningful test and users often underestimate the value of a system because they haven’t learned what it can do.

POCs are not quite the same as free trials because they require more effort from the vendor to set up.  In return, most vendors will require a corresponding effort from the buyer to test the POC system.  On balance that’s a good thing since it ensures that both parties will learn from the project.

Should a POC be part of every vendor selection process? Not at all.  POCs answer some important questions, including how easily the vendor can load source data and what it’s like to use the system with your own data.  A POC makes sense when those are critical uncertainties.  But it’s also possible to answer some of those questions without a POC, based on reviews of system documentation, demonstrations, and scenarios. If a POC can’t add significant new information, it’s not worth the time and trouble.

Also remember that the POC loads only a subset of the buyer’s data. This means it won't show how the system handles other important tasks including  matching customer identities across systems, resolving conflicts between data from different sources, and aggregating data from multiple systems. Nor will working with sample data resolve questions about scalability, speed, and change management. The POC probably won’t include fine-tuning of data structures such as summary views and derived variables, even though these can greatly impact performance. Nor will it test advanced features related to data access by external systems.

Answering those sorts of questions requires a more extensive implementation.  This can be done with a pilot project or during initial phases of a production installation. Buyers with serious concerns about such requirements should insist on this sort of testing or negotiate contracts with performance guarantees to ensure they’re not stuck with an inadequate solution.

POCs have their downsides as well. They require time and effort from buyers, extend the purchasing process, and may limit how many systems are considered in depth.  They also favor systems that are easy to deploy and learn, even though such systems might lack the sophistication or depth of features that will ultimately be more important for success.

In short, POCs are not right for everyone. But it’s good to know they’re more available than before. Keep them in mind as an option when you have questions that a POC is equipped to answer.


Monday, October 16, 2017

Wizaly Offers a New Option for Algorithmic Attribution

Wizaly is a relatively new entrant in the field of algorithmic revenue attribution – a function that will be essential for guiding artificial-intelligence-driven marketing of the future. Let’s take a look at what they do.

First a bit of background: Wizaly is a spin-off of Paris-based performance marketing agency ESV Digital (formerly eSearchVision). The agency’s performance-based perspective meant it needed to optimize spend across the entire customer journey, not simply use first- or last-click attribution approaches which ignore intermediate steps on the path to purchase. Wizaly grew out of this need.

Wizaly’s basic approach to attribution is to assemble a history of all messages seen by each customer, classify customers based on the channels they saw, compare results of customers whose experience differs by just one channel, and attribute any difference in results to that channel. For example, one group of customers might have seen messages in paid search, organic search, and social; another might have seen messages in those channels plus display retargeting. Any difference in performance would be attributed to display retargeting.

This is a simplified description; Wizaly is also aware of other attributes such as the profiles of different customers, traffic sources, Web site engagement, location, browser type, etc. It apparently factors some or all of these into its analysis to ensure it is comparing performance of otherwise-similar customers. It definitely lets users analyze results based on these variables so they can form their own judgements.
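The core compare-mixes-differing-by-one-channel logic can be sketched in a few lines. This is a minimal illustration with invented journey data, not Wizaly's actual method; the channel names and numbers are hypothetical.

```python
from collections import defaultdict

# Hypothetical journey data: (channels seen, converted?) per customer.
journeys = [
    ({"paid_search", "organic", "social"}, False),
    ({"paid_search", "organic", "social"}, True),
    ({"paid_search", "organic", "social", "retargeting"}, True),
    ({"paid_search", "organic", "social", "retargeting"}, True),
]

# Conversion rate for each distinct channel mix.
stats = defaultdict(lambda: [0, 0])  # mix -> [conversions, customers]
for channels, converted in journeys:
    key = frozenset(channels)
    stats[key][0] += converted
    stats[key][1] += 1

# Compare pairs of mixes that differ by exactly one channel and
# credit that channel with the lift in conversion rate.
lift = {}
mixes = list(stats)
for a in mixes:
    for b in mixes:
        extra = b - a
        if len(extra) == 1 and a < b:  # b adds exactly one channel to a
            (channel,) = extra
            rate_a = stats[a][0] / stats[a][1]
            rate_b = stats[b][0] / stats[b][1]
            lift[channel] = rate_b - rate_a

print(lift)  # e.g. {'retargeting': 0.5}
```

In practice the comparison would control for the customer attributes described above, so that otherwise-similar groups are being compared rather than raw mixes.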

Wizaly gets its data primarily from pixels it places on ads and Web pages. These drop cookies to track customers over time and can track ads that are seen, even if they’re not clicked, as well as detailed Web site behaviors. The system can incorporate television through an integration with Realytics, which correlates Web traffic with when TV ads are shown. It can import ad costs and ingest offline purchases to use in measuring results. The system can stitch together customer identities using known identifiers. It can also do some probabilistic matching based on behaviors and connection data and will supplement this with data from third-party cross device matching specialists.

Reports include detailed traffic analysis, based on the various attributes the system collects; estimates of the importance and effectiveness of each channel; and recommended media allocations to maximize the value from ad spending.  The system doesn't analyze the impact of message or channel sequence, compare the effectiveness of different messages, or estimate the impact of messages on long-term customer outcomes. Given its reliance on cookies, it also has a partial blind spot for mobile – a major concern, given how important mobile has become – and other gaps for offline channels and results. These are problems for most algorithmic attribution products, not just Wizaly.

One definite advantage of Wizaly is price: at $5,000 to $15,000 per month, it is generally cheaper than better-known competitors. Pricing is based on traffic monitored and data stored. The company was spun off from ESV Digital in 2016 and currently has close to 50 clients worldwide.

Saturday, October 07, 2017

Attribution Will Be Critical for AI-Based Marketing Success


I gave my presentation on Self-Driving Marketing Campaigns at the MarTech conference last week. Most of the content followed the arguments I made here a couple of weeks ago, about the challenges of coordinating multiple specialist AI systems. But prepping for the conference led me to refine my thoughts, so there are a couple of points I think are worth revisiting.

The first is the distinction between replacing human specialists with AI specialists, and replacing human managers with AI managers. Visually, the first progression looks like this as AI gradually takes over specialized tasks in the marketing department:



The insight here is that while each machine presumably does its job much better than the human it replaces,* the output of the team as a whole can’t fundamentally change because of the bottleneck created by the human manager overseeing the process. That is, work is still organized into campaigns that deal with customer segments because the human manager needs to think in those terms. It’s true that the segments will keep getting smaller, the content within each segment more personalized, and more tests will yield faster learning. But the human manager can only make a relatively small number of decisions about what the robots should do, and that puts severe limits on how complicated the marketing process can become.

The really big change happens when that human manager herself is replaced by a robot:



Now, the manager can also deal with more-or-less infinite complexity. This means we no longer need campaigns and segments and can truly orchestrate treatments for each customer as an individual. In theory, the robot manager could order her robot assistants to create custom messages and offers in each situation, based on the current context and past behaviors of the individual human involved. In essence, each customer has a personal robot following her around, figuring out what’s best for her alone, and then calling on the other robots to make it happen. Whether that's a paradise or nightmare is beyond the scope of this discussion.

In my post a few weeks ago, I was very skeptical that manager robots would be able to coordinate the specialist systems any time soon.  That now strikes me as less of a barrier.  Among other reasons, I’ve seen vendors including Jivox and RevJet introduce systems that integrate large portions of the content creation and delivery workflows, potentially or actually coordinating the efforts of multiple AI agents within the process. I also had an interesting chat with the folks at Albert.ai, who have addressed some of the knottier problems about coordinating the entire campaign process. These vendors are still working with campaigns, not individual-level journey orchestration. But they are definitely showing progress.

As I've become less concerned about the challenges of robot communication, I've grown more concerned about robots making the right decisions.  In other words, the manager robot needs a way to choose what the specialist robots will work on so they are doing the most productive tasks. The choices must be based on estimating the value of different options.  Creating such estimates is the job of revenue attribution.  So it turns out that accurate attribution is a critical requirement for AI-based orchestration.

That’s an important insight.  All marketers acknowledge that attribution is important but most have focused their attention on other tasks in recent years.  Even vendors that do attribution often limit themselves to assigning user-selected fractions of value to different channels or touches, replacing the obviously-incorrect first- and last-touch models with less-obviously-but-still-incorrect models such as “U-shaped”, “W-shaped”,  and “time decay”.  All these approaches are based on assumptions, not actual data.  This means they don’t adjust the weights assigned to different marketing messages based on experience. That means the AI can’t use them to improve its choices over time.

There are a handful of attribution vendors who do use data-driven approaches, usually referred to as “algorithmic”. These include VisualIQ (just bought by Nielsen), MarketShare Partners (owned by Neustar since 2015), Convertro (bought in 2014 by AOL, now Verizon), Adometry (bought in 2014 by Google and now part of Google Analytics), Conversion Logic, C3 Metrics, and (a relatively new entrant) Wizaly. Each has its own techniques but the general approach is to compare results for buyers who take similar paths, and attribute differences in results to the differences between their paths. For example: one group of customers might have interacted in three channels and another interacted in the same three channels plus a fourth. Any difference in results would be attributed to the fourth channel.

Truth be told, I don’t love this approach.  The different paths could themselves be the result of differences between customers, which means exposure to a particular path isn’t necessarily the reason for different results. (For example, if good buyers naturally visit your Web site while poor prospects do not, then the Web site isn’t really “causing” people to buy more.  This means driving more people to the Web site won’t improve results because the new visitors are poor prospects.) 

Moreover, this type of attribution applies primarily to near-term events such as purchases or some other easily measured conversion.  Guiding lifetime journey orchestration requires something more subtle.  This will almost surely be based on a simulation model or state-based framework describing influences on buyer behavior over time. 

But whatever the weaknesses of current algorithmic attribution methods, they are at least based on actual behaviors and can be improved over time.  And even if they're not dead-on accurate, they should be directionally  correct. That’s good enough to give the AI manager something to work with as it tells the specialist AIs what to do next.  Indeed, an AI manager that's orchestrating contacts for each individual will have many opportunities to conduct rigorous attribution experiments, potentially improving attribution accuracy by a huge factor.

And that's exactly the point.  AI managers will rely on attribution to measure the success of their efforts and thus to drive future decisions.  This changes attribution from an esoteric specialty to a core enabling technology for AI-driven marketing.  Given the current state of attribution, there's an urgent need for marketers to pay more attention and for vendors to improve their techniques. So if you haven’t given attribution much thought recently, it’s a good time to start.

__________________________________________________________________________
* or augments, if you want to be optimistic.

Thursday, September 28, 2017

Customer Data Platforms Spread Their Wings

I escaped from my cave this week to present at two conferences: the first-ever “Customer Data Platform Summit” hosted by AgilOne in Los Angeles, preceding Shop.org, and the Technology for Marketing conference in London, where BlueVenn sponsored me. I listened as much as I could along the way to find out what’s new with the vendors and their clients. There were some interesting developments.
  • Broader awareness of CDP. The AgilOne event was invitation-only while the London presentation was open to any conference attendee, although BlueVenn did personally invite companies it wanted to attend. Both sets of listeners were already aware of CDPs, which isn’t something I’d expect to have seen a year or two ago. Both also had a reasonable notion of what a CDP does. But they still seemed to need help distinguishing CDPs from other types of systems, so we still have plenty more work to do in educating the market.

  • Use of CDPs beyond marketing. People in both cities described CDPs being bought and used throughout client organizations, sometimes after marketing was the original purchaser and sometimes as a corporate project from the start. That was always a possibility, but it’s delightful to hear about it actually happening. The more widely a CDP is used in a company, the more value the buyer gets – and the more benefit to the company’s customers. So hooray for that.

  • CDPs in vertical markets. The AgilOne audience were all retailers, not surprisingly given AgilOne’s focus and the relation of the event to Shop.org. But I heard in London about CDPs in financial services, publishing, telecommunications, and several other industries where CDP hasn’t previously been used much. More evidence of the broader awareness and the widespread need for the solution that CDP provides.

  • CDP for attribution. While in London I also stopped by the office of Fospha, another CDP vendor, which has just become a sponsor of the CDP Institute. They are unusual in having a focus on multi-touch attribution, something we’ve seen in a couple of other CDPs but definitely less common than campaign management or personalization. That caught my attention because I just finished an analysis of artificial intelligence in journey orchestration, in which one major conclusion was that multi-touch attribution will be a key enabling technology. That needs a blog post of its own to explain, but the basic reason is that AI needs attribution (specifically, estimating the incremental value of each marketing action) as a goal to optimize against when it's comparing investments in different marketing tasks (content, media, segmentation, product, etc.).

If there's a common thread here, it's that CDPs are spreading beyond their initial buyers and applications.  I’ll be presenting next week at yet another CDP-focused event, this one sponsored by BlueConic in advance of the Boston Martech Conference. Who knows what new things we'll see there?

Saturday, September 16, 2017

Vizury Combines Web Page Personalization with a Customer Data Platform

One of the fascinating things about tracking Customer Data Platforms is the great variety among the vendors.

It’s true that variety causes confusion for buyers. The CDP Institute is working to ease that pain, most recently with a blog discussion you’re welcome to join here.  But for me personally, it’s been endlessly intriguing to trace the paths that vendors have followed to become CDPs and learn where they plan to go next.

Take Vizury, a Bangalore-based company that started eight years ago as a retargeting ad bidding platform. That grew into a successful business with more than 200 employees, 400 clients in 40 countries, and $30 million in funding. As it developed, the company expanded its product and, in 2015, released its current flagship, Vizury Engage, an omnichannel personalization system sold primarily to banks and insurance companies. Engage now has more than a dozen enterprise clients in Asia, expects to double that roster in the next six months, and is testing the waters in the U.S.

As often happens, Vizury’s configuration reflects its origins. In their case, the most obvious impact is on the scope of the system, which includes sophisticated Web page personalization – something very rare in the CDP world at large. In a typical implementation, Vizury builds the client’s Web site home page.  That gives it complete control of how each visitor is handled. The system doesn't take over the rest of the client's Web site, although it can inject personalized messages on those pages through embedded tags.

In both situations, Vizury identifies known visitors by reading a hashed (i.e., disguised) customer ID it has placed in the visitor’s browser cookie. When a visitor enters the site, a Vizury tag sends the hashed ID to the Vizury server, which looks up the customer, retrieves a personalized message, and sends it back to the browser.  The messages are built from templates, which can include variables such as first name and calculated values such as a credit limit.  Customer-specific versions may be pregenerated to speed response; these are updated as new data is received about each customer. It takes ten to fifteen seconds for new information to make its way through the system and be reflected in output seen by the visitor.
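The lookup-and-fill flow might be sketched like this. All IDs, profile fields, and the template are hypothetical illustrations, not Vizury's actual implementation.

```python
import hashlib

# Hypothetical profile store keyed by a hashed customer ID.
profiles = {}

def hash_id(customer_id: str) -> str:
    """Disguise the raw customer ID before storing it in a browser cookie."""
    return hashlib.sha256(customer_id.encode()).hexdigest()

profiles[hash_id("cust-123")] = {"first_name": "Priya", "credit_limit": 50000}

# Template with a name variable and a calculated value.
TEMPLATE = "Hi {first_name}, you are pre-approved for a {credit_limit:,} limit."

def render_message(hashed_id: str) -> str:
    """Look up the profile for the hashed ID from the cookie and fill the template."""
    profile = profiles.get(hashed_id)
    if profile is None:
        # Unknown visitor: fall back to a generic message.
        return "Welcome! Sign in for personalized offers."
    return TEMPLATE.format(**profile)

print(render_message(hash_id("cust-123")))
# Hi Priya, you are pre-approved for a 50,000 limit.
```

Pregenerating these renderings per customer, as described above, trades storage for response time; the rendered message is then refreshed whenever new customer data arrives.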

Message templates are embedded in what Vizury calls an engagement, which is associated with a segment definition and can include versions of the same message for different channels. One intriguing strength of Vizury is machine-learning-based propensity models that determine each customer’s preferred channel. This lets Vizury send outbound messages through the customer’s preferred channel when there’s a choice. Outbound options include email, SMS, Facebook ads, and programmatic display ads. These can be sent on a fixed schedule or be triggered when the customer enters or leaves a segment. Bids for Facebook and display ads can be managed by Vizury’s own bidding engine, another vestige of its origins. Inbound options include on-site and browser push messages.
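Once the propensity models have scored each customer, picking the preferred channel reduces to an argmax over the scores, restricted to whatever channels a given engagement supports. A sketch with invented scores follows; the customer IDs, channels, and numbers are illustrative only.

```python
# Hypothetical propensity scores per customer: estimated probability
# that the customer responds in each channel (e.g., from ML models).
propensities = {
    "cust-1": {"email": 0.12, "sms": 0.30, "facebook": 0.05},
    "cust-2": {"email": 0.25, "sms": 0.10, "facebook": 0.22},
}

def preferred_channel(customer_id, available=None):
    """Pick the highest-propensity channel, optionally restricted to
    the channels available for a particular message."""
    scores = propensities[customer_id]
    if available is not None:
        scores = {ch: p for ch, p in scores.items() if ch in available}
    return max(scores, key=scores.get)

print(preferred_channel("cust-1"))                                   # sms
print(preferred_channel("cust-1", available={"email", "facebook"}))  # email
```

A real system would layer frequency caps and suppression rules (as described below) on top of this raw channel choice.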

If a Web visitor is eligible for multiple messages, Vizury currently just picks one at random. The vendor is working on an automated optimization system that will pick the best message for each customer instead. There’s no way to embed a sequence of different messages within a given engagement, although segment definitions could push customers from one engagement to the next. Users do have the ability to specify how often a customer will be sent the same message, block messages the customer has already responded to, and limit how many total messages a customer receives during a time period.

What makes Vizury a CDP is that it builds and exposes a unified, persistent customer database. This collects data through Vizury's own page tags, API, and mobile SDK; external tag managers; and batch file loads.  Data is unified with deterministic methods including stitching of multiple identifiers provided by customers and of multiple applications on the same device. The system can do probabilistic cross-device matching but that's not reliable enough for most financial service applications.  Vizury doesn’t do fuzzy matching based on customer names and addresses, which is not a common technique in Asia.
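Deterministic stitching of this kind is commonly implemented with a union-find structure: any two identifiers observed together in one record (say, an email paired with a device ID at login) are merged into the same customer cluster. A minimal sketch with hypothetical identifiers, not Vizury's actual code:

```python
# parent maps each identifier to its representative; an identifier
# that maps to itself is the root of its cluster.
parent = {}

def find(x):
    """Return the cluster representative for identifier x."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing identifiers a and b."""
    parent[find(a)] = find(b)

# Hypothetical observations that link identifiers.
observations = [
    ("email:ann@example.com", "device:ios-abc"),
    ("device:ios-abc", "cookie:xyz"),
    ("email:bob@example.com", "device:android-123"),
]
for a, b in observations:
    union(a, b)

# Ann's email and the cookie end up in one cluster via the shared device.
print(find("email:ann@example.com") == find("cookie:xyz"))  # True
```

Probabilistic matching adds uncertain links on top of this, which is why, as noted above, it may not be reliable enough for financial services use cases.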

The system includes standard machine learning algorithms that predict product purchase, app uninstalls, and message fatigue in addition to channel preference and ad bidding. Results can be applied to tasks other than personalization, such as lead scoring.  Algorithms are adapted for each industry and trained on the client’s own data. Users can't currently apply machine learning to other tasks.

Vizury uses a typical big data stack including Hadoop, Hive, Pig, HBase, Flume, and Kafka. Clients can access the data directly through Hadoop or HBase.  Standard reports show results by experience, segment, and channel, and users can create custom reports as well.


Pricing for Vizury is based on the number of impressions served, another echo of its original business. Enterprise clients pay upwards of $20,000 per month, although U.S. pricing could be different.





Friday, September 08, 2017

B2B Marketers Are Buying Customer Data Platforms. Here's Why.

I’m currently drafting a paper on use of Customer Data Platforms by B2B SaaS marketers.  The topic is more intriguing than it sounds because it raises the dual questions of  why CDPs haven’t previously been used much by B2B SaaS companies and what's changed.  To build some suspense, let’s first review who else has been buying CDPs.

We can skip over the first 3.8 billion years of life on earth, when the answer is no one. When true CDPs first emerged from the primordial ooze, their buyers were concentrated among B2C retailers. That’s not surprising, since retailers have always been among the most data-driven marketers. They’re the R in BRAT (Banks, Retailers, Airlines, Telcos), the mnemonic I’ve long used to describe the core data-driven industries*.

What's more surprising is that the B's, A's, and T's weren't also early CDP users.  I think the reason is that banks, airlines, and telcos all capture their customers’ names as part of their normal operations. This means they’ve always had customer data available and thus been able to build extensive customer databases without a CDP.

By contrast, offline retailers must work hard to get customer names and tie them to transactions, using indirect tools such as credit cards and loyalty programs. This means their customer data management has been less mature and more fragmented. (Online retailers do capture customer names and transactions operationally.  And, while I don’t have firm data, my impression is that online-only retailers have been slower to buy CDPs than their multi-channel cousins. If so, they're the exception that proves the rule.)

Over the past year or two, as CDPs have moved beyond the early adopter stage, more BATs have in fact started to buy CDPs.  As a further sign of industry maturity, we’re now starting to see CDPs that specialize in those industries. Emergence of such vertical systems is normal: it happens when demand grows in new segments because the basic concepts of a category are widely understood.  Specialization gives new entrants a way to sell successfully against established leaders.  Sure enough, we're also seeing new CDPs with other types of specialties, such as products from regional markets (France, India, and Australia have each produced several) and for small and mid-size organizations (not happening much so far, but there are hints).

And, of course, the CDP industry has always been characterized by an unusually broad range of product configurations, from systems that only build the central database to systems that provide a database, analytics, and message selection; that's another type of specialization.  I recently proposed a way to classify CDPs by function on the CDP Institute blog.** 

B2B is another vertical. B2B marketers have definitely been slow to pick up on CDPs, which may seem surprising given their frenzied adoption of other martech. I’d again explain this in part by the state of the existing customer data: the more advanced B2B marketers (who are the most likely CDP buyers) nearly all have a marketing automation system in place. The marketers' initial assumption would be that marketing automation can assemble a unified customer database, making them uninterested in exploring a separate CDP.  Eventually they'd discover that nearly all B2B marketing automation systems are very limited in their data management capabilities.  That’s happening now in many cases – and, sure enough, we’re now seeing more interest among B2B marketers in CDPs.

But there's another reason B2B marketers have been uncharacteristically slow adopters when it comes to CDPs.  B2B marketers have traditionally focused on acquiring new leads, leaving the rest of the customer life cycle to sales, account, and customer success teams.  So B2B marketers didn't need the rich customer profiles that a CDP creates.  Meanwhile, the sales, account and customer success teams generally worked with individual and account records stored in a CRM system, so they weren't especially interested in CDPs either.  (That said, it’s worth noting that customer success systems like Gainsight and Totango were on my original list of CDP vendors.)

The situation in B2B has now changed.  Marketers are taking more responsibility for the entire customer life cycle and work more closely with sales, account management, and customer success teams. This pushes them to look for a complete customer view that includes data from marketing automation, CRM, and additional systems like Web sites, social media, and content marketing. That quest leads directly to CDP.

Can you guess who's leading that search?  Well, which B2B marketers have been the most active martech adopters? That’s right: B2B tech marketers in general and B2B SaaS product marketers in particular. They’re the B2B marketers who have the greatest need (because they have the most martech) and the greatest inclination to try new solutions (which is why they ended up with the most martech). So it’s no surprise they’re the earliest B2B adopters of CDP too.

And do those B2B SaaS marketers have special needs in a CDP?  You bet.  Do we know what those needs are?  Yes, but you’ll have to read my paper to find out.

_______________________________________________________
*It might more properly be FRAT, since Banking really stands for all Financial services including insurance, brokers, investment funds, and so on.  Similarly, Airlines represents all of travel and hospitality, while Telco includes telephone, cable, and power utilities and other subscription networks.  We should arguably add healthcare and education as late arrivals to the list.  That would give us BREATH.  Or, better still, replace Banks with Financial Services and you get dear old FATHER.

**It may be worth noting that part of the variety is due to the differing origins of CDP systems, which often started as products for other purposes such as tag management, big data analytics, and campaign management.   That they've all ended up serving roughly the same needs is a result of convergent evolution (species independently developing similar features to serve a similar need or ecological niche) rather than common origin (related species become different over time as they adapt to different situations).  You could look at new market segments as new ecological niches, which are sometimes filled by specialized variants of generic products and are other times filled by tangentially related products adapting to a new opportunity.

My point here is there are two separate dynamics at play: the first is market readiness and the second is vendor development.  Market readiness is driven by reasons internal to the niche, such as the types of customer data available in an industry.  Vendor development is driven by vendor capabilities and resources.  One implication of this is that vendors from different origins could end up dominating different niches; that is, there's no reason to assume a single vendor or standard configuration will dominate the market as a whole.  Although perhaps market segments served by different configurations are really separate markets.

Thursday, August 31, 2017

AgilOne Adds New Flexibility to An Already-Powerful Customer Data Platform


It’s more than four years since my original review of AgilOne, a pioneering Customer Data Platform. As you might imagine, the system has evolved quite a bit since then. In fact, the core data management portions have been entirely rebuilt, replacing the original fixed data model with a fully configurable model that lets the system easily adapt to each customer.

The new version uses a bouquet of colorfully-named big data technologies (Kafka, Parquet, Impala, Spark, Elastic Search, etc.) to support streaming inputs, machine learning, real time queries, ad hoc analytics, SQL access, and other things that don’t come naturally to Hadoop. It also runs on distributed processors that allow fast scaling to meet peak demands. That’s especially important to AgilOne since most of its clients are retailers whose business can spike sharply on days like Black Friday.

In other ways, though, AgilOne is still similar to the system I reviewed in 2013. It still provides sophisticated data quality, postal processing, and name/address matching, which are often missing in CDPs designed primarily for online data. It still has more than 300 predefined attributes for specialized analytics and processing, although the system can function without them. It still includes predictive models and provides a powerful query builder to create audience segments. Campaigns are still designed to deliver one message, such as an email, although users could define campaigns with related audiences to deliver a sequence of messages. There’s still a “Customer360” screen to display detailed information about individual customers, including full interaction history.

But there’s plenty new as well. There are more connectors to data sources, a new interface to let users add custom fields and calculations for themselves, and workflow diagrams to manage data processing flows. Personalization has been enhanced and the system exposes message-related data elements including product recommendations and the last products browsed, purchased, and abandoned. AgilOne now supports Web, mobile, and social channels and offers more options for email delivery. A/B tests have been added, while analytics and reporting have been enhanced.

What should be clear is that AgilOne has an exceptionally broad (and deep) set of features. This puts it at one end of the spectrum of Customer Data Platforms. At the other end are CDPs that build a unified, sharable customer database and do nothing else. In between are CDPs that offer some subset of what AgilOne offers: advanced identity management, offline data support, predictive analytics, segmentation, multi-channel campaigns, real time interactions, advanced analytics, and high scalability. This variety is good for buyers, since it means there’s a better chance they can find a system that matches their needs. But it’s also confusing, especially for buyers who are just learning about CDPs and don’t realize how much they can differ. That confusion is something we’re worrying about a lot at the CDP Institute right now. If you have ideas for how to deal with it, let me know.