
Do your customers really like you better than before?

More companies these days are using the Net Promoter System (NPS) to gauge how well they’re doing in the eyes of their customers. They conduct short, frequent surveys, often after every significant interaction. They typically ask how likely (on a zero-to-10 scale) the customer would be to recommend the company or product, and what the primary reason is for the rating. The companies then classify customers as promoters (ratings of 9-10), passively satisfied (7-8) or detractors (0-6). The resultant Net Promoter Score—the percentage of promoters minus the percentage of detractors—helps companies track their performance and gauge the effectiveness of investments in building customer loyalty.
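
To make the arithmetic concrete, here is a minimal sketch of the score calculation described above. The ratings and counts are illustrative, not from any real survey:

```python
# Minimal sketch of the NPS calculation described above.
# ratings: the 0-to-10 likelihood-to-recommend answers from one survey wave.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)   # 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # 0-6
    # Passively satisfied customers (7-8) count toward the total
    # but cancel out of the numerator.
    return 100.0 * (promoters - detractors) / len(ratings)

# Illustrative wave: 50 promoters, 30 passives, 20 detractors out of 100.
ratings = [9] * 50 + [7] * 30 + [3] * 20
print(net_promoter_score(ratings))  # 30.0
```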

Once a company starts this process, scores often begin to climb. And it’s easy to start patting yourself and your colleagues on the back—wow, we’re doing better and better by our customers. But you shouldn’t accept these favorable results too readily.

For one thing, you need to know about competitors’ scores. Maybe there’s something going on in the industry that’s affecting everybody. The best way to learn where you stand relative to the competition—which, after all, is what matters most—is to sponsor an anonymous study of everybody’s customers, using public lists. These competitive benchmark studies gather likelihood-to-recommend data on everyone in the industry. Done repeatedly, they enable you to see relative standings and trend lines over time.

But let’s say you really do seem to be gaining, relative to competitors. Now it’s time to scrutinize your own methodology and procedures, because companies regularly run into some methodological pitfalls:

  • Random variation. Any questionnaire produces “noise”—statistically insignificant changes caused by random variation. Check your sample sizes and confidence intervals to gauge your study’s reliability. Be sure you know how many points of change are necessary to clear an 80% or 90% confidence level, given your sample size and starting score (a back-of-the-envelope sketch follows this list).
  • Changes in methodology from one period to the next. If you introduce different sampling methods or questions, you won’t have an apples-to-apples comparison. The same is true if you survey new customer segments or use different transactions to trigger the questionnaire. In any of these cases, changes in your scores are likely to be meaningless. One company even found that a different typeface on an email altered response rates and thus scores.
  • A policy or process change with unintended consequences. At one bank, a member of the risk-management team made a small change to a calculation of customer value. The change had unintended consequences for customer segmentation and, in turn, for the sampling methodology. NPS jumped, and the bank attributed the improvement to a recent operational initiative. Only later did executives discover that the improvement wasn’t real.
  • Score gaming. A handful of companies have found frontline employees begging for better scores, coaching their customers on how to respond to the survey or even interfering with survey administration. At a mobile telecom company’s retail stores, a few salespeople figured out how to substitute their own mobile number for the customer’s number in the survey system. They’d spend an hour each night filling out surveys about themselves—mostly nines and 10s, no doubt. At other companies, middle managers sometimes game the system by excluding or including certain types of customers. An auto insurer, for instance, decided to survey only those customers whose claims were paid, excluding people who withdrew claims or were denied. Another company excluded data from customers who had not paid their bill on time.
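
As promised in the first bullet, here is a back-of-the-envelope sketch of that sample-size check. It uses the normal approximation to the promoter/passive/detractor proportions; the shares, wave size and z-values below are illustrative assumptions, not prescriptions:

```python
import math

def min_detectable_change(p_promoter, p_detractor, n, z=1.645):
    """Roughly how many NPS points two equal survey waves of n responses
    must differ by before the change clears the given confidence level.
    z = 1.645 is ~90% two-sided; z = 1.282 is ~80% two-sided."""
    nps = p_promoter - p_detractor
    # Variance of one wave's NPS estimate (multinomial, normal approximation).
    var_one_wave = (p_promoter + p_detractor - nps**2) / n
    # Comparing two independent waves doubles the variance.
    return 100.0 * z * math.sqrt(2 * var_one_wave)

# Illustrative: starting NPS of 30 (50% promoters, 20% detractors),
# 400 responses per wave.
print(round(min_detectable_change(0.50, 0.20, 400), 1))           # ~9.1 points at ~90%
print(round(min_detectable_change(0.50, 0.20, 400, z=1.282), 1))  # ~7.1 points at ~80%
```

On this math, a jump of fewer than about nine points between two 400-response waves is well within noise at the 90% level—which is exactly why a sudden climb deserves scrutiny before celebration.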

Any time you notice a big jump in NPS, it pays to be skeptical. If the improvement reflects real changes, by all means learn what happened so you will know what to do more of. But be sure the source of the improvement doesn’t lie somewhere else. Otherwise, people will be drawing the wrong conclusions about the efficacy of their initiatives.
