Thursday, March 16, 2017

Enough with the NPS surveys!

How likely would you be to recommend [our widget] to a friend, family member or colleague?

Please use a 0-10 scale, where 0 means “I’m not at all likely to recommend” and 10 means “I’m extremely likely to recommend”

You’ve probably seen this question – or a variant of it – many times: after you’ve made a purchase, after interacting with a Brand, or simply as an existing customer of a service.

It’s called NPS® - or Net Promoter Score® - and is the most widely used measure of customer experience in the business world.
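For anyone who hasn’t calculated one: responses on the 0–10 scale are bucketed into Detractors (0–6), Passives (7–8) and Promoters (9–10), and the score is the percentage of Promoters minus the percentage of Detractors, giving a figure between -100 and +100. A minimal sketch:

```python
def net_promoter_score(ratings):
    """NPS from a list of 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        raise ValueError("no responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Five responses: two promoters, one passive, two detractors -> score 0
print(net_promoter_score([10, 9, 8, 6, 3]))
```

Note that Passives drop out of the numerator entirely – which is one reason the verbatim comments often tell you more than the headline number.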

The question above measures customer satisfaction with a product, but there are all kinds of similar measures, such as transactional questions asking how well a specific staff member met your needs as a consumer.

Measuring and analysing the feedback from your input is what keeps some of us in widgets.

It results in lots of shiny looking presentations, comparisons to other brands and industries, acts as a key input to decision making and is used to measure performance of individuals, products and Brands as a whole – including when it comes to their rewards.

As with any other measure, the output is only as good as the quality of the inputs – and for some time I’ve thought that the quality of the inputs has been really variable, due to the problem of over-surveying.

Here’s my e-mail surveys folder:

While some of these are Brands seeking online reviews (many of which are keen for positive reports to go onto sites like TripAdvisor - but want you to get in touch with them direct and not publicise the issue if you’ve had a problem), they’re all measuring their Customer Experience.

Not only are there so many of them, they could improve their execution:

  • Don’t ask about your experience of the product (as opposed to the sales process) as you close the sale at checkout – the product hasn’t even arrived, let alone been evaluated by the consumer
  • Send fewer reminders (food establishments seem particularly bad at over-reminding)
  • Some don’t offer the ability to opt out – resulting in very poor-quality inputs and brands being marked as spammers
  • Be wary of incentivising positive reports and social media sharing – it skews results
  • Cut down overly long surveys with too many questions – they lead people to click the same number for every ranking question and give little consideration to the verbatim feedback

Can we also do something about the scheduling of surveys as an industry?

In general, a very small number of specialist agencies undertake this research on behalf of Brands, so why don’t we adopt a scheduling model that sends each recipient at most X (4?) survey messages per month across an agency’s entire client base? (The agencies could manage this without any DPA impact.)
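The mechanics of such a cap are straightforward. A rough sketch of what an agency-side throttle might look like – the class name, cap of 4 and 30-day rolling window are my illustrative assumptions, not an existing system:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class SurveyThrottle:
    """Hypothetical agency-wide frequency cap: at most `cap` survey
    invitations per recipient per rolling window, counted across
    every client Brand the agency serves."""

    def __init__(self, cap=4, window_days=30):
        self.cap = cap
        self.window = timedelta(days=window_days)
        self.sent = defaultdict(list)  # recipient -> list of send timestamps

    def may_send(self, recipient, now=None):
        now = now or datetime.utcnow()
        # Drop sends that have aged out of the rolling window
        recent = [t for t in self.sent[recipient] if now - t < self.window]
        self.sent[recipient] = recent
        return len(recent) < self.cap

    def record_send(self, recipient, now=None):
        self.sent[recipient].append(now or datetime.utcnow())
```

The key design point is that the counter is keyed on the recipient, not the client Brand – so a consumer who buys from three of the agency’s clients in one week still sees at most the capped number of surveys.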

This would result in better measurement, more meaningful feedback (the words are more important than the numbers), better consideration of Brands, and fewer unsubscribes and spam reports.

After all, if Customer Experience professionals as consumers themselves tire of entering feedback then maybe it’s time for change.
