lowercasing NPS

Countless CX and CRM optimisation projects have started with someone staring at an NPS score and wondering what to do next.

Sometimes NPS has been used effectively, for instance, at the end of a specific customer journey or right after a key touchpoint, when there’s a bit more context behind a simple score out of 10. But even then, it rarely sparks inspiration; it only diagnoses that for some, things went well, and for others, they didn’t.

We once helped a tech client run a series of workshops, having spent months designing hands-on product demos for prospective customers. The sessions were fully booked, engagement was high, and the demos showcased genuinely innovative use cases for the software.

At the end we wanted to know whether we should expand the program or pivot to another approach, so we went down the route all CX professionals do and asked:

How likely are you to recommend [product] to a friend or colleague?

We got a 6, and off the back of that score, there was only more confusion.

The team had spent significant time designing the experience and recruiting participants, but ultimately, nobody could pinpoint what actually needed to be fixed.

Was it decent for everyone but great for no one? Was it brilliant for half the audience and terrible for the other half? Did someone have a bad day? Did they not like the speakers? The other participants? The score told us nothing about what participants had just experienced, only that it could have been better.

Many years and many NPS surveys later, we realised that the problem isn’t the research method, it’s the metric itself. We aren’t asking the wrong follow-up questions; we’re asking the wrong primary one.

That quest for context that actually drives decisions led the team at lowercase to design and build TBX Action - a tool that lets us add emotional context to the questions we ask, so we can design solutions that actually solve problems.

On the 14th of August, lowercase invited some of Australia’s most well-known brands to get under the hood of the conversations we’ve been having with CX teams around Australia: to unpack what NPS means to their business, discuss our perspective on NPS, and explore what we’ve seen driving success in its place.

 

Our View on NPS: Six Fatal Flaws

During our workshop, we put forward a case for why NPS has become a liability rather than an asset. Leaning into and expanding on the work by our friends at Skyscanner, here’s where we have landed.


1. It’s Meaningless - NPS is an inwardly focused CX term; the top related Google search queries are “what is NPS?” and “what is a good NPS score?”, indicating endemic confusion about its purpose and interpretation.

2. It’s Misleading - The way NPS is calculated means the same NPS score can represent completely different customer realities.

3. It’s Volatile - Any shift between promoter/detractor groups gets double-counted, creating a chaotic metric that’s hard to design against.

4. It’s Biased - Cultural differences create false comparisons. At a macro level, different cultural codes dictate response patterns, while at a micro level, individual biases and rating tendencies impact responses, making cross-demographic comparisons difficult.

5. It Masks Progress - Movement within categories is invisible. A customer improving from 2 to 5 shows major progress, but the score will remain the same.

6. It’s a Poor Business Predictor - CSAT correlates better with business outcomes (.70 vs .57), yet NPS remains the metric businesses optimise for.
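The arithmetic behind several of these flaws is easy to demonstrate. NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (0–6), with passives (7–8) ignored. A quick sketch in Python, using made-up response data, shows flaws 2, 3, and 5 in practice:

```python
def nps(responses):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# Flaw 2 (Misleading): very different customer realities, identical score.
polarised = [10] * 50 + [0] * 30 + [8] * 20   # love-it-or-hate-it audience
lukewarm  = [9] * 30 + [5] * 10 + [8] * 60    # mostly indifferent audience
print(nps(polarised), nps(lukewarm))          # both 20

# Flaw 3 (Volatile): one response moving from 6 to 9 is double-counted --
# a detractor disappears AND a promoter appears, a 2-point swing per person.
print(nps([6] + [8] * 99), nps([9] + [8] * 99))  # -1 vs 1

# Flaw 5 (Masks Progress): detractors improving from 2 to 5 move the score nowhere.
print(nps([2] * 100), nps([5] * 100))            # both -100
```

The samples here are hypothetical, but the mechanics are exactly what any NPS dashboard computes: because all information inside each band is discarded, two audiences with opposite emotional realities can report the same number, and genuine movement inside a band reports none at all.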

 

The Cultural Bias Blindspot

One workshop participant challenged our cultural bias criticism: “NPS training accounts for this.” But that response perfectly illustrated the problem. If your metric requires cultural disclaimers and adjustment factors, we’re right to question its relevance as a universal business tool. Good metrics shouldn’t need disclaimers.

 

The Metric Problem

Customer Experience covers the entire journey, maintains the customer relationship, and holds the direct link to the voices of our customers—business assets that are incredibly valuable. Despite this, we know that CX teams are struggling to report back on the initiatives they are driving in a way that helps them unlock funding and broader validation as a core business unit.

Our theory is that we suffer from a metric problem. While other departments each own a section of the funnel, CX has to span the entirety of it. Marketing measures discoverability, Sales tracks units sold, Product counts features deployed, Support reduces complaints, while CX claims ownership of "all of it" but struggles to prove it through fragmented metrics.

Since CX covers such a huge amount of ground, the metrics it influences are not directly "owned" by CX. Instead, CX has become responsible for NPS, CSAT, and CES. These are static and vague metrics that lack a direct connection to the key metrics that concern the C-Suite.

This metric problem means that CX departments, despite owning crucial business assets and delivering initiatives that drive business impact, struggle to validate and portray this impact effectively.

Static Metrics Keep Us Stuck In The “What”

The fundamental issue we’ve found, through both our work with CX teams and the conversations on the day, is that traditional metrics tell you that something changed, never why it changed.

Static metrics show an outcome and by themselves provide little context for why they are the number they are. They have no emotional context behind them and no location context along the journey.

To provide a live validation of this, we had participants interact with our TBX Action tool to map their actual customer journey emotions versus their measured metrics.

From Static to Dynamic: The Power of “Why”

Adding conversational context to the “what” means we can get under the hood of why things are working or not working. Asking questions across the journey gives us the ammunition to diagnose the areas of improvement more effectively.

We're not saying abandon CX metrics entirely. Add the right context, though, and you're no longer designing blind; you have a framing for solving customer problems that simply wasn't available before.

 

The 2025 Reality

NPS was designed to work around the financial and technical roadblocks of 2003, but it's 2025: businesses are digital-first and can find operational efficiencies through AI-powered tools.

TBX Action uses AI to track a TBX score across the journey, providing both location and emotional context that gives static metrics the ability to drive impact at an organisational level.

What's Next

The companies winning in CX aren't those with the highest NPS scores; they're those who understand the emotional drivers behind customer behaviour.
