Why I’m Not Satisfied with a Perfect NPS

After nearly two decades of delivering training and enablement services—from graduate training to Scrum, Agile and DevOps culture—I’ve learned that immediate feedback metrics like NPS (Net Promoter Score) are only part of the equation. Last week, I received a perfect NPS of +100 for an enablement session I delivered. That’s unusual: it means absolutely everyone would strongly recommend the training, and I should have been thrilled. But the truth is, I wasn’t. I’ve been pondering this for a while, and the more I think about it, the more I’m convinced that NPS can be a vanity metric, especially for short engagements like training sessions or immersive enablements. NPS alone doesn’t tell me whether the training truly made a difference, and I believe there are other, better metrics that indicate the value of training and enablement services.

Why NPS Falls Short in Training Contexts

NPS is designed to capture a snapshot of customer satisfaction by asking one simple question: "How likely are you to recommend this product or service to someone else?" It’s a useful metric for gauging initial reactions, but it doesn’t tell the whole story when it comes to training and enablement. Here’s why:

  • Enthusiasm Is Fleeting: A high NPS score often reflects the excitement and enthusiasm participants feel immediately after a session. That’s great, but enthusiasm doesn’t always translate into long-term impact. 🚀
  • Confidence Is Harder to Measure: While enthusiasm is important, confidence is critical. Do participants leave the session feeling not just excited, but equipped and confident to apply what they’ve learned? This is a harder, but far more valuable, metric to capture. 😎
  • Impact Takes Time: The true measure of a successful training session isn’t how participants feel immediately afterward, but the impact they’re able to create weeks and months later. This is the metric that really matters, and it’s one that NPS simply doesn’t capture. 🙌
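For reference, the arithmetic behind the score is simple. Here is a minimal sketch (not tied to any particular survey tool): respondents answering 9–10 count as promoters, 0–6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors.

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors rate 0-6 (7-8 are passives).
    NPS = %promoters - %detractors, so it ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# A perfect +100 requires every single respondent to be a promoter:
print(nps([9, 10, 10, 9, 10]))  # → 100
```

This is exactly why +100 is so rare: a single passive or detractor in the room drags the score down, which is also why the score says more about the mood in the room than about lasting impact.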

A Better Way to Measure Training Success

Once I’d realised that NPS can fall short, I started experimenting with different ways to measure and reflect the true value of training and enablement services.

1. Target Outcomes

The first step towards a new measure was to look at the target outcomes I always define for the engagements we run. I identified key measures that I believed would reflect the successful delivery of those outcomes. These included:

  • Quantifying Coverage: Ensuring that all the material was covered comprehensively.
  • Incorporating Proven Formulas: Leveraging techniques and practices that I’ve refined over years of experience to create an effective enablement experience.

However, this approach felt too much like a checklist—focused more on inputs I could control rather than the true impact. While this method would likely yield a high score, it didn’t fully capture the true measure of success I was aiming for.

Example Target Outcomes recorded in Stellafai

2. Measuring Confidence, Enthusiasm, and Fun

Next, I shifted focus to the outputs and feedback I aimed to generate by the end of the enablement sessions. I concentrated on three key things that I hoped to generate during my time with the participants:

  • Enthusiasm: Usually high, this reflects the energy and excitement participants feel during the sessions.
  • Confidence: More challenging to achieve, but crucial for ensuring participants feel capable of applying what they've learned.
  • Fun: Because if the training isn’t enjoyable, what’s the point?

These metrics provided a more realistic view of the session’s immediate impact: high levels of enthusiasm and fun, and slightly more reserved confidence scores—understandable given the short time we had together. I also started tracking individual activities, the sessions we run and the practices we teach, to see whether any patterns of success emerged.

However, I realised these measures still missed the most critical element: what happens after the enablement. The real value isn’t in what participants feel in the room, but in the lasting impact and behaviour change in the weeks and months that follow. None of these scores addressed that long-term impact.

3. Tracking long-term change with a Key Result in Stellafai

So I’ve started tracking whether the training content truly lands and embeds itself within the organisation, leading to sustained behaviour change. This means monitoring:

  • Behavioural Shifts: Are the behaviours and practices taught being adopted and maintained within the organisation?
  • Value and Impact: Does the training deliver the value anticipated in the original business case?

Immediately after the course, these metrics are expected to be zero because the impact hasn’t yet had time to manifest. However, by staying connected and continuing to monitor these lagging indicators, I can better assess the true, long-term impact of the training.

So I’m now looking at an NPS of +100, but my own weighted score, which reflects long-term success, sits below 50. Some might see that as falling short, not even halfway to perfect. But it may actually be a more accurate reflection of where we stand in driving real change.
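To make the gap between the two numbers concrete, here is an illustrative sketch. The weights and scores below are hypothetical, not my actual scoring model; the point is that a blend which puts half its emphasis on long-term impact will sit well below the in-room feedback immediately after a session, when the lagging indicator is still zero.

```python
# Hypothetical weights emphasising long-term impact over in-room feedback.
WEIGHTS = {"enthusiasm": 0.2, "confidence": 0.2, "fun": 0.1, "long_term_impact": 0.5}

def weighted_score(metrics):
    """Blend 0-100 metric scores into a single weighted 0-100 score."""
    return round(sum(WEIGHTS[name] * value for name, value in metrics.items()))

# Just after the session: strong in-room feedback, but the lagging
# indicator of long-term impact hasn't had time to register yet.
print(weighted_score({
    "enthusiasm": 95,
    "confidence": 70,
    "fun": 90,
    "long_term_impact": 0,
}))  # → 42
```

As the lagging indicators are re-measured in the weeks and months that follow, the long-term component fills in and the weighted score climbs towards a figure that actually reflects sustained change.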

Looking for help running engagements or improving your own NPS score? Get in touch here - we’d love to help 🚀