In customer education, we learned a hard lesson: completion rates don't tell you much.
A course with a 95% completion rate might look successful in a dashboard. But if learners aren't applying what they learned, if they can't perform differently in their actual work, if the business outcomes haven't shifted - what did we actually accomplish?
We had to expand our thinking: stop measuring the performance of learning, and start measuring the performance that results from learning.
I think we're making the same mistake with leadership evaluation. And "executive presence" is where it shows up most clearly.
The Executive Presence Problem
Brené Brown calls it out in Strong Ground: "Executive presence is often code for 'you don't look or sound like what a leader is supposed to look and sound like.'"
If you've ever received feedback about your executive presence - especially if you're a woman - you know exactly what she means. The feedback is vague. Subjective. It's about tone, appearance, some ineffable quality you're supposed to embody but can't quite define.
"You need more executive presence." "Work on your executive presence." "They don't see you as having executive presence yet."
What does that actually mean? What are we measuring? What outcome are we trying to drive?
Usually, it means: you don't match the image we have in our heads of what a leader looks like.
It's a completion rate metric for leadership. Surface-level. Easy to observe. Completely disconnected from actual impact.
What Are We Actually Trying to Measure?
Here's the question we should be asking: What does this leader enable their organization to do that it couldn't do otherwise?
Not: Do they look like a leader? Not: Do they sound authoritative in meetings? Not: Do they perform confidence in high-stakes situations?
But: What shifts because they're in the room?
When we evaluate customer education programs, we've learned to ask:
Can learners perform the task in their actual environment?
Are they making different decisions than they made before?
Is the business outcome we care about moving?
What's the velocity of adoption? The quality of application?
These are impact questions. Performance questions. They require us to look beyond surface metrics to actual organizational capability.
We should be asking the same questions about leaders.
A Different Framework: Leadership Impact
What if instead of evaluating "executive presence," we evaluated leadership impact across dimensions that actually matter?
1. Decision Quality Under Uncertainty
Can this leader make sound calls when they don't have all the information? When the situation is moving fast? When the safe answer isn't available?
This isn't about being decisive for the sake of appearing confident. It's about the quality of decisions over time, particularly in ambiguous situations.
In customer education, we don't just measure whether someone completed the module on troubleshooting. We measure whether they can actually troubleshoot novel problems they haven't seen before.
Same principle. Can this leader navigate what they haven't encountered before?
2. Trust and Psychological Safety
Does this leader build environments where people do their best work? Where people surface problems early? Where the team takes intelligent risks?
You can measure this. You can see it in retention, in the quality of ideas that surface, in how quickly problems get escalated, in whether people are bringing their full capability to the work.
In customer education, we measure whether learners feel safe asking questions, whether they engage with challenging material, whether they persist through difficulty. These are leading indicators of actual learning.
In leadership, trust and psychological safety are leading indicators of actual performance.
3. Perspective Under Pressure
Brené Brown defines calm as "creating perspective and mindfulness while managing emotional reactivity."
Can this leader bring clarity to complicated situations? Can they feel heightened emotions without overreacting to them? Can they help the organization see what matters when everything feels urgent?
This is about emotional regulation, yes. But it's also about organizational capacity. Leaders who can't manage their own reactivity create chaos. Leaders who can manage it create space for the organization to think clearly.
We measure this in customer education all the time - can learners maintain clarity when they're troubleshooting a critical issue? When a customer is escalating? When the pressure is on?
Why wouldn't we measure it in leaders?
4. Developmental Impact
Who is this leader building? What capabilities are emerging in their organization that weren't there before?
You can see this in the growth trajectories of their people. In the quality of leaders they develop. In whether their team members go on to succeed in bigger roles.
In customer education, we talk about enabling learners to teach others - that's when you know learning has really stuck. Someone doesn't just know how to do something; they can bring others along.
Leadership works the same way. The best leaders build more leaders.
5. Organizational Movement
Is the organization more capable because this leader is in it? Can it move faster, think more clearly, execute more effectively, adapt more readily?
This is the ultimate outcome measure. Not what the leader looks like or sounds like. What the organization can do.
In customer education, we measure business outcomes - time to productivity, feature adoption, support ticket reduction, customer retention. These tell us whether the education program is actually moving the business.
In leadership evaluation, we should measure the same thing: Is the business more capable? Is it moving?
The Gender Dimension
Here's why this matters particularly for women: "executive presence" feedback is almost always coded.
Too direct. Too soft. Too emotional. Too assertive. Too collaborative. Not commanding enough. Too commanding.
The target keeps moving because the target isn't actually about capability. It's about matching an image.
Meanwhile, the impact measures I'm describing? They're gender-neutral.
Can you make quality decisions under uncertainty? Do you build trust? Can you manage reactivity and bring perspective? Are you developing capability in others? Is the organization more capable because you're in it?
These questions don't care what you look like or sound like. They care what you enable.
The Shift
We made this shift in customer education because we had to. Completion rates weren't predictive. They didn't tell us what we needed to know. They optimized for the wrong thing.
We need to make the same shift in leadership evaluation.
Stop measuring whether someone looks like a leader. Start measuring whether they enable the organization to perform like it has leadership.
Stop evaluating executive presence. Start evaluating leadership impact.
The organizations that figure this out first? They'll stop losing talented people who don't "look the part" but absolutely have the capability. They'll stop promoting people who perform leadership but can't actually do it. They'll build deeper benches and more adaptable organizations.
Just like in customer education: measure what matters. Measure impact. Measure the performance that results from the presence of this leader, not the performance of presence itself.
That's the opportunity.
Learning by Design is written by Courtney Sembler. Courtney currently helps companies build scalable customer education programs. After spending over a decade scaling HubSpot Academy globally, she now explores the systems, strategies, and realities of workplace learning, leadership, and customer experience—the kind that drives retention, adoption, and revenue by design, not by accident. Published twice weekly with monthly deep dives. Connect with her on LinkedIn and subscribe to Learning by Design.
