New research from Anthropic's Education team reveals something counterintuitive: when AI produces polished-looking outputs, people stop questioning them. Even though users invest more effort upfront directing the work, they become less critical once they see something that looks finished.
For those of us building customer education programs, this should set off alarm bells.
The Pattern We're Seeing
Anthropic analyzed nearly 10,000 conversations to understand how people develop "AI fluency"—the skills needed to collaborate effectively with AI tools. What they found challenges some assumptions about how AI augments human work:
When AI creates artifacts (code, documents, presentations, apps), users are:
14.7 percentage points (pp) more likely to clarify their goals upfront
14.5pp more likely to specify formats
13.4pp more likely to provide examples
But they're also:
5.2pp less likely to identify missing context
3.7pp less likely to check facts
3.1pp less likely to question the reasoning
In other words: people become more directive at the start, but less evaluative at the end. The moment something looks done, critical thinking drops off.
Sound familiar?
The Customer Education Parallel
This mirrors a pattern many of us have seen in traditional customer education. When we deliver polished help docs, slick video tutorials, or comprehensive certification programs, customers often treat them as endpoints rather than starting points.
The production value creates an illusion of completeness. Customers assume: "This looks professional, so it must be everything I need." They stop questioning whether it actually solves their specific problem. They don't iterate. They don't ask follow-ups. They treat the resource as finished, and move on.
AI is amplifying this dynamic—and it's happening at scale.
Why This Matters Now
As AI becomes embedded in every product experience, your customers are simultaneously:
Using your product
Using AI tools to accelerate their work with your product
Consuming AI-generated education content about your product
Potentially using AI to interpret your education content
Each layer introduces new opportunities for the "polished output trap." A customer might use AI to quickly generate a dashboard in your analytics platform, see something that looks right, and never verify if it's actually measuring what they need. Or they might ask AI to summarize your documentation and miss critical context that would prevent them from making costly mistakes.
The stakes are higher because the speed is faster. Customers can now produce, in minutes, finished-looking work that would have taken days. But "finished-looking" and "actually finished" are very different things.
What Anthropic's Research Tells Us Works
The strongest pattern in the data is clear: iteration and refinement matter more than anything else.
Conversations where users iterated—building on previous exchanges rather than accepting the first response—showed 2.67x more AI fluency behaviors overall. These conversations were:
5.6x more likely to involve questioning the AI's reasoning
4x more likely to identify missing context
The research also found that only 30% of users explicitly tell AI how they want to collaborate. That means 70% of people never set expectations like "push back if my assumptions are wrong" or "walk me through your reasoning first."
Translation for customer education: We need to design programs that encourage ongoing dialogue, not one-time consumption.
Rethinking Customer Education for the AI Era
Here's what this means for how we build customer education programs:
1. Design for Questioning, Not Just Completion
Traditional education design optimizes for completion rates. Finished the course? Check. Passed the quiz? Check. Got the badge? Check.
But completion metrics don't tell us whether customers developed the judgment to know when something is wrong, incomplete, or missing context. We need to explicitly teach customers to interrogate solutions—AI-generated or otherwise.
This might look like:
Building "pause and verify" moments into learning paths
Teaching customers how to spot missing context in their own work
Creating exercises where the "right answer" is recognizing that you need more information
2. Build Iteration Infrastructure (aka Everboarding)
If iteration is the strongest predictor of AI fluency, customer education can't be front-loaded anymore. One-and-done onboarding creates the conditions for accepting first outputs without refinement.
Instead, we need continuous touchpoints that:
Bring customers back to refine their understanding
Surface new context as they use the product in different ways
Create natural moments to question whether their initial approach still serves them
This is what I call Everboarding—transforming education from a linear path into an ongoing system of value delivery.
3. Treat AI Fluency as a Core Customer Capability
Just like we help customers become fluent in using our products, we're now helping them become fluent in using AI with our products. This isn't optional—it's a competitive differentiator.
Companies that help customers develop AI fluency alongside product fluency will create:
More successful users (who achieve outcomes, not just completions)
Stickier relationships (because the company invested in capability, not just features)
Better advocacy (because customers can articulate the how, not just the what)
4. Model What "Good" Looks Like
If only 30% of users set collaboration expectations with AI, there's a massive opportunity to teach customers how to be more intentional about learning.
Your education content should:
Show iteration, not just final states
Demonstrate how to push back on first drafts
Make visible the questions experts ask that beginners don't
This means sometimes publishing "messy" learning content—showing the refinement process, not just the polished result.
The Bigger Shift: Education as Strategic Infrastructure
For years, many companies have treated customer education as a cost center—a way to deflect support tickets or check a "customer success" box. AI puts pressure on that model from two directions:
AI can now handle routine education delivery (answering basic questions, generating simple tutorials, creating first-draft content)
AI also creates new education challenges (the polished output trap, the need for critical evaluation skills, the complexity of AI-augmented workflows)
The companies that win won't be the ones that use AI to make cheaper versions of traditional education content. They'll be the ones that recognize education as the mechanism for developing customer judgment, critical thinking, and fluency in an AI-augmented world.
That's a strategic capability, not a support function.
What I'm Curious About
This research raises questions I'd love to explore with this community:
Are you seeing the "polished output trap" with your customers, where they accept AI-generated work without sufficient verification?
How are you designing for iteration in your education programs, not just completion?
What does "AI fluency" look like in your specific product context? How do you know when customers have developed it?
The shift is already happening. The question is whether we're building customer education that helps people navigate it—or whether we're just adding to the pile of polished outputs they won't question.
Learning by Design is written by Courtney Sembler. Courtney currently helps companies build scalable customer education programs. After spending over a decade scaling HubSpot Academy globally, she now explores the systems, strategies, and realities of workplace learning, leadership, and customer experience—the kind that drives retention, adoption, and revenue by design, not by accident. Published twice weekly with monthly deep dives. Connect with her on LinkedIn and subscribe to Learning by Design.
