What Does It Mean to Be an Expert in the Age of AI?
We need a new vocabulary for the strange new competence AI is making possible.
A post has been making the rounds about a student who used NotebookLM to compress a semester of learning into 48 hours.
The setup was simple, almost offensively simple.
He uploaded multiple textbooks, research papers, and lecture transcripts, and then he asked a better class of question than most students ever ask:
What are the 5 core mental models every expert in this field shares?
Where do experts fundamentally disagree, and what is each side’s strongest argument?
What questions expose the difference between deep understanding and memorized facts?
Then he spent two days working those questions hard.
By the end, according to the post, he could hold his own in conversation with people far more credentialed and experienced than he was.
Whether that exact story happened exactly as told is almost beside the point.
Because anyone paying attention can see the broader phenomenon already emerging:
AI can now generate a form of competence that looks a lot like expertise, feels a lot like expertise, and often performs like expertise—while still being meaningfully different from the real thing.
That difference matters.
It matters for schools. It matters for hiring. It matters for institutions. It matters for how we assess authority, credibility, and trust. And it matters for ambitious people who are trying to understand what kind of edge AI actually gives them.
We are missing the words for what is happening.
So let’s build them.
The missing category
The old vocabulary is no longer enough.
For most of modern life, we used a fairly simple ladder:
novice → intermediate → expert
That worked reasonably well in a world where deep competence was slow, apprenticeship mattered, and access to the structure of a field was expensive.
But AI has changed the shape of the climb.
A person can now acquire, in days, something that used to take months just to orient toward:
the core models of a field
the main schools of thought
the central debates
the strongest objections
the open questions
the diagnostic questions that separate shallow understanding from real understanding
That is not mere memorization.
That is not mere summary.
And yet it is not identical to the kind of expertise earned through years of practice, friction, error correction, and repeated contact with reality.
So we need a term for this new middle zone.
I think the best term is this:
Scaffolded expertise
Scaffolded expertise is expert-level conceptual and conversational competence built through external cognitive supports, but not yet fully hardened by long practice, repeated error correction, or independent contact with reality.
That is what tools like NotebookLM, ChatGPT, Claude, Perplexity, and domain-specific AI workflows increasingly make possible.
The competence is real.
That point is important.
It is fashionable in some circles to dismiss all AI-assisted competence as fake. That is lazy. If a person can map the field, articulate the core models, represent opposing schools fairly, answer probing questions, identify weak arguments, and hold a serious conversation with experts, then something real has happened.
But it is equally mistaken to collapse that into full expertise.
Because what may still be missing is:
tacit judgment
embodied pattern recognition
robustness under ambiguity
edge-case sensitivity
fluency without tools
the credibility that comes from having paid one’s dues in contact with the domain itself
So the phenomenon is neither fake nor final.
It is scaffolded.
And the existence of scaffolded expertise means our whole taxonomy of competence needs an update.
The before-and-after: why AI forces a new definition of “expert”
Here’s the simplest way to see what changed.
Before AI: the ladder was basically three rungs
In most domains, the world implicitly treated competence like this:
Familiarity (knows the terms)
Fluency (can explain the ideas)
Expertise (can reliably do the work—and be trusted under pressure)
And here’s the key: in the pre-AI world, if someone sounded like an expert, there was a decent chance they had paid a large apprenticeship cost. Performance in conversation was often (not always) a proxy for real depth.
After AI: conversation is no longer a clean proxy
AI compresses orientation and articulation so much that the old ladder splinters.
Now there are at least two new middle layers that used to be rare at scale:
Map-level understanding: you can see the field’s structure—core models, schools, debates, open questions.
Scaffolded expertise: you can perform at a near-expert level in structured environments because external scaffolds (AI + retrieval + critique loops) are doing part of the cognitive work.
This is the crux: you can now reach “expert-like” performance faster than you can reach expert-like judgment.
So instead of one vague word—expert—we should at minimum distinguish:
Summary fluency (sounds informed)
Scaffolded expertise (performs impressively with support)
Load-bearing expertise (remains reliable when support is removed and the situation gets messy)
That middle category is the one we’ve been missing.
The new split that AI introduces
The AI era does not eliminate expertise.
It splits it.
The most important distinction is no longer simply between novice and expert.
It is between:
people who can talk
people who can map
people who can perform with scaffolding
people who can operate without it
people whose judgment is trustworthy under stress
That middle territory barely existed at scale before.
Historically, expert-level conversational performance implied years of apprenticeship. That inference is now far weaker.
A person may now have:
elite conceptual mapping
strong synthetic ability
debate-grade reasoning
impressive presentational fluency
rapidly assembled subject competence
without yet having:
years of contact with the domain
tacit pattern recognition
instinct for anomalies
tested judgment
earned credibility
This is why people increasingly feel a strange dissonance when evaluating skilled users of AI.
They are not frauds.
But neither are they obviously what the old world would have called experts.
They occupy a new category.
The illusions of the AI era
This new landscape produces a set of predictable illusions.
1. The fluency illusion
Because I can explain it clearly, I understand it deeply.
This was already a danger before AI, but AI massively amplifies it. A person can now produce beautifully articulated explanations long before they possess durable understanding.
2. The map illusion
Because I can map the field, I can perform in the field.
Knowing the intellectual terrain is powerful. But map possession is not the same as operational mastery. A person can understand every school of psychotherapy and still be a poor therapist. A person can map corporate finance and still make terrible decisions in a live deal.
3. The authority illusion
Because I can reason like an expert, I have earned expert credibility.
This is subtler. AI can genuinely elevate reasoning quality. It can help a person represent views fairly, compare arguments, and identify hidden assumptions. But social authority is not conferred by performance alone. It is also tied to judgment, track record, risk-bearing, and costly proof.
4. The credential illusion
Because someone learned this quickly with AI, their competence must be fake.
This is the equal and opposite error.
It is becoming unfashionable to say this plainly, but here it is anyway: AI can create real competence very quickly.
Not complete competence. Not always stable competence. Not always trustworthy competence.
But real competence nonetheless.
And institutions that dismiss that fact will be blindsided by people who look underqualified on paper and outperform the papered class in practice.
The central principle
If I had to compress the whole argument into one sentence, it would be this:
AI compresses orientation, synthesis, and discourse much faster than it compresses judgment, tacit knowledge, and credibility.
That is the heart of the matter.
It explains why AI can feel miraculous and underwhelming at the same time.
Miraculous, because it can collapse months of confusion into days of structured understanding.
Underwhelming, because when the situation becomes messy, adversarial, embodied, high-stakes, or truly novel, the missing layers reveal themselves.
The tool can accelerate your climb.
It cannot suspend the laws of reality.
Why this matters socially
The consequences are bigger than study hacks.
Education
Schools still assess competence as though the key bottleneck were information retrieval or summary production. That is increasingly obsolete.
The valuable questions are changing.
Not:
Can the student repeat the material?
Can the student summarize the chapter?
But:
Can they identify the field’s deepest disagreements?
Can they steelman opposing views?
Can they tell signal from noise?
Can they apply models under ambiguity?
Can they detect when the model breaks?
In other words: educational systems will increasingly need to distinguish summary fluency from scaffolded expertise, and scaffolded expertise from load-bearing judgment.
Hiring
Resumes and credentials will become less legible as proxies for competence.
A person with three months of disciplined AI-assisted immersion may, in some contexts, outperform a person with three years of passive credential accumulation.
That does not mean experience no longer matters.
It means the distribution of capability is becoming less visible from conventional signals.
Hiring systems will have to get better at testing for actual performance, actual judgment, and actual stability.
Institutions
Institutions are built on trust—especially trust about who is allowed to decide, teach, diagnose, allocate, sign, and lead.
If AI increases the supply of people with high presentational competence but mixed stability, institutions face a sorting problem.
The challenge is no longer just identifying intelligence.
It is identifying what kind of competence is present, how durable it is, and whether it is safe to rely on when the stakes rise.
Culture
Expect a strange social period where many people sound much smarter than the old world trained us to expect.
Some of them will be bluffing.
Some of them will be genuinely formidable.
And some will be in between: unusually capable, unusually accelerated, but not yet fully load-bearing.
That middle group will be one of the defining human types of the AI era.
A second axis: stability
One reason people are confused is that they are still treating expertise as a one-dimensional thing.
But in the AI era, competence has at least two dimensions:
Height — how advanced the performance is
Stability — how well that performance survives pressure, novelty, and the removal of support
This matters because a person can now have high-height, low-stability competence.
That would describe someone with scaffolded expertise: they may be dazzling in structured environments, highly articulate, conceptually sharp, and fast-moving—yet less reliable when tools are removed, feedback is delayed, or the situation becomes genuinely chaotic.
By contrast, load-bearing expertise is high-height, high-stability competence.
It travels better. It degrades less. It remains trustworthy when things stop going according to script.
This distinction will become increasingly important in every field where decisions matter.
A third axis: where competence comes from
We can also classify expertise by provenance.
Absorbed expertise
Built from reading, listening, and watching.
Scaffolded expertise
Built through external cognitive systems: AI tutors, retrieval systems, model comparison, synthetic questioning, iterative critique.
Embodied expertise
Built through doing, failing, correcting, repeating, and paying real costs.
Generative expertise
Built through creating new frameworks and solving previously unsolved problems.
Again, notice what this does.
It gives us language more precise than the blunt old distinction between “expert” and “not expert.”
And language matters because once you can name a thing, you can start evaluating it correctly.
The real challenge now
The challenge is not deciding whether AI-made competence is real.
Of course it is real.
The challenge is learning not to confuse one kind of real competence for another.
That means learning to ask better questions.
Not just:
Is this person smart?
Can they talk about the topic?
Do they have credentials?
But:
Can they distinguish the central from the peripheral?
Can they reason under uncertainty?
Can they detect when the framework fails?
Can they survive adversarial pressure?
Can they generate insight without the scaffold?
Has their judgment been stress-tested by reality?
Those questions will increasingly determine who deserves trust.
Back to the student with NotebookLM
Now we can return to the original story.
What happened there was not magic.
And it was not fraud.
It was a glimpse of a new pattern.
A person used a powerful cognitive scaffold to extract the structure of a field at high speed: the mental models, the fault lines, the diagnostic questions, the strong arguments on all sides. Then he trained against those questions intensely enough to produce the appearance—and probably part of the reality—of expertise.
That is extraordinary.
But the right conclusion is not: expertise is dead.
The right conclusion is: expertise has split into layers, and AI has made the middle layers dramatically more accessible.
That changes the strategy for students.
It changes the strategy for institutions.
And it changes the strategy for anyone ambitious enough to use these tools seriously.
The old world asked, “How long have you studied?”
The new world will increasingly ask, “What kind of competence do you actually have?”
And that is the question that matters.
Final thought
We are entering a world in which many more people will possess rapid, portable, high-performance competence.
Some of that competence will be shallow theater.
Some of it will be real and formidable.
And some of it will be what I’ve called scaffolded expertise: real, accelerated, impressive, but not yet fully self-supporting.
That category is going to matter more and more.
Because if you can build scaffolded expertise in days, then the real competitive edge shifts.
It is no longer merely access to information.
It is the ability to:
ask the right structuring questions
build the right scaffolds
know what layer of competence you actually possess
and keep climbing until your expertise becomes load-bearing
That is the real game now.
Not whether AI can help you sound smart.
But whether you can use it to turn speed into substance.
Appendix: The full taxonomy (if you want the whole ladder)
If you’re building systems—education, hiring, institutions—you often need more granularity than “smart / not smart” or “expert / not expert.” Here’s the full ladder that sits behind the simplified before-and-after.
0) Lexical familiarity — knows the vocabulary; can follow surface discussion.
1) Summary fluency — can explain the topic coherently; sounds informed.
2) Map-level understanding — sees the field’s terrain: core models, schools, debates, open questions.
3) Scaffolded expertise — near-expert performance with strong external supports (AI + retrieval + critique loops).
4) Functional expertise — can reliably do the work in standard cases; applies models in practice.
5) Load-bearing expertise — stress-tested judgment under ambiguity and edge cases; reliable when stakes are real.
6) Generative expertise — advances the field: new models, new syntheses, new questions.
7) Foundational authority — reshapes the field’s architecture: categories, frameworks, what counts as a good question.
That’s the ladder. But the new thing AI does is widen the gap between levels 3 and 5—and flood the world with level-3 performance.
Compressed definitions
Scaffolded expertise: high-level conceptual and conversational competence built through external cognitive supports, not yet fully hardened by long practice.
Load-bearing expertise: judgment that remains reliable under pressure, ambiguity, and edge cases, even when supports are removed.
Generative expertise: the ability not just to navigate a field, but to improve it.
One-sentence thesis
AI does not abolish expertise; it differentiates it.