Mary Fung
essay · April 21, 2026

The apprenticeship was a curriculum

Junior labor was the curriculum. AI eliminated the labor and didn't replace the curriculum.

A new analyst sits down at her first job. She is given a tool that drafts the first version of every memo, every model, every analysis she'll ever be asked to do. Her job, technically, is to review the output. The trouble is that she has never produced any of these things herself. She is a reviewer of work she has never done.

This is the new entry-level role in most of the industries I see, and the version of it that gets discussed publicly is the easy one — what about the jobs? The harder version, the one no one wants to host a panel on, is what happens to judgment.

The apprenticeship in knowledge work was never primarily about producing output. The junior associate drafting the memo, the junior analyst rebuilding the model, the junior accountant ticking and tying the workpapers — they were producing output, yes. But the actual product of those years was something else. It was pattern recognition. It was the slow accumulation of a thousand small experiences of being wrong, getting corrected, and noticing the shape of the correction. By the time a junior had done the same task five hundred times, they didn't just know how to do it. They could feel when something was off — in their own work, and eventually in someone else's.

That feeling is what we call judgment. It does not transfer through documentation. It is not something you can hire for. It is grown.

The grind was the curriculum. We just never had to name it that way, because the curriculum was a side effect of the work, and the work was the point.

It isn't anymore.

When the tool produces the first draft of the memo, the junior never struggles to write a clean topic sentence. When the tool rebuilds the model, the junior never makes the off-by-one error in the cash flow waterfall and gets quietly humiliated by a senior who spots it in three seconds. When the tool ticks and ties the workpapers, the junior never finds the strange entry that didn't reconcile and doesn't have to chase it down to its source. The mistakes were the curriculum. The recovery from the mistakes was the curriculum. Watching a senior flip to the right page on the right tab and ask the right question — that was the curriculum too.

It is possible that the curriculum still happens, just at a different layer. Maybe the new junior learns by reading a thousand AI outputs critically. Maybe she develops taste from the review rather than the writing. I am open to this. But I notice that the people I trust most to spot a wrong AI output are people who would have been able to spot a wrong human output, because they have done the human version of the work for a long time. I haven't yet met someone who acquired good taste exclusively from review. I haven't ruled it out. I just haven't seen it.

So: a generation of juniors is being asked to review work they could not produce. The seniors who can produce it are aging out at the rate seniors always do. The pipeline that converted juniors into seniors used to be the work itself. The work is now thinner.

What organizations are doing about this falls into three buckets, in order of frequency.

The first bucket is denial. We will roll out the AI tools, and the analysts will become more productive, and the existing development pathway will continue working because of inertia. This is the most common stance. It will prove wrong, on a delay of about five years.

The second bucket is pretending the problem is one of training content. We will give the analysts a course on prompt engineering, or critical thinking, or working with AI. These courses are almost always weak, because the thing being lost — implicit pattern recognition built through repetition — is not the thing a course can replace.

The third bucket is rare, and it is closest to being right. It treats the development of judgment as infrastructure. It asks: what are the specific patterns a junior in this domain must internalize before they can responsibly review AI output? It defines those patterns explicitly — the way a residency program defines the cases a doctor in training must see. It designs deliberate exercises where the junior produces work without AI, gets reviewed by a senior, and accumulates the corrections. It accepts that this is more expensive than the AI-augmented version, and does it anyway, because the alternative is a generation of juniors who can polish AI output but cannot improve it.

The third bucket requires something most organizations don't have: someone who knows the work cold, and can articulate what's actually being learned in the grind, and has the standing to redesign the early-career pathway around it. Those people are rare. They were rare before AI made the question urgent. The ones who exist are mostly busy doing other things.

There is one more thing worth saying, because it is the part that changes how you staff a team and not just how you train one. The bottleneck in knowledge work used to be junior labor. There was always more work than there were juniors to do it. That bottleneck is gone. The new bottleneck is senior taste. The teams that ship good AI-augmented work are limited by how many people they have who can tell the difference between AI output that is correct and AI output that is plausible. Adding AI tools without adding senior reviewers does not ship more good work. It ships more confidently wrong work, faster.

The apprenticeship was always a curriculum. The curriculum did not get noticed because the work and the learning were the same thing. They aren't the same thing anymore. So the curriculum has to be named, designed, and run as a deliberate program — or we lose the path that turned juniors into seniors, and we don't get a new one for free.
