On Enablement Without Understanding

What AI tools are doing to how we work

Yes, this is another essay about AI. No, I will not apologize.

And before you call me a luddite, let me explain.

Much has been said about what these tools can do, and a lot of it is true. In many corporate environments, their use is already taken for granted. Some companies have even gone as far as mandating it.

On the surface, that is not a bad thing.

But something feels... off.

We are producing more than ever. Output is off the charts. Exec dashboards are lighting up like Christmas trees.

But it is not clear that we are understanding more.

If anything, we may be understanding less.

We have mistaken output for productivity. And in doing so, we are quietly trading understanding for speed.

That is worth reflecting on. Because I think we have gotten something important wrong about where these tools should sit in how we work.

We Misapplied the Lesson

Rich Sutton’s “The Bitter Lesson” is one of the most cited essays in AI, and for good reason.

The idea is simple: when building intelligent systems, trying to make them think like us helps at first, then plateaus. What wins, again and again, is something distinctly non-human: Scale.

Put more simply: More compute. Less human. Less hand-crafted intuition.

That lesson still holds.

But somewhere along the way, we blurred a boundary.

What began as a lesson about how to build systems started to feel like a lesson about how we should work.

If scale beats understanding in our models, why not in our workflows too? More code. More output. Less time spent thinking things through.

Not as a conscious decision, necessarily. More as a quiet shift in behaviour.

And now, with tools that can generate in seconds what used to take hours, that shift is accelerating.

The problem is that the lesson was never meant for us.

We are not the system being optimized.

We are the ones responsible for understanding it.

Leverage Cuts Both Ways

The promise of these tools is leverage.

Copilot. Claude Code. Whatever you are using, the pitch is broadly the same: faster output, higher productivity, more shipped work.

That framing is incomplete.

Leverage is a multiplier. It does not just amplify output. It amplifies exposure.

In software, faster generation usually means more code, more decisions, and more surface area to reason about, maintain, and eventually fix.

So yes, the upside is real.

But so is the downside, and it is much less obvious at first: The erosion of judgment.

Teams may be shipping faster. PR counts may be up. But how many people can still clearly explain why something was built, what trade-offs were made, or what is likely to break next?

We have built workflows where output scales, but comprehension does not.

And that imbalance rarely shows up immediately. It shows up later: in fragile systems, slower debugging, vague ownership, and decisions no one feels fully confident defending.

We Are Scaling the Wrong Thing

This would be less dangerous if it stayed isolated.

It does not.

Code compounds. Systems are not written once; they are built on, layer by layer, decision by decision, over time.

And now those layers are increasingly generated. Today’s output becomes tomorrow’s context.

So if weak reasoning enters the system early, it rarely stays contained. It gets copied forward. Assumptions harden. Bugs become patterns. Patterns become architecture.

And because the system still appears to work, it still looks like progress.

That is the danger.

A large AI-assisted change can pass review, pass tests, and still leave behind a system nobody fully understands. The cost shows up later: in debugging, in handovers, in the next engineer inheriting a pattern nobody can properly explain but everyone now depends on.

Reintroducing Productive Friction

One of the real strengths of these tools is that they remove friction.

They collapse the distance between idea and output. They reduce effort. They smooth over rough edges that used to slow us down.

That is a genuine gain.

And to be clear: not all of the friction we have lost was valuable. Some of it was just waste. Bad tooling. Repetitive effort. Slow feedback loops. Process theatre. Nobody needs to romanticise that.

But it would be a mistake to treat all friction as inefficiency. Some of it was a feature, not a bug.

Writing something yourself forced you to clarify what you meant. Debugging built a mental model of the system. Explaining a design choice exposed whether you actually understood the trade-offs. Certain kinds of effort did not just slow the work down; they made understanding possible.

That is the distinction that matters.

The problem is not that AI removes friction. The problem is that it removes both kinds at once: the waste, and sometimes the very moments where reasoning used to become visible, testable, and shared.

So the goal is not to bring slowness back for its own sake.

It is to preserve the specific points in a workflow where judgment gets formed.

I say this as someone who genuinely likes using these tools. They really do make me 10x faster.

Full disclosure: I even used AI to help write this essay. Not to generate it in one shot, but to challenge rough thoughts, refine them, and make them clearer.

That is exactly the distinction I am trying to defend.

The tool helped me sharpen the thinking. It did not replace the need to do it.

What This Actually Requires

If the real risk is that output is becoming detached from understanding, then teams need more than the usual engineering slogans.

“Ship smaller.” “Review harder.” “Measure quality, not quantity.”

None of those are wrong. But they are not enough.

The real task is to build workflows where understanding remains visible.

1. Make Understanding Visible Before Shipping

For meaningful changes, code should not stand alone.

The work should carry some visible trace of reasoning: what problem is being solved, why this approach was chosen, what alternatives were considered, and what is most likely to break.

Review has to shift too. In a world of plausible generated output, the reviewer’s job is no longer just to inspect the artifact. It is to test the author’s mental model.

Why is this the right boundary? What assumptions does this rely on? What happens when those assumptions fail? What was the second-best option?

And if the author cannot clearly explain the change before merge, then the code may be ready before the understanding is.
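One lightweight way to make that reasoning visible is a pull request template. The sketch below is illustrative only, not a prescription; the exact fields should fit a team’s own process. (As one concrete option, a GitHub repository will pick a template up automatically from `.github/pull_request_template.md`.)

```markdown
## Problem
<!-- What is this change solving, and for whom? -->

## Approach
<!-- Why this approach? What alternatives were considered and rejected? -->

## Assumptions
<!-- What does this rely on? What happens when those assumptions fail? -->

## Risk
<!-- What is most likely to break, and how would we notice? -->

## AI involvement
<!-- Assisted or unexamined? What did you personally verify? -->
```

The template itself is not the point. The point is that merging requires the author to answer these questions in their own words, which is exactly where the mental model gets tested.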

2. Distinguish AI-Assisted from AI-Unexamined

The issue is not whether AI touched the work.

The issue is whether the person shipping it can still verify, explain, and defend what the tool helped produce.

“AI-assisted” is fine.

“AI-unexamined” is the danger.

That is the standard teams should care about. Not purity, and not abstinence. Ownership.

3. Use AI Where It Expands Reasoning

Most people reach for these tools at the execution layer: write this for me.

That is useful, but it is not where the most valuable leverage is.

Use them earlier: to compare approaches, surface edge cases, identify hidden assumptions, and pressure-test requirements before code exists.

Used that way, AI widens the thinking process.

Used carelessly, it can collapse it.

4. Watch for Comprehension Loss

Teams are good at measuring throughput and much worse at noticing when understanding is decaying.

But that decay has signals: thinner explanations, shallower reviews, hazier ownership, messier handoffs, and work that nobody can fully defend a week later.

Those are not soft concerns. They are early warnings that output is outrunning understanding.

A workflow that keeps producing artifacts while steadily draining comprehension is not high-performing.

It is borrowing against the future.

The Line We Need to Hold

These tools can generate code, but they do not own the consequences. They can produce answers, but they do not bear the cost of being wrong. They can accelerate work, but they do not decide what matters.

We do.

Moving faster is valuable.

But not if it comes at the cost of the skills that make meaningful work possible in the first place.

Some of those skills were built in the friction we have now removed.

The challenge is not to bring all of it back.

It is to put the right parts back in.