AI · Career · Developer Tools · Software Engineering

The Programmer Who Refused to Change

The threat to programmers in 2026 is not AI replacing them — it is AI-augmented colleagues outpacing them, and that process is already well underway.

March 26, 2026 · 9 min read

There is a programmer I know — smart, experienced, principled — who has decided that AI-assisted development is mostly hype. He is not wrong about everything. A lot of it is hype. But he is making a mistake that I suspect a lot of developers are quietly making, and I want to be honest about what that mistake actually costs.

This is not a post about AI taking your job. That framing is both too dramatic and too convenient — dramatic enough to dismiss as alarmism, convenient enough to use as cover for not changing. The real situation is more mundane, and in some ways more serious.

The Framing Problem

Most discourse about AI and programming lives at one of two poles. The first: AI will automate software engineering within a few years, "learn to code" is dead, we are all obsolete. The second: AI is just another tool, senior engineers will always be needed, stop panicking.

Both are wrong in ways that matter.

The real story is not binary. It is a shift in the productivity distribution across the field. Some developers are completing in two hours what used to take two days. Some are not. Companies are noticing. The question is not "will AI replace programmers?" The question is "what happens to the programmers at the lower end of the productivity distribution when the bar for acceptable output keeps moving up?"

That question has a concrete answer, and it is already playing out.

What "Not Adapting" Actually Looks Like

The failure mode is not the developer who refuses to touch AI tools. That person is easy to identify and, frankly, unusual. The more common failure mode is subtler: the developer who uses AI tools superficially and gets nothing meaningful from them.

This looks like using GitHub Copilot for autocomplete and treating it as a faster way to type. It looks like pasting errors into ChatGPT, copying the fix, and not understanding why it works. It looks like generating boilerplate and feeling productive without examining whether the output is correct, idiomatic, or well-suited to the problem.

These developers are using AI. They are not adapting to it. The distinction matters because they are getting the workflow disruption without getting the productivity gains. They are context-switching constantly, second-guessing generated code they do not understand well enough to evaluate, and shipping work that a more deliberate engineer would have caught before it landed.

Stagnation is the failure mode, not refusal. And stagnation is much easier to rationalize.

The Productivity Gap Is Widening

Let me be specific about what is actually happening in teams right now.

Developers who use AI effectively are completing tasks that previously required a full sprint in a day or two. Codebases that used to require a team of five are being maintained by two people who have figured out how to use AI as a force multiplier on architecture decisions, test generation, documentation, and routine implementation. This is not hypothetical. It is the current state of a growing number of engineering organizations.

The consequence is straightforward: companies need fewer engineers to produce the same output. When headcount decisions get made — and they always get made — the engineers who get cut are not the ones the company can least afford to lose. They are the least productive ones. That is the productivity gap, and it is the actual threat.

This is not a future problem. It is a present one. The compression is already happening.

The Skills Being Commoditized

If you want to understand your exposure honestly, look at what you spend most of your day doing.

Boilerplate code generation, CRUD implementation, standard API integrations, writing tests for well-specified functions, producing first-draft documentation — these are the tasks that AI handles well enough to change the economics of who does them. Not perfectly. Not without supervision. But well enough that a senior engineer with AI can do them faster than a junior engineer without it.

The phrase "well enough" is doing a lot of work here, and it is worth sitting with. AI does not need to do these tasks perfectly to change your situation. It needs to do them well enough that the cost of having a dedicated human do them no longer makes sense at the same scale. That threshold has already been crossed for a significant portion of what fills a mid-level developer's week.

If your day is mostly this work, you are in the most exposed position in the current market. Not because your skills are worthless, but because they are increasingly not the constraint.

What Is Not Being Commoditized

The skills that remain genuinely hard — and that AI handles poorly — share a common characteristic: they require judgment formed by experience with things going wrong in the real world.

System design under real constraints, where the tradeoffs are not abstract but tied to specific team capabilities, existing technical debt, and actual usage patterns. Debugging truly novel failures, the kind where the error message is misleading and the bug lives three layers below where anyone thought to look. Understanding what the product actually needs versus what was asked for. Making calls under uncertainty when the cost of waiting for more information exceeds the cost of being wrong. Knowing when the AI is confidently incorrect.

These skills are not immune to AI augmentation — AI can accelerate parts of all of them. But they cannot be replaced by it, because they are fundamentally about pattern recognition and contextual judgment accumulated over years of building things that broke in unexpected ways.

If your work lives here, you are not safe from the productivity shift, but you are not the most exposed. And crucially, AI makes you more productive in this domain rather than less necessary.

The Compounding Effect

Here is the part that concerns me most about developers who are not adapting: the gap is not static.

Engineers who use AI effectively are shipping more. They are working across more varied codebases, encountering more edge cases, building intuition faster. They are not just keeping pace — they are accelerating their own experience accumulation.

Engineers who are not adapting are not just falling behind in tool use. They are falling behind in experience. The developer who ships three times the features is also debugging three times the production issues, reviewing three times the code, and developing judgment three times as fast. Compounding works in both directions.

This is the version of the problem that does not show up in any single performance review but becomes unmistakable over two years.

The Career Trajectory Shift

Junior roles are the most exposed, and it is worth being direct about why.

The traditional entry-level path was structured around a simple trade: you write boilerplate, do the implementation work that senior engineers do not have time for, and slowly earn trust and context until you are ready for higher-order problems. That path assumed boilerplate implementation was genuinely scarce labor.

It is less scarce now. This does not mean junior roles disappear — it means the value proposition of a junior engineer shifts. The new junior role looks more like AI-assisted implementation under senior direction, with an emphasis on good judgment about when the AI output is acceptable, good communication about what is and is not working, and rapid iteration over a wider surface area than previous generations of juniors ever touched.

This is a harder role in some ways. It requires developing judgment earlier than previous generations had to. But it is not a dead end. It is a different shape of entry into the field, and the developers who adapt to that shape will accumulate experience faster than their predecessors did under the old model.

What Adaptation Actually Requires

I want to be precise here, because "learn AI tools" is advice too vague to act on.

The core skill is learning to break complex problems into well-specified sub-tasks. This is prompting discipline, but it is really just clear thinking about problem decomposition applied to a new interface. Developers who were already good at this adapt faster. Developers who were used to holding ambiguity in their head and resolving it while coding have to be more deliberate.

The second skill is critical evaluation of AI output. Not reflexive suspicion — that is as unproductive as blind acceptance. Specifically: reading generated code with enough understanding to catch correctness problems, security issues, and cases where the output is technically functional but wrong for the context. This requires knowing the domain well enough to evaluate the answer, which means AI adoption does not reduce the value of deep knowledge. It increases the value of being able to apply that knowledge quickly.
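
Here is a deliberately small, hypothetical illustration of what that evaluation catches. The first function is the sort of thing an assistant will happily produce: it runs, it returns the right rows in a quick manual test, and it builds SQL by string interpolation, an injection risk that a reader only catches by understanding what the code is for.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, email: str):
    # Technically functional, and wrong: user input is interpolated
    # straight into the SQL string, so hostile input changes the query.
    cursor = conn.execute(f"SELECT id, email FROM users WHERE email = '{email}'")
    return cursor.fetchone()


def find_user(conn: sqlite3.Connection, email: str):
    # The version a careful reviewer asks for: a parameterized query,
    # where the input stays data no matter what it contains.
    cursor = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cursor.fetchone()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('dana@example.com')")
    print(find_user(conn, "dana@example.com"))  # (1, 'dana@example.com')
    print(find_user(conn, "x' OR '1'='1"))      # None: the parameter is treated as data
```

The point is not that assistants always write injectable queries. The point is that the defect is invisible to anyone who evaluates output only by whether it appears to work.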

The third is using AI to accelerate exploration rather than replace thinking. Generating five candidate approaches to a design problem in an hour and evaluating them is different from outsourcing the design decision. The first makes you faster and better. The second makes you dependent on something that does not understand your constraints.

None of this requires learning every new tool that ships. It requires a genuine change in how you approach the work.

The Honest Closing

The programmer I mentioned at the start is not going to be fine if he does not change. Not because he is not skilled — he is — but because skill in the old game is not sufficient protection against a shift in what the game rewards.

This is not a comfortable message. It is also not a message of doom. The field is not contracting. Demand for software is not decreasing. What is changing is the distribution of who gets to build it and what they need to be able to do.

The programmers who will thrive are not necessarily the ones who were best at the old game. They are the ones who are most willing to change how they play — specifically, who have the intellectual honesty to look at what is changing, the curiosity to engage with new tools seriously rather than superficially, and the judgment to know what those tools are and are not good for.

That is a learnable set of skills. It is also a choice.

