From Writing Code to Managing Agents: Most Engineers Aren't Ready

By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — February 27, 2026, 12:32

Based on Mihail Eric’s talk at Stanford University


The job title says “software engineer,” but the job is changing underneath it. Mihail Eric, speaking at Stanford, lays out a thesis that should make every working developer stop and think: the engineers who thrive in the next five years won’t just write code — they’ll manage agents that write code. And most engineers aren’t ready for that shift.

The AI-Native Engineer Is a New Role

Eric defines an “AI-native engineer” as someone with strong traditional fundamentals — algorithms, system design, debugging — combined with deep competence in agentic workflows. Not someone who prompts ChatGPT. Someone who designs multi-agent systems, supervises their output, and architects tasks so agents can execute them reliably.

This isn’t aspirational. It’s the baseline for the current generation entering the workforce.

The Junior Squeeze

Three forces are converging on junior engineers:

  1. Post-COVID correction. Companies over-hired around 2021, then discovered they could cut 20-30% of staff and keep operating.
  2. Supply glut. CS graduation numbers have doubled or tripled over the past decade.
  3. AI leverage. Employers are asking: “Do I need more people, or fewer AI-native people plus tools?”

The result is fewer traditional junior roles. The ones that remain demand a fundamentally different skillset.

Building Up Piecemeal, Not All At Once

Eric’s practical advice for working with agents is refreshingly anti-hype: start with one agent. Get it reliably building non-trivial software. Only add a second agent when you have clearly separable tasks and confidence in each one’s output.

Think of each agent as a focused intern with an isolated task. You don’t hand ten interns the same codebase on day one. You bring them in one at a time, verify their work, then expand scope.

The “10 agents orchestrated by a master prompt” fantasy is exactly that — a fantasy. The engineers actually shipping with agents are doing it incrementally.

Context Switching Is the Last-Boss Skill

Here’s the part that resonated most with me, as someone running multiple agents daily in my own workflow.

Managing agents is managing parallel workstreams. Each agent works on a different task. Each can get stuck. You need to remember what each was doing, diagnose where it went wrong, unblock it, and push it forward — all while keeping the other agents productive.

This is context switching at scale. Eric calls it a “last-boss skill,” and he’s right. It mirrors human management: the best multi-agent operators are people who already manage human developers well. If you’ve never managed a team, managing agents will expose that gap fast.
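In practice, that management skill reduces to bookkeeping: knowing each agent's task, state, and blockers at a glance. A toy "status board" might look like this; the names are mine, not from the talk.

```python
# A minimal status board for parallel agent workstreams (illustrative only).
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    RUNNING = "running"
    BLOCKED = "blocked"
    DONE = "done"

@dataclass
class Workstream:
    task: str
    status: Status = Status.RUNNING
    notes: list[str] = field(default_factory=list)  # what it was doing, why it's stuck

def blocked(board: dict[str, Workstream]) -> list[str]:
    """The triage view: which agents need the human right now?"""
    return [name for name, ws in board.items() if ws.status is Status.BLOCKED]
```

The `blocked()` view is the context switch made explicit: resume the stuck agent with its notes, unblock it, and return to the others.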

Your Codebase Is Your Agent’s Onboarding Doc

An “agent-friendly codebase” is one where an agent dropped into the repo can understand what’s going on and make safe changes. Eric identifies three pillars:

Tests are contracts. If your test coverage is poor, agents have no explicit contracts defining correctness. They will break things because there’s no automated way to tell them they broke things.

Documentation must be consistent. If the README says one thing and the code does another, agents get confused — just like humans do. They’ll ask “do I follow the docs or the code?” and often pick the wrong one.

APIs must be canonical. If the same object can be created two different ways in two different places, agents won’t know which is right. Consistency isn’t just aesthetics — it’s infrastructure for automation.
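Two of these pillars fit in one small sketch: a single canonical constructor, and a test that serves as an explicit contract an agent can run. This is my illustration, not an example from the talk.

```python
# One canonical way to build the object, plus a test that pins its contract.
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    email: str
    active: bool = True

def make_user(email: str) -> User:
    """The one canonical constructor; agents and humans both go through it."""
    if "@" not in email:
        raise ValueError(f"invalid email: {email!r}")
    return User(email=email.lower())

def test_make_user_normalizes_and_validates():
    # The contract: any agent edit must keep these properties true.
    assert make_user("Ada@Example.com").email == "ada@example.com"
    try:
        make_user("not-an-email")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid email")
```

With this in place, an agent that invents a second way to create a `User`, or breaks normalization, fails a test instead of silently corrupting the codebase.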

How Agents Produce Spaghetti Code

Without guardrails, agents drift. A misunderstanding at step one gets reused and magnified at step two, then compounded at step three. Eric calls this cascading error propagation, and it’s the primary mechanism by which agents produce spaghetti code.

The fix isn’t better prompts. It’s better engineering: clean design, strong tests, linting, style checks. The first thing an agent sees must be robust, because everything it builds afterward inherits that foundation — for better or worse.
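The structure of that fix can be sketched as a gate between steps: every step's output must pass the checks (stand-ins here for tests and lint) before the next step is allowed to build on it. A minimal, assumption-laden sketch:

```python
# A verification gate between agent steps. A step-one mistake is caught
# at step one instead of compounding into steps two and three.
from typing import Callable

Step = Callable[[str], str]    # transforms the current state of the work
Check = Callable[[str], bool]  # stand-in for a test suite or linter

def gated_pipeline(steps: list[Step], checks: list[Check], initial: str) -> str:
    state = initial
    for i, step in enumerate(steps):
        candidate = step(state)
        if not all(check(candidate) for check in checks):
            # Halt: later steps must not inherit a bad foundation.
            raise RuntimeError(f"step {i} failed verification; halting pipeline")
        state = candidate
    return state
```

The interesting property is what the gate prevents: without it, `state` after a bad step becomes the input to every later step, which is exactly the cascade Eric describes.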

Taste Still Matters

Eric draws a line between “functional” software and “incredible” software. The difference is taste — the extra mile you go after the requirements are satisfied.

In his Stanford class, every student built the same required flows. The standout students went beyond: bonus features, more robust error handling, polished UX. They cared about the product, not just the grade. Some of those students are now turning class projects into startups.

AI can produce functional software. Taste is what turns functional into remarkable, and taste is still a human skill.

Experimentation Is the Method

Eric frames AI-native development as fundamentally experimental. Try tools. Try workflows. See what works in practice. Discard what doesn’t.

He cites Anthropic’s Claude team reportedly rewriting Claude every week or two using Claude itself — constantly iterating based on feedback. Even top teams don’t have all the answers. The point is to bake experimentation into your ongoing workflow, not to find a process and freeze it.

The Case for Juniors

Counterintuitively, Eric argues juniors have advantages over seniors in the AI transition.

This doesn’t mean experience is worthless. It means experience plus adaptability beats experience plus resistance.

The Product Risk Nobody Talks About

The easiest trap in the AI era: you ask Claude to build something, keep adding features for a month, and end up with a beautiful, over-engineered product that nobody wants.

AI makes building cheap. That makes product sense more valuable, not less. Validate demand before you invest in features. The ability to build anything doesn’t mean you should build everything.

The Allocation of Intelligence

Harvard’s Rem Koning offers the closing frame: we’re moving into a world where the key skill is allocating intelligence, not just wielding it. AI-native means embedding AI into products so it can interact directly with customers, pulling the human out of the loop where possible.

The trillion-dollar question: what happens when AIs start collaborating with each other? Whoever answers that well builds the next generation of transformative companies.


What This Means for WorkingAgents

Every point Eric makes validates the direction we’re building toward. Agent-friendly codebases. Incremental agent adoption. Context management across parallel workstreams. Taste and product sense as differentiators.

The engineers who will thrive aren’t the ones who write the most code. They’re the ones who manage the most intelligence — human and artificial — toward outcomes that matter.

The shift from writing code to managing agents isn’t coming. It’s here.