James Aspinwall | February 19, 2026
The individual contributor is dead. Not literally — people still write code, ship features, and fix bugs. But the role as we’ve known it for thirty years is undergoing a structural collapse. What remains will look so different from today’s IC work that calling it the same job is misleading.
This is the core argument emerging from conversations between engineers who’ve watched every goalpost they set for AI fall in sequence: LeetCode, math reasoning, requirements decomposition, architectural judgment. Each was supposed to be the firewall. None held.
The IC Becomes the Orchestrator
A senior engineer’s day already trends toward meta-work: prioritizing, decomposing problems, reviewing output, making judgment calls between competing approaches. AI acceleration pushes this to its logical conclusion. The future IC spends most of their time managing AI agents — choosing between disagreeing LLMs, validating output, orchestrating parallel workstreams — and relatively little time writing code by hand.
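The orchestration loop described above can be sketched in a few lines. This is a toy illustration only: the model names and the `query_model` stub are hypothetical stand-ins for real model APIs, and the policy shown (auto-accept unanimous answers, escalate any disagreement to a human) is one possible rule, not a prescription.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def query_model(model_name: str, prompt: str) -> str:
    # Stub: in practice this would call a real model API.
    # The canned answers below are placeholders for illustration.
    canned = {
        "model-a": "use a queue",
        "model-b": "use a queue",
        "model-c": "use a stack",
    }
    return canned[model_name]

def orchestrate(prompt: str, models: list) -> dict:
    # Fan the same prompt out to several models in parallel.
    with ThreadPoolExecutor() as pool:
        answers = dict(zip(models, pool.map(lambda m: query_model(m, prompt), models)))
    # Tally the answers: unanimous results are auto-accepted;
    # disagreements get escalated to the human for a judgment call.
    tally = Counter(answers.values())
    consensus, votes = tally.most_common(1)[0]
    return {
        "answers": answers,
        "consensus": consensus,
        "needs_human_review": votes < len(models),
    }

result = orchestrate("Which data structure for a BFS frontier?",
                     ["model-a", "model-b", "model-c"])
```

Here the human never writes the answer; they arbitrate only when the models split. That is the shape of the shift: production delegated, judgment retained.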
Two organizational patterns will coexist. Some teams will have managers with no ICs at all, directing AI agents directly. Others will have “ICs” who are functionally managers of both AI and humans. The common thread: direct production work gets delegated. The human’s job is judgment, direction, and accountability.
The 80% Threshold
AI doesn’t need to beat the best humans. It needs to reliably beat 80-90% of practitioners in a domain. Once it crosses that line, the economics become irresistible, even if a handful of human outliers remain superior.
We’ve seen this dynamic before. Radiology AI can read scans with remarkable accuracy. Autonomous vehicles handle most driving scenarios competently. Yet neither has displaced its human counterpart at scale. The bottleneck isn’t capability — it’s accountability, regulation, and human psychology around machine risk.
Someone must be legally responsible when things go wrong. You can’t meaningfully punish a model. This creates a structural demand for humans in the loop, not because they’re better, but because the system needs someone to blame. London keeps human operators aboard its automated Underground trains. Airlines require two pilots despite strong autopilots. These aren’t technical decisions — they’re institutional ones.
Eventually, we may see something like legal personhood for AI systems, analogous to corporate personhood, enabling contracts and accountability chains. But practical liability questions — who pays when the robotaxi crashes? — remain unresolved. Regulation will slow replacement in high-stakes fields for years, possibly decades.
The Taste Question
The most interesting debate is whether “taste” — judgment, aesthetic sense, product intuition — is a durable human advantage or just another temporary limitation.
The uncomfortable truth: most people’s taste and creativity are far more derivative than they believe. AI can already remix styles and, with enough scale and experimentation, brute-force its way to many winning combinations. When the cost of generating and testing creative variations approaches zero, the sheer volume of AI attempts overwhelms the average human’s output.
But there’s a ceiling. Writers and researchers doing genuinely original work already find that LLM outputs mostly regurgitate their own prior thinking back at them. For truly novel insight — connecting ideas nobody has connected before — AI remains a sophisticated interpolation engine. It remixes. It doesn’t discover.
The deeper problem is zero-cost imitation. If AI can instantly replicate any style, codebase, or creative approach from its training data, what incentive remains for original creators? Current IP law — patents, copyright, trade secrets — was built for a world where imitation required effort. That assumption is breaking.
Consumer Surplus and Collapsing Margins
As the cost of producing decent software approaches zero, the economics invert. Profits compress. Value shifts massively toward consumers.
The historical parallel is the word processor. It displaced secretaries and typists, but the result wasn’t fewer documents — it was vastly more documents written by less skilled people. Net societal impact: enormously positive. We should expect the same pattern: more software, written by less software-competent people, solving problems that were previously too expensive to address.
For builders, this means thinner margins on everything that can be commoditized. The interesting counterargument: at some tipping point, customers may actually prefer AI-generated and AI-reviewed systems. If AI code review catches more bugs than human review, human involvement in critical paths becomes a liability signal, not a quality signal. We’re not there yet. But the trajectory is clear.
The Career Bet
Engineers face a binary choice, and the only wrong answer is neither.
The augment bet: your role persists but is supercharged. Invest aggressively in AI tooling. Run multiple agents in parallel. Spend heavily on the best tools available. Your competitive advantage is the multiplier you get from orchestrating AI effectively.
The replace bet: your current role is disappearing. Reskill now toward domains that sit on top of or adjacent to AI — the roles that emerge when production cost drops to near zero.
The clearly losing strategy is assuming your job looks the same next year. Software roles will change at a pace that feels absurd over the next twelve to twenty-four months.
The Leisure Paradox
Every tool that promised to free time has historically increased throughput instead. Medical IT was supposed to give doctors more time with patients. It gave them more patients. Scribes were supposed to reduce documentation burden. They increased documentation volume.
AI will follow the same pattern. The founders and creators who gain hours from AI delegation won’t use those hours for leisure. They’ll start more projects, chase more revenue, fill the space with more ambition. Keynes predicted fifteen-hour work weeks by now. Productivity rose exactly as he forecast. But appetites rose faster — bigger houses, more status goods, higher lifestyle expectations.
What Remains Scarce
In a world where software and content approach zero marginal cost, scarcity concentrates in the physical and the positional. The best seat at a live event. A unique experience. Direct human connection. These stay expensive — and likely become absurdly so, driven by inequality and status competition.
Handmade work gains value precisely because it’s inefficient. “100% human-generated” becomes a selling point, a premium label, even when the AI version is objectively comparable. People will pay for the story, the humanness, the proof that someone cared enough to do it the slow way.
Openness as Survival Trait
Careers will become far more dynamic. Instead of three to five roles in a lifetime, future workers may cycle through dozens of distinct jobs as AI continuously reshuffles what’s scarce and valuable.
People high in openness to experience — those who enjoy reinvention, who find identity in learning rather than in a fixed professional label — will thrive. Those who crave stability, who define themselves by their current role, will find the coming decades deeply uncomfortable.
The rational bet is on shifts, not total collapse. Markets and labor demand will migrate to new frontiers rather than vanish. But the transition will be uneven, fast, and emotionally brutal for anyone who assumed the ground beneath them was solid.
The IC isn’t gone. But the IC who just writes code — who defines their value by production output rather than judgment, direction, and taste — is standing on a floor that’s already giving way. The question isn’t whether to move. It’s where to land.