Compilers didn't kill programming. AI won't either. But the path to senior? That's fracturing.
In the span of four years, something irreversible will have happened.
A student will graduate with a computer science degree in 2026 having never written production code in a world without an AI assistant. My CEO, Hersh, pointed this out to me over a year ago, and I haven't been able to shake it since. ChatGPT learned to code in late 2022. By 2026, every CS graduate will have spent their entire educational journey with an AI looking over their shoulder, ready to finish their sentences, spot their bugs, generate their boilerplate.
This isn't a complaint. It's just a fact—the kind of fact that forces you to ask uncomfortable questions.
Here's the one that keeps surfacing in conversations with engineering leaders: If developing intuition about software requires struggling, failing, and being wrong—a lot—and AI is increasingly smoothing over that struggle, are we headed for a crisis?
I think the answer is yes, immediately, and also no, eventually. Let me explain.
Anders Ericsson spent his career studying how humans develop expertise. His research on deliberate practice—later popularized (and somewhat mangled) by Malcolm Gladwell's "10,000 hours" concept—revealed something uncomfortable about mastery: it requires suffering.
"The development of genuine expertise requires struggle, sacrifice, and honest, often painful self-assessment," Ericsson wrote. The key word is painful. Not difficult. Not challenging. Painful.
Robert Bjork, a cognitive psychologist at UCLA, built on this work with his concept of "desirable difficulties"—the counterintuitive finding that short-term struggle leads to long-term learning. When students get everything right immediately, they're not learning deeply. They're just performing.
Linus Torvalds, creator of Linux and Git, embodied this in his own development. He learned programming by typing code from magazines—line by line, character by character, mistake by mistake. That repetition built an instinct for system behavior that AI cannot replicate. He didn't learn by having an assistant catch his errors. He learned by making them, sitting with them, and eventually understanding them.
This is how senior engineers have been forged for decades. Not through tutorials. Not through code reviews alone. Through the accumulated scar tissue of ten thousand mistakes. Through the late nights debugging something that should work but doesn't. Through the humbling experience of thinking you understand something, only to discover you've been wrong for months.
The grumpy senior engineer who seems impossible to replace? They're not valuable because they can write a for loop. They're valuable because they've been wrong in ways you haven't been wrong yet. Their intuition isn't magic—it's pattern recognition built on a foundation of failures you've never had the chance to experience.
And now we're asking: what happens when AI prevents those failures from ever occurring?
In the 1950s, assembly programmers watched the rise of FORTRAN with something between skepticism and horror. Here was a "high-level language" that would let anyone describe what they wanted, and a compiler would figure out the machine instructions. The old guard was not impressed.
Code written by a compiler, they argued, would be too slow. It would be bloated. It would lack the precision and performance of hand-crafted assembly. Relying on this abstraction was "cheating." It would produce a generation of programmers who didn't really understand what was happening inside the machine.
They weren't entirely wrong. Early compiler output was inefficient compared to hand-tuned assembly. The skeptics had data on their side.
But here's what happened: compilers got better. Dramatically better. Within a few decades, optimized compiler output began to outperform hand-crafted assembly in most cases. The programmers who understood both the high-level abstraction and the underlying mechanics became incredibly valuable—not as assembly coders, but as the experts who could optimize the new tools and debug the rare cases where they failed.
The assembly programmers weren't put out of work. Many became the people who built and improved the very compilers they once feared.
I learned about compiling to machine code in college. In practice, that skill is largely obsolete for me—replaced by JIT languages, garbage collection, and a dozen other abstractions I don't have to think about. Did the world break? No. It shifted. The art of optimizing exactly how code compiles into processor instructions became a specialized skill, abstracted away from most developers.
This is the natural evolution of productivity. Hard problems get "solved," creating new levels of abstraction. The specialists who understand what's happening beneath remain valuable—sometimes more valuable than ever—but they become a smaller, more specialized group. Meanwhile, everyone else gets to focus their bandwidth on what creates value for customers and users rather than managing memory allocation.
Linus Torvalds, who has seen more technological transitions than almost anyone alive, put it plainly in a recent interview: "In the history of programming, it was compilers that brought the biggest speed-up in productivity. So while AI may prove useful, it's still just a tool."
Still just a tool. That's worth sitting with.
But there's a wrinkle in the optimistic story, and it's a serious one.
When compilers took over low-level coding, junior programmers still had plenty of work. They moved up the abstraction stack. There was always a next rung on the ladder.
What's different about AI is that it's not just automating the bottom of the stack. It's compressing the entire middle.
The data is stark. A Harvard study examined 285,000 U.S. firms and 62 million workers from 2015 to 2025. It found that when companies start using generative AI, junior employment drops by about 9 to 10 percent within six quarters, while senior employment barely changes. Employment for software developers aged 22-25 has declined nearly 20% from its late 2022 peak. Entry-level tech job postings dropped 67% between 2023 and 2024.
This isn't companies laying off juniors. It's companies not hiring them in the first place. The change didn't come from pink slips. It came from silence.
AWS CEO Matt Garman called this trend "one of the dumbest things I've ever heard." His logic is simple: "How's that going to work when ten years in the future you have no one that has learned anything?"
Industry observers are calling this the "hollowed-out career ladder"—plenty of seniors at the top, AI tools doing the grunt work at the bottom, and very few juniors learning the craft in between.
So the question becomes: where will all the junior engineers go? And more importantly, where will the next generation of senior engineers come from?
[Part 2: What "Senior Engineer" Means in an AI-Native World coming soon]