It's not about whether you can code faster than AI. Research shows technical competence barely predicts who struggles—professional identity threat does. If your sense of self is tightly coupled to writing code, that's the vulnerability.
In 2016, Geoffrey Hinton made a prediction that sent shockwaves through medicine. "People should stop training radiologists now," declared the man who would later be called the Godfather of AI. Deep learning would surpass them within five years. It was, he said, completely obvious.
Eight years later, Hinton publicly admitted he was wrong. Mayo Clinic's radiology staff has grown 55% since that prediction. U.S. radiologist numbers have increased 7%. The specialty now faces its largest shortage in history, with imaging backlogs stretching months at some centers.
The prediction wasn't just off on timing. It misunderstood what actually determines who thrives and who struggles when technology transforms a profession.
The same misunderstanding is happening right now in software engineering. And if you're trying to figure out which side of this divide you'll land on, the radiologists have something important to teach you. It's not about whether you can code faster than AI. It's about something far more personal.
Here's something that might be uncomfortable to admit: developers have always hated other people's code.
In 1982, MIT researchers Ralph Katz and Thomas Allen studied 50 R&D project groups and documented what they called "Not Invented Here" syndrome. They found that project group performance peaked around 1.5 years of tenure, then declined noticeably by year five. The cause wasn't skill degradation. It was that teams became increasingly insular, rejecting external ideas and believing they possessed a monopoly of knowledge in their field.
The pattern has been replicated across decades of research. Developers systematically distrust code they didn't write, regardless of its quality. Even production-tested libraries used by thousands of developers get dismissed as substandard if they use unfamiliar patterns. A 2019 study by Hannen, Antons, and colleagues confirmed NIH syndrome damages innovation, but also found it's deeply tied to team identity. Some teams derive their entire sense of self from the belief that they can "build anything." Adopting external code feels like an attack on that identity.
Sound familiar?
When developers report low trust in AI-generated code, they're not discovering a new form of skepticism. They're applying existing distrust to a new source. Stack Overflow's 2025 developer survey found 84% of developers are using or planning to use AI tools, up from 76% the year before. But trust is declining. Nearly half actively distrust AI accuracy. Among experienced developers, 20% report "high distrust" while only 2.6% report "high trust."
Developers are using tools they don't believe in. This isn't cognitive dissonance as failure mode. It's the same relationship many developers have always had with code they didn't write.
When researchers at JMIR studied 206 medical professionals to understand why some embraced AI tools while others resisted, they found something counterintuitive. Technical competence barely mattered. What predicted resistance was a concept psychologists call Professional Identity Threat — the degree to which someone's sense of self is bound up in tasks that technology now performs.
The study identified two distinct dimensions. First, threats to professional capabilities: reduced autonomy and control over decisions. Second, threats to professional recognition: challenges to expertise and status. Here's what made it fascinating: medical students, whose identities were still forming, experienced stronger identity threats than experienced physicians. The newer you were to a profession, the more vulnerable you were to feeling displaced.
This maps onto what we're seeing in engineering teams. The developers who struggle most with AI coding assistants aren't the ones who lack technical skill. They're the ones whose professional identity is tightly coupled to the act of writing code: the craft, the elegance, the problem-solving process itself. When AI generates the code, something breaks. Not their competence. Their sense of who they are.
The Software Craftsmanship movement, born from the Agile Manifesto, explicitly valued "well-crafted software" and "productive partnerships." This craft identity creates genuine psychological tension with AI tools. Research shows 77% of developers are spending less time writing code, with almost half believing coding skill might become secondary to prompt engineering. The identity shift — from creators to orchestrators, from builders to overseers — isn't a skill gap. It's an existential question playing out across millions of terminals.
I'll be honest about my own journey here.
As a founding CTO, I wanted control over every line of code. I understood every architectural decision because I had made it. Every bug was traceable because I knew the system intimately. The codebase was, in a very real sense, an extension of how I thought about problems.
Then the team grew. And I had to learn something uncomfortable: trusting code I didn't write. Not just senior engineers who'd proven themselves, but junior developers still learning. The shift wasn't about their competence. It was about my willingness to let go of control and build systems that could accommodate varying degrees of skill and different approaches to problems.
What made that transition possible wasn't faith in any individual contributor. It was building review processes, testing frameworks, and feedback loops that caught problems regardless of who introduced them. The system mattered more than the individual line of code.
AI is the same transition, just accelerated. The code comes from a source I don't fully understand, using reasoning I can't always follow. But the response shouldn't be to reject it reflexively. It should be to build better systems for verification, review, and iteration. The developers who thrive will be the ones who can extend the same trust — and the same verification rigor — to AI-generated code that they've learned to extend to human contributors.
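The principle behind that verification-first mindset can be sketched in a few lines. This is a hypothetical illustration, not a real CI system: the names (`Change`, `verify`) and thresholds are invented for the example. The point is structural, the gate never inspects who or what authored the code.

```python
# A minimal sketch of a source-agnostic verification gate: the same
# checks run on every change, whether it came from a senior engineer,
# a junior developer, or an AI assistant. All names here are
# hypothetical illustrations, not a real CI API.
from dataclasses import dataclass


@dataclass
class Change:
    author: str            # "human" or "ai" -- recorded, never used to gate
    tests_passed: bool     # did the full test suite pass?
    coverage: float        # line coverage of the changed code, 0.0 to 1.0
    reviewed: bool         # has a second set of eyes approved it?


def verify(change: Change, min_coverage: float = 0.8) -> list[str]:
    """Return the list of gate failures; an empty list means the change may merge.

    Note the absence of any check on `change.author`: trust lives in the
    verification system, not in who (or what) wrote the code.
    """
    failures = []
    if not change.tests_passed:
        failures.append("test suite failed")
    if change.coverage < min_coverage:
        failures.append(f"coverage {change.coverage:.0%} below {min_coverage:.0%}")
    if not change.reviewed:
        failures.append("missing human review")
    return failures


# Usage: the gate treats identical changes identically, regardless of author.
ai_change = Change(author="ai", tests_passed=True, coverage=0.9, reviewed=True)
human_change = Change(author="human", tests_passed=True, coverage=0.9, reviewed=True)
assert verify(ai_change) == verify(human_change) == []
```

The design choice worth noticing is what's missing: an `if change.author == "ai"` branch. A system that catches problems regardless of their source is the mechanism that lets trust scale, first to human teammates, then to tools.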
And here's the thing about AI that makes this easier than the human version: it keeps getting better. Every model iteration improves. The junior developer you trust today might plateau. The AI assistant you distrust today will be meaningfully more capable in six months. Building systems that can incorporate improving capabilities seems like a better bet than building walls against them.
Recent research on 1,250 workers discovered that creative professionals, who face the strongest identity threat at 71.7%, are also adopting AI the fastest at 74.6%. A full 85.7% of workers live with what researchers call "unresolved AI tensions."
The insight buried in this data challenges the usual advice about change management. Cognitive dissonance isn't a barrier to overcome. It's the default state. The professionals who thrive won't be the ones who resolve their ambivalence about AI. They'll be the ones who act despite it.
Think about what that means practically. If you're waiting to feel comfortable with AI tools before fully committing, you've identified the wrong success criterion. The developers who pull ahead will be the ones who tolerate the discomfort, who keep working with tools they don't fully trust while maintaining enough vigilance to catch errors.
This is the same dynamic we saw with every major tool transition. Developers didn't fully trust version control systems at first either. Or containerization. Or cloud infrastructure. The ones who succeeded weren't the ones who waited for perfect confidence. They were the ones who built verification systems and kept moving.
So here's the uncomfortable truth: the divide isn't between good engineers and bad ones. It's between engineers whose identities can evolve and those whose identities are too tightly coupled to practices being automated.
If you're reading this and feeling defensive, that's useful information. Not a verdict, but a signal. The defensiveness itself suggests how tightly bound your identity is to current practices.
In Part 2, we'll look at what history actually shows about professional transitions — from architecture after CAD to finance after quants — and examine hard data on where displaced developers actually go. The patterns are more nuanced than either "everyone's fine" or "the apocalypse is here." And there's a framework for thinking about which capabilities complement AI rather than compete with it.
Jeremy Freeman is CTO and co-founder of Allstacks, where he leads engineering and has spent the past decade helping engineering organizations understand what drives productivity.