Teams using AI agents are writing detailed specs again—and some worry it's "going back to waterfall." It's not. The feedback loop collapsed from six months to twenty minutes. That changes everything.
There's a question rattling around engineering leadership circles right now, usually asked in a half-joking, half-nervous tone: "Are we... going back to waterfall?"
The concern is understandable. Look at how teams are working with AI agents today — GitHub Copilot, Cursor, Claude Code, Devin, Amazon's new Kiro IDE — and you'll notice something that feels eerily familiar. More upfront planning. Detailed specifications before coding begins. Engineers spending serious time writing requirements documents instead of just diving into code.
François Zaninotto at Marmelab captured the anxiety perfectly in a piece titled "Spec-Driven Development: The Waterfall Strikes Back." He wrote that this new approach "reminds me of the Waterfall model, which required massive documentation before coding so that developers could simply translate specifications into code."
If you lived through the waterfall era, this might trigger some PTSD. I get it.
But I don't think that's what's happening here. What I'm seeing is something different — something that takes the hard-won lessons of agile and adapts them to a world where the fundamental bottleneck has shifted.
Let me explain what I mean.
To understand where we're headed, it helps to remember why agile took over in the first place.
Here's an irony most people miss: Winston Royce, the guy who supposedly "invented" waterfall in his 1970 paper, actually warned against using it. He drew that famous sequential diagram — requirements, design, implementation, testing — and then explicitly said it was "risky and invites failure." The software industry took his diagram and ignored his warnings. Classic.
Waterfall's fatal flaw wasn't that it valued planning. Planning is good. The problem was the cost of being wrong. You'd spend months writing comprehensive specs, hand them off to a development team, and then... wait. Six months. A year. Sometimes longer.
When the software finally emerged, one of two things had happened: either the business had moved on and no longer needed what was built, or the team had discovered halfway through that the spec was fundamentally flawed — but by then, changing course meant throwing away months of work.
The FBI's Virtual Case File project is the canonical horror story: $170 million and four years, abandoned with nothing to show for it. California's Court Case Management System: $333 million, same result. The pattern repeated across government and enterprise throughout the 80s and 90s. Requirements locked in too early, integration nightmares discovered too late, delivered products that no longer matched evolved needs.
Martin Fowler, one of the Agile Manifesto authors, captured waterfall's core flaw: "My greatest problem with it is how it tends to defer discovery of problems till late in the project, at which point there's little time or energy to deal with them effectively."
The Agile Manifesto in 2001 was a direct response to this pain. Its core insight was simple: we can't predict the future, so let's stop pretending we can. Instead of big upfront specs, work in short iterations. Instead of comprehensive documentation, prioritize working software. Instead of following a plan, respond to change.
This worked. It worked incredibly well. Projects that used agile approaches were three times more likely to succeed than waterfall projects. The U.S. Department of Defense eventually made waterfall approaches effectively illegal for software procurement. The industry shifted, and for good reason.
But here's the thing about agile that we sometimes forget: it was an adaptation to constraints. Specifically, the constraint that building software was slow and expensive.
When building takes six months, you can't afford to be wrong about requirements. But you also can't afford to spend six months on requirements, because the world will change before you're done. Agile's solution was to shrink everything — smaller specs, shorter builds, faster feedback loops. "Good enough" specs were acceptable because you'd learn quickly whether you got it right and could course-correct before too much time passed.
The spec didn't need to be perfect because the iteration was fast enough to catch mistakes.
Now think about what's happened in the last eighteen months.
GitHub reports that 41% of new code on their platform is now AI-generated. JetBrains' 2025 developer survey shows 85% of developers use AI tools regularly. Teams are shipping features in hours that used to take weeks.
The building part just got dramatically faster.
And when the building part gets faster, something interesting happens to the rest of the process: the bottleneck shifts. Suddenly, the thing that takes the longest isn't writing code — it's figuring out what code to write.
This is the core insight that I think a lot of people are missing in the "are we going back to waterfall" debate. The reason agile worked wasn't because specs are bad. It worked because the feedback loop between "write spec" and "see working software" was too long. You couldn't afford to invest heavily in specs when you wouldn't learn if they were right for six months.
But what if that feedback loop collapsed from six months to six days? Or six hours?
That changes the calculus entirely.
The term "spec-driven development" has been floating around for most of 2025, pushed by tools like GitHub's Spec-Kit, Amazon's Kiro IDE, and platforms like Tessl. The basic idea: instead of jumping straight into code with a vague prompt, you invest time upfront in a detailed specification — user goals, acceptance criteria, technical constraints, edge cases — and then hand that to your AI agent.
Thoughtworks defines it as "a development paradigm that uses well-crafted software requirement specifications as prompts, aided by AI coding agents, to generate executable code."
GitHub's engineering blog puts it more bluntly: "The issue isn't the coding agent's coding ability, but our approach. We treat coding agents like search engines when we should be treating them more like literal-minded pair programmers."
When you give an agent a vague prompt like "build me an authentication system," you get... something. It compiles. It looks right. And then you spend the next three hours figuring out why it doesn't actually work the way you needed it to. The stack isn't what you wanted. The architecture doesn't fit your existing system. Edge cases are handled weirdly or not at all.
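For contrast, here's a sketch of what a spec-driven version of that same request might look like. Every detail below is illustrative, not a real project's requirements; it just shows the categories a spec covers (goals, constraints, acceptance criteria, edge cases):

```markdown
## Spec: Email/password authentication

**Goal:** Registered users can sign in and stay signed in across page loads.

**Technical constraints**
- Node 20 + Express 4; sessions stored in our existing Postgres instance
- Passwords hashed with bcrypt (cost factor 12); no third-party auth providers

**Acceptance criteria**
- Valid credentials return 200 and a session cookie (HttpOnly, Secure, SameSite=Lax)
- Invalid credentials return 401 with a generic error (no "user not found" leaks)
- Five failed attempts within 10 minutes locks the account for 15 minutes

**Edge cases**
- Email comparison is case-insensitive; whitespace is trimmed
- Sessions expire after 30 minutes of inactivity
```

Nothing here is exotic. It's the same information the agent would otherwise have to guess at, and each line maps to something you can check when the build comes back.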
Sound familiar? It should. It's the same problem we had with offshore development teams in the 2000s. Same problem we had with junior developers who were handed underspecified tickets. The problem was never "these people can't code." The problem was "we didn't tell them what we actually needed."
AI agents are the same way, except faster. As one engineering leader put it: "If your plan is flawed, an AI will simply get you to a flawed result faster." Or more bluntly: "Waterfall will be a faster death march with AI if you aren't careful."
So yes, spec-driven development asks you to invest more upfront in defining what you want. But here's the crucial difference from waterfall: the feedback loop is still fast.
Let me try to articulate what I think is actually happening here.
Waterfall's problem wasn't "we thought about requirements before coding." Every successful project does that. The problem was the lag time between specification and learning. You wrote a spec, then waited months or years to find out if it was right. By then, the cost of change was catastrophic.
With AI agents, that dynamic inverts completely.
You write a detailed spec. The agent builds it in twenty minutes. You look at it, realize your spec missed something important, update the spec, and the agent rebuilds it in another twenty minutes. The iteration that used to take six months now takes an afternoon.
This is what I mean when I say the bottleneck has shifted. In the old world, building was expensive, so you minimized building and accepted imprecise specs. In the new world, building is cheap, so you can afford to be precise about specs and iterate rapidly when they're wrong.
Here's the other thing that makes this fundamentally different from waterfall: the agent has no ego.
Think about what made waterfall so painful in practice. You hand a 200-page spec to a development team. They spend six months building it. You come back and say "actually, we need to change this fundamental thing." What happens? Politics. Frustration. "Why didn't you think of this before?" Sunk cost fallacy kicks in. People resist throwing away work they've invested in emotionally.
An AI agent doesn't care. You can tell it "actually, throw all that away and rebuild it this way instead" and it just... does it. No hurt feelings. No "but we already built it the other way." No passive-aggressive Slack messages.
This changes the psychology of spec-writing entirely. In waterfall, you agonized over specs because mistakes were so costly to fix. With agents, you can write a spec, see how it plays out, and revise freely. The spec becomes a living conversation rather than a contract carved in stone.
François Zaninotto, despite his "Waterfall Strikes Back" critique, actually demonstrated this himself. He built a 3D sculpting tool with Claude Code in about 10 hours — no formal specification at all. "I just added small features one by one, correcting the software when the agent misunderstood me." That's not waterfall. That's agile on fast-forward.
So no, I don't think we're going back to waterfall. The conditions that made waterfall fail — long feedback loops, high change costs, emotional resistance to rework — don't exist in the same way with AI agents.
But I also don't think we can stay in pure "vibe coding" mode — just prompting agents with whatever's in your head and hoping for the best. That works for prototypes. It doesn't scale to production systems.
What's emerging is something new. And it requires a new set of skills: writing better specs, faster, with tests that define success in executable terms.
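"Tests that define success in executable terms" can be as simple as writing an acceptance criterion as code before the agent builds anything. Here's a minimal sketch in Python: the session-expiry rule and the function names (`create_session`, `is_valid`) are hypothetical stand-ins, and in practice the agent, not you, would write the implementation underneath the test.

```python
from datetime import datetime, timedelta

# Stand-in implementation so this sketch runs; in a real workflow the
# AI agent generates this part from the spec.
_SESSION_TTL = timedelta(minutes=30)

def create_session(user_id, now):
    return {"user": user_id, "expires": now + _SESSION_TTL}

def is_valid(session, now):
    return now < session["expires"]

def test_session_expires_after_30_minutes():
    # The spec's prose rule, stated as something a machine can check:
    # "Sessions expire after 30 minutes of inactivity."
    t0 = datetime(2025, 1, 1, 12, 0)
    session = create_session("alice", now=t0)
    assert is_valid(session, now=t0 + timedelta(minutes=29))
    assert not is_valid(session, now=t0 + timedelta(minutes=31))

test_session_expires_after_30_minutes()
print("spec satisfied")
```

The point isn't the test framework; it's that when the agent rebuilds in twenty minutes, "did it get the spec right?" becomes a question you answer by running code, not by re-reading a document.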
In Part 2, we'll dig into the practical side: how Kent Beck is using TDD as a "superpower" with AI agents, what skills matter most in this new world, and what I think happens next.
[Continue to Part 2: How to Write Specs for AI Agents (coming soon)]