For twenty years, a 1:6 PM-to-engineer ratio was industry gospel. AI didn't eliminate the bottleneck; it moved it. The question is no longer "can we build it" but "should we build it."
For twenty years, we have optimized engineering organizations around a single assumption: coding is slow and expensive.
This assumption shaped everything. Team structures. Hiring ratios. Reporting hierarchies. The entire operational machinery of software development was built to feed a bottleneck that sat squarely in the hands of engineers.
What happens when that assumption breaks?
In 2007, Marty Cagan published a simple rule of thumb that would become industry gospel: one product manager for every six to ten engineers. The ratio made intuitive sense. A single PM could define enough work to keep a team busy. Engineering capacity was the scarce resource. Product decisions could wait; code could not.
The ratio varied by company culture. Google and Microsoft hovered around 1:6-9. Meta pushed toward 1:12 with heavy reliance on senior individual contributors. Amazon's "two-pizza teams" implied roughly 1:6-8. But the underlying logic remained constant everywhere: coding is the constraint.
And for two decades, it was.
Building software required armies of engineers. A prototype that might take three weeks to develop justified waiting another week for user feedback. The rhythm of software development was set by how fast humans could translate specifications into working systems.
Then something shifted.
Andrew Ng recently made an observation worth sitting with.
"Things that used to take six engineers three months to build," he said on the No Priors podcast, "my friends and I, we'll just build on a weekend."
GitHub's controlled studies support this. Developers using Copilot completed coding tasks 55% faster than those without it. Some teams report even larger gains for boilerplate, scaffolding, and test generation.
But Ng wasn't celebrating the speed. He was pointing to a problem.
"The bottleneck," he explained, "is deciding what do we actually want to build."
When a prototype took three weeks, waiting a week for user feedback felt acceptable. When a prototype takes a day, that same week feels, in Ng's words, "really painful."
The constraint did not disappear. It moved.
There is a principle in economics that helps explain what is happening. When two goods are complements, like cars and gasoline, a falling price for one leads to higher demand for the other. As cars became cheaper and more accessible, demand for gasoline increased.
Ng applied this logic to software teams: "Given a clear specification for what to build, AI is making the building itself much faster and cheaper. This will significantly increase demand for people who can come up with clear specs for valuable things to build."
In other words: when coding gets cheaper, product thinking gets more valuable.
One of his teams recently proposed a ratio of one product manager to 0.5 engineers. More PMs than engineers for a single project.
"That would have sounded absurd a year ago," he noted.
What makes this shift interesting is that it did not come out of nowhere. The "what to build" problem was acute long before AI made building faster.
CB Insights analyzed over 100 failed startups and found the same pattern repeatedly: "no market need" ranked as the number one or number two reason for failure, affecting 35-42% of companies. These were not technical execution failures. Teams built products that nobody wanted.
The feature waste data tells a similar story. Pendo's 2019 Feature Adoption Report found that 80% of features were rarely or never used, representing an estimated $29.5 billion in wasted investment across publicly traded cloud companies. The average feature adoption rate was 6.4%. For every 100 features shipped, only about 6 drove 80% of actual user engagement.
The Standish Group's CHAOS reports corroborate this. Only 31% of IT projects succeeded as of 2020. The top success factors were user involvement and clear requirements, not engineering capability.
In other words, organizations were already building the wrong things faster than they could figure out what to build right. AI coding acceleration does not create this bottleneck. It exposes it.
Here is where the math gets concrete.
McKinsey studied 40 product managers using AI tools and found a 40% overall productivity improvement. That sounds impressive until you compare it to engineering gains.
The GitHub Copilot controlled study showed developers completing coding tasks 55% faster. An academic replication found 55.8% faster completion of an HTTP server implementation. GitHub's broader survey found that 46% of code is now AI-generated in organizations using these tools.
But the nature of the gains differs. For developers, AI accelerates their core deliverable: working code that can be tested and deployed. For product managers, the 40% gain concentrates in document production. PRD writing, status updates, competitive analysis summaries.
A December 2025 survey of 1,750 product professionals revealed where PM AI usage actually concentrates: PRD writing (21.5%), creating mockups and prototypes (19.8%), and improving communication (18.5%). User research sat at just 4.7%. Roadmap strategy at 1.1%.
AI helps PMs produce documents. It does not yet help them discover what customers need, align stakeholders around hard tradeoffs, or make judgment calls about which of many possible features will create actual value.
The aspiration gap is telling. Product ideation showed a 29 percentage point gap between current AI usage (19.6%) and desire to use AI for it (48.6%). Growth strategy showed a 24.7 point gap. PMs want AI to help with strategic work. Current tools do not deliver.
If development velocity increases by 55% while product definition velocity increases by 40% (and that 40% concentrates in supporting artifacts rather than core discovery work), the math on team ratios changes.
Consider a simplified model. A team with 7 engineers and 1 PM operates at Cagan's original ratio. If AI makes those engineers 55% more productive at their core work while the PM gains 40% on document production but minimal acceleration on discovery and validation, the PM becomes the constraint.
The PM cannot run customer interviews faster. Cannot synthesize qualitative insights faster. Cannot build stakeholder alignment faster. Cannot validate that the roadmap reflects actual market need faster.
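The simplified model above can be sketched as back-of-envelope arithmetic. The 55% and 40% figures come from the studies cited; the assumption that the PM's gain applies only to document work, here pegged at 30% of PM time, is illustrative rather than measured.

```python
# Back-of-envelope model of the ratio shift: a Cagan-era team of
# 1 PM and 7 engineers, with AI productivity gains applied unevenly.

def effective_capacity(headcount: float, speedup: float) -> float:
    """Headcount scaled by a productivity multiplier."""
    return headcount * (1 + speedup)

# Engineers gain 55% on their core deliverable (GitHub Copilot study).
engineers = effective_capacity(7, 0.55)  # 7 -> 10.85 engineer-equivalents

# The PM gains 40%, but only on document production. Assume (for
# illustration) documents are 30% of PM time, and the remaining 70%
# -- discovery, validation, alignment -- sees no acceleration.
doc_share, discovery_share = 0.30, 0.70
pm = doc_share * (1 + 0.40) + discovery_share * 1.0  # 1 -> 1.12 PM-equivalents

# Engineer-equivalents each PM-equivalent must now keep fed:
new_ratio = engineers / pm
print(f"effective ratio 1:{new_ratio:.1f}")  # 1:9.7, up from the baseline 1:7
```

Even with generous assumptions about how much PM time is document work, the effective ratio climbs well past the baseline: the PM, not the engineering team, is now the element running at capacity.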
At Allstacks, we see this playing out in conversations with engineering leaders. The question they are asking is not "how do we measure developer productivity with AI tools?" It is "how do we know if we are building the right things now that we can build anything quickly?"
As these trends continue, I expect we will see ratios compress toward 1:3 or 1:4 in the near term. Organizations that fully embrace AI-assisted development may approach 1:1 or 1:2 within five years. Not because PMs are more valuable in some abstract sense, but because the structural bottleneck in software organizations is shifting from "can we build it" to "should we build it."
Marty Cagan was not wrong. His 1:6-10 ratio was an accurate observation about how software organizations worked when coding was the constraint. He looked at the evidence available in 2007 and described what he saw.
We are doing the same thing here, looking at the evidence from today. The inputs have changed. AI has compressed the time and cost of code generation. The studies show clear productivity gains for engineering work. The studies also show that product management gains concentrate in areas that do not address the core discovery and validation bottleneck.
When you follow these trends forward, you arrive at a different ratio. Not because Cagan was mistaken, but because the underlying economics shifted.
The organizations that recognize this shift early will have an advantage. They will invest in PM capacity, either through headcount or through AI tools that actually accelerate discovery rather than just document production. They will restructure how they measure success, moving from "did we ship it" to "did we build the right thing."
The constraint flipped. The ratio will follow.
Jeremy Freeman is the CTO at Allstacks, where he helps engineering leaders understand how their teams deliver software.