AI now writes most of the code deployed. But software engineers aren't obsolete—they're making more product decisions than ever. Here's what changed...
Here’s a question no one asks in sprint planning: when a user story says “allow users to reset their password,” who decides what happens when someone requests a reset for an email that doesn’t exist?
Does the interface say “email not found” — clear feedback, but it leaks information about which emails are registered? Or does it say “if an account exists, we’ve sent a reset link” — secure, but potentially confusing? What’s the token expiration — 10 minutes? 24 hours? Does a new reset request invalidate the old one? Can someone reset their password while logged in on another device?
None of these questions appears in the ticket. The PM didn’t specify them. The designer didn’t mock them. And yet someone has to answer them before the feature ships.
That someone is the engineer. It has always been the engineer.
Every experienced developer knows this feeling: you’re deep in implementation, the spec is silent on some critical behavior, and you make a call. You choose the secure-but-confusing option. You set the token to expire in one hour. You decide that yes, a new request should invalidate the old one. You move on.
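Those calls literally become lines of code. Here is a minimal sketch of how the three decisions above might look in practice (the names and token store are hypothetical, purely for illustration):

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory token store: email -> (token, expiry)
reset_tokens: dict[str, tuple[str, datetime]] = {}
registered_emails = {"alice@example.com"}

TOKEN_TTL = timedelta(hours=1)  # Decision: tokens expire in one hour


def request_password_reset(email: str) -> str:
    if email in registered_emails:
        token = secrets.token_urlsafe(32)
        # Decision: a new request invalidates any existing token for this email
        reset_tokens[email] = (token, datetime.now(timezone.utc) + TOKEN_TTL)
    # Decision: identical response whether or not the account exists --
    # secure but potentially confusing (prevents email enumeration)
    return "If an account exists, we've sent a reset link."
```

None of those three comments came from the ticket. Each one is a product decision, frozen into the implementation.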
These aren’t technical decisions. They’re product decisions — choices about how the software should behave that shape user experience, define edge cases, and determine what the product actually is in practice. Software engineers make dozens of them every day, embedded in the flow of writing code, rarely documented, rarely credited.
This is what I call micro product decisions: the countless judgment calls that happen in the gap between what a specification says and what working software requires.
AI didn’t create these decisions. But AI is about to make them the entire job.
Frederick Brooks won the Turing Award for, among other things, explaining why software is hard. In his seminal 1986 paper “No Silver Bullet,” he wrote something that product managers and engineers have been quietly proving true for four decades:
“It is really impossible for clients, even those working with software engineers, to specify completely, precisely, and correctly the exact requirements of a modern software product before having built and tried some versions.”
Read that again. Really impossible. Not “difficult” or “expensive” or “time-consuming.” Impossible.
Brooks identified what he considered the hardest part of building software — and it wasn’t writing code. It was “the specification, design, and testing of this conceptual construct.” The product work. The what-should-this-thing-actually-do work.
The data backs him up. According to industry analyses, 48% of developers cite “changing or poorly documented requirements” as a leading cause of project failure. Thirty-nine percent of failures trace directly to poor requirements gathering. Seventy percent of digital transformation initiatives collapse because of requirements issues.
We’ve known this for decades. And yet the dominant mental model of software engineering — the one that shapes org charts, hiring practices, and compensation structures — still treats engineers as translators. Product managers decide what, and engineers figure out how. Clean separation. Tidy handoffs.
Anyone who has actually written code knows this is fiction.
Now, I think we'd be remiss to accept that this can never change. There may never be a complete spec, but with AI's help, and given enough context, we can specify far more than we used to. Improvement is possible.
A study on incomplete software requirements captured something fascinating about how developers actually work. When requirements are incomplete — which, per Brooks, is always — engineers fill gaps in two ways: by seeking clarification from stakeholders, or by making assumptions.
The explicit assumptions get discussed, documented, and challenged. But the implicit ones — what researchers called “unconscious” gap-filling — persist through design and implementation without anyone acknowledging they happened.
These implicit assumptions are product decisions. They shape user experience, define edge case behavior, and determine how the software actually works. They’re just not recognized as such.
Consider what happens in a typical hour of coding. An AI-assisted software engineer implementing a search feature encounters questions like: Should empty queries return all results or show an error? How should the system handle special characters? What’s the maximum query length? Should search be case-sensitive? What happens when results are loading — show a spinner, skeleton UI, or the previous results?
Each question has multiple defensible answers. Each answer shapes the product. And each decision gets made by the engineer, in the moment, based on judgment, experience, and whatever context they’ve absorbed about what the product should feel like.
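Each of those answers ends up as a concrete default in the implementation. A hypothetical sketch of one defensible set of choices (the limits and names are illustrative, not from any real product):

```python
MAX_QUERY_LENGTH = 256  # Decision: cap query length rather than reject long input


def search(query: str, items: list[str]) -> list[str]:
    query = query.strip()
    if not query:
        # Decision: empty queries return all results, not an error
        return list(items)
    if len(query) > MAX_QUERY_LENGTH:
        # Decision: silently truncate over-long queries
        query = query[:MAX_QUERY_LENGTH]
    # Decision: matching is case-insensitive substring search
    q = query.lower()
    return [item for item in items if q in item.lower()]
```

A different engineer could defensibly have returned an error for empty queries, rejected long ones, or matched case-sensitively. The product would feel different either way, and no spec adjudicated between them.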
Ryan and O’Connor’s empirical study of 46 software companies with 181 team members found that “expert knowledge is mostly tacit” and that “the acquisition and sharing of tacit knowledge… are significant factors in effective software teams.” Software development requires knowledge that can’t be written down in specifications — and that knowledge gets applied constantly, in the act of writing code itself.
The engineer making an implicit assumption about search behavior isn’t failing to follow the spec. They’re exercising product judgment in the absence of a spec that could ever be complete enough. They’re making a micro product decision.
And they’ve been doing this all along.
The industry has maintained a convenient fiction for decades: that writing code is primarily a technical activity, and product decisions happen elsewhere — in roadmap meetings, in PRDs, in design reviews.
This fiction served everyone’s interests. Product managers could believe they controlled the product. Engineers could disclaim responsibility for product outcomes. Organizations could draw clean lines on org charts. Compensation structures could treat “product” and “engineering” as separate career ladders.
But the fiction was always that — a fiction. Every engineer who has shipped software knows that implementation is product definition. The code doesn’t just express the spec; it fills the spec’s gaps, resolves its ambiguities, and makes the thousand small decisions that determine what using the product actually feels like.
The convenient fiction worked because the micro product decisions were invisible. They happened in the flow of coding, undocumented, unremarked upon. An engineer would make fifty product decisions in an afternoon and describe their work as “implementing the login flow.”
AI is about to make that description obsolete.
Here’s what happens when AI writes the code.
Satya Nadella reported in 2025 that 20-30% of the code in Microsoft’s repositories is now AI-generated. Sundar Pichai said well over 30% of code at Google involves AI-generated suggestions. I’m certain the share is even higher at companies like Spotify, where top engineers report using AI for coding 100% of the time. The downstream impact is evident: Patrick Collison noted that pull requests per engineer at Stripe are up about 30% year over year.
GitHub’s research found that Copilot users complete tasks 55% faster. But the more revealing statistic: 73% report staying in a flow state, and 87% say the tool preserves mental effort during repetitive tasks.
When AI handles the mechanical work of translating intent into syntax, what’s left?
The micro product decisions. The judgment calls. The gap-filling that specifications could never eliminate.
Matt Garman, the CEO of AWS, articulated this shift with unusual bluntness in a leaked internal conversation in August 2024. “Coding is just kind of like the language that we talk to computers,” he said. “It’s not necessarily the skill in and of itself. The skill in and of itself is like, how do I innovate? How do I go build something that’s interesting for my end users to use?”
Dario Amodei, the CEO of Anthropic, confirmed this is already happening at his company. “I have engineers within Anthropic who say, ‘I don’t write any code anymore. I just let the model write the code. I edit it. I do the things around it.’”
The “things around it” are the micro product decisions. They were always part of coding. Now they’re becoming most of it.
Mark Zuckerberg offered perhaps the most vivid description of where this leads. At LlamaCon in April 2025, he said: “Every engineer is effectively going to end up being more of a tech lead in the future that has their own little army of engineering agents that they work with.”
An army of agents. Each engineer becomes an orchestrator — not a translator of someone else’s vision, but a director of intelligent tools that require clear, contextual, product-aware instruction.
Tobi Lütke, the CEO of Shopify, introduced a term for the skill this requires: “context engineering.” On the Acquired podcast, he described it as “the fundamental skill of using AI well” — the ability to state a problem with enough context that, without any additional information, the task becomes plausibly solvable.
Think about what context engineering actually requires. You need to anticipate edge cases before they arise. You need to define problems precisely enough that an AI can solve them without asking clarifying questions. You need to understand user needs well enough to specify behavior that specs never covered. You need to make, explicitly and upfront, all the micro product decisions that used to happen implicitly during implementation.
Context engineering is product management by another name. It’s the same gap-filling engineers have always done — just made visible, deliberate, and impossible to ignore.
A Google Developer Tools executive captured the transformation: “Your job as a developer is going to look a lot more like an architect. It is going to be about taking big, complex problems and breaking them down into smaller, solvable tasks. You’ll need to be thinking about the bigger picture about what you’re trying to produce, rather than the intermediate language to express that in machine code.”
If AI were merely making engineers faster at the same job, we’d expect organizational structures to stay roughly the same. More throughput, same roles. But that’s not what’s happening.
OpenAI operates with fewer than 30 product managers. Nate Gonzalez, the company’s Head of Business Products, explained in December 2025: “We have fewer than 30 PMs because we want to be the model of what it looks like to build a company on top of AI.” Engineers absorb product responsibilities by design.
Anthropic has a similar philosophy. Dario Amodei described their approach: “You want a relatively small set of people where almost everyone you hire is really, really good.” They prioritize talent density over headcount — versatile engineers rather than specialized role fragmentation.
Shopify’s Tobi Lütke issued one of the most consequential internal memos in recent tech history in April 2025. “Using AI effectively is now a fundamental expectation of everyone at Shopify,” he wrote. The policy change that matters most: “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI. What would this area look like if autonomous AI agents were already part of the team?”
Shopify added AI usage questions to performance reviews. Teams must justify hiring before AI alternatives are exhausted. This doesn’t eliminate the need for product thinking — it pushes product thinking down to every engineer.
The “product engineer” archetype is emerging to describe this reality. PostHog defines product engineers as professionals who “own entire features, from analysis to post-launch iteration, bridging the gap between a PM and a traditional developer.” Sonya Park, a Technical Lead Manager at Mixpanel, captures the essence: “The crux of being a product engineer is synthesizing customer input to create a solution, while still maintaining a good return on investment.”
Luca Rossi, a product leader, predicts: “Today only a handful of startups work this way, but a few years from now, once AI is everywhere, this will likely be what is expected of all engineers.”
The shift toward product-thinking engineers isn’t happening because AI can do everything. It’s happening because AI has clarified what it can’t do — and that list is exactly what engineers have always quietly handled.
Product and strategic decisions can’t be automated. AI cannot determine what to build, how to prioritize features, or how domain expertise should shape technical choices. It can’t decide whether the password reset should prioritize security or usability. It can’t judge whether a particular edge case matters to users or can be safely ignored.
Judgment calls about tradeoffs stay with humans. Security versus usability. Performance versus maintainability. Feature completeness versus time to market. These require understanding context that no prompt can fully convey.
The “why” behind requirements remains human territory. AI can implement what you specify. It cannot determine what you should specify based on user needs, business goals, and product intuition.
The Harvard Business School study with 758 BCG consultants revealed why this matters. Researchers found that AI capabilities form a “jagged technological frontier” — unpredictably uneven across tasks. For tasks within AI’s capabilities, consultants completed 12.2% more work, 25.1% faster, with 40%+ higher quality. For tasks outside those capabilities, AI output actually worsened human performance.
Professor Edward McFowland III at HBS warned: “We get productivity boosts. Things got done faster, but getting done faster to the wrong answer in many cases is not ideal.”
Someone has to judge where AI applies and where it doesn’t. Someone has to decide whether the AI’s output solves the right problem or just a problem. Someone has to make the micro product decisions that determine whether technically correct code is actually correct for users.
That someone is the engineer. The same engineer who has always made those decisions. The difference is that now there’s no pretending they don’t.
Here’s what’s actually changing. When AI handles the mechanical translation of intent into syntax, engineers spend proportionally more time on everything else — and “everything else” was always where the micro product decisions lived.
Imagine an engineer who previously spent 60% of their cognitive effort on syntax, architecture, and debugging, and 40% on the implicit product decisions embedded in implementation. AI compresses the first category. The second category doesn’t shrink — it expands to fill the space.
An engineer using AI might now spend 30% of their effort on directing, reviewing, and refining AI output, and 70% on the judgment calls about what the software should do. The micro product decisions didn’t disappear. They multiplied. What was a background hum is now the main signal.
This is why “context engineering” matters. When you prompt an AI to implement a feature, you’re forced to make explicit all the decisions that used to happen implicitly during coding. What should happen on error? How should edge cases behave? What’s the right tradeoff between competing concerns? You can’t write a good prompt without answering product questions.
The engineers who thrive with AI are those who recognize this shift — who understand that their value was never primarily in typing syntax, and who embrace the product judgment that was always the harder, more valuable part of the work.
At Allstacks, we spend our days helping engineering organizations understand how their teams actually work. The pattern we observe matches what the research shows: the engineers who thrive with AI are those who already thought of themselves as problem-solvers first and coders second.
But here’s what the data also shows: the window for gradual adaptation is closing.
Tobi Lütke’s memo contained the starkest warning: “Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest — I cannot see this working out today, and definitely not tomorrow.”
Satya Nadella framed the timeline: “Thirty years of change is being compressed into three years.” He compared the current moment to “GUI, internet servers, and cloud-native databases all being introduced into the app stack simultaneously.”
Sam Altman offered career advice that echoes the old “learn to code” guidance: “The obvious tactical thing is just get really good at using AI tools. Like when I was graduating as a senior from high school, the obvious tactical thing was to get really good at coding. And this is the new version of that.”
Gartner predicts that by 2027, generative AI will require 80% of the engineering workforce to upskill. Seventy percent of software engineering leadership roles will explicitly require oversight of AI-generated code.
The evidence from the leading edge is stark. Twenty-five percent of companies in Y Combinator’s Winter 2025 batch had codebases that were 95% AI-generated. Entry-level hiring at the 15 biggest tech firms fell 25% from 2023 to 2024.
The engineers who remain will be those who operate at a higher level of abstraction — making the micro product decisions that AI cannot make, filling gaps that AI cannot identify, and exercising product judgment that specs could never fully capture.
Here’s what I want every engineer reading this to understand: the micro product decisions were always the job. They were embedded in implementation, hidden in the flow of coding, made so quickly and automatically that even you might not have noticed you were making them.
Every time you decided how an edge case should behave. Every time you chose between two valid approaches based on what felt right for users. Every time you filled a gap in the spec with judgment and intuition. You were doing product work.
AI didn’t create this. AI just made it impossible to pretend otherwise.
When AI handles syntax, engineers focus on judgment. When AI writes the boilerplate, engineers spend more time on the decisions that boilerplate never captured. When companies operate with fewer than 30 PMs and expect engineers to absorb product responsibilities, they’re not inventing a new role — they’re recognizing what the role always was.
The transformation isn’t that engineers will become product thinkers. It’s that they’ll finally be recognized as such — and expected to develop those skills deliberately rather than exercising them invisibly.
Frederick Brooks identified the specification problem 40 years ago. Engineers have been solving it every day since, making the countless micro-product decisions that specs could never anticipate. AI just removed the last excuse for pretending this work didn’t matter.
The engineers who thrive will be those who embrace this amplified role: not just building what’s specified, but shaping what should exist. Not just writing code, but deciding what the code should do. Not just implementing features, but making the product decisions that features require.
You were always a PM. AI just made sure everyone notices.
Jeremy Freeman is CTO and co-founder of Allstacks, where he leads engineering and has spent the past decade helping engineering organizations understand what drives productivity.