AI code generation is driving code costs to zero. Engineering leaders must shift from measuring output to proving ROI. Here’s what that actually means.
“The cost of code is going to zero.”
That single line stuck with me from my recent Stacked Sessions conversation with Duncan Grazier, Chief AI Officer at BuildOps.
Today, anyone can talk to a model for five seconds and generate a substantial chunk of functional code. This shift breaks most of the ways engineering leaders currently measure, communicate, and justify their teams. Here’s the substance of what Duncan and I worked through.
Story points, PR counts, and sprint velocity measurements were always proxies. They measure the effort of producing software, not the impact of it. When effort collapses, those metrics go with it.
Duncan posed a rhetorical question: if one command generates two weeks of work, what does "effort" even mean? He is pushing to move beyond metrics that show how much was shipped and toward metrics that mean something to the business.
Duncan was blunt about this, and I think he’s right: engineering leaders need to get better at speaking like MBAs. They need to speak the language that gets budgets approved.
Token spend is now a real cost line on the P&L. When you go to your CFO and say, “We improved velocity 20%,” you get a polite nod. When you say, “We’re adding 20% to developer cost through token spend, and here’s the revenue return we expect from the work that unlocks” — that’s a conversation finance can engage with. That math is familiar to anyone who’s ever thought about investment returns, and engineering leaders need to think this way.
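The framing above boils down to simple investment arithmetic. Here is a minimal sketch of that math; every figure is a hypothetical assumption for illustration, not a number from the episode:

```python
# Sketch of the "token spend vs. revenue return" framing.
# All dollar figures below are illustrative assumptions.

def roi(incremental_revenue: float, incremental_cost: float) -> float:
    """Simple return on investment: net gain over cost, as a ratio."""
    return (incremental_revenue - incremental_cost) / incremental_cost

annual_dev_cost = 2_000_000           # hypothetical fully loaded team cost
token_spend = annual_dev_cost * 0.20  # the "adding 20% through token spend" line
expected_revenue = 1_200_000          # hypothetical revenue the unlocked work drives

print(f"Token spend: ${token_spend:,.0f}")                      # $400,000
print(f"ROI on token spend: {roi(expected_revenue, token_spend):.0%}")  # 200%
```

Framed this way, the token line stops being a cost to apologize for and becomes an investment with a stated expected return, which is exactly the shape of argument a finance team is equipped to evaluate.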
“Look, the reality is the cost of code goes to zero. You have to be able to explain value some other way… You talk about revenue, you talk about bottom line, you talk about P&L.” — Duncan Grazier
Duncan’s team at BuildOps hasn’t radically restructured. They maintain the same team formation. However, who gets involved in development has changed. Designers on his team are now writing front-end code because the barrier dropped. Everyone who has opinions about the software — and that’s everyone — now has tools to do something about it, with the right guardrails in place.
AI became a team multiplier. Instead of reducing headcount, Duncan got more of his team involved because of it.
This democratization still leaves tough questions. What does it mean for the engineering manager in the middle? The one responsible for hiring, retaining the team, and delivering at 3–5x the previous output rate.
Build guardrails, not frameworks. Duncan and I kept coming back to this. A framework is a fixed structure: document it, train on it, and two weeks later a new model drops that changes every assumption. A guardrail is a principle. What does quality mean for this team? What does security look like regardless of which tools generate the code?
Focus on setting guardrails and guiding principles. The current models are the worst they’ll ever be. Every engineering leader is managing change on top of change on top of change. Frameworks for tools may hinder you; guardrails will help you navigate and adapt faster as models and toolsets shift.
The full conversation is worth your time. Duncan gets into token cost math, what team structures will look like for companies being founded today versus ones that already exist, and what he actually tells engineering managers who are trying to figure out where they fit in this. Find the full episode on the Stacked Sessions YouTube channel.
Here’s what I want to know: how are you measuring AI impact on your team? Are you tracking token costs against productivity gains? Have you figured out how to translate engineering velocity into business metrics your CFO cares about?
Also, connect with me on LinkedIn. I want to hear what’s working for you.