The DORA paradox no one's talking about: Individual developer productivity is up. Platform stability is down. Code review time has jumped from 10% to 25% of the workflow. What gives?
The latest DORA research reveals a troubling split in engineering teams using AI tools. Developers report feeling more productive than ever—but overall system quality is slipping. It's the kind of finding that makes engineering leaders exhale slowly before asking the obvious question: Are we actually getting better, or just getting faster at making mistakes?
Trevor Herr has been living this question. As Director of Engineering Applications at Syndio, he's spent the past year and a half navigating what he calls "the most transformative year in engineering I can recall in my career." On a recent episode of Stacked Sessions, Trevor sat down with Jeff Keyes to unpack what's actually working, what isn't, and why the skills that matter are shifting underneath us.
Trevor's journey mirrors what many engineering leaders experienced. At the start of 2024, AI tools were "glorified search tools"—useful for refactoring, decent at writing tests, but constantly screwing up coding conventions. Then agent mode arrived.
"I still remember the first time I saw an agent fix something," Trevor recalls. "It changed some code, generated a linting error, read the error from my console, and just... fixed it. That was the moment I realized this wasn't just another tool release."
By the end of 2025, every major player had a CLI, an agent mode, something that could iterate in the background. The shift from "tools with potential" to "tools we can actually use" happened faster than anyone anticipated.
But here's what Trevor noticed that most hype cycles miss: the tools that iterate require humans who know how to collaborate.
The biggest misconception Trevor sees? Expecting AI to nail it in one shot.
"A lot of people were like, 'Now you can just one-shot something—just tell it to go fix that bug,'" he explains. "And sure, maybe for something very simple. But when you want to do real work on something complex, working with it is the key."
The analogy Trevor keeps coming back to: it's like working with a junior engineer.
You give them a task. They come back with work. You review it. You give feedback. They iterate. That loop—which every engineering leader has run hundreds of times with people—is exactly the loop that makes AI tools productive.
"The way LLMs work is they're giving you the most common answers—the average," Trevor says. "Which is not always the best. That's when you keep working with it. 'No, this isn't performing enough. What are more performant solutions?' Just iterating through that."
The insight here is critical: If your team expects AI to replace the collaboration loop, they'll be disappointed. If they treat AI as a collaborator that needs good context, clear feedback, and iteration cycles—the same things junior engineers need—they'll ship real work.
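The loop above can be sketched in a few lines. This is purely illustrative: `generate` stands in for any agent or LLM call, and `review` is a toy reviewer, neither is a real API.

```python
# A minimal sketch of the assign -> review -> feedback -> revise loop
# Trevor describes. All names here are hypothetical placeholders.

def review(code: str) -> list[str]:
    """Return reviewer feedback; an empty list means the code is acceptable."""
    issues = []
    if "TODO" in code:
        issues.append("Unfinished TODO left in the change.")
    if len(code.splitlines()) > 400:
        issues.append("Change is too large; split it into smaller PRs.")
    return issues

def iterate_with_agent(task: str, generate, max_rounds: int = 5) -> str:
    """Run the same loop you'd run with a junior engineer: don't accept
    the first draft, feed the review comments back and let it revise."""
    prompt = task
    code = generate(prompt)
    for _ in range(max_rounds):
        feedback = review(code)
        if not feedback:          # reviewer is satisfied, ship it
            return code
        # Pass the review comments back instead of settling for the average answer
        prompt = f"{task}\nAddress this feedback:\n" + "\n".join(feedback)
        code = generate(prompt)
    raise RuntimeError("No acceptable revision after max_rounds iterations")
```

The point of the sketch is the structure, not the heuristics: the review step stays in the loop, exactly as it would with a person.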
This is where the DORA paradox starts making sense.
"One thing my engineers do is keep us honest," Trevor shares. "They'll hint at it—'Hey, this quality dip might be related to someone just not using the tool the right way.' And what's the right way? Understanding and reviewing that code."
Here's the uncomfortable math: If AI lets developers generate code faster, and code review time has jumped from 10% to 25% of the workflow (per DORA's findings—and Trevor says it's even higher at some organizations), then reviewing code isn't overhead anymore. It's the core skill.
"One skillset that's going to set apart engineers is how good you can review code," Trevor states plainly.
But there's a trap. Faster code generation often means bigger PRs—and bigger PRs mean longer, sloppier reviews.
"I always think about this engineer I worked with who would say 'light reading alert' with sirens when he'd dump a huge PR on us," Trevor laughs. "I've seen that too many times with AI-generated code today. It's so easy because you get into this flow—you build a plan, you're going through all the steps. It takes restraint to say, 'Hey, this is a good stopping point. I need to get this out for review.'"
Trevor's advice isn't abstract. Here's what's working for his teams:
1. Make the AI environment as good as your dev environment.
"Having a really good AI tooling environment is going to make the tooling work better," Trevor says. That means MCP servers pulling documentation, ticket context, and codebase patterns into agent workflows. Just like you'd onboard a junior engineer with great docs, you need to onboard your AI tools the same way.
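Trevor's point is about context, not any particular tool. As a rough illustration of "onboarding" an agent the way you'd onboard a new hire, a prompt builder might pull in the same docs; the file names and layout below are hypothetical, not a convention from the episode.

```python
# Hypothetical sketch: assemble onboarding docs and the task into one
# context block for an agent, rather than prompting it cold.
from pathlib import Path

def build_agent_context(task: str, repo_root: str = ".") -> str:
    sections = [f"## Task\n{task}"]
    # Pull in the same docs you'd hand a junior engineer, if they exist.
    # These paths are illustrative assumptions.
    for name in ("CONTRIBUTING.md", "docs/architecture.md", ".ai/conventions.md"):
        path = Path(repo_root) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

In practice an MCP server does this kind of gathering for you, live, from documentation and ticketing systems; the sketch just shows why the gathered context matters.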
2. Force collaboration on AI learnings.
Trevor found that engineers kept fumbling over the same problems—like an agent misunderstanding service architecture and making bad calls. "If every engineer has to figure that out individually, everyone's going to have that problem." The fix: share what works, document the gotchas, make AI collaboration a team sport.
3. Apply the "would I have written this?" test.
"Make sure it generates code that you would write," Trevor advises. "If you look at it and think, 'That's something I would have written'—then you should have confidence. If you're not understanding that code, whether you wrote it or someone else did, you're going to create the same issues one way or the other."
4. Smaller PRs, more often.
The temptation to let agents run wild and generate massive changesets is real. Fight it. "Having more agents generate smaller PRs instead of one that generates a big one" is the pattern Trevor sees working. It preserves review quality and keeps context-switching manageable.
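One way to make the pattern enforceable is a small CI check on changeset size. The function below counts changed lines in a unified diff; the 400-line threshold is an arbitrary illustration, not a number from the episode.

```python
# Sketch of a changeset-size guard for CI, reflecting the
# "smaller PRs, more often" pattern. The threshold is illustrative.

def pr_is_reviewable(diff: str, max_changed_lines: int = 400) -> bool:
    """Count added/removed lines in a unified diff and flag oversized PRs."""
    changed = sum(
        1
        for line in diff.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))  # skip file-header lines
    )
    return changed <= max_changed_lines
```

Wired into a pipeline, a failing check becomes the forcing function Trevor describes: a prompt to find the "good stopping point" and send the work out for review.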
5. Give your team time to actually learn the tools.
"One of my engineering managers said some engineers don't feel like they have enough time to work with the tool," Trevor recalls. "At first I thought, 'Just use it while you're doing the work.' But as the year went on, I realized there's real overhead to setting up the environment and learning the patterns." Make that investment explicit.
Trevor's observation lands with engineering leaders because it names something they're feeling: the work is changing, and not everyone has caught up.
"We're still learning the way to work with everything," he admits. "Let's maybe talk again in a year and we'll have a better answer."
But here's what's clear now: The engineers who thrive won't be the ones generating the most code. They'll be the ones who review with precision, collaborate with AI like a thought partner, and know when to stop the agent and ship something solid.
Code review isn't a bottleneck. It's the new superpower.
This post is based on a conversation from Stacked Sessions, where Jeff Keyes sits down with engineering leaders navigating the intersection of software development and business strategy. [Listen to the full episode with Trevor Herr →]