Vivek Haldar—18-year Google veteran, former VP of AI Agents at Emergence AI—on why the copilot era was just the warm-up, and what engineering leaders need to do now that we're in the agentic era.
Vivek Haldar joined me on Stacked Sessions recently. He's the founder of Enchiridion Labs, a former VP of AI Agents at Emergence AI, and an 18-year Google veteran. In 35 minutes, he said the thing most engineering leaders aren't ready to say out loud.
We are not in the copilot era anymore. We're in the agentic era. And the gap between teams who get that and teams who don't is already showing up in output.
Most developers I talk to still think about AI the way Vivek describes Phase One: smarter autocomplete. GitHub Copilot finishing a line of code. A suggestion here, a shortcut there.
Vivek is blunt about that era: "When you were doing autocomplete, you were still very much at the steering wheel. You just had a much fancier car, maybe with cruise control."
Phase Two is different in kind, not just degree. In the agentic era, the model writes code, runs it, runs the tests, finds issues, and loops back to fix them. No keyboard required on your end.
"The light bulb for me went on when I saw an agent loop writing a piece of code, running it, running the tests, finding issues, and then looping back to fix it. That is the very basic loop you go through as a programmer every day."
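That loop is simple enough to sketch. Here's a minimal, hypothetical version in Python, with a stub standing in for the model call (a real harness would hit an LLM API there); all names and the toy task are illustrative, not any specific tool's implementation:

```python
import os
import subprocess
import sys
import tempfile

def model_propose(task, feedback=None):
    # Stub for the model call. First draft is deliberately buggy;
    # once it "sees" the failing test output, it returns a fix.
    if feedback is None:
        return "def add(a, b):\n    return a - b\n"
    return "def add(a, b):\n    return a + b\n"

def run_tests(code):
    # Write the generated code plus a test to a temp file and run it.
    test = code + "\nassert add(2, 3) == 5, 'add(2, 3) != 5'\nprint('ok')\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(test)
        path = f.name
    try:
        r = subprocess.run([sys.executable, path],
                           capture_output=True, text=True, timeout=10)
        return r.returncode == 0, r.stdout + r.stderr
    finally:
        os.unlink(path)

def agent_loop(task, max_iters=5):
    feedback = None
    for _ in range(max_iters):
        code = model_propose(task, feedback)   # write the code
        passed, output = run_tests(code)       # run it, run the tests
        if passed:
            return code                        # done, no human keystrokes
        feedback = output                      # loop back with the failures
    raise RuntimeError("agent gave up after max_iters")

print(agent_loop("write add(a, b)"))
```

The human's role shifts from typing each line to defining the task and the tests that decide when the loop is done.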
If your engineers are still treating AI as autocomplete, they're not in Phase Two yet. Not even close.
What leaders should do: Stop benchmarking AI adoption by tool usage alone. Ask your team: are they running agent loops, or are they still in the driver's seat for every line?
Something Vivek said that I keep coming back to: the model isn't everything. The harness (the layer of tooling, context, and workflow structure around the model) determines much of what you actually get out of it.
"A lot of the squeezing the juice out of the models is actually up to the harness. Even if the model was kept constant, you can see very notable differences depending on what harness you're using."
He compared it to the Mac vs. PC story. A first-party harness tuned to a specific model builds up real advantages over time. Those advantages are already visible in production at teams paying attention.
"It's moving so fast that even if you think you're AI maxing, you're probably not — because your mental model of what these tools can do is anchored to six months ago."
Worth sitting with that one.
What leaders should do: Carve out explicit time for your team to experiment with and recalibrate to current tools; call it "harness time." If a slice of your engineering time isn't going there, you will fall behind in ways that won't show up on a dashboard until it's too late.
Vivek puts the productivity gap between AI-proficient engineers and AI skeptics at 10x. Not 10%. Ten times.
"You can already see this K-shaped divergence — your AI-maxing engineers are churning out features and having an impact much, much faster than your AI-skeptic engineers. They might be great engineers. But they're still writing code by hand, and the bar has moved up."
The old career-progression playbook, the rubrics, the promotion frameworks: all of it was built for a world that doesn't exist anymore.
"All the old playbooks and best practices are getting torn up. The challenging part is that the old system is deprecated, but the new one isn't built yet."
That's actually a fair description of where most orgs are right now.
What leaders should do: Treat AI proficiency as a core engineering competency, not a soft skill to encourage. Build it into performance conversations now, before the divergence gets too wide to manage.
Every CFO I talk to is asking the same question: what are we getting for all this AI spend? Vivek has a practical frame for it.
"The lens through which you should look at ROI is: what fraction of a software engineer are you actually getting? Claude Code works at the level of a fresh-grad software engineer. So ask — how much does an entry-level engineer cost, and what fraction of that cost are you paying?"
He also pushed back hard on chasing productivity metrics: "The measure can't become the target. That is the classic failure mode of engineering metrics. If you make commits a target, people will just game that."
The better signal? Token spend as a proxy for actual agent utilization, and whether the output is moving the product forward.
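Vivek's ROI frame reduces to a few lines of arithmetic. The numbers below are hypothetical placeholders, not figures from the conversation; substitute your own team's token usage, blended API price, and fully-loaded cost for an entry-level engineer:

```python
# All numbers are hypothetical; plug in your own.
entry_level_annual_cost = 120_000     # fully-loaded fresh-grad engineer, $/yr
tokens_per_month = 500_000_000        # agent token usage across the team
dollars_per_million_tokens = 4.0      # blended API price, $/M tokens

annual_token_spend = tokens_per_month * dollars_per_million_tokens / 1_000_000 * 12
fraction_of_engineer = annual_token_spend / entry_level_annual_cost

print(f"Annual agent spend: ${annual_token_spend:,.0f}")
print(f"Fraction of one entry-level engineer's cost: {fraction_of_engineer:.0%}")
```

With these placeholder inputs the spend works out to about a fifth of one entry-level engineer. The question then isn't "is the tooling expensive" but whether that fraction of an engineer's cost is producing at least that fraction of an engineer's output.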
Vivek closed with a concept worth knowing: the dark factory.
OpenAI shipped an entire enterprise product with zero handwritten code. Ramp built fully autonomous agent-driven systems. These teams aren't just using AI to ship features faster. They're building the infrastructure that does the building.
"People are shifting engineering resources away from building out a feature to building out the harness that can do that for you."
That's the move. Not "how do we make our engineers faster," but "how do we build the system that builds the product."
What leaders should do: Stop asking only how AI is helping your engineers ship faster. Start asking whether you're building the infrastructure that lets AI operate at the team and org level, not just for individual contributors.
The copilot era was the warm-up. What comes next is structural, and the teams that understand that aren't just shipping faster. They're playing a different game entirely.
Jeff Keyes is Field CTO at Allstacks. Stacked Sessions explores the intersection of engineering leadership and business strategy. Subscribe wherever you get your podcasts.