JetBrains just retired human pair programming and replaced it with AI agent orchestration. Here's what that signals about engineering leadership in the age of AI agents.
Last week, JetBrains retired Code With Me.
If you're not familiar, Code With Me was JetBrains' collaborative pair programming tool: two engineers, one session, solving a problem together in real time. It was good software. A lot of teams used it. JetBrains built it carefully.
They killed it anyway, because they had something they believed was more important to build. They called it JetBrains Central: a production-grade platform for orchestrating AI coding agents across development teams. Not a tool for two engineers to work together. A tool for one engineer to direct, route, and supervise multiple AI agents working in parallel.
This is not just a product decision. It is a statement about where JetBrains believes software engineering is going, made by a company that has spent 25 years watching engineers work. You can disagree with their timing. You cannot dismiss the signal. And for engineering leaders, it points to something most organizations have not yet confronted: managing AI agents is a fundamentally different problem from managing engineers, and the leadership infrastructure built for one does not transfer to the other.
Here is what the transition to agentic development actually looks like inside an engineering organization right now: code volume is up. PR count is up. Developers feel busier. Releases are not necessarily faster. And when something goes wrong, it is increasingly difficult to trace why.
That is not a coincidence. It is the direct result of leadership infrastructure designed for human engineers operating at human throughput now being asked to manage AI agents operating at a fundamentally different scale and pace.
Most engineering management tools were built around the core assumption that the people doing the work can explain what they are doing. You can ask them in a standup. You can read their PR descriptions. You can look at their commit history and reconstruct the reasoning.
AI agents do not stand up. They produce output. And when that output is plentiful and arrives quickly, the volume itself becomes a form of opacity. A team running three agents in parallel across two features is producing more raw material in a day than a fully-staffed human team produced in a week two years ago. The question is not whether the material is being generated. The question is whether any of it is moving toward what actually matters.
That gap, between generation and impact, is now the defining management challenge in software engineering.
Anthropic's 2026 Agentic Coding Trends Report put numbers to something most engineering leaders are sensing but have not yet named. Developers now use AI in 60% of their work. But they fully delegate no more than 20% of tasks to AI.
That ratio is not going to stay there. The entire point of platforms like JetBrains Central is to push that delegation ceiling higher, resulting in more tasks handled autonomously, for longer stretches, with less moment-to-moment human input. JetBrains is explicitly designing for agents that "work for hours or days" and coordinate with other agents to complete complex, multi-step work.
What happens to the engineering leader when 40% of the code being written on their team was not reviewed by a human at the point of generation? What happens to sprint planning when agents can burn through a backlog at a rate your planning process was never calibrated for? What happens to quality when the feedback loop between specification and implementation is compressed to minutes?
The answer is not that things break. The answer is that things break in ways that are much harder to catch, because the sheer volume of output makes individual signals harder to read.
Most organizations are much further along on "deploying AI agents" than they are on "supervising AI agents."
Deploying is the easy part. You integrate a tool. You enable the feature. You watch throughput metrics go up and declare success.
Supervision is the harder, slower problem. It requires knowing, at the organizational level, what agents are actually working on, whether it aligns with business priorities, where the quality signals are degrading before they reach production, and which parts of the codebase have accumulated risk because agents built there without adequate spec definition.
This is why the JetBrains Central announcement matters beyond the product features. JetBrains is solving the individual developer's supervision problem by giving one engineer a control layer to manage the agents they are personally orchestrating.
But it does not solve the engineering leader's supervision problem, which operates at a completely different level of abstraction. The VP of Engineering is not supervising individual agent sessions. They are responsible for the aggregate behavior of an engineering organization where every developer is now, in effect, running their own small team of AI collaborators. The coordination problem is not at the IDE level. It is at the organizational intelligence level.
The engineering organizations that are genuinely ahead of this problem share a common characteristic: they invested in organizational-level intelligence before the volume got too high to track manually.
They are not reading every PR. They are not reviewing every agent session log. Instead, they have systems that surface the signals that matter, including when work has drifted from spec, when a feature has accumulated complexity that exceeds the team's ability to review it properly, and when deployment confidence is declining even as commit volume is rising.
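To make the shape of one such signal concrete, here is a minimal sketch of the last check: rising commit volume against falling human review coverage. Everything in it is illustrative; the weekly rollup structure, the field names, and the 15% threshold are assumptions, not a description of any particular platform's implementation.

```python
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    """Hypothetical per-team weekly rollup; field names are illustrative."""
    commits: int            # commits merged this week
    reviewed_commits: int   # of those, commits a human reviewed before merge

def volume_review_divergence(prev: WeeklySnapshot, curr: WeeklySnapshot,
                             threshold: float = 0.15) -> bool:
    """Flag weeks where output rises while human review coverage falls.

    Neither trend alone is alarming. The divergence between them is the
    signal; the 15% threshold is an assumption to tune per team.
    """
    volume_growth = (curr.commits - prev.commits) / max(prev.commits, 1)
    prev_coverage = prev.reviewed_commits / max(prev.commits, 1)
    curr_coverage = curr.reviewed_commits / max(curr.commits, 1)
    return volume_growth > threshold and (prev_coverage - curr_coverage) > threshold

# Commits up 60% while review coverage drops from 90% to 55%: the alert fires.
print(volume_review_divergence(WeeklySnapshot(50, 45), WeeklySnapshot(80, 44)))
```

The point of the divergence framing is that neither metric means much in isolation: more commits can be healthy growth, and lower review coverage can be a quiet sprint. It is the two moving in opposite directions at once that marks agent output outpacing human oversight.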
This is exactly the problem the Allstacks Software Engineering Intelligence Platform was built to solve: not at the IDE level, and not at the individual engineer level, but at the organizational level, where a VP of Engineering needs to see what is happening across a 200-person team running AI-assisted workflows before those workflows surface problems in a release.
JetBrains made a bet last week. They looked at the direction of software engineering and decided that agent orchestration was important enough to retire a working product for.
The question for engineering leaders is not whether you agree with the bet. It is whether your organization has the visibility to manage a team that is taking the same bet, without asking permission, right now.
How much of the code your team merged last sprint was written with significant AI assistance? Of that code, how much went through a spec validation step before the AI started building? And on which features in your current roadmap has agentic development increased speed while quietly increasing risk?
If you know the answers, you have the intelligence you need. If you are estimating, or if you've never thought to track it, that is the supervision gap JetBrains' announcement is pointing directly at.
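The first question, at least, has a crude approximation you can run today. Here is a minimal sketch, assuming your agent tooling appends a co-author trailer to the commits it helps produce (some tools do this by default); the trailer text and the two-week window are assumptions to adapt to your own setup.

```python
#!/usr/bin/env python3
"""Crude estimate of AI-assisted commit share over the last sprint."""
import subprocess

SINCE = "2.weeks"                       # assumed sprint length
AI_TRAILER = "Co-Authored-By: Claude"   # assumed trailer your tooling writes

def count_commits(*extra: str) -> int:
    """Count commits reachable from HEAD within the window."""
    result = subprocess.run(
        ["git", "rev-list", "--count", f"--since={SINCE}", *extra, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

total = count_commits()
assisted = count_commits("-i", f"--grep={AI_TRAILER}")

if total:
    print(f"{assisted} of {total} commits "
          f"({100 * assisted / total:.0f}%) carry the AI trailer")
else:
    print("No commits in the window.")
```

Treat the result as a floor, not a measurement: assistants that leave no trailer are invisible to it, which is itself a small demonstration of the supervision gap.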