Agentic coding is changing the engineering workforce—just not how you think

90% of Fortune 100 companies now use GitHub Copilot.1 Gartner projects 75% of enterprise software engineers will use AI code assistants by 2028—up from less than 10% in early 2023.2 The adoption curve is vertical.
But the conversation has been stuck on the wrong question. Everyone's asking "how much faster are individual engineers?" when they should be asking "how does this change what my engineering organisation can take on?"
Here's the shift: a team of 14 that used to own a product vertical might now need only 3 to maintain the same output. That doesn't mean you fire 11 people. It means you suddenly have 11 engineers available for priorities that were stuck in the backlog. The constraint on what your company can build just changed.
Engineering is the ideal proving ground for this. Unlike content generation or customer service, we have hard feedback loops. Code compiles or it doesn't. Tests pass or they fail. Features ship or they don't. Hallucinations get caught in PR review, not published to customers. This makes engineering one of the lowest-risk domains to integrate AI—and one of the easiest to measure properly.
Individual productivity is a distraction
The studies paint a messy picture at the individual level. METR's rigorous July 2025 trial found experienced developers completed tasks 19% slower with AI assistance—while believing they were 20% faster.3 McKinsey found the opposite: AI-assisted developers completed new code nearly twice as fast.4
Both can be true. Junior developers in unfamiliar codebases see 26-39% gains. Senior developers working in systems they built see single digits or nothing. The variance depends on task, codebase and skill with the tools.
But this is the wrong frame. Individual productivity numbers tell you very little about what matters: your organisation's ability to deliver ROI.
That used to scale linearly with headcount: if you wanted more output, you hired more people. AI changes the equation. If three engineers can now deliver what fourteen did, you haven't just improved productivity—you've unlocked capacity. The question becomes: what do you do with it?
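As a back-of-the-envelope illustration, the capacity maths looks like this (every figure below is an assumption for the sake of the example, not a benchmark):

```python
# Illustrative capacity model — all numbers are assumptions, not measurements.
# A 14-person team owns a product vertical; suppose AI tooling means
# 3 engineers can now sustain the same output (the scenario from above).

team_size = 14
engineers_needed_with_ai = 3

freed_engineers = team_size - engineers_needed_with_ai
freed_fraction = freed_engineers / team_size

print(f"Engineers freed for backlog priorities: {freed_engineers}")  # 11
print(f"Share of team capacity unlocked: {freed_fraction:.0%}")      # 79%
```

The point of the toy model is the framing: the output stays constant, and the delta shows up as reallocatable capacity rather than as a headcount reduction.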
The replacement narrative misses the point
Thomas Dohmke, GitHub's CEO: "The companies that are the smartest are going to hire more developers. If you 10x a single developer, then 10 developers can do 100x."5 He hasn't seen "a single company say, 'We've finished everything on our backlog thanks to AI.' If anything, AI is creating more possibilities and more work."
Goldman Sachs deployed Devin, Cognition's autonomous coding agent, to handle routine tasks while maintaining their 12,000-person engineering team.6 Their CIO framed it as engineers defining problems and supervising agent output—not being replaced by it.
The workforce isn't shrinking. It's being reallocated. Senior engineers are no longer just generating their own output—they're leading a team of AI agents to complete a project. The role has shifted from individual contributor to orchestrator: defining the problem, directing the agents, reviewing the output, handling the genuinely creative work. The capacity freed up goes toward initiatives that were previously resource-constrained.
This is the conversation we're having with engineering leaders across Europe and the US. Nobody's asking "how many people can I cut?" They're asking "what can we now take on that we couldn't before?"
The measurement gap
Here's the problem: most organisations can't see this clearly.
The 2024 DORA Report found that as AI adoption increased, delivery throughput actually decreased at the organisational level, with a 7% drop in delivery stability.7 Individual activity metrics spike—PRs merged, tasks completed—while organisational outcomes stay flat or decline.
The old proxies (story points, velocity, lines of code) were already suspect. Now they're actively misleading. An engineer pumping out twice as many PRs doesn't help if review time doubles and bug rates tick up.
I've written before about the challenge of answering "What did that £20M engineering spend actually produce?" AI makes this harder and more important simultaneously.
What's missing is the connection between individual output and organisational capacity. You need to see both: who's getting value from AI tools, and how that translates to what your teams can collectively deliver. The correlation between individual-level gains and macro-level outcomes is where the insight lives.
This is what we're building at Flowstate. We track time allocation per person, per project, per quarter—which means we can model how AI adoption changes effective capacity across your organisation, not just individual throughput.
AI costs need the same rigour as cloud costs
The other dimension nobody's tracking properly: AI spend is usage-based, just like cloud compute. And we all remember what happened when enterprises first adopted cloud without governance.
Average monthly AI spend is projected to rise 36% year-over-year.8 Most organisations report that agent costs exceeded expectations. Nearly half point to "runaway tool loops" as the cause of budget overruns.
I covered the pricing dynamics in Moore's Law for AI is officially dead—frontier model prices are rising, not falling, and the tools built on top are all moving to usage-based pricing because flat-rate subscriptions became unsustainable.
Token consumption needs to sit next to engineering output in the same view. Otherwise you can't answer the basic question: is the AI spend generating enough capacity gain to justify the cost?
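A minimal sketch of that basic question—with every figure assumed purely for illustration, not drawn from real pricing or cost data:

```python
# Illustrative ROI check — all numbers are hypothetical assumptions.
monthly_ai_spend = 40_000             # org-wide tokens + seats, GBP/month
fully_loaded_engineer_cost = 10_000   # per engineer, GBP/month

# Suppose measurement (e.g. time-allocation tracking) shows AI tooling
# freed the equivalent of 6 engineers of capacity this month.
capacity_gain_engineers = 6
capacity_gain_value = capacity_gain_engineers * fully_loaded_engineer_cost

net_roi = (capacity_gain_value - monthly_ai_spend) / monthly_ai_spend
print(f"Value of freed capacity: £{capacity_gain_value:,}")  # £60,000
print(f"Net return on AI spend:  {net_roi:.0%}")             # 50%
```

The arithmetic is trivial; the hard part is the input on line 7—you can only estimate capacity gain if spend and output sit in the same view.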
Our research at Flowstate shows this is becoming critical for resource planning. You can't forecast headcount or budget without understanding how AI spend scales with team activity. The companies treating AI costs as a retrospective line item are going to get surprised. The ones building it into workforce planning will see the trade-offs clearly.
Connecting spend to outcomes
When AI spend connects directly to business outcomes, the ROI conversation becomes straightforward.
You're not justifying tool costs in isolation. You're showing: "This team freed up 40% capacity through AI tooling. We reallocated that to the payments modernisation project. That shipped two quarters early. Here's the revenue impact."
The insight that matters isn't "are we more productive?" It's "how should our workforce plan change given what AI enables?"
If your teams can genuinely deliver more with the same headcount, you have options. Double down on the roadmap. Take on adjacent opportunities. Build the features that were perpetually deprioritised. The constraint was always capacity. AI changes the constraint.
The companies that can see this—measuring individual productivity and organisational capacity and AI costs in a unified view—will make better allocation decisions. They'll spot where AI investment pays off and where it doesn't. They'll adjust dynamically rather than planning on gut feel.
Looking ahead to 2026
The 2026 landscape will be defined by who figured out measurement in 2025.
AI coding tools will be table stakes. The differentiation will come from structured adoption, governance that prevents cost blowouts, and measurement systems that connect AI investment to organisational capacity—not just individual throughput.
The workforce will continue reshaping around engineers as problem definers and output supervisors, with AI handling mechanical work. Teams that invest in upskilling now will have engineers who can leverage the tools effectively. Teams that don't will burn tokens without the capacity gains.
The smart money is on doing more with your engineering organisation, not less. But only if you can see clearly what AI actually enables.
That's what we're building toward.
1. GitHub Copilot Statistics & Adoption Trends – Second Talent
2. Gartner Says 75% of Enterprise Software Engineers Will Use AI Code Assistants by 2028 – Gartner
3. Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity – METR
4. Unleashing developer productivity with generative AI – McKinsey
5. GitHub CEO says the 'smartest' companies will hire more software engineers – Yahoo Finance
6. Goldman Sachs is piloting its first autonomous coder – CNBC
7. Announcing the 2024 DORA Report – Google Cloud
8. The State Of AI Costs In 2025 – CloudZero