GenAI · Product Management · Enterprise Transformation · Agile

Classical Agility is Dead: What AI-Native Teams Actually Need

January 6, 2026 · 12 min read

The agile methodologies we've spent decades perfecting were designed for a world where implementation was the bottleneck. AI just collapsed that assumption. Here's what comes next.

The Disappearing Middle

Karri Saarinen, founder of Linear, recently articulated something many of us have felt but struggled to name: "The middle of software work has been the most important part for a long time. You started with an idea, and eventually you shipped something, but almost all of the effort lived in between."

That middle, the translation of intent into implementation, has been the gravitational center of how we build software for decades. It's why we have sprints. It's why we have estimation rituals. It's why we have entire frameworks like SAFe designed to coordinate hundreds of people doing this translation work at scale.

And it's disappearing.

Pure coding agent workflows can now produce working code from goals, context, and tasks. They operate more independently, requiring you to touch the code less and rely on the IDE less. As Saarinen puts it, the IDE is becoming "more of a code viewer than a writing tool."

This isn't a prediction about 2030. This is what's happening right now, in January 2026, in the enterprises I work with every day.

The Fundamental Miscalculation of Classical Agility

Every agile methodology, from Scrum to SAFe to Kanban, shares a hidden assumption: that implementation is the primary constraint. Our ceremonies, our metrics, our entire organizational machinery was designed to optimize for a world where converting requirements into working software was slow, expensive, and unpredictable.

Consider what we optimized for:

  • Sprint planning exists because implementation takes time and we need to allocate that time carefully
  • Story points exist because we need to estimate how much implementation capacity we're consuming
  • Daily standups exist because implementation work creates blockers that need to be surfaced quickly
  • Retrospectives exist because we need to continuously improve our implementation velocity

The Agile Manifesto itself, with its emphasis on "working software over comprehensive documentation," was a rebellion against waterfall's front-loading of specification. The message was clear: don't waste time documenting, start building.

But what happens when building becomes nearly free?

The Numbers Tell the Story

The data from 2025 is unambiguous:

  • 82% of developers now use AI coding assistants daily or weekly
  • Developers using GitHub Copilot complete 126% more projects per week
  • 21% of Google's code is now AI-assisted
  • Large enterprises see 33-36% reduction in development time
  • 50% of developers in top-quartile organizations use AI coding tools daily

According to Menlo Ventures, companies spent $37 billion on generative AI in 2025, up from $11.5 billion in 2024. That's a 3.2x increase in a single year. The largest share, $19 billion, went to user-facing products and software that leverage AI models.

McKinsey and PwC report 20-50% productivity boosts for teams adopting GenAI. Some organizations are seeing two-week sprints compress to one. The constraint is no longer "how fast can we code?" It's shifting elsewhere.

Where the Bottleneck Actually Moved

If AI collapses the middle, where does the constraint move? Having led GenAI transformations for enterprise insurers, I see it shifting to two places:

1. The Front: Intent Clarity

What actually needs to be built is still the essential question. Understanding the problem, gathering the right context from customers and internal teams, and shaping the work so it can be acted on effectively matters more than ever, because agents act directly on that input.

In the AI era, garbage in equals garbage out at unprecedented speed. When an AI agent can execute on vague requirements in minutes rather than weeks, you'll discover ambiguity in your thinking far faster than you're used to. And that ambiguity will ship to production before you've finished your coffee.

This is why "intent clarity" is becoming the core skill. In Intent-Based Development, intent is the unit of work. Every commit, design choice, and test point traces back to a single, stated purpose with context and an expected result. The emerging concept of Adaptive Intent-Driven Development takes this further: leadership becomes about setting clear intents and context, then letting the network self-organize to execute.
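To make "intent as the unit of work" concrete, here is a minimal sketch of what an intent record might look like: a stated purpose, the context an agent needs, and an expected result to verify against. The field names and the example values are illustrative assumptions, not a published Intent-Based Development schema.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """One unit of work: a stated purpose, the context an agent needs
    to act on it, and the result that counts as done.
    Field names are illustrative, not a standard schema."""
    purpose: str                  # why this change should exist
    context: list[str]            # constraints, prior decisions, relevant docs
    expected_result: str          # observable outcome to verify against
    traces: list[str] = field(default_factory=list)  # commits/tests linked back

# Hypothetical example from an insurance domain
intent = Intent(
    purpose="Let policyholders download claim documents",
    context=["claims-service API v2", "PDF only", "auth via existing session"],
    expected_result="GET /claims/{id}/documents returns a signed PDF link",
)
assert intent.traces == []  # nothing has executed against this intent yet
```

The point of the structure is that every later commit, design choice, and test can append to `traces`, so the work stays auditable back to its stated purpose.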

2. The Back: Verification and Review

Here's the counterintuitive finding from 2025: as AI code generation exploded, code review became the new bottleneck. Teams with high AI adoption see 98% more pull requests and 154% larger PRs, but PR review time increases 91%.

Sonar's 2026 survey found 96% of developers don't fully trust AI-generated code accuracy. Stack Overflow's 2025 survey confirms it: developer trust in AI accuracy dropped from 43% in 2024 to just 33% in 2025.

AI-generated code may be syntactically correct while being semantically wrong. It lacks the implicit context a human developer carries. Reviewers must work harder to understand why certain decisions were made. The skill shift is from "how do I write this code?" to "how do I verify this code is correct, secure, and maintainable?"

What This Means for SAFe and Enterprise Frameworks

The assumptions underlying heavyweight frameworks are increasingly misaligned with reality. This critique isn't new. Alistair Cockburn, one of the original signatories of the Agile Manifesto, noted that what they did when they invented Agile was a specific thing, and the emerging methodologies for Agile at scale are not that thing.

But AI sharpens the critique considerably:

  • 10-12 week PI planning cycles were designed for a world where implementation is slow and expensive. When AI can execute in hours, planning horizons of months feel absurd.
  • Story point estimation assumes implementation time is variable and hard to predict. With AI, the variable is increasingly "how clear is the intent?" not "how long will coding take?"
  • Sprint commitments assume a stable relationship between team capacity and deliverable output. AI makes that relationship radically non-linear.

The 2025 State of Agile Report says it plainly: the methodology has reached a "major turning point" as AI moves from being a supportive tool to an orchestrator in delivery cycles.

Some are predicting that 2026 is the year agile transformations cease to exist as we've known them. In their place, organizations will turn to hybrid, flexible, and modern ways of working.

The New Competencies

So what should AI-native teams actually optimize for? I see three critical competencies:

1. Context Engineering

The most valuable contributors now aren't just those who can implement solutions quickly. They're the ones who can translate ambiguous product intent into atomic, unambiguous requirements, simultaneously considering existing system behavior, intended behavior, edge cases, and technical decisions. They become high-fidelity context providers.

This is the new craft. Writing code is less like constructing a solution and more like setting up the conditions for a good solution to emerge. Tools like Linear embody this shift by capturing intent, needs, constraints, and ownership in ways that make work understandable before, during, and after execution. Spec-Driven Development takes it further by automatically generating AI-readable specifications that become the control surface for all AI contributions.

2. Agent Orchestration

25% of Linear workspaces now use agents; among enterprise workspaces that figure exceeds 60%. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by 2026, up from less than 5% in 2025.

Directing and managing agent work becomes the craft. This isn't about prompt engineering. It's about designing the conditions under which agents can succeed. Deloitte's research on agentic AI strategy emphasizes that leading organizations are reimagining operations and managing agents as workers. The organizations stuck in pilot mode are treating agents like tools. The successful ones are treating them like team members.

3. Verification at Scale

If AI generates code faster than humans can review it, verification must be automated and stratified. Leading companies like Netflix have adopted "shift left" approaches, moving quality checks earlier in the pipeline.

The hybrid model emerging in 2025 is clear: automatically approve small, low-risk, well-scoped changes while routing schema updates, cross-service changes, and security-sensitive modifications to humans. AI review must categorize PRs by risk and selectively automate approvals.

This isn't optional infrastructure. It's the only sustainable path toward matching AI generation speed.

A New Operating Model

Here's what I believe the AI-native operating model looks like:

Replace sprint planning with intent cycles. Instead of committing to a set of stories for two weeks, teams should focus on clarifying intent with enough context that AI agents can execute. The planning question shifts from "what will we build?" to "what do we need to be true, and how will we verify it?"

Replace estimation with risk classification. Story points don't make sense when AI can execute some tasks 100x faster than before and others remain unchanged. What matters is classifying work by risk, ambiguity, and verification complexity.

Replace daily standups with context refinement. The blockers aren't "I'm stuck on implementation." The blockers are "the intent isn't clear enough for the agent to execute" or "we're not confident in the verification criteria."

Replace retrospectives with feedback loop optimization. The question isn't "how do we improve our velocity?" It's "how do we reduce the time from intent to verified outcome?"
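If the metric to optimize is time from intent to verified outcome, it has to be measured. Here is a minimal sketch computing that lead time from two timestamped workflow events; the event names and timestamp format are assumptions, since any tool that records when an intent was stated and when its outcome was verified could feed this.

```python
from datetime import datetime

def intent_to_verified_hours(events: dict[str, str]) -> float:
    """Lead time from intent stated to outcome verified, in hours.
    Event names and the ISO-like timestamp format are illustrative."""
    fmt = "%Y-%m-%dT%H:%M"
    start = datetime.strptime(events["intent_stated"], fmt)
    end = datetime.strptime(events["outcome_verified"], fmt)
    return (end - start).total_seconds() / 3600

hours = intent_to_verified_hours({
    "intent_stated":    "2026-01-05T09:00",
    "outcome_verified": "2026-01-05T15:30",
})
print(hours)  # 6.5
```

Tracked per work item, the distribution of this number is what a feedback-loop retrospective inspects, in place of velocity.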

The Stakes for Enterprise

Gartner's projection suggests agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion. Organizations that adapt their operating models will capture this value. Organizations that don't will find themselves moving at 2015 speeds in a 2026 market.

The MIT finding that 95% of enterprise AI implementations are falling short isn't surprising. Most organizations are trying to bolt AI onto operating models designed for a different constraint. They're adding copilots to their existing processes rather than redesigning those processes around the new constraint frontier.

This is why Gartner also predicts that by 2028, three out of four enterprise software engineers will depend on AI coding assistants, up from fewer than one in ten in early 2023. The change isn't optional. It's industrial.

What Should Leaders Do?

If you're a decision-maker trying to navigate this, here's my advice:

  1. Stop optimizing your implementation machine. Your bottleneck has moved. Every investment in faster sprints is wasted effort.

  2. Start investing in intent clarity. Build capabilities for capturing customer context, shaping requirements with precision, and encoding intent in ways agents can act on.

  3. Redesign your verification infrastructure. You need automated code review, risk stratification, and quality gates that can match AI generation speed.

  4. Treat agents as workers, not tools. Give them clear assignments, access to context, and integration with your coordination systems.

  5. Embrace lightweight transformation. Heavy frameworks will slow you down. The clear winners are organizations that adopt AI agility organically, step by step.

The Death That Gives Life

"Classical agility is dead" isn't a eulogy. It's an evolution. The core insight of the Agile Manifesto remains more relevant than ever: individuals and interactions over processes and tools. It's just that the individuals now include AI agents, and the interactions look completely different.

When the middle disappears, what remains is what always mattered most: understanding what should exist in the world, and verifying that what you built achieves it. Implementation was always just the means, never the end.

The organizations that thrive won't be those with the fastest sprints or the most elaborate frameworks. They'll be those who can articulate intent with precision, provide agents with rich context, and verify outcomes with confidence.

Welcome to the era where knowing what to build matters more than knowing how to build it.
