Maya gets to her desk at 8:40 with the intention of finishing a review she started yesterday, a pull request touching the data pipeline’s retry logic, the kind of change that looks minimal but isn’t. By 8:55, one of her AI agents has flagged a build failure. She context-switches to triage it: a dependency conflict introduced by something the agent scaffolded overnight. She fixes it. That takes twenty minutes and requires holding an entirely different mental model. She returns to the retry logic PR and re-reads the last three files she had annotated. The thread she was following, the one about how this change interacts with the deduplication layer under backpressure, has partially evaporated. She rebuilds it. At 9:50 a second agent surfaces a question in Slack. She marks it for later, and it sits in a tab all morning as a low-grade pull on her attention. The review gets done at 11:15, and it is thorough because she is thorough.

In a different version of this morning, she might have spent those same hours designing the caching layer the team has been deferring for two months. That work requires the kind of sustained, uninterrupted focus she has not had on a Tuesday in a while. She does not complain. She is good at her job and adapts. But somewhere in the middle of re-reading files she already annotated, she is spending cognitive budget she did not plan for, on work that was not on her list, that will not appear in any metric, and that is quietly making the deferral of the caching layer feel normal.

That story is not about AI making engineers unproductive. It is about how AI changes the texture of senior engineering work in ways that aggregate metrics do not capture.

Engineering Systems Do Not Fail at the Point of Creation

AI tooling has genuinely expanded what small teams can achieve, but software systems rarely fail because they were difficult to build. They fail because they become difficult to operate. The work required to make systems reliable (reviewing architectural decisions, validating integration boundaries, maintaining coherent abstractions, preserving institutional understanding) has always been less visible than feature delivery, and AI acceleration is widening that visibility gap.

As change velocity increases, organizations experience growing review backlogs, delayed test cycles, and rising dependence on a shrinking set of senior engineers capable of reasoning about system behavior under stress. These dynamics manifest as subtle friction: longer incident resolution, hesitancy to refactor, a gradual erosion of confidence in the predictability of deployments. Over time, that friction becomes structural.

Assimilation Capacity Is a Finite Resource

Every engineering organization has a limited capacity to absorb change. This includes how well engineers understand how new components interact with existing systems, the time available for thoughtful architectural review, the clarity with which design intent is communicated across teams, and the resilience of operational practices during unexpected events.

Historically, the pace of change was constrained enough that these mechanisms could adapt organically. AI-assisted development is breaking that equilibrium. When output increases faster than assimilation capacity, systems evolve in ways that few individuals fully comprehend. Local optimizations accumulate into global entropy. Institutional memory struggles to keep pace with architectural drift.

This is not simply a technical problem. It is an economic one. Engineering hours spent navigating avoidable complexity represent capital that cannot be invested in strategic differentiation, and the productivity story organizations tell themselves about AI tooling often does not account for that second-order cost.

Productivity Gains Are Unevenly Distributed, and That Creates Risk

Field experiments across nearly five thousand developers at three major technology companies found roughly a 26% increase in completed tasks among engineers using AI coding assistants. That number looks unambiguous until you examine the distribution. The gains were concentrated almost entirely among junior and less experienced contributors. For senior engineers, productivity was essentially flat, and in some measurements negative, consistent with findings that experienced developers can be slowed by the overhead of validating, correcting, and contextualizing generated output.

When junior contributors accelerate and senior contributors absorb the validation burden, organizations mistake volume for velocity. The throughput number rises. The cognitive load on the people most responsible for architectural coherence rises with it.

The costs of this asymmetry do not show up in performance reviews. A senior engineer I know has been quietly burning out for months, not from the work itself, but from the standup ritual that grew out of his team’s implicit productivity metric: what was accomplished with AI the day before. He is being measured on AI usage rather than AI judgment. What is not being measured is how often, each week, he quietly keeps junior engineers from shipping things they could not take back, or the quality of the architectural calls he makes between the standups.

Hiring Signals Are Being Distorted

AI tooling is also weakening the reliability of the proxies organizations use to evaluate engineering capability. The barrier to entry for new technologies is lower. Engineers can produce polished portfolios, plausible GitHub histories, and credible technical writing in unfamiliar domains with unprecedented speed. The distinction between surface-level familiarity and deep systems understanding becomes harder to detect during hiring.

Organizations that optimize for narrow experience signals may find themselves building teams capable of generating change but less equipped to steward long-lived systems. Engineers with strong domain foundations (distributed systems intuition, networking literacy, hardware awareness) create disproportionate value even when their recent experience does not align with a job description. The difference shows up three to five years after hiring.

Senior Engineers Are Becoming Throughput Governors

As AI increases the rate of change entering the pipeline, experienced engineers are increasingly asked to govern throughput rather than architect new systems. They become validators, integrators, and incident responders. That is often high-leverage work, and it can also become a bottleneck.

Figure 1 — The front of the pipeline accelerates. The middle does not. Complexity accumulates on the right.

When too much change depends on the judgment of too few individuals, organizations risk both burnout and fragility. Decisions are deferred, opportunities to develop junior engineers diminish, and institutional knowledge becomes localized rather than distributed.

The value of that localized knowledge is not visible until it is applied. A packet processing system I worked on had a race condition in shared memory ownership that caused packets to be silently dropped during nightly processor restarts. The issue accumulated for six months before it surfaced, took over a month to diagnose, and was eventually fixed. Years later, a senior engineer reviewing an adjacent system caught a structurally identical issue in the path between the packet processor and a proxy used to decrypt traffic, before it ever reached production. They recognized it because they had lived through the first one. That catch does not appear in any metric or sprint review. It exists only because the same engineer was around for the original failure and was still paying attention when its shape returned.

The organizational design problem and the infrastructure problem are, viewed at sufficient altitude, the same problem.

Building Organizations That Can Absorb What They Build

The practices that build assimilation capacity are not exotic. When I join an engineering team, the first thing I introduce, if it does not already exist, is a release cadence. Teams operating without one absorb a constant low-level uncertainty about when things ship, who is responsible for what, and what the next stable state looks like. A defined cadence gives engineers a finite structure they can depend on, and that dependability is a precondition for almost everything else.

From there, the practices I reach for most consistently are about where system surfaces touch each other. Schema changes, API contracts, shared infrastructure modifications. These are the points where one team’s decision becomes another team’s constraint. Explicit review gates at those boundaries, handled as lightweight async design documents rather than heavyweight committee approvals, do more to prevent systemic fragility than any amount of internal code quality tooling. My preference is round-robin gate ownership across the whole team, not just senior engineers, because it builds shared understanding of what the boundaries are and why they exist.
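
As one concrete shape for that rotation, here is a minimal sketch of weekly round-robin gate ownership, written as a Python helper that could be wired into whatever review-assignment tooling the team already uses; the roster, boundary paths, and routing step are all hypothetical.

```python
# Minimal sketch: weekly round-robin ownership of boundary-review gates.
# The roster and boundary paths are illustrative, not a real team or repo.
from datetime import date

TEAM = ["alice", "bikram", "chen", "dara", "eli"]  # whole team, not just seniors
BOUNDARY_PREFIXES = ("schemas/", "api/contracts/", "infra/shared/")

def touches_boundary(changed_paths):
    """True if any changed file sits on a surface where one team's
    decision becomes another team's constraint."""
    return any(path.startswith(BOUNDARY_PREFIXES) for path in changed_paths)

def gate_owner(today=None):
    """Rotate gate ownership by ISO week so everyone learns the boundaries."""
    week = (today or date.today()).isocalendar()[1]
    return TEAM[week % len(TEAM)]

if __name__ == "__main__":
    changed = ["api/contracts/orders.proto", "services/orders/handler.py"]
    if touches_boundary(changed):
        print(f"Boundary change: route the design doc to {gate_owner()}")
```

The deterministic rotation matters more than the mechanism. Everyone can predict whose week it is, and over a few cycles the whole team builds a shared map of where the boundaries actually are.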

The operating principle that connects these practices is accountability for the full lifecycle: you build it, you run it, you support it. It is not a punitive framing. It is an acknowledgment that the people closest to a system’s construction are also best positioned to understand its behavior in production. Separating those responsibilities produces the kind of institutional amnesia that makes incidents slow and expensive to resolve.

The measurements I advocate for reflect this framing: code churn rate, change failure rate, mean time to recovery, on-call load per team, and PR review depth rather than speed. These are not how engineering teams have classically measured progress. What they share is that they surface the cost of operating what has been built, not just the rate at which it was produced, and that shift in what gets measured tends to shift what gets valued.
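
To make two of these concrete, here is a minimal sketch of how change failure rate and mean time to recovery might be computed from deployment records; the Deployment shape and its fields are assumptions, not a standard schema.

```python
# Minimal sketch: change failure rate and MTTR from deployment records.
# The Deployment shape is hypothetical; adapt it to your deploy/incident data.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Deployment:
    deployed_at: datetime
    failed: bool                      # caused an incident or a rollback
    recovered_at: Optional[datetime]  # when service was restored, if it failed

def change_failure_rate(deploys: List[Deployment]) -> float:
    """Fraction of deployments that degraded service in production."""
    return sum(d.failed for d in deploys) / len(deploys)

def mean_time_to_recovery(deploys: List[Deployment]) -> timedelta:
    """Average time from a failed deployment to restored service."""
    outages = [d.recovered_at - d.deployed_at
               for d in deploys if d.failed and d.recovered_at]
    return sum(outages, timedelta()) / len(outages)
```

Both numbers describe what happens after the merge, which is precisely the part of the lifecycle that throughput metrics ignore.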

Output Is Easier. Stewardship Is Harder.

The preceding pieces in this series examined how change outpaces understanding and how exposure outpaces security maturity. This is the practical frame. The competitive advantage of engineering organizations in the coming years will be defined less by how quickly they can build, and more by how effectively they can understand what they have already built.

Organizations that treat assimilation capacity as a strategic asset, something to be designed, measured, and defended, are more likely to maintain both velocity and resilience as their platforms grow. Those that interpret productivity gains as justification for reducing investment in architectural leadership may find that the complexity they accumulated faster than they understood becomes the constraint that limits what comes next.

Sean O’Hara

Technology leader and Co-Founder of Arbor Engineering Group. He writes about infrastructure, engineering organizations, and the decisions that compound quietly before they surface. He is currently looking for a full-time CTO or Head of Engineering role. Find him at arboreng.com/insights or on LinkedIn.