There is a silent yet profound shift taking place in software engineering. For decades, nearly every architectural discussion started from an assumption so obvious it rarely needed to be stated: code is written, read, maintained, and evolved by human beings. Clean Architecture, MVC, Hexagonal, DDD, layers, separation of concerns, dependency inversion — all of this was conceived as a response to one central constraint: the cognitive limits of human teams. Programmers lose context, mix responsibilities, duplicate business rules, create too much coupling, and get lost as the system grows. Architecture emerged, to a large extent, as a discipline for containing that chaos.
The problem is that this assumption has begun to change. Not because humans have ceased to exist, but because they are no longer necessarily the primary agent in software construction. AI writes code, reorganizes code, suggests patterns, creates tests, refactors entire modules, and increasingly participates in maintenance. That changes everything. And it changes things in a way that many people still have not fully understood: it is no longer enough to ask which architecture is more elegant, more decoupled, or more sustainable for human beings.
The question now is a different one: which architecture is more intelligible, more operable, and less ambiguous for a machine that generates and modifies code?
The invisible anachronism of software engineering
This shift makes part of the traditional architectural debate potentially anachronistic. And anachronism here is not an insult. It is a precise description. It means applying a technique that belongs to a different historical context than the one in which it is now being used. Clean Architecture may still be correct in many scenarios, but the classical justification for defending it comes from a world in which humans were at the center of the software production process.
Once the machine enters the game as a relevant producer of code, the foundation of the decision must be reevaluated. Automatically sticking to the same criteria is like optimizing a horse-drawn carriage after the invention of the engine.
That is why the common criticism of Clean Architecture today seems insufficient to me. In general, it is attacked for being “overkill,” for introducing too many adapters, too many ports, too many DTOs, too many abstractions, too many files. The standard criticism says: for many projects, this is simply too much work. A simple MVC will do. A procedural organization with some discipline will do. A more direct, less liturgical system will do. That criticism is not wrong, but it is still an old one. It continues to judge architecture by the human cost of implementing it.
The problem is not writing architecture. It is reasoning about it.
[Suggested image: an AI staring at hundreds of code files connected by confusing lines, trying to reconstruct the system flow as if it were a chaotic map]
For an AI, creating one more interface, one more adapter, one more DTO, or one more layer is rarely a major operational cost. The machine does not suffer the way a programmer suffers when faced with repetitive ceremony. It does not complain about the amount of ritual. If you ask it to generate it, it will.
The problem, then, is not the effort of writing the architecture. The problem is the effort of reasoning about it afterward. And that is where a decisive difference appears: what helps a human being think modularly may not help an AI think correctly.
Classical architecture was designed to favor human comprehension. It organizes the system into smaller parts because humans deal better with controlled decomposition, explicit conceptual boundaries, and separation of responsibilities. That reduces the team’s cognitive load. But an LLM does not think like a team. It operates through available context, textual proximity, recurring patterns, names, examples, statistical correlations, and local instructions.
When good human architecture becomes noise for AI
[Suggested image: a complex layered diagram of Clean Architecture breaking apart into multiple fragments while an AI tries to assemble the correct flow as if solving a puzzle]
When a system is excessively decomposed into layers and indirections, the agent has to mentally reconstruct the real flow from scattered fragments. For an experienced human, that may be perfectly acceptable. For an AI, it may be the ideal recipe for ambiguity.
This is the central point: perhaps the correct criticism of Clean Architecture in the age of AI is not “it requires too much work,” but rather “it can introduce a kind of complexity that produces interpretation errors in code agents.”
An LLM suffers from something that, in practical terms, can be described as context collapse. It sees many parts, many relationships, many intermediate abstractions, and tends to correlate more than it should. Sometimes this is useful. Other times, it is disastrous.
The curious case of God Classes
[Suggested image: a huge code class represented as a “central planet” with many functions orbiting around it, while an AI observes the complete flow of an operation]
Interestingly, some patterns traditionally considered bad by classical engineering reveal a surprising behavior when viewed from the perspective of agents.
The so-called God Class, for example, is often seen as an anti-pattern: a class that is too large, responsible for too many operations in the system. It concentrates rules, coordinates flows, and accumulates responsibilities.
For humans, this is usually problematic because the class grows too much and becomes hard to maintain. But for an AI, there is an important nuance: a God Class can be semantically dense. In other words, a large part of a feature’s flow is concentrated in a single place.
That reduces one specific type of cognitive cost: the need to reconstruct the system’s behavior by navigating across ten different files. For an agent working through textual context, that concentration can make it easier to understand a complete operation.
The Facade pattern as an architectural compromise
[Suggested image: a large complex system hidden behind a single door labeled “Facade,” while an AI enters through that door and performs operations without seeing the internal complexity]
This is where a classical pattern may gain renewed relevance: Facade.
The Facade pattern creates a simple interface that encapsulates the internal complexity of a subsystem. Instead of requiring the agent to navigate across multiple classes or layers to execute an operation, it finds a clear entry point that coordinates the interaction among components.
In terms of agent-oriented architecture, this creates something highly useful: an explicit operational flow.
A facade class may, for example, contain the complete flow of an operation:
- validate data
- execute business rules
- call persistence
- assemble the response
Even if layers and abstractions exist internally, the entry point remains clear.
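As a rough sketch of this idea, here is what such a facade might look like in Python. The domain (orders), the types, and the business rules are all hypothetical placeholders; the point is only that the four steps of the flow are visible in one method, in order, at a single entry point:

```python
from dataclasses import dataclass


# Hypothetical domain types, for illustration only.
@dataclass
class OrderRequest:
    customer_id: str
    items: list[str]


@dataclass
class OrderResponse:
    order_id: str
    status: str


class OrderRepository:
    """Stand-in for the persistence layer behind the facade."""

    def save(self, request: OrderRequest) -> str:
        # A real implementation would write to a database.
        return f"order-{abs(hash((request.customer_id, tuple(request.items)))) % 10_000}"


class OrderFacade:
    """Single entry point: the complete flow of the operation is visible here."""

    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def place_order(self, request: OrderRequest) -> OrderResponse:
        # 1. Validate data
        if not request.items:
            raise ValueError("an order needs at least one item")
        # 2. Execute business rules (an invented example rule)
        if len(request.items) > 100:
            raise ValueError("orders are limited to 100 items")
        # 3. Call persistence
        order_id = self.repository.save(request)
        # 4. Assemble the response
        return OrderResponse(order_id=order_id, status="placed")
```

An agent reading `place_order` can recover the entire operation without navigating to other files, even though persistence is still isolated behind `OrderRepository`.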
Architecture oriented toward machine cognition
[Suggested image: two diagrams side by side — one highly fragmented into many layers, the other modular with cohesive blocks connected by a few clear flows]
This leads us to a shift in architectural criteria.
For a long time, we celebrated systems whose beauty lay in the purity of their layers, their low coupling, and their isolated testability. All of that still has value. But now a new metric emerges, one that is almost absent from classical theory: does the architecture reduce or amplify operational ambiguity for a probabilistic agent?
A system can be impeccable according to the dogmas of good engineering and still be bad for AI-driven maintenance.
It is important to note that this does not mean classical structural principles have lost their relevance. Separation of responsibilities, infrastructure isolation, and interface contracts remain important mechanisms for containing the complexity of large systems.
The point is different: the degree of fragmentation must be recalibrated.
Modularity without excessive fragmentation
[Suggested image: a system represented as large, clear modules, each containing rules, data, and flow, connected by only a few interfaces]
Agent-oriented architecture tends to favor something closer to a modular monolith with clear boundaries.
Instead of decomposing each operation into multiple ceremonial files, the system can organize functionality into cohesive modules where the main flow is visible.
A typical structure may contain:
- domain modules
- services or facades that coordinate flows
- repositories or gateways for infrastructure
The difference lies not in the absence of architecture, but in the reduction of unnecessary indirections.
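A minimal sketch of such a cohesive module, with invented names and a fictional "billing" domain: the domain rules, the coordinating service, and the infrastructure gateway live side by side, so the main flow is readable without cross-file navigation, yet the three roles remain distinct:

```python
from dataclasses import dataclass


# --- domain ---
@dataclass
class Invoice:
    amount_cents: int
    paid: bool = False


def apply_late_fee(invoice: Invoice, fee_cents: int) -> Invoice:
    """Pure business rule: returns a new invoice with the fee added."""
    return Invoice(amount_cents=invoice.amount_cents + fee_cents, paid=invoice.paid)


# --- infrastructure gateway ---
class InvoiceStore:
    """In-memory stand-in for a repository; a real one would hit a database."""

    def __init__(self) -> None:
        self._rows: dict[int, Invoice] = {}

    def save(self, invoice_id: int, invoice: Invoice) -> None:
        self._rows[invoice_id] = invoice

    def load(self, invoice_id: int) -> Invoice:
        return self._rows[invoice_id]


# --- service coordinating the flow ---
class BillingService:
    """The operation's flow is spelled out in one place: load, apply rule, save."""

    def __init__(self, store: InvoiceStore):
        self.store = store

    def charge_late_fee(self, invoice_id: int, fee_cents: int) -> Invoice:
        invoice = self.store.load(invoice_id)
        updated = apply_late_fee(invoice, fee_cents)
        self.store.save(invoice_id, updated)
        return updated
```

One module, three clearly labeled roles, zero ceremonial indirection: there is no extra interface, DTO, or adapter between the service and the rule it applies.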
The new objective of architecture
[Suggested image: a human architect and an AI looking together at the same software diagram, representing cooperation between human and machine minds]
Traditional software engineering optimized architecture to reduce human cognitive load.
In the age of AI, a second objective emerges: to reduce interpretive ambiguity for agents.
This implies some practical guidelines:
- favor semantically cohesive modules
- use Facades to make flows explicit
- avoid excessive decomposition into intermediate layers
- maintain rigid structural conventions
- make the path of change traceable
Architecture ceases to be merely a tool for human organization and also becomes an instrument of communication with code agents.
Architecture as a language between humans and machines
[Suggested image: a bridge made of code connecting a human programmer on one side and an artificial intelligence on the other]
Software engineering has always evolved in response to the kind of system being built and the kind of mind building it. For decades, the focus was on disciplining human work in increasingly complex systems.
Now we are entering a hybrid scenario in which humans and machines both participate in software construction. In this context, the role of architecture changes. It ceases to be merely a way of making systems understandable to people and also becomes a mechanism for making systems interpretable by probabilistic agents.
This does not mean abandoning the principles of classical engineering. It means reinterpreting them in light of a new participant in the process.
Future architectures will probably be neither the shapeless chaos of improvisation nor the excessive liturgy of cascading abstractions. They will tend to combine modularity with operational clarity, concentrating flows into explicit points — often through patterns such as Facade — while keeping structural complexity under control.
In other words, architecture in the age of AI does not need to be less engineering. It needs to be engineering designed for two kinds of minds at the same time.