Who designs the system that runs the company?
Imagine a procurement agent that approves 300 purchase orders a day. Each one is below €200, from an approved supplier, within budget. The system runs cleanly for 46 days – and then, on day 47, it approves a cleaning-chemicals order from a supplier whose factory had been cited the previous week for environmental violations. The order is legal. The reputational damage that follows is real.
Who decided to place the order? The purchasing manager says she was asleep when it happened. The CPO says he authorized the use of agents in procurement – not this specific transaction. The CEO points to a digital transformation mandate and a budget.
Dr. Martin Hofmann, author of The Agentic Enterprise and former Group CIO of Volkswagen, let the question hang before answering it: “No one did.”
That answer, and what it implies for how organizations are built, was the subject of Hofmann’s recent Insight Hour at ESMT Berlin. His argument is not that AI agents are dangerous. It is that companies are deploying them into governance structures that were never designed to handle autonomous decisions. The problem is the missing architecture, not the technology.
From rigid structures to adaptive systems
“The system is not changing,” Hofmann observed, “but the environment is.” In this gap, humans have become what he calls the “shock absorber” – compensating for systems that cannot adapt fast enough, building task forces, running overtime, filling cracks.
Agentic AI changes that equation, but not in the way most discussions suggest. The distinction Hofmann draws is precise: an agent is not a chatbot answering questions. It is not an automated workflow following fixed rules, regardless of context. An agent has a target, senses its environment, reasons, decides, acts, and – critically – learns from the outcomes. “I want you to really memorize this definition,” he said, “because it’s important to understand what the capabilities are, where the limitations are, and what that really means to build it.”
What this enables is a fundamentally different operating model. In an agentic organization, a 25 percent overnight tariff on steel is not a crisis requiring a task force. It is an event the system has already begun responding to before the relevant manager arrives at the office. Change, which in the traditional model is a source of friction, becomes a source of competitive advantage.
When no one decides
The procurement example is not hypothetical. Hofmann confirmed it is real, with the company name withheld. He followed it with a second case: a recruiting agent that processes 2,000 applications and shortlists 30. A hiring manager interviews five and makes two offers. Six months later, an audit reveals the agent systematically filtered out applicants over 50. The recruiter points to the shortlist the agent provided. The data team says the bias was not something they programmed. The CHRO says she authorized using agents to make recruitment faster.
“Who’s accountable?” Hofmann asked. “No one is. [That’s the] uncomfortable truth.”
This is not a technical failure. It is an organizational one. In traditional structures, accountability follows roles. In agentic systems, decisions emerge from interactions between humans, rules, and machines. Without deliberate design, responsibility diffuses until it belongs to no one.
The solution Hofmann proposes is not tighter oversight at the point of decision – that defeats the purpose. It is what he calls the “trust stack”: four architectural layers that must be built before an agent is deployed.
The four layers of the trust stack
The first is a decision contract: a detailed specification defining what an agent may do, what it must never do, when it must escalate to a human, and what ethical constraints apply. Decisions are assigned to one of three zones: green (agent acts autonomously), yellow (agent recommends, human approves), red (human only). Where exactly to draw those lines is itself a leadership decision. Draw the line too far toward autonomy and you accumulate risk; too far toward human oversight and you lose the efficiency gains that made the investment worthwhile.
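The zone logic of such a decision contract can be sketched in a few lines of code. This is purely illustrative: the thresholds, categories, and names below are invented for the example, not taken from Hofmann's talk.

```python
from dataclasses import dataclass
from enum import Enum

class Zone(Enum):
    GREEN = "agent acts autonomously"
    YELLOW = "agent recommends, human approves"
    RED = "human only"

@dataclass
class PurchaseOrder:
    amount_eur: float
    supplier_approved: bool
    category: str

def classify(order: PurchaseOrder) -> Zone:
    # Red: categories the contract reserves for humans only (illustrative).
    if order.category in {"legal", "hr"}:
        return Zone.RED
    # Yellow: outside the agent's autonomous mandate; escalate for approval.
    if not order.supplier_approved or order.amount_eur > 200:
        return Zone.YELLOW
    # Green: within the agent's mandate; it may act on its own.
    return Zone.GREEN
```

Where each boundary sits – the €200 limit, the reserved categories – is exactly the leadership decision Hofmann describes: the code only enforces lines that humans have drawn.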
The second layer is a data contract: explicit rules governing which sources an agent may access. The third is an activity channel: a timestamped log of every action the agent takes, including its reasoning. “You don’t know what your people do in your company. You cannot monitor your people, and you should not,” Hofmann said. “Agents need to be monitored.” The fourth is a digital twin: a stored reference version of the agent against which its live behavior is continuously compared. If it drifts, you pull the kill switch.
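A minimal sketch of the activity channel and the digital-twin drift check, again with all identifiers and thresholds invented for illustration:

```python
import time

def log_action(channel, agent_id, action, reasoning):
    """Activity channel: append a timestamped record of an agent action,
    including the reasoning behind it."""
    channel.append({"ts": time.time(), "agent": agent_id,
                    "action": action, "reasoning": reasoning})

def drifted(live_decisions, twin_decisions, threshold=0.05):
    """Digital-twin check: compare the live agent's decisions against the
    stored reference agent's decisions on the same inputs. If disagreement
    exceeds the threshold, the kill-switch condition is met."""
    disagreements = sum(a != b for a, b in zip(live_decisions, twin_decisions))
    return disagreements / len(live_decisions) > threshold

channel = []
log_action(channel, "proc-agent-1", "approve PO-1042",
           "under limit, approved supplier, within budget")
```

The point of the sketch is the asymmetry Hofmann insists on: humans are trusted without surveillance, while every agent action leaves an auditable trace that can be replayed against the reference version.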
Together, these layers constitute something organizations have not had to build before: trust engineered into software. “This is basically what we have never done before in history,” Hofmann said, “and now we need to do it.”

What leadership becomes
Hofmann’s second example, the recruiting case, ends without a verdict – no one is accountable, and the system that produced the bias is still running. That unresolved quality is deliberate. His point is not that agentic systems fail, but that organizations have not yet built the leadership capacity to run them well.
That capacity starts with a shift in how leaders define their own role. In the traditional model, a manager says “implement a procurement system” and monitors execution. In the agentic model, she says “reduce material costs by 5 percent by year-end, in these categories” and then steps back. The hybrid team – human and agent – determines how. “The org chart is becoming a social map,” Hofmann said, one that includes humans, agents, and the interactions between them. The leader’s job is to design that map, not to direct its traffic.
Three competencies for the agentic era
Three competencies follow from this shift. The first is outcome clarity: the ability to specify what success looks like, not how to get there. The second is trust architecture – designing the contracts, evidence systems, and escalation paths that make autonomy safe. Hofmann was explicit that this is not a technical skill. “This is a very, very different leadership skill than just telling someone to go and do. This is design work.” The third is adoption leadership: bringing the workforce into the redesign process rather than imposing it. The best change management, he argued, is not a workshop or a three-month project. It is giving people ownership of the redesign of their own work. “It’s their work, their pride.”
None of these competencies, Hofmann observed, are yet standard in business education. “Most MBA schools do not teach that today. I guarantee this will be on the agenda in the next two years.”
The real question
Hofmann ended by correcting his own framing. The question he arrived with – who runs the company when AI agents do the work – is, he argued, the wrong one. “The real question should be: who designs the system that runs the company?”
That question is not confined to the C-suite. In an agentic enterprise, system design becomes a shared responsibility – one that extends across functions, disciplines, and levels, to anyone with knowledge of where processes break down, where edge cases arise, where the ethical lines need to be drawn. “Everyone in the company should have, and can have, the ability and the chance to co-architect and co-design that enterprise.”
“It’s not disruption,” Hofmann concluded. “I would call it empowerment by design – if we live it that way.”
The full conversation is available on YouTube. The next edition of ESMT Insight Hour takes place on April 22, 2026; click here to register. Join our mailing list to participate in upcoming events.