Integrating AI: an organizational transformation before a technological project

Written by Philippe Cieutat on 15 January 2026

There's a reflex I often encounter in my coaching work. A team slows down. A department starts creaking. Decision-making chains grow complicated. Quality becomes inconsistent. Delays set in. Tensions rise. And quickly, the same pattern emerges: they look for the tool.

  • A tool to "catch up."
  • A tool to "streamline."
  • A tool to "fix."

A tool to "automate what's broken." For a long time, the miracle tool kept changing names: ERP, CRM, lean, ticketing, RPA… Today, the miracle tool is called Artificial Intelligence.

I understand the temptation. AI impresses. It writes fast. It synthesizes cleanly. It suggests plans. It answers confidently. It gives the impression of restoring order. As if confusion were just a formatting flaw. As if a human system could be fixed by applying a layer of technology. Except AI isn't a patch. AI is a multiplier. And a multiplier doesn't "fix" anything: it amplifies what's already there.

That's exactly what struck me in recent months, reading the positions of two figures who, each in their own way, tell the same story: AI is a partner to human intelligence… but it doesn't excuse the absence of a system. On one side, Hosni Zaouali (Hoss), CEO of Voilà Learning and Guest Lecturer at Stanford (Department of Management), who emphasizes AI in service of humans, amplifying potential rather than replacing it. On the other, Ethan Mollick, professor at Wharton, who rigorously explores what AI truly changes in work: the nature of expertise, ways of learning, new risks, and how organizations must reconfigure themselves.

From cross-referencing their ideas with what I observe in the field, one conviction has solidified: AI shouldn't compensate for mediocrity. It should propel excellence. And that has a very direct consequence: integrating AI isn't primarily a technical adventure. It's an organizational transformation.

The Trap: Automating Failures Instead of Healing the System

Take a case that borders on caricature… A company deems its customer service "mediocre": long delays, inadequate responses, poorly managed escalations, repeated irritants. The decision comes down: "We're replacing level 1 with a conversational AI. It'll be faster. And cheaper." In some "replacement"-oriented deployments, satisfaction drops sharply; Hoss cites an example at -40%. And this kind of result is no mystery. The error is in the diagnosis.

The service wasn't mediocre because it was slow. It was mediocre because it missed the need: lack of relevance, absent listening, generic responses, misunderstood customer journeys, scattered internal knowledge, unhandled gray areas. In short: a system that doesn't know how to produce quality consistently. And what does AI do in this case? It automates. It accelerates. It standardizes. It industrializes. It responds faster, sure. But it misses the mark faster. At scale.

The company wanted to solve a human problem with a machine. It simply turned a dysfunction into an industrial process. It didn't fix the source: it multiplied the effect. That's where a phrase becomes central, because it says it all: Don't automate inefficiency.

Before "plugging in" AI, you must face reality: map the process, clarify quality criteria, understand contact patterns, identify edge cases, spot areas where empathy isn't a bonus but a condition for success. Sometimes you must "fix by hand." Not out of nostalgia for the human. But because a sick system doesn't heal by speeding up.

What Studies Reveal (and What They Don't)

At this point, one could settle for opinions. But recent research on AI in the workplace is invaluable: it provides benchmarks where intuition is often misleading.

The Jagged Frontier: AI Excels… But Not Everywhere

The study with consultants (Harvard Business School / BCG) became famous for a simple reason: it shows impressive gains, but uneven ones. AI can boost performance on certain tasks… and degrade it on others, sometimes "adjacent" ones. That's what the authors call a jagged technological frontier. Inside this frontier, AI is a formidable ally. Outside, it becomes a subtle danger: it produces plausible outputs, with impeccable form, sometimes on false premises.

The real trap isn't "AI errs." The real trap is overconfidence. When a team lacks verification routines, AI becomes a generator of seductive artifacts: summaries, emails, analyses… that pass the fluency test, but not always the reality test. And the smoother it is, the more credible it seems.

Successful teams don't "believe" AI. They use it as an options accelerator. They multiply ideas. Then they do what organizations must always do: arbitrate, verify, decide. They implement human-in-the-loop not as a slogan, but as a routine: control, critique, accountability. And this point is essential: here, human expertise isn't replaced. It's repositioned. The expert becomes editor. The expert becomes guarantor. The expert becomes the one who recognizes the frontier and knows when AI veers off course.
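
To make "human-in-the-loop as a routine" concrete, here is a minimal sketch in Python. Every name in it (Draft, generate_drafts, review, the checks) is invented for illustration, not drawn from any real library: the model proposes options, a named human verifies them against explicit criteria, and only approved output ships.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: str = ""  # accountability: every shipped output carries a name

def generate_drafts(brief: str, n: int = 3) -> list[Draft]:
    # Placeholder for any model call: the routine, not the model, is the point.
    return [Draft(text=f"Option {i + 1} for: {brief}") for i in range(n)]

def review(draft: Draft, reviewer: str, checks: list[Callable[[str], bool]]) -> Draft:
    # The human arbitrates against explicit quality criteria; AI output stays an option.
    draft.approved = all(check(draft.text) for check in checks)
    draft.reviewer = reviewer
    return draft

# Usage: nothing ships unless it passes the checks and carries a reviewer's name.
checks = [lambda t: len(t) > 0, lambda t: "Option" in t]  # stand-ins for real criteria
reviewed = [review(d, "j.doe", checks) for d in generate_drafts("refund policy reply")]
shipped = [d for d in reviewed if d.approved]
```

The value isn't in the fifteen lines of code. It's that verification and accountability exist as a step no output can skip.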

Expertise Diffusion: When AI Elevates Novices… Thanks to the Best

Another major study (Brynjolfsson, Li, Raymond, often cited via NBER) analyzed AI assistant use among thousands of support agents. A striking result: gains are particularly strong for the least experienced profiles, while the impact on experts, in terms of pure speed, is smaller. This result is often misinterpreted. One might conclude: "So the expert becomes useless." In reality, it's the opposite.

AI works because it captures and redistributes part of the top performers' tacit expertise. Responses, formulations, reasoning, sequences: the tool learns from what experts have produced, then makes it accessible to novices. It accelerates skill-building, narrows gaps, avoids classic errors. It's a huge advance. And it's also a systemic trap. Because if the organization, dazzled by juniors' quick gains, decides to replace costly experts with "augmented" juniors, it cuts off the very source of its intelligence. Experts remain essential for what AI handles poorly: unprecedented cases, exceptions, ambiguous situations, "edge cases" not yet in the data. They're the ones who invent the response before it becomes a standard. Without them, the model atrophies, and innovation slows. In other words: AI can diffuse excellence… but only if excellence keeps existing.

The Smoothing Effect: Raising the Floor Can Reduce Diversity

Finally, several recent studies converge on a subtler observation: AI excels at improving the "average," but it can also create standardization. That's not necessarily negative. In many organizations, standardizing certain outputs is a gain: clearer emails, better-structured reports, more consistent responses, more usable documents.

The problem arises when the organization confuses quality with conformity. In a high-performing team, it's not just about the "correct." It's sometimes about the "different." The unexpected angle. The breakthrough idea. The formulation that reframes the problem. The reasoning that opens a path. And here, a challenge emerges: if everyone relies on the same models, suggestions, and "best practices," outputs can converge toward sameness. AI raises the floor… but it can also lower the ceiling if the organization no longer encourages exploration and dissonance.

What makes the difference, once again, is human intent. An expert doesn't say "write me a text." They impose a constraint. They force a viewpoint. They deliberately seek the objection. They request opposing hypotheses. They push AI off the beaten path. And that gesture… isn't a prompt. It's a skill.
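
To make the contrast tangible (the prompts below are invented for the example), compare a generic request with a constrained one:

```python
# Illustrative only: two ways of asking a model for the same deliverable.
# The first invites the average; the second forces a viewpoint and an objection.
generic = "Write me a text about our pricing change."

constrained = (
    "Write a note announcing our pricing change. "
    "Constraints: adopt the viewpoint of our most skeptical customer, "
    "open with the strongest objection to the change, "
    "then offer two opposing hypotheses about how the market will react."
)
```

The difference between those two strings isn't tooling. It's exactly the skill described above: the constraint is what pushes the model off the beaten path.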

The Real Question: Why Some Deployments Create Value… and Others Industrialize Chaos

If you look closely, all these elements tell the same story:

  • AI creates value when inserted into a healthy system.
  • AI creates risk when it masks a sick system.
  • AI creates performance when it augments expertise.
  • AI creates noise when it replaces accountability.

This isn't a tool question. It's an organizational one.

And that's where the mindset shifts: integrating AI isn't "deploying a solution." It's making the organization capable of using it without fighting the wrong battle.

A Simple Sequence: Understand, Repair, Augment

In my work as a team coach (and more broadly, organizational coach), I've identified a sequence that consistently avoids missteps.

  1. Understand (Diagnosis)
    Before AI: understand what's truly broken.
    Is it a skills issue? Workload? Coordination? Prioritization? Decision quality? Workflow? Dependencies? Blurry responsibilities? Conflicting goals?
    If the process is absurd, AI will execute the absurdity faster.
    If the decision is fuzzy, AI will produce clean text on fuzziness.
    If knowledge is scattered, AI will give coherent answers from an unstable context.
    That's why I often talk about "organizational debt" before technical debt. As long as this debt is invisible, AI becomes makeup.

  2. Repair (Human Work)
    Next comes management and coaching time. Restore standards. Clarify definitions of "quality." Reduce gray areas. Install review routines. Recreate communication. Realign goals. Share meaning. Circulate information. Restore the department to health.
    You don't digitize chaos. And this moment is counterintuitive for many leaders: it feels "slow" because it doesn't look like a project. But that's where ROI conditions are built. Without this foundation, AI is a race… in the wrong direction.

  3. Augment (Leverage)
    Once the foundation is solid, AI becomes relevant, almost obvious. Use it to remove repetitive tasks, speed up preparation, structure information, explore options, improve writing, and reduce friction.

And above all: equip the champions. Not to create an elite. But to create a ripple effect. Top performers are often those who ask the right questions, test hypotheses, verify quality, spot errors, explore the unexpected. When you augment them, they produce standards, patterns, examples. They turn AI into collective knowledge. They pull the organization upward.

Only then do you scale. By making best practices visible. Installing routines. Creating a culture of healthy doubt. Building collective usage rather than a sum of individual ones.

What AI Multiplies… Is You

AI isn't a rescue tool for failing organizations. It's a strength multiplier.

  • Multiply zero by AI, you still get zero.
  • Multiply a confused organization by AI, you get faster confusion.
  • Multiply excellence and competence by AI, you get a lead.

And that, I believe, is the essential point for a manager or CEO: you don't just need technology. You need an organization capable of exploiting it. What's most lacking isn't a tool. It's the ability to diagnose reality, clean up routines, clarify responsibilities, install verification, protect learning, and turn AI into value. That's exactly coaching territory: not "accompanying on the side," but helping the organization become capable of learning and reconfiguring itself. AI won't ask if your processes are ready. It will advance. The question is simple: what will it amplify in you?