AI integration in existing businesses is not a tech problem

Over the past months, I have been talking to many people at the forefront of AI development: founders of startups with AI at their very core, AI engineers building LLM implementations into products and workflows - and OpenClaw enthusiasts building one-person startups with the tool. Of course, AI integration is a highly technical topic. Or is it? Here are my main takeaways, structured into a framework meant to give orientation to any organisation in this transformation. Some of it may not be what you'd expect.

AI in businesses is not an isolated, stand-alone tool. It’s an entire system. And this system needs a solid foundation.

An AI engineer’s perspective

I’ve been following the AI rollout in companies for quite a while now. Early on, it was clear - not just to me - that this would be the next big transformation process in organisations, just as digitisation was. While I suspected that the most critical parts of this transformation would be non-technical, I was still blown away by the clear, unambiguous statement of a seasoned AI engineer I talked to recently.

He told me something along these lines: “AI integration is hardly a technical issue: the technical part is very small and easily solved - very quickly. The real challenges lie in establishing and adapting the processes and organisational systems around the ML/LLM model - and in managing the change.” Let me add that he seemed clearly frustrated about unrealistic expectations and a lack of follow-through from CEOs and leaders on the client side.

So I developed a framework primarily aimed at leadership teams and decision-makers navigating this transformation. Not engineers, but the people who commission, oversee, and live with the consequences of AI integration.

A framework for orientation

When talking about AI integration in existing businesses, there is a lot to unpack. Often, things get messy and the discussion confusing. Buzzwords keep flying around, but in my experience, buzzwords do not help. What does help is a structured framework. Frameworks are not always perfect or complete - but they bring focus and a shared understanding into the conversation. So, here’s my attempt to bring more structure into your AI-related boardroom discussion:

My AI Integration Framework (for existing organisations)

  1. Readiness: data, infrastructure, culture, regulatory exposure, leadership alignment

  2. System design: workflows, human-in-the-loop decisions, failure modes

  3. Governance: inventory, ownership, monitoring, agent management

  4. Change & capability: process overhaul, communication, fear, skill gaps

  5. Risk & security: agentic access, vendor dependency, error recovery

  6. Measurement: how do you know it's working (beyond cost savings)

1) Readiness

AI readiness is about assessing whether the organisation has the foundations in place before any AI is introduced. It covers data quality and accessibility: when you discover that your operational data lives in three incompatible legacy systems with no unified identifiers, you will need a data consolidation initiative before any AI integration can start.

The assessment also covers existing infrastructure, regulatory exposure, and cultural readiness: does the organisation have the psychological safety and tolerance for iteration that AI adoption requires?

Lastly, leadership alignment is a soft, but necessary readiness factor: AI integration stalls most often because the CEO and the leadership team don't share the same mental model of what AI can and can't do, or what they're actually trying to achieve.

2) System design

AI is never just a tool dropped into a process. It requires redesigning the process around the AI's capabilities and limitations. Often, organisations bolt AI onto existing workflows instead of asking "if we had this capability from the start, how would we design the process?" System design also includes mapping which decisions the AI makes autonomously, which require human review, and what happens when the AI fails or produces low-confidence outputs. It is not a matter of if an AI model makes a mistake, but of what happens when it does.

The system around AI models must be designed with multiple safety layers that kick in when a mistake happens, so the damage is contained. It's best practice to keep humans in the loop (directly making or approving each decision), on the loop (monitoring outcomes and intervening when needed), or over the loop (setting the overall rules and boundaries within which the AI operates autonomously). The right level of autonomy depends on consequence severity and reversibility, not just preference. High-stakes, hard-to-reverse decisions need tighter human control regardless of how much leadership trusts the AI.
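To make this concrete, here is a minimal sketch of how such a routing rule could be encoded. All names, severity scales, and thresholds are illustrative assumptions, not part of any specific product or standard:

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human approves each decision"
    ON_THE_LOOP = "human monitors and can intervene"
    OVER_THE_LOOP = "human sets boundaries, AI acts autonomously"

@dataclass
class Decision:
    severity: int            # illustrative scale: 1 (trivial) .. 5 (existential)
    reversible: bool
    model_confidence: float  # 0.0 .. 1.0

def required_oversight(d: Decision) -> Oversight:
    # High-stakes or hard-to-reverse decisions always get direct human
    # control, regardless of how confident the model is.
    if d.severity >= 4 or not d.reversible:
        return Oversight.IN_THE_LOOP
    # Medium stakes, or low model confidence: humans monitor outcomes.
    if d.severity >= 2 or d.model_confidence < 0.8:
        return Oversight.ON_THE_LOOP
    # Low-stakes, reversible, high-confidence: autonomous within set rules.
    return Oversight.OVER_THE_LOOP
```

Note that severity and reversibility dominate confidence here: a 99%-confident model still does not get autonomy over an irreversible decision.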

And then there is the question of where not to use AI. Organisations tend to over-apply it once they've bought in. But using AI simply because you can is ill-advised. Knowing which decisions or processes should stay human is part of good system design.

3) Governance

Governance is about creating the organisational structures to manage AI as an ongoing operational reality, not a one-time project. It includes a living inventory of all models, agents, and integrations in use: who owns them, what they do, which systems they can access, which decisions they take, and who is accountable when they fail? AI agents need to be managed like staff: regular performance reviews, clear mandates, and someone responsible for their outputs.

When implementing governance mechanisms, you might realise that in your company, 14 different AI tools are in use across teams, most subscribed to individually, with no central oversight. You might also discover shadow usage, and that two tools process customer data without a DPA in place. Another discovery might be a worrying level of access for an AI agent, which - if exploited - would have devastating consequences for your company.
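Such a living inventory doesn't need special tooling to start. Even a simple structured registry forces the right questions; the fields below are an illustrative sketch, not a compliance standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    owner: str                 # an accountable person, not just a team
    purpose: str
    system_access: list[str]   # e.g. ["CRM read", "email send"]
    processes_customer_data: bool
    dpa_in_place: bool         # data processing agreement with the vendor
    decisions_taken: list[str] = field(default_factory=list)

def compliance_gaps(inventory: list[AIToolRecord]) -> list[str]:
    """Flag tools that process customer data without a DPA in place."""
    return [t.name for t in inventory
            if t.processes_customer_data and not t.dpa_in_place]
```

Running `compliance_gaps` over a real inventory is exactly how the kind of shadow-usage discoveries described above surface early instead of during an audit.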

Documentation and auditability are especially relevant for regulated industries and increasingly required under the EU AI Act: who can reconstruct what the AI decided, and why?

4) Change & capability

There is no other way to put it: the change that AI integration brings can be massive. It irreversibly alters or - in many cases - ends careers in the company. This is why I would consider the human side of AI integration the hardest part. Addressing the (very real) fear of job loss and the identity shifts for people whose expertise is being partially automated is key.

And then there are the long-established processes that need a complete overhaul, requiring buy-in from and coordination among multiple departments - middle management in particular. Middle managers are often the most threatened and the least discussed, and they are the ones who can quietly kill adoption if they're not brought along. Changing processes that, at their core, have been the same for a decade or two is a huge step for both the organisation and the humans involved. Certainties and long-lived truths crumble; the rules are rewritten from one day to the next.

The employees who remain require new skills and need to learn how to work with AI effectively: evaluating outputs critically, designing prompts, knowing when not to trust the model. A typical failure mode is a team treating AI output as gospel rather than as a first draft requiring judgment.

5) Risk & security

This goes beyond data privacy to address the new attack surfaces and failure modes that AI systems introduce. Agentic AI with broad system access (think OpenClaw-type tools) needs to be threat-modeled: prompt injection, privilege escalation, and audit gaps are real and underappreciated risks. An agent with access to email, calendar, and file systems automatically becomes a high-value target: a malicious document in the environment could instruct it to exfiltrate data. This can have devastating consequences for any organisation: blackmail, data loss, and ultimately, a complete standstill.

Apart from cyber security, there are also commercial risks related to the use of AI models: organisations need contingency plans for vendor changes and model deprecations. Imagine your company builds a critical sales workflow on a specific LLM API - and when pricing doubles, you have no fallback.
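One common mitigation is a thin abstraction layer between the workflow and any specific vendor, so a pricing or deprecation shock becomes a configuration change rather than a rewrite. A minimal sketch of the idea (the provider classes and their single-method interface are hypothetical stand-ins, not a real vendor API):

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Any vendor client only needs to satisfy this one method."""
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider:
    def complete(self, prompt: str) -> str:
        # Simulates the primary vendor becoming unusable overnight.
        raise RuntimeError("API pricing changed; provider disabled")

class FallbackProvider:
    def complete(self, prompt: str) -> str:
        return f"[fallback] answer to: {prompt}"

def complete_with_fallback(prompt: str, providers: list[LLMProvider]) -> str:
    # Try providers in priority order; the workflow keeps running
    # even when the primary vendor fails or is switched off.
    last_error: Exception | None = None
    for p in providers:
        try:
            return p.complete(prompt)
        except Exception as e:
            last_error = e
    raise RuntimeError("all providers failed") from last_error
```

The point is architectural, not technical cleverness: if every workflow calls `complete_with_fallback` instead of one vendor's SDK directly, swapping vendors is a one-line priority change.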

Lastly and crucially, there is the case when the AI makes a serious error (see point 2, System Design). The question I ask every client is: "When your AI makes a significant mistake - not if - what happens next?"

6) Measurement

Measurement defines what success actually looks like beyond the obvious cost savings, and helps you improve over time. Most organisations default to measuring hours saved, but miss quality, decision reliability, employee experience, and customer impact.

Think of many hours saved but a steep decline in customer satisfaction. And what about your employees? Are they happy to use the tools because they deliver real value, or are they forced to use them (performative adoption) because they fear being fired if they don't? You may have fewer employees, but their satisfaction and retention still matter - probably more than before. The Financial Times reported that in February 2026, Accenture linked staff’s promotion prospects to the number of times they logged in to AI tools. This is exactly the kind of measurement you want to avoid: it says nothing about productivity or quality of work, but stirs up resistance and disbelief among employees.

So, a good measurement framework tracks not just output volume or savings, but also error rates, escalation frequency, and whether the humans overseeing the system are actually equipped to catch what the AI gets wrong. Without clear metrics, it's impossible to know whether the AI integration is working, degrading, or creating hidden risks over time.
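Such a framework can be made concrete as a small set of tracked metrics with alerting rules. The metric names and thresholds below are illustrative placeholders that an organisation would calibrate for itself:

```python
from dataclasses import dataclass

@dataclass
class AIHealthMetrics:
    hours_saved: float
    error_rate: float        # share of outputs with factual/process errors
    escalation_rate: float   # share of cases handed back to humans
    human_catch_rate: float  # share of AI errors that reviewers caught
    csat_delta: float        # change in customer satisfaction score

def health_warnings(m: AIHealthMetrics) -> list[str]:
    # Thresholds are illustrative; each organisation sets its own tolerances.
    warnings = []
    if m.error_rate > 0.05:
        warnings.append("error rate above tolerance")
    if m.human_catch_rate < 0.9:
        warnings.append("reviewers are missing AI errors")
    if m.csat_delta < 0:
        warnings.append("customer satisfaction declining despite savings")
    return warnings
```

Notice that `hours_saved` appears in the record but triggers no warning on its own: savings are only meaningful in the context of the other signals.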

Take-away

So, if I had to sum it all up in two sentences, it would be these: AI integration is not about a tool; it is about rethinking systems and processes. Change management is the real bottleneck.
