
Why the Biggest AI Challenge Isn’t Technical

  • Apr 1
  • 4 min read

Written By: Yehuda King, Head of Artificial Intelligence Solutions & Operational Excellence

Over the last several months, I have been working with executive and senior management teams across Europe and the Nordic region. One pattern has become increasingly clear: the biggest barrier to enterprise AI is rarely the technology itself. It is the human and organisational readiness around it.

Different countries, different contexts, different functions: yet the same human patterns kept emerging.

What struck me most is that these are not only observations about AI. They are observations about how people and organisations respond when a technology demands new ways of thinking, clearer problem definition, and greater ownership than they may be used to.

1. The mental model people arrive with determines almost everything

Before meaningful progress can happen, there is a prior question that often goes unexamined: what does this person actually believe AI is?

Many senior professionals have encountered AI first through generic productivity tools, and a good number arrived carrying quiet disappointment from those experiences. That prior exposure shapes more than people realise. It affects what problems they believe AI can solve, how seriously they engage, and whether they approach the opportunity with curiosity or scepticism.

What became clear was that scepticism in this context is not necessarily resistance. More often, it is a rational response to having been overpromised before. The teams that moved fastest were not always the most technically literate. They were the ones whose expectations had been recalibrated through an honest understanding of both capability and limitation.

The mental model is the starting point. Everything else is downstream of it.

2. Systematic problem-solving is not always a natural thinking process. It must be deliberate.

Across functions, a consistent pattern emerged when teams were asked to identify where AI could add value. The instinct was often to jump to a solution concept, something that would be useful or interesting, rather than to surface the underlying friction driving it.

This is not a shortcoming. It is simply how most people think. Unless they have been trained in systematic problem-solving, they do not naturally break work down into root causes, constraints, and repeatable workflows.

What consistently improved the quality of thinking was a shift in framing. The conversation became far more valuable when the question moved away from "where could AI help?" and toward "what do you wish you could change about how you work today?"

That shift mattered. It moved teams from feature-request thinking into problem-led thinking. Operations and production teams were often closer to their pain points and could surface them more readily. Finance, HR, Legal, IT, and other support functions often needed more reflection to identify friction they had long since normalised.

The quality of an AI use-case is almost entirely determined by the quality of the problem it is anchored to.

3. Value is consistently underestimated, and almost always deeper than it first appears

When strong opportunities did emerge, teams almost always stopped at the first layer of value. Time saving. Reduced manual effort. Faster turnaround.

These are valid benefits, but they are rarely the full story. And when AI opportunities are assessed only through that lens, genuinely high-impact use-cases can be undervalued or deprioritised.

A contract review solution does not just save a few hours. It can reduce the risk of non-compliance, improve consistency, and prevent downstream liability. A procurement or policy review use-case does not just make a process quicker. It can prevent costly errors, improve quality, and reduce exposure in ways that are materially more valuable than the immediate productivity gain.

The deeper value, spanning quality, risk, compliance, and cost avoidance, often sits beneath the surface. Organisations that are able to surface that second and third layer of value make better decisions about where to invest.

Organisations that frame AI value only in terms of time saving will systematically underinvest in their highest-impact opportunities.

4. Often, the discomfort is not about the technology itself, but about what it exposes

One of the more nuanced observations across these engagements was that hesitation around AI adoption is often not really about the technology.

It is about what the technology reveals.

AI tends to force a level of clarity that many organisations have historically been able to operate without. It asks teams to articulate what they want, how they work, where decisions are made, what good looks like, and where judgment is actually being applied. In doing so, it can expose undocumented processes, inconsistent ways of working, tacit knowledge held by only a few individuals, and operational complexity that has never been fully surfaced.

That discomfort is understandable. But it is also useful.

The organisations that progressed most effectively were those that treated this discomfort as a diagnostic signal, not a reason to retreat. They recognised that the challenge was not that AI was creating disorder, but that it was making visible the disorder that was already there.

AI does not create organisational complexity. It often reveals the complexity that was already there.

5. Dependency is the default. Ownership has to be built deliberately.

In many organisations, the instinct is still to identify an opportunity and then hand it to someone else to realise. This is especially true in larger enterprises, where technology delivery has historically sat with a central team, and business functions are accustomed to acting as requestors rather than builders.

That model creates dependency very quickly.

What became clear is that lasting adoption only begins to take hold when the people closest to the problem begin to develop ownership over the solution. Not because everyone must become technical, but because meaningful enablement requires more than access to tools. It requires agency, confidence, and a belief that shaping solutions is part of one's role.

Where that ownership started to emerge, the relationship with AI changed. It was no longer seen as something external being introduced into the organisation. It became something the organisation itself could actively shape and apply.

The difference between AI adoption and AI dependency is whether the people closest to the problem are the ones shaping the solution.

The organisations that will extract lasting value from AI will not be defined only by the quality of their tools. They will be defined by whether their people can identify real problems, reason clearly about value, and take ownership of solutions.

In that sense, AI enablement is not primarily a technology programme. It is a human capability programme, with technology attached.
