Strategy Without Context: How EU HealthTech Founders Misread the U.S. Market

Earlier this year, I met a friend for lunch in London. He had recently exited a leadership role at a successful digital health startup, one that had built a loyal user base, secured NHS contracting, and by all European measures, succeeded. But the company had plateaued. It wasn’t going to expand beyond its domestic market, and for a venture-backed startup, that was a problem.

Over fancy fish and chips, we exchanged notes. I was curious about his perspective; I wanted to understand the European founder experience, particularly when it came to evaluating U.S. market entry. He shared his reflections on building in Europe; I offered some (admittedly unsolicited) armchair analysis of how his former company might have approached the U.S. market differently.

Two important things from that conversation have stayed with me:

First: The US is on every market expansion roadmap – but most maps lack a key or legend.

For EU and UK digital health founders, U.S. market entry isn’t just an option; it’s baked into the funding trajectory and expectations from the start. It might not be explicitly stated in pre-seed rounds, but it’s absolutely part of an investor’s calculus. To reach the scale and returns that venture funders expect, the U.S. remains the holy grail: a massive market with one centralized regulatory system and a robust appetite for technical solutions to address systemic inefficiencies.

Nothing surprising there.  Here’s the kicker: while U.S. market entry is an expected milestone, the actual mechanics of that market aren’t well understood, even by the most sophisticated and experienced founders.

My lunch companion perfectly illustrates this phenomenon. During our conversation, he reflected on his one lingering regret: they should have pursued the U.S. market. They hesitated, he explained, because the regulatory pathway was too demanding and uncertain. They weren’t eligible for 510(k) clearance, and the full FDA De Novo process meant indeterminate cost and timeline risk. So, they deferred, missing the momentum from their last funding round. They should have entered the U.S., he said, and the mistake was one of timing.

I wasn’t convinced. I was familiar with his application, a digital therapeutic for mental health management. It had clear potential as a patient engagement and self-management tool, which meant natural appeal to U.S. payers and providers managing risk-based or capitated populations. I asked whether they had considered piloting the intervention in the U.S. as a supportive tool, positioning it (at least initially) outside the clinical intervention framework. This approach,  often overlooked by European founders, would have enabled his team to build an evidence base (not clinical trial evidence, but ROI evidence), develop relationships with payers and providers, and establish U.S. market traction without waiting for FDA clearance.

He didn’t hesitate: “No. Ours is a clinical intervention. We needed FDA clearance, and that would have taken two years.”

His forceful and unequivocal response was a bit of a conversation stopper. It also, however, reflected a crucial misunderstanding of the U.S. healthcare market: he assumed that his product's role and business model in the EU would carry over unchanged to the U.S.

Specifically, in Europe, the product's pedigree as a clinically validated intervention to treat a given condition was the company's greatest asset. Its designation as a clinical intervention was core to the company's identity and its business model. But that rigid adherence to a clinical-only identity became a limitation in the U.S. market. It blinded his team to a critical opportunity: the ability to pivot and adapt the application to serve the pain points of their actual U.S. clients, healthcare payers. By framing their solution so rigidly, they locked themselves into requiring FDA clearance, which meant they never entered the market at all.

Why Are European Founders So Prone to This Mistake?

This isn’t an isolated case, but a pattern I see among European founders. Not because they lack sophistication, but because their experience and success in home systems have trained them to think in ways that don’t translate to the U.S. context.

When European founders look at the U.S. market, they assume it works like their home markets, just faster and more expensive. They apply a European lens to an American ecosystem, and that lens fundamentally distorts what they see.

Let me explain. In Europe, the go-to-market sequence is orderly, hierarchical, and technocratic. Basically, it works like this:

Regulation → Reimbursement → Adoption

CE marking from a notified body (the European analogue of FDA clearance), followed by evaluation from a Health Technology Assessment (HTA) body, unlocks the pathway to reimbursement. Clinical evidence and comparative effectiveness analyses drive that evaluation. National contracting and scale follow. The process is long, but it's predictable, repeatable, and institutional.

This model creates rational assumptions that founders internalize and carry forward:

  1. Regulatory approval (and reimbursement approval) is the primary barrier to market entry and adoption
  2. Clinical evidence and rigorous academic validation are the currency of legitimacy and market success

These assumptions make perfect sense in Europe. They are, in fact, correct for European markets. But when founders apply this same framework to the U.S. market, they fundamentally misread the landscape.

The core misunderstanding is this: the U.S. does not have a healthcare system; it has a healthcare ecosystem. It's a fragmented, overlapping network of stakeholders, payers, providers, and value chains, each with different incentives, priorities, risk tolerances, and decision-making processes. There is no central gatekeeper. There is no single payer, and that means there is no single pathway to adoption. The difference between a centralized system (with all its challenges and deficits) and an ecosystem built around private market principles is the difference between We Bought a Zoo and Jurassic Park. (Welcome to Jurassic Park, indeed.) There is no unified "market entry" in the way European founders conceptualize and experience it in their home markets.

The complexity of the U.S. healthcare landscape deserves its own deep dive, which we'll cover in a future article. For now, suffice it to say: it's a lot.

Regulation matters enormously in the U.S., but it is not what drives adoption, that intangible "traction" founders and funders obsess over. This is surprising to most Europeans because, unlike in European systems, regulators in the U.S. are not the payers or clients. The FDA's role is to ensure the safety, integrity, and effectiveness of products, but it has no role in reimbursement, utilization, or market success.

This demands a fundamental shift in the go-to-market mental map of European founders. They tend to prioritize regulatory approval as the critical milestone, investing enormous resources and time into achieving it, only to discover that FDA clearance doesn't open doors the way CE marking (certification that a medical device conforms to EU safety and performance requirements) does in Europe. By the time founders realize adoption requires an entirely different playbook (one focused on demonstrating ROI, building payer partnerships, integrating into provider workflows, and navigating a byzantine reimbursement landscape), they've burned resources and momentum.

The challenge, and the mistake my friend and his team likely made when evaluating whether to enter the US market, is recognizing these blind spots upfront. Instead of doubling down on strategies tailored to the European model, successful entrants identify where U.S. market dynamics differ and adapt their approach early. This means rethinking not just when to pursue regulatory approval, but whether it’s necessary at all for initial market entry.

Inverting the Sequence: How the US Market Actually Works

For many non-device solutions (specifically, those that support clinical decision-making or workflows, or facilitate patient management and engagement), the sequencing in the U.S. often looks like this:

Adoption → Evidence → Reimbursement → (Maybe) Regulation

Here’s a critical distinction that European founders often miss: not all digital health tools require FDA authorization. Indeed, the U.S. regulatory framework was never designed to treat every digital health tool as a medical device. The FDA regulates “Software as a Medical Device” (SaMD) when the software performs a clinical function: diagnosing, treating, or making autonomous clinical decisions. Many tools that support clinicians, help patients manage chronic conditions, facilitate workflow, or promote general wellness either fall outside the device definition entirely or sit within the FDA’s enforcement discretion policies. In practice, this means a wide range of adjunctive tools can be deployed without FDA authorization, because clinical judgment, not the software, remains the locus of control.

This discretionary space is not an accident; it reflects a broader U.S. tradition of light-touch professional regulation, where physicians retain autonomy and are expected to interpret and contextualize supportive tools. The result is a large market of solutions that augment care rather than replace it. European founders, coming from systems where regulatory designation tightly maps to reimbursement and adoption, often misinterpret this as a loophole. It isn’t. It is a feature of the U.S. system: a system that relies more heavily on clinician judgment, tort liability, and market forces than on centralized gate-keeping.

Of course, as software becomes more central to care delivery and AI systems assume more determinative roles, the line between supportive workflows and the autonomous provider will continue to blur. The scope of FDA-regulated products will continue to expand, but it is important to keep in mind that, even with expanded regulatory reach, not all regulatory processes and pathways are the same or will involve the rigor of the premarket review archetypes built for pharmaceuticals and traditional medical devices. The agency is developing a more structured, software-specific regulatory framework. This evolving framework reflects an intentional, risk-based philosophy that allows low-risk and adjunctive tools to evolve quickly without unnecessary burden. This more nuanced regulatory architecture seeks to adapt to modern care workflows while preserving core free-market principles: speed, innovation, and real-world use. This evolving but still flexible space remains a viable, and often faster, pathway into the U.S. market.

Why EU Founders Consistently Miss This

This is a hard concept for EU founders to grasp, let alone spend capital pursuing. You send your product to market… without regulatory approval. You develop partnerships, generate evidence of impact (whether on operational processes, patient outcomes, or cost savings), execute business plans, and generate revenue without any sign-off from an administrative body. This is the inverse of the European system(s). It feels risky, unstructured, and perhaps even illegitimate (it’s not). Rather, it reflects the nature of the U.S. ecosystem: a market characterized by decentralization and free-market principles, where money (and evidence of ROI) talks.

This brings us to a universal truth in business: know your customer. Whether you’re selling donuts or digital health, you won’t get far unless you tailor your offering to your actual clients and their context and priorities. In the U.S., the state is often not the client (even though it’s frequently the one picking up the check in the end). If you don’t prioritize developing traction and credibility with the actual prospective clients and payers, FDA approval by itself gets you nowhere commercially.

My friend’s company had a solution that could have provided immediate value to U.S. payers managing chronic disease populations. By piloting it as a patient engagement tool (as support to a patient’s care team, outside the direct clinical intervention framework) they could have:

  • Built relationships with U.S. payers (who are desperate for tools that improve outcomes and reduce costs)
  • Generated evidence of impact in the U.S. healthcare context
  • Established a revenue base to fund the regulatory approval process (if needed)
  • Learned what U.S. stakeholders actually need and prioritize (which might have been different from initial assumptions) and adapted their offering to meet these needs
  • Positioned the company for acquisition or partnership

Instead, by defining their solution as exclusively clinical, they made FDA clearance a prerequisite, which meant they never entered the market at all. To return to the Jurassic Park analogy: when you are navigating an ecosystem predicated on free-market principles, sometimes your best move is to think differently and come in from the side.

Addressing Blind Spots and the Genesis of the Series

Our conversation also raised another challenge, one that compounds these blind spots: the pressure on founders to always appear certain.

Founders are conditioned to project confidence, expertise, and conviction. Admitting uncertainty, especially about something as fundamental as “how does this market actually work?” feels like a weakness. In pitch meetings, investor updates, and competitive conversations, there’s enormous pressure to have all the answers.

This creates a trap.

Founders can’t admit when they’re operating from flawed assumptions because doing so would undermine confidence in their ability to execute. So there is a tendency to double down on the mental models they know, even when those models don’t fit the new context. And even when founders hire experts (regulatory attorneys, market consultants), those professionals tend to focus on specific technical questions rather than challenge foundational assumptions, for fear of stepping on toes.

This is particularly acute when it comes to the U.S. healthcare market because it’s so genuinely confusing. Even Americans who work in healthcare struggle to explain it coherently. How do you ask “stupid” foundational questions, like “Why doesn’t the U.S. have universal healthcare? How does the absence of a national health system translate to different market priorities? Who actually decides what gets reimbursed? Why do payers and providers seem to have conflicting incentives?” when you’re supposed to be the expert positioning your company for U.S. expansion?

The answer to identifying and resolving cultural blind spots is to ask these questions, to be vulnerable, to admit what you don’t understand. But the founder experience rarely creates space for that kind of vulnerability. That’s why I’m writing this series.

After my lunch with my friend, I realized that what European founders need isn’t just a guide to U.S. regulatory pathways or reimbursement codes, or an explanation of how to approach hospitals and health systems. Those resources exist.

What’s missing is an accessible explainer of the foundational mental models and cultural assumptions that shape the U.S. healthcare market, and how they apply to (and sometimes directly challenge) the EU founder. This series is designed to be that resource: a space to understand not just what the U.S. market requires, but why it works the way it does. Consider this series a safe space for “stupid questions”; or, to continue the Jurassic Park theme, your pocket Jeff Goldblum, a companion who can interpret the tremors, identify the pivots, and tell you when, frankly, you “must go faster.”

We’ll cover:

  • Why the U.S. isn’t “one market,” and what that means for your strategy
  • How U.S. stakeholders define and measure “value” (plus a glossary to translate industry jargon)
  • Understanding U.S. value-based care and other transformation models (and the role of digital solutions)
  • Why European traction and clinical evidence don’t automatically translate to U.S. regulatory approval or market success
  • An overview of the FDA approval process, and the evolving approach to regulating Software as a Medical Device

The goal is not to overwhelm you with complexity (as always, our objective is to studiously untangle complexity), but to help you see the U.S. market clearly, and therefore to move forward confidently, hopefully bridging the best of two worlds.

In the next installment of our series, we’ll dig into what “the U.S. is not one market” actually means and how this impacts your market strategy.

About the Author: Tina Simpson is a healthcare strategist and co-founder of Line Axia, a consultancy that helps European digital health companies navigate U.S. market entry. Having worked on both sides of the Atlantic, she specializes in translating across healthcare ecosystems, 90s adventure films, and regulatory jargon.

 

If this article resonated with you—or if you found yourself thinking “wait, that’s exactly the conversation we need to have internally”—let’s talk. Line Axia works with European digital health companies navigating U.S. market entry, helping founders recognize blind spots before they become expensive mistakes.

As always, this post was written by a human being! Not AI. ChatGPT was used for final grammar edits and spellcheck.

Is Lithuania the Next Data-Center Boom in Europe?

Short answer: yes. Long answer: also yes, but with an important asterisk.


By: Olivia Aréchiga, Co-Founder, Line Axia

(AI Disclaimer – as always, this post was written by a human, me! I used ChatGPT-5 to confirm and discuss several sources.)

 

Who remembers that great viral video from February this past year – Baltic leaders flipping the grid switch, ending reliance on Russian/Belarusian energy, to join the European energy grid?

Lithuania and its neighbors, Estonia and Latvia, are now completely independent of the Russian and Belarusian electricity grid, a gigantic and critical step for their geopolitical stability and economic development.

This is one of the key reasons investors should be looking more closely at this region for data center (DC) development. Lithuania, uniquely, offers several advantages over its other Baltic neighbors.

First, let’s discuss why we’re looking at DC development specifically, and not, for example, manufacturing. Both could be lucrative and beneficial for the Lithuanian economy. But DC is the golden egg.

Why?

 

AI.

 

Data center development brings large upfront capital (hundreds of millions to billions) and long asset lives. Data centers act as an anchor for the digital economy, and as a linchpin for AI.

It’s important to understand that AI development ultimately depends on three interlocked resources: compute, power, and storage. Compute is what turns data into intelligence (chips, GPUs). Storage is what holds the massive training datasets on physical servers. And power is what keeps it all running.

But traditional DCs built for colocation and redundant on-ramps are not going to cut it. AI requires “hyperscale” DC campuses. A hyperscale data center is not just big; it’s uniquely architected for scale and uniformity.

It’s the physical backbone for companies running enormous workloads such as AI model training and cloud services.

Simply training a single large-scale model can consume millions of kilowatt-hours, and that electricity draw keeps going once the model is live. That’s why data centers are no longer just digital infrastructure; they’ve become energy infrastructure.
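
To put “millions of kilowatt-hours” in perspective, here is a minimal back-of-envelope sketch. Every number (cluster size, per-accelerator draw, run length) is an illustrative assumption, not a figure from any particular model or operator:

```python
# Back-of-envelope estimate of the energy used by one large training run.
# All numbers below are illustrative assumptions, not measured figures.

num_accelerators = 10_000       # hypothetical GPU/TPU count in the training cluster
watts_per_accelerator = 1_000   # ~1 kW each once cooling and facility overhead are included
training_days = 30              # length of a single large training run

hours = training_days * 24
total_kwh = num_accelerators * watts_per_accelerator * hours / 1_000  # watt-hours -> kWh

print(f"Estimated energy for one run: {total_kwh:,.0f} kWh")  # ~7,200,000 kWh
```

Under those assumptions, the cluster behaves like a continuous 10 MW industrial load before any inference traffic arrives, which is why siting conversations start with the grid rather than the building.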

AI workloads outgrew traditional data centers a long time ago. Hyperscale campuses are now the only environments that can reliably deliver the power, cooling, and interconnect required for modern model training and large-scale inference.

And it seems Lithuania, with its large grid-adjacent land parcels, renewable energy expansion, and fast-track permits, is actively positioning itself to attract new hyperscale builds.

 

Power & Sustainability

 

Lithuania offers a grid that’s both reliable and increasingly green. According to its own national investment-agency site, Lithuania aims to have 100% renewable energy infrastructure by 2030. This timeline is far more aggressive than those of most other EU countries.

By comparison Lithuania’s neighbor, Latvia, hopes that by 2030, they’ll be able to “source 57% of total energy from renewable sources, with an ultimate goal of climate neutrality by 2050…”.

Cooling is a giant electricity cost line-item in data-center OPEX. Lithuania’s climate is relatively cool for most of the year, enabling efficient “free-cooling” strategies. That means lower power usage effectiveness (PUE), fewer external heat risks, and better margin potential for operators.
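
For readers new to the metric, PUE is simply total facility energy divided by the energy that actually reaches the IT equipment, so a lower number means less overhead lost to cooling and power distribution. A minimal sketch with made-up numbers (real sites report their own measured values):

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The figures below are illustrative only, not measurements from any real facility.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A site dependent on mechanical chillers vs. one leaning on free cooling in a cool climate:
print(pue(16_000, 10_000))  # 1.6 -> 60% overhead on top of the IT load
print(pue(12_000, 10_000))  # 1.2 -> 20% overhead, closer to the ideal of 1.0
```

The gap between those two numbers is, roughly, the margin that free cooling is meant to claw back.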

For data centers, electricity is easily the largest cost and one of the biggest risks. Lithuania’s combination of cost-control, credibility and sustainability is a serious draw.

 

Connectivity & Geography

 

Lithuania sits on multiple international fiber corridors, including the Baltic Highway and the NordBalt subsea cable to Sweden and Germany.

This provides Lithuania a low-latency link to Western Europe, while still providing geographic and political diversification from saturated markets like the Netherlands and Ireland (currently two of Europe’s most congested DC hubs).

Lithuania has a lower population density than most EU countries. Its density is ~46 people/km² across 62,674 km² of land, well below many Western EU markets, so it is ostensibly easier to find parcels at a distance from housing and residential areas while still tying into the electricity and fiber grids**.

The number one lesson of real estate (“Location, Location, Location”) also governs data-center siting, but with more exacting technical requirements. The ideal site is a kind of “Goldilocks zone”: not (too) densely populated, but not so remote that grid connection and fiber access become costly or wasteful.

Several big industrial parks, Free Economic Zones (FEZs), are reportedly energy-anchored sites designed for big-footprint industry. The government is actively advertising “data center ready” sites, like its Kruonis area.

Established data-center markets like Ireland and the Netherlands have fewer large, “shovel-ready” industrial parks with heavy-duty grid access and a comfortable residential buffer. Their combination of grid-connection limits, hyperscale policy constraints, and urban density makes it hard to stand up very large, quick-to-build plots, regardless of whether a parcel exists on paper. These constraints are why developers often look to peripheral regions or other countries for the big footprints hyperscale builds require.

** Author’s note – Building data centers that tie into populated or residential areas has become more and more controversial, and for good reason. They raise electricity costs for nearby homes and businesses, and can have a substantial negative effect on the environment, including water scarcity, ecological disruption, and water contamination. Hyperscale DCs use an unprecedented amount of energy; renewable energy is the only feasible way to keep up.**

 

Business Environment & Incentives

 

The Lithuanian government is actively treating data-center development as a strategic sector. And so far, it doesn’t seem like this is just a marketing head-fake.

Earlier this year, Lithuania’s Economy & Innovation Ministry launched an “Investment Highway.” This legislation cuts red tape for major projects, framed as enabling “up to ten times faster movement from investor decision to start of construction.” It includes the “Green Corridor” initiative, aimed at streamlining procedures for large-scale, complex projects, along with attractive tax benefits.

How effective this program will be remains to be seen, but the legislation itself seems to offer concrete pathways and clear communication routes directly into government channels.

An even stronger signal of Lithuania’s strategic commitment to this broader initiative is how rapidly it transposed and implemented the EU’s NIS2 cybersecurity directive. Neighboring Estonia and Latvia missed the EU’s implementation deadline. This is not a minor detail: it signals not only a level of cybersecurity readiness, but also regulatory predictability and administrative capacity, qualities that materially de-risk long-horizon infrastructure investment.

In preparing this article, I reached out to Milda Venckutė, Investment Advisor, and Elijus Čivili, General Manager of Invest Lithuania, the country’s official investment promotion agency, for comment. Mr. Čivili’s response reinforced the degree to which data infrastructure (at the scale needed for AI transformation) is now regarded as a matter of national systems planning, rather than ad-hoc speculative development by any single sector.

“This fits our pattern,” he explained, pointing to Lithuania’s increased prominence as a fintech hub in the wake of Brexit, and its recent expansion of defense manufacturing facilities in cooperation with Rheinmetall.

“Both times, we saw the opportunity and moved fast with flexible regulation and clear investment strategies. Now with our new Investment Highway initiative cutting pre-construction development timelines by half and our government treating data centers as a strategic priority, we are applying the same approach to become a solution for AI data centers.”

Centralized government strategy and coordination are now being used to accelerate hyperscale development, underscoring that data-center capacity is being treated as core national infrastructure, not simply a development or tax-arbitrage play.

Lithuania’s DC market is currently pretty small. And yet, it has all the markings of a fertile environment for development.

As any investor will say, “timing is key.” Being early in a high-growth region can lock in land, first-mover positioning, and cost arbitrage relative to Europe’s currently saturated hubs.

 

The Other Side of the Coin – Local Competition & Geography

 

Lithuania isn’t alone. The Baltic and Nordic regions are courting the same investors with similar pitches: green energy, cool climate, low cost. The differentiator may come down to execution: who actually delivers power, permits, and uptime faster.

Lithuania has made the most concrete moves to convince the world it is the one in the region that can do this.

Estonia is positioning itself as a leader in digital public services, and Latvia seems “less vocal” on fast-track builds. For big infrastructure (like data centers), Lithuania’s recent reforms make it the most overtly pro-build of the three.

 

What about Lithuania’s “Other” Neighbors?

 

We have yet to talk about the elephant in the room – Russia (Kaliningrad) and Belarus. These neighbors raise real concerns about Lithuania’s geopolitical position.

Lithuania’s NATO and EU membership should quell some investors’ fears about geopolitical instability. And by becoming completely energy independent, it has effectively eliminated Moscow’s leverage over electricity flows.

Yet risks such as Russian DDoS and cyber-attack campaigns (like the 2022 Killnet attacks) are not one-off concerns, and must be assumed in long-term resiliency planning. The NIS2 framework is important in this regard, as Lithuania seems more than aware of these threats and, more importantly, prepared.

There are also concerns about physical infrastructure security, specifically undersea cables. In response, NATO earlier this year formed “Baltic Sentry,” a new military program to “strengthen the protection of critical infrastructure.”

Yet one cannot ignore the outright animosity in the region: Russia’s coercive maneuvers, hostile positioning, and rhetoric require its Baltic neighbors to maintain a heightened security posture at all times. There is an objective need for more advanced incident protocols in the region, to manage real geographic risks that other EU countries don’t share.

Thus, while Lithuania’s NATO/EU status and 2025 grid synchronization have reduced structural dependence on Russia/Belarus, hybrid threats (cyber, undersea) remain a real, ongoing planning assumption. These risks are manageable, but they must be engineered in from day one.

Additionally, while Lithuania’s renewable goals are bold, the grid is still developing. Integration of variable wind and solar sources can cause load-balancing challenges, and large-scale data centers require stable, uninterruptible, redundant power.

Neighboring countries like Sweden and Finland already have excess generation capacity and more experience supporting the hyperscale loads DCs need.

Lithuania is still building its data center brand. The ecosystem of specialized contractors, suppliers, and technical workforce isn’t as deep as in Frankfurt or Dublin. For early entrants, that means potentially higher build costs, longer commissioning times, and steeper learning curves.

 

The Bottom Line

 

Lithuania’s fundamentals are solid: sustainable energy, efficient climate, strong connectivity, and a government that actually wants and is working to develop this business.

But it’s not a turnkey market – it’s a build-and-shape market.

For operators and investors willing to engage early, Lithuania offers first-mover advantage and cost-efficient positioning in the EU’s underdeveloped East-Nordic corridor.

It’s not at “Frankfurt-level” readiness, but that’s a good thing. AI is not going to slow down (at least in terms of infrastructure), and DCs are the requisite commodity for that development.

For many data-center developers and AI/capacity providers, the factors mentioned add up to a compelling site thesis: efficient cooling, renewable energy alignment, strong connectivity, and a business environment ready to compete.

If Lithuania continues its current trajectory, there is no doubt it will become a competitive and viable market. It’s now a matter of which investors are willing to take the risk, and are patient enough for the inevitable payoff.

 


How Disruptive is AI?
Examining the Narrative Behind the Hype


By: Olivia Arechiga, Co-Founder – Line Axia

(AI acknowledgement: this post was written by a human, me! I used ChatGPT-4 to fact check a few items, confirm sources, and to answer a few specific questions).

“People worry that computers will get too smart and take over the world. But the real problem is that they’re too stupid and they’ve already taken over the world.”

Pedro Domingos, computer science professor, University of Washington. 2015.

Artificial Intelligence (AI) has been hailed as a force of radical disruption.

We’ve all heard the fear-mongering and the siren songs: AI will take over the world, AI will replace humans, AI will solve and cause immense problems. All major tech CEOs have something profound to say on AI, and all seem to think they are right. From healthcare and finance to logistics, media, and legal services, nearly every industry is bracing for an up-ending transformation.

Just look at your LinkedIn feed – it seems every third post is about how AI solved X problem, or is someone spouting their opinion on AI, hoping you’ll think of them as an expert on the subject.

But as organizations rush to integrate large language models, predictive analytics, and workflow automation tools, it’s worth asking a more fundamental question: is AI truly “disruptive” and “revolutionary,” or is it just the next logical phase in digital evolution?

This post takes a measured look at what disruption really means in practice, how AI is unfolding in real organizations, and why leaders should temper both fear and hype with critical analysis.

First off, let’s define “Disruption”

For the sake of this post, we’re sticking with the definition used by Clayton Christensen, in his 1997 book “The Innovator’s Dilemma”.

In it, Christensen describes a specific and unique kind of market shift: the theory of disruptive innovation. Disruptive innovation, in this context, is innovation that creates a new value network, often entering at the bottom of an existing market, and eventually displacing established market-leading firms, products, and alliances.

In this sense, disruption is not about radical new technologies per se, but about business model displacement and transformation.

Through this lens, AI’s disruptive potential depends less on how technically advanced it is, and more on how it restructures value chains and redefines who captures the margin. So far then, it seems many AI use cases are powerful, but largely adjunctive, and not truly destabilizing.

Take a look at software developers. It’s not hard to see that AI can help software engineers and optimize their workflows, but not to the extent that it can replace them. Illustrating this point better than I can, GitClear’s founder has warned that AI tools are generating “AI-induced tech debt,” with code churn increasing and hastily added code creating substantial maintainability challenges.

Emily Bender (a noted AI skeptic and professor of computational linguistics at the University of Washington) wrote a piece in the Financial Times in June 2025 in which she called LLMs “stochastic parrots” and “plagiarism machines” lacking real understanding, arguing that the boom is built on misconceptions that mask their limits and costs.

But we know AI has had a direct effect on jobs. So who is actually being displaced? And is this a permanent change?

As with any “disruptive” technology, the process by which it integrates itself into society is messy, both in regulation and in social acceptance. The market effect is therefore inconsistent and often painful. The key is distilling what is real and lasting change, and what is the overreaction of tech leaders trying to temporarily improve their bottom line and stay ahead of the perceived curve.

The more significant disruptions we have seen are at the service layer, not the strategic core. A few examples:

  • Freelancers in design, content writing, and translation are already seeing pricing pressures as AI models produce similar content and products at no or very low cost.
  • Customer support outsourcing is being re-evaluated in light of 24/7 AI agents.
    • More on this later, but a growing number of businesses are looking at replacing human support teams with AI agents and bots.

In these sectors, AI may indeed be commodifying a segment of specialized labor. But in regulated, high-stakes, or relational contexts (healthcare, law, consulting, the public sector), AI tools remain assistants, not substitutes.

Additionally, the displacement effect, where it exists, is fragmented and uneven.

It seems then, what we’re talking about here is more evolutionary than revolutionary.

  • Routine cognitive tasks (data entry, document summarization) are vulnerable.
  • Manual and low-wage jobs in unpredictable environments (e.g., skilled care work, construction) remain largely resistant.
  • Highly skilled knowledge work (e.g., law, medicine) may become augmented, not displaced.

Why “Disruption” Isn’t Inevitable

First off, AI is not a monolith.

It encompasses an array of technologies: machine learning, natural language processing, computer vision, recommendation algorithms, and generative models, each with different maturity levels and use cases. Generative AI (the kind you use when you ask Claude or ChatGPT-5 to write your emails or create an image) is all the rage at the moment.

But most “AI” embedded in industry today is still narrow and purpose-built, automating specific tasks (e.g., fraud detection, predictive maintenance, customer support chatbots), not reinventing entire workflows or job categories.

It is, plainly, falling short of its promises.

Gartner’s “trough of disillusionment” comes to mind.

https://www.economist.com/business/2025/05/21/welcome-to-the-ai-trough-of-disillusionment

There are several high-profile examples of AI pilot programs stalling in what Gartner calls the “trough of disillusionment.” Are we currently sitting at the “Peak of Inflated Expectations,” or somewhere near it?

Think back to Humane, the company that launched a buzzy wearable AI device, the AI Pin, as a high-profile proof of concept. Within a year of launch, operations were halted. The product didn’t transform user interaction and failed to live up to expectations, illustrating a classic “nice demo, but no mass-market product” scenario.

A recent article by Live Mint highlighted that 42% of companies are abandoning most of their GenAI pilot projects, citing frustrated chief executives who feel the money has been spent without delivery.

And a 2024 report from MIT Sloan found that only 23% of firms using AI reported significant impact on core business models.

The questions now are: are these just the messy, contradictory market convulsions we tend to see with disruptive technologies, where hype and marketing don’t align with operability, scale, and function? Or is AI ever going to actually, substantially, and permanently disrupt our economies and workflows?

Labor Market Shifts Will Be Gradual, and Messy

The claim that AI will replace vast swaths of the workforce is not baseless, but it is definitely oversimplified.

Technological capability alone does not guarantee success. Just because you successfully integrated an AI chatbot into your stack to replace 75% of your call-center agents does not mean it will produce a net positive for your business. We’ve seen big changes and hasty layoffs as businesses scurry to cut costs and integrate fancy, exciting AI tools, but the reality is that AI is still pretty bad at most tasks. (Who here has had to work with an AI customer service agent? I have, and it’s been veritably terrible each and every time.) Those that integrate AI tools too early or too hastily get quick feedback from the market, in the form of consumer response.

We don’t need to look too far for a few notable examples:

  • Klarna – After replacing 800 customer service staff with AI chatbots, Klarna found that customer satisfaction dropped “significantly”. Klarna then reversed course, rehiring human agents to ensure customers “always have the option to speak with a real person”.
  • IBM – Replaced many HR roles with an AI bot (“AskHR”), but found the system failed to replicate human empathy and judgment. IBM then rehired staff to fill those gaps in their HR division.
  • Duolingo – Declared it would become “AI‑first” in a near-viral message that was all over social media, and laid off 10% of its contractor workforce, intending to use AI to replace contractors for translation, among other tasks. Within a week, it reversed course: CEO Luis von Ahn clarified they’d continue normal hiring, using AI as a tool to “assist rather than replace” employees.

But on the other hand, we see things like Amazon CEO Andy Jassy announcing that he expects the rise of generative AI to “reduce” Amazon’s corporate workforce over the coming years.

So what are we to think?

Well, tech giants have everything to gain in perpetuating the idea that AI is, and will continue to, disrupt our modern workforce as we know it.

Narrative = Valuation: Market history shows us publicly traded tech companies benefit when they’re seen as innovators. Framing AI as a transformative force boosts investor enthusiasm, driving up stock prices.

Capital Inflows: VC and institutional funding favor “AI-forward” firms. AWS, for example, can undoubtedly attract more clients and partners by emphasizing AI as the future.

Market Perception: If Amazon for example doesn’t talk about AI, it may look like it’s falling behind peers like Google, Microsoft, or Meta. Public statements maintain a perception of technological leadership – no matter how realistic, effective, or devastating those plans might turn out to be.

“Inevitable Future” Mentality: If workers, governments, and competitors believe that AI adoption is inevitable, it becomes a self-fulfilling prophecy. Dissenters look regressive or inefficient. Companies are afraid to be left behind. Talking about AI begets talking about AI.

Social License to Automate: Calling AI a “revolution” provides a certain level of moral and economic cover for hard-to-defend decisions… like replacing human roles.

In reality, though, there are still enormous, dynamic hurdles AI needs to clear before it can even be considered an actual revolution.

  • In healthcare, AI diagnostic tools face strict regulatory review, data residency laws, and major ethical scrutiny.
  • In legal practice, issues of privilege, authorship, and jurisdiction make full automation of legal reasoning unlikely in the near term.
  • In finance, auditability and explainability remain major barriers to black-box AI adoption.
  • Government regulation and compliance: the EU AI Act has been touted as the most aggressive and far-reaching regulation of AI to date. Yet it’s hitting big roadblocks, and it is a near-foregone conclusion that its implementation and compliance timelines will be delayed.

What we’re actually seeing instead is selective adoption: AI being used tactically, not structurally, in environments where trust and accountability still matter more than speed or cost.

What Leaders Should Actually Focus On

Rather than asking whether AI will disrupt their industry, organizations might get farther by asking:

  • Where does AI offer immediate value without compromising risk posture?
  • Which workflows can be streamlined without undermining human oversight?
  • How should governance evolve to integrate AI responsibly and strategically?
  • What talent and skills will be needed to translate AI outputs into real-world impact?

The true differentiator won’t be which company deploys the most AI tools most aggressively, but which ones can integrate them sustainably and meaningfully within their operational model and stakeholder context.

And it’s important to remember, disruption is not uniform. It is filtered through economics, regulation, unionization, ethical boundaries, and social context.

Final Thoughts

AI is significant. But disruption is not a foregone destiny. The most likely scenario is not an AI revolution, but an AI diffusion, a gradual, uneven, messy integration into the workflows of institutions that are already adapting to other transformations: remote work, data localization, ESG reporting, and platform consolidation.

For leaders, the real work lies not in reacting to headlines (or even stock prices), but in building the operational discipline to assess, adopt, and govern AI technologies in proportion to their actual impact.


Hypnotized by Complexity:
How Excess Systems and Processes Cloud Judgment

 

By: Olivia Arechiga, Co-Founder, Line Axia Consulting

AI disclaimer: As always, a human being wrote this article (me!). However, I used AI tools to fact-check and proofread.

 

“We struggle with the complexities and avoid the simplicities” – Norman Vincent Peale

In the never-ending pursuit of growth, compliance, and innovation, organizations naturally accumulate “layers” of systems and frameworks – in the name of efficiency, compliance, or even “the next best thing.”

Over time, these layers create a dense operating environment in which complexity is normalized – and even treated as virtuous.

This phenomenon, which we refer to as being “hypnotized by complexity,” is not just an operational challenge; it is a strategic liability.

This post explores the roots of organizational complexity, how it impairs decision-making, and why leaders must remain vigilant in disentangling the useful from the unnecessary and distracting.

 

Understanding the Nature of Complexity

Complexity in business is not inherently negative. In highly regulated sectors such as healthcare, financial services, or digital infrastructure, a certain degree of sophistication is simply non-negotiable.

However, the distinction between essential complexity and accidental or outdated complexity is often blurred (…or ignored).

Essential complexity refers to the inherent intricacy of the issues one is seeking to address.

Accidental complexity arises not from the inherent demands of a system, but from historical decisions, poor integration, siloed teams, or an over-reliance on one-size-fits-all tools.

This complexity persists because it is embedded in institutional habit or justified by legacy compliance concerns. It becomes baked into how the work is managed. And because it becomes familiar, it often goes unchallenged.

For lack of a better phrase, it’s a rock you don’t have time (and don’t want) to look under.

 

The Psychological Dimension: Decision Fatigue and Overload

Leaders navigating complex environments are bombarded by an overwhelming number of variables, including metrics, vendor options, approval gates, governance frameworks, and ever-changing compliance requirements. This creates decision fatigue, where the quality of judgment actually deteriorates over time due to cognitive overload.

A study published in Harvard Business Review outlines how even experienced decision-makers begin to rely more on heuristics and defaults under pressure, potentially undermining the quality of their choices (Harvard Business Review – “Beware the Busy Manager”).

Complexity of this kind often fosters a sort of procedural inertia, where decision-makers defer action or over-consult stakeholders, not because it adds value, but because the system implicitly demands it.

Over time, the result is a system that can (seemingly) only survive in its current complex, over-architected state. You have, in effect, been hypnotized into thinking this is required.

 

Common Drivers of Accidental Complexity

  1. Legacy Infrastructure and Policy
    Outdated tools, processes, or compliance interpretations are rarely (or never fully) decommissioned. Instead, new layers are added on top of the old, creating internal friction, excessive budgets, and a lack of clear understanding of what is actually needed.
  2. Functional Silos
    Highly specialized departments often build their own systems, taxonomies, and workflows, which complicates cross-functional integration.
  3. Over-responsiveness to Risk
    In regulated or litigation-sensitive industries, it’s common for compliance functions to actually over-correct, resulting in burdensome procedures that can outpace actual legal or regulatory requirements.
  4. Tool Sprawl
    The proliferation of platforms, especially in IT and healthtech, often leads to overlapping capabilities and poor system visibility, rather than better outcomes.

Organizational Consequences

The net effect of unmanaged accidental complexity is not simply inefficiency; it is strategic drift.

Decision-making slows. Coordination erodes. Stakeholders lose a shared understanding of priorities. This environment then rewards “fire-fighting” rather than long-term thinking. And that reward system makes it hard to see that you are actually operating in a fire-fighting system, not on a strategic, sustainable, future-proofed path.

Moving Toward Deliberate Simplicity

Reducing complexity is not about oversimplification or ignoring legitimate risk.

It is about applying design thinking, analytical rigor, and cross-functional cooperation to distinguish between what is essential and what is merely habitual.

A few guiding questions for teams:

  • What decisions consistently take longer than they should, and why?
  • Are there systems or processes no one fully understands, yet everyone still relies on?
  • Do our compliance mechanisms reflect today’s requirements, or yesterday’s fears?
  • Where are there repeated failures of coordination or accountability?
  • When was the last time there was a full audit of all IT tools and services?

Clarity, in this context, is not a communications goal; it is an organizational discipline.

Final Thoughts

Every organization carries complexity. The challenge is to recognize when that complexity has ceased to serve the mission, and instead obscures it.

In a marketplace where every product promises to solve your current pain point, it is easy to stay “hypnotized” by complexity. Getting out of this cycle and perspective is both difficult and, at times, seemingly counterintuitive. “Consciously decoupling” essential complexity from accidental complexity needs an expert outsider’s viewpoint and voice.

This is Line Axia’s mission and role: to break the trance of procedural inertia associated with overburdened systems and help businesses reclaim clarity – ensuring they invest in what truly adds value, and shed what does not.