EU Commission Climbs the Technological Sovereignty Wall – What Does It Mean for Your Business?

I’ve long advocated for open source as the best path forward for the EU — not just technically, but strategically, to cut reliance on non-EU tech giants and to avoid vendor lock-in.

The latest proof supporting this is the Commission’s fresh “Call for Evidence” on Towards European Open Digital Ecosystems (launched Jan 6, open until Feb 3, 2026). Open source isn’t optional anymore — it’s central to EU tech sovereignty, competitiveness, and cybersecurity.

The Commission’s “Call for Evidence” cites a key statistic supporting this strategic move: “Open source powers 70-90% of code in today’s digital world“.

Yet dependencies on non-EU solutions limit choice, innovation, and security. The EU wants to change that with sovereign alternatives in AI, cloud, cybersecurity, IoT, automotive, and more.

Why does this matter for businesses? The EU is actively seeking input on:

– Strengths/weaknesses of EU open source & main barriers

– Added value (cost, risk, lock-in, security, innovation—give examples!)

– Concrete EU measures (funding, partnerships, etc.)

– Priority tech areas & sectors for max impact

1️⃣ Open Source as a Strategic Asset

The EU is accelerating the adoption of open-source solutions across both public and private sectors — not just as a cost-effective alternative, but as a strategic enabler for innovation and control.

For example, organizations leveraging open-source platforms like Kubernetes for container orchestration gain not only flexibility and scalability but also reduced dependency on proprietary cloud providers, aligning perfectly with the EU’s sovereignty goals.

This means that if you aren’t evaluating open-source tools, you risk being “left behind” on proprietary solutions while the EU and global competitors move forward with open-source innovation. This isn’t just about keeping up; it’s about choosing sovereignty, resilience, and future-proofing over dependency and rigidity.

2️⃣ New Opportunities for IT & Process Optimization

The EU’s focus on cybersecurity, AI, and cloud means demand will increase for professionals who can implement, scale, and secure open-source solutions.

Knowledge of ITSM, infrastructure, and IT solutions will be critical as organizations transition to open-source ecosystems. With process optimisation being the key to successful adoption, now is a good time to deepen your knowledge.

3️⃣ Collaboration Over Competition

The EU is calling for public-private partnerships to scale open-source projects. This is a chance for businesses to co-create solutions that align with EU standards and values — transparency, interoperability, and resilience.

This means that organisations must consider how they can either contribute to or benefit from this collaborative ecosystem.

4️⃣ Reduced Dependency, Increased Control

Open source isn’t only about cost savings; it’s about ownership, security, and flexibility. The EU’s push for sovereignty means businesses that adopt open source will be better positioned to avoid vendor lock-in and adapt to evolving regulations.

Your Call to Action

The EU is listening — and this is your chance to influence the direction of open source in Europe. Whether you’re an IT leader, a developer, or a decision-maker, here’s how you can engage:

💡 Share your insights on the EU’s consultation: Link to Have Your Say portal

💡 Evaluate open-source tools for your next project — start small, but start now.

💡 Join the discussion — what barriers have you faced with open source? What opportunities do you see?

Points to ponder:

How is your organization preparing for this shift?

What open-source projects or tools are you exploring?

Do you see open source as a risk or an opportunity for your business?

#OpenSource #EUTechSovereignty #ITLeadership #DigitalTransformation #Cybersecurity #ProcessOptimization

Where did you cast your vote – and what will you do with your next digital investment?

The past week has been all about the local elections. For me – and probably for many of you – it has been a week full of dilemmas. Who has the best solutions? Who do you dare bet on? Where should the cross go? ❌

But the dilemmas don’t stop at the ballot box. In another part of my everyday life I meet exactly the same doubt when I talk to companies about their future:

“We want to move forward and strengthen our digital strategy, but how do we handle our digital sovereignty?”

That is the big question right now. We are drawn to the functionality and the easy access to new technology, but we fear losing control of our data and infrastructure.

For me, the answer to both dilemmas – in the voting booth and in the boardroom – is the same. It requires a structured process:

Pros and cons: An honest review of what we gain and what we risk losing.

🎯 The goal: What do we actually want to achieve? (Not just the technology, but the value.)

🗺️ The path there: Which concrete steps bring us closer to the goal without compromising our foundation?

Digital sovereignty is not necessarily about open source, buying EU, or building everything yourself. It is about making an informed choice about where you place your “cross” in the technology stack, about striking a good balance between pros and cons, and about having your “business contingency” under control.

Enjoy.

#Strategy #DigitalSovereignty #Leadership #TechDilemmas #KV25

AI in Production? Proceed with caution!

Over less than 24 hours, I had a chat with an AI assistant while trying to set up an application with security certificates. Through that conversation I experienced something you should pay attention to:

  • 9 times, I had to ask the assistant to follow a strict “step‑by‑step” process and to verify every step against current (online) documentation before acting.
  • Each time I asked it to use only one command, then verify success before moving on.
  • Repeatedly it failed to check against the latest documentation and drifted into wrong paths — non‑existing directories, missing demo certificates, wrong assumptions.
  • Because of this, the system ended up in a broken state and I had to purge everything and start over.
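The guardrail I kept asking the assistant for can be enforced in code rather than in prose. Here is a minimal sketch (the commands and verification checks are placeholders, not real certificate tooling) of the “one command, then verify, then continue” loop, in Python:

```python
import subprocess

def run_step(cmd, verify):
    """Run one command, then verify its effect before continuing.

    Both cmd and verify are argument lists; verify must exit 0
    only when the step really succeeded (e.g. a file now exists).
    """
    if subprocess.run(cmd, capture_output=True).returncode != 0:
        return False
    return subprocess.run(verify, capture_output=True).returncode == 0

def run_plan(steps):
    """Execute (cmd, verify) pairs one at a time; stop at the first failure.

    Returns 0 on full success, or the 1-based index of the failed
    step, leaving everything after it untouched.
    """
    for i, (cmd, verify) in enumerate(steps, start=1):
        if not run_step(cmd, verify):
            return i
    return 0
```

The point is that the plan halts at the first unverified step instead of drifting on broken assumptions, which is exactly what the assistant failed to do on its own.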

This matters: AI is powerful, but you have to consider whether your AI assistant is ready for unsupervised use in production environments. If you let it loose without human oversight — especially in infrastructure or security contexts — you may risk major failures.

Real‑world verified examples where AI went wrong

  • Replit Agent deletes a live production database: In a “vibe coding” experiment, the AI coding assistant ignored a code freeze, deleted a production database with thousands of records, and even misrepresented the event.
  • Business Insider article: “Replit’s CEO apologizes after its AI coding tool deleted a company database”
  • The Register coverage: “Replit deleted user’s production database … the AI agent ignored instruction”

What’s the lesson?

  • AI is not yet dependable for critical infrastructure or security‑sensitive tasks without human oversight.
  • Always include two safeguards: use AI as an assistant, not an autonomous operator, and keep a human reviewing every change.
  • Be especially careful in environments involving security, certificates, or infrastructure config — mistakes there are costly.

The Art of Making an Effort: Why Quality Matters

In a world that often prioritizes speed and convenience, the art of making an effort and delivering quality is becoming increasingly difficult. With the demand for instant gratification and quick solutions to pending matters, there seems to be little room for the slow, thorough, deliberate process required to achieve a high-quality long-term goal.

This doesn’t mean that high quality can only come from long-term solutions, but there is some truth to the project triangle (Time, Cost, Quality): if you change one value, the others change too. In an environment where you cannot pick all three, a choice has to be made between Time, Cost, and Quality — so which values do you choose?

Both as an employee and as a current freelancer, my attention has always been on quality, with a focus on small stepwise improvements while acknowledging the constraints of the project triangle, because I believe that prioritizing quality leads to significant long-term benefits.

Making an effort in the context of quality is very much a focus on continuous improvement, just as much as it is a genuine desire to create a result that is more than “good enough”. A quality focus is, or should be, a long-term aim: you begin with a baseline, or foundation, strong enough to carry all the small improvements that will be added one by one.

I once had the pleasure of having responsibility for a large system with a very large number of users.

On certain calendar days, the system was heavily utilized and, unfortunately, also very unstable.

Together with the vendor, a foundation was established, and small incremental improvements were added.

Within three months, the system was in such a stable condition that end-users noted it in their feedback, and the increased stability provided windows of opportunity that allowed further improvements to be implemented.

This method is also known as “Plan, Do, Check, Act” (PDCA): an improvement cycle of proposing a change, implementing it, measuring the results, and finally evaluating and taking appropriate action based on the learnings.
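As a rough illustration (the function and metric names here are hypothetical, not from any framework), one PDCA iteration over a single proposed improvement can be sketched as:

```python
def pdca_iteration(baseline, propose, apply_change, measure, rollback):
    """One Plan-Do-Check-Act cycle over a single proposed change.

    propose()        -> Plan: suggest a change
    apply_change(c)  -> Do: put the change in place
    measure()        -> Check: return the new metric (higher is better)
    rollback(c)      -> Act (negative case): revert the change

    Returns (metric, adopted) so the caller can chain iterations.
    """
    change = propose()                 # Plan
    apply_change(change)               # Do
    new_metric = measure()             # Check
    if new_metric > baseline:          # Act: keep genuine improvements
        return new_metric, True
    rollback(change)                   # Act: revert and learn
    return baseline, False
```

Each accepted change raises the baseline slightly, which is exactly the “small incremental improvements on a solid foundation” pattern described above.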

Within IT, where constraints are everywhere, there are often demands for a fast and speedy solution, which makes thorough planning difficult.

This is, of course, a challenge given the project triangle, but there are very few shortcuts to be made in this context, because there are dependencies we have to respect. Just like the farmer and his field: we can’t skip sowing if we want to harvest.

And, while it may seem counter-intuitive to invest in quality, spending the time and cost up front often leads to significant long-term benefits: reduced rework, increased efficiency, and improved user satisfaction lower long-term cost and time, which in turn means faster time to market and improved project economy.

For me, quality has always been a primary target.

Generalist vs. specialist?

A long time ago, I read an article by Daniel Irvine: “A generalist is born when a specialist becomes bored”, and I immediately understood the generalist vs. specialist issue he is addressing — even though I’m not a developer like Daniel.

Daniel has one view of when a specialist becomes a generalist, and I have another view of how and when it can happen – but the conclusive arguments are pretty much the same.

Around 2001 I was installing and configuring Linux on Mainframe and various other open-source solutions across all IBM hardware platforms, and Linux became my “specialist subject”.

Because only a few of us were working with Linux/open source, we were asked to implement complete solutions, which meant that we configured databases, web servers, did some programming, configured networks, and so on.

It wasn’t really a question of being a specialist focused on one technology, e.g. databases; it was more a question of being able to deliver a working solution to customers — and perhaps of being specialised in a subject as large as open source.

Failure wasn’t really an option, so if there was any lack of knowledge, you just had to find out and share your findings with your peers.

You needed to be curious to succeed, and if you are curious you will perhaps also want to know more about the input to your tasks and what happens to their output — the bigger picture. That is exactly what happened during my specialist role.

And what better way to gain such knowledge than to get involved: I produced one-pagers for the customer engagement teams with information for the customer sessions. Soon after, I moved to the Architect team and eventually ended up as Technical Project Manager.

One of Daniel’s arguments is: “A generalist was a specialist at some point. A generalist is born when a specialist becomes bored.” Meaning: if a specialist gets bored with the current technology or tasks, he or she will start looking for something interesting and will soon be on the way to becoming a generalist.

Perhaps out of interest in another technology, but also because the specialist has reached a point where very few subjects or features remain to fuel the appetite — and when that happens, the specialist is on the way to a generalist role.

The question, then, is whether it is good or bad to be a specialist or a generalist. Which is better?

To me, a specialist is like a wheel man on an F1 pit crew. But even in F1, which is highly specialised, there are backups (and generalist wheel guys) to ensure that the solution works (the car keeps running).

To quote Daniel Irvine: “beyond specialism lies generalism”.