The Danger of a Swiss Government Cloud That Is Already Outdated on Completion

Note: This post reflects solely my personal opinion and assessment. It is based on publicly available information as well as my own analyses and experience. It does not represent the official position or opinion of my employer.

Switzerland is facing one of the largest IT migration projects in its history: the move from today's federal platforms to the new Swiss Government Cloud (SGC). A mammoth undertaking, spanning dozens of federal offices, thousands of applications, petabytes of data, and a multi-year implementation between 2025 and 2032.

On paper, this is our chance to build something more modern, more innovative, and more stable, with better performance and higher resilience. But this is where my biggest fear lies:

We could spend the next several years building a one-to-one copy of today's platform.

The result would be an SGC that is technically new but already aging before we even get to modernize the applications running on it.

The Comfort of “Like-for-Like”

Large IT projects often take the safe route: technically refresh old systems, but don't change them. That minimizes risk, reduces resistance, and keeps operations stable.

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_1/lvl_1.2 

The Federal Office of Information Technology, Systems and Telecommunication (BIT) must guarantee the stability of critical services such as tax systems, registers, and security platforms throughout the entire migration. The “safe path” is understandable, but it is also dangerous: it can easily amount to parking the same old vehicle fleet in a brand-new garage.

What is already clear is that the project is behind schedule.

The Migration Trap

The planned step-by-step migration in waves ensures operational safety: move first, then stabilize, then shift the next block. But during that time, the new platform already starts to age: hardware needs refreshing, software needs updating, and security gaps need closing. Meanwhile, technological progress marches on relentlessly.

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.3 

The risk? By the time all applications finally run on the SGC and modernization can at last be considered, the next platform refresh is already due.

It is like building a new motorway that takes ten years to shift the traffic over: when the last convoy arrives, the asphalt is already cracked.

Why Modernization Must Happen During the Migration

It is not comfortable, but it is necessary: migration and modernization should go hand in hand. Especially in a replatforming, modernization takes time and carries certain risks, but the alternative is simply moving old problems into a new home.

The better approach would be not merely to move applications, but to modernize them partially or fully during, or shortly after, the migration. That way, value is created from day one.

The advantages are obvious:

  • No legacy baggage carried over -> outdated structures are left behind.

  • New capabilities used immediately -> automation, elastic scaling, and modern security frameworks are built in from the start.

  • Real efficiency gains -> instead of filling an empty “cloud building” with inefficient legacy, only what fits the future moves in. Naturally, there are some exceptions that cannot (or can no longer) be modernized.

The Danger of Political Cycles

Political decision cycles are short, often just one legislative period or one budget year. Technology lifecycles, by contrast, are long. A platform like the Swiss Government Cloud is built, operated, and evolved over many years.

The problem: if the first project years are spent almost exclusively on building the infrastructure and on a “safe migration”, with no visible changes to the applications, politicians and the public will see no tangible progress after two or three years.

Perception could then flip quickly: “A lot of money spent, and nothing has improved.”

The consequences are predictable: political support crumbles, budgets get cut, and an ambitious innovation project turns into a pure operations project that merely administers the status quo.

To avoid that, visible improvements are needed in the very first years: new services, noticeably better performance, more efficient processes. Only then does the Swiss Government Cloud remain politically viable and technologically future-proof.

Innovation as a Parallel Track, Not as Phase 2

From day one, the SGC must offer capabilities that enable real change:

  • Self-service environments for developers

  • Integrated security services

  • Data platforms for analytics and AI

  • Interface standards for the Confederation, cantons, and municipalities

Waiting until “everything is stable” before innovating really means standstill.

Do We Still Need the Tier Model?

Today, the federal administration's cloud tier model structures its world into tiers I to III (tier IV sits with the EJPD, and tier V is the NDP at the Kommando Cyber): from public cloud through sensitive workloads to highly secure private cloud environments. Historically, this made sense, because different requirements often called for different providers and technologies.

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.2/lvl_2.2.1 

The BIT currently estimates that between 2027 and 2032, around 70% of workloads could move to the public cloud. That matches today's trend of consuming as many services as possible flexibly and scalably from the public cloud.

Figure: SGC percentage distribution of cloud tier usage

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.4 

A look at the official program expenditures through 2032, however, shows a distinctly different picture: of roughly 120 million CHF in total for building the hybrid multi-cloud infrastructure, 108.5 million CHF (about 90%) is earmarked for Private Cloud On-Prem. A mere 7.1 million CHF (about 6%) is planned for Public Cloud, and just 4.5 million CHF (about 4%) for Public Cloud On-Prem.

Figure: SGC program expenditures

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_3/lvl_3.1/lvl_3.1.1/lvl_3.1.1.1 

That is a massive emphasis on the private, self-operated platform, and it suggests that the actual implementation will lean heavily toward private cloud, far from the 70% public cloud assumption.
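As a quick sanity check, the cited shares can be reproduced from the raw figures. A minimal Python sketch, using only the numbers from the dispatch above; rounding explains why the shares do not add up to exactly 100%:

```python
# Sanity check of the SGC infrastructure budget split cited above.
# Figures in millions of CHF, as quoted from the dispatch; shares rounded.
budget_mio_chf = {
    "Private Cloud On-Prem": 108.5,
    "Public Cloud": 7.1,
    "Public Cloud On-Prem": 4.5,
}

total = sum(budget_mio_chf.values())  # ~120.1 Mio. CHF
for tier, amount in budget_mio_chf.items():
    print(f"{tier}: {amount:>6.1f} Mio. CHF ({amount / total:.0%})")
# Private Cloud On-Prem:  108.5 Mio. CHF (90%)
# Public Cloud:             7.1 Mio. CHF (6%)
# Public Cloud On-Prem:     4.5 Mio. CHF (4%)
```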

Now suppose that, owing to geopolitical discussions and changed framework conditions, the ratio is actually meant to invert: 70% private cloud (tier III) and only 30% public cloud (tiers I & II). In that scenario, today's business applications might first be migrated to the new tier III platform, only to be moved a second time into the public cloud later. That would be double the work, with double the cost, longer project timelines, and unnecessary interruptions for the affected services.

If, however, the SGC is able to cover all requirements, from IaaS to SaaS, from highly critical to non-critical workloads, within one integrated platform, this repeated migration could be avoided. That would not only reduce complexity but also increase the flexibility to shift workloads between operating models at any time, without having to relocate them completely yet again.

The Federal Cloud Landscape and the Limits of Diversity

Within the BIT, it is known that the private cloud relies predominantly on VMware-based technologies. At the same time, the WTO20007 agreement allows service consumers to use services from large public cloud providers such as AWS, Azure, Oracle, IBM, and Alibaba Cloud.

In theory, up to six different cloud stacks could be operated in parallel. In practice, that is hard to imagine. Even today, it is a challenge to run a single public cloud efficiently alongside a complex private cloud, with different operating models, interfaces, security policies, and billing models. Multiply that by six, and the situation quickly becomes operationally unmanageable.

It is therefore realistic to assume that the federal government will concentrate on at most two to three cloud providers. More providers do not automatically mean more security or flexibility. On the contrary: they increase complexity, create additional dependencies, and demand an enormous amount of specialized skills that even a large federal office like the BIT can hardly maintain in sufficient depth.

The ideal scenario would be a tier model that is simplified or even abolished, because there are market-available platforms that can cover all three tiers at once: from public cloud through public cloud on-prem to private cloud on-prem. In that case, the BIT would only need to work with a single cloud provider.

Remark: During the migration phase there is naturally an overlap, so temporarily there would be two cloud providers (in the federal government's own data centers).

The Opportunity We Must Not Miss

The Swiss Government Cloud is a unique opportunity to raise the federal administration's digital infrastructure to a new level.

That means:

  • Replacing old, fragile applications with modular, cloud-native solutions

  • Implementing security and compliance consistently

  • Embedding AI, automation, and real-time data into operations

  • Making IT more responsive and more crisis-resilient

If we see the SGC merely as a “new home” for existing systems, that is exactly what we will get. And by the time we buy the “new furniture”, the roof will already need renovating.

My Biggest Fear

By 2032, Switzerland could own a sovereign cloud platform that is technically solid, legally secured, and fully under Swiss control, yet still delivers the same isolated services as today.

The (eventually) forthcoming tender and the years ahead will decide whether the Swiss Government Cloud becomes a foundation for real progress or a monument to missed opportunities.

Read part two here: Cloud Concentration Risk in the Federal Government: Diversity or Mere Pseudo-Diversity?

The Cloud Isn’t Eating Everything. And That’s a Good Thing

A growing number of experts warn that governments and enterprises are “digitally colonized” by U.S. cloud giants. A provocative claim and a partial truth. It’s an emotionally charged view, and while it raises valid concerns around sovereignty and strategic autonomy, it misses the full picture.

Because here’s the thing. Some (if not most) workloads in enterprise and public sector IT environments are still hosted on-premises. This isn’t due to resistance or stagnation. It’s the result of deliberate decisions made by informed IT leaders. Leaders who understand their business, compliance landscape, operational risks, and technical goals.

We are no longer living in a world where the public cloud is the default. We are living in a world where “cloud” is a choice and is used strategically. This is not failure. It’s maturity.

A decade ago, “cloud-first” was often a mandate. CIOs and IT strategists were encouraged, sometimes pressured, to move as much as possible to the public cloud. It was seen as the only way forward. The public cloud was marketed as cheaper, faster, and more innovative by default.

But that narrative didn’t survive contact with reality. As migrations progressed, enterprises quickly discovered that not every workload belongs in the cloud. The benefits were real, but so were the costs, complexities, and trade-offs.

Today, most organizations operate with a much more nuanced perspective. They take the time to evaluate each application or service based on its characteristics. Questions like: Is this workload latency-sensitive? What are the data sovereignty requirements? Can we justify the ongoing operational cost at scale? Is this application cloud-native or tightly coupled to legacy infrastructure? What are the application’s dependencies?

This is what maturity looks like. It’s not about saying “yes” or “no” to the cloud in general. It’s about using the right tool for the right job. Public cloud remains an incredibly powerful option. But it is no longer a one-size-fits-all solution. And that’s a good thing.
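To make that evaluation tangible, here is a deliberately simplistic sketch of such a workload assessment. The criteria, decision rules, and example workload are illustrative assumptions, not an established framework:

```python
# Illustrative only: a toy decision aid mirroring the assessment questions
# above. Criteria and rules are assumptions, not an official methodology.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool        # must run close to users or machines?
    sovereignty_constrained: bool  # data must stay within jurisdiction?
    cloud_native: bool             # loosely coupled, container-friendly?
    legacy_coupled: bool           # tied to legacy infrastructure?

def placement_hint(w: Workload) -> str:
    """Suggest a starting point for the placement discussion."""
    if w.sovereignty_constrained:
        return "on-premises or sovereign region"
    if w.cloud_native and not w.legacy_coupled:
        return "public cloud candidate"
    if w.latency_sensitive or w.legacy_coupled:
        return "hybrid: assess dependencies case by case"
    return "either works: decide on cost and operations"

erp = Workload("erp-core", latency_sensitive=True,
               sovereignty_constrained=True, cloud_native=False,
               legacy_coupled=True)
print(erp.name, "->", placement_hint(erp))
# erp-core -> on-premises or sovereign region
```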

On-Premises Infrastructure Is Still Valid

There is this persistent myth that running your own datacenter, or even part of your infrastructure, is a sign that you are lagging behind. That if you are not in the cloud, you are missing out on agility, speed, and innovation. That view simply doesn’t hold up.

In reality, on-premises infrastructure is still a valid, modern, and strategic choice for many enterprises, especially in regulated industries like healthcare, finance, manufacturing, and public services. These sectors often have clear, non-negotiable requirements around data locality, compliance, and performance. In many of these cases, operating infrastructure locally is not just acceptable. It’s the best option available.

Modern on-prem environments are nothing like the datacenters of the past. Thanks to advancements in software-defined infrastructure, automation, and platform engineering, on-prem can offer many of the same cloud-like capabilities: self-service provisioning, infrastructure-as-code, and full-stack observability. When properly built and maintained, on-prem can be just as agile as the public cloud.

That said, it’s important to acknowledge a key difference. While private infrastructure gives you full control, it can take longer to introduce new services and capabilities. You are not tapping into a global marketplace of pre-integrated services and APIs like you would with Oracle Cloud or Microsoft Azure. You are depending on your internal teams to evaluate, integrate, and manage each new component.

And that’s totally fine, if your CIO’s focus is stability, compliance, and predictable innovation cycles. For many organizations, that’s (still) exactly what’s needed. But if your business thrives on emerging technologies, needs instant access to the latest AI or analytics platforms, or depends on rapid go-to-market execution, then public cloud innovation cycles might offer an advantage that’s hard to replicate internally.

Every Enterprise Can Still Build Their Own Data Center Stack

It’s easy to assume that the era of enterprises building and running their own cloud-like platforms is over. After all, hyperscalers move faster, operate at massive scale (think about the thousands of engineers and product managers), and offer integrated services that are hard to match. For many organizations, especially those lacking deep infrastructure expertise or working with limited budgets, the public cloud is the most practical and cost-effective option.

But that doesn’t mean enterprises can’t or shouldn’t build their own platforms, especially when they have strong reasons to do so. Many still do, and do it effectively. With the right people, architecture, and operational discipline, it’s entirely possible to build private or hybrid environments that are tailored, secure, and strategically aligned.

The point isn’t to compete with hyperscalers on scale; it’s to focus on fit. Enterprises that understand their workloads, compliance requirements, and business goals can create infrastructure that’s more focused and more integrated with their internal systems.

Yes, private platforms may evolve more slowly. They may require more upfront investment and long-term commitment. But in return, they offer control, transparency, and alignment. Advantages that can outweigh speed in the right contexts!

And critically, the tooling has matured. Today’s internal platforms aren’t legacy silos but are built with the same modern engineering principles: Kubernetes, GitOps, telemetry, CI/CD, and self-service automation.
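As a minimal illustration of the GitOps principle mentioned above: the desired state is declared in version control, and a controller loop continuously reconciles the live environment toward it. The sketch below is schematic plain Python, not a real controller:

```python
# Schematic GitOps-style reconciliation: desired state is declared (in Git),
# a controller converges the actual state toward it. No real tool involved.
desired_replicas = {"web": 3, "api": 2, "worker": 1}   # declared in Git
actual_replicas = {"web": 3, "api": 1}                 # currently running

def reconcile(desired: dict, actual: dict) -> None:
    """One reconciliation pass: converge actual toward desired."""
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have != want:
            print(f"scaling {service}: {have} -> {want}")
            actual[service] = want   # stand-in for a platform API call

reconcile(desired_replicas, actual_replicas)
# scaling api: 1 -> 2
# scaling worker: 0 -> 1
```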

Note: If a customer wants the best of both worlds, there are options like OCI Dedicated Region.

The Right to Choose the Right Cloud

One of the most important shifts we are seeing in enterprise IT is the move away from single-platform thinking. No one-size-fits-all platform exists. And that’s precisely why the right to choose the right cloud matters.

Public cloud makes sense in many scenarios. Organizations might choose Azure because of its tight integration with Microsoft tools. They might select Oracle Cloud for better pricing or AI capabilities. At the same time, they continue to operate significant workloads on-premises, either by design or necessity.

This is the real world of enterprise IT: mixed environments, tailored solutions, and pragmatic trade-offs. These aren’t poor decisions or “technical debt”. Often, they are deliberate architectural choices made with a full understanding of the business and operational landscape. 

What matters most is flexibility. Organizations need the freedom to match workloads to the environments that best support them, without being boxed in by ideology, procurement bias, or compliance roadblocks. And that flexibility is what enables long-term resilience.

What the Cloud Landscape Actually Looks Like

Step into any enterprise IT environment today, and you will find a blend of technologies, platforms, and operational models. And the mix varies based on geography, industry, compliance rules, and historical investments.

The actual landscape is not black or white. It’s a continuum of choices. Some services live in hyperscale clouds. Others are hosted in sovereign, regional datacenters. Many still run in private infrastructure owned and operated by the organization itself.

This hybrid approach isn’t messy. It’s intentional and reflects the complexity of enterprise IT and the need to balance agility with governance, innovation with stability, and cost with performance.

What defines modern IT today is the operating model. The cloud is not a place. It’s a way of working. Whether your infrastructure is on-prem, in the public cloud, or somewhere in between, the key is how it’s automated, how it’s managed, how it integrates with developers and operations, and how it evolves with the business.

Conclusion: Strategy Over Hype – And Over Emotion

There’s no universal right or wrong when it comes to cloud strategy. Only what works for your organization based on risk, requirements, talent, and timelines. But we also can’t ignore the reality of the current market landscape.

Today, U.S. hyperscalers control over 70% of the European cloud market. Across infrastructure layers like compute, storage, networking, and software stacks, Europe’s digital economy relies on U.S. technologies for 85 to 90% of its foundational capabilities. 

But these numbers didn’t appear out of nowhere.

Let’s be honest: it’s not the fault of hyperscalers that enterprises and public sector organizations chose to adopt their platforms. Those were decisions made by people – CIOs, procurement teams, IT strategists – driven by valid business goals: faster time-to-market, access to innovation, cost modeling, availability of talent, or vendor consolidation.

These choices might deserve reevaluation, yes. But they don’t deserve emotional blame.

We need to stop framing the conversation as if U.S. cloud providers “stole” the European market. That kind of narrative doesn’t help anyone. The reality is more complex and far more human. Companies chose platforms that delivered, and hyperscalers were ready with the talent, services, and vision to meet that demand.

If we want alternatives, if we want European options to succeed, we need to stop shouting at the players and start changing the rules of the game. That means building competitive offerings, investing in skills, aligning regulation with innovation, and making sovereignty a business advantage, not just a political talking point.

Sovereign Clouds and the VMware Earthquake: Dependency Isn’t Just a Hyperscaler Problem

The concept of “sovereign cloud” has been making waves across Europe and beyond. Politicians talk about it. Regulators push for it. Enterprises (re-)evaluate it. On the surface, it sounds like a logical evolution: regain control, keep data within national borders, reduce exposure to foreign jurisdictions, and while you are at it, maybe finally break free from the gravitational pull of the U.S. hyperscalers.

After all, hyperscaler dependency is seen as the big bad wolf. If your workloads live in AWS, Azure, Google Cloud or Oracle Cloud Infrastructure, you are automatically exposed to price increases, data sovereignty concerns, U.S. legal reach (hello, CLOUD Act), and a sense of vendor lock-in that seems harder to escape with every commit to infrastructure-as-code.

So, the solution appears simple: go local, go sovereign, go safe.

But if only it were that easy.

The truth is: sovereignty isn’t something you can just buy off the shelf. It’s not a matter of switching cloud logos or picking the provider that wraps their marketing in your national flag. Because even within your own datacenter, even with platforms that have long been considered “sovereign” and independent, the same risks apply.

The best example? VMware.

What happened in the VMware ecosystem over the past year should be a wake-up call for anyone who thinks sovereignty equals control. Because, as we have now seen, control can vanish. Fast. Very fast.

VMware’s Rapid Fall from Grace

Take VMware. For years, it was the go-to platform for building secure, sovereign private clouds. Whether in your own datacenter or hosted by a trusted service provider in your region, VMware felt like the safe, stable choice. No vendor lock-in (allegedly), no forced cloud-native rearchitecture, and full control over your workloads. Rainbows, unicorns, and that warm fuzzy feeling of sovereignty.

Then came the Broadcom acquisition, and with it, a cold splash of reality.

Image: As Hock Tan, Broadcom's President and CEO, shared in the General Session at VMware Explore Barcelona, European customers want control over their data and processes. (Source: Broadcom)

Practically overnight, prices shot up. In some cases, more than doubled. Features were suddenly stripped out or repackaged into higher-priced bundles. Longstanding partner agreements were shaken, if not broken. Products disappeared or were drastically repositioned. Customers and partners were caught off guard. Not just by the changes, but by how quickly they hit.

And just like that, a platform once seen as a cornerstone of sovereign IT became a textbook example of how fragile that sovereignty really is.

Sovereignty Alone Doesn’t Save You

The VMware story exposes a hard truth: so-called “sovereign” infrastructure isn’t immune to disruption. Many assume risk only lives in the public cloud under the branding of AWS, Azure, or Oracle Cloud. But in reality, the triggers for a “cloud exit” or forced platform shift can be found anywhere. Also on-premises!

A sudden licensing change. An unexpected acquisition. A new product strategy that leaves your current setup stranded. None of these things care whether your workloads are in a public cloud region or a private rack in your basement. Dependency is dependency, and it doesn’t always come with a hyperscaler logo.

It’s Not About Picking the Right Vendor. It’s About Being Ready for the Wrong One.

That’s why sovereignty, in the real world, isn’t something you just buy. It’s something you design for.

Note: Some hyperscalers now offer “sovereign by design” solutions but even these require deeper architectural thinking.

Sure, a Greenfield build on a sovereign cloud stack sounds great. Fresh start, full control, compliance checkboxes all ticked. But the reality for most organizations is very different. They have already invested years into specific platforms, tools, and partnerships. There are skill gaps, legacy systems, ongoing projects, and plenty of inertia. Ripping it all out for the sake of “clean” sovereignty just isn’t feasible.

That’s what makes architecture, flexibility, and diversification so critical. A truly resilient IT strategy isn’t just about where your data lives or which vendor’s sticker is on the server. It’s about being ready (structurally, operationally, and contractually) for things to change.

Because they will change.

Open Source ≠ Sovereign by Default

Spoiler: Open source won’t save you either

Let’s address another popular idea in the sovereignty debate. The belief that open source is the magic solution. The holy grail. The thinking goes: “If it’s open, it’s sovereign”. You have the source code, you can run it anywhere, tweak it however you like, and you are free from vendor lock-in. Yeah, right.

Sounds great. But in practice? It’s not that simple.

Yes, open source can enable sovereignty, but it doesn’t guarantee it. Just because something is open doesn’t mean it’s free of risk. Most open-source projects rely on a global contributor base, and many are still controlled, governed, or heavily influenced by large commercial vendors – often headquartered in the same jurisdictions we are supposedly trying to avoid. Yes, that’s good and bad at the same time, isn’t it?

And let’s be honest: having the source code doesn’t mean you suddenly have a DevOps army to maintain it, secure it, patch it, integrate it, scale it, monitor it, and support it 24/7. In most cases, you will need commercial support, managed services, or skilled specialists. And with that, new dependencies emerge.

So what have you really achieved? Did you eliminate risk or just shift it?

Open source is a fantastic ingredient in a sovereign architecture – in any cloud architecture. But it’s not a silver bullet.

Behind the Curtain – Complexity, Not Simplicity

From the outside, especially for non-IT people, the sovereign cloud debate can look like a clear binary: US hyperscaler = risky, local provider = safe. But behind the curtain, it’s much more nuanced. You are dealing with a web of relationships, existing contracts, integrated platforms, and real-world limitations.

The Broadcom-VMware shake-up was a loud and very public reminder that disruption can come from any direction. Even the platforms we thought were untouchable can suddenly become liabilities.

So the question isn’t: “How do we go sovereign?”

It’s: “How do we stay in control, no matter what happens?”

That’s the real sovereignty.

Oracle’s EU Sovereign Cloud Is Real. AWS’s Is Still a Roadmap

The digital sovereignty debate in Europe is evolving fast. As data privacy regulations tighten and public sector requirements become more explicit, the race among hyperscalers to deliver truly sovereign infrastructure has entered a new chapter. AWS’s recent unveiling of its European Sovereign Cloud, set to arrive in late 2025, has generated considerable attention. But when it comes to choosing a sovereign cloud today that meets regulatory, operational, and architectural requirements, Oracle Cloud Infrastructure (OCI) is not only ahead, it has already been delivering for two years.

Note: The German version of this article can be found here.

Not Just a Data Center in Europe

Many cloud providers claim “sovereignty” by operating a data center in the EU. But true sovereignty extends beyond location. It encompasses who operates the infrastructure, who can access the data, how services are isolated, and how control is governed.

This is where Oracle has drawn a clear line in the sand.

Oracle’s EU Sovereign Cloud, launched in 2023, was designed specifically to meet the stringent legal, operational, and security requirements of European governments and regulated industries. It doesn’t simply retrofit an existing model but delivers a fully isolated cloud realm, physically and logically separated from Oracle’s global infrastructure, governed by EU laws, and operated exclusively by EU personnel.

A graphic depicting OCI realms with separation.

AWS, by contrast, has announced a sovereign model with similar goals but it’s still under development. The first region, located in Brandenburg, Germany, won’t be operational until late 2025, and much remains to be proven about how it will be implemented, governed, and audited.

Oracle is Shipping, AWS is Promising

Let’s be clear: AWS’s European Sovereign Cloud announcement is comprehensive and well-articulated. It lays out a future where services will be operated by EU-based subsidiaries, under EU laws, and with controls in place to maintain data independence from AWS’s global infrastructure. Their governance structure even includes an independent advisory board and EU-based trust services.

But for CIOs and CTOs making infrastructure decisions today, those promises offer little operational value.

Oracle’s two sovereign regions (Frankfurt and Madrid) are already live and serving customers. These regions are:

  • Controlled by separate legal entities based in the EU.

  • Operated by EU-resident staff with no external or global personnel access.

  • Physically and logically isolated from Oracle’s global commercial cloud, including separate networking, control planes, and identity services.

  • Offering identical services, SLAs, and pricing to Oracle’s standard OCI regions without sovereignty surcharges or trade-offs.

This level of readiness provides certainty. For public sector agencies, financial services institutions, healthcare providers, and others operating under GDPR or national sovereignty laws, Oracle’s offering is deployable today and auditable under real-world conditions.

Governance and Transparency – Built-In, Not Promised

AWS has made bold commitments around its future governance model. Its European cloud will be operated by German-incorporated subsidiaries, employ EU-resident personnel, and adhere to a Sovereign Requirements Framework (SRF) backed by independent audits. These measures are vital, and if delivered as described, they will represent a meaningful step forward in how cloud sovereignty is implemented.

However, the keyword is “if”. At this stage, AWS is still laying the foundation, and the structure – however promising – remains untested.

Oracle, on the other hand, has already passed this test. Its governance model is active today. Customers have full audit visibility, complete operational transparency, and confidence that their data never leaves the EU, either technically or legally. Oracle’s setup has passed certifications including SOC 1, 2, and 3, CSA STAR, PCI, HIPAA, C5, HDS, ENS, and ISO 9001, 20000-1, 27001, 27017, 27018, and 27701. External key management (including customer-controlled keys outside of Oracle’s access) further strengthens the platform’s trust envelope.

Service Parity Without Sovereignty Tax

A common concern with sovereign clouds is the trade-off in features and performance. AWS says it plans to deliver full service parity with its global cloud, but again, that’s a roadmap, not a guarantee.

Oracle’s sovereign cloud offers over 150 OCI services – from autonomous databases to Kubernetes, from serverless functions to AI/ML tooling – without compromise. Pricing remains consistent with OCI’s commercial regions. There’s no premium, no second-tier treatment, and no degraded performance due to isolation. Sovereignty isn’t an upsell; it’s an expectation.

Isolation That’s Architectural, Not Just Geographic

Oracle’s architecture reflects a deep understanding that sovereignty is a technical state, not just a geographic one. Its sovereign cloud is:

  • Part of a distinct cloud realm, meaning no shared control plane, no global peering, and no cross-realm data leakage.

  • Accessible via FastConnect or VPN, with inter-region replication supported over a dedicated sovereign backbone.

  • Designed for infrastructure resilience, with separate fault domains and the ability to replicate workloads across regions while staying within the EU realm.

AWS has pledged to build similar isolation into its new cloud, but the full details, and whether it can match Oracle’s realm-level segmentation, remain unclear.

VMware in a Sovereign Cloud? Oracle Makes It Possible Today

One of the biggest challenges for organizations with deeply integrated VMware environments is finding a sovereign cloud that allows for seamless migration without rearchitecture. Oracle Cloud VMware Solution (OCVS) delivers precisely that. And it’s available within Oracle’s EU Sovereign Cloud, a capability unmatched by other hyperscalers at this time.

OCVS is a fully customer-managed VMware environment running on dedicated bare-metal infrastructure inside Oracle Cloud. It includes VMware Cloud Foundation (VCF) and HCX – all running natively, with full control and administrative access maintained by the customer.

In the context of data sovereignty, OCVS offers distinct advantages:

  • Runs inside Oracle’s isolated sovereign realm – ensuring that your VMware workloads remain within the EU, under EU jurisdiction, and operated only by EU-resident staff.

  • No dependency on shared control planes or global services, which means your VMware environment is as isolated and sovereign as the underlying cloud infrastructure.

  • No need to retrain teams or re-platform applications – existing tools, automation, and skill sets transfer directly.

For organizations planning sovereign migration strategies, OCVS provides a low-friction, high-control path to the cloud, while ensuring compliance and other sovereignty mandates. It’s particularly appealing for highly regulated sectors such as government, banking, insurance, and critical infrastructure where both operational continuity and auditability are essential.

Oracle Compute Cloud@Customer – Now with EU Sovereign Operations

Oracle has extended its EU Sovereign Cloud model to Compute Cloud@Customer (C3), bringing cloud infrastructure and EU governance directly into your data center. This is a game-changer for organizations with strict data residency and control requirements that cannot move workloads to the public cloud.

The updated C3 model is now:

  • Deployed on-premises

  • Managed only by EU-based personnel

  • Operated entirely under EU jurisdiction

No global cloud involvement. No shared control planes. Just full-stack OCI services, physically hosted on your site and governed like the sovereign cloud regions in Frankfurt and Madrid.

For public sector bodies, critical infrastructure, or industries like defense and healthcare, this means deploying modern cloud infrastructure without compromise.

Oracle Compute Cloud@Customer with EU Sovereign operations closes the gap between private cloud and true sovereignty. That’s something only Oracle can offer at the moment.

Note: An air-gapped version of C3 is also available now.

Conclusion – The Time to Act is Now, and Oracle is Ready

AWS’s European Sovereign Cloud will be an important development when it arrives. But for European organizations operating under strict data localization and control mandates, the ability to deploy, scale, and audit sovereign infrastructure today is critical.

Oracle’s EU Sovereign Cloud is here, certified, compliant, and production-ready. It aligns with the reality of European data sovereignty.

For CIOs and CTOs, the choice is between planning for the future or executing in the present. In this case, the best strategic move is to choose a provider that isn’t just promising sovereignty but already delivering it.


Cloud Exit Triggers – What Happens When the Exit Button Isn’t Optional?

It is becoming clearer by the day: geopolitical realities are forcing CIOs and regulators to revisit their cloud strategy, not just for performance or innovation, but for continuity, legal control, and sovereignty. The past few years have been a story of cloud-first, then cloud-smart, and then cloud repatriation. The next chapter is about cloud control. And with the growing influence of U.S. legislation like the CLOUD Act, many in Europe’s regulated sectors are starting to ask: what happens when we need to exit?

Now add another layer: what if your cloud provider is still technically and legally subject to a foreign jurisdiction, even when the cloud lives in your own country and your own data centers?

That’s the fundamental tension with models like Oracle Alloy (or OCI Dedicated Region), a promising construct that brings full public cloud capabilities into local hands, but with a control plane and infrastructure still operated by Oracle itself. So what if something changes (for example, politically) and you need to exit?

Let’s explore what that exit could look like in practice, and whether Oracle’s broader portfolio provides a path forward for such a scenario.

Local Control – How Far Does Oracle Alloy Really Go?

Oracle Alloy introduces a compelling model for delivering public cloud services with local control. For providers like cloud13 (that’s the fictitious company I am using for this article), this means the full OCI service catalogue can run under the cloud13 brand, with customer relationships, onboarding, and support all handled locally. Critically, the Alloy control plane itself is deployed on-premises in cloud13’s own data center, not remotely managed from an Oracle facility. This on-site architecture ensures that operational control, including provisioning, monitoring, and lifecycle management, remains firmly within Swiss borders.

But while the infrastructure and control plane are physically hosted and operated by cloud13, Oracle still provides and maintains the software stack. The source code, system updates, telemetry architecture, and core service frameworks are still Oracle-owned IP, and subject to Oracle’s global governance and legal obligations. 

Please note: Even in disconnected Alloy scenarios, update mechanisms or security patches may require periodic Oracle coordination. Understanding how these touchpoints are logged and audited will be crucial in high-compliance sectors.

Oracle Alloy

So, while cloud13 ensures data residency, operational proximity, and sovereign service branding, the legal perimeter around the software stack itself may still inherit external jurisdictional influence.

For some sectors, this hybrid control model strikes the right balance. But for others, particularly those anticipating geopolitical triggers (even highly unlikely!) or regulatory shifts, it raises a question: what if you need to exit Alloy entirely?

What a Cloud Exit Really Costs – From Oracle to Anywhere

Let’s be honest and realistic: moving cleanly from Oracle Cloud Infrastructure (OCI) to a hyperscaler like AWS or Azure is anything but simple. OCI’s services are deeply intertwined. If you are running Oracle-native PaaS or database services, you are looking at significant rework – sometimes a full rebuild – to get those workloads running smoothly in a different cloud ecosystem.

On top of that, data egress fees can quickly pile up, and when you add the cost and time of re-certification, adapting security policies, and retraining your teams on new tools, the exit suddenly becomes expensive and drawn out.
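To get a feel for the order of magnitude, here is a hedged back-of-the-envelope sketch. Every number below is a placeholder; real egress pricing is tiered and provider-specific, and the one-off items vary widely per organization:

```python
# Hypothetical exit-cost estimate. All rates and one-off costs below are
# placeholders for illustration; real pricing is tiered and negotiated.
data_tb = 500                   # data to move out, in terabytes (assumed)
egress_usd_per_gb = 0.05        # assumed flat egress rate
recertification_usd = 250_000   # audits, security re-approval (assumed)
retraining_usd = 150_000        # team enablement on new tooling (assumed)
rework_usd = 1_200_000          # rebuilding native PaaS/DB services (assumed)

egress_usd = data_tb * 1_000 * egress_usd_per_gb
total_usd = egress_usd + recertification_usd + retraining_usd + rework_usd
print(f"egress alone: ${egress_usd:,.0f}")        # egress alone: $25,000
print(f"one-off exit total: ${total_usd:,.0f}")   # one-off exit total: $1,625,000
```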

That brings us to the critical question: if you are already running workloads in Oracle Alloy, what are your realistic exit paths, especially on-premises?

Going the VMware, Nutanix, or Platform9 route doesn’t solve the problem much either. Sure, they offer a familiar infrastructure layer, but they don’t come close to the breadth of integrated platform services Oracle provides. Every native service dependency you have will need to be rebuilt or replaced.

Then There’s Azure Local and Google Distributed Cloud

Microsoft and Google both offer sovereign cloud variants that come in connected and disconnected flavours.

While Azure Local and Google Distributed Cloud are potential alternatives, they behave much like public cloud platforms. If your workloads already live in Azure or Google Cloud, these services might offer a regulatory bridge. But if you are not already in those ecosystems, and like in our case, are migrating from an Oracle-based platform, you are still facing a full cloud migration.

Yes, that’s rebuilding infrastructure, reconfiguring services, and potentially rearchitecting dozens or even hundreds of applications.

And it’s not just about code. Legacy apps often depend on specific runtimes, custom integrations, or licensed software that doesn’t map easily into a new stack. Even containerised workloads need careful redesign to match new orchestration, security, and networking models. Multiply that across your application estate, and you are no longer talking about a pivot.

You are talking about a multi-year transformation programme.

That’s before you even consider the physical reality. To run such workloads locally, you would need enough data center space (imagine repatriation or a dual-vendor strategy), power, cooling, network integration, and a team that can operate it all at scale. These alternatives aren’t just expensive to build. They also require a mature operational model and skills that most enterprises simply don’t have ready.

One cloud is already challenging enough. Now, imagine a multi-cloud setup and pressure to migrate.

From Alloy to Oracle Compute Cloud@Customer Isolated – An Exit Without Downtime

Oracle’s architecture allows customers to move their cloud workloads from Alloy into an Oracle Compute Cloud@Customer Isolated environment (known as C3I), with minimal disruption. Because these environments use the exact same software stack and APIs as the public OCI cloud, workloads don’t need to be rewritten or restructured. You maintain the same database services, the same networking constructs, and the same automation frameworks.

This makes the transition more of a relocation than a rebuild. Everything stays intact – your code, your security model, your SLAs. The only thing that changes is the control boundary. In the case of C3I, Oracle has no remote access. All infrastructure remains physically isolated, and operational authority rests entirely with the customer.

Oracle Compute Cloud@Customer Isolated

By contrast, shifting to another public or private cloud requires rebuilding and retesting. And while VMware or similar platforms might accommodate general-purpose workloads, they still lack the cloud experience.

Note: Oracle Compute Cloud@Customer offers OCI’s full IaaS and a subset of PaaS services.

While C3I doesn’t yet deliver the full OCI portfolio, it includes essential services like Oracle Linux, Autonomous Database, Vault, IAM, and Observability & Management, making it viable for most regulated use cases.
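The API-parity point can be made concrete: the same SDK calls and automation can target either environment, and only the configuration profile (and with it the control boundary) changes. A sketch using the OCI Python SDK; the profile names are hypothetical and would have to exist in your ~/.oci/config, with the C3I profile pointing at the locally hosted endpoint:

```python
# Sketch of API parity across realms: identical SDK calls, different
# control boundary. The profile names below are hypothetical examples.
import oci

for profile in ("PUBLIC_OCI", "C3I_ONPREM"):
    config = oci.config.from_file(profile_name=profile)
    compute = oci.core.ComputeClient(config)
    # Same call in both environments; only the realm behind it differs.
    instances = compute.list_instances(compartment_id=config["tenancy"])
    print(profile, "->", len(instances.data), "instances reachable")
```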

Alloy as a Strategic Starting Point

So, should cloud13 even start with Alloy?

That depends on the intended path. For some, Alloy is a fast way to enter the market, leveraging OCI’s full capabilities with local branding and customer intimacy. But it should never be a one-way road. The exit path, no matter what the destination is, must be designed, validated, and ready before geopolitical conditions force a decision.

This isn’t a question of paranoia. It’s good cloud design. You want to have an answer for the regulators. You want to be prepared and feel safe.

The customer experience must remain seamless. And when required, workloads should ideally move within the same cloud logic, with the same automation and, as far as possible, the same platform services.

Could VMware Be Enough?

For some customers, VMware might remain a logical choice, particularly where traditional applications and operational familiarity dominate. It enables a high degree of portability, and for infrastructure-led workloads, it’s an acceptable solution. But VMware environments lack integrated PaaS. You don’t get Autonomous DB. You get limited monitoring, logging, or modern analytics services. You don’t get out-of-the-box identity federation or application delivery pipelines.

Ultimately, you are buying infrastructure, not a cloud.

The Sovereign Stack – C3I and Exadata Cloud@Customer

That’s why Oracle’s C3I, especially when paired with Exadata Cloud@Customer (ExaCC) or a future isolated variant of it, offers a more complete solution. It delivers the performance, manageability, and sovereignty that today’s regulated industries demand. It lets you operate a true cloud on your own terms – local, isolated, yet fully integrated with Oracle’s broader cloud ecosystem.

C3I may not yet fit every use case. Its scale and deployment model must match customer expectations. But for highly regulated workloads, and especially for organizations planning for long-term legal and geopolitical shifts, it represents the most strategic exit vector available.

Final Thought

Cloud exit should never be a last-minute decision. In an IT landscape where laws, alliances, and risks shift quickly, exit planning is not a sign of failure. It’s considered a mark of maturity!

Oracle’s unique ecosystem, from Alloy to C3I, is one of the few that lets you build with that maturity from day one.

Whether you are planning a sovereign cloud, or are already deep into a regulated workload strategy, now is the time to assess exit options before they are needed. Make exit architecture part of your initial cloud blueprint.

Why Switzerland Needs a Different Kind of Sovereign Cloud

Switzerland doesn’t follow. It observes, evaluates, and decides on its own terms. In tech, in policy, and especially in how it protects its data. That’s why the typical EU sovereign cloud model won’t work here. It solves a different problem, for a different kind of political union.

But what if we could go further? What if the right partner, one that understands vertical integration, local control, and legal separation, could build something actually sovereign?

That partner might just be Oracle.

Everyone is talking about the EU’s digital sovereignty push and Oracle responded with a serious answer: the EU Sovereign Cloud, which celebrated its second anniversary a few weeks ago. It’s a legally ring-fenced, EU-operated, independently staffed infrastructure platform. Built for sovereignty, not just compliance.

That’s the right instinct. But Switzerland is not the EU. And sovereignty here means more than “EU-only.” It means operations bound by Swiss law, infrastructure operated on Swiss soil, and decisions made by Swiss entities.

Oracle Alloy and OCI Dedicated Region – Sovereignty by Design

Oracle’s OCI Dedicated Region and the newer Alloy model were designed with decentralization in mind. Unlike traditional hyperscaler zones, these models bring the entire control plane on-premises, not just the data.

That allows for policy enforcement, tenant isolation, and lifecycle management to happen within the customer’s boundaries, without default exposure to centralized cloud control towers. In short, the foundation for digital sovereignty is already there.

But Switzerland, especially the public sector, expects more.

What Still Needs to Be Solved for Switzerland

Switzerland doesn’t just care about where data sits. It cares about who holds the keys, who manages the lifecycle, and under which jurisdiction they operate.

While OCI Dedicated Region and Alloy keep the control plane local, certain essential services, such as telemetry, patch delivery, and upgrade mechanisms, still depend on Oracle’s global backbone. In the Swiss context, even a low-level dependency can raise concerns about jurisdictional risk, including exposure to laws like the U.S. CLOUD Act.

Support must remain within Swiss borders. Sovereign regions that rely on non-Swiss teams or legal entities to resolve incidents still carry legal and operational exposure, even if the data involved can be anonymized. Sovereignty includes not only local infrastructure, but also patch transparency, cryptographic root trust, and full legal separation from foreign jurisdictions.

Yes, operational teams must be Swiss-based, except at the tier 2 or tier 3 level.

Avaloq Is Already Leading the Way

This isn’t just theory. Switzerland already has a working example: Avaloq, the Swiss financial technology provider, is running core workloads on OCI Dedicated Region.

These are not edge apps or sandbox environments. Avaloq supports mission-critical platforms for regulated financial institutions. If they trust Oracle’s architecture with that responsibility, the model is clearly feasible from a sovereignty, security, and compliance perspective.

Avaloq’s deployment shows that Swiss-regulated workloads can run securely, locally, and independently. And if one of Switzerland’s most finance-sensitive firms went down this path, others across government, healthcare, and infrastructure should be paying attention.

Sovereignty doesn’t mean reinventing everything. It means learning from those already building it.

The Bottom Line

Switzerland doesn’t need more cloud. It needs a cloud built for Swiss values: neutrality, autonomy, and legal independence.

Oracle is closer to that model than most. Its architecture is already designed for local control. Its EU Sovereign Cloud shows it understands the legal and operational dimensions of sovereignty. And with Avaloq already in production on OCI Dedicated Region, the proof is there.

The technology is ready. The reference customer is live.

What comes next is a question of commitment.