Swiss Government Cloud – Possible Paths, Components and Vendors Compared

Disclaimer: This article reflects my personal opinion and not that of my employer.

The discussion around a Swiss Government Cloud (SGC) has gained significant momentum in recent months. Digital sovereignty has become a political and technological necessity with immediate relevance. Government, politics, and industry are increasingly asking how a national cloud infrastructure could look. This debate is not just about technology, but also about governance, funding, and the ability to innovate.

Today’s starting point is relatively homogeneous. A significant portion of Switzerland’s public sector IT infrastructure still relies on VMware. This setup is stable but not designed for the future. When we talk about the Swiss Government Cloud, the real question is how this landscape evolves from pragmatic use of the public cloud to the creation of a fully sovereign cloud, operated under Swiss control.

For clarity: I use the word “component” (Stufe), in line with the Federal Council’s report, to describe the different maturity levels and deployment models.

A Note on the Federal Council’s Definition of Digital Sovereignty

According to the official report, digital sovereignty includes:

  • Data and information sovereignty: Full control over how data is collected, stored, processed, and shared.

  • Operational autonomy: The ability of the Federal Office of Information Technology (FOITT) to run systems with its own staff for extended periods, without external partners.

  • Jurisdiction and governance: Ensuring that Swiss law, not foreign regulation, defines control and access.

This definition is crucial when assessing whether a cloud model is “sovereign enough.”

Component 1 – Public Clouds Standard

The first component describes the use of the public cloud in its standard form. Data can be stored anywhere in the world, but not necessarily in Switzerland. Hyperscalers such as AWS, Microsoft Azure, or Oracle Cloud Infrastructure (OCI) offer virtually unlimited scalability, the highest pace of innovation, and a vast portfolio of services.

Amazon Web Services (AWS) is the broadest platform by far, providing access to an almost endless variety of services and already powering workloads from MeteoSwiss.

Microsoft Azure integrates deeply with Microsoft environments, making it especially attractive for administrations that are already heavily invested in Microsoft 365 and MS Teams. This ecosystem makes Azure a natural extension for many public sector IT landscapes.

Oracle Cloud Infrastructure (OCI), on the other hand, emphasizes efficiency and infrastructure, particularly for databases, and is often more transparent and predictable in terms of cost.

However, component 1 must be seen as an exception. For sovereign and compliant workloads, there is little reason to rely on a global public cloud without local residency or control. This component should not play a meaningful role, except in cases of experimental or non-critical workloads.

And if there really is a need for workloads to run outside of Switzerland, I would recommend using alternatives like the Oracle EU Sovereign Cloud instead of a regular public region, as it offers stricter compliance, governance, and isolation for European customers.

Component 2a – Public Clouds+

Here, hyperscalers offer public clouds with Swiss data residency, combined with additional technical restrictions to improve sovereignty and governance.

AWS, Azure, and Oracle already operate cloud regions in Switzerland today. They promise that data will not leave the country, often combined with features such as tenant isolation or extra encryption layers.

The advantage is clear. Administrations can leverage hyperscaler innovation while meeting legal requirements for data residency.

Component 2b – Public Cloud Stretched Into FOITT Data Centers

This component goes a step further. Here, the public cloud is stretched directly into the local data centers. In practice, this means solutions like AWS Outposts, Azure Local (formerly Azure Stack HCI), Oracle Exadata & Compute Cloud@Customer (C3), or Google Distributed Cloud deploy their infrastructure physically inside FOITT facilities.

Diagram showing an Outpost deployed in a customer data center and connected back to its anchor AZ and parent Region

This creates a hybrid cloud. APIs and management remain identical to the public cloud, while data is hosted directly by the Swiss government. For critical systems that need cloud benefits but under strict sovereignty (which differs between solutions), this is an attractive compromise.

The vendors differ significantly in the scope of services they bring on premises.

Costs and operations are predictable, since most of these models are subscription-based.

Component 3 – Federally Owned Private Cloud Standard

Not every path to the Swiss Government Cloud requires a leap into hyperscalers. In many cases, continuing with VMware (Broadcom) represents the least disruptive option, providing the fastest and lowest-risk migration route. It lets institutions build on existing infrastructure and leverage known tools and expertise.

Still, the cloud dependency challenge isn’t exclusive to hyperscalers. Relying solely on VMware also creates a dependency. Whether with VMware or a hyperscaler, dependency remains dependency and requires careful planning.

Another option is Nutanix, which offers a modern hybrid multi-cloud solution with its AHV hypervisor. However, migrating from VMware to AHV is in practice about as complex as moving to a public cloud: you need to replatform, retrain staff, and build up expertise all over again.

Hybrid Cloud Management diagram

Both VMware and Nutanix offer valid hybrid strategies:

  • VMware: Minimal disruption, continuity, familiarity, low migration risk. Can run VMware Cloud Foundation (VCF) on AWS, Azure, Google Cloud and OCI.

  • Nutanix: Flexibility and hybrid multi-cloud potential, but migration is similar to changing cloud providers.

Crucially, changing platforms or introducing a new cloud is harder than it looks. Organizations are built around years of tailored tooling, integrations, automation, security frameworks, governance policies, and operational familiarity. Adding or replacing another cloud often multiplies complexity rather than reduces it.

Gartner Magic Quadrant for Distributed Hybrid Infrastructure

Source: https://www.oracle.com/uk/cloud/distributed-cloud/gartner-leadership-report/ 

But sometimes it is simply costs, new requirements, or new partnerships that trigger a big platform change. Let’s see how the Leaders quadrant of the Magic Quadrant for “Distributed Hybrid Infrastructure” changes in the coming weeks or months.

Why Not Host A “Private Public Cloud” To Combine Components 2 and 3?

This possibility (let’s call it component 3.5) goes beyond the traditional framework of the Federal Council. In my view, a “private public cloud” represents the convergence of public cloud innovation with private cloud sovereignty.

“The hardware needed to build on-premises solutions should therefore be obtained through a ‘pay as you use’ model rather than purchased. A strong increase in data volumes is expected across all components.”

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.4 (translated from German)

Solutions like OCI Dedicated Region or Oracle Alloy can deliver the entire Oracle Cloud stack – IaaS, PaaS, databases, analytics – on Swiss soil. Unlike AWS Outposts or Azure Local, these solutions do not come with a limited set of cloud services; they are identical to a full public cloud region with all cloud services, only hosted and operated locally in a customer’s data center.

OCI Dedicated Region Overview

Additionally, Oracle offers a compelling hybrid path. With OCVS (Oracle Cloud VMware Solution) on OCI Dedicated Region or Alloy, organizations can lift-and-shift VMware workloads unchanged to this local public cloud based on Oracle Cloud Infrastructure, then gradually modernize selected applications using Oracle’s PaaS services like databases, analytics, or AI.

OCI Dedicated Region and OCVS

This “lift-and-modernize” journey helps balance risk and innovation while ensuring continuous operational stability.

And this makes component 3.5 unique. It provides the same public-cloud experience (continuous updates, innovation pace; 300-500 changes per day, 10’000-15’000 per month) but under sovereign control. With Alloy, the FOITT could even act as a Cloud Service Provider for cantons, municipalities, and hospitals, creating a federated model without each canton building its own cloud.

Real-world Swiss example: Avaloq was the first customer in Switzerland to deploy an OCI Dedicated Region, enabling it to migrate clients and offer cloud services to other banks in Switzerland. This shows how a private public cloud can serve highly regulated industries while keeping data and operations in-country.

In this model:

  • Component 2b becomes redundant: Why stretch a public cloud into FOITT data centers if a full cloud can already run there?

  • Component 2 can also be partly obsolete: At least for workloads running on a local OCI Dedicated Region (or Alloy) deployment, there’s no need for parallel deployments in public regions, since everything can run sovereignly on a dedicated OCI region in Switzerland (while workloads from MeteoSwiss on AWS remain in component 2).

A private public cloud thus merges sovereignty with innovation while reducing the need for parallel cloud deployments.

Another Possibility – Combine Capabilities From A Private Public Cloud With Component 2b

Another possibility could be to combine elements of components 3.5 and 2b into a federated architecture: Using OCI Dedicated Region or Oracle Alloy as the central control plane in Switzerland, while extending the ecosystem with Oracle Compute Cloud@Customer (C3) for satellite or edge use cases.

Oracle Compute Cloud@Customer – Key Features

This creates a hub-and-spoke model:

  • The Dedicated Region or Alloy in Switzerland acts as the sovereign hub for governance and innovation.

  • C3 appliances extend cloud services into distributed or remote locations, tightly integrated but optimized for local autonomy.

A practical example would be Swiss embassies around the world. Each embassy could host a lightweight C3 edge environment, connected securely back to the central sovereign infrastructure in Switzerland. This ensures local applications run reliably, even with intermittent connectivity, while remaining part of the overall sovereign cloud ecosystem.

By combining these capabilities, the FOITT could:

  • Use component 2b’s distributed deployment model (cloud stretched into local facilities)

  • Leverage component 3.5’s VMware continuity path with OCVS for easy migrations

  • Rely on component 3.5’s sovereignty and innovation model (private public cloud with full cloud parity)

This blended approach would allow Switzerland to centralize sovereignty while extending it globally to wherever Swiss institutions operate.

What If A Private Public Cloud Isn’t Sovereign Enough?

It is possible that policymakers or regulators could conclude that solutions such as OCI Dedicated Region or Oracle Alloy are not “sovereign enough”, given that their operational model still involves vendor-managed updates and tier-2/3 support from Oracle.

In such a scenario, the FOITT would have fallback options:

Maintain a reduced local VMware footprint: The FOITT could continue to run a smaller, fully local VMware infrastructure that is managed entirely by Swiss staff. This would not deliver the breadth of services or pace of innovation of a hyperscale-based sovereign model, but it would provide the maximum possible operational autonomy and align closely with Switzerland’s definition of sovereignty.

Leverage component 4 from the FDJP: Using the existing “Secure Private Cloud” (SPC) from the Federal Department of Justice and Police (FDJP). But I am not sure if the FDJP wants to become a cloud service provider for other departments.

That said, these fallback strategies don’t necessarily exclude the use of OCI. Dedicated Region or Alloy could co-exist with a smaller VMware footprint, giving the FOITT both innovation and control. Alternatively, Oracle could adapt its operating model to meet Swiss sovereignty requirements – for example, by transferring more operational responsibility (tier-2 support) to certified Swiss staff.

This highlights the core dilemma: The closer Switzerland moves to pure operational sovereignty, the more it risks losing access to the innovation and agility that modern cloud architectures bring. Conversely, models like Dedicated Region or Alloy can deliver a full public cloud experience on Swiss soil, but they require acceptance that some operational layers remain tied to the vendor.

Ultimately, the Swiss Government Cloud must strike a balance between autonomy and innovation, and the decision about what “sovereign enough” means will shape the entire strategy.

Conclusion

The Swiss Government Cloud will not be a matter of choosing one single path. A hybrid approach is realistic: Public cloud for agility and non-critical workloads, stretched models for sensitive workloads, VMware or Nutanix for hybrid continuity, sovereign or “private public cloud” infrastructures for maximum control, and federated extensions for edge cases.

But it is important to be clear: Hybrid cloud or even hybrid multi-cloud does not automatically mean sovereign. What it really means is that a sovereign cloud must co-exist with other clouds – public or private.

To make this work, Switzerland (and each organization within it) must clearly define what sovereignty means in practice. Is it about data and information sovereignty? Is it really about operational autonomy? Or about jurisdiction and governance?

Only with such a definition can a sovereign cloud strategy deliver on its promise, especially when it comes to on-premises infrastructure, where control, operations, and responsibilities need to be crystal clear.

PS: Of course, the SGC is about much more than infrastructure. The Federal Council’s plan also touches networking, cybersecurity, automation and operations, commercial models, as well as training, consulting, and governance. In this article, I deliberately zoomed in on the infrastructure side, because it’s already big, complex, and critical enough to deserve its own discussion. And just imagine sitting on the other side, having to go through all the answers of the upcoming tender. Not an easy task.

Why Changing Platforms or Adding a New Cloud Is Harder Than It Sounds

Switching enterprise platforms isn’t like swapping cars; it’s more like deciding to change the side of the road you drive on while keeping all the existing roads, intersections, traffic lights, and vehicles in motion. The complexity comes from rethinking the entire system around you, from driver behaviour to road markings to the way the city is mapped.

That’s exactly what it feels like when an enterprise tries to replace a long-standing platform or add a new one to an already complex mix. On slides, it’s about agility, portability, and choice. In reality, it’s about untangling years of architectural decisions, retraining teams who have mastered the old way of working, and keeping the business running without any trouble while the shift happens.

Every enterprise carries its own technological history. Architectures have been shaped over years by the tools, budgets, and priorities available at the time, and those choices have solidified into layers of integrations, automations, operational processes, and compliance frameworks. Introducing a new platform or replacing an old one isn’t just a matter of deploying fresh infrastructure. Monitoring tools may only talk to one API. Security controls might be tightly bound to a specific identity model. Ticketing workflows could be designed entirely around the behaviour of the current environment. All of this has to be unpicked and re-stitched before a new platform can run production workloads with confidence.

The human element plays an equally large role. Operational familiarity is one of the most underestimated forms of lock-in. Teams know the details of the current environment, how it fails, and the instinctive fixes that work. Those instincts don’t transfer overnight, and the resulting learning curve can look like a productivity dip in the short term, something most business leaders are reluctant to accept. Applications also rarely run in isolation. They rely on the timing, latency, and behaviours of the platform beneath them. A database tuned perfectly in one environment might misbehave in another because caching works differently or network latency is just a little higher. Automation scripts often carry hidden assumptions too, which only surface when they break.

Adding another cloud provider or platform to the mix can multiply complexity rather than reduce it. Each one brings its own operational model, cost reporting, security framework, and logging format. Without a unified governance approach, you risk fragmentation – silos that require separate processes, tools, and people to manage. And while the headline price per CPU or GB (or per physical core) might look attractive, the true cost includes migration effort, retraining, parallel operations, and licensing changes. All of this competes with other strategic projects for budget and attention.

Making Change Work in Practice

No migration is ever frictionless, but it doesn’t have to be chaotic. The most effective approach is to stop treating it as a single, monolithic event. Break the journey into smaller stages, each delivering its own value.

Begin with workloads that are low in business risk but high in operational impact. This category often includes development and test environments, analytics platforms processing non-sensitive data, or departmental applications that are important locally but not mission-critical for the entire enterprise. Migrating these first provides a safe but meaningful proving ground. It allows teams to validate network designs, security controls, and operational processes in the new platform without jeopardising critical systems.

Early wins in this phase build confidence, uncover integration challenges while the stakes are lower, and give leadership tangible evidence that the migration strategy is working. They also provide a baseline for performance, cost, and governance that can be refined before larger workloads move.

Build automation, governance, and security policies to be platform-agnostic from the start, using open standards, modular designs, and tooling that works across multiple environments. And invest early in people: cross-train them, create internal champions, and give them the opportunity to experiment without the pressure of immediate production deadlines. If the new environment is already familiar by the time it goes live, adoption will feel less like a disruption and more like a natural progression.

From Complex Private Clouds to a Unified Platform

Consider an enterprise running four separate private clouds: three based on VMware – two with vSphere Foundation and one with VMware Cloud Foundation including vSphere Kubernetes Services (VKS), and one running Red Hat OpenShift on bare metal. Each has its own tooling, integrations, and operational culture.

Migrating the VMware workloads to Oracle Cloud VMware Solution (OCVS) inside an OCI Dedicated Region allows workloads to be moved with vMotion or replication, without rewriting automation or retraining administrators. It’s essentially another vSphere cluster, but one that scales on demand and is hosted in facilities under your control.

OCI Dedicated Region and OCVS

The OpenShift platform can also move into the same Dedicated Region, either as a direct lift-and-shift onto OCI bare metal or VMs, or as part of a gradual transition toward OCI’s native Kubernetes service (OKE). This co-location enables VMware, OpenShift, and OCI-native workloads to run side by side, within the same security perimeter, governance model, and high-speed network. It reduces fragmentation, simplifies scaling, and allows modernisation to happen gradually instead of in a high-risk “big bang” migration.

Unifying Database Platforms

In most large enterprises, databases are distributed across multiple platforms and technologies. Some are Oracle, some are open source like PostgreSQL or MySQL, and others might be Microsoft SQL Server. Each comes with its own licensing, tooling, and operational characteristics.

An OCI Dedicated Region can consolidate all of them. Oracle databases can move to Exadata Database Service or Autonomous Database for maximum performance, scalability, and automation. Non-Oracle databases can run on OCI Compute or as managed services, benefitting from the same network, security, and governance as everything else.
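
From the application side, the consolidation is largely invisible: an Autonomous Database is consumed like any other Oracle database endpoint. Below is a minimal sketch, assuming the python-oracledb driver and an mTLS wallet downloaded from the OCI console; the DSN alias, wallet paths, and credentials are placeholders.

```python
# Minimal sketch with python-oracledb ("pip install oracledb") connecting to an
# Autonomous Database over mTLS. DSN alias, wallet path, and credentials are placeholders.
import oracledb

connection = oracledb.connect(
    user="app_user",
    password="***",
    dsn="mydb_high",                      # TNS alias from the downloaded wallet
    config_dir="/opt/oracle/wallet",      # directory containing tnsnames.ora
    wallet_location="/opt/oracle/wallet",
    wallet_password="***")

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
```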

This consolidation creates operational consistency: one backup strategy, one monitoring approach, and one identity model across all database engines. It reduces hardware sprawl, improves utilisation, and can lower costs through programmes like Oracle Support Rewards. Just as importantly, it positions the organisation for gradual application modernization, without being forced into rushed timelines.

A Business Case Built on More Than Cost

The value in this approach comes from the ability to run new workloads, whether VM-based, containerized, or fully cloud-native, on the same infrastructure without waiting for new procurement cycles. It’s about strategic control by keeping infrastructure and data within your own facilities, under your jurisdiction, while still accessing the full capabilities of a hyperscale cloud. And it’s about flexibility. Moving VMware first, then OpenShift, then databases, and modernizing over time, with each step delivering value and building confidence for the next.

When done well, this isn’t just a platform change. It’s a simplification of the entire environment, an upgrade in capabilities, and a long-term investment in agility and control.

Sovereign Clouds and the VMware Earthquake: Dependency Isn’t Just a Hyperscaler Problem

The concept of “sovereign cloud” has been making waves across Europe and beyond. Politicians talk about it. Regulators push for it. Enterprises (re-)evaluate it. On the surface, it sounds like a logical evolution: regain control, keep data within national borders, reduce exposure to foreign jurisdictions, and while you are at it, maybe finally break free from the gravitational pull of the U.S. hyperscalers.

After all, hyperscaler dependency is seen as the big bad wolf. If your workloads live in AWS, Azure, Google Cloud or Oracle Cloud Infrastructure, you are automatically exposed to price increases, data sovereignty concerns, U.S. legal reach (hello, CLOUD Act), and a sense of vendor lock-in that seems harder to escape with every commit to infrastructure-as-code.

So, the solution appears simple: go local, go sovereign, go safe.

But if only it were that easy.

The truth is: sovereignty isn’t something you can just buy off the shelf. It’s not a matter of switching cloud logos or picking the provider that wraps their marketing in your national flag. Because even within your own datacenter, even with platforms that have long been considered “sovereign” and independent, the same risks apply.

The best example? VMware.

What happened in the VMware ecosystem over the past year should be a wake-up call for anyone who thinks sovereignty equals control. Because, as we have now seen, control can vanish. Fast. Very fast.

VMware’s Rapid Fall from Grace

Take VMware. For years, it was the go-to platform for building secure, sovereign private clouds. Whether in your own datacenter or hosted by a trusted service provider in your region, VMware felt like the safe, stable choice. No vendor lock-in (allegedly), no forced cloud-native rearchitecture, and full control over your workloads. Rainbows, unicorns, and that warm fuzzy feeling of sovereignty.

Then came the Broadcom acquisition, and with it, a cold splash of reality.

Hock Tan, Broadcom President and CEO, at VMware Explore Barcelona: European customers want control over their data and processes (Source: Broadcom)

Practically overnight, prices shot up. In some cases, more than doubled. Features were suddenly stripped out or repackaged into higher-priced bundles. Longstanding partner agreements were shaken, if not broken. Products disappeared or were drastically repositioned. Customers and partners were caught off guard. Not just by the changes, but by how quickly they hit.

And just like that, a platform once seen as a cornerstone of sovereign IT became a textbook example of how fragile that sovereignty really is.

Sovereignty Alone Doesn’t Save You

The VMware story exposes a hard truth: so-called “sovereign” infrastructure isn’t immune to disruption. Many assume risk only lives in the public cloud under the branding of AWS, Azure, or Oracle Cloud. But in reality, the triggers for a “cloud exit” or forced platform shift can be found anywhere. Also on-premises!

A sudden licensing change. An unexpected acquisition. A new product strategy that leaves your current setup stranded. None of these things care whether your workloads are in a public cloud region or a private rack in your basement. Dependency is dependency, and it doesn’t always come with a hyperscaler logo.

It’s Not About Picking the Right Vendor. It’s About Being Ready for the Wrong One.

That’s why sovereignty, in the real world, isn’t something you just buy. It’s something you design for.

Note: Some hyperscalers now offer “sovereign by design” solutions but even these require deeper architectural thinking.

Sure, a Greenfield build on a sovereign cloud stack sounds great. Fresh start, full control, compliance checkboxes all ticked. But the reality for most organizations is very different. They have already invested years into specific platforms, tools, and partnerships. There are skill gaps, legacy systems, ongoing projects, and plenty of inertia. Ripping it all out for the sake of “clean” sovereignty just isn’t feasible.

That’s what makes architecture, flexibility, and diversification so critical. A truly resilient IT strategy isn’t just about where your data lives or which vendor’s sticker is on the server. It’s about being ready (structurally, operationally, and contractually) for things to change.

Because they will change.

Open Source ≠ Sovereign by Default

Spoiler: Open source won’t save you either

Let’s address another popular idea in the sovereignty debate. The belief that open source is the magic solution. The holy grail. The thinking goes: “If it’s open, it’s sovereign”. You have the source code, you can run it anywhere, tweak it however you like, and you are free from vendor lock-in. Yeah, right.

Sounds great. But in practice? It’s not that simple.

Yes, open source can enable sovereignty, but it doesn’t guarantee it. Just because something is open doesn’t mean it’s free of risk. Most open-source projects rely on a global contributor base, and many are still controlled, governed, or heavily influenced by large commercial vendors – often headquartered in the same jurisdictions we are supposedly trying to avoid. Yes, that’s good and bad at the same time, isn’t it?

And let’s be honest: having the source code doesn’t mean you suddenly have a DevOps army to maintain it, secure it, patch it, integrate it, scale it, monitor it, and support it 24/7. In most cases, you will need commercial support, managed services, or skilled specialists. And with that, new dependencies emerge.

So what have you really achieved? Did you eliminate risk or just shift it?

Open source is a fantastic ingredient in a sovereign architecture – in any cloud architecture. But it’s not a silver bullet.

Behind the Curtain – Complexity, Not Simplicity

From the outside, especially for non-IT people, the sovereign cloud debate can look like a clear binary: US hyperscaler = risky, local provider = safe. But behind the curtain, it’s much more nuanced. You are dealing with a web of relationships, existing contracts, integrated platforms, and real-world limitations.

The Broadcom-VMware shake-up was a loud and very public reminder that disruption can come from any direction. Even the platforms we thought were untouchable can suddenly become liabilities.

So the question isn’t: “How do we go sovereign?”

It’s: “How do we stay in control, no matter what happens?”

That’s the real sovereignty.

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation 9.0 (VCF) introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future and vCD will be retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.
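
To put that consumption layer into perspective: whichever platform provisions the cluster underneath, developers ultimately consume it through the standard Kubernetes API. Here is a minimal, platform-neutral sketch, assuming the official Kubernetes Python client and an existing kubeconfig; nothing in it is specific to VCF.

```python
# Minimal sketch with the official Kubernetes Python client ("pip install kubernetes").
# Assumes a kubeconfig already exists for a cluster provisioned by the platform.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default

v1 = client.CoreV1Api()
for ns in v1.list_namespace().items:
    print("namespace:", ns.metadata.name)

apps = client.AppsV1Api()
for deploy in apps.list_deployment_for_all_namespaces().items:
    print("deployment:", deploy.metadata.namespace, deploy.metadata.name)
```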

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap, five years ago.

And even now, many of the services (add-ons) in VCF 9.0 like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

5 Strategic Paths from VMware to Oracle Cloud Infrastructure (OCI)

We all know that the future of existing VMware customers has become more complicated and less certain. Many enterprises are reevaluating their reliance on VMware as their core infrastructure stack. So, where to go next?

For enterprises already invested in Oracle technology, or simply those looking for a credible, flexible, and enterprise-grade alternative, Oracle Cloud Infrastructure (OCI) offers a comprehensive set of paths forward. Whether you want to modernize, rehost, or run hybrid workloads, OCI doesn’t force you to pick a single direction. Instead, it gives you a range of options: from going cloud-native, to running your existing VMware stack unchanged, to building your own sovereign cloud footprint.

Here are five realistic strategies for VMware customers considering OCI, and what each migration path from VMware to Oracle Cloud Infrastructure can look like. It doesn’t need to be an either-or decision; it can also be an “and” approach.

1. Cloud-Native with OCI – Start Fresh, Leave VMware Behind

For organizations ready to move beyond traditional infrastructure altogether, the cloud-native route is the cleanest break you can make. This is where you don’t just move workloads; you rearchitect them. You replace VMs with containers where possible, and perhaps lift and shift some of the existing workloads. You replace legacy service dependencies with managed cloud services. And most importantly, you replace static, manually operated environments with API-driven infrastructure.

OCI supports this approach with a robust portfolio: you get compute instances that scale on demand, Oracle Kubernetes Engine (OKE) for container orchestration, OCI Functions for serverless workloads, and Autonomous Database for data platforms that patch and tune themselves. The tooling is modern, open, and mature – Terraform, Ansible, and native SDKs are all available and well-documented.
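
As a rough illustration of what API-driven infrastructure means in practice, here is a minimal sketch using the OCI Python SDK to inventory OKE clusters and compute instances. The compartment OCID is a placeholder and error handling is omitted.

```python
# Minimal sketch using the OCI Python SDK ("pip install oci").
# The compartment OCID below is a placeholder; adjust it to your tenancy.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
compartment_id = "ocid1.compartment.oc1..example"  # hypothetical OCID

# List existing OKE clusters in the compartment
container_engine = oci.container_engine.ContainerEngineClient(config)
for cluster in container_engine.list_clusters(compartment_id=compartment_id).data:
    print(cluster.name, cluster.lifecycle_state)

# List compute instances the same way
compute = oci.core.ComputeClient(config)
for instance in compute.list_instances(compartment_id=compartment_id).data:
    print(instance.display_name, instance.lifecycle_state)
```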

This isn’t a quick VMware replacement. It requires a DevOps mindset, application refactoring, and an investment in automation and CI/CD. It is not something you do in a weekend. But it’s the only path that truly lets you leave the baggage behind and design infrastructure the way it should work in 2025.

2. OCVS – Run VMware As-Is, Without the Hardware

If cloud-native is the clean break, then Oracle Cloud VMware Solution (OCVS) is the strategic pause. This is the lift-and-shift strategy for enterprises that need continuity now, but don’t want to double down on on-prem investment.

With OCVS, you are not running a fully managed service (unlike the comparable VMware offerings on AWS, Azure, and GCP). You get the full vSphere, vSAN, NSX, and vCenter stack deployed on Oracle bare-metal infrastructure in your own OCI tenancy. You’re the admin. You manage the lifecycle. You patch and control access. But you don’t have to worry about hardware procurement, power and cooling, or supply chain delays. And you can integrate natively with OCI services: backup to OCI Object Storage, peer with Exadata, and extend IAM policies across the board.
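
Because you keep full vCenter access, existing automation continues to work against OCVS. Here is a minimal sketch, assuming pyVmomi and the vCenter endpoint that OCVS provisions in your tenancy; hostname and credentials are placeholders.

```python
# Minimal sketch with pyVmomi ("pip install pyvmomi") against the vCenter that
# OCVS deploys in your own tenancy. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # use proper certificates in production
si = SmartConnect(host="vcenter.ocvs.example.internal",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```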

Oracle Cloud VMware Solution

The migration is straightforward. You can replicate your existing environment (with HCX), run staging workloads side-by-side, and move VMs with minimal friction. You keep your operational model, your monitoring stack, and your tools. The difference is, you get out of your data center contract and stop burning time and money on hardware lifecycle management.

This isn’t about modernizing right now. It’s about escaping VMware hardware and licensing lock-in without losing operational control.

3. Hybrid with OCVS, Compute Cloud@Customer, and Exadata Cloud@Customer

Now we’re getting into enterprise-grade architecture. This is the model where OCI becomes a platform, not just a destination. If you’re in a regulated industry and you can’t run everything in the public cloud, but you still want the same elasticity, automation, and control, this hybrid model makes a lot of sense.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Here’s how it works: you run OCVS in the OCI public region for DR, or workloads that have to stay on vSphere. But instead of moving everything to the cloud, you deploy Compute Cloud@Customer (C3) and Exadata Cloud@Customer (ExaCC) on-prem. That gives you a private cloud footprint with the same APIs and a subset of OCI IaaS/PaaS services but physically located in your own facility, behind your firewall, under your compliance regime.

You manage workloads on C3 using the exact same SDKs, CLI tools, and Terraform modules as the public cloud. You can replicate between on-prem and cloud, burst when needed, or migrate in stages. And with ExaCC running in the same data center, your Oracle databases benefit from the same SLA and performance guarantees, with none of the data residency headaches.
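
The “same SDKs and tools” point can be shown concretely: from the client’s perspective, the main difference is the endpoint it talks to. A minimal sketch with the OCI Python SDK follows, where the C3 service endpoint, profile name, and compartment OCID are purely illustrative.

```python
# Minimal sketch: the same OCI Python SDK calls, pointed at a
# Compute Cloud@Customer endpoint instead of a public region.
# The service_endpoint URL below is purely illustrative.
import oci

config = oci.config.from_file(profile_name="C3")  # hypothetical profile
compute = oci.core.ComputeClient(
    config,
    service_endpoint="https://iaas.c3.example.customer.oraclecloud.com")

for instance in compute.list_instances(
        compartment_id="ocid1.compartment.oc1..example").data:
    print(instance.display_name, instance.lifecycle_state)
```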

This model is ideal if you’re trying to modernize without breaking compliance. It keeps you in control, avoids migration pain, and still gives you access to the full OCI ecosystem when and where you need it.

4. OCI Dedicated Region – A Public Cloud That Lives On-Prem

When public cloud is not an option, OCI Dedicated Region becomes the answer.

This isn’t a rack. It is an entire cloud region. You get all OCI services like compute, storage, OCVS, OKE, Autonomous DB, identity, even SaaS, deployed inside your own facility. You retain data sovereignty and you control physical access. You also enforce local compliance rules and operate everything with the same OCI tooling and automation used in Oracle’s own hyperscale regions.

“Oracle Dedicated Region 25” announced: your own Oracle Cloud region in just three server racks – Publickey

What makes Dedicated Region different from C3 is the scale and service parity. While C3 delivers core IaaS and some PaaS capabilities, Dedicated Region is literally the full stack. You can run OCVS in there, connect it to your enterprise apps, and have a fully isolated VMware environment that never leaves your perimeter.

For VMware customers, it means you don’t have to choose between control and modernization. You get both.

5. Oracle Alloy – Cloud Infrastructure for Telcos and VMware Service Providers

If you’re a VMware Cloud Director customer or a telco/provider building cloud services for others, then Oracle just handed you an entirely new business model. Oracle Alloy allows you to offer your own cloud under your brand, with your pricing, and your operational control based on the same OCI technology stack Oracle runs themselves.

This is not only reselling, it is operating your own OCI cloud.

Becoming an Oracle Alloy partner  diagram, description below

As a VMware-based cloud provider, Alloy gives you a path to modernize your platform and expand your services without abandoning your customer base. You can run your own VMware environment (OCVS), offer cloud-native services (OKE, DBaaS, Identity, Monitoring), and transition your customers at your own pace. All of it on a single platform, under your governance.

What makes Alloy compelling is that it doesn’t force you to pick between VMware and OCI, it lets you host both side by side. You keep your high-value B2B workloads and add modern, cloud-native services that attract new tenants or internal business units.

For providers caught in the middle of the VMware licensing storm, Alloy might be the most strategic long-term play available right now.

 

From Monolithic Data Centers to Modern Private Clouds

Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions, it is trust in the executive vision, and the willingness to break from the past.

Monolithic data centers stall innovation

Here is the problem: monoliths do not scale in a modern world (or cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers with the monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us make that clear: hypervisors are not platforms! They are just silos and one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it, they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools, it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by replacing one virtualization solution with another vendor’s. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. A modern infra is about building a cloud that does not need micro-management.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust as well:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. With the current geopolitical situation in the world, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation doesn’t work that way. You can’t buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and resilience is not about perfection. A system is only perfect until you need to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency is the first ingredient of trust. You can’t fix what you won’t name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months

  • Reduce friction for both IT and internal users

  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example (a minimal sketch follows this list):

  • Implement policy-as-code: rules embedded into the platform, not buried in documents

  • Automate security scans, RBAC, and drift detection

  • Give teams safe, constrained freedom instead of needing to ask for access
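
To make the guardrail idea tangible, here is a minimal, product-agnostic policy-as-code sketch in Python; the rules and resources are invented purely for illustration.

```python
# Illustrative policy-as-code sketch (not tied to any specific product):
# every resource must carry an "owner" tag, and public ingress is disallowed.
# Such checks run automatically before a deployment is allowed to proceed.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    tags: dict = field(default_factory=dict)
    public_ingress: bool = False

def violations(resource: Resource) -> list[str]:
    findings = []
    if "owner" not in resource.tags:
        findings.append(f"{resource.name}: missing 'owner' tag")
    if resource.public_ingress:
        findings.append(f"{resource.name}: public ingress is not allowed")
    return findings

resources = [
    Resource("vm-finance-01", tags={"owner": "finance"}),
    Resource("lb-public-01", public_ingress=True),
]

for res in resources:
    for finding in violations(res):
        print("POLICY VIOLATION:", finding)
```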

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).

And lastly, make trust measurable. Not just with uptime numbers or dashboards, but with real signals (a small illustrative sketch follows this list):

  • Are teams delivering faster?

  • Are incidents down?

  • etc.
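
As a small illustration (with made-up numbers), such signals can be computed and tracked like any other product metric.

```python
# Illustrative sketch with invented numbers: turning "are we delivering faster,
# are incidents down?" into two simple, repeatable metrics.
from datetime import date

deployments = [date(2025, 5, 2), date(2025, 5, 9), date(2025, 5, 16), date(2025, 5, 30)]
incidents_per_month = {"2025-04": 7, "2025-05": 4}

weeks_observed = 4
deployment_frequency = len(deployments) / weeks_observed
incident_trend = incidents_per_month["2025-05"] - incidents_per_month["2025-04"]

print(f"Deployments per week: {deployment_frequency:.1f}")
print(f"Incident change month over month: {incident_trend:+d}")
```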

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more.