Stop Writing About VMware vs. Nutanix

Over the last few months, I have noticed something “interesting”. My LinkedIn feed and Google searches are full of posts and blogs that try to compare VMware and Nutanix. Most of them follow the same pattern. They take the obvious features, line them up in two columns, and declare a “winner”. Some even let AI write these comparisons without a single line of lived experience behind them.

The problem? This type of content has no real value for anyone who has actually run these platforms in production. It reduces years of engineering effort, architectural depth, and customer-specific context into a shallow bullet list. Worse, it creates the illusion that such a side-by-side comparison could ever answer the strategic question of “what should I run my business on?”.

The Wrong Question

VMware vs. Nutanix is the wrong question to ask. Both vendors have their advantages, both have strong technology stacks, and both have long histories in enterprise IT. But if you are an IT leader in 2025, your real challenge is not to pick between two virtualization platforms. Your challenge is to define what your infrastructure should enable in the next decade.

Do you need more sovereignty and independence from hyperscalers? Do you need a platform that scales horizontally across the edge, data center, and public cloud with a consistent operating model? Do you need to keep costs predictable and avoid the complexity tax that often comes with layered products and licensing schemes?

Those are the real questions. None of them can be answered by a generic VMware vs. Nutanix LinkedIn post.

The Context Matters

A defense organization in Europe has different requirements than a SaaS startup in Silicon Valley. A government ministry evaluates sovereignty, compliance, and vendor control differently than a commercial bank that cares most about performance and transaction throughput.

The context (regulatory, organizational, and strategic) always matters more than product comparison charts. If someone claims otherwise, they probably have not spent enough time in the field, working with CIOs and architects who wrestle with these issues every day. Yes, (some) features are important and sometimes make the difference, but the big feature war days are over.

It’s About the Partner, Not Just the Platform

At the end of the day, the platform is only one piece of the puzzle. The bigger question is: who do you want as your partner for the next decade?

Technology shifts, products evolve, and roadmaps change. What remains constant is the relationship you build with the vendor or partner behind the platform. Can you trust them to execute your strategy with you? Can you rely on them when things go wrong? Do they share your vision for sovereignty, resilience, and simplicity or are they simply pushing their own agenda?

The answer to these questions matters far more than whether VMware or Nutanix has the upper hand in a feature battle.

A Better Conversation

Instead of writing another VMware vs. Nutanix blog, we should start a different conversation. One that focuses on operating models, trust, innovation, ecosystem integration, and how future-proof your platform is.

Nutanix, VMware, Red Hat, hyperscalers, all of them are building infrastructure and cloud stacks. The differentiator is not whether vendor A has a slightly faster vMotion or vendor B has one more checkbox in the feature matrix. The differentiator is how these platforms align with your strategy, your people, and your risk appetite, and whether you believe the partner behind it is one you can depend on.

Why This Matters Now

The market is in motion. VMware customers are forced to reconsider their roadmap due to the Broadcom acquisition and the associated licensing changes. Nutanix is positioning itself as a sovereign alternative with strong hybrid cloud credentials. Hyperscalers are pushing local zones and sovereign cloud initiatives.

In such a market, chasing simplistic comparisons is a waste of time. Enterprises should focus on long-term alignment with their cloud and data strategy. They should invest in platforms and partners that give them control, choice, and agility.

Final Thought

So let’s stop writing useless VMware vs. Nutanix comparisons. They don’t help anyone who actually has to make decisions at scale. Let’s raise the bar and bring back thought leadership to this industry. Share real experiences. Talk about strategy and outcomes. Show where platforms fit into the bigger picture of sovereignty, resilience, and execution. And most importantly: choose the partner you can trust to walk this path with you.

That is the conversation worth having. Everything else is just noise and bullshit.

Finally, People Start Realizing Sovereignty Is a Spectrum

For years, the discussion about cloud and digital sovereignty has been dominated by absolutes. It was framed as a black-and-white choice. Either you are sovereign, or you are not. Either you trust hyperscalers, or you don’t. Either you build everything yourself, or you hand it all over. But over the past two years, organizations, governments, and even the vendors themselves have started to recognize that this way of thinking doesn’t reflect reality. Sovereignty is now seen as a spectrum.

When I look at the latest Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure (DHI), this shift becomes even more visible. In the Leaders quadrant, we find AWS, Microsoft, Oracle, Broadcom (VMware), and Nutanix. Each of them is positioned differently, but they all share one thing in common: they now operate somewhere along this sovereignty spectrum. None of them is “fully” sovereign, and none is entirely dependent. The truth lies in between, and it is about how much control you want to retain versus how much you are willing to outsource. It is also possible to have multiple vendors and solutions co-existing.

Gartner MQ DHI 2025

The Bandwidth of Sovereignty

To make this shift more tangible, think of sovereignty as a bandwidth rather than a single point. On the far left, you give up almost all control and rely fully on global hyperscalers, following their rules, jurisdictions, and technical standards. On the far right, you own and operate everything in your data center, with full control but also full responsibility. Most organizations today are somewhere in between (using a mix of different vendors and clouds).

This bandwidth allows us to rate the leaders in the MQ not as sovereign or non-sovereign, but according to where they sit on the spectrum:

  • AWS stretches furthest toward global reach and scalability. They are still in the process of building a sovereign cloud, and until that becomes reality, none of their extensions (Outposts, Wavelength, Local Zones) can truly be seen as sovereign (please correct me if I am wrong). Their new Dedicated Local Zones bring infrastructure closer, but AWS continues to run the show, meaning sovereignty is framed through compliance, not operational autonomy.

  • Microsoft sits closer to the middle. With Microsoft’s Sovereign Cloud initiatives in Europe, they acknowledge the political and regulatory reality. Customers gain some control over data residency and compliance, but the operational steering remains with Microsoft (except for their “Sovereign Private Cloud” offering, which consists of Azure Local + Microsoft 365 Local).

  • Oracle has its EU Sovereign Cloud, which is already available today, and offerings like OCI Dedicated Region and Alloy that push sovereignty closer to customers. Still, these don’t offer operational autonomy, as Oracle continues to manage much of the infrastructure. For full isolation, Oracle provides Oracle Cloud Isolated Region and the smaller Oracle Compute Cloud@Customer Isolated (C3I). These are unique in the hyperscaler landscape and move Oracle further to the right.

  • Broadcom (VMware) operates in a different zone of the spectrum. With VMware’s Cloud Foundation stack, customers can indeed build sovereign clouds with operational autonomy in their own data centers. This puts them further right than most hyperscalers. But Gartner and recent market realities also show that dependency risks are not exclusive to AWS or Azure. VMware customers face uncertainty tied to Broadcom’s licensing models and strategic direction, which balances out their autonomy.

  • Google does not appear in the Leaders quadrant yet, but their Google Distributed Cloud (GDC) deserves mention. Gartner highlights how GDC is strategically advancing, winning sovereign cloud projects with governments and partners, and embedding AI capabilities on-premises. Their trajectory is promising, even if their current market standing hasn’t brought them into the top right yet.

  • Nutanix stands out by offering a comprehensive single product – the Nutanix Cloud Platform (NCP). Gartner underlines that NCP is particularly suited for sovereign workloads, hybrid infrastructure management, and edge multi-cloud deployments. Unlike most hyperscalers, Nutanix delivers one unified stack, including its “own hypervisor as a credible ESXi alternative”. That makes it possible to run a fully sovereign private cloud with operational autonomy, without sacrificing cloud-like agility and elasticity.
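To make the bandwidth idea more tangible, here is a minimal Python sketch that places vendors on a 0.0 to 1.0 scale (0.0 = fully dependent on a global hyperscaler, 1.0 = fully self-operated). The numeric positions are my own rough, illustrative assumptions for discussion, not Gartner data or vendor claims.

```python
# Illustrative sketch: sovereignty as a bandwidth from 0.0 (fully
# dependent on a global hyperscaler) to 1.0 (own and operate everything).
# The positions below are subjective assumptions, not measured data.

SPECTRUM = {
    "AWS": 0.2,                 # sovereignty framed through compliance
    "Microsoft": 0.35,          # sovereign initiatives, operations remain with vendor
    "Oracle": 0.55,             # isolated-region options push further right
    "Broadcom (VMware)": 0.7,   # operational autonomy, licensing dependency
    "Nutanix": 0.75,            # unified stack, fully sovereign private cloud
}

def position(vendor: str) -> str:
    """Return a coarse label for a vendor's place on the bandwidth."""
    score = SPECTRUM[vendor]
    if score < 0.33:
        return "hyperscaler-controlled"
    if score < 0.66:
        return "shared control"
    return "operational autonomy"
```

The point of the sketch is not the exact numbers, but that every vendor lands somewhere in between the extremes, which is exactly the argument of the spectrum view.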

Why the Spectrum Matters

This sovereignty spectrum changes how CIOs and policymakers make decisions. Instead of asking “Am I sovereign or not?”, the real question becomes:

How far along the spectrum do I want to be and how much am I willing to compromise for flexibility, cost, or innovation?

It is no longer about right or wrong. Choosing AWS does not make you naive. Choosing Nutanix does not make you paranoid. Choosing Oracle does not make you old-fashioned. Choosing Microsoft doesn’t make you a criminal. Each decision reflects an organization’s position along the bandwidth, balancing risk, trust, cost, and control.

Where We Go From Here

The shift to this spectrum-based view has major consequences. Vendors will increasingly market not only their technology but also their place on the sovereignty bandwidth. Governments will stop asking for absolute sovereignty and instead demand clarity about where along the spectrum a solution sits. And organizations will begin to treat sovereignty not as a one-time decision but as a dynamic posture that can move left or right over time, depending on regulation, innovation, and geopolitical context.

The Gartner MQ shows that the leaders are already converging around this reality. The differentiation now lies in how transparent they are about it and how much choice they give their customers to slide along the spectrum. Sovereignty, in the end, is not a fixed state. It is a journey.

Moving into Any Cloud Is Easy. Leaving Is the Hard Part

For more than a decade, the industry has been focused on one direction. Yes, into the cloud. Migration projects, cloud-first strategies, and transformation initiatives all pointed the way toward a future where workloads would move out of data centers and into public platforms. Success was measured in adoption speed and the number of applications migrated. Very few people stopped to ask a more uncomfortable question: What if one day we needed to move out again?

This question, long treated as hypothetical, has now become a real consideration for many organizations. Cloud exit strategies, once discussed only at the margins of risk assessments, are entering boardroom conversations. They are no longer about distrust or resistance to cloud services, but about preparedness and strategic flexibility.

Part of the challenge is perception. In the early years, the cloud was often viewed as a one-way street. Once workloads were migrated, it was assumed they would stay there indefinitely. The benefits were obvious (agility, global reach, elastic scale, and a steady stream of innovation). Under such conditions, why would anyone think about leaving? But reality is rarely that simple. Over time, enterprises discovered that circumstances change. Costs, which in the beginning looked predictable, began to rise, especially for workloads that run continuously. Regulations evolved, sometimes requiring that data be handled differently or stored in new ways. Geopolitical factors entered the discussion, adding new dimensions of risk and dependency. What once felt like a permanent destination started to look more like another stop in a longer journey.

Exiting the cloud, however, is rarely straightforward. Workloads are not just applications; they are deeply tied to the data they use. Moving terabytes or petabytes across environments is slow, expensive, and operationally challenging. The same is true for integrations. Applications are connected to identity systems, monitoring frameworks, CI/CD pipelines, and third-party APIs. Each of these dependencies creates another anchor that makes relocation harder. Licensing and contracts add another layer of complexity, where the economics or even the legal terms of use can discourage or delay migration. And finally, there are the human and process elements. Teams adapt their ways of working to a given platform, build automation around its services, and shape their daily operations accordingly. Changing environments means changing habits, retraining staff, and, in some cases, restructuring teams.

Despite these obstacles, exit strategies are becoming more important. Rising costs are one reason, particularly for predictable workloads, where running them elsewhere might be more economical. Compliance and sovereignty requirements are another. New rules can suddenly make a deployment non-compliant, forcing organizations to rethink their choices. A third driver is the need for strategic flexibility. Many leaders want to ensure they are not overly dependent on a single provider or operating model. Having the ability to relocate workloads when circumstances demand it has become a necessity.

This is why exit strategies should be seen less as a technical exercise and more as a strategic discipline. The goal is not to duplicate everything or keep environments constantly synchronized, which would be wasteful and unrealistic. Instead, the goal is to maintain options. Options to repatriate workloads when economics dictate, options to move when compliance requires, and options to expand when innovation opportunities emerge. The best exit strategies are not documents that sit on a shelf. They are capabilities built into the way an enterprise designs, operates, and governs its IT landscape.

History in IT shows why this matters. Mainframes, proprietary UNIX systems, and even some early virtualization platforms all created situations of deep dependency. At the time, those technologies delivered enormous value. But eventually, organizations needed to evolve and often found themselves constrained. The lesson is not to avoid new technologies, but to adopt them with foresight, knowing that change is inevitable. Exit strategies are part of that foresight.

Looking ahead, enterprises can prepare by building in certain principles. Workloads that are critical to the business should be designed with portability in mind, even if not every application needs that level of flexibility. Data should be separated from compute wherever possible, because data gravity is one of the biggest barriers to mobility. And governance should be consistent across environments, so that compliance, security, and cost management follow workloads rather than being tied to a single location. These principles do not mean abandoning the cloud or holding it at arm’s length. On the contrary, they make the cloud more sustainable as a strategic choice.

Cloud services will continue to play a central role in modern IT. The benefits are well understood, and the pace of innovation will ensure that they remain attractive. But adaptability has become just as important as adoption. Having an exit strategy is not a sign of mistrust. It is a recognition that circumstances can change, and that organizations should be prepared. In the end, the key question is no longer only how fast you can move into the cloud, but also how easily you can move out again if you ever need to. And this includes the private cloud as well.

What Is Sovereign Enough for You?

Digital sovereignty has become one of the defining topics for governments, regulators, and organizations that operate with sensitive data. Everyone wants to know how much control they truly have over their cloud environment. But the question that should be asked first is: what is sovereign enough for you? Because sovereignty is not a binary choice. It comes in different layers, with different levels of autonomy, control, and dependencies on global vendors like Oracle.

Oracle has designed its portfolio with exactly this variety in mind. Not every government or regulated organization needs or wants a fully isolated environment. Some are fine with Oracle managing the service end-to-end, while others require absolute control down to operations and staffing. Let’s walk through the different operating models and connectivity dependencies, so you can decide where your needs for sovereignty fit in.

1) Building a Full Sovereign Cloud With Local Personnel

At the very top of the sovereignty spectrum sits the option to build a national or regional sovereign cloud that is completely separated from Oracle’s global public regions (separate realms). These environments are staffed and operated by locally cleared personnel, and the legal entity running the cloud sits within the country or region itself.

A graphic depicting OCI realms with separation.

Examples today are the UK Government & Defence Cloud and the Oracle EU Sovereign Cloud. Here, sovereignty is not only a technical matter but also an organizational one. Governments get the guarantee that operations, compliance, and support are entirely bound by local regulations and cannot be accessed or influenced from outside.

Full operational autonomy by design. The control plane, the data plane, and the people managing the systems are all local. Oracle provides the technology, but the control and day-to-day operations are delegated to the sovereign entity.

From a policy-maker’s perspective, this is the gold standard. Independence from foreign jurisdictions, complete local control of staff, audits, and processes, and guaranteed resilience even without global connectivity. It comes with the highest costs and commitments, but also the strongest assurance.

2) OCI Dedicated Region

OCI Dedicated Region is a full public cloud region deployed directly into a customer’s data center. It includes all Oracle Cloud services, runs on dedicated infrastructure, and provides the same service catalog as Oracle’s global regions.

Diagram of an OCI Dedicated Region deployment.

From a sovereignty perspective, this model ensures that data never leaves the country if the customer so desires. The region is connected to Oracle’s global backbone (still separate realms), which allows it to remain consistent in updates, operations, and service integration. However, the control plane still depends on Oracle. Updates, patches, and lifecycle management (~15,000 changes per month) are performed by Oracle engineers, who operate under strict contracts and compliance rules.

Examples in practice:

  • Vodafone has deployed six Dedicated Regions across Europe (Ireland, Italy, Germany) to modernize core systems, ensure compliance, and keep data inside the EU while leveraging Oracle’s global innovation cycle.

  • Avaloq, the Swiss banking software leader, runs its own Dedicated Region to provide compliant, modern infrastructure for financial services, combining local control of data with Oracle’s managed operations.

Dedicated Region is often sovereign enough: sensitive workloads stay in-country, under national regulation, while Oracle ensures consistency, security, and ongoing modernization. Operational autonomy is not fully local, but the trade-off is efficiency and scale.

3) Oracle Alloy

Oracle Alloy takes sovereignty one step further by introducing a cloud-as-a-service model for partners. Service providers, system integrators, or governments can license Oracle’s technology stack and operate their own branded cloud. Alloy provides the full OCI service catalog but allows the partner to take over customer relationships, billing, compliance, and front-line operations.

Diagram: Becoming an Oracle Alloy partner.

This is highly attractive for countries and organizations that want to build their own cloud business or national platforms without developing hyperscale technology themselves. Still, Alloy maintains a technical tether to Oracle. While the partner gains control over operations and branding, Oracle remains in charge of certain aspects of lifecycle management, support escalation (tier 2 and tier 3 support), and roadmap alignment.

Examples in practice:

  • In Saudi Arabia, telecom leader stc is using Alloy to deliver sovereign cloud services that meet local data residency and regulatory requirements.

  • In Japan, Fujitsu uses Oracle Alloy to provide sovereign cloud and AI capabilities tailored to Japanese compliance needs.

  • In Italy, the Polo Strategico Nazionale (PSN) leverages Alloy as part of its managed public cloud offering to deliver secure and compliant cloud services to public administrations.

Alloy strikes a strong balance: It empowers local ecosystems to run a nationally branded sovereign cloud, keeping control of data and compliance, while Oracle provides the innovation foundation. It is not full independence, but it delivers sovereignty in practice for many use cases.

4) Oracle Compute Cloud@Customer

Oracle Compute Cloud@Customer (C3) is a private cloud appliance that delivers OCI services inside your data center. It is ideal for customers who want cloud elasticity and API compatibility with OCI, but who also need workloads to run locally due to latency, compliance, or data residency requirements.

However, the control plane is still managed by Oracle (in a public OCI region or connected to the EU Sovereign Cloud). This means patching, upgrades, and critical operations are carried out by Oracle teams, even though the infrastructure is located in your facility. Connectivity back to Oracle is also required for normal lifecycle operations.

An additional strength of C3 is its seamless integration with OCI Dedicated Region. Customers can connect their local C3 instances to their Dedicated Region, effectively combining on-premises elasticity with the scale and service catalog of a full cloud region running inside their country. This creates a flexible architecture where workloads can be placed optimally on C3 for local control and performance, or on Dedicated Region for broader cloud capabilities.

Oracle Compute Cloud@Customer key capabilities

This model is a pragmatic compromise. It guarantees data sovereignty by keeping workloads in-country, but governments or regulated organizations don’t need to staff or manage the complexity of lifecycle operations. With the option to connect to a Dedicated Region, C3 also opens the door to a multi-layer sovereign cloud strategy, blending local control with the breadth of a national-scale cloud.

5) Oracle Cloud Isolated Region (OCIR)

Oracle Cloud Isolated Region (OCIR) is a specialized deployment model designed for environments that require enhanced security, autonomy, and in-country governance at national or organizational scale. Like a Dedicated Region, it is a full OCI cloud region deployed on-premises, but it is operated in a more restricted and isolated mode, with minimized dependency on Oracle’s global backbone.

Oracle Cloud Isolated Region

Example in practice: Singapore’s Defence Science and Technology Agency (DSTA) has selected OCIR to support the Ministry of Defence (MINDEF) and the Singapore Armed Forces (SAF). The deployment provides an air-gapped, sovereign cloud with high-performance compute and AI capabilities to strengthen C4 (Command, Control, Communications, and Computers) systems. It ensures secure operations, scalability, and rapid decision-making, all within a fully national framework.

OCIR is particularly suited to government ministries, defense, or intelligence organizations that want a nationally hosted cloud hub with strong autonomy, while still retaining the scalability and service richness of a hyperscale platform.

OCIR represents a strategic anchor, providing a sovereign cloud backbone for an entire country or authority that combines national-scale control with the innovation and reliability of Oracle Cloud.

6) Oracle Compute Cloud@Customer Isolated (C3I)

Oracle Compute Cloud@Customer Isolated (C3I) is the solution for organizations that need the highest degree of operational autonomy without running a full sovereign cloud program.

C3I is deployed into a customer’s data center and runs in an air-gapped configuration. The control plane is hosted locally and does not rely on Oracle’s global backbone. This means the customer or the designated authority is in charge of lifecycle operations, updates, and connectivity policies. Oracle provides the technology stack and ensures that the platform can evolve, but operations and control are fully handed to the customer/partner.

Scenario for defense organizations: Imagine a defense ministry operating an Oracle Cloud Isolated Region (OCIR) as its central sovereign cloud hub. Non-sensitive workloads such as HR, logistics, or training could run on regular C3 instances connected to OCIR’s control plane. At the same time, highly sensitive or tactical workloads, such as battlefield data analysis, mission planning, or classified operations, could be deployed on C3I instances. These isolated instances would be managed by local defense teams in the field, operating autonomously even in disconnected or hostile environments. This dual approach allows governments to combine centralized governance through OCIR with operational independence in mission-critical scenarios.

Oracle Compute Cloud@Customer Isolated

C3I represents the pinnacle of autonomy: The ability to maintain full local control for sensitive workloads while integrating into a broader sovereign cloud architecture anchored by OCIR.

Where Do You Stand on the Spectrum?

When thinking about sovereignty, it is essential to recognize that not every organization needs the same level of control. For some, a Dedicated Region is already sovereign enough, as it keeps all data within national borders while still benefiting from Oracle’s global expertise. For others, Alloy provides the right balance of local branding, compliance, and ecosystem building. For governments requiring national-scale autonomy, OCIR acts as a sovereign cloud hub. And for those with the most demanding tactical requirements, C3I ensures full local independence.

So the question remains: What is sovereign enough for you? The answer depends on your data sensitivity, regulatory environment, budget, and strategic goals. Oracle has built a portfolio that allows you to choose. The challenge is to define your threshold and then pick the model that aligns with it.

The Future of Hybrid Multi-Cloud

Most organizations today operate a mix of on-premises systems, private cloud platforms, and multiple public clouds. This is not always the result of an overall strategy, but often the consequence of acquisitions, project-specific decisions, or the practicalities of compliance and regulation.

The result is a landscape full of both choice and complexity.

Hybrid multi-cloud has become the de facto reality for many in IT. Enterprises already live in a world where applications and data are distributed across different infrastructures, each with its own tools, contracts, and governance models. What used to be a patchwork born out of necessity is now evolving into a deliberate strategy: making use of the strengths of different environments while trying to tame the complexity they bring.

Why Hybrid Multi-Cloud Is Here to Stay

Several forces are driving this development. Regulations around data residency and sovereignty mean that some workloads must stay within national borders. Business-critical systems that are tightly integrated with legacy processes cannot simply be moved to the cloud overnight. At the same time, organizations want to tap into the rapid innovation of hyperscale platforms or make use of specialized services like AI or advanced analytics. And then there is the matter of resilience. Distributing workloads across different infrastructures helps reduce dependency on a single provider and lowers risk.

But hybrid multi-cloud is not a silver bullet. Managing multiple platforms comes with a cost. Operations teams need to juggle different interfaces, tools, and billing models. Cost transparency across environments is still a weak point. Finding experts who can handle everything from traditional infrastructure to cloud-native architectures is difficult. And perhaps most importantly, moving data across platforms and regions remains expensive and technically challenging.

A Pragmatic “Lift and Learn” Approach

One of the most common questions when discussing hybrid multi-cloud is how to actually get there. Large application portfolios can’t be transformed overnight, and ambitious modernization programs often collapse under their own weight. That’s why the “lift and learn” methodology is so compelling.

In essence, this approach unfolds in two focused phases:

Phase 1 – Lift and Shift
Begin by moving applications as-is. No re-architecture, no delay, no over-analysis. This speeds up cloud adoption and often compresses migration timelines to just a few months. The result? Systems are live in the cloud quickly, giving organizations both visibility and breathing room.

Phase 2 – Learn and Modernize
With the time savings from the first phase, IT teams gain the freedom to explore. They can understand native cloud services, build up needed skills, and gradually rethink how apps can be modernized. On their own schedule, without pressure.

In 2022, I explored this pattern in an article describing how VMware’s “lift and learn” approach gave teams crucial time to upskill and experiment before starting innovation. The lesson remains valid today. Successful cloud strategies respect both human capacity and technological ambition.

Nuanced Workload Placement

Cloud strategies are also maturing. What started as “cloud-first” enthusiasm has shifted toward more pragmatic “cloud-smart” models, and in some cases is now labeled “repatriation”. Surveys often highlight the trend of workloads moving back on-premises, but the reality is far more nuanced. This is not about retreating from the cloud, but about placing workloads more intelligently!

Repatriation is not reversal. You really have to understand this. Organizations are not abandoning cloud, they are rebalancing. Certain workloads, particularly steady-state, predictable, or data-heavy ones, may be more cost-effective or secure on-premises. This is nothing new. Others, such as experimental, elastic, or short-lived applications, remain ideal for public cloud. Some move to the edge or sovereign clouds to meet latency or compliance requirements.

What matters is building a framework for placement decisions. Cost, performance, data proximity, compliance, and strategic flexibility all play a role. Workloads need to live where they make the most sense and where they deliver the most value.

This is the natural evolution of hybrid multi-cloud and not a binary choice between “cloud or not”, but an ongoing, data-driven exercise in finding the right home for each workload.
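A placement framework like the one described above can be sketched as a simple weighted-scoring exercise. This is a minimal, illustrative Python sketch: the criteria come from the text (cost, performance, data proximity, compliance, strategic flexibility), while the weights and ratings are hypothetical assumptions an organization would replace with its own.

```python
# Minimal sketch of a workload-placement framework: score each candidate
# location against weighted criteria. Weights and ratings below are
# illustrative assumptions, not recommendations.

WEIGHTS = {
    "cost": 0.3,
    "performance": 0.2,
    "data_proximity": 0.2,
    "compliance": 0.2,
    "flexibility": 0.1,
}

def placement_score(ratings: dict) -> float:
    """Weighted sum; `ratings` maps each criterion to a 0-10 value."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def best_placement(candidates: dict) -> str:
    """Return the candidate location with the highest weighted score."""
    return max(candidates, key=lambda loc: placement_score(candidates[loc]))

# Example: a steady-state, data-heavy workload might be rated like this.
candidates = {
    "public_cloud":  {"cost": 4, "performance": 7, "data_proximity": 5,
                      "compliance": 6, "flexibility": 9},
    "private_cloud": {"cost": 7, "performance": 8, "data_proximity": 9,
                      "compliance": 9, "flexibility": 6},
}
```

With these assumed ratings the private cloud wins, which matches the text’s point that steady-state, data-heavy workloads often belong on-premises, while an elastic, short-lived workload would score very differently.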

Workload Mobility and Cloud-Exit Strategies

Hybrid multi-cloud is not only about placing workloads intelligently. It is also about being able to move them when circumstances change. This capability, often overlooked in early cloud journeys, is becoming critical as organizations experience new pressures and triggers that may force them to rethink their choices.

VMware Workload Mobility

Source: https://www.cloud13.ch/2024/08/05/distributed-hybrid-infrastructure-offerings-are-the-new-multi-cloud/ 

Why Workload Mobility Matters
In a world where business conditions, regulations, and technologies can change rapidly, no workload placement decision should be permanent. The ability to move applications and data across infrastructures, between clouds, or back to on-premises, is a strategic safeguard. It ensures that decisions made today do not become tomorrow’s limitations.

Exit Triggers – Why Organizations Rethink Placement
Several factors can trigger the need for a cloud-exit or relocation:

  • Costs: Cloud pricing models are attractive for elastic workloads but can spiral out of control for long-running, predictable systems. Unexpected egress fees or sudden price increases can make repatriation or rebalancing attractive. Keep in mind that price increases can also happen on-premises, as I described in the article “Sovereign Clouds and the VMware Earthquake: Dependency Isn’t Just a Hyperscaler Problem”.

  • Regulatory and Sovereignty Requirements: New or changing laws may force workloads to be moved closer to home, into national clouds, or into sovereign environments under local control.

  • Access to Innovation: Sometimes the opposite is true. Workloads may need to leave legacy environments to take advantage of innovations in AI, analytics, or industry-specific platforms.

  • Strategic Flexibility: Businesses may want to avoid overdependence on a single provider to maintain leverage and resilience, mitigating what is often called concentration risk.

Building Mobility into the Architecture
Workload mobility does not happen automatically. It requires foresight in architecture and governance:

  • Using containerization and orchestration platforms to abstract workloads from specific infrastructures

  • Following a “consistent infrastructure, consistent operations” approach

  • Employing infrastructure-as-code and automation for repeatable deployments

  • Designing data strategies that minimize lock-in and enable portability

  • Establishing clear governance on when and how exit strategies should be executed

Mobility is not just about planning for the worst case. It is about creating long-term agility, so that organizations can move workloads toward opportunity as easily as they can move them away from risk.
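The exit triggers above can also be codified as part of governance, so placement reviews are raised by data rather than by anecdote. The sketch below is illustrative only; the trigger names follow the article, but the thresholds and the workload attributes are hypothetical assumptions that every organization would define for itself.

```python
# Illustrative sketch: exit triggers expressed as governance checks.
# Thresholds (1.5x cost growth, 80% provider share) are made-up examples.

from dataclasses import dataclass

@dataclass
class WorkloadState:
    monthly_cost: float      # current cloud spend
    baseline_cost: float     # spend assumed at placement time
    residency_ok: bool       # still meets data residency rules
    provider_share: float    # fraction of the estate on this provider

def exit_triggers(w: WorkloadState) -> list:
    """Return the triggers that warrant a placement review."""
    triggers = []
    if w.monthly_cost > 1.5 * w.baseline_cost:   # costs spiraling
        triggers.append("cost")
    if not w.residency_ok:                       # regulatory change
        triggers.append("regulatory")
    if w.provider_share > 0.8:                   # concentration risk
        triggers.append("concentration_risk")
    return triggers

state = WorkloadState(monthly_cost=18_000, baseline_cost=10_000,
                      residency_ok=True, provider_share=0.85)
print(exit_triggers(state))  # ['cost', 'concentration_risk']
```

A review triggered this way does not force a move; it forces a decision, which is exactly what the governance point above is about.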

Business Continuity and Cyber Recovery

Another dimension where hybrid multi-cloud demonstrates its value is business continuity and cyber resilience. Outages, ransomware attacks, or large-scale breaches are operational risks every CIO must plan for. In this context, having access to more than one cloud or infrastructure environment can make the difference between prolonged downtime and rapid recovery.

A Second Cloud as a Safety Net
Traditionally, disaster recovery was designed around secondary data centers. In a hybrid multi-cloud world, however, another cloud can serve as the recovery target. By replicating critical data and workloads into a separate cloud environment, organizations reduce their exposure to a single point of failure. If the primary environment is compromised, whether by technical outage or cyberattack, workloads can be restored in the alternate cloud, ensuring continuity of essential services.

Cyber Recovery and Forensics
Hybrid multi-cloud also opens up new options for cyber recovery. A secondary cloud can be isolated from day-to-day operations and act as a clean recovery environment. In case of a ransomware attack, for example, this environment becomes the trusted place to validate backups, perform integrity checks, and safely restore systems.

Source: https://www.cohesity.com/blogs/isolated-recovery-environments-the-next-thing-in-cyber-recovery/ 

It can also serve as a forensic sandbox, where compromised systems are analyzed without risking production operations.

Planning for the Unthinkable
The lesson here is clear. Hybrid multi-cloud is not only about optimization, innovation, or cost control. It is also about resilience in the face of growing threats. By designing continuity and recovery strategies that span across different providers, organizations build insurance against both natural outages and man-made disruptions.

Looking Ahead

The future will not remove this complexity, but it will change how we deal with it. We will see more organizations adopting a unified cloud operating model, where it no longer matters whether an application runs on-premises, in a private cloud, or in a public one. Abstraction and automation will take away much of the manual overhead, with AI-driven operations playing a key role. Infrastructure will become more decentralized, extending into regional clouds and edge locations. And with that, the focus will shift even further away from servers and workloads, toward outcomes and business value.

What Decision-Makers Should Keep in Mind

Hybrid multi-cloud is not about choosing the “best” cloud. It is about designing IT architectures that remain flexible in a constantly changing environment. Decision-makers should start from business outcomes, not infrastructure preferences. They should think about resilience, sovereignty, and innovation together – not in isolation. Skills and people remain the most critical success factor.

Technology can abstract a lot, but it cannot replace good governance and a strong IT culture.

Conclusion

Hybrid multi-cloud is an ongoing journey. Organizations that embrace flexibility, invest in skills, and build governance models that span across different environments will be best prepared. The future of IT is about mastering many clouds, without being mastered by them.

Swiss Government Cloud – Possible Paths, Components and Vendors Compared

Disclaimer: This article reflects my personal opinion and not that of my employer.

The discussion around a Swiss Government Cloud (SGC) has gained significant momentum in recent months. Digital sovereignty has become a political and technological necessity with immediate relevance. Government, politics, and industry are increasingly asking how a national cloud infrastructure could look. This debate is not just about technology, but also about governance, funding, and the ability to innovate.

Today’s starting point is relatively homogeneous. A significant portion of Switzerland’s public sector IT infrastructure still relies on VMware. This setup is stable but not designed for the future. When we talk about the Swiss Government Cloud, the real question is how this landscape evolves from pragmatic use of the public cloud to the creation of a fully sovereign cloud, operated under Swiss control.

For clarity: I use the word “component” (Stufe), in line with the Federal Council’s report, to describe the different maturity levels and deployment models.

A Note on the Federal Council’s Definition of Digital Sovereignty

According to the official report, digital sovereignty includes:

  • Data and information sovereignty: Full control over how data is collected, stored, processed, and shared.

  • Operational autonomy: The ability of the Federal Office of Information Technology (FOITT) to run systems with its own staff for extended periods, without external partners.

  • Jurisdiction and governance: Ensuring that Swiss law, not foreign regulation, defines control and access.

This definition is crucial when assessing whether a cloud model is “sovereign enough.”

Component 1 – Public Clouds Standard

The first component describes the use of the public cloud in its standard form. Data can be stored anywhere in the world, but not necessarily in Switzerland. Hyperscalers such as AWS, Microsoft Azure, or Oracle Cloud Infrastructure (OCI) offer virtually unlimited scalability, the highest pace of innovation, and a vast portfolio of services.

Amazon Web Services (AWS) is the broadest platform by far, providing access to an almost endless variety of services and already powering workloads from MeteoSwiss.

Microsoft Azure integrates deeply with Microsoft environments, making it especially attractive for administrations that are already heavily invested in Microsoft 365 and MS Teams. This ecosystem makes Azure a natural extension for many public sector IT landscapes.

Oracle Cloud Infrastructure (OCI), on the other hand, emphasizes efficiency and infrastructure, particularly for databases, and is often more transparent and predictable in terms of cost.

However, component 1 must be seen as an exception. For sovereign and compliant workloads, there is little reason to rely on a global public cloud without local residency or control. This component should not play a meaningful role, except in cases of experimental or non-critical workloads.

And if there really is a need for workloads to run outside of Switzerland, I would recommend using alternatives like the Oracle EU Sovereign Cloud instead of a regular public region, as it offers stricter compliance, governance, and isolation for European customers.

Component 2a – Public Clouds+

Here, hyperscalers offer public clouds with Swiss data residency, combined with additional technical restrictions to improve sovereignty and governance.

AWS, Azure, and Oracle already operate cloud regions in Switzerland today. They promise that data will not leave the country, often combined with features such as tenant isolation or extra encryption layers.

The advantage is clear. Administrations can leverage hyperscaler innovation while meeting legal requirements for data residency.

Component 2b – Public Cloud Stretched Into FOITT Data Centers

This component goes a step further. Here, the public cloud is stretched directly into local data centers. In practice, solutions such as AWS Outposts, Azure Local (formerly Azure Stack), Oracle Exadata & Compute Cloud@Customer (C3), or Google Distributed Cloud place hyperscaler-managed infrastructure physically inside FOITT facilities.

Diagram showing an Outpost deployed in a customer data center and connected back to its anchor AZ and parent Region

This creates a hybrid cloud. APIs and management remain identical to the public cloud, while data is hosted directly by the Swiss government. For critical systems that need cloud benefits but under strict sovereignty (which differs between solutions), this is an attractive compromise.

The vendors differ significantly in the breadth of services they bring on-premises. What they share: costs and operations are predictable, since most of these models are subscription-based.

Component 3 – Federally Owned Private Cloud Standard

Not every path to the Swiss Government Cloud requires a leap into hyperscalers. In many cases, continuing with VMware (Broadcom) represents the least disruptive option, providing the fastest and lowest-risk migration route. It lets institutions build on existing infrastructure and leverage known tools and expertise.

Still, the cloud dependency challenge isn’t exclusive to hyperscalers. Relying solely on VMware also creates a dependency. Whether with VMware or a hyperscaler, dependency remains dependency and requires careful planning.

Another option is Nutanix, which offers a modern hybrid multi-cloud solution with its AHV hypervisor. However, migrating from VMware to AHV is in practice as complex as moving to another public cloud. You need to replatform, retrain staff, and become experts again.

Hybrid Cloud Management diagram

Both VMware and Nutanix offer valid hybrid strategies:

  • VMware: Minimal disruption, continuity, familiarity, low migration risk. Can run VMware Cloud Foundation (VCF) on AWS, Azure, Google Cloud and OCI.

  • Nutanix: Flexibility and hybrid multi-cloud potential, but migration is similar to changing cloud providers.

Crucially, changing platforms or introducing a new cloud is harder than it looks. Organizations are built around years of tailored tooling, integrations, automation, security frameworks, governance policies, and operational familiarity. Adding or replacing another cloud often multiplies complexity rather than reduces it.

Gartner Magic Quadrant for Distributed Hybrid Infrastructure

Source: https://www.oracle.com/uk/cloud/distributed-cloud/gartner-leadership-report/ 

But sometimes it is just about costs, new requirements, or new partnerships that result in a big platform change. Let’s see how the Leaders quadrant of the Magic Quadrant for “Distributed Hybrid Infrastructure” changes in the coming weeks or months.

Why Not Host A “Private Public Cloud” To Combine Components 2 and 3?

This possibility (let’s call it component 3.5) goes beyond the traditional framework of the Federal Council. In my view, a “private public cloud” represents the convergence of public cloud innovation with private cloud sovereignty.

“The hardware needed to build on-premises solutions should therefore be obtained through a ‘pay as you use’ model rather than purchased. A strong increase in data volumes is expected across all components.”

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.4 (translated from German)

Solutions like OCI Dedicated Region or Oracle Alloy can deliver the entire Oracle Cloud stack (IaaS, PaaS, databases, analytics) on Swiss soil. Unlike AWS Outposts or Azure Local, these solutions do not come with a limited set of cloud services; they are identical to a full public cloud region with all cloud services, only hosted and operated locally in a customer’s data center.

OCI Dedicated Region Overview

Additionally, Oracle offers a compelling hybrid path. With OCVS (Oracle Cloud VMware Solution) on OCI Dedicated Region or Alloy, organizations can lift-and-shift VMware workloads unchanged to this local public cloud based on Oracle Cloud Infrastructure, then gradually modernize selected applications using Oracle’s PaaS services like databases, analytics, or AI.

OCI Dedicated Region and OCVS

This “lift-and-modernize” journey helps balance risk and innovation while ensuring continuous operational stability.

And this makes component 3.5 unique. It provides the same public-cloud experience (continuous updates and an innovation pace of 300-500 changes per day, or 10’000-15’000 per month) but under sovereign control. With Alloy, the FOITT could even act as a Cloud Service Provider for cantons, municipalities, and hospitals, creating a federated model without each canton building its own cloud.

Real-world Swiss example: Avaloq was the first customer in Switzerland to deploy an OCI Dedicated Region, enabling it to migrate clients and offer cloud services to other banks in Switzerland. This shows how a private public cloud can serve highly regulated industries while keeping data and operations in-country.

In this model:

  • Component 2b becomes redundant: Why stretch a public cloud into FOITT data centers if a full cloud can already run there?

  • Component 2 can also become partly obsolete: At least for workloads running in a local OCI Dedicated Region (or Alloy) deployment, there is no need for parallel deployments in public regions, since everything can run sovereignly on a dedicated OCI region in Switzerland (while workloads such as MeteoSwiss on AWS remain in component 2).

A private public cloud thus merges sovereignty with innovation while reducing the need for parallel cloud deployments.

Another Possibility – Combine Capabilities From A Private Public Cloud With Component 2b

Another possibility could be to combine elements of components 3.5 and 2b into a federated architecture: Using OCI Dedicated Region or Oracle Alloy as the central control plane in Switzerland, while extending the ecosystem with Oracle Compute Cloud@Customer (C3) for satellite or edge use cases.

Oracle Compute Cloud@Customer – Key Features

This creates a hub-and-spoke model:

  • The Dedicated Region or Alloy in Switzerland acts as the sovereign hub for governance and innovation.

  • C3 appliances extend cloud services into distributed or remote locations, tightly integrated but optimized for local autonomy.

A practical example would be Swiss embassies around the world. Each embassy could host a lightweight C3 edge environment, connected securely back to the central sovereign infrastructure in Switzerland. This ensures local applications run reliably, even with intermittent connectivity, while remaining part of the overall sovereign cloud ecosystem.

By combining these capabilities, the FOITT could:

  • Use component 2b’s distributed deployment model (cloud stretched into local facilities)

  • Leverage component 3.5’s VMware continuity path with OCVS for easy migrations

  • Rely on component 3.5’s sovereignty and innovation model (private public cloud with full cloud parity)

This blended approach would allow Switzerland to centralize sovereignty while extending it globally to wherever Swiss institutions operate.

What If A Private Public Cloud Isn’t Sovereign Enough?

It is possible that policymakers or regulators could conclude that solutions such as OCI Dedicated Region or Oracle Alloy are not “sovereign enough”, given that their operational model still involves vendor-managed updates and tier-2/3 support from Oracle.

In such a scenario, the FOITT would have fallback options:

  • Maintain a reduced local VMware footprint: The FOITT could continue to run a smaller, fully local VMware infrastructure that is managed entirely by Swiss staff. This would not deliver the breadth of services or pace of innovation of a hyperscaler-based sovereign model, but it would provide the maximum possible operational autonomy and align closely with Switzerland’s definition of sovereignty.

  • Leverage component 4 from the FDJP: Use the existing “Secure Private Cloud” (SPC) from the Federal Department of Justice and Police (FDJP). But I am not sure if the FDJP wants to become a cloud service provider for other departments.

That said, these fallback strategies don’t necessarily exclude the use of OCI. Dedicated Region or Alloy could co-exist with a smaller VMware footprint, giving the FOITT both innovation and control. Alternatively, Oracle could adapt its operating model to meet Swiss sovereignty requirements – for example, by transferring more operational responsibility (tier-2 support) to certified Swiss staff.

This highlights the core dilemma: The closer Switzerland moves to pure operational sovereignty, the more it risks losing access to the innovation and agility that modern cloud architectures bring. Conversely, models like Dedicated Region or Alloy can deliver a full public cloud experience on Swiss soil, but they require acceptance that some operational layers remain tied to the vendor.

Ultimately, the Swiss Government Cloud must strike a balance between autonomy and innovation, and the decision about what “sovereign enough” means will shape the entire strategy.

Conclusion

The Swiss Government Cloud will not be a matter of choosing one single path. A hybrid approach is realistic: Public cloud for agility and non-critical workloads, stretched models for sensitive workloads, VMware or Nutanix for hybrid continuity, sovereign or “private public cloud” infrastructures for maximum control, and federated extensions for edge cases.

But it is important to be clear: Hybrid cloud or even hybrid multi-cloud does not automatically mean sovereign. What it really means is that a sovereign cloud must co-exist with other clouds – public or private.

To make this work, Switzerland (and each organization within it) must clearly define what sovereignty means in practice. Is it about data and information sovereignty? Is it really about operational autonomy? Or about jurisdiction and governance?

Only with such a definition can a sovereign cloud strategy deliver on its promise, especially when it comes to on-premises infrastructure, where control, operations, and responsibilities need to be crystal clear.

PS: Of course, the SGC is about much more than infrastructure. The Federal Council’s plan also touches networking, cybersecurity, automation and operations, commercial models, as well as training, consulting, and governance. In this article, I deliberately zoomed in on the infrastructure side, because it’s already big, complex, and critical enough to deserve its own discussion. And just imagine sitting on the other side, having to go through all the answers of the upcoming tender. Not an easy task.