Why I Left Oracle and Joined Nutanix

There are moments in a career when you stop and realise that the path beneath your feet is no longer the path you set out to walk. Sometimes the change is subtle, almost invisible; other times it becomes impossible to ignore. For me, this moment arrived somewhere between large public sector strategy discussions, another round of organizational changes, and one more conversation about “global priorities” that had little connection to the needs of Swiss or European sovereign infrastructure.

I spent a meaningful year at Oracle. I met great people and learned what it means to bring a (dedicated) hyperscale cloud into regulated environments. OCI Dedicated Region is still one of the most interesting and ambitious engineering efforts in the cloud industry. But at some point, I realized that my personal mission of digital sovereignty, open choice, and the empowerment of customers started to diverge from where I felt the company was going.

Not wrong. Not bad. Just different. And that difference grew large enough that it became impossible to pretend we were still walking in the same direction.

Sovereignty has always been my north star

Years before Oracle, long before the idea of sovereign clouds became a political agenda, I cared about the question of who controls technology. My time at VMware shaped that perspective deeply. Private cloud, infrastructure independence, and the ability for organizations to define their own architecture rather than renting someone else’s world.

Even during my time at Oracle, I continued to view everything through that sovereignty lens. Dedicated Region was my way of reconciling public cloud innovation with local control, which is a compelling proposition in many cases. But it became increasingly clear to me that the broader industry narrative was drifting toward full-stack centralization. Clouds wanted to become operating systems. Platforms wanted to become monopolies. The idea that customers deserve autonomy was becoming a footnote.

At some point, you have to ask yourself: Are you still aligned with the direction of travel, or are you just trying to keep up even though you know you want something else?

Realizing that it was time to step off the path

There is no single moment that triggered my decision to leave. It was more like a slow accumulation of signals. My conversations increasingly shifted from “how do we empower customers?” to “how do we position the stack?”. The freedom and creativity I had in the early days of promoting sovereign cloud initiatives felt narrower over time. And internally, I caught myself spending more energy explaining why sovereignty matters than building solutions around it.

If your work becomes a negotiation with your own values, you eventually reach a point where you must choose. Stay and adapt, or step forward and realign.

I chose alignment.

Why private cloud again?

When you think deeply about sovereignty, you eventually come to the simple truth that sovereignty does not happen by accident. It is not a checkbox, a certificate, or the location of a data center. Sovereignty is an architectural stance. A design choice. A commitment to decentralization, reversibility, and customer control.

And that is where private cloud becomes relevant again as the foundation for a new era of controlled autonomy.

The more the world embraces hyperscale convenience, the more valuable real control becomes. The more cloud platforms abstract everything away, the more important it becomes to own the layers that matter. The more AI, data, and national infrastructure rely on cloud services, the more essential locally governed, locally designed, locally operable environments become.

Private cloud, done right, is a rebalancing of power.

Why Nutanix was the logical next chapter

If you want to work on digital sovereignty in a way that is meaningful, credible, and technically grounded, there are only a handful of companies where that mission is more than a marketing line. Nutanix is one of them and arguably the most aligned with the idea of customer freedom.

Nutanix sits in a unique space. It is an infrastructure platform that modernizes private cloud while keeping openness at the center. It doesn’t force customers into a predefined world; it creates the foundation upon which customers can build their own.

Choice becomes real again. Migration paths become optional rather than forced. Hybrid and multi-cloud become strategies instead of slogans. And customers regain something that hyperscale economics has quietly eroded for years. Yep, the right to decide their own future.

What I found at Nutanix is a philosophy that echoes my own. Technology should not dictate. It should enable. It should adapt to the customer, not the other way around. It should enhance sovereignty rather than dilute it behind yet another managed layer. And it should make modernization possible without making independence impossible.

Stepping into a mission, not just a new job

Leaving Oracle was not an escape. It was a conscious return to the principles that have guided me for more than a decade. I joined Nutanix not because it is fashionable, but because it represents the next phase of what the infrastructure world needs. A platform that gives power back to the organisations that increasingly rely on technology for national, economic, and operational resilience.

Modernisation should not mean giving up autonomy. Cloud adoption should not mean losing choice. Future architectures should not be designed by someone else’s business model.

Nutanix brings the balance back. It brings control back. It brings the freedom to design infrastructure on your terms.

And that is where I want to contribute. That is where I want to help customers. That is the path I want to walk.

Final Words

This move means a realignment with my own principles and the narrative I want to push into the market. The next decade will belong to organizations that understand this early and build accordingly.

I want to help shape that decade with customers, partners, policymakers, and anyone who believes that the future of infrastructure must be both modern and self-determined.

Leaving Oracle was the end of a chapter. Joining Nutanix is the continuation of a mission.

And for the first time in a long time, I feel like I am walking exactly where I am supposed to be.

Why the Sovereign AI Platform from Nutanix Ends the DIY Illusion

AI has moved into every boardroom conversation. However, meaningful results don’t come from building everything from scratch. For enterprises and public organizations, sovereignty has become the real test of digital trust, and platforms like NCP, NKP, and NAI give an answer where others struggle.

Over the past year, enterprises and public institutions have increasingly tried to build their own AI platforms. The idea sounds compelling. You can run open-source large language models in-house, fine-tune them with proprietary data, and operate a fully controlled environment. In practice, this approach proves difficult.

The pace of change is relentless. Models evolve in weeks, tooling shifts every quarter, and lifecycle management is more complex than anticipated. Teams quickly discover that maintaining infrastructure, compliance, and updates requires far more resources than expected. What was meant to guarantee independence often ends in fragile prototypes that never scale.

True sovereignty is not (only) about doing everything internally but also about keeping control while relying on platforms that deliver the operational stability needed to run AI securely and at scale.

Nutanix Cloud Platform – The Sovereign Private Cloud Foundation

Nutanix Cloud Platform (NCP) provides exactly that. It offers a private cloud foundation that allows organizations to remain in control of infrastructure and data, while avoiding the trap of re-creating a hyperscaler internally.

Portfolio diagram

Sovereignty in this context means deciding who governs updates, how compliance is enforced, and which integrations are allowed. NCP delivers this flexibility through its modular architecture. Customers can adopt only the layers they need, combine them with open-source components, or run third-party solutions on the same platform.

For AI, where workloads evolve quickly and ecosystems are fragmented, this adaptability is critical. NCP ensures that the foundation remains under the customer’s control while still being ready for future demands.

Nutanix Kubernetes Platform – Orchestrating AI Workloads

Running AI workloads requires more than infrastructure. It depends on reliable orchestration, lifecycle management, and scalability. This is where Nutanix Kubernetes Platform (NKP) plays a central role.

NKP in Air-gapped environment

NKP delivers an enterprise-ready Kubernetes distribution with consistent operations across environments. Instead of spending resources on patching and troubleshooting upstream clusters, teams can focus on building and deploying AI applications, whether retrieval-augmented generation (RAG) pipelines, vector databases, or fine-tuned models.

The combination of NCP and NKP means that organizations can operate AI in a compliant, sovereign environment, without being slowed down by the underlying complexity.

Nutanix Enterprise AI – Bringing Enterprise AI to Life

Nutanix Enterprise AI (NAI) builds on this foundation by making AI adoption tangible. It provides pre-validated, production-ready blueprints and integrations that simplify how AI infrastructure is deployed and scaled.

Nutanix Enterprise AI: a comprehensive solution for your AI apps and agents

Instead of each organization reinventing the wheel, NAI accelerates the journey by delivering tested architectures for GPU management, data pipelines, and model deployment. Combined with NCP and NKP, it creates a stack where AI workloads can move from experiment to production without losing compliance or control.

NAI ensures that sovereignty means having a trusted, repeatable path to make AI real.

Between Dependency and Autonomy

Enterprises today face two extremes. On one side lies the dependency on hyperscalers, with the risk of (multiple forms of) lock-in and limited control. On the other side stands full do-it-yourself, which consumes resources and rarely delivers production-ready results.

Sovereign AI requires balance. Buy the infrastructure foundation, partner on orchestration, and build only what creates real differentiation. This middle path is where NCP and NKP demonstrate their strength by enabling sovereignty without sacrificing agility.

A Future Still in the Making

The debate about AI and sovereignty is only at the beginning. Regulations will evolve, compliance requirements will tighten, and technology stacks will keep changing. What is clear today? Organizations that embed sovereignty into their AI strategy from the start will be better positioned for the future.

With NCP, NKP, and NAI, enterprises gain a foundation where sovereignty is designed in and adaptability is preserved. That makes them enablers of sustainable AI strategies in an era where control and trust are as important as innovation itself.

Why Workloads Are Really Repatriating to Private Cloud and How to Prepare for AI

In the beginning, renting won. Managed services and elastic capacity let teams move faster than procurement cycles, and the “convenience tax” felt like a bargain. A decade later, many enterprises have discovered what one high-profile cloud exit made clear: The same convenience that speeds delivery can erode margins at scale. That realization is driving a new wave of selective repatriation, moving the right workloads from hyperscale public clouds back to private cloud platforms, while a second force emerges simultaneously. AI is changing what a data center needs to look like. Any conversation about bringing workloads home that ignores AI-readiness is incomplete.

What’s really happening (and what isn’t)

Repatriation today is targeted. IDC’s Server and Storage Workloads Survey found that only ~8-9% of companies plan full repatriation. Most enterprises bring back specific components like production data, backup pipelines, or compute, where economics, latency, or exit risk justify it.

Media coverage has sharpened the picture. CIO.com frames repatriation as strategic workload placement rather than a retreat. InfoWorld’s look at 2025 trends notes rising data-center use even as public-cloud spend keeps growing. Forrester’s 2025 predictions echo the co-existence: public cloud expands while private cloud thrives alongside it. Hybrid is normal. Sovereignty, cost control, and performance are the levers.

And then there are the headline case studies. 37signals (Basecamp/HEY) publicized their journey off AWS, deleting their account in 2025 after moving storage to on-prem arrays and citing seven-figure annual savings on S3 alone. Whether or not your estate looks like theirs, it crystallized the idea that the convenience premium can outgrow its value at scale.

Why the calculus changed

Unit economics at scale. Per-unit cloud pricing that felt fine at 100 TB looks different at multiple PB, especially once you add data egress, cross-AZ traffic, and premium managed services. Well-known examples, such as Dropbox’s earlier move to its own infrastructure, show material savings when high-volume, steady-state workloads move to owned capacity.
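To make the scale effect concrete, here is a deliberately simplified back-of-the-envelope model. Every price, the opex ratio, and the fixed operating baseline are illustrative assumptions chosen for the sketch, not quotes from any provider and not the figures cited elsewhere in this post.

```python
# Illustrative cost model. Every number below is an assumption made
# for the sake of the example, not a quote from any provider.

def annual_cloud_cost(tb_stored, tb_egress_per_month,
                      price_per_tb_month=23.0,  # assumed storage list price (USD)
                      egress_per_tb=90.0):      # assumed egress price (USD)
    """Annual spend for steady-state storage plus monthly egress."""
    storage = tb_stored * price_per_tb_month * 12
    egress = tb_egress_per_month * egress_per_tb * 12
    return storage + egress

def annual_owned_cost(tb_stored,
                      capex_per_tb=300.0,       # assumed array cost (USD/TB)
                      years_amortized=5,
                      opex_ratio=0.5,           # power/space as share of amortized capex
                      fixed_annual=200_000.0):  # assumed team/facility baseline
    """Owned capacity: amortized hardware plus a fixed operating baseline."""
    amortized = tb_stored * capex_per_tb / years_amortized
    return amortized * (1 + opex_ratio) + fixed_annual

# At 100 TB the fixed baseline dominates and cloud wins comfortably;
# at 5 PB the same unit prices flip the comparison.
print(annual_cloud_cost(100, 10), annual_owned_cost(100))
print(annual_cloud_cost(5000, 500), annual_owned_cost(5000))
```

The crossover point depends entirely on the assumed prices and the fixed baseline; the takeaway is the shape of the curves, not the specific numbers.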

Performance locality and control. Some migrations lifted and shifted latency-sensitive systems into the wrong place. Round-trip times, noisy neighbors, or throttling can make the public cloud an expensive place to be for chatty, tightly coupled apps. Industry coverage repeatedly points to “the wrong workload in the wrong spot” as a repatriation driver. 

Sovereignty and exit risk. Regulated industries must reconcile GDPR/DORA-class obligations and the US CLOUD Act with how and where data is processed. The mid-market is echoing this too. Surveys show a decisive tilt toward moving select apps for compliance, control, and resilience reasons. 

FinOps maturity. After a few budgeting cycles, many teams have better visibility into cloud variability and the true cost of managed services. Some will optimize in-place, others will re-platform components where private cloud wins over a 3-5 year horizon.

Don’t bring it back to a 2015 data center

Even if you never plan to train frontier models, AI has changed the physical design targets. Racks that once drew 8-12 kW now routinely need to support 30-50 kW, and 80-100+ kW for dense GPU nodes. Next-gen AI racks can approach 1 MW per rack in extreme projections.
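A quick sanity check on those density figures, using assumed midpoints for each quoted range:

```python
# Arithmetic on the density ranges quoted above, using assumed
# midpoints for each range.
LEGACY_RACK_KW = 10      # midpoint of the 8-12 kW legacy range
AI_RACK_KW = 40          # midpoint of the routine 30-50 kW range
DENSE_GPU_RACK_KW = 90   # midpoint of the 80-100+ kW range

def legacy_equivalents(rack_kw, legacy_kw=LEGACY_RACK_KW):
    """How many legacy racks' worth of power one modern rack draws."""
    return rack_kw / legacy_kw

print(legacy_equivalents(AI_RACK_KW))        # 4.0 legacy racks
print(legacy_equivalents(DENSE_GPU_RACK_KW)) # 9.0 legacy racks
```

In other words, landing a single dense GPU rack in a room sized for 2015-era loads means finding the power and cooling budget of roughly nine old racks in one floor tile.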

Evolution of power consumption & dissipation per rack (2000-2030)

Image credit: Lennox Data Center Solutions

Air cooling alone won’t be enough. Direct-to-chip or immersion liquid cooling, higher-voltage distribution, and smarter power monitoring become minimum requirements. European sites face grid constraints that make efficiency and modular growth plans essential.

This is the retrofit conversation many teams are missing. If you repatriate analytics, vector databases, or LLM inference and can’t cool them, you’ve just traded one bottleneck for another.

How the analysts frame the decision

A fair reading across recent coverage lands on three points:

  1. Hybrid wins. Public cloud spend grows, and so do private deployments, because each has a place. Use the public cloud for burst, global reach, and cutting-edge managed AI services. Use the private cloud for steady-state, regulated (sovereign), chatty, or data-gravity workloads.
  2. Repatriation is selective. It’s about fit. Data sets with heavy egress, systems with strict jurisdiction rules, or platforms that benefit from tight locality are top candidates.
  3. AI is now a first-order constraint. Power, cooling, and GPU lifecycle management change the platform brief. Liquid cooling and higher rack densities stop being exotic and become practical requirements.

Why Nutanix is the safest private cloud bet for enterprises and the regulated world

If you are going to own part of the stack again, two things matter: operational simplicity and future-proofing. This is where Nutanix stands out.

A single control plane for private, hybrid, and edge. Nutanix Cloud Platform (NCP) lets you run VMs, files/objects, and containers with one operational model across on-prem and public cloud extensions. It’s built for steady-state enterprise workloads and the messy middle of hybrid.

Kubernetes without the operational tax. Nutanix Kubernetes Platform (NKP), born from the D2iQ acquisition, prioritizes day-2 lifecycle management, policy, and consistency across environments. If you are repatriating microservices or building AI micro-stacks close to data, this reduces toil.

AI-ready from the hypervisor up. AHV supports NVIDIA GPU passthrough and vGPU, and Nutanix has published guidance and integrations for NVIDIA AI Enterprise. That means you can schedule, share, and secure GPUs for training or inference alongside classic workloads, instead of creating a special-case island.

Data services with immutability. If you bring data home, protect it. Nutanix Unified Storage (NUS) provides WORM/immutability and integrates with leading cyber-recovery vendors, giving you ransomware-resilient backups and object locks without bolt-on complexity. 

Enterprise AI without lock-in. Nutanix Enterprise AI (NAI) focuses on building and operating model services on any CNCF-certified Kubernetes (on-prem, at the edge, or in cloud) so you keep your data where it belongs while retaining choice over models and frameworks. That aligns directly with sovereignty programs in government and regulated industries.

A Full-Stack Platform for Private AI

You get a private cloud that behaves like a public cloud where it matters, including lifecycle automation, resilience, and APIs. Under your control and jurisdiction.

Designing the landing zone

On day zero, deploy NCP as your substrate with AHV and Nutanix Unified Storage. Enable GPU pools on hosts that will run inference/training, and integrate NKP for container workloads. Attach immutable backup policies to objects and align with your chosen cyber-recovery stack. As you migrate, standardize on one identity plane and network policy model so VMs and containers are governed the same way. When you are ready to operationalize AI services closer to data, layer NAI to package and run model APIs with the same lifecycle tooling you already know.

The bottom line?

Repatriation is the natural correction after a decade of fast, sometimes indiscriminate, lift-and-shift, and not an anti-cloud movement. The best operators are recalibrating placement. AI turns this from a pure cost exercise into an infrastructure redesign. You can’t bring modern workloads home to a legacy room.

If you want the private side of that hybrid story without rebuilding a platform team from scratch, Nutanix is the safe choice. You get a single control plane for virtualization, storage, and Kubernetes, immutable data services for cyber-resilience, proven GPU support, and an AI stack that respects your sovereignty choices. That’s how you pay for convenience once, not forever, and how you make the next decade less about taxes and more about outcomes. 

How much is it costing you to believe that VMware or public cloud are cheaper?

Every technology leader knows this moment: the procurement team sits across the table and asks the question you’ve heard a hundred times before: “Why is this solution more expensive than we thought?”

When it comes to Nutanix, the honest answer is simple: it’s not cheap. And it shouldn’t be. What you’re paying for is not just software; you’re paying for enterprise-readiness, operational simplicity, support quality, and long-term resilience. And don’t forget freedom and sovereignty.

But let’s put that into perspective.

The Myth of Cheap IT

Many IT strategies start with the illusion of saving money. The public cloud is often positioned as the easy, cost-effective way forward. The first few months even look promising: minimal upfront investments, quick provisioning, instant access to services.

But costs in the public cloud scale differently. What starts as an attractive proof of concept soon becomes a recurring nightmare in the CFO’s inbox. Networking, egress charges, storage tiers, backup, and compliance layers all stack on top of the base infrastructure. Before long, that “cheap” platform becomes one of the most expensive commitments in the entire IT budget.

We don’t have to talk in hypotheticals here. Just look at 37signals, the company behind Basecamp and HEY. Beginning in 2022, they started migrating away from Amazon Web Services (AWS) because of escalating costs. Their AWS bill had ballooned to $3.2 million annually, with $1.5 million of that just for storage. By investing $700,000 in Dell servers and $1.5 million in Pure Storage arrays, they migrated 18 petabytes of data out of AWS and completely shut down their cloud account by summer 2025. The result? Annual savings of more than $2 million, alongside full ownership and visibility into their infrastructure. For 37signals, the math was simple: public cloud had become the expensive choice.
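The payback math behind those reported numbers is short enough to write down. The figures are the rounded ones from the public reporting above, and the sketch assumes the operating costs of the new hardware are already netted into the reported savings:

```python
# Rounded figures as publicly reported by 37signals; new-hardware
# operating costs are assumed to be netted into the reported savings.
capex = 700_000 + 1_500_000     # Dell servers + Pure Storage arrays (USD)
net_annual_savings = 2_000_000  # reported annual savings after leaving AWS

payback_years = capex / net_annual_savings
five_year_net = 5 * net_annual_savings - capex

print(f"Payback: {payback_years:.1f} years")      # Payback: 1.1 years
print(f"5-year net savings: ${five_year_net:,}")  # $7,800,000
```

Roughly a one-year payback on a seven-figure hardware investment is why this case study resonated far beyond its own scale.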

VMware customers are experiencing something similar, but in a different flavor. Broadcom’s new licensing model has transformed familiar cost structures into something far less predictable and much higher. Organizations that relied on VMware for decades now face steep renewals, mandatory bundles, and less flexibility to optimize spend.

So yes, let’s talk about “expensive”. But let’s be honest about what expensive really looks like.

Paying for Readiness

Let’s talk about Nutanix. At first glance, it may not be the cheapest option on the table. But Nutanix is built from the ground up to deliver enterprise capabilities that reduce hidden costs and avoid painful surprises.

  • What others solve with layers of tools, Nutanix delivers in a single, integrated platform. That means fewer licenses, fewer integration projects, and fewer teams chasing issues across silos.

  • The architecture distributes risk instead of concentrating it. Failures don’t cascade, operations don’t grind to a halt, and recovery doesn’t require a small army.

  • You decide the hardware, the software, and how you extend into the public cloud. That means any lock-in is intentional, and there are no forced upgrades just because a vendor decided to change the rules.

Value is the Real Differentiator

Price is always visible. It’s the line item that everyone sees. But value is often hidden in what doesn’t happen. The outages that rarely occur. The security incidents avoided. The integration projects you don’t need.

When you compare Nutanix against VMware’s new pricing or against runaway public cloud bills, the story shifts. What once looked “expensive” now feels reasonable. Because with Nutanix, you are not paying for legacy baggage or unpredictable consumption models. You are paying for a platform that runs mission-critical workloads in your sovereign environment.

The Real Cost of Cheap

There’s an old truth in enterprise IT: cheap usually ends up being the most expensive choice. Cutting costs upfront often means sacrificing reliability, adding complexity, or creating lock-in that limits your future options. And every one of those decisions comes back later as a much bigger invoice. Sometimes in dollars, sometimes in lost trust.

Nutanix is not cheap. But it is predictable. It is proven. And it is built for organizations that cannot afford to compromise on the workloads that matter most.

Final Thought

The question is not whether Nutanix costs money; of course it does. The real question is what you get in return, and how it compares to the alternatives. Against public cloud bills spiraling out of control and VMware contracts that now feel more like ransom notes, Nutanix delivers clarity, control, sovereignty, and enterprise-grade quality.

And today, that is worth every cent.

Why Nutanix Represents the Next Chapter

For more than two decades, VMware has been the backbone of enterprise IT. It virtualized the data center, transformed the way infrastructure was consumed, and defined the operating model of an entire generation of CIOs and IT architects. That era mattered, and it brought incredible efficiency gains. But as much as VMware shaped the last chapter, the story of enterprise infrastructure is now moving on. And the real question for organizations is not “VMware or Nutanix?”, but how much control they are willing to keep over their own future.

The Wrong Question

The way the conversation is often framed, Nutanix against VMware, misses the point entirely. Customers are not trying to settle a sports rivalry. They are not interested in cheering for one logo over another. What they are really trying to figure out is whether their infrastructure strategy gives them freedom or creates dependency. It is less about choosing between two vendors and more about choosing how much autonomy they retain.

VMware is still seen as the incumbent, the technology that defined stability and became the default. Nutanix is often described as the challenger. But in reality, the battleground has shifted. It is no longer about virtualization versus hyperconvergence, but about which platform offers true adaptability in a multi-cloud world.

The VMware Era – A Breakthrough That Belongs to the Past

There is no denying VMware’s historical importance. Virtualization was a revolution. It allowed enterprises to consolidate, to scale, and to rethink how applications were deployed. For a long time, VMware was synonymous with progress.

But revolutions have life cycles. Virtualization solved yesterday’s problems, and the challenges today look very different. Enterprises now face hybrid and multi-cloud realities, sovereignty concerns, and the rise of AI workloads that stretch far beyond the boundaries of a hypervisor. VMware’s empire was built for an era where the primary challenge was infrastructure efficiency. That chapter is now closing.

The Nutanix Trajectory – From HCI to a Distributed Cloud OS

Nutanix started with hyperconverged infrastructure. That much is true, but it never stopped there. Over the years, Nutanix has steadily moved towards building a distributed cloud operating system that spans on-premises data centers, public clouds, and the edge.

This evolution matters because it reframes Nutanix not as a competitor in VMware’s world, but as the shaper of a new one. The question now is who provides the freedom to run workloads wherever they make the most sense, without being forced into a corner by contracts, licensing, or technical constraints.

The Cost of Inertia

For many customers, staying with VMware feels like the path of least resistance. There are sunk costs, existing skill sets, and the comfort of familiarity, but inertia comes at a price. The longer enterprises delay modernization, the more difficult and expensive it becomes to catch up later.

The Broadcom acquisition has accelerated this reality. Pricing changes, bundled contracts, and ecosystem lock-in are daily conversations in boardrooms. Dependency has become a strategic liability. What once felt like stability now feels like fragility.

Leverage Instead of Lock-In

This is where Nutanix changes the narrative. It is not simply offering an alternative hypervisor or another management tool. It is offering leverage: the ability to simplify operations while keeping doors open.

With Nutanix, customers can run workloads on-premises, in AWS, in Azure, in GCP, or across them all. They can adopt cloud-native services without abandoning existing investments. They can prepare for sovereignty requirements or AI infrastructure needs without being tied to a single roadmap dictated by a vendor’s financial strategy.

That is what leverage looks like. It gives CIOs and IT leaders negotiation power. It ensures that the infrastructure strategy is not dictated by one supplier’s pricing model, but by the customer’s own business needs.

The Next Chapter

VMware defined the last era of enterprise IT. It built the virtualization chapter that will always remain a cornerstone in IT history. But the next chapter is being written by Nutanix. Not because it “beat” VMware, but because it aligned itself with the challenges enterprises are facing today: autonomy, adaptability, and resilience.

This chapter is about who controls the terms of the game. And for organizations that want to stay in charge of their own destiny, Nutanix represents the next chapter.

Why Sovereign Hybrid Multi-Cloud is the Future of Cloud in Europe

When people talk about cloud computing, the conversation almost always drifts toward the hyperscalers. AWS, Azure, and Google Cloud have shaped what we consider a “cloud” today. They offer seemingly endless catalogs of services, APIs for everything, and a global footprint. So why does Nutanix call its Nutanix Cloud Platform (NCP) a private cloud, even though its catalog of IaaS and PaaS services is far more limited?

To answer that, it makes sense to go back to the roots. NIST’s SP 800-145 definition of cloud computing is still the most relevant one. According to it, five essential characteristics make something a cloud: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. NIST then defines four deployment models: public, private, community, and hybrid.

If you look at NCP through that lens, it ticks the boxes. It delivers on-demand infrastructure through Prism and APIs, it abstracts and pools compute and storage across nodes, it scales out quickly, and it gives you metering and reporting on consumption. A private cloud is about the deployment model and the operating characteristics, not about the length of the service catalog. And that’s why NCP rightfully positions itself as a private cloud platform.

Nutanix Cloud Platform Hybrid Multi-Cloud

At the same time, it would be wrong to assume that private clouds stop at virtual machines and storage. Modern platforms are extending their scope with built-in capabilities for container orchestration, making Kubernetes a first-class citizen for enterprises that want to modernize their applications without stitching together multiple toolchains. On top of that, AI workloads are no longer confined to the public cloud. Private clouds can now deliver integrated solutions for deploying, managing, and scaling AI and machine learning models, combining GPUs, data services, and lifecycle management in one place. This means organizations are not locked out of next-generation workloads simply because they run on private infrastructure.

A good example is what many European governments are facing right now. Imagine a national healthcare system wanting to explore generative AI to improve medical research or diagnostics. Regulatory pressure dictates that sensitive patient data must never leave national borders, let alone be processed in a global public cloud where data residency and sovereignty are unclear. By running AI services directly on top of their private cloud, with Kubernetes as the orchestration layer, they can experiment with new models, train them on local GPU resources, and still keep complete operational control. This setup allows them to comply with AI regulations, maintain full sovereignty, and at the same time benefit from the elasticity and speed of a modern cloud environment. It’s a model that not only protects sovereignty but also accelerates innovation. Innovation at a different pace, but it’s still innovation.

Now, here’s where my personal perspective comes in. I no longer believe that the hyperscalers’ stretch into the private domain (think AWS Outposts, Azure Local, or even dedicated models like Oracle’s Dedicated Region) represents the future of cloud. In continental Europe especially, I see these as exceptions rather than the rule. The reality now is that most organizations here are far more concerned with sovereignty, control, and independence than with consuming a hyperscaler’s entire catalog in a smaller, local flavor.

What I believe will be far more relevant is the rise of private clouds as the foundation of enterprise IT. Combined with a hybrid multi-cloud strategy, this opens the door to what I would call a sovereign hybrid multi-cloud architecture. The idea is simple: sovereign and sensitive workloads live in a private cloud that is under your control, built to allow quick migration and even a cloud-exit if needed. At the same time, non-critical workloads can live comfortably in a public cloud where an intentional lock-in may even make sense, because you benefit from the deep integration and services that hyperscalers do best.

And this is where the “exit” part becomes critical. Picture a regulator suddenly deciding that certain workloads containing citizen data cannot legally remain in a U.S.-owned public cloud. For an organization without a sovereign hybrid strategy, this could mean months of firefighting, emergency projects, and unplanned costs to migrate or even rebuild applications. But for those who have invested in a sovereign private cloud foundation, with portable workloads across virtual machines and containers, this becomes a controlled process. Data and apps can be moved back under national jurisdiction quickly (or to any other destination), without breaking services or putting compliance at risk. It turns a crisis into a manageable transition.

VMware Sovereign Cloud Borders

This two-speed model gives you the best of both worlds. Sovereignty where it matters, and scale where it helps. And it puts private cloud platforms like Nutanix NCP in a much more strategic light. They are not just a “mini AWS” or a simplified on-prem extension; they are the anchor that allows enterprises and governments to build an IT architecture with both freedom of choice and long-term resilience.

While public clouds are often seen as environments where control and sovereignty are limited, organizations can now introduce an abstraction and governance layer on top of hyperscaler infrastructure. By running workloads through this layer, whether virtual machines or containers, enterprises gain consistent security controls independent of the underlying public cloud provider, unified operations and management across private and public deployments, and workload portability that avoids deep dependency on hyperscaler-native tools. Most importantly, sovereignty is enhanced, since governance, compliance, and security frameworks remain under the organization’s control.

This architecture essentially transforms the public cloud into an extension of the sovereign environment, rather than a separate silo. It means that even when workloads reside on hyperscaler infrastructure, they can still benefit from enhanced security, governance, and operational consistency, forming the cornerstone of a true sovereign hybrid multi-cloud.

In short, the question is not whether someone like Nutanix can compete with hyperscalers on the number of services. The real question is whether organizations in Europe want to remain fully dependent on global public clouds or if they want the ability to run sovereign, portable workloads under their own control. From what I see, the latter is becoming the priority.