Cloud Repatriation and the Growth Paradox of Public Cloud IaaS

Over the past two years, a new narrative has taken hold in the cloud market. No, it is not always about sovereign cloud. 🙂 Headlines talk about cloud repatriation – nothing really new, but it is still out there. CIOs speak openly about pulling some workloads back on-premises. Analysts write about organizations “correcting” some earlier cloud decisions to optimize cloud spend. In parallel, hyperscalers themselves now acknowledge that not every workload belongs in the public cloud.

And yet, when you look at the data, you will find a paradox.

IDC and Gartner both project strong, sustained growth in public cloud IaaS spending over the next five years. Not marginal growth or a sign of stagnation, but a market that continues to expand at scale, absorbing more workloads, more budgets, and more strategic relevance every year.

At first glance, these two trends appear contradictory. If organizations are repatriating workloads, why does public cloud IaaS continue to grow so aggressively? The answer lies in understanding what is actually being repatriated, what continues to move to the cloud, and how infrastructure constraints are reshaping decision-making in ways that are often misunderstood.

Cloud Repatriation Is Real, but Narrower Than the Narrative Suggests

Cloud repatriation is not a myth. It is happening, but it is also frequently misinterpreted.

Most repatriation initiatives are highly selective. They focus on predictable, steady-state workloads that were lifted into the public cloud under assumptions that no longer hold. Cost transparency has improved, egress fees are better understood and operating models have matured. What once looked flexible and elastic is now seen as expensive and operationally inflexible for certain classes of workloads.

What is rarely discussed is that repatriation does not mean “leaving the cloud”, and I have to repeat it again: it means rebalancing. Organizations are not abandoning public cloud IaaS as a concept. They are just refining how they use it.

At the same time, some new workloads continue to flow into public cloud environments. Digital-native applications, analytics platforms, some AI pipelines, globally distributed services, and short-lived experimental environments still align extremely well with public cloud economics and operating models. These workloads were not part of the original repatriation debate, and they seem to be growing faster than traditional workloads are being pulled back.

This is how both statements can be true at the same time. Cloud repatriation exists, and public cloud IaaS continues to grow.

The Structural Drivers Behind Continued IaaS Growth

Public cloud IaaS growth is not driven by blind enthusiasm anymore. It is driven by structural forces that have little to do with fashion and everything to do with constraints.

One of the most underestimated factors is time. Building infrastructure takes time, procuring hardware takes time, and scaling data centers takes time. Many organizations today are not choosing public cloud because it is cheaper or “better”, but because it is available now.

This becomes even more apparent when looking at the hardware market right now.

Hardware Shortages and Rising Server Prices Change the Equation

The infrastructure layer beneath private clouds has suddenly become a bottleneck. Server lead times have increased, GPU availability is constrained and prices for enterprise-grade hardware continue to rise, driven by supply chain pressures, higher component costs, and growing demand from AI workloads.

For organizations running large environments, this introduces a new type of risk. Capacity planning is now a logistical problem, no longer just a financial exercise. Even when budgets are approved, hardware may not arrive in time. That is the new reality.
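
To make that concrete, here is a minimal back-of-envelope sketch of the logistical question behind capacity planning: does newly ordered hardware arrive before the existing capacity runs out? The figures and function names are hypothetical and purely illustrative.

```python
# Illustrative only: hypothetical capacity figures, not vendor data.
def capacity_runway_months(free_capacity_units, monthly_growth_units):
    """How long the remaining on-prem capacity lasts at the current growth rate."""
    if monthly_growth_units <= 0:
        return float("inf")
    return free_capacity_units / monthly_growth_units

def needs_cloud_burst(free_capacity_units, monthly_growth_units, hardware_lead_time_months):
    """True if newly ordered hardware would arrive after the capacity runs out."""
    runway = capacity_runway_months(free_capacity_units, monthly_growth_units)
    return runway < hardware_lead_time_months

# Example: 120 free capacity units, demand grows by 20 units per month,
# new servers arrive in 9 months -> a 6-month runway is shorter than the lead time.
print(needs_cloud_burst(120, 20, 9))  # True
```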

In this context, public cloud data centers represent something extremely valuable: pre-existing capacity. Hyperscalers have already made the capital investments and they already operate at scale. From the customer perspective, infrastructure suddenly looks abundant again.

This is why many organizations currently consider shifting workloads to public cloud IaaS, even if they were previously skeptical. It has become a pragmatic response to scarcity.

The Flawed Assumption: “Just Use Public Cloud Instead of Buying Servers”

However, this line of thinking often glosses over a critical distinction.

Many of these organizations do not actually want “cloud-native” infrastructure, if we are being honest here. What they want is physical capacity – compute, storage, and networking with predictable performance characteristics. In other words, they want some VMs and bare metal.

Buying servers allows organizations to retain architectural freedom. It allows them to choose their operating system or virtualization stack, their security model, their automation tooling, and their lifecycle strategy. Public cloud IaaS, by contrast, delivers abstraction, but at the cost of dependency.

When organizations consume IaaS services from hyperscalers, they implicitly accept constraints around instance types, networking semantics, storage behavior, and pricing models. Over time, this shapes application architectures and operational processes. The use of such services quietly becomes a form of lock-in.

Bare Metal in the Public Cloud Is Not a Contradiction

Interestingly, the industry has started to converge on a hybrid answer to this dilemma: bare metal in the public cloud.

Hyperscalers themselves offer bare-metal services. This is an acknowledgment that not all customers want fully abstracted IaaS. Some want physical control without owning physical assets. It is as simple as that.

But bare metal alone is not enough. Without a consistent cloud platform on top, bare-metal in the public cloud becomes just another silo. You gain performance and isolation, but you lose portability and operational consistency.

Nutanix Cloud Clusters and the Reframing of IaaS

Nutanix Cloud Platform running on AWS, Azure, and Google Cloud through NC2 (Nutanix Cloud Clusters) introduces a different interpretation of public cloud IaaS.

Instead of consuming hyperscaler-native IaaS primitives, customers deploy a full private cloud stack on bare-metal instances in public cloud data centers. From an architectural perspective, this is a subtle but profound difference.

Customers still benefit from the hyperscaler’s global footprint and hardware availability and they still avoid long procurement cycles, but they do not surrender control of their cloud operating model. The same Nutanix stack runs on-premises and in public cloud, with the same APIs, the same tooling, and the same governance constructs.

Workload Mobility as the Missing Dimension

The most underappreciated benefit of this approach is workload mobility.

In a cloud-native bare-metal deployment tied directly to hyperscaler services, workloads tend to become anchored, migration becomes complex, and exit strategies are theoretical at best.

With NC2, workloads are portable by design. Virtual machines and applications can move between on-premises environments and public cloud (or a service provider cloud) bare-metal clusters without refactoring. In practical terms, this means organizations can use public cloud capacity tactically rather than strategically committing to it. Capacity shortages, temporary demand spikes, regional requirements, or regulatory constraints can be addressed without redefining the entire infrastructure strategy.

This is something traditional IaaS does not offer, and something pure bare-metal consumption does not solve on its own.

Reconciling the Two Trends

When viewed through this lens, the contradiction between cloud repatriation and public cloud IaaS growth disappears.

Public cloud is growing because it solves real problems: availability, scale, and speed. Repatriation is happening because not all problems require abstraction, and not all workloads benefit from cloud-native constraints.

The future is not a reversal of cloud adoption. It is a maturation of it.

Organizations are asking how to use public clouds without losing control. Platforms that allow them to consume cloud capacity while preserving architectural independence are not an alternative to IaaS growth; they are one of the reasons that growth can continue without triggering the next wave of regret-driven repatriation.

What complicates this picture further is that even where public cloud continues to grow, many of its original economic promises are now being questioned again.

The Broken Promise of Economies of Scale

One of the foundational assumptions behind public cloud adoption was economies of scale. The logic seemed sound. Hyperscalers operate at a scale no enterprise could ever match. Massive data centers, global procurement power, highly automated operations. All of this was expected to translate into continuously declining unit costs, or at least stable pricing over time.

As we know by now, that assumption has not materialized.

If economies of scale were truly flowing through to customers, we would not be witnessing repeated price increases across compute, storage, networking, and ancillary services. We would not see new pricing tiers, revised licensing constructs, or more aggressive monetization of previously “included” capabilities. The reality is that public cloud pricing has moved in one direction for many workloads, and that direction is up.

This does not mean hyperscalers are acting irrationally. It means the original narrative was incomplete. Yes, scale does reduce certain costs, but it also introduces new ones. That is also true for new innovations and services. Energy prices, land, specialized hardware, regulatory compliance, security investments, and the operational complexity of running globally distributed platforms all scale accordingly. Add margin expectations from capital markets, and the result is not a race to the bottom, but disciplined price optimization.

For customers, however, this creates a growing disconnect between expectation and reality.

When Forecasts Miss Reality

More than half of organizations report that their public cloud spending diverges significantly from what they initially planned. In many cases, the difference is not marginal. Budgets are exceeded, cost models fail to reflect real usage patterns, optimization efforts lag behind application growth.

What is often overlooked is the second-order effect of this divergence. Over a third of organizations report that cloud-related cost and complexity issues directly contribute to delayed projects. Migration timelines slip, modernization initiatives stall, and teams slow down not because technology is unavailable, but because financial and operational uncertainty creeps into every decision.

Commitments, Consumption, and a Structural Risk

Most large organizations do not consume public cloud on a purely on-demand basis. They negotiate commitments, reserved capacity, and spend-based discounts. These are strategic agreements designed to lower unit costs in exchange for predictable consumption.

These agreements assume one thing above all else: that workloads will move. They HAVE TO move.

When migrations slow down, a new risk pops up. Organizations fail to reach their committed consumption levels because they cannot move workloads fast enough. Legacy architectures, migration complexity, skill shortages, and governance friction all play a role.

The consequence is subtle but severe. Committed spend still has to be paid, and future negotiations become weaker because of it. The organization enters the next contract cycle with a track record of underconsumption, reduced leverage, and less credibility in forecasting.

In effect, execution risk turns into commercial risk.

This dynamic is rarely discussed publicly, but it is increasingly common in private conversations with CIOs and cloud leaders. The challenge is no longer whether the public cloud can scale, but whether the organization can.

Speed of Migration as an Economic Variable

At this point, migration speed stops being a technical metric and becomes an economic one. The faster workloads can move, the faster negotiated consumption levels can be reached. The slower they move, the more value leaks out of cloud agreements.

This is where many cloud-native migration approaches struggle. Refactoring takes time and re-architecting applications is expensive. Not every workload is a candidate for transformation under real-world constraints.

As a result, organizations are caught between two pressures. On one side, the need to consume public cloud capacity they have already paid for. On the other, the inability to move workloads quickly without introducing unacceptable risk.
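
A simple, purely hypothetical model shows why migration speed turns into an economic variable: the same commitment either becomes realized value or unused spend, depending on how fast workloads arrive. All numbers below are invented for illustration and do not reflect any real contract.

```python
# Illustrative only: hypothetical figures, not based on any real cloud agreement.
def committed_spend_outcome(annual_commit, workloads_to_migrate,
                            spend_per_workload_per_month,
                            workloads_migrated_per_month, term_months=12):
    """Estimate how much of a commitment is consumed when workloads arrive gradually."""
    consumed = 0.0
    migrated = 0
    for _ in range(term_months):
        migrated = min(workloads_to_migrate, migrated + workloads_migrated_per_month)
        consumed += migrated * spend_per_workload_per_month
    unused = max(0.0, annual_commit - consumed)
    return consumed, unused

# A slow migration (10 workloads per month) versus a faster one (30 per month).
for pace in (10, 30):
    consumed, unused = committed_spend_outcome(
        annual_commit=2_400_000, workloads_to_migrate=300,
        spend_per_workload_per_month=1_000, workloads_migrated_per_month=pace)
    print(f"{pace}/month -> consumed ${consumed:,.0f}, unused commitment ${unused:,.0f}")
```

Under these invented assumptions, tripling the migration pace is the difference between leaving most of the commitment unused and consuming nearly all of it. That is the economic leverage hidden inside migration speed.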

NC2 as a Consumption Accelerator, Not a Shortcut

This is where Nutanix Cloud Platform with NC2 changes the conversation.

By allowing organizations to run the same private cloud stack on bare metal in AWS, Azure, and Google Cloud, NC2 removes one of the biggest bottlenecks in migration programs: The need to change how workloads are built and operated before they can move.

Workloads can be migrated as they are, operating models remain consistent, governance does not have to be reinvented, and teams do not need to learn a new infrastructure paradigm under time pressure. It’s all about efficiency and speed.

Faster migrations mean workloads start consuming public cloud capacity earlier and the negotiated consumption targets suddenly become achievable. Commitments turn into realized value rather than sunk cost, and the organization regains control over both its migration timeline and its commercial position.

Reframing the Role of Public Cloud

In this context, NC2 is not an alternative to public cloud economics, but a mechanism to actually realize them.

Public cloud providers assume customers can move fast. In reality, many customers cannot, not because they resist change, but because change takes time. Platforms that reduce friction between private and public environments do not undermine cloud strategies. They are here to stabilize them. And they definitely can!

The uncomfortable truth is that economies of scale alone do not guarantee better outcomes for customers, execution does. And execution, in large enterprises, depends less on ideal architectures and more on pragmatic paths that respect existing realities.

When those paths exist, public cloud growth and cloud repatriation stop being opposing forces. They become two sides of the same maturation process, one that rewards platforms designed not just for scale, but for transition.

Multi-cloud is normal in public cloud. Why is “single-cloud” still normal in private cloud?

If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management and a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organisations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organisations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervisors, for example, treat cloud as an outsourcing risk and expect institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons. Jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors; they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor’s strategic direction. Now, what does it mean for a nation where 80 or 90% of its public sector is dependent on one single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are evaluated again, where simplicity and cost control dominate. Proxmox is discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is re-examined, where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Nutanix Is Quietly Redrawing the Boundaries of What an Infrastructure Platform Can Be

Real change happens when a platform evolves in ways that remove old constraints, open new economic paths, and give IT teams strategic room to maneuver. Nutanix has introduced enhancements that, taken individually, appear to be technical refinements, but observed together, they represent something more profound: the transition of the Nutanix Cloud Platform (NCP) into a fabric of compute, storage, and mobility that behaves as one system, no matter where it runs.

This is the dismantling of long-standing architectural trade-offs and the business impact is far greater than the technical headlines suggest.

In this article, I want to explore four developments that signal this shift:

  • Elastic VM Storage across Nutanix clusters
  • Disaggregated compute and storage scaling
  • General availability of NC2 on Google Cloud
  • The strategic partnership between Nutanix and Pure Storage

Individually, these solve real operational challenges. Combined, they create an infrastructure model that moves away from fixed constructs and toward an adaptable, cost-efficient cloud operating fabric.

Elastic VM Storage – The End of Cluster-Bound Thinking

Nutanix introduced Elastic VM Storage, the ability for one AHV cluster to consume storage from another Nutanix HCI cluster within the same Prism Central domain. It breaks one of the oldest implicit assumptions in on-premises virtualization: that compute and storage must live together in tightly coupled units.

By allowing VMs to be deployed on compute in one cluster while consuming storage from another, Nutanix gives IT teams a new level of elasticity and resource distribution.

It introduces an operational freedom that enterprises have never truly had:

  1. Capacity can be added where it is cheapest. If storage economics favour one site and compute expansion is easier or cheaper in another, Nutanix allows you to make decisions based on cost, not on architectural constraints.
  2. It reduces stranded resources. Every traditional environment suffers from imbalanced clusters. Some run out of storage, others out of CPU, and upgrading often means over-investing on both sides. Elastic VM Storage dissolves those silos.
  3. It prepares organizations for multi-cluster private cloud architectures. Enterprises increasingly distribute workloads across data centers, edge locations, and cloud-adjacent sites. Being able to pool resources across clusters is foundational for this future.

Nutanix is erasing the historical boundary of the cluster as a storage island.
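
As a purely illustrative sketch of the kind of placement decision this enables, compute and storage can now be chosen independently across clusters in the same Prism Central domain. The cluster data, cost figures, and helper function below are hypothetical and are not part of any Nutanix API.

```python
# Hypothetical illustration of the placement logic Elastic VM Storage enables.
# Cluster names, capacities, and costs are invented; this is not a Nutanix API.
clusters = [
    {"name": "dc1-cluster", "free_vcpu": 400, "free_storage_tb": 5,  "storage_cost_per_tb": 90},
    {"name": "dc2-cluster", "free_vcpu": 40,  "free_storage_tb": 80, "storage_cost_per_tb": 55},
]

def place_vm(vcpu_needed, storage_tb_needed):
    """Pick compute where vCPUs are free and storage where capacity is cheapest."""
    compute = max((c for c in clusters if c["free_vcpu"] >= vcpu_needed),
                  key=lambda c: c["free_vcpu"], default=None)
    storage = min((c for c in clusters if c["free_storage_tb"] >= storage_tb_needed),
                  key=lambda c: c["storage_cost_per_tb"], default=None)
    return compute, storage

compute, storage = place_vm(vcpu_needed=16, storage_tb_needed=10)
print(f"Run the VM on {compute['name']}, place its disks on {storage['name']}")
```

The point is not the algorithm itself, but the degree of freedom: the compute decision and the storage decision no longer have to land on the same cluster.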

Disaggregated Compute and Storage Scaling

For years, Nutanix’s HCI architecture was built on the elegant simplicity of shared-nothing clusters, where compute and storage scale together. Many customers still want this. In fact, for greenfield deployments, it probably is the cleanest architecture. But enterprises also operate in a world full of legacy arrays, refresh cycles that rarely align, strict licensing budgets, and specialized workload patterns.

With support for disaggregated compute and storage scaling, Nutanix allows:

  • AHV compute-only clusters with external storage (currently supported are Dell PowerFlex and Pure Storage – more to follow)
  • Mixed configurations combining HCI nodes and compute-only nodes
  • Day-0 simplicity for disaggregated deployments

This is a statement from Nutanix, whose DNA was always HCI: The Nutanix Cloud Platform can operate across heterogeneous infrastructure models without making the environment harder to manage.

  1. Customers can modernize at their own pace. If storage arrays still have years of depreciation left, Nutanix allows you to modernize compute now and storage later instead of forcing a full rip-and-replace.
  2. It eliminates unnecessary VMware licensing. Many organizations want to exit expensive hypervisor stacks while continuing to utilize their storage investments. AHV compute-only clusters make this transition significantly cheaper.
  3. It supports high-density compute for new workloads. AI training, GPU farms, and data pipelines often require disproportionate compute relative to storage. Disaggregation aligns the platform with the economics of modern workloads.

This is the kind of flexibility enterprises have asked for during the last few years and Nutanix has now delivered it without compromising simplicity.

Nutanix and Pure Storage

One of the most significant shifts in Nutanix’s evolution is its move beyond traditional HCI boundaries. This began when Nutanix introduced support for Dell PowerFlex as the first officially validated external storage integration, which was a clear signal to the market that the Nutanix platform was opening itself to disaggregated architectures. With Pure Storage FlashArray now becoming the second external storage platform to be fully supported through NCI for External Storage, that early signal has turned into a strategy and an ecosystem.

Nutanix NCI with Pure Storage

Nutanix now enables customers to run AHV compute clusters using enterprise-grade storage arrays while retaining the operational simplicity of Prism, AHV, and NCM. Pure Storage’s integration builds on the foundation established with PowerFlex, but expands the addressable market significantly by bringing a leading flash platform into the Nutanix operating model.

Why is this strategically important?

  • It confirms that Nutanix is committed to disaggregated architectures, not just compatible with them. What began with Dell PowerFlex as a single integration has matured into a structured approach. Nutanix will support multiple external storage ecosystems while providing a consistent compute and management experience.
  • It gives customers real choice in storage without fragmenting operations. With Pure Storage joining PowerFlex, Nutanix now supports two enterprise storage platforms that are widely deployed in existing environments. Customers can keep their existing tier-1 arrays and still modernize compute, hypervisor, and operations around AHV and Prism.
  • It creates an on-ramp for VMware exits with minimal disruption. Many VMware customers own Pure FlashArray deployments or run PowerFlex at scale. With these integrations, they can adopt Nutanix AHV without replatforming storage. The migration becomes a compute and virtualization change and not a full infrastructure overhaul.
  • It positions Nutanix as the control plane above heterogeneous infrastructure. The combination of NCI with PowerFlex and now Pure Storage shows that Nutanix is building an operational layer that unifies disparate architectures.
  • It aligns modernization with financial reality. Storage refreshes and compute refreshes rarely align. Supporting multiple external arrays allows Nutanix customers to modernize compute operations first, defer storage investment, and transition into HCI only when it makes sense.

Nutanix has moved from a tightly defined HCI architecture to an extensible compute platform that can embrace best-in-class storage from multiple vendors.

Nutanix Cloud Clusters on Google Cloud – A Third Strategic Hyperscaler Joins the Story

The general availability of NC2 on Google Cloud completes a strategic triangle. With AWS, Azure and now Google Cloud all supporting Nutanix Cloud Clusters (NC2), Nutanix becomes one of the very few platforms capable of delivering a consistent private cloud operating model across all three major hyperscalers. It fundamentally changes how enterprises can think about cloud architecture, mobility, and strategic independence.

Running NC2 on Google Cloud creates a new kind of optionality. Workloads that previously needed to be refactored or painfully migrated can now move into GCP without rewriting, without architectural compromises, and without inheriting a completely different operational paradigm. For many organizations, especially those leaning into Google’s strengths in analytics, AI, and data services, this becomes a powerful pattern. Keep the operational DNA of your private cloud, but situate workloads closer to the native cloud services that accelerate innovation.

NC2 on Google Cloud

When an enterprise can run the same platform – the same hypervisor, the same automation, the same governance model – across multiple hyperscalers, the risk of cloud lock-in can be reduced. Workload mobility and cloud-exit strategies become a reality.

NC2 on Google Cloud is a sign of how Nutanix envisions the future of hybrid multi-cloud. Not as a patchwork of different platforms stitched together, but as a unified operating fabric that runs consistently across every environment. With Google now joining the story, that fabric becomes broader, more flexible, and significantly more strategic.

Conclusion

Nutanix is removing the trade-offs that enterprises once accepted as inevitable.

Most IT leaders aren’t searching for (new) features. They are searching for ways to reduce risk, control cost, simplify operations, and maintain autonomy while the world around them becomes more complex. Nutanix’s recent enhancements are structural. They chip away at the constraints that made traditional infrastructure inflexible and expensive.

The platform is becoming more open, more flexible, more distributed, and more sovereign by design.

What If Cloud Was Never the Destination But Just One Chapter In A Longer Journey

For more than a decade, IT strategies were shaped by a powerful promise that the public cloud was the final destination. Enterprises were told that everything would eventually run there, that the data center would become obsolete, and that the only rational strategy was “cloud-first”. For a time, this narrative worked. It created clarity in a complex world and provided decision-makers with a guiding principle.

Hyperscalers accelerated digital transformation in ways no one else could have. Without their scale and speed, the last decade of IT modernization would have looked very different. But what worked as a catalyst does not automatically define the long-term architecture.

But what if that narrative was never entirely true? What if the cloud was not the destination at all, but only a chapter? A critical accelerator in the broader evolution of enterprise infrastructure? The growing evidence suggests exactly that. Today, we are seeing the limits of mono-cloud thinking and the emergence of something new. A shift towards adaptive platforms that prioritize autonomy over location.

The Rise and Fall of Mono-Cloud Thinking

The first wave of cloud adoption was almost euphoric. Moving everything into a single public cloud seemed not just efficient but inevitable. Infrastructure management became simpler, procurement cycles shorter, and time-to-market faster. For CIOs under pressure to modernize, the benefits were immediate and tangible.

Yet over time, the cost savings that once justified the shift started to erode. What initially looked like operational efficiency transformed into long-term operating expenses that grew relentlessly with scale. Data gravity added another layer of friction. While applications were easy to deploy, the vast datasets they relied on were not as mobile. And then came the growing emphasis on sovereignty and compliance. Governments and regulators, as well as citizens and journalists, started asking difficult questions about who ultimately controlled the data and under what jurisdiction.

These realities did not erase the value of the public cloud, but they reframed it. Mono-cloud strategies, while powerful in their early days, increasingly appeared too rigid, too costly, and too dependent on external factors beyond the control of the enterprise.

Multi-Cloud as a Halfway Step

In response, many organizations turned to multi-cloud. If one provider created lock-in, why not distribute workloads across two or three? The reasoning was logical. Diversify risk, improve resilience, and gain leverage in vendor negotiations.

But as the theory met reality, the complexity of multi-cloud began to outweigh its promises. Each cloud provider came with its own set of tools, APIs, and management layers, creating operational fragmentation rather than simplification. Policies around security and compliance became harder to enforce consistently. And the cost of expertise rose dramatically, as teams were suddenly required to master multiple environments instead of one.

Multi-cloud, in practice, became less of a strategy and more of a compromise. It revealed the desire for autonomy, but without providing the mechanisms to truly achieve it. What emerged was not freedom, but another form of dependency. This time, on the ability of teams to stitch together disparate environments at great cost and complexity.

The Adaptive Platform Hypothesis

If mono-cloud was too rigid and multi-cloud too fragmented, then what comes next? The hypothesis that is now emerging is that the future will be defined not by a place – cloud, on-premises, or edge – but by the adaptability of the platform that connects them.

Adaptive platforms are designed to eliminate friction, allowing workloads to move freely when circumstances change. They bring compute to the data rather than forcing data to move to compute, which becomes especially critical in the age of AI. They make sovereignty and compliance part of the design rather than an afterthought, ensuring that regulatory shifts do not force expensive architectural overhauls. And most importantly, they allow enterprises to retain operational autonomy even as vendors merge, licensing models change, or new technologies emerge.

This idea reframes the conversation entirely. Instead of asking where workloads should run, the more relevant question becomes how quickly and easily they can be moved, scaled, and adapted. Autonomy, not location, becomes the decisive metric of success.

Autonomy as the New Metric?

The story of the cloud is not over, but the chapter of cloud as a final destination is closing. The public cloud was never the endpoint, but it was a powerful catalyst that changed how we think about IT consumption. But the next stage is already being written, and it is less about destinations than about options.

Certain workloads will always thrive in a hyperscale cloud – think collaboration tools, globally distributed apps, or burst capacity. Others, especially those tied to sovereignty, compliance, or AI data proximity, demand a different approach. Adaptive platforms are emerging to fill that gap.

Enterprises that build for autonomy will be better positioned to navigate an unpredictable future. They will be able to shift workloads without fear of vendor lock-in, place AI infrastructure close to where data resides, and comply with sovereignty requirements without slowing down innovation.

The emerging truth is simple: Cloud was never the destination. It was only one chapter in a much longer journey. The next chapter belongs to adaptive platforms and to organizations bold enough to design for freedom rather than dependency.

Stop Writing About VMware vs. Nutanix

Over the last months I have noticed something “interesting”. My LinkedIn feed and Google searches are full of posts and blogs that try to compare VMware and Nutanix. Most of them follow the same pattern. They take the obvious features, line them up in two columns, and declare a “winner”. Some even let AI write these comparisons without a single line of lived experience behind it.

The problem? This type of content has no real value for anyone who has actually run these platforms in production. It reduces years of engineering effort, architectural depth, and customer-specific context into a shallow bullet list. Worse, it creates the illusion that such a side-by-side comparison could ever answer the strategic question of “what should I run my business on?”.

The Wrong Question

VMware vs. Nutanix is the wrong question to ask. Both vendors have their advantages, both have strong technology stacks, and both have long histories in enterprise IT. But if you are an IT leader in 2025, your real challenge is not to pick between two virtualization platforms. Your challenge is to define what your infrastructure should enable in the next decade.

Do you need more sovereignty and independence from hyperscalers? Do you need a platform that scales horizontally across the edge, data center, and public cloud with a consistent operating model? Do you need to keep costs predictable and avoid the complexity tax that often comes with layered products and licensing schemes?

Those are the real questions. None of them can be answered by a generic VMware vs. Nutanix LinkedIn post.

The Context Matters

A defense organization in Europe has different requirements than a SaaS startup in Silicon Valley. A government ministry evaluates sovereignty, compliance, and vendor control differently than a commercial bank that cares most about performance and transaction throughput.

The context (regulatory, organizational, and strategic) always matters more than product comparison charts. If someone claims otherwise, they probably have not spent enough time in the field, working with CIOs and architects who wrestle with these issues every day. Yes, (some) features are important and sometimes make the difference, but the big feature war days are over.

It’s About the Partner, Not Just the Platform

At the end of the day, the platform is only one piece of the puzzle. The bigger question is: who do you want as your partner for the next decade?

Technology shifts, products evolve, and roadmaps change. What remains constant is the relationship you build with the vendor or partner behind the platform. Can you trust them to execute your strategy with you? Can you rely on them when things go wrong? Do they share your vision for sovereignty, resilience, and simplicity or are they simply pushing their own agenda?

The answer to these questions matters far more than whether VMware or Nutanix has the upper hand in a feature battle.

A Better Conversation

Instead of writing another VMware vs. Nutanix blog, we should start a different conversation. One that focuses on operating models, trust, innovation, ecosystem integration, and how future-proof your platform is.

Nutanix, VMware, Red Hat, hyperscalers, all of them are building infrastructure and cloud stacks. The differentiator is not whether vendor A has a slightly faster vMotion or vendor B has one more checkbox in the feature matrix. The differentiator is how these platforms align with your strategy, your people, and your risk appetite, and whether you believe the partner behind it is one you can depend on.

Why This Matters Now

The market is in motion. VMware customers are forced to reconsider their roadmap due to the Broadcom acquisition and the associated licensing changes. Nutanix is positioning itself as a sovereign alternative with strong hybrid cloud credentials. Hyperscalers are pushing local zones and sovereign cloud initiatives.

In such a market, chasing simplistic comparisons is a waste of time. Enterprises should focus on long-term alignment with their cloud and data strategy. They should invest in platforms and partners that give them control, choice, and agility.

Final Thought

So let’s stop writing useless VMware vs. Nutanix comparisons. They don’t help anyone who actually has to make decisions at scale. Let’s raise the bar and bring back thought leadership to this industry. Share real experiences. Talk about strategy and outcomes. Show where platforms fit into the bigger picture of sovereignty, resilience, and execution. And most importantly: choose the partner you can trust to walk this path with you.

That is the conversation worth having. Everything else is just noise and bullshit.

Finally, People Start Realizing Sovereignty Is a Spectrum

For months and years, the discussion about cloud and digital sovereignty has been dominated by absolutes. It was framed as a black-and-white choice. Either you are sovereign, or you are not. Either you trust hyperscalers, or you don’t. Either you build everything yourself, or you hand it all over. But over the past two years, organizations, governments, and even the vendors themselves have started to recognize that this way of thinking doesn’t reflect reality. Sovereignty is seen as a spectrum now.

When I look at the latest Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure (DHI), this shift becomes even more visible. In the Leaders quadrant, we find AWS, Microsoft, Oracle, Broadcom (VMware), and Nutanix. Each of them is positioned differently, but they all share one thing in common. They now operate somewhere along this sovereignty spectrum. Some of them lean toward “full” sovereignty and some remain heavily dependent. The truth lies in between, and it is about how much control you want to retain versus how much you are willing to outsource. But it’s also possible to have multiple vendors and solutions co-existing.

Gartner MQ DHI 2025

The Bandwidth of Sovereignty

To make this shift more tangible, think of sovereignty as a bandwidth rather than a single point. On the far left, you give up almost all control and rely fully on global hyperscalers, following their rules, jurisdictions, and technical standards. On the far right, you own and operate everything in your data center, with full control but also full responsibility. Most organizations today are somewhere in between (using a mix of different vendors and clouds).

This bandwidth allows us to rate the leaders in the MQ not as sovereign or non-sovereign, but according to where they sit on the spectrum:

  • AWS stretches furthest toward global reach and scalability. They are still in the process of building a sovereign cloud, and until that becomes reality, none of their extensions (Outposts, Wavelength, Local Zones) can truly be seen as sovereign (please correct me if I am wrong). Their new Dedicated Local Zones bring infrastructure closer, but AWS continues to run the show. Meaning sovereignty is framed through compliance, not operational autonomy.

  • Microsoft sits closer to the middle. With Microsoft’s Sovereign Cloud initiatives in Europe, they acknowledge the political and regulatory reality. Customers gain some control over data residency and compliance, but the operational steering remains with Microsoft (except for their “Sovereign Private Cloud” offering, which consists of Azure Local + Microsoft 365 Local).

  • Oracle has its EU Sovereign Cloud, which is already available today, and with offerings like OCI Dedicated Region and Alloy that push sovereignty closer to customers. Still, these don’t offer operational autonomy, as Oracle continues to manage much of the infrastructure. For full isolation, Oracle provides Oracle Cloud Isolated Region and the smaller Oracle Compute Cloud@Customer Isolated (C3I). These are unique in the hyperscaler landscape and move Oracle further to the right.

  • Broadcom (VMware) operates in a different zone of the spectrum. With VMware’s Cloud Foundation stack, customers can indeed build sovereign clouds with operational autonomy in their own data centers. This puts them further right than most hyperscalers. But Gartner and recent market realities also show that dependency risks are not exclusive to AWS or Azure. VMware customers face uncertainty tied to Broadcom’s licensing models and strategic direction, which balances out their autonomy.

  • Google does not appear in the Leaders quadrant yet, but their Google Distributed Cloud (GDC) deserves mention. Gartner highlights how GDC is strategically advancing, winning sovereign cloud projects with governments and partners, and embedding AI capabilities on-premises. Their trajectory is promising, even if their current market standing hasn’t brought them into the top right yet.

  • Nutanix stands out by offering a comprehensive single product – the Nutanix Cloud Platform (NCP). Gartner underlines that NCP is particularly suited for sovereign workloads, hybrid infrastructure management, and edge multi-cloud deployments. Unlike most hyperscalers, Nutanix delivers one unified stack, including its “own hypervisor as a credible ESXi alternative”. That makes it possible to run a fully sovereign private cloud with operational autonomy, without sacrificing cloud-like agility and elasticity.

Why the Spectrum Matters

This sovereignty spectrum changes how CIOs and policymakers make decisions. Instead of asking “Am I sovereign or not?”, the real question becomes:

How far along the spectrum do I want to be and how much am I willing to compromise for flexibility, cost, or innovation?

It is no longer about right or wrong. Choosing AWS does not make you naive. Choosing Nutanix does not make you paranoid. Choosing Oracle does not make you old-fashioned. Choosing Microsoft doesn’t make you a criminal. Each decision reflects an organization’s position along the bandwidth, balancing risk, trust, cost, and control.

Where We Go From Here

The shift to this spectrum-based view has major consequences. Vendors will increasingly market not only their technology but also their place on the sovereignty bandwidth. Governments will stop asking for absolute sovereignty and instead demand clarity about where along the spectrum a solution sits. And organizations will begin to treat sovereignty not as a one-time decision but as a dynamic posture that can move left or right over time, depending on regulation, innovation, and geopolitical context.

The Gartner MQ shows that the leaders are already converging around this reality. The differentiation now lies in how transparent they are about it and how much choice they give their customers to slide along the spectrum. Sovereignty, in the end, is not a fixed state. It is a journey.