10 Things You Probably Didn’t Know About Nutanix

Nutanix is often described with a single word: HCI. That description is not wrong, but it is incomplete.

Over the last decade, Nutanix has evolved from a hyperconverged infrastructure (HCI) pioneer into a mature enterprise cloud platform that now sits at the center of many VMware replacement strategies, sovereign cloud designs, and edge architectures. Yet much of this evolution remains poorly understood, partly because old perceptions persist longer than technical reality.

Here are ten things about Nutanix that people often don’t know or underestimate.

1. Nutanix’s DNA is HCI, but the architecture has evolved beyond it

Nutanix was built on hyperconverged infrastructure. That heritage is important, because it shaped the platform’s operational model, automation mindset, and lifecycle discipline.

In recent years, Nutanix deliberately opened its architecture. Today, compute-only nodes are supported, enabled through partnerships with external storage vendors such as Dell (PowerStore support for Nutanix is expected to enter early access in spring 2026, with general availability in summer 2026) and Pure Storage (for now). This allows customers to decouple compute and storage where it makes architectural or economic sense, without abandoning the Nutanix control plane.

This is Nutanix acknowledging that real enterprise environments are heterogeneous, and that flexibility matters.

2. A Net Promoter Score above 90

Nutanix has reported a Net Promoter Score consistently above 90 for several years. In enterprise infrastructure, that number is almost unheard of.

NPS reflects how customers feel after deployment, during operations, upgrades, incidents, and daily use. In a market where infrastructure vendors are often tolerated rather than liked, this level of advocacy is remarkable and tells a story of its own.

It suggests that Nutanix’s real differentiation is not just technology, but operational experience. That tends to show up only once systems are running at scale.

3. Nutanix Kubernetes Platform runs almost everywhere

Nutanix Kubernetes Platform (NKP) is often misunderstood as “Kubernetes on Nutanix”. That is only partially true.

NKP can run on:

  • Bare metal
  • Nutanix AHV
  • VMware
  • Public cloud infrastructure

Nutanix Cloud Native Platform

NKP was designed to abstract infrastructure differences rather than enforce platform lock-in. For organizations that already operate mixed environments, or that want to transition gradually, this matters far more than ideological purity.

In practice, NKP becomes a control layer for Kubernetes. That is especially relevant in regulated or sovereign environments where infrastructure choices are often political as much as technical.

4. Nutanix has matured from “challenger” to enterprise-grade platform

It’s honest to acknowledge that Nutanix wasn’t always considered enterprise-ready. In its early years, the company was widely admired for innovation and simplicity, but many large organizations hesitated because the platform, like all young software, had feature gaps, stability concerns in some use cases, and a smaller track record with mission-critical workloads.

That landscape has changed significantly. Over the past several years, Nutanix has steadily strengthened every axis of its platform, from virtualization and distributed storage to Kubernetes, security, and operations at scale. The company’s most recent financial results show that this maturity isn’t theoretical. Fiscal 2025 delivered 18% year-over-year revenue growth, strong recurring revenue expansion, and thousands of new customers, including over 50 Global 2000 accounts, arguably Nutanix’s strongest annual new-logo performance in years.

What this means in practice is that many enterprises that once saw Nutanix as a “challenger” now see it as a credible and proven alternative to VMware, and not just in smaller or departmental deployments, but across core data center and hybrid cloud estates.

The old maturity gap has largely disappeared. What remains is a difference of philosophy. Nutanix prioritizes operational simplicity, flexibility, and choice, without compromising the robustness that large organizations demand. And with increasing adoption among Global 2000 enterprises, that philosophy is proving not only viable but competitive at the highest levels of IT decision-making.

5. The “Nutanix is expensive” perception is outdated and often wrong

The idea that Nutanix is more expensive than competitors is one of the most persistent myths in the market. It was shaped by early licensing models and by superficial price comparisons that ignored operational and architectural differences.

Today, Nutanix offers multiple licensing models, including options that other vendors simply do not have.

For example, NCI-VDI for Citrix or Omnissa environments is licensed based on concurrent users (CCU) rather than physical CPU cores. That aligns cost directly with usage and not hardware density.

Even more interesting is NCI Edge, which is designed for distributed environments with smaller footprints, such as remote office/branch office (ROBO) deployments. It is licensed per virtual machine, with clear boundaries:

  • Maximum of 25 VMs per cluster
  • Maximum of 96 GB RAM per VM

Consider a realistic example. An organization runs 250 edge sites. Each site has a 3-node cluster with 32 cores per node and hosts 20 VMs:

  • A core-based model would require licensing 24,000 cores
  • With NCI Edge, the customer licenses 5,000 VMs

This fundamentally changes the cost structure of edge and remote deployments. In a traditional core-based licensing model, effective costs might range from $100 to $140 per core for edge nodes. With NCI Edge, the effective per-core cost can drop to $60-80 (illustrative figures). This is not a marginal optimization; it is a massive difference.
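
To make the numbers concrete, here is a minimal back-of-the-envelope sketch in Python. The per-core and per-VM prices are illustrative assumptions consistent with the ranges above, not actual Nutanix list prices.

  # Illustrative comparison: core-based licensing vs. NCI Edge (per-VM licensing).
  # All prices are assumptions for illustration only, not actual Nutanix pricing.

  SITES = 250              # edge sites
  NODES_PER_SITE = 3       # 3-node cluster per site
  CORES_PER_NODE = 32
  VMS_PER_SITE = 20        # stays within the 25-VM-per-cluster NCI Edge limit

  PRICE_PER_CORE = 120     # assumed $/core for a core-based model
  PRICE_PER_VM = 350       # assumed $/VM for NCI Edge

  total_cores = SITES * NODES_PER_SITE * CORES_PER_NODE   # 24,000 cores
  total_vms = SITES * VMS_PER_SITE                        # 5,000 VMs

  core_based_cost = total_cores * PRICE_PER_CORE
  edge_cost = total_vms * PRICE_PER_VM

  print(f"Core-based licensing: {total_cores:,} cores -> ${core_based_cost:,}")
  print(f"NCI Edge licensing:   {total_vms:,} VMs    -> ${edge_cost:,}")
  print(f"Effective cost per core under NCI Edge: ${edge_cost / total_cores:.2f}")

With these assumed prices, the core-based model lands at $2.88M while NCI Edge lands at $1.75M, an effective cost of roughly $73 per core, which is exactly the kind of delta described above.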

Note: NCM Edge is a product that provides the same capabilities as NCM for edge use cases. NCM Edge is also limited to a maximum of 25 VMs per cluster.

6. Almost 90% of Nutanix customers now use AHV

Nutanix has always been fundamentally about HCI and AOS (Acropolis Operating System). From the beginning, the value was never the hypervisor itself, but the distributed storage, data services, and operational model built on top of it. Over time, Nutanix came to a clear conclusion: the hypervisor should be a commodity, not the value anchor of the platform. Out of this thinking emerged the perception, and later the expression, that AHV is “free”.


Today, AHV has become the dominant deployment model in the Nutanix ecosystem, with an adoption rate of 88%. This matters for two important reasons. First, it disproves the assumption that customers need to be pushed or incentivized to move to AHV. Second, it demonstrates that AHV is trusted to run mission-critical workloads at scale, across enterprises and service providers.

7. Nutanix is 100% channel-led

Nutanix does not sell directly to customers (there are, of course, some exceptions). It is a channel-led vendor by design, and that decision fundamentally shapes how the company operates in the market. Channel commitment at Nutanix is a structural principle.

Partners are not treated as a fulfillment layer or a transactional necessity. They are core to how Nutanix delivers value – from architecture design and implementation to day-two operations, managed services, and long-term customer success. As a result, Nutanix has built one of the strongest partner and service provider ecosystems in the industry, with clear incentives, predictable rules, and room for partners to build sustainable businesses.

This stands in sharp contrast to the current direction of some other infrastructure vendors, where channel models have become more restrictive, less transparent, and increasingly centered around direct control. In that environment, partners often struggle with margin pressure, reduced influence, and uncertainty about their long-term role.

Nutanix takes a different approach. By staying channel-led, it enables local expertise, regional sovereignty, and trusted delivery models, which are especially critical in public sector, regulated industries, and markets where locality and compliance matter as much as technology.

8. MST and Cloud-Native AOS show how far Nutanix has moved beyond classic HCI

Most people associate Nutanix AOS with hyperconverged infrastructure and VM-centric deployments. What is far less known is how deeply Nutanix has evolved its data platform to address multi-cloud and cloud-native architectures.

One example is MST (Multi-Cloud Snapshot Technology). MST enables application-consistent snapshots to be replicated across heterogeneous environments, including on-premises infrastructure and public clouds. Unlike traditional disaster-recovery approaches that assume identical infrastructure on both sides, MST is designed for asymmetric, real-world scenarios. This makes it possible to use the public cloud as a recovery or failover target without re-architecting workloads or maintaining a second, identical private environment. 

MST diagram

In parallel, Nutanix has introduced Cloud Native AOS, which brings enterprise-grade storage and data services directly into Kubernetes environments. Instead of tying storage to virtual machines or specific infrastructure stacks, Cloud Native AOS runs as a Kubernetes-native service and can operate across diverse platforms. This allows stateful applications to benefit from Nutanix data services, such as snapshots, replication, and resilience, without forcing teams back into VM-centric models.

Together, MST and Cloud-Native AOS illustrate an important point. Nutanix is not simply extending HCI into new form factors. It is re-architecting core data services to work across clouds, infrastructures, and application models. These capabilities are often overlooked, but they are strong indicators of where the platform is heading — toward data mobility, resilience, and consistency across increasingly fragmented environments.

EKS Cluster

9. Nutanix SaaS without forcing SaaS

Nutanix offers SaaS-based services such as Data Lens and Nutanix Central. These services are also available on-premises, including for air-gapped environments.

This dual-delivery model recognizes that not all customers can or should consume control planes as public SaaS. 

10. Nutanix has more than a decade of real-world experience replacing VMware

Nutanix has operated alongside VMware for more than ten years, in many cases within the same environments. As a result, replacing vSphere is not a new ambition or a reactive strategy for Nutanix. It is just a long-standing and proven reality.

Equally important is the migration experience. Nutanix Move was built specifically to address one of the most critical challenges in any platform transition: getting workloads across safely, predictably, and at scale. Move supports migrations from vSphere, Hyper-V, AWS, and other environments, enabling phased, low-risk transitions rather than disruptive “big bang” projects. Beyond workload migration, Move can also translate NSX network and security policies into Nutanix Flow, addressing one of the most commonly cited blockers in VMware exit strategies.

Nutanix has spent more than a decade refining these aspects across thousands of customer environments, which is why many organizations today view it as a credible, de-risked alternative for the long term.

Conclusion

For organizations reassessing their infrastructure strategy, whether driven by VMware uncertainty, edge expansion, regulatory pressure, or cloud cost realities, Nutanix should be at the top of your list. It is a proven platform with a clear philosophy, a growing enterprise footprint, and more than a decade of hard-earned experience. If Nutanix is still on your shortlist as “HCI”, it may be time to look again, and this time at the full picture! 🙂

Cloud Repatriation and the Growth Paradox of Public Cloud IaaS

Over the past two years, a new narrative has taken hold in the cloud market. No, it is not always about sovereign cloud. 🙂 Headlines talk about cloud repatriation – nothing really new, but it is still out there. CIOs speak openly about pulling some workloads back on-premises. Analysts write about organizations “correcting” some earlier cloud decisions to optimize cloud spend. In parallel, hyperscalers themselves now acknowledge that not every workload belongs in the public cloud.

And yet, when you look at the data, you will find a paradox.

IDC and Gartner both project strong, sustained growth in public cloud IaaS spending over the next five years. This is not marginal growth or a sign of stagnation, but a market that continues to expand at scale, absorbing more workloads, more budget, and more strategic relevance every year.

At first glance, these two trends appear contradictory. If organizations are repatriating workloads, why does public cloud IaaS continue to grow so aggressively? The answer lies in understanding what is actually being repatriated, what continues to move to the cloud, and how infrastructure constraints are reshaping decision-making in ways that are often misunderstood.

Cloud Repatriation Is Real, but Narrower Than the Narrative Suggests

Cloud repatriation is not a myth. It is happening, but it is also frequently misinterpreted.

Most repatriation initiatives are highly selective. They focus on predictable, steady-state workloads that were lifted into the public cloud under assumptions that no longer hold. Cost transparency has improved, egress fees are better understood and operating models have matured. What once looked flexible and elastic is now seen as expensive and operationally inflexible for certain classes of workloads.

What is rarely discussed is that repatriation does not mean “leaving the cloud”, and it bears repeating: it means rebalancing. Organizations are not abandoning public cloud IaaS as a concept; they are refining how they use it.

At the same time, some new workloads continue to flow into public cloud environments. Digital-native applications, analytics platforms, some AI pipelines, globally distributed services, and short-lived experimental environments still align extremely well with public cloud economics and operating models. These workloads were not part of the original repatriation debate, and they seem to be growing faster than traditional workloads are being pulled back.

This is how both statements can be true at the same time. Cloud repatriation exists, and public cloud IaaS continues to grow.

The Structural Drivers Behind Continued IaaS Growth

Public cloud IaaS growth is not driven by blind enthusiasm anymore. It is driven by structural forces that have little to do with fashion and everything to do with constraints.

One of the most underestimated factors is time. Building infrastructure takes time, procuring hardware takes time, and scaling data centers takes time. Many organizations today are not choosing public cloud because it is cheaper or “better”, but because it is available now.

This becomes even more apparent when looking at the hardware market right now.

Hardware Shortages and Rising Server Prices Change the Equation

The infrastructure layer beneath private clouds has suddenly become a bottleneck. Server lead times have increased, GPU availability is constrained and prices for enterprise-grade hardware continue to rise, driven by supply chain pressures, higher component costs, and growing demand from AI workloads.

For organizations running large environments, this introduces a new type of risk. Capacity planning is now a logistical problem, not just a financial exercise. Even when budgets are approved, hardware may not arrive in time. That is the new reality.

In this context, public cloud data centers represent something extremely valuable: pre-existing capacity. Hyperscalers have already made the capital investments and they already operate at scale. From the customer perspective, infrastructure suddenly looks abundant again.

This is why many organizations are currently considering shifting workloads to public cloud IaaS, even if they were previously skeptical. It has become a pragmatic response to scarcity.

The Flawed Assumption: “Just Use Public Cloud Instead of Buying Servers”

However, this line of thinking often glosses over a critical distinction.

Many of these organizations do not actually want “cloud-native” infrastructure, if we are being honest. What they want is physical capacity – compute, storage, and networking with predictable performance characteristics. In other words, they want VMs and bare metal.

Buying servers allows organizations to retain architectural freedom. It allows them to choose their operating system or virtualization stack, their security model, their automation tooling, and their lifecycle strategy. Public cloud IaaS, by contrast, delivers abstraction, but at the cost of dependency.

When organizations consume IaaS services from hyperscalers, they implicitly accept constraints around instance types, networking semantics, storage behavior, and pricing models. Over time, this shapes application architectures and operational processes, and the use of such services quietly becomes a form of lock-in.

Bare Metal in the Public Cloud Is Not a Contradiction

Interestingly, the industry has started to converge on a hybrid answer to this dilemma: bare metal in the public cloud.

Hyperscalers themselves offer bare-metal services. This is an acknowledgment that not all customers want fully abstracted IaaS. Some want physical control without owning physical assets. It is as simple as that.

But bare metal alone is not enough. Without a consistent cloud platform on top, bare-metal in the public cloud becomes just another silo. You gain performance and isolation, but you lose portability and operational consistency.

Nutanix Cloud Clusters and the Reframing of IaaS

Nutanix Cloud Platform running on AWS, Azure, and Google Cloud through NC2 (Nutanix Cloud Clusters) introduces a different interpretation of public cloud IaaS.

Instead of consuming hyperscaler-native IaaS primitives, customers deploy a full private cloud stack on bare-metal instances in public cloud data centers. From an architectural perspective, this is a subtle but profound difference.

Customers still benefit from the hyperscaler’s global footprint and hardware availability and they still avoid long procurement cycles, but they do not surrender control of their cloud operating model. The same Nutanix stack runs on-premises and in public cloud, with the same APIs, the same tooling, and the same governance constructs.

Workload Mobility as the Missing Dimension

The most underappreciated benefit of this approach is workload mobility.

In a cloud-native bare-metal deployment tied directly to hyperscaler services, workloads tend to become anchored, migration becomes complex, and exit strategies are theoretical at best.

With NC2, workloads are portable by design. Virtual machines and applications can move between on-premises environments and public cloud (or a service provider cloud) bare-metal clusters without refactoring. In practical terms, this means organizations can use public cloud capacity tactically rather than strategically committing to it. Capacity shortages, temporary demand spikes, regional requirements, or regulatory constraints can be addressed without redefining the entire infrastructure strategy.

This is something traditional IaaS does not offer, and something pure bare-metal consumption does not solve on its own.

Reconciling the Two Trends

When viewed through this lens, the contradiction between cloud repatriation and public cloud IaaS growth disappears.

Public cloud is growing because it solves real problems: availability, scale, and speed. Repatriation is happening because not all problems require abstraction, and not all workloads benefit from cloud-native constraints.

The future is not a reversal of cloud adoption. It is a maturation of it.

Organizations are asking how to use public clouds without losing control. Platforms that allow them to consume cloud capacity while preserving architectural independence are not an alternative to IaaS growth; they are one of the reasons that growth can continue without triggering the next wave of regret-driven repatriation.

What complicates this picture further is that even where public cloud continues to grow, many of its original economic promises are now being questioned again.

The Broken Promise of Economies of Scale

One of the foundational assumptions behind public cloud adoption was economies of scale. The logic seemed sound. Hyperscalers operate at a scale no enterprise could ever match. Massive data centers, global procurement power, highly automated operations. All of this was expected to translate into continuously declining unit costs, or at least stable pricing over time.

As we now know, that assumption has not materialized.

If economies of scale were truly flowing through to customers, we would not be witnessing repeated price increases across compute, storage, networking, and ancillary services. We would not see new pricing tiers, revised licensing constructs, or more aggressive monetization of previously “included” capabilities. The reality is that public cloud pricing has moved in one direction for many workloads, and that direction is up.

This does not mean hyperscalers are acting irrationally. It means the original narrative was incomplete. Yes, scale does reduce certain costs, but it also introduces new ones. That is also true for new innovations and services. Energy prices, land, specialized hardware, regulatory compliance, security investments, and the operational complexity of running globally distributed platforms all scale accordingly. Add margin expectations from capital markets, and the result is not a race to the bottom, but disciplined price optimization.

For customers, however, this creates a growing disconnect between expectation and reality.

When Forecasts Miss Reality

More than half of organizations report that their public cloud spending diverges significantly from what they initially planned. In many cases, the difference is not marginal. Budgets are exceeded, cost models fail to reflect real usage patterns, and optimization efforts lag behind application growth.

What is often overlooked is the second-order effect of this divergence. Over a third of organizations report that cloud-related cost and complexity issues directly contribute to delayed projects. Migration timelines slip, modernization initiatives stall, and teams slow down not because technology is unavailable, but because financial and operational uncertainty creeps into every decision.

Commitments, Consumption, and a Structural Risk

Most large organizations do not consume public cloud on a purely on-demand basis. They negotiate commitments, reserved capacity, and spend-based discounts. These are strategic agreements designed to lower unit costs in exchange for predictable consumption.

These agreements assume one thing above all else: that workloads will move. They HAVE TO move.

When migrations slow down, a new risk emerges: organizations fail to reach their committed consumption levels because they cannot move workloads fast enough. Legacy architectures, migration complexity, skill shortages, and governance friction all play a role.

The consequence is subtle but severe. Committed spend still has to be paid, and future negotiations start from a weaker position. The organization enters the next contract cycle with a track record of underconsumption, reduced leverage, and less credibility in forecasting.

In effect, execution risk turns into commercial risk.
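
A simple, purely hypothetical calculation shows how quickly this erodes the economics. The commitment, discount, and consumption figures below are assumptions for illustration, not data from any specific agreement.

  # Hypothetical illustration: how underconsumption turns a discount into a premium.
  # All figures are assumptions, not data from any real agreement.

  committed_spend = 10_000_000   # $ paid per year under the committed agreement
  discount = 0.25                # negotiated discount versus on-demand (list) pricing

  # List-price value of the capacity the commitment entitles the customer to consume.
  entitled_list_value = committed_spend / (1 - discount)   # ~$13.3M

  # List-price value of what was actually consumed (migrations ran slower than planned).
  consumed_list_value = 8_000_000

  effective_rate = committed_spend / consumed_list_value   # $ paid per $1 of list value used

  print(f"Entitled consumption at list price: ${entitled_list_value:,.0f}")
  print(f"Actually consumed at list price:    ${consumed_list_value:,}")
  print(f"Effective cost per $1 of list-price usage: ${effective_rate:.2f}")
  # Result: $1.25 paid per $1 of usage, so the negotiated 25% discount
  # has effectively become a 25% premium over on-demand pricing.

In this scenario, the organization pays $1.25 for every dollar of capacity it actually uses, and the slower the migration, the worse that ratio gets.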

This dynamic is rarely discussed publicly, but it is increasingly common in private conversations with CIOs and cloud leaders. The challenge is no longer whether the public cloud can scale, but whether the organization can.

Speed of Migration as an Economic Variable

At this point, migration speed stops being a technical metric and becomes an economic one. The faster workloads can move, the faster negotiated consumption levels can be reached. The slower they move, the more value leaks out of cloud agreements.

This is where many cloud-native migration approaches struggle. Refactoring takes time and re-architecting applications is expensive. Not every workload is a candidate for transformation under real-world constraints.

As a result, organizations are caught between two pressures. On one side, the need to consume public cloud capacity they have already paid for. On the other, the inability to move workloads quickly without introducing unacceptable risk.

NC2 as a Consumption Accelerator, Not a Shortcut

This is where Nutanix Cloud Platform with NC2 changes the conversation.

By allowing organizations to run the same private cloud stack on bare metal in AWS, Azure, and Google Cloud, NC2 removes one of the biggest bottlenecks in migration programs: The need to change how workloads are built and operated before they can move.

Workloads can be migrated as they are, operating models remain consistent, governance does not have to be reinvented, and teams do not need to learn a new infrastructure paradigm under time pressure. It’s all about efficiency and speed.

Faster migrations mean workloads start consuming public cloud capacity earlier and the negotiated consumption targets suddenly become achievable. Commitments turn into realized value rather than sunk cost, and the organization regains control over both its migration timeline and its commercial position.

Reframing the Role of Public Cloud

In this context, NC2 is not an alternative to public cloud economics, but a mechanism to actually realize them.

Public cloud providers assume customers can move fast. In reality, many customers cannot, not because they resist change, but because change takes time. Platforms that reduce friction between private and public environments do not undermine cloud strategies; they stabilize them.

The uncomfortable truth is that economies of scale alone do not guarantee better outcomes for customers; execution does. And execution, in large enterprises, depends less on ideal architectures and more on pragmatic paths that respect existing realities.

When those paths exist, public cloud growth and cloud repatriation stop being opposing forces. They become two sides of the same maturation process, one that rewards platforms designed not just for scale, but for transition.

Nutanix’s EUC Stack Reduces TCO and Improves ROI

Virtual Desktop Infrastructure (VDI) has always been a conservative technology. It sits close to users, productivity, and operational risk. For years, the dominant conversation revolved around brokers, protocols, and user experience. Today, that conversation is shifting more towards licensing, platform dependency, roadmap uncertainty and support models. Even product availability is becoming the real driver behind VDI decisions.

The recent announcements from Omnissa clearly reflect this shift. Horizon 8 is now generally available on Nutanix AHV, opening a long-awaited alternative virtualization path for enterprise-grade VDI.

VMware vSphere Foundation for VDI (VVF for VDI)

The combined Omnissa Horizon and VMware vSphere Foundation for VDI offering responds to a very real customer desire for simplification. For organizations already standardized on VMware technologies, fewer contracts and a predefined bundle feel familiar and operationally convenient.

Broadcom has announced the discontinuation of VMware vSphere Foundation in specific countries and regions, most notably in parts of EMEA. The decision does not apply globally (yet), but it is explicit, regional, and commercially binding for affected markets. Availability is no longer uniform, and customers must now verify on a country-by-country basis whether VMware vSphere Foundation (VVF) can still be procured.

It is important to be precise, though. The recent discontinuation of VVF applies only to specific countries and, as of today, it seems it does not include VVF for VDI for existing Omnissa customers. Horizon customers can still consume VVF for VDI in those environments, and there has been no formal announcement that this specific bundle will be withdrawn.

At the same time, it would be naive to ignore the broader context. VVF for VDI ultimately depends on the commercial and strategic relationship between Omnissa and Broadcom. Omnissa does not fully control the underlying hypervisor roadmap, its regional availability, or future sales policies. Any material change requires negotiation between two vendors with different priorities and incentives.

Currently, VVF for VDI, which can be bundled with Horizon, includes vSphere 8.  Support for the vSphere portion of the bundle will continue to be provided by VMware by Broadcom. The bundled offerings are available to be purchased in up to 5-year terms (restrictions may apply).  Subject to their general terms, Broadcom/VMware will provide vSphere 8 for the period of the license that a customer has purchased. Broadcom has not yet announced timelines for vSphere 9 support with VVF for VDI.  We are working with Broadcom to enable VVF for VDI in an upcoming vSphere 9.x release, but no date has yet been committed. If a customer has a current requirement to move to vSphere 9, they will need to buy VCF or VVF separately from Broadcom.

Recent decisions around VVF in parts of EMEA illustrate this clearly. Even if VVF for VDI remains available today, customers are implicitly betting on the continued alignment between Omnissa and Broadcom. Packaging may simplify procurement in the short term, but it also concentrates dependency at the most critical layer of the stack. For VDI environments, where stability and predictability are non-negotiables, this dependency becomes an integral part of the risk assessment.

Why This Context Matters for NCI-VDI

This is where Nutanix Cloud Infrastructure for VDI starts to look less like an alternative and more like a structurally safer choice.

With Horizon supported on AHV, customers can decouple broker choice from hypervisor dependency. But the value goes beyond commercial optionality. It also shows up in how the platform behaves operationally.

Omnissa Horizon Agents on Nutanix AHV

Enhanced refresh workflows introduce recovery points for desktop refresh operations. Instead of rebuilding or troubleshooting desktops under pressure, IT teams gain a practical rollback mechanism. It is essentially an undo button for virtual desktops, reducing downtime, simplifying remediation, and improving resilience for business continuity scenarios.

GPU-accelerated VDI is another area where the platform advantage becomes tangible. Managed NVIDIA vGPU support is integrated into compute profiles for Horizon workloads. GPU profiles are no longer an afterthought or a separate administrative domain. This makes it significantly easier to deliver high-performance virtual workstations for AI, design, healthcare imaging, or analytics workloads, while reducing operational complexity for administrators.

For environments relying on RDSH, NCI-VDI now brings full automation for farms, published desktops, and applications. Farm creation, scaling, and app publishing no longer require manual orchestration. 

ClonePrep customization completes the picture. Virtual machines can be customized rapidly during pool or farm creation, giving IT teams precise control over how desktops are initialized. Configurations remain consistent across pools, while still allowing organizational requirements to be enforced centrally.

These are the current Nutanix configuration maximums for AHV:

  • Cluster size – maximum of 32 AHV hosts per cluster
  • VMs per host – maximum of 200 powered-on VDI VMs per AHV node
  • VMs per cluster – maximum of 4,096 powered-on VMs per AHV cluster

Note: The Horizon 8 reference architecture for AHV deployments is available here.
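
As a quick sanity check on how these maximums interact, the sketch below computes how many clusters a given desktop count requires. The desktop count is a hypothetical input, and the calculation ignores the failover/HA headroom a real design would reserve.

  # Quick sizing check against the AHV maximums listed above.
  # Hypothetical desktop count; no N+1 / failover headroom is reserved here.

  MAX_HOSTS_PER_CLUSTER = 32
  MAX_VDI_VMS_PER_HOST = 200
  MAX_VMS_PER_CLUSTER = 4096

  def clusters_needed(desktops: int, hosts_per_cluster: int = MAX_HOSTS_PER_CLUSTER) -> int:
      """Minimum clusters for a desktop count, respecting host and cluster VM maximums."""
      by_host_limit = hosts_per_cluster * MAX_VDI_VMS_PER_HOST         # 32 * 200 = 6,400
      effective_per_cluster = min(by_host_limit, MAX_VMS_PER_CLUSTER)  # 4,096 binds first
      return -(-desktops // effective_per_cluster)                     # ceiling division

  # Example: 10,000 desktops on full 32-node clusters.
  print(clusters_needed(10_000))   # -> 3

The interesting detail is that on a full 32-node cluster, the 4,096 VMs-per-cluster limit binds before the 6,400 VMs implied by the per-host limit, so cluster count, not host count, is usually the first constraint for large VDI estates.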

Licensing That Reflects How VDI Is Actually Used

Licensing discussions around VDI often focus narrowly on user counts and price points. What is frequently overlooked is what is not included, as well as the architectural assumptions that are quietly embedded in the bundle.

VVF for VDI, whether consumed by Omnissa Horizon customers or Citrix customers (regular VVF), does not include NSX and its distributed firewalling capabilities. Network micro-segmentation, east-west traffic control, and fine-grained security policies are not part of the VVF for VDI entitlement. Customers that require these capabilities must either accept architectural gaps or upgrade to VMware Cloud Foundation (VCF).

NCI-VDI approaches this differently, particularly in the Ultimate edition. Licensing remains per concurrent user, pooled and based on the highest usage, but the functional scope expands in a way that directly impacts architecture and resilience.

With NCI-VDI Ultimate, customers gain native micro-segmentation capabilities as part of the platform. Security is enforced at the workload level without relying on an external networking stack or add-on products. For VDI environments, especially in regulated or multi-tenant scenarios, this enables consistent isolation between desktop pools, user groups, and supporting services without introducing operational complexity.

Replication and availability are another area where licensing and architecture intersect. NCI-VDI Ultimate includes advanced replication capabilities, including metro availability as well as Async DR and NearSync replication.

The key point here is alignment. Licensing reflects how VDI is actually used in production, including security boundaries within the platform, continuous availability expectations, and the need to protect stateful desktops without redesigning the entire environment. When these capabilities are included by design, TCO becomes more predictable and ROI improves over the full lifecycle.

Storage Included

User data has always been one of the hidden cost drivers in VDI projects. Profiles, documents, shared data, and application artifacts often introduce additional products, licenses, and operational silos.

With NCI-VDI, up to 100 GiB of Nutanix Unified Storage (NUS) per user is included and pooled. Home directories, profile data, shared file services, or other workloads can all be covered without introducing a separate storage platform.

Nutanix Unified Storage (NUS) is a software-defined storage platform that consolidates file, object, and block storage into a single platform. Integrated with Nutanix hyperconverged infrastructure (HCI), NUS enhances the security and performance of virtual desktops and applications while simplifying storage administration. Teams can manage and control all file, object, and block data in one place, both on-premises and in public clouds such as AWS and Azure.
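
As a rough illustration of what the pooled entitlement means in practice (the user count below is a hypothetical assumption), a mid-sized deployment already yields a substantial shared pool:

  # Pooled NUS entitlement included with NCI-VDI: up to 100 GiB per licensed user.
  # The user count is a hypothetical assumption for illustration.

  licensed_users = 5_000
  nus_per_user_gib = 100

  pooled_gib = licensed_users * nus_per_user_gib
  pooled_tib = pooled_gib / 1024

  print(f"Pooled NUS entitlement: {pooled_gib:,} GiB (~{pooled_tib:.0f} TiB)")
  # ~488 TiB that can be carved up across home directories, profiles,
  # and shared file services without a separate storage product.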

Again, fewer products and fewer operational boundaries translate directly into lower TCO.

Support Models Matter When VDI Becomes Business-Critical

Support is rarely part of the initial VDI design discussion. It usually becomes relevant when something breaks or when an upgrade behaves differently than expected.

In the VMware vSphere Foundation model, support is typically delivered through distributors and channel partners. While many partners do excellent work, this structure introduces an additional layer between the customer and the platform vendor. When issues span multiple layers, including broker, hypervisor, and storage, responsibility can become fragmented.

With NCI-VDI, customers running Horizon or Citrix on AHV engage directly with Nutanix for the infrastructure layer. Compute, storage, and virtualization are owned by a single support organization with a Net Promoter Score (NPS) consistently above 90.

Fewer handoffs, faster root-cause analysis, and clearer accountability directly improve operational efficiency and ROI.

Compliance Without Disruption – A Public-Sector Perspective

For healthcare organizations and federal agencies, licensing compliance is a continuity topic. Clinical systems and public services cannot be interrupted because of a licensing issue.

With NCI-VDI, license enforcement preserves operational continuity. Existing workloads continue to run even if a customer temporarily falls out of compliance. There is no forced shutdown and no service interruption.

Restrictions apply elsewhere, such as cluster expansion, access to support, management UIs, or upgrades and patches. Compliance remains enforceable, but without turning it into an operational incident. For public sector environments, this behavior is essential.

Closing Thought

VDI is no longer just about delivering desktops and virtual applications. It has become a platform decision that directly affects cost control, resilience, compliance, and long-term autonomy. Combined offerings like VVF for VDI may simplify procurement in the short term, but they also increase dependency at the most critical layer of the stack, a layer that recent changes have shown can shift regionally, commercially, and strategically.

Nutanix does not force customers into a single broker strategy: Horizon runs on AHV, and Citrix remains a long-standing partner. The broker is important, but it is not where most long-term cost, risk, and complexity accumulate. The real differentiation lies below the broker layer.

When compute, storage, virtualization, security, and availability are delivered as one integrated platform, TCO drops almost naturally. Fewer vendors reduce dependency risk, and fewer dependencies reduce roadmap uncertainty. Lastly, fewer handoffs reduce operational friction. Together, these effects compound over time and translate directly into a higher return on investment.

By giving customers the freedom to decouple broker choice from hypervisor dependency, embedding security and availability into the platform, and aligning licensing with how VDI is actually used in production, Nutanix NCI-VDI reduces TCO in ways that only become fully visible over multiple renewal cycles.

Multi-cloud is normal in public cloud. Why is “single-cloud” still normal in private cloud?

If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management and a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organizations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organizations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervisors, for example, treat cloud as an outsourcing risk and expect institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons: jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors, they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor’s strategic direction. What does it mean for a nation when 80 or 90% of its public sector depends on a single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are being evaluated again where simplicity and cost control dominate. Proxmox is discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is being re-examined where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Nutanix should not be viewed primarily as a replacement for VMware

Public sector organizations rarely change infrastructure platforms lightly. Stability, continuity, and operational predictability matter more than shiny, modern solutions. Virtual machines became the dominant abstraction because they allowed institutions to standardize operations, separate applications from hardware, and professionalize IT operations over the long term.

Over many years, VMware became synonymous with this VM-centric operating model, as it provided a coherent, mature, and widely adopted implementation of virtualized infrastructure. Choosing VMware was, for a long time, a rational and defensible decision.

Crucially, the platform was modular. Organizations could adopt it incrementally, integrate it with existing tools, and shape their own operating models on top of it. This modularity translated into operational freedom. Institutions retained the ability to decide how far they wanted to go, which components to use, and which parts of their environment should remain under their direct control. These characteristics explain why VMware became the default choice for so many public institutions. It aligned well with the values of stability, proportionality, and long-term accountability.

The strategic question public institutions face today is not whether that decision was wrong, but rather what they can learn from it. We need to ask whether the context around that decision has changed and whether continuing along the same platform path still preserves long-term control, optionality, and state capability.

From VM-centric to platform-path dependent

It is important to be precise in terminology. Most public sector IT environments are not VMware-centric by design. They are VM-centric. Virtual machines are the core operational unit, deeply embedded in processes, tooling, skills, and governance models. This distinction is very important. A VM-centric organization can, in principle, operate on different platforms without redefining its entire operating model. A VMware-centric organization, by contrast, has often moved further down a specific architectural path by integrating tightly with proprietary platform services, management layers, and bundled stacks that are difficult to disentangle later.

This is where the strategic divergence begins.

Over time, VMware’s platform has evolved from a modular virtualization layer into an increasingly integrated software-defined data center (SDDC) and VCF-oriented (VMware Cloud Foundation) stack. That evolution is not inherently negative. Integrated platforms can deliver efficiencies and simplified operations, but they also introduce path dependency. Decisions made today shape which options remain viable tomorrow.

So, the decisive factor is not pricing. Prices change. For public institutions, this is a governance issue (not a technical one).

There is a significant difference between organizations that adopted VMware primarily as a hypervisor platform and those that fully embraced the SDDC or VCF vision.

Institutions that did not fully commit to VMware’s integrated SDDC approach often still retain architectural freedom. Their environments are typically characterized by:

  • A strong focus on virtual machines rather than tightly coupled platform services
  • Limited dependency on proprietary automation, networking, or lifecycle tooling
  • Clear separation between infrastructure, operations, and higher-level services

For these organizations, the operational model remains transferable. Skills, processes, and governance structures are not irreversibly bound to a single vendor-defined stack. This has two important consequences.

First, technical lock-in can still be actively managed. The platform does not yet dictate the future architecture. Second, the total cost of change remains realistic. Migration becomes a controlled evolution rather than a disruptive transformation.

In other words, the window for strategic choice is still open.

Why this moment matters for the public sector

Public institutions operate under conditions that differ fundamentally from those of private enterprises. Their mandate is not limited to efficiency, competitiveness, or short-term optimization. Instead, they are entrusted with continuity, legality, and accountability over long time horizons. Infrastructure decisions made today must still be explainable years later, often to different audiences and under very different political circumstances. They must withstand audits, parliamentary inquiries, regulatory reviews, and shifts in leadership without losing their legitimacy.

This requirement fundamentally changes how technology choices must be evaluated. In the public sector, infrastructure is an integral part of the institutional framework that enables the state to function effectively. Decisions are therefore judged not only by their technical benefits and performance, but by their long-term defensibility. A solution that is efficient today but difficult to justify tomorrow represents a latent risk, even if it performs flawlessly in day-to-day operations.

It is within this context that the concept of digital sovereignty has moved from abstraction to obligation. Governments increasingly define digital sovereignty not as isolation or technological nationalism, but as the capacity to maintain control over their environments and the freedom to act within them. This includes the ability to reassess vendor relationships, adapt sourcing strategies, and respond to geopolitical, legal, or economic shifts without being forced into reactive or crisis-driven decisions.

Digital sovereignty, in this sense, is closely tied to governance and control. It is about ensuring that institutions retain the ability to make informed, deliberate choices over time. That ability depends less on individual technologies and more on the structural properties of the platforms on which those technologies are built. When platforms are designed in ways that limit flexibility, they quietly constrain future options, regardless of their current performance or feature set.

Platform architectures that reduce reversibility are particularly problematic in the public sector. Reversibility does not imply constant change, nor does it require frequent platform switches. It simply means that change remains possible without disproportionate disruption. When an architecture makes it technically or organizationally prohibitive to adjust course, it creates a form of lock-in that extends beyond commercial dependency into the realm of institutional risk.

Even technically advanced platforms can become liabilities if they harden decisions that should remain open. Tight coupling between components, inflexible operational models, or vendor-defined evolution paths may simplify operations in the short term, but they do so at the cost of long-term flexibility. In public institutions, where the ability to adapt is inseparable from democratic accountability and legal responsibility, this trade-off must be examined with particular care.

Ultimately, digital sovereignty in the public sector is about ensuring that those dependencies remain governable. Platforms that preserve reversibility support this goal by allowing institutions to evolve deliberately, rather than react under pressure. Platforms that erode it may function well today, but they quietly accumulate strategic risk that only becomes visible when options have already narrowed.

Seen through this lens, digital sovereignty is a core governance requirement, embedded in the responsibility of public institutions to remain capable, accountable, and in control of their digital future.

Nutanix as a strategic inflection point

This is why Nutanix should not be viewed primarily as a replacement for VMware. Framing it as such immediately steers the discussion in the wrong direction. Replacements imply disruption, sunk costs, and, perhaps most critically in public-sector and enterprise contexts, an implicit critique of past decisions. Infrastructure choices, especially those made years ago, were often rational, well-founded, and appropriate for their time. Suggesting that they now need to be “replaced” risks triggering defensiveness and obscures the real strategic question.

More importantly, the replacement narrative fails to capture what Nutanix actually represents for VM-centric organizations. Nutanix does not demand a wholesale change in operating philosophy. It does not require institutions to abandon virtual machines, rewrite operational playbooks, or dismantle existing governance structures. On the contrary, it deliberately aligns with the VM-centric operating model that many public institutions and enterprises have refined over years of practice.

For this reason, Nutanix is better understood as a strategic inflection point. It marks a moment at which organizations can reassess their platform trajectory without invalidating the past. Virtual machines remain first-class citizens, operational practices remain familiar, and roles, responsibilities, and control mechanisms continue to function as before. The day-to-day reality of running infrastructure does not need to change.

What does change is the organization’s strategic posture.

In essence, Nutanix is about restoring the ability to choose. In public-sector (and enterprise) environments, that ability is often more valuable than any individual feature or performance metric.

The cost of change versus the cost of waiting

A persistent misconception in infrastructure strategy is the assumption that platform change is, by definition, prohibitively expensive. This belief is understandable. Large-scale IT transformations are often associated with complex migration projects, organizational disruption, and unpredictable outcomes. These associations create a strong incentive to delay any discussion of change for as long as possible.

Yet this intuition is misleading. In practice, the cost of change does not remain constant over time. It increases the longer the architectural lock-in is allowed to deepen.

Platform lock-in is rarely an intentional choice; it accumulates gradually. Additional services are adopted for convenience, tooling becomes more tightly integrated, and operational processes begin to assume the presence of a specific platform. Over time, what was once a flexible foundation hardens into an implicit dependency. At that point, changing direction no longer means replacing a component; it means changing an entire operating model.

Organizations that remain primarily VM-centric and act early are in a very different position. When virtual machines remain the dominant abstraction and higher-level platform services have not yet become deeply embedded, transitions can be managed incrementally. Workloads can be evaluated in stages. Skills can be developed alongside existing operations. Governance and procurement processes can adapt without being forced into emergency decisions.

In these cases, the cost of change is not trivial, but it is proportionate. It reflects the effort required to introduce an alternative (modular) platform, not the effort required to escape a tightly coupled ecosystem.

VMware to Nutanix Window

By contrast, organizations that postpone evaluation until platform constraints become explicit often find themselves facing a very different reality. When licensing changes, product consolidation, or strategic shifts expose the depth of dependency, the room for change has already narrowed. Timelines become compressed, options shrink, and decisions that should have been strategic become reactive.

The cost explosion in these situations is rarely caused by the complexity of the alternative platform. It is caused by the accumulated weight of the existing one. Deep integration, bespoke operational tooling, and platform-specific governance models all add friction to any attempt at change. What might have been a manageable transition years earlier becomes a high-risk transformation project.

This leads to a paradox that many institutions only recognize in hindsight. The best time to evaluate change is precisely when there is no immediate pressure to do so. Early evaluation is a way to preserve choice. It allows organizations to understand their true dependencies, test assumptions, and (perhaps) maintain negotiation leverage.

Waiting, by contrast, does not preserve stability. It often preserves only the illusion of stability, while the cost of future change continues to rise in the background.

For public institutions in particular, this distinction is critical. Their mandate demands foresight, not just reaction. Evaluating platform alternatives before change becomes unavoidable is itself an exercise of that responsibility.

A window that will not stay open forever

Nutanix should not be framed as a rejection of VMware, nor as a corrective to past decisions. It should be understood as an opportunity for VM-centric public institutions to reassess their strategic position while they still have the flexibility to do so.

Organizations that did not fully adopt VMware’s SDDC approach are in a particularly strong position. Their operational models are portable, their technical lock-in is still manageable, and their total cost of change remains proportionate.

For them, the question is not whether they must change today, but whether they want to preserve the ability to decide tomorrow.

And in the public sector, preserving that ability is a governance responsibility.

Nutanix Is Quietly Redrawing the Boundaries of What an Infrastructure Platform Can Be

Real change happens when a platform evolves in ways that remove old constraints, open new economic paths, and give IT teams strategic room to maneuver. Nutanix has introduced enhancements that, taken individually, appear to be technical refinements, but observed together, they represent something more profound: the transition of the Nutanix Cloud Platform (NCP) into a fabric of compute, storage, and mobility that behaves as one system, no matter where it runs.

This is the dismantling of long-standing architectural trade-offs, and the business impact is far greater than the technical headlines suggest.

In this article, I want to explore four developments that signal this shift:

  • Elastic VM Storage across Nutanix clusters
  • Disaggregated compute and storage scaling
  • General availability of NC2 on Google Cloud
  • The strategic partnership between Nutanix and Pure Storage

Individually, these solve real operational challenges. Combined, they create an infrastructure model that moves away from fixed constructs and toward an adaptable, cost-efficient cloud operating fabric.

Elastic VM Storage – The End of Cluster-Bound Thinking

Nutanix introduced Elastic VM Storage, which is the ability for one AHV cluster to consume storage from another Nutanix HCI cluster within the same Prism Central domain. It breaks one of the oldest implicit assumptions in on-premises virtualization: that compute and storage must live together in tightly coupled units.

By allowing VMs to be deployed on compute in one cluster while consuming storage from another, Nutanix gives IT teams a new level of elasticity and resource distribution.

It introduces an operational freedom that enterprises have never truly had:

  1. Capacity can be added where it is cheapest. If storage economics favor one site and compute expansion is easier or cheaper in another, Nutanix allows you to make decisions based on cost, not on architectural constraints.
  2. It reduces stranded resources. Every traditional environment suffers from imbalanced clusters. Some run out of storage, others out of CPU, and upgrading often means over-investing on both sides. Elastic VM Storage dissolves those silos.
  3. It prepares organizations for multi-cluster private cloud architectures. Enterprises increasingly distribute workloads across data centers, edge locations, and cloud-adjacent sites. Being able to pool resources across clusters is foundational for this future.

Nutanix is erasing the historical boundary of the cluster as a storage island.
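
To make this concrete, here is a minimal, purely illustrative Python sketch of the placement freedom described above. It does not call any Nutanix API; the cluster names, capacity figures, and cost rates are hypothetical. The point is simply that the compute decision and the storage decision become two independent choices instead of one forced compromise:

```python
# Illustrative only: hypothetical inventory data, not a Nutanix API.
# With Elastic VM Storage, the compute placement and the storage placement
# can be decided per cluster instead of being forced onto a single cluster
# that has to satisfy both dimensions at once.

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_vcpus: int              # available compute headroom
    free_storage_tib: float      # available storage capacity
    storage_cost_per_tib: float  # e.g. an internal chargeback rate

clusters = [
    Cluster("dc1-compute-heavy", free_vcpus=512, free_storage_tib=20,  storage_cost_per_tib=95.0),
    Cluster("dc2-storage-heavy", free_vcpus=64,  free_storage_tib=400, storage_cost_per_tib=55.0),
]

def place_vm(vcpus_needed: int, storage_tib_needed: float) -> tuple[str, str]:
    """Pick compute where headroom is largest and storage where it is cheapest."""
    compute = max(
        (c for c in clusters if c.free_vcpus >= vcpus_needed),
        key=lambda c: c.free_vcpus,
    )
    storage = min(
        (c for c in clusters if c.free_storage_tib >= storage_tib_needed),
        key=lambda c: c.storage_cost_per_tib,
    )
    return compute.name, storage.name

print(place_vm(vcpus_needed=16, storage_tib_needed=10))
# ('dc1-compute-heavy', 'dc2-storage-heavy') – compute and storage come from
# different clusters within the same Prism Central domain.
```

In a cluster-bound model, both choices would collapse into one, and the weaker dimension of whichever cluster was picked would dictate the outcome.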

Disaggregated Compute and Storage Scaling

For years, Nutanix’s HCI architecture was built on the elegant simplicity of shared-nothing clusters, where compute and storage scale together. Many customers still want this. In fact, for greenfield deployments, it probably is the cleanest architecture. But enterprises also operate in a world full of legacy arrays, refresh cycles that rarely align, strict licensing budgets, and specialized workload patterns.

With support for disaggregated compute and storage scaling, Nutanix allows:

  • AHV compute-only clusters with external storage (currently supported are Dell PowerFlex and Pure Storage – more to follow)
  • Mixed configurations combining HCI nodes and compute-only nodes
  • Day-0 simplicity for disaggregated deployments

This is a statement from Nutanix, whose DNA was always HCI: The Nutanix Cloud Platform can operate across heterogeneous infrastructure models without making the environment harder to manage.

  1. Customers can modernize at their own pace. If storage arrays still have years of depreciation left, Nutanix allows you to modernize compute now and storage later instead of forcing a full rip-and-replace.
  2. It eliminates unnecessary VMware licensing. Many organizations want to exit expensive hypervisor stacks while continuing to utilize their storage investments. AHV compute-only clusters make this transition significantly cheaper.
  3. It supports high-density compute for new workloads. AI training, GPU farms, and data pipelines often require disproportionate compute relative to storage. Disaggregation aligns the platform with the economics of modern workloads.

This is the kind of flexibility enterprises have been asking for in recent years, and Nutanix has now delivered it without compromising simplicity.
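
To put rough numbers on the third point above, here is a back-of-the-envelope sketch in Python. All node profiles and figures are invented for illustration; the arithmetic only shows why coupled scaling strands capacity that disaggregated scaling never has to buy:

```python
# Back-of-the-envelope illustration (all numbers invented): a compute-heavy
# workload needs far more vCPUs than storage, so scaling compute and storage
# together strands storage capacity that disaggregated scaling avoids buying.

NEEDED_VCPUS = 2000
NEEDED_TIB = 50

HCI_NODE = {"vcpus": 128, "tib": 30}     # hypothetical node: compute + storage scale together
COMPUTE_NODE = {"vcpus": 128, "tib": 0}  # hypothetical compute-only node, external storage

# Coupled model: buy enough HCI nodes to satisfy the compute requirement.
hci_nodes = -(-NEEDED_VCPUS // HCI_NODE["vcpus"])        # ceiling division -> 16 nodes
stranded_tib = hci_nodes * HCI_NODE["tib"] - NEEDED_TIB  # 16 * 30 - 50 = 430 TiB unused

# Disaggregated model: compute-only nodes plus exactly the storage required
# on an external array.
compute_nodes = -(-NEEDED_VCPUS // COMPUTE_NODE["vcpus"])  # also 16 nodes
external_tib = NEEDED_TIB                                  # 50 TiB, nothing stranded

print(f"Coupled HCI:   {hci_nodes} nodes, {stranded_tib} TiB of storage stranded")
print(f"Disaggregated: {compute_nodes} compute-only nodes + {external_tib} TiB external storage")
```

The same reasoning applies in reverse for storage-heavy workloads, where coupled scaling would strand compute instead.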

Nutanix and Pure Storage

One of the most significant shifts in Nutanix’s evolution is its move beyond traditional HCI boundaries. This began when Nutanix introduced support for Dell PowerFlex as the first officially validated external storage integration, a clear signal to the market that the Nutanix platform was opening itself to disaggregated architectures. With Pure Storage FlashArray now becoming the second external storage platform to be fully supported through NCI for External Storage, that early signal has turned into a strategy and an ecosystem.

Nutanix NCI with Pure Storage

Nutanix now enables customers to run AHV compute clusters using enterprise-grade storage arrays while retaining the operational simplicity of Prism, AHV, and NCM. Pure Storage’s integration builds on the foundation established with PowerFlex, but expands the addressable market significantly by bringing a leading flash platform into the Nutanix operating model.

Why is this strategically important?

  • It confirms that Nutanix is committed to disaggregated architectures, not just compatible with them. What began with Dell PowerFlex as a single integration has matured into a structured approach. Nutanix will support multiple external storage ecosystems while providing a consistent compute and management experience.
  • It gives customers real choice in storage without fragmenting operations. With Pure Storage joining PowerFlex, Nutanix now supports two enterprise storage platforms that are widely deployed in existing environments. Customers can keep their existing tier-1 arrays and still modernize compute, hypervisor, and operations around AHV and Prism.
  • It creates an on-ramp for VMware exits with minimal disruption. Many VMware customers own Pure FlashArray deployments or run PowerFlex at scale. With these integrations, they can adopt Nutanix AHV without replatforming storage. The migration becomes a compute and virtualization change and not a full infrastructure overhaul.
  • It positions Nutanix as the control plane above heterogeneous infrastructure. The combination of NCI with PowerFlex and now Pure Storage shows that Nutanix is building an operational layer that unifies disparate architectures.
  • It aligns modernization with financial reality. Storage refreshes and compute refreshes rarely align. Supporting multiple external arrays allows Nutanix customers to modernize compute operations first, defer storage investment, and transition into HCI only when it makes sense.

Nutanix has moved from a tightly defined HCI architecture to an extensible compute platform that can embrace best-in-class storage from multiple vendors.

Nutanix Cloud Clusters on Google Cloud – A Third Strategic Hyperscaler Joins the Story

The general availability of NC2 on Google Cloud completes a strategic triangle. With AWS, Azure and now Google Cloud all supporting Nutanix Cloud Clusters (NC2), Nutanix becomes one of the very few platforms capable of delivering a consistent private cloud operating model across all three major hyperscalers. It fundamentally changes how enterprises can think about cloud architecture, mobility, and strategic independence.

Running NC2 on Google Cloud creates a new kind of optionality. Workloads that previously needed to be refactored or painfully migrated can now move into GCP without rewriting, without architectural compromises, and without inheriting a completely different operational paradigm. For many organizations, especially those leaning into Google’s strengths in analytics, AI, and data services, this becomes a powerful pattern. Keep the operational DNA of your private cloud, but situate workloads closer to the native cloud services that accelerate innovation.

NC2 on Google Cloud

When an enterprise can run the same platform – the same hypervisor, the same automation, the same governance model – across multiple hyperscalers, the risk of cloud lock-in can be reduced. Workload mobility and cloud-exit strategies become a reality.

NC2 on Google Cloud is a sign of how Nutanix envisions the future of hybrid multi-cloud. Not as a patchwork of different platforms stitched together, but a unified operating fabric that runs consistently across every environment. With Google now joining the story, that fabric becomes broader, more flexible, and significantly more strategic.
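
As a thought experiment, the consistency argument can be sketched in a few lines of Python. Nothing below is a real Nutanix or hyperscaler API; the workload spec format and target names are hypothetical. The idea is that the workload definition stays identical while only the NC2 target changes:

```python
# Illustrative sketch only: hypothetical spec format and target names,
# not Nutanix or hyperscaler APIs. The workload definition is reused as-is
# regardless of which hyperscaler hosts the NC2 environment.

workload_spec = {
    "name": "erp-app-01",
    "vcpus": 8,
    "memory_gib": 64,
    "disks_gib": [100, 500],
    "network": "prod-segment",
    "protection_policy": "tier1-async-1h",
}

# Same AHV/Prism operating model, different underlying hyperscaler.
nc2_targets = ["nc2-aws-eu-central-1", "nc2-azure-westeurope", "nc2-gcp-europe-west3"]

def deploy(spec: dict, target: str) -> str:
    # In practice this would hand the unchanged spec to the same automation
    # tooling for every target; here it only demonstrates the reuse.
    return f"Deploying {spec['name']} ({spec['vcpus']} vCPU / {spec['memory_gib']} GiB) to {target}"

for target in nc2_targets:
    print(deploy(workload_spec, target))
```

The governance model, automation, and operational skills stay constant; only the placement decision changes.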

Conclusion

Nutanix is removing the trade-offs that enterprises once accepted as inevitable.

Most IT leaders aren’t searching for (new) features. They are searching for ways to reduce risk, control cost, simplify operations, and maintain autonomy while the world around them becomes more complex. Nutanix’s recent enhancements are structural. They chip away at the constraints that made traditional infrastructure inflexible and expensive.

The platform is becoming more open, more flexible, more distributed, and more sovereign by design.