Beyond the Price Tag – Why Organizations Choose Nutanix

In many customer conversations today, the discussion about Nutanix starts in a very pragmatic place: price.

Before we get the chance to talk about architecture, automation, or hybrid cloud strategies, most organizations first want to answer a simpler question: Can we even afford this option? Only once that hurdle is cleared does the real conversation begin. That is the moment when customers start asking a different question: Is it worth spending our time on this platform?

And that shift in perspective is important, because the current market situation is very different from just a few years ago.

For more than a decade, the virtualization market followed a relatively stable pattern. Many organizations standardized on a single hypervisor platform and built their operational models, processes, and skill sets around it. The question was rarely which hypervisor to choose, but rather which edition or bundle to buy. The platform decision itself was largely settled.

That stability is gone.

Since the licensing and pricing changes in the VMware ecosystem in 2024, many organizations have been forced to rethink assumptions that had been in place for years. Renewal discussions suddenly became strategic decisions, and budget forecasts were no longer predictable. In some cases, the cost increases were large enough to trigger board-level attention – and sometimes even political attention.

But price is only one part of the story.

Many customers also question the long-term direction of the platform on which they built their data centers. They are asking whether the vendor’s strategic priorities still align with their own. And as they look at industry consolidation, reduced product portfolios, and new licensing models, they wonder what all of this means for their own autonomy.

As a result, the conversation has shifted from optimization to re-evaluation.

Instead of fine-tuning an existing environment, many organizations are now exploring a wide spectrum of alternatives: Hyper-V, HPE VM Essentials, Proxmox, Scale Computing, open-source stacks, niche hypervisors, and even container-first approaches. The list is long, and in many cases, the evaluation is driven less by feature comparisons and more by strategic considerations.

What is interesting in these discussions is the level of pragmatism.

Most customers are very clear about one thing: they know that VMware still offers one of the most mature and feature-rich stacks on the market, but they also admit that they do not actually use all of those features. In some environments, large parts of the advanced functionality have been sitting idle for years.

So the goal is no longer to replicate the past environment in every technical detail.

Customers are willing to accept trade-offs. They do not need the most sophisticated dashboards, nor do they need every integration or advanced automation capability. If they can move 80 or 90 percent of their workloads to a new platform, that is already a success. The remaining cases can be handled separately.

This is where a new mindset becomes visible: fail fast, fail forward.

The objective is not to design the perfect architecture on paper. It is to make progress, to reduce dependency, to regain control over costs and strategic direction, and to move to a platform that is predictable, supportable, and aligned with the organization’s own priorities. Even if that means stalling innovation for a short time.

In that context, price becomes the first filter, not the final decision criterion.

If a platform is clearly unaffordable, the conversation ends there. But if the numbers are within reach, customers start to look deeper. They begin to evaluate operational simplicity, architectural consistency, support quality, and long-term flexibility.

That is usually the point where the Nutanix conversation truly starts.

The Perception Problem

For years, a certain sentence has circulated in the market: “Nutanix is expensive”. It became one of those beliefs that many people repeat without necessarily remembering where it originally came from.

In some organizations, this perception is based on very old benchmarks. In others, it comes from comparisons where different functionality levels were evaluated against each other. And in some cases, it is simply a narrative that persisted over time.

Recently, I have revisited this perception through real customer scenarios. Not theoretical models, but practical environments with realistic configurations, conservative assumptions, and sometimes even with standard (pre-approved) discount levels. What I found was not a universal truth, but a context-dependent story.

In several scenarios, Nutanix was not only competitive but significantly cheaper.

Disclaimer: Before we look at the numbers, a short disclaimer is important. The scenarios shown here are based on realistic configurations, standard architectures, and pre-approved discount levels. They are meant to illustrate typical outcomes, not to serve as official quotes or universally applicable price promises. Actual pricing will always depend on the specific environment, commercial terms, hardware choices, and contractual conditions of each individual customer.

Scenario 1: 500 VDI Users

Assume a VDI environment with 500 users. The infrastructure is built on 2×32-core nodes and designed with an n+2 resilience model. This is a typical production setup, where spare capacity is included so that the environment can tolerate failures without affecting user sessions.

In this configuration, you end up with around 1’152 physical cores that need to be licensed at the platform level. For the baseline comparison, I used this number together with a price of $140 per core. This reflects a very common way the market still thinks about platform costs – total cores multiplied by a unit price. In this baseline, no disaster recovery site is included yet.

With Nutanix, I modeled the environment using the NCI-VDI edition, which is purpose-built for virtual desktop use cases with platforms like Citrix or Omnissa (or Parallels, Dizzion etc.). In this model, I am not licensing 1’152 cores. Instead, I am licensing 500 concurrent users (CCU).

The difference in licensing logic alone already changes the economics of the environment, but there is another aspect that often surprises customers.

There is no additional licensing cost for a disaster recovery site. You can add hosts, refresh hardware, or build a secondary VDI site with the same number of cores, and from a Nutanix licensing perspective, the price remains exactly the same. The licensing is tied to the number of concurrent users, not to the amount of infrastructure standing behind them.

To keep the scenario fully realistic, I calculated three Nutanix options using only pre-approved discounts, meaning price levels that can typically be offered without extraordinary approvals.

  • The first option combined NCI Pro with NCM Starter – Representing a balanced configuration for standard VDI environments.
  • The second option used NCI Ultimate with NCM Starter – For scenarios where additional capabilities such as microsegmentation are required.
  • The third option was the full stack – Combining NCI Ultimate with NCM Ultimate, providing the complete feature set across both infrastructure and management layers.

All three options came out significantly below the core-based baseline, even the highest edition. And then there is the red bar in the comparison chart.

That red bar represents the same platform model as the baseline, but with the price per core increasing from $140 to $200, which is not an unrealistic assumption for a future renewal. The architecture stays the same, the number of cores stays the same, the resilience model stays the same, but only the unit price changes. Staying with the current platform vendor would result in a massive increase in total cost of ownership, without adding a single new capability to the environment.
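The baseline and the "red bar" can be sketched as a quick back-of-the-envelope calculation. The core count and per-core prices come from the scenario above; the node count (18 nodes under n+2) is inferred from the stated 1'152 cores, and no Nutanix list prices are assumed:

```python
# Scenario 1 sketch: core-based licensing today vs. at renewal.
# 18 nodes (16 active + 2 spare, n+2) with 2x32-core CPUs each.
CORES = 18 * 64               # = 1'152 cores to license at platform level
BASELINE_PER_CORE = 140       # today's assumed per-core price
RENEWAL_PER_CORE = 200        # the "red bar" renewal assumption
CCU = 500                     # concurrent users licensed under NCI-VDI

baseline = CORES * BASELINE_PER_CORE
renewal = CORES * RENEWAL_PER_CORE
increase = renewal / baseline - 1

print(f"Baseline ($140/core): ${baseline:,}")
print(f"Renewal  ($200/core): ${renewal:,} (+{increase:.0%})")
# Under the CCU model, adding a DR site with the same core count changes
# nothing: cost remains a function of the 500 CCUs, not of the cores.
```

The point of the sketch is the shape of the curve, not the absolute numbers: under per-core licensing, a unit-price change scales the entire bill, while under CCU licensing, the infrastructure behind the users is economically invisible.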

[Figure: Nutanix price comparison – NCI-VDI scenario]

This scenario is not meant to claim that Nutanix is always cheaper. That would be just another oversimplified narrative. But it does show that Nutanix can be more predictable, more scalable, and economically superior, especially in VDI environments where user-based licensing aligns better with how the platform is actually consumed.

Scenario 2: Microsegmented Data Center

In another environment, the discussion was not about VDI or edge sites, but about security.

The customer had a clear, non-negotiable requirement. They wanted to limit lateral movement inside the network and enforce strict communication policies between workloads. This is becoming increasingly common, especially in regulated industries and public sector environments where zero-trust principles are becoming operational requirements.

In the past, microsegmentation was often tied to premium software bundles. Organizations that needed this capability had little choice but to move into higher-tier licensing models, even if they did not require many of the additional features included in those bundles. The security requirement effectively forced them into a more expensive edition, regardless of their actual needs.

In this scenario, the customer was already using microsegmentation and wanted to retain that capability in the target architecture. The comparison was therefore not between a basic and a premium edition, but between two functionally equivalent setups. Both sides had to include network security features.

To make the comparison more realistic and representative of different customer sizes, three Nutanix options were modeled. All three were based on the NCI Ultimate edition, which includes micro-segmentation capabilities, but they reflected different customer profiles and corresponding discount levels.

  • The first option represented a large enterprise environment. In this case, the customer had a high core count and a larger overall deal size, which typically qualifies for higher discount tiers. This option assumed a larger-scale deployment and the kind of commercial conditions that are common in enterprise agreements. It illustrated how the platform behaves economically when deployed at a significant scale.
  • The second option represented a mid-sized environment. Here, the core count and overall deal size were more moderate, leading to medium discount levels. This scenario is often closer to what many regional enterprises, healthcare providers, or mid-sized public sector organizations experience. It provided a balanced view between large enterprise conditions and smaller deployments.
  • The third option reflected a smaller environment, with a lower core count and standard discount levels. This was designed to show what the platform looks like in more typical, smaller-scale deployments, where customers operate under normal commercial conditions without large enterprise agreements.

Across all three options, the architectural assumptions remained consistent. The same security requirements applied, the same functionality was included, and the comparison remained technically equivalent. The only real differences were the scale of the environment and the corresponding commercial terms.

[Figure: Nutanix price comparison – NCI Ultimate with microsegmentation]

In each of the three scenarios, the Nutanix configuration remained competitive, and in several cases came out lower in total software cost.

Scenario 3: Distributed Edge Environment

Instead of running a few large clusters in central data centers, some organizations suddenly find themselves operating dozens or even hundreds of small sites. Each location may only host a limited number of virtual machines (VMs), but the number of sites creates a very different licensing footprint.

In this scenario, the customer planned to run around 3’000 virtual machines distributed across roughly 250 edge locations. Each site consisted of only a small number of hosts, designed for local workloads and basic resilience – assume 3 hosts with 32 cores each per site, which adds up to 24’000 cores in total.

In traditional per-core licensing models, these kinds of distributed environments can become expensive very quickly. Even lightly utilized sites still require a certain number of cores to maintain resilience and availability. Multiply that by hundreds of locations, and the software cost grows faster than the actual workload.
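The footprint difference is easy to quantify. The following sketch uses the site and core counts from the scenario; the $140/core figure is the illustrative baseline used elsewhere in this article, not a quote:

```python
# Scenario 3 sketch: licensing footprint of a distributed edge estate,
# per-core model vs. the per-VM model used by NCI-Edge.
SITES = 250
HOSTS_PER_SITE = 3
CORES_PER_HOST = 32
TOTAL_VMS = 3000              # ~12 VMs per site on average

total_cores = SITES * HOSTS_PER_SITE * CORES_PER_HOST
core_model_cost = total_cores * 140   # every core licensed, even idle ones

print(f"Cores to license (per-core model): {total_cores:,} -> ${core_model_cost:,}")
print(f"VMs to license (NCI-Edge model):   {TOTAL_VMS:,}")
```

The asymmetry is the point: the per-core model charges for resilience headroom at every site, while the per-VM model charges only for the workloads actually running.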

Nutanix Cloud Infrastructure – Edge (NCI-Edge) provides a distributed infrastructure platform for small edge deployments. NCI-Edge provides the same capabilities as NCI, combining compute, storage, and networking resources from a cluster of servers into a single logical pool with integrated resiliency, security, performance, and simplified administration. NCI-Edge is limited to a maximum of 25 VMs in a cluster, with each VM being limited to a maximum of 96GB of memory. With NCI-Edge, organizations can efficiently extend the Nutanix platform to remote office/branch office (ROBO) and other edge use cases.

When we modeled this scenario with a Nutanix-based architecture, using conservative assumptions and standard pricing, the outcome was different. The total software cost across all 250 sites was lower than the comparable alternative.

[Figure: Nutanix price comparison – NCI Edge scenario]

Edge licensing is all about predictability. The licensing model aligned more closely with the operational reality of the environment. Instead of being penalized for running many small sites, the customer could scale their footprint without unexpected increases in costs. The economics made sense for a distributed architecture.

For organizations with large retail networks, industrial edge scenarios, transportation systems, or geographically spread infrastructures, this predictability can be just as important as the absolute price. It allows them to plan growth, roll out new sites, and standardize operations without constantly renegotiating their licensing model.

Scenario 4: From Amazon EVS to Nutanix NC2

Many organizations that moved, or are planning to move, to VMware environments in the public cloud have a very practical reason. They want to keep their existing operational model, their tools, and their skill sets, while shifting the physical infrastructure into a cloud provider’s (Azure, GCP, AWS) data center. The promise is always continuity without disruption.

At first glance, this approach makes sense. You avoid large migration projects, keep your processes intact, and simply relocate the environment. But the economics of these environments have started to change.

I am currently working with an organization that operates a full-stack private cloud at roughly $150 per core. On paper, that stack includes a wide range of capabilities. In reality, however, they only use a small portion of it: the core virtualization layer and basic monitoring and logging. No vSAN, no NSX. Just vSphere and Aria Operations.

Today, they run around 1’920 physical cores on-premises. As part of their cloud strategy, they are considering migrating to Amazon’s Elastic VMware Service (EVS) to exit their own data centers and align with a cloud-first approach. Because the EVS bare-metal instances offer higher density, they expect to consolidate their environment to roughly 1’000 cores. Fewer cores, better utilization, same workloads.

Because Amazon EVS is a self-managed service, you are responsible for the lifecycle management and maintenance of the VMware software used in the Amazon EVS environment, such as ESX, vSphere, vSAN, NSX, and SDDC Manager. 

Note: Amazon EVS does not support VMware Cloud Foundation 9 at this time. Currently, the only supported VCF version is VCF 5.2.2 on i4i.metal instances.

That sounds like a straightforward cost-saving exercise, right? But the renewal dynamics tell a different story. Their Broadcom renewal is scheduled for summer 2027, and two scenarios are being discussed:

  • In the first scenario, a typical price increase of around 33 percent is assumed. That would move them from $150 to approximately $200 per core.
  • In the second scenario, the total contract value remains the same despite the reduced core count. In practical terms, that would mean $288 per core, which means an increase of about 92% compared to today.

In other words, even if they cut their footprint almost in half, their effective price per core could nearly double. This is where the discussion turned toward alternatives.
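The renewal math above can be reproduced directly from the figures in the scenario:

```python
# Scenario 4 renewal math: why halving the core count does not halve the bill.
cores_today = 1920
price_today = 150
contract_value = cores_today * price_today     # $288'000 per year today

cores_after = 1000    # expected footprint on denser EVS bare-metal hosts

# Scenario A: typical ~33% per-core increase at renewal
price_a = 200
cost_a = cores_after * price_a                 # $200'000

# Scenario B: total contract value stays flat despite fewer cores
price_b = contract_value / cores_after         # effective $288 per core
increase_b = price_b / price_today - 1         # exactly +92% per core

print(f"Scenario A: ${price_a}/core -> ${cost_a:,}")
print(f"Scenario B: ${price_b:.0f}/core (+{increase_b:.0%} per core)")
```

Scenario B is the trap worth watching for in renewal negotiations: a flat contract value sounds harmless until it is expressed as a per-core price on a consolidated footprint.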

We modeled the same environment using the Nutanix Cloud Platform (NCP) running as NC2 on AWS. It is important to clarify one common misconception here: NC2 is not a separate product with a different architecture. It is the same Nutanix software stack, NCI combined with NCM, deployed on baremetal instances in the public cloud. Operationally, it behaves exactly like an on-premises Nutanix environment.

[Figure: NC2 on AWS]

To reflect different functional needs, I modeled three options:

  • The first option was NCI Pro combined with NCM Starter. This configuration mirrors the customer’s current feature usage, avoiding unnecessary capabilities or “shelfware”. It represents a like-for-like replacement of the existing functionality.
  • The second option used NCI Ultimate with NCM Starter. This added more advanced storage and data services, along with microsegmentation capabilities, giving the customer a richer feature set than they have today.
  • The third option was the full Nutanix Cloud Platform Ultimate stack, including the complete set of infrastructure, automation, and advanced platform services.

Even with these different configurations, the results were consistent. All three Nutanix options came in significantly below the expected VMware renewal costs.

Compared to a VMware renewal at $200 per core, the estimated savings looked roughly as follows:

  • NCI Pro + NCM Starter: about 33 percent lower
  • NCI Ultimate + NCM Starter: about 18 percent lower
  • NCP Ultimate: about 24 percent lower (higher discount for the full-stack approach)

If the worst-case scenario of $288 per core were to materialize, the savings would be even higher, ranging from approximately 43 to 54 percent per year!

[Figure: Nutanix price comparison – Amazon EVS to NC2]

As in the other scenarios, the interesting part was not just the price difference. It was the combination of cost predictability and architectural flexibility. With NC2, the customer could run the same platform on-premises and in the cloud, move workloads between locations, and avoid being tied to a single proprietary cloud virtualization stack.

To support the transition from VMware to Nutanix on NC2, migrations are typically handled with Nutanix Move. This tool allows customers to replicate and migrate virtual machines from existing VMware environments into Nutanix clusters with minimal disruption, reducing the complexity of the platform shift.

In this scenario, the outcome once again challenged the old perception. When modeled with realistic assumptions and current pricing dynamics, Nutanix was very (cost-)competitive. It offered both a lower platform cost and a more flexible long-term architecture.

Scenario 5: Updated Benchmarks, Different Results

Perhaps one of the most revealing examples was not a technical scenario at all, but a simple conversation.

In one engagement, a partner mentioned that their internal Nutanix benchmark was more than two years old. Those numbers had shaped their perception of the platform and influenced how they positioned Nutanix in front of customers. Over time, the benchmark had become an accepted reference point, even though no one had revisited the assumptions underlying it.

When we recalculated the scenario (VCF vs. NCI Pro with the Advanced Replication add-on) using current licensing models, realistic configurations, and today’s pricing structures, the outcome was very different from what they expected. The Nutanix solution turned out to be considerably cheaper than their old benchmark suggested.

The important information here was not the percentage difference or the exact numbers on the spreadsheet. It was the realization that the entire perception had been built on outdated data. The conclusion they had carried forward for years no longer reflected the reality of the current market.

This experience is not unique. Many organizations still rely on benchmarks, cost models, or architectural assumptions that were created several years ago. Since then, licensing structures have evolved, bundles have changed, and the economics of different platforms have shifted. But the original perception often remains untouched.

In conversations with customers and partners, I frequently hear a similar sentence: “Our Nutanix benchmark might be outdated”. That simple realization often marks the turning point in the discussion. Because once the numbers are recalculated with current data, the story tends to change and the outcome is no longer predetermined. 

Addressing the Renewal Myth

Another concern that often surfaces in conversations is the idea that Nutanix offers an attractive entry price, only to significantly increase costs at renewal time.

This narrative circulates in online forums, informal discussions, and peer-to-peer exchanges. In a market where many organizations have recently experienced unexpected price increases from other vendors, it is understandable that customers approach any new platform with a certain level of skepticism. Trust in licensing models has been shaken, and nobody wants to repeat the same experience a few years down the road.

But in practice, this perception does not reflect how most Nutanix engagements actually unfold. In many cases, Nutanix is able to provide multi-year price guarantees, giving customers clarity not only about the initial investment, but also about what they can expect over the next several years. Instead of treating pricing as a short-term negotiation, the conversation often shifts toward long-term planning and predictability.

This does not mean that prices will remain frozen forever. No software vendor can realistically promise that. Over time, platforms evolve, new features are introduced, innovation continues, and inflation affects the cost structure. It is normal for software pricing to adjust over a multi-year horizon.

The difference lies in transparency.

Rather than hiding future changes behind complex contracts or vague terms, Nutanix is often willing to put the long-term numbers on the table early in the process. Customers can see not only what they pay today, but also how the platform is expected to evolve financially over time. That creates a different kind of conversation – one based on planning and predictability instead of uncertainty.

For many organizations, especially those in regulated industries or the public sector, that predictability is more important than the absolute entry price. It allows them to align budgets, procurement cycles, and strategic roadmaps without the fear of sudden surprises at renewal time.

What Customers Actually Value

Once the initial price discussion is out of the way, the tone of the conversation usually changes. The focus shifts from raw numbers to what the platform actually delivers in day-to-day operations.

At this stage, customers are asking whether it fits their architecture, their processes, and their long-term strategy. And across many conversations, certain themes tend to appear again and again.

One of the most frequently mentioned aspects is the modularity of the platform. Customers appreciate that Nutanix does not force them into a single, monolithic bundle for every use case. A large data center, a VDI environment, and a small edge site may not require the same software edition. With Nutanix, these environments can be licensed differently based on their actual requirements. This flexibility allows customers to align their licensing model with their architecture, instead of reshaping their architecture to fit the licensing.

Another recurring theme is the architectural simplicity of hyperconverged infrastructure itself. Many customers value a distributed system that integrates compute and storage, builds resilience into the platform, and reduces external dependencies. There is no separate SAN to manage, no complex compatibility matrix between multiple storage and compute components. For teams that want to reduce operational overhead and complexity, this design principle often resonates more strongly than any individual feature.

Support quality is another topic that comes up regularly. Nutanix consistently achieves a Net Promoter Score (NPS) above 90, which is unusually high in the enterprise infrastructure space. Customers often describe the support experience as direct and focused, with engineers who stay engaged until the issue is resolved. For organizations that have struggled with multi-vendor support models in the past, this can be a significant improvement.

The ecosystem also plays an important role. Nutanix continues to work closely with major OEM partners such as Dell, Lenovo, HPE, and Cisco. For many customers, especially in the public sector, this is more than a technical detail. It means they can procure hardware through existing framework contracts, trusted suppliers, and established procurement channels, while still running a modern, consistent software platform.

In addition, the platform is gradually opening up to more flexible architectures. Nutanix has introduced support for external storage integrations, starting with platforms from Dell and Pure Storage, with further options expected over time. This gives customers more freedom in how they design their environments, especially if they want to reuse existing storage investments or follow a disaggregated approach for certain workloads.

Taken together, these themes paint a clear picture. Once the price question is answered, the decision is rarely about a single feature or a benchmark number. It becomes a broader evaluation of architecture, operational simplicity, support experience, and long-term flexibility.

And in many of those discussions, that combination of qualities is what makes the platform stand out.

Price Opens the Door. Value Closes the Deal.

If you look across all the scenarios and customer discussions, a consistent pattern begins to emerge.

Price is almost always the starting point. It determines whether a platform even makes it onto the shortlist. In today’s market, where many organizations are under pressure to control costs and justify every investment, that first filter has become more important than ever. If a solution is clearly out of budget, the conversation usually ends before it truly begins.

But we all know that price is rarely the final decision factor.

Once customers see that Nutanix is within their financial reach, or in some cases even cheaper than the alternatives, the focus shifts. The discussion shifts from license metrics and discount levels to the day-to-day realities of running the platform. This is the moment when the conversation moves from procurement to platform strategy.

Customers begin to consider how much time they spend on upgrades, how complex their current environment has become, how many vendors they have to coordinate during incidents, and how predictable their infrastructure roadmap really is. They start to evaluate not just what the platform costs today, but what it means for their operations over the next five or ten years.

And that is often where Nutanix stands out!

The platform may not always be the absolute cheapest option in every possible scenario. No serious technology decision should be based on a single number alone. But the blanket statement that Nutanix is inherently expensive does not hold up when you look at real environments with current data. 

10 Things You Probably Didn’t Know About Nutanix

Nutanix is often described with a single word: HCI. That description is not wrong, but it is incomplete.

Over the last decade, Nutanix has evolved from a hyperconverged infrastructure (HCI) pioneer into a mature enterprise cloud platform that now sits at the center of many VMware replacement strategies, sovereign cloud designs, and edge architectures. Yet much of this evolution remains poorly understood, partly because old perceptions persist longer than technical reality.

Here are ten things about Nutanix that people often don’t know or underestimate.

1. Nutanix’s DNA is HCI, but the architecture has evolved beyond it

Nutanix was built on hyperconverged infrastructure. That heritage is important, because it shaped the platform’s operational model, automation mindset, and lifecycle discipline.

Over the last years, Nutanix deliberately opened its architecture. Today, compute-only nodes are a possibility, enabled through partnerships with vendors like Dell (PowerStore support for Nutanix is expected to enter early access in spring 2026, with general availability coming in summer 2026) and Pure Storage (for now). This allows customers to decouple compute and storage where it makes architectural or economic sense, without abandoning the Nutanix control plane.

This is Nutanix acknowledging that real enterprise environments are heterogeneous, and that flexibility matters.

2. A Net Promoter Score above 90

Nutanix has reported an NPS score consistently above 90 for several years. In enterprise infrastructure, that number is almost unheard of.

NPS reflects how customers feel after deployment, during operations, upgrades, incidents, and daily use. In a market where infrastructure vendors are often tolerated rather than liked, this level of advocacy is remarkable and tells a story of its own.

It suggests that Nutanix’s real differentiation is not just technology, but operational experience. That tends to show up only once systems are running at scale.

3. Nutanix Kubernetes Platform runs almost everywhere

Nutanix Kubernetes Platform (NKP) is often misunderstood as “Kubernetes on Nutanix”. That is only partially true.

NKP can run on:

  • Bare metal
  • Nutanix AHV
  • VMware
  • Public cloud infrastructure

[Figure: Nutanix Cloud Native Platform]

NKP was designed to abstract infrastructure differences rather than enforce platform lock-in. For organizations that already operate mixed environments, or that want to transition gradually, this matters far more than ideological purity.

In practice, NKP becomes a control layer for Kubernetes. That is especially relevant in regulated or sovereign environments where infrastructure choices are often political as much as technical.

4. Nutanix has matured from “challenger” to enterprise-grade platform

It’s honest to acknowledge that Nutanix wasn’t always considered enterprise-ready. In its early years, the company was widely admired for innovation and simplicity, but many large organizations hesitated because the platform, like all young software, had feature gaps, stability concerns in some use cases, and a smaller track record with mission-critical workloads.

That landscape has changed significantly. Over the past several years, Nutanix has steadily strengthened every axis of its platform, from virtualization and distributed storage to Kubernetes, security, and operations at scale. The company’s most recent financial results show that this maturity isn’t theoretical: fiscal 2025 delivered 18% year-over-year revenue growth and strong recurring revenue expansion, and Nutanix added thousands of new customers, including over 50 Global 2000 accounts – arguably its strongest annual new-logo performance in years.

What this means in practice is that many enterprises that once saw Nutanix as a “challenger” now see it as a credible and proven alternative to VMware, and not just in smaller or departmental deployments, but across core data center and hybrid cloud estates.

The old maturity gap has largely disappeared. What remains is a difference of philosophy. Nutanix prioritizes operational simplicity, flexibility, and choice, without compromising the robustness that large organizations demand. And with increasing adoption among Global 2000 enterprises, that philosophy is proving not only viable but competitive at the highest levels of IT decision-making.

5. The “Nutanix is expensive” perception is outdated and often wrong

The idea that Nutanix is more expensive than competitors is one of the most persistent myths in the market. It was shaped by early licensing models and by superficial price comparisons that ignored operational and architectural differences.

Today, Nutanix offers multiple licensing models, including options that other vendors simply do not have.

For example, NCI-VDI for Citrix or Omnissa environments is licensed based on concurrent users (CCU) rather than physical CPU cores. That aligns cost directly with usage and not hardware density.

Even more interesting is NCI Edge, which is designed for distributed environments with smaller footprints (remote office/branch office, aka ROBO). It is licensed per virtual machine, with clear boundaries:

  • Maximum of 25 VMs per cluster
  • Maximum of 96 GB RAM per VM

Consider a realistic example. An organization runs 250 edge sites. Each site has a 3-node cluster with 32 cores per node and hosts 20 VMs:

  • A core-based model would require licensing 24’000 cores
  • With NCI Edge, the customer licenses 5’000 VMs

This fundamentally changes the cost structure of edge and remote deployments. In a traditional core-based licensing model, effective costs might range from $100 to $140 per core for edge nodes. With NCI Edge, the effective per-core cost can drop to $60-80 (illustrative figures). This is not a marginal optimization; it is a structural shift.
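The arithmetic behind this example can be sketched in a few lines. The per-core and per-VM prices below are illustrative assumptions for comparison purposes only, not Nutanix list prices:

```python
# Illustrative cost comparison for the 250-site edge example.
# All prices are hypothetical placeholders, not actual Nutanix pricing.
sites = 250
nodes_per_site = 3
cores_per_node = 32
vms_per_site = 20          # within the 25-VM NCI Edge limit per cluster

total_cores = sites * nodes_per_site * cores_per_node   # cores licensed in a core-based model
total_vms = sites * vms_per_site                        # VMs licensed with NCI Edge

price_per_core = 120       # hypothetical $/core (mid-range of the $100-140 band above)
price_per_vm = 350         # hypothetical $/VM for a per-VM edge license

core_based_cost = total_cores * price_per_core
vm_based_cost = total_vms * price_per_vm
effective_per_core = vm_based_cost / total_cores        # what per-VM licensing "costs" per physical core

print(total_cores, total_vms)            # 24000 cores vs. 5000 VMs
print(core_based_cost, vm_based_cost)    # 2880000 vs. 1750000
print(round(effective_per_core, 2))      # 72.92 -> inside the illustrative $60-80 band
```

The exact figures depend on negotiated pricing, but the structure of the result holds: per-VM licensing decouples cost from core density, which is exactly what low-VM-count, many-core edge clusters need.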

Note: NCM Edge provides the same capabilities as NCM for edge use cases and is likewise limited to a maximum of 25 VMs per cluster.

6. Almost 90% of Nutanix customers now use AHV

Nutanix has always been fundamentally about HCI and AOS (Acropolis Operating System). From the beginning, the value was never the hypervisor itself, but the distributed storage, data services, and operational model built on top of it. Over time, Nutanix came to a clear conclusion: The hypervisor should be a commodity, not the value anchor of the platform. Out of this thinking emerged the perception, and later the expression, that AHV is “free”.

Today, AHV has become the dominant deployment model in the Nutanix ecosystem, with an adoption rate of 88%. This matters for two important reasons. First, it disproves the assumption that customers need to be pushed or incentivized to move to AHV. Second, it demonstrates that AHV is trusted to run mission-critical workloads at scale, across enterprises and service providers.

7. Nutanix is 100% channel-led

Nutanix does not sell directly to customers (with a few exceptions, of course :)). It is a channel-led vendor by design, and that decision fundamentally shapes how the company operates in the market. Channel commitment at Nutanix is a structural principle.

Partners are not treated as a fulfillment layer or a transactional necessity. They are core to how Nutanix delivers value – from architecture design and implementation to day-two operations, managed services, and long-term customer success. As a result, Nutanix has built one of the strongest partner and service provider ecosystems in the industry, with clear incentives, predictable rules, and room for partners to build sustainable businesses.

This stands in sharp contrast to the current direction of some other infrastructure vendors, where channel models have become more restrictive, less transparent, and increasingly centered around direct control. In that environment, partners often struggle with margin pressure, reduced influence, and uncertainty about their long-term role.

Nutanix takes a different approach. By staying channel-led, it enables local expertise, regional sovereignty, and trusted delivery models, which are especially critical in public sector, regulated industries, and markets where locality and compliance matter as much as technology.

8. MST and Cloud-Native AOS show how far Nutanix has moved beyond classic HCI

Most people associate Nutanix AOS with hyperconverged infrastructure and VM-centric deployments. What is far less known is how deeply Nutanix has evolved its data platform to address multi-cloud and cloud-native architectures.

One example is MST (Multi-Cloud Snapshot Technology). MST enables application-consistent snapshots to be replicated across heterogeneous environments, including on-premises infrastructure and public clouds. Unlike traditional disaster-recovery approaches that assume identical infrastructure on both sides, MST is designed for asymmetric, real-world scenarios. This makes it possible to use the public cloud as a recovery or failover target without re-architecting workloads or maintaining a second, identical private environment. 

[Figure: MST diagram]

In parallel, Nutanix has introduced Cloud Native AOS, which brings enterprise-grade storage and data services directly into Kubernetes environments. Instead of tying storage to virtual machines or specific infrastructure stacks, Cloud Native AOS runs as a Kubernetes-native service and can operate across diverse platforms. This allows stateful applications to benefit from Nutanix data services, such as snapshots, replication, and resilience, without forcing teams back into VM-centric models.

Together, MST and Cloud Native AOS illustrate an important point. Nutanix is not simply extending HCI into new form factors. It is re-architecting core data services to work across clouds, infrastructures, and application models. These capabilities are often overlooked, but they are strong indicators of where the platform is heading: toward data mobility, resilience, and consistency across increasingly fragmented environments.

[Figure: EKS cluster]

9. Nutanix SaaS without forcing SaaS

Nutanix offers SaaS-based services such as Data Lens and Nutanix Central. These services are also available on-premises, including for air-gapped environments.

This dual-delivery model recognizes that not all customers can or should consume control planes as public SaaS. 

10. Nutanix has more than a decade of real-world experience replacing VMware

Nutanix has operated alongside VMware for more than ten years, in many cases within the same environments. As a result, replacing vSphere is not a new ambition or a reactive strategy for Nutanix. It is a long-standing, proven reality.

Equally important is the migration experience. Nutanix Move was built specifically to address one of the most critical challenges in any platform transition: getting workloads across safely, predictably, and at scale. Move supports migrations from vSphere, Hyper-V, AWS, and other environments, enabling phased and low-risk transitions rather than disruptive “big bang” projects. Beyond workload migration, Move can also translate NSX network and security policies into Nutanix Flow, addressing one of the most commonly cited blockers in VMware exit strategies.

Nutanix has spent more than a decade refining these aspects across thousands of customer environments, which is why many organizations today view it as a credible, de-risked alternative for the long term.

Conclusion

For organizations reassessing their infrastructure strategy, whether driven by VMware uncertainty, edge expansion, regulatory pressure, or cloud cost realities, Nutanix should be at the top of the list. It is a proven platform with a clear philosophy, a growing enterprise footprint, and more than a decade of hard-earned experience. If Nutanix is still on your shortlist as “HCI”, it may be time to look again, and this time at the full picture! 🙂

Cloud Repatriation and the Growth Paradox of Public Cloud IaaS

Over the past two years, a new narrative has taken hold in the cloud market. No, it is not always about sovereign cloud. 🙂 Headlines talk about cloud repatriation – nothing really new, but it is still out there. CIOs speak openly about pulling some workloads back on-premises. Analysts write about organizations “correcting” some earlier cloud decisions to optimize cloud spend. In parallel, hyperscalers themselves now acknowledge that not every workload belongs in the public cloud.

And yet, when you look at the data, you will find a paradox.

IDC and Gartner both project strong, sustained growth in public cloud IaaS spending over the next five years. Not marginal growth or a sign of stagnation, but a market that continues to expand at scale, absorbing more workloads, more budgets, and more strategic relevance every year.

At first glance, these two trends appear contradictory. If organizations are repatriating workloads, why does public cloud IaaS continue to grow so aggressively? The answer lies in understanding what is actually being repatriated, what continues to move to the cloud, and how infrastructure constraints are reshaping decision-making in ways that are often misunderstood.

Cloud Repatriation Is Real, but Narrower Than the Narrative Suggests

Cloud repatriation is not a myth. It is happening, but it is also frequently misinterpreted.

Most repatriation initiatives are highly selective. They focus on predictable, steady-state workloads that were lifted into the public cloud under assumptions that no longer hold. Cost transparency has improved, egress fees are better understood, and operating models have matured. What once looked flexible and elastic is now seen as expensive and operationally inflexible for certain classes of workloads.

What is rarely discussed is that repatriation does not mean “leaving the cloud”, but I have to repeat it again: It means rebalancing. Organizations are not abandoning public cloud IaaS as a concept. They are refining their usage of it.

At the same time, some new workloads continue to flow into public cloud environments. Digital-native applications, analytics platforms, some AI pipelines, globally distributed services, and short-lived experimental environments still align extremely well with public cloud economics and operating models. These workloads were not part of the original repatriation debate, and they seem to be growing faster than traditional workloads are being pulled back.

This is how both statements can be true at the same time. Cloud repatriation exists, and public cloud IaaS continues to grow.

The Structural Drivers Behind Continued IaaS Growth

Public cloud IaaS growth is not driven by blind enthusiasm anymore. It is driven by structural forces that have little to do with fashion and everything to do with constraints.

One of the most underestimated factors is time. Building infrastructure takes time, procuring hardware takes time, and scaling data centers takes time. Many organizations today are not choosing public cloud because it is cheaper or “better”, but because it is available now.

This becomes even more apparent when looking at the hardware market right now.

Hardware Shortages and Rising Server Prices Change the Equation

The infrastructure layer beneath private clouds has suddenly become a bottleneck. Server lead times have increased, GPU availability is constrained, and prices for enterprise-grade hardware continue to rise, driven by supply chain pressures, higher component costs, and growing demand from AI workloads.

For organizations running large environments, this introduces a new type of risk. Capacity planning is now a logistical problem, no longer just a financial exercise. Even when budgets are approved, hardware may not arrive in time. That is the new reality.

In this context, public cloud data centers represent something extremely valuable: pre-existing capacity. Hyperscalers have already made the capital investments and they already operate at scale. From the customer perspective, infrastructure suddenly looks abundant again.

This is why many organizations currently consider shifting workloads to public cloud IaaS, even if they were previously skeptical. It has become a pragmatic response to scarcity.

The Flawed Assumption: “Just Use Public Cloud Instead of Buying Servers”

However, this line of thinking often glosses over a critical distinction.

Many of these organizations do not actually want “cloud-native” infrastructure, if we are being honest here. What they want is physical capacity: compute, storage, and networking with predictable performance characteristics. In other words, they want VMs and bare metal.

Buying servers allows organizations to retain architectural freedom. It allows them to choose their operating system or virtualization stack, their security model, their automation tooling, and their lifecycle strategy. Public cloud IaaS, by contrast, delivers abstraction, but at the cost of dependency.

When organizations consume IaaS services from hyperscalers, they implicitly accept constraints around instance types, networking semantics, storage behavior, and pricing models. Over time, this shapes application architectures and operational processes. The usage of such services quietly becomes a form of lock-in.

Bare Metal in the Public Cloud Is Not a Contradiction

Interestingly, the industry has started to converge on a hybrid answer to this dilemma: bare metal in the public cloud.

Hyperscalers themselves offer bare-metal services. This is an acknowledgment that not all customers want fully abstracted IaaS. Some want physical control without owning physical assets. It is as simple as that.

But bare metal alone is not enough. Without a consistent cloud platform on top, bare-metal in the public cloud becomes just another silo. You gain performance and isolation, but you lose portability and operational consistency.

Nutanix Cloud Clusters and the Reframing of IaaS

Nutanix Cloud Platform running on AWS, Azure, and Google Cloud through NC2 (Nutanix Cloud Clusters) introduces a different interpretation of public cloud IaaS.

Instead of consuming hyperscaler-native IaaS primitives, customers deploy a full private cloud stack on bare-metal instances in public cloud data centers. From an architectural perspective, this is a subtle but profound difference.

Customers still benefit from the hyperscaler’s global footprint and hardware availability and they still avoid long procurement cycles, but they do not surrender control of their cloud operating model. The same Nutanix stack runs on-premises and in public cloud, with the same APIs, the same tooling, and the same governance constructs.

Workload Mobility as the Missing Dimension

The most underappreciated benefit of this approach is workload mobility.

In a cloud-native bare-metal deployment tied directly to hyperscaler services, workloads tend to become anchored, migration becomes complex, and exit strategies are theoretical at best.

With NC2, workloads are portable by design. Virtual machines and applications can move between on-premises environments and public cloud (or a service provider cloud) bare-metal clusters without refactoring. In practical terms, this means organizations can use public cloud capacity tactically rather than strategically committing to it. Capacity shortages, temporary demand spikes, regional requirements, or regulatory constraints can be addressed without redefining the entire infrastructure strategy.

This is something traditional IaaS does not offer, and something pure bare-metal consumption does not solve on its own.

Reconciling the Two Trends

When viewed through this lens, the contradiction between cloud repatriation and public cloud IaaS growth disappears.

Public cloud is growing because it solves real problems: availability, scale, and speed. Repatriation is happening because not all problems require abstraction, and not all workloads benefit from cloud-native constraints.

The future is not a reversal of cloud adoption. It is a maturation of it.

Organizations are asking how to use public clouds without losing control. Platforms that allow them to consume cloud capacity while preserving architectural independence are not an alternative to IaaS growth; they are one of the reasons that growth can continue without triggering the next wave of regret-driven repatriation.

What complicates this picture further is that even where public cloud continues to grow, many of its original economic promises are now being questioned again.

The Broken Promise of Economies of Scale

One of the foundational assumptions behind public cloud adoption was economies of scale. The logic seemed sound. Hyperscalers operate at a scale no enterprise could ever match. Massive data centers, global procurement power, highly automated operations. All of this was expected to translate into continuously declining unit costs, or at least stable pricing over time.

As we know by now, that assumption has not materialized.

If economies of scale were truly flowing through to customers, we would not be witnessing repeated price increases across compute, storage, networking, and ancillary services. We would not see new pricing tiers, revised licensing constructs, or more aggressive monetization of previously “included” capabilities. The reality is that public cloud pricing has moved in one direction for many workloads, and that direction is up.

This does not mean hyperscalers are acting irrationally. It means the original narrative was incomplete. Yes, scale does reduce certain costs, but it also introduces new ones, and the same is true of new innovations and services. Energy prices, land, specialized hardware, regulatory compliance, security investments, and the operational complexity of running globally distributed platforms all scale accordingly. Add margin expectations from capital markets, and the result is not a race to the bottom, but disciplined price optimization.

For customers, however, this creates a growing disconnect between expectation and reality.

When Forecasts Miss Reality

More than half of organizations report that their public cloud spending diverges significantly from what they initially planned. In many cases, the difference is not marginal. Budgets are exceeded, cost models fail to reflect real usage patterns, and optimization efforts lag behind application growth.

What is often overlooked is the second-order effect of this divergence. Over a third of organizations report that cloud-related cost and complexity issues directly contribute to delayed projects. Migration timelines slip, modernization initiatives stall, and teams slow down not because technology is unavailable, but because financial and operational uncertainty creeps into every decision.

Commitments, Consumption, and a Structural Risk

Most large organizations do not consume public cloud on a purely on-demand basis. They negotiate commitments, reserved capacity, and spend-based discounts. These are strategic agreements designed to lower unit costs in exchange for predictable consumption.

These agreements assume one thing above all else: that workloads will move. They HAVE TO move.

When migrations slow down, a new risk emerges: organizations fail to reach their committed consumption levels because they cannot move workloads fast enough. Legacy architectures, migration complexity, skill shortages, and governance friction all play a role.

The consequence is subtle but severe. Committed spend still has to be paid, and future negotiations become weaker as a result. The organization enters the next contract cycle with a track record of underconsumption, reduced leverage, and less credibility in forecasting.

In effect, execution risk turns into commercial risk.

This dynamic is rarely discussed publicly, but it is increasingly common in private conversations with CIOs and cloud leaders. The challenge is no longer whether the public cloud can scale, but whether the organization can.

Speed of Migration as an Economic Variable

At this point, migration speed stops being a technical metric and becomes an economic one. The faster workloads can move, the faster negotiated consumption levels can be reached. The slower they move, the more value leaks out of cloud agreements.
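To make this concrete, here is a deliberately simplified toy model with entirely hypothetical numbers: a fixed annual commitment, an average per-workload monthly cloud cost, and a migration rate. Only the migration rate differs between the two scenarios:

```python
# Toy model: how migration speed drives realized consumption against a commitment.
# All figures are hypothetical and only illustrate the mechanism, not real pricing.
ANNUAL_COMMIT = 10_000_000            # committed cloud spend for the year ($)
MONTHLY_COST_PER_WORKLOAD = 2_000     # average cloud cost per migrated workload ($/month)
WORKLOADS_TOTAL = 500                 # workloads in scope for migration

def consumed_spend(migrated_per_month: int) -> int:
    """Spend actually consumed over 12 months at a given migration rate."""
    total, migrated = 0, 0
    for _ in range(12):
        # Each month, more workloads land in the cloud (capped at the total scope)
        # and every already-migrated workload keeps consuming spend.
        migrated = min(WORKLOADS_TOTAL, migrated + migrated_per_month)
        total += migrated * MONTHLY_COST_PER_WORKLOAD
    return total

slow = consumed_spend(20)             # 20 workloads/month
fast = consumed_spend(60)             # 60 workloads/month
shortfall_slow = ANNUAL_COMMIT - slow # committed but unconsumed spend

print(slow, fast)          # 3120000 vs. 8320000 (31% vs. 83% of the commitment)
print(shortfall_slow)      # 6880000 left stranded in the slow scenario
```

The specific numbers are irrelevant; the shape is the point. Because consumption compounds with every migrated workload, tripling the migration rate in this sketch recovers roughly $5.2M of otherwise-stranded commitment, which is why migration speed is an economic variable, not just a technical one.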

This is where many cloud-native migration approaches struggle. Refactoring takes time and re-architecting applications is expensive. Not every workload is a candidate for transformation under real-world constraints.

As a result, organizations are caught between two pressures. On one side, the need to consume public cloud capacity they have already paid for. On the other, the inability to move workloads quickly without introducing unacceptable risk.

NC2 as a Consumption Accelerator, Not a Shortcut

This is where Nutanix Cloud Platform with NC2 changes the conversation.

By allowing organizations to run the same private cloud stack on bare metal in AWS, Azure, and Google Cloud, NC2 removes one of the biggest bottlenecks in migration programs: The need to change how workloads are built and operated before they can move.

Workloads can be migrated as they are, operating models remain consistent, governance does not have to be reinvented, and teams do not need to learn a new infrastructure paradigm under time pressure. It’s all about efficiency and speed.

Faster migrations mean workloads start consuming public cloud capacity earlier and the negotiated consumption targets suddenly become achievable. Commitments turn into realized value rather than sunk cost, and the organization regains control over both its migration timeline and its commercial position.

Reframing the Role of Public Cloud

In this context, NC2 is not an alternative to public cloud economics, but a mechanism to actually realize them.

Public cloud providers assume customers can move fast. In reality, many customers cannot, not because they resist change, but because change takes time. Platforms that reduce friction between private and public environments do not undermine cloud strategies. They are here to stabilize them. And they definitely can!

The uncomfortable truth is that economies of scale alone do not guarantee better outcomes for customers, execution does. And execution, in large enterprises, depends less on ideal architectures and more on pragmatic paths that respect existing realities.

When those paths exist, public cloud growth and cloud repatriation stop being opposing forces. They become two sides of the same maturation process, one that rewards platforms designed not just for scale, but for transition.

Nutanix should not be viewed primarily as a replacement for VMware

Public sector organizations rarely change infrastructure platforms lightly. Stability, continuity, and operational predictability matter more than shiny, modern solutions. Virtual machines became the dominant abstraction because they allowed institutions to standardize operations, separate applications from hardware, and professionalize IT operations over the long term.

Over many years, VMware became synonymous with this VM-centric operating model, as it provided a coherent, mature, and widely adopted implementation of virtualized infrastructure. Choosing VMware was, for a long time, a rational and defensible decision.

Crucially, the platform was modular. Organizations could adopt it incrementally, integrate it with existing tools, and shape their own operating models on top of it. This modularity translated into operational freedom. Institutions retained the ability to decide how far they wanted to go, which components to use, and which parts of their environment should remain under their direct control. These characteristics explain why VMware became the default choice for so many public institutions. It aligned well with the values of stability, proportionality, and long-term accountability.

The strategic question public institutions face today is not whether that decision was wrong, but what they can learn from it. We need to ask ourselves whether the context around that decision has changed and whether continuing along the same platform path still preserves long-term control, optionality, and state capability.

From VM-centric to platform-path dependent

It is important to be precise in terminology. Most public sector IT environments are not VMware-centric by design. They are VM-centric. Virtual machines are the core operational unit, deeply embedded in processes, tooling, skills, and governance models. This distinction is very important. A VM-centric organization can, in principle, operate on different platforms without redefining its entire operating model. A VMware-centric organization, by contrast, has often moved further down a specific architectural path by integrating tightly with proprietary platform services, management layers, and bundled stacks that are difficult to disentangle later.

This is where the strategic divergence begins.

Over time, VMware’s platform has evolved from a modular virtualization layer into an increasingly integrated software-defined data center (SDDC) and VCF-oriented (VMware Cloud Foundation) stack. That evolution is not inherently negative. Integrated platforms can deliver efficiencies and simplified operations, but they also introduce path dependency. Decisions made today shape which options remain viable tomorrow.

The decisive factor, then, is not pricing. Prices change. For public institutions, this is a governance issue, not a technical one.

There is a significant difference between organizations that adopted VMware primarily as a hypervisor platform and those that fully embraced the SDDC or VCF vision.

Institutions that did not fully commit to VMware’s integrated SDDC approach often still retain architectural freedom. Their environments are typically characterized by:

  • A strong focus on virtual machines rather than tightly coupled platform services
  • Limited dependency on proprietary automation, networking, or lifecycle tooling
  • Clear separation between infrastructure, operations, and higher-level services

For these organizations, the operational model remains transferable. Skills, processes, and governance structures are not irreversibly bound to a single vendor-defined stack. This has two important consequences.

First, technical lock-in can still be actively managed. The platform does not yet dictate the future architecture. Second, the total cost of change remains realistic. Migration becomes a controlled evolution rather than a disruptive transformation.

In other words, the window for strategic choice is still open.

Why this moment matters for the public sector

Public institutions operate under conditions that differ fundamentally from those of private enterprises. Their mandate is not limited to efficiency, competitiveness, or short-term optimization. Instead, they are entrusted with continuity, legality, and accountability over long time horizons. Infrastructure decisions made today must still be explainable years later, often to different audiences and under very different political circumstances. They must withstand audits, parliamentary inquiries, regulatory reviews, and shifts in leadership without losing their legitimacy.

This requirement fundamentally changes how technology choices must be evaluated. In the public sector, infrastructure is an integral part of the institutional framework that enables the state to function effectively. Decisions are therefore judged not only by their technical benefits and performance, but by their long-term defensibility. A solution that is efficient today but difficult to justify tomorrow represents a latent risk, even if it performs flawlessly in day-to-day operations.

It is within this context that the concept of digital sovereignty has moved from abstraction to obligation. Governments increasingly define digital sovereignty not as isolation or technological nationalism, but as the capacity to maintain control over, and freedom of action within, their digital environments. This includes the ability to reassess vendor relationships, adapt sourcing strategies, and respond to geopolitical, legal, or economic shifts without being forced into reactive or crisis-driven decisions.

Digital sovereignty, in this sense, is closely tied to governance and control. It is about ensuring that institutions retain the ability to make informed, deliberate choices over time. That ability depends less on individual technologies and more on the structural properties of the platforms on which those technologies are built. When platforms are designed in ways that limit flexibility, they quietly constrain future options, regardless of their current performance or feature set.

Platform architectures that reduce reversibility are particularly problematic in the public sector. Reversibility does not imply constant change, nor does it require frequent platform switches. It simply means that change remains possible without disproportionate disruption. When an architecture makes it technically or organizationally prohibitive to adjust course, it creates a form of lock-in that extends beyond commercial dependency into the realm of institutional risk.

Even technically advanced platforms can become liabilities if they harden decisions that should remain open. Tight coupling between components, inflexible operational models, or vendor-defined evolution paths may simplify operations in the short term, but they do so at the cost of long-term flexibility. In public institutions, where the ability to adapt is inseparable from democratic accountability and legal responsibility, this trade-off must be examined with particular care.

Ultimately, digital sovereignty in the public sector is about ensuring that those dependencies remain governable. Platforms that preserve reversibility support this goal by allowing institutions to evolve deliberately, rather than react under pressure. Platforms that erode it may function well today, but they quietly accumulate strategic risk that only becomes visible when options have already narrowed.

Seen through this lens, digital sovereignty is a core governance requirement, embedded in the responsibility of public institutions to remain capable, accountable, and in control of their digital future.

Nutanix as a strategic inflection point

This is why Nutanix should not be viewed primarily as a replacement for VMware. Framing it as such immediately steers the discussion in the wrong direction. Replacements imply disruption, sunk costs, and, perhaps most critically in public-sector and enterprise contexts, an implicit critique of past decisions. Infrastructure choices, especially those made years ago, were often rational, well-founded, and appropriate for their time. Suggesting that they now need to be “replaced” risks triggering defensiveness and obscures the real strategic question.

More importantly, the replacement narrative fails to capture what Nutanix actually represents for VM-centric organizations. Nutanix does not demand a wholesale change in operating philosophy. It does not require institutions to abandon virtual machines, rewrite operational playbooks, or dismantle existing governance structures. On the contrary, it deliberately aligns with the VM-centric operating model that many public institutions and enterprises have refined over years of practice.

For this reason, Nutanix is better understood as a strategic inflection point. It marks a moment at which organizations can reassess their platform trajectory without invalidating the past. Virtual machines remain first-class citizens, operational practices remain familiar, and roles, responsibilities, and control mechanisms continue to function as before. The day-to-day reality of running infrastructure does not need to change.

What does change is the organization’s strategic posture.

In essence, Nutanix is about restoring the ability to choose. In public-sector (and enterprise environments), that ability is often more valuable than any individual feature or performance metric.

The cost of change versus the cost of waiting

A persistent misconception in infrastructure strategy is the assumption that platform change is, by definition, prohibitively expensive. This belief is understandable. Large-scale IT transformations are often associated with complex migration projects, organizational disruption, and unpredictable outcomes. These associations create a strong incentive to delay any discussion of change for as long as possible.

Yet this intuition is misleading. In practice, the cost of change does not remain constant over time. It increases the longer the architectural lock-in is allowed to deepen.

Platform lock-in rarely occurs as an intentional choice; it accumulates gradually. Additional services are adopted for convenience, tooling becomes more tightly integrated, and operational processes begin to assume the presence of a specific platform. Over time, what was once a flexible foundation hardens into an implicit dependency. At that point, changing direction no longer means replacing a component; it means changing an entire operating model.

Organizations that remain primarily VM-centric and act early are in a very different position. When virtual machines remain the dominant abstraction and higher-level platform services have not yet become deeply embedded, transitions can be managed incrementally. Workloads can be evaluated in stages. Skills can be developed alongside existing operations. Governance and procurement processes can adapt without being forced into emergency decisions.

In these cases, the cost of change is not trivial, but it is proportionate. It reflects the effort required to introduce an alternative (modular) platform, not the effort required to escape a tightly coupled ecosystem.

VMware to Nutanix Windows

By contrast, organizations that postpone evaluation until platform constraints become explicit often find themselves facing a very different reality. When licensing changes, product consolidation, or strategic shifts expose the depth of dependency, the room for change has already narrowed. Timelines become compressed, options shrink, and decisions that should have been strategic become reactive.

The cost explosion in these situations is rarely caused by the complexity of the alternative platform. It is caused by the accumulated weight of the existing one. Deep integration, bespoke operational tooling, and platform-specific governance models all add friction to any attempt at change. What might have been a manageable transition years earlier becomes a high-risk transformation project.

This leads to a paradox that many institutions only recognize in hindsight. The best time to evaluate change is precisely when there is no immediate pressure to do so. Early evaluation is a way to preserve choice. It allows organizations to understand their true dependencies, test assumptions, and (perhaps) maintain negotiation leverage.

Waiting, by contrast, does not preserve stability. It often preserves only the illusion of stability, while the cost of future change continues to rise in the background.

For public institutions in particular, this distinction is critical. Their mandate demands foresight, not just reaction. Evaluating platform alternatives before change becomes unavoidable means taking responsibility, not admitting failure.

A window that will not stay open forever

Nutanix should not be framed as a rejection of VMware, nor as a corrective to past decisions. It should be understood as an opportunity for VM-centric public institutions to reassess their strategic position while they still have the flexibility to do so.

Organizations that did not fully adopt VMware’s SDDC approach are in a particularly strong position. Their operational models are portable, their technical lock-in is still manageable, and their total cost of change remains proportionate.

For them, the question is whether they want to preserve the ability to decide tomorrow.

And in the public sector, preserving that ability is a governance responsibility.

Nutanix Is Quietly Redrawing the Boundaries of What an Infrastructure Platform Can Be

Real change happens when a platform evolves in ways that remove old constraints, open new economic paths, and give IT teams strategic room to maneuver. Nutanix has introduced enhancements that, taken individually, appear to be technical refinements but, observed together, represent something more profound: the transition of the Nutanix Cloud Platform (NCP) into a fabric of compute, storage, and mobility that behaves as one system, no matter where it runs.

This is the dismantling of long-standing architectural trade-offs, and the business impact is far greater than the technical headlines suggest.

In this article, I want to explore four developments that signal this shift:

  • Elastic VM Storage across Nutanix clusters
  • Disaggregated compute and storage scaling
  • General availability of NC2 on Google Cloud
  • The strategic partnership between Nutanix and Pure Storage

Individually, these solve real operational challenges. Combined, they create an infrastructure model that moves away from fixed constructs and toward an adaptable, cost-efficient cloud operating fabric.

Elastic VM Storage – The End of Cluster-Bound Thinking

Nutanix introduced Elastic VM Storage: the ability for one AHV cluster to consume storage from another Nutanix HCI cluster within the same Prism Central domain. It breaks one of the oldest implicit assumptions in on-premises virtualization, namely that compute and storage must live together in tightly coupled units.

By allowing VMs to be deployed on compute in one cluster while consuming storage from another, Nutanix gives IT teams a new level of elasticity and resource distribution.

It introduces an operational freedom that enterprises have never truly had:

  1. Capacity can be added where it is cheapest. If storage economics favor one site and compute expansion is easier or cheaper in another, Nutanix allows you to make decisions based on cost, not on architectural constraints.
  2. It reduces stranded resources. Every traditional environment suffers from imbalanced clusters. Some run out of storage, others out of CPU, and upgrading often means over-investing on both sides. Elastic VM Storage dissolves those silos.
  3. It prepares organizations for multi-cluster private cloud architectures. Enterprises increasingly distribute workloads across data centers, edge locations, and cloud-adjacent sites. Being able to pool resources across clusters is foundational for this future.

Nutanix is erasing the historical boundary of the cluster as a storage island.
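The stranded-resource point above can be illustrated with a toy capacity model. All numbers below are invented for illustration and have nothing to do with real Nutanix sizing; the sketch only shows why pooling storage across clusters lets more VMs be placed than siloed clusters do.

```python
# Toy model of "stranded" capacity. Each cluster has free vCPU and free
# storage (TiB). Traditionally a VM must find BOTH in the same cluster;
# with cross-cluster (elastic) storage, compute and storage can be pooled.

clusters = {
    "cluster-a": {"free_vcpu": 200, "free_tib": 5},    # compute-rich, storage-poor
    "cluster-b": {"free_vcpu": 20,  "free_tib": 120},  # storage-rich, compute-poor
}

VM_VCPU, VM_TIB = 4, 2  # per-VM demand (illustrative)

def placeable_siloed(clusters):
    """Each VM must find vCPU AND storage in the SAME cluster."""
    return sum(
        min(c["free_vcpu"] // VM_VCPU, c["free_tib"] // VM_TIB)
        for c in clusters.values()
    )

def placeable_pooled(clusters):
    """Compute and storage are pooled across clusters."""
    total_vcpu = sum(c["free_vcpu"] for c in clusters.values())
    total_tib = sum(c["free_tib"] for c in clusters.values())
    return min(total_vcpu // VM_VCPU, total_tib // VM_TIB)

print(placeable_siloed(clusters))  # 2 + 5 = 7 VMs; the rest is stranded
print(placeable_pooled(clusters))  # min(55, 62) = 55 VMs
```

In this contrived example, the siloed model strands most of cluster-a’s compute and cluster-b’s storage, while pooling turns the same hardware into roughly eight times the usable capacity.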

Disaggregated Compute and Storage Scaling

For years, Nutanix’s HCI architecture was built on the elegant simplicity of shared-nothing clusters, where compute and storage scale together. Many customers still want this. In fact, for greenfield deployments, it probably is the cleanest architecture. But enterprises also operate in a world full of legacy arrays, refresh cycles that rarely align, strict licensing budgets, and specialized workload patterns.

With support for disaggregated compute and storage scaling, Nutanix allows:

  • AHV compute-only clusters with external storage (currently supported are Dell PowerFlex and Pure Storage – more to follow)
  • Mixed configurations combining HCI nodes and compute-only nodes
  • Day-0 simplicity for disaggregated deployments

This is a statement from Nutanix, whose DNA has always been HCI: the Nutanix Cloud Platform can operate across heterogeneous infrastructure models without making the environment harder to manage.

  1. Customers can modernize at their own pace. If storage arrays still have years of depreciation left, Nutanix allows you to modernize compute now and storage later instead of forcing a full rip-and-replace.
  2. It eliminates unnecessary VMware licensing. Many organizations want to exit expensive hypervisor stacks while continuing to utilize their storage investments. AHV compute-only clusters make this transition significantly cheaper.
  3. It supports high-density compute for new workloads. AI training, GPU farms, and data pipelines often require disproportionate compute relative to storage. Disaggregation aligns the platform with the economics of modern workloads.

This is the kind of flexibility enterprises have asked for over the last few years, and Nutanix has now delivered it without compromising simplicity.
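The depreciation argument in point 1 can be made concrete with a toy calculation. All figures are hypothetical placeholders, not Nutanix or vendor pricing; the point is only that a forced rip-and-replace carries the write-off of undepreciated storage on top of the spend that happens in either scenario.

```python
# Toy cost comparison: full rip-and-replace now vs. phased modernization.
# All figures are invented placeholders for illustration.

STORAGE_BOOK_VALUE = 400_000   # remaining (undepreciated) value of existing arrays
NEW_HCI_STORAGE    = 500_000   # cost of replacing that storage capacity today
COMPUTE_REFRESH    = 300_000   # AHV compute nodes (needed in either scenario)

def rip_and_replace():
    # Replace compute AND storage now; remaining array value is written off.
    return COMPUTE_REFRESH + NEW_HCI_STORAGE + STORAGE_BOOK_VALUE

def phased():
    # Modernize compute now, keep arrays until depreciated, buy storage later.
    # (Same storage spend eventually, but deferred and with no write-off.)
    return COMPUTE_REFRESH + NEW_HCI_STORAGE

print(rip_and_replace() - phased())  # the avoidable delta is the write-off: 400000
```

The sketch ignores time value of money and operating costs, but it captures why compute-only clusters against existing PowerFlex or Pure arrays change the economics of a migration.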

Nutanix and Pure Storage

One of the most significant shifts in Nutanix’s evolution is its move beyond traditional HCI boundaries. This began when Nutanix introduced support for Dell PowerFlex as the first officially validated external storage integration, a clear signal to the market that the Nutanix platform was opening itself to disaggregated architectures. With Pure Storage FlashArray now becoming the second external storage platform to be fully supported through NCI for External Storage, that early signal has turned into a strategy and an ecosystem.

Nutanix NCI with Pure Storage

Nutanix now enables customers to run AHV compute clusters using enterprise-grade storage arrays while retaining the operational simplicity of Prism, AHV, and NCM. Pure Storage’s integration builds on the foundation established with PowerFlex, but expands the addressable market significantly by bringing a leading flash platform into the Nutanix operating model.

Why is this strategically important?

  • It confirms that Nutanix is committed to disaggregated architectures, not just compatible with them. What began with Dell PowerFlex as a single integration has matured into a structured approach. Nutanix will support multiple external storage ecosystems while providing a consistent compute and management experience.
  • It gives customers real choice in storage without fragmenting operations. With Pure Storage joining PowerFlex, Nutanix now supports two enterprise storage platforms that are widely deployed in existing environments. Customers can keep their existing tier-1 arrays and still modernize compute, hypervisor, and operations around AHV and Prism.
  • It creates an on-ramp for VMware exits with minimal disruption. Many VMware customers own Pure FlashArray deployments or run PowerFlex at scale. With these integrations, they can adopt Nutanix AHV without replatforming storage. The migration becomes a compute and virtualization change and not a full infrastructure overhaul.
  • It positions Nutanix as the control plane above heterogeneous infrastructure. The combination of NCI with PowerFlex and now Pure Storage shows that Nutanix is building an operational layer that unifies disparate architectures.
  • It aligns modernization with financial reality. Storage refreshes and compute refreshes rarely align. Supporting multiple external arrays allows Nutanix customers to modernize compute operations first, defer storage investment, and transition into HCI only when it makes sense.

Nutanix has moved from a tightly defined HCI architecture to an extensible compute platform that can embrace best-in-class storage from multiple vendors.

Nutanix Cloud Clusters on Google Cloud – A Third Strategic Hyperscaler Joins the Story

The general availability of NC2 on Google Cloud completes a strategic triangle. With AWS, Azure, and now Google Cloud all supporting Nutanix Cloud Clusters (NC2), Nutanix becomes one of the very few platforms capable of delivering a consistent private cloud operating model across all three major hyperscalers. It fundamentally changes how enterprises can think about cloud architecture, mobility, and strategic independence.

Running NC2 on Google Cloud creates a new kind of optionality. Workloads that previously needed to be refactored or painfully migrated can now move into GCP without rewriting, without architectural compromises, and without inheriting a completely different operational paradigm. For many organizations, especially those leaning into Google’s strengths in analytics, AI, and data services, this becomes a powerful pattern. Keep the operational DNA of your private cloud, but situate workloads closer to the native cloud services that accelerate innovation.

NC2 on Google Cloud

When an enterprise can run the same platform – the same hypervisor, the same automation, the same governance model – across multiple hyperscalers, the risk of cloud lock-in can be reduced. Workload mobility and cloud-exit strategies become a reality.

NC2 on Google Cloud is a sign of how Nutanix envisions the future of hybrid multi-cloud. Not as a patchwork of different platforms stitched together, but a unified operating fabric that runs consistently across every environment. With Google now joining the story, that fabric becomes broader, more flexible, and significantly more strategic.

Conclusion

Nutanix is removing the trade-offs that enterprises once accepted as inevitable.

Most IT leaders aren’t searching for (new) features. They are searching for ways to reduce risk, control cost, simplify operations, and maintain autonomy while the world around them becomes more complex. Nutanix’s recent enhancements are structural. They chip away at the constraints that made traditional infrastructure inflexible and expensive.

The platform is becoming more open, more flexible, more distributed, and more sovereign by design.

A Primer on Nutanix Cloud Clusters (NC2)

If you strip cloud strategy down to its essentials, you quickly notice that IT leaders are protecting three things: continuity, autonomy, and freedom of movement. Yet most clouds, private or public, quietly erode at least one of these freedoms. You can gain elasticity but lose portability. You get managed services but have to accept immobility. And you can gain efficiency but introduce concentration risk. Once the first workloads are deployed on a hyperscaler, many organizations underestimate the difficulty of reversing that decision later. And in some cases, they are aware of it and call it a strategic decision.

Nutanix Cloud Clusters (NC2) repositions control. It extends your existing Nutanix Cloud Platform (NCP) directly into the hyperscaler of your choice (AWS, Azure, or Google Cloud) without requiring you to rewrite applications or adopt a new operational model. NC2 runs the same Nutanix stack on hyperscaler bare metal. Think of it as extending your private cloud into someone else’s cloud.

Workload Mobility

Most cloud migrations fail not because the target cloud is inadequate, but because the friction of moving virtual machines (VMs) is underestimated. Every dependency, every network pattern, every stored image becomes an anchor that slows down the migration. NC2 removes most of these anchors. Because the target environment is still Nutanix, your VM format, storage layout, operational tooling, and lifecycle management remain identical.

NC2 on AWS

This creates a kind of reversible migration (aka repatriation). You are no longer forced to commit to one direction. You can burst, repatriate or rebalance depending on business needs, not platform constraints. The psychological barrier of “this migration better be worth it because we cannot undo it” disappears.

Cloud Exit

Cloud exit is a topic we have been discussing in our industry for some time now. IT decision-makers want to know if and how they could exit a cloud if necessary. Cost shocks, sovereignty concerns, regulatory pressure, or simple risk diversification can all trigger a reassessment.

What happens if our cloud dependency becomes a risk? What if we need to move? Do we have an exit plan?

NC2 is one of the few architectures where an exit is not a complicated multi-year re-architecture effort. Workloads running on NC2 can be moved back to an on-premises Nutanix cluster without replatforming and without importing cloud-native dependencies that are difficult to untangle. Platform symmetry makes the exit not only thinkable, but executable.

When your workloads run on NC2 in AWS or Azure, they do not inherit the hyperscaler’s native VM formats, storage layouts, or proprietary IAM constructs. They run inside the same Nutanix Cloud Platform you already operate on-prem. This means that the workloads you run in the cloud are the same as those you can run in your data center.

In many organizations, repatriation is seen as a point of failure. Something you only do when the cloud strategy “didn’t work out”. That framing is outdated. Repatriation is increasingly a proactive governance mechanism:

  • Sovereignty changes? Move workloads home.
  • Cost pressure rises? Bring certain workloads back on-prem during peak cost cycles.
  • Predictable costs? Run static workloads privately but scale elastically via NC2.
  • Vendor terms change? Shift to a different infrastructure model.
  • GPU scarcity? Temporarily run training or inference workloads where you have capacity.
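As a thought experiment, the triggers above can be expressed as a simple placement policy. This is a toy sketch; the attribute names and rules are invented for illustration and are not a Nutanix feature or API. The point is that with platform symmetry, placement becomes a governance decision you can encode and revisit, not a one-way migration.

```python
# Toy placement policy mapping repatriation triggers to a target location.
# All names and rules are hypothetical, for illustration only.

def placement(workload: dict) -> str:
    if workload.get("sovereignty_required"):
        return "on-prem"            # sovereignty changes -> move workloads home
    if workload.get("gpu_needed") and not workload.get("gpu_capacity_on_prem"):
        return "nc2-public-cloud"   # GPU scarcity -> run where capacity exists
    if workload.get("static") and workload.get("cost_sensitive"):
        return "on-prem"            # predictable cost -> run static workloads privately
    return "nc2-public-cloud"       # default: elastic capacity via NC2

print(placement({"sovereignty_required": True}))            # on-prem
print(placement({"static": True, "cost_sensitive": True}))  # on-prem
print(placement({"gpu_needed": True}))                      # nc2-public-cloud
```

Because the workload format is identical on both sides, re-running such a policy quarterly and acting on its output is an operational task rather than a transformation project.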

Nutanix Hybrid Multi-Cloud Operations

The cloud world has become multipolar. Many organizations are no longer choosing between “on-prem vs cloud”, but between multiple clouds like hyperscalers, European sovereign clouds, vertical-specific clouds, and dedicated regions.

Repatriation used to mean going home. With NC2, it can also mean going sideways:

  • From Azure to a sovereign cloud provider
  • From a hyperscaler to a private cloud built on NCP
  • From one hyperscaler to another when commercial, regulatory, or technical factors shift
  • From cloud to edge
  • From cloud to hosted private infrastructure via a service provider (OVH, for example)

In other words, it allows organizations to move workloads to the location that makes sense right now, not the one that made sense during a six-year-old strategy cycle.

Note: NC2 is fundamentally a sovereignty mechanism because it makes long-term commitments reversible.

Operational Relief for Small IT Teams

Every new stack, platform, or cloud demands new knowledge, new operational patterns, new tooling, and new troubleshooting domains. When a team of five suddenly needs to understand the details of AWS, Azure, Nutanix, Kubernetes, storage arrays, hypervisors, and cloud-native services, hybrid cloud becomes an unmanageable landscape.

Even though NC2 is not a managed service, it behaves like a consolidation layer that collapses the operational surface. The team does not need to master the specifics of hyperscaler virtualization models, instance families, cloud-native block storage semantics, or proprietary IAM patterns; instead, they operate the same Nutanix environment everywhere. The public cloud stops being an alien planet with its own physics and becomes an extension of the data center they already know.

For small teams, the value is immense. They no longer split their attention between incompatible worlds. They do not require deep AWS or Azure certifications to run VMs in the cloud, nor do they need a dedicated cloud operations squad. There is no need to maintain multiple monitoring stacks, patching processes, or network topologies. They simply work through Prism, with the same lifecycle management, upgrade workflows, automation, and storage patterns, regardless of where the hardware resides.

In short, efficiency increases as complexity decreases.

Conclusion

Ultimately, NC2 is not just a technical extension of Nutanix into public cloud regions. Think of it as a structural correction to a decade of cloud decisions shaped by lock-in, fragmentation, and asymmetrical dependencies. It gives organizations the right to change their mind without paying a penalty for it. It reduces operational noise instead of amplifying it. It allows teams to stay focused on outcomes rather than infrastructure politics.