Nutanix’s EUC Stack Reduces TCO and Improves ROI

Virtual Desktop Infrastructure (VDI) has always been a conservative technology. It sits close to users, productivity, and operational risk. For years, the dominant conversation revolved around brokers, protocols, and user experience. Today, that conversation is shifting towards licensing, platform dependency, roadmap uncertainty, and support models. Even product availability is becoming a real driver behind VDI decisions.

The recent announcements from Omnissa clearly reflect this shift. Horizon 8 is now generally available on Nutanix AHV, opening a long-awaited alternative virtualization path for enterprise-grade VDI.

VMware vSphere Foundation for VDI (VVF for VDI)

The combined Omnissa Horizon and VMware vSphere Foundation for VDI offering responds to a very real customer desire for simplification. For organizations already standardized on VMware technologies, fewer contracts and a predefined bundle feel familiar and operationally convenient.

Broadcom has announced the discontinuation of VMware vSphere Foundation in specific countries and regions, most notably in parts of EMEA. The decision does not apply globally (yet), but it is explicit, regional, and commercially binding for affected markets. Availability is no longer uniform, and customers must now verify on a country-by-country basis whether VMware vSphere Foundation (VVF) can still be procured.

It is important to be precise, though. The recent discontinuation of VVF applies only to specific countries and, as of today, it seems it does not include VVF for VDI for existing Omnissa customers. Horizon customers can still consume VVF for VDI in those environments, and there has been no formal announcement that this specific bundle will be withdrawn.

At the same time, it would be naive to ignore the broader context. VVF for VDI ultimately depends on the commercial and strategic relationship between Omnissa and Broadcom. Omnissa does not fully control the underlying hypervisor roadmap, its regional availability, or future sales policies. Any material change requires negotiation between two vendors with different priorities and incentives.

Currently, VVF for VDI, which can be bundled with Horizon, includes vSphere 8. Support for the vSphere portion of the bundle will continue to be provided by VMware by Broadcom. The bundled offerings can be purchased in terms of up to five years (restrictions may apply). Subject to their general terms, Broadcom/VMware will provide vSphere 8 for the period of the license a customer has purchased. Broadcom has not yet announced timelines for vSphere 9 support with VVF for VDI. Omnissa is working with Broadcom to enable VVF for VDI in an upcoming vSphere 9.x release, but no date has been committed. Customers with a current requirement to move to vSphere 9 will need to buy VCF or VVF separately from Broadcom.

Recent decisions around VVF in parts of EMEA illustrate this clearly. Even if VVF for VDI remains available today, customers are implicitly betting on the continued alignment between Omnissa and Broadcom. Packaging may simplify procurement in the short term, but it also concentrates dependency at the most critical layer of the stack. For VDI environments, where stability and predictability are non-negotiables, this dependency becomes an integral part of the risk assessment.

Why This Context Matters for NCI-VDI

This is where Nutanix Cloud Infrastructure for VDI starts to look less like an alternative and more like a structurally safer choice.

With Horizon supported on AHV, customers can decouple broker choice from hypervisor dependency. But the value goes beyond commercial optionality. It also shows up in how the platform behaves operationally.

Omnissa Horizon Agents on Nutanix AHV

Enhanced refresh workflows introduce recovery points for desktop refresh operations. Instead of rebuilding or troubleshooting desktops under pressure, IT teams gain a practical rollback mechanism. It is essentially an undo button for virtual desktops, reducing downtime, simplifying remediation, and improving resilience for business continuity scenarios.

GPU-accelerated VDI is another area where the platform advantage becomes tangible. Managed NVIDIA vGPU support is integrated into compute profiles for Horizon workloads. GPU profiles are no longer an afterthought or a separate administrative domain. This makes it significantly easier to deliver high-performance virtual workstations for AI, design, healthcare imaging, or analytics workloads, while reducing operational complexity for administrators.

For environments relying on RDSH, NCI-VDI now brings full automation for farms, published desktops, and applications. Farm creation, scaling, and app publishing no longer require manual orchestration. 

ClonePrep customization completes the picture. Virtual machines can be customized rapidly during pool or farm creation, giving IT teams precise control over how desktops are initialized. Configurations remain consistent across pools, while still allowing organizational requirements to be enforced centrally.

These are the current Nutanix configuration maximums for AHV:

  • Cluster size – The maximum number of AHV hosts per cluster is 32.
  • VMs per host – The maximum number of powered-on VDI VMs per AHV host is 200.
  • VMs per cluster – The maximum number of powered-on VMs per AHV cluster is 4096.

Note: The Horizon 8 reference architecture for AHV deployments is available here.
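As a quick illustration (not official sizing guidance), the maximums above bound the theoretical VDI density of a single cluster. The sketch below is hypothetical: real sizing must also account for failover capacity, CPU/RAM ratios, and workload profiles, and the one-node reserve is just an example policy.

```python
# Illustrative sanity check against the published AHV configuration maximums.
MAX_HOSTS_PER_CLUSTER = 32
MAX_VDI_VMS_PER_HOST = 200
MAX_VMS_PER_CLUSTER = 4096

def max_vdi_vms(hosts: int, reserve_hosts: int = 1) -> int:
    """Upper bound on powered-on VDI VMs for a cluster of `hosts` nodes,
    keeping `reserve_hosts` spare for failover (a hypothetical policy)."""
    hosts = min(hosts, MAX_HOSTS_PER_CLUSTER)
    usable = max(hosts - reserve_hosts, 0)
    return min(usable * MAX_VDI_VMS_PER_HOST, MAX_VMS_PER_CLUSTER)

# A full 32-node cluster with one node reserved is capped by the
# per-cluster limit (4096), not by per-host density (31 * 200 = 6200).
print(max_vdi_vms(32))   # 4096
print(max_vdi_vms(12))   # 11 * 200 = 2200
```

Note how the per-cluster VM limit, not per-host density, becomes the binding constraint for large clusters.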

Licensing That Reflects How VDI Is Actually Used

Licensing discussions around VDI often focus narrowly on user counts and price points. What is frequently overlooked is what is not included, as well as the architectural assumptions that are quietly embedded in the bundle.

VVF for VDI, whether consumed by Omnissa Horizon customers or Citrix customers (regular VVF), does not include NSX and its distributed firewalling capabilities. Network micro-segmentation, east-west traffic control, and fine-grained security policies are not part of the VVF for VDI entitlement. Customers that require these capabilities must either accept architectural gaps or upgrade to VMware Cloud Foundation (VCF).

NCI-VDI approaches this differently, particularly in the Ultimate edition. Licensing remains per concurrent user, pooled across the environment and based on peak usage, but the functional scope expands in a way that directly impacts architecture and resilience.
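A minimal sketch of what "pooled, based on the highest usage" means in practice (site names and figures are hypothetical; actual license terms govern):

```python
# Illustrative model of pooled concurrent-user licensing: entitlement is
# sized against the peak *total* concurrency across all sites, not against
# each site's individual peak.
from typing import Dict, List

def required_licenses(samples: List[Dict[str, int]]) -> int:
    """Each sample maps site -> concurrent users at one point in time.
    The pooled requirement is the highest total concurrency observed."""
    return max(sum(sample.values()) for sample in samples)

# Two sites whose peaks do not coincide: licensing the pooled peak (900)
# is cheaper than licensing each site's individual peak (600 + 500 = 1100).
samples = [
    {"hq": 600, "branch": 300},   # morning: HQ peak
    {"hq": 350, "branch": 500},   # afternoon: branch peak
]
print(required_licenses(samples))  # 900
```

The design point is that non-overlapping usage patterns across sites or shifts directly reduce the license count.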

With NCI-VDI Ultimate, customers gain native micro-segmentation capabilities as part of the platform. Security is enforced at the workload level without relying on an external networking stack or add-on products. For VDI environments, especially in regulated or multi-tenant scenarios, this enables consistent isolation between desktop pools, user groups, and supporting services without introducing operational complexity.

Replication and availability are another area where licensing and architecture intersect. NCI-VDI Ultimate includes advanced replication capabilities, including metro availability as well as Async DR and NearSync replication.

The key point here is alignment. Licensing reflects how VDI is actually used in production, including security boundaries within the platform, continuous availability expectations, and the need to protect stateful desktops without redesigning the entire environment. When these capabilities are included by design, TCO becomes more predictable and ROI improves over the full lifecycle.

Storage Included

User data has always been one of the hidden cost drivers in VDI projects. Profiles, documents, shared data, and application artifacts often introduce additional products, licenses, and operational silos.

With NCI-VDI, up to 100 GiB of Nutanix Unified Storage (NUS) per user is included and pooled. Home directories, profile data, shared file services, or other workloads can all be covered without introducing a separate storage platform.
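The pooled entitlement can be sketched as follows (figures are hypothetical examples, not sizing guidance):

```python
# Illustrative pooled-storage check: 100 GiB of NUS per licensed user,
# pooled across the environment rather than enforced per individual user.
GIB_PER_USER = 100

def pooled_entitlement_gib(licensed_users: int) -> int:
    """Total NUS capacity included for a given number of licensed users."""
    return licensed_users * GIB_PER_USER

def within_entitlement(licensed_users: int, used_gib: int) -> bool:
    """Heavy individual users are fine as long as the pooled total fits."""
    return used_gib <= pooled_entitlement_gib(licensed_users)

# 1,000 users pool 100,000 GiB; one power user consuming 2,000 GiB is
# covered as long as aggregate usage stays within the pool.
print(pooled_entitlement_gib(1000))       # 100000
print(within_entitlement(1000, 98000))    # True
print(within_entitlement(1000, 120000))   # False
```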

Nutanix Unified Storage (NUS) is a software-defined storage platform that consolidates file, object, and block storage into a single platform. Integrated with Nutanix hyperconverged infrastructure (HCI), NUS enhances the security and performance of virtual desktops and applications while simplifying storage administration. Teams can manage and control all file, object, and block data in one place, both on-premises and in public clouds such as AWS and Azure.

Again, fewer products and fewer operational boundaries translate directly into lower TCO.

Support Models Matter When VDI Becomes Business-Critical

Support is rarely part of the initial VDI design discussion. It usually becomes relevant when something breaks or when an upgrade behaves differently than expected.

In the VMware vSphere Foundation model, support is typically delivered through distributors and channel partners. While many partners do excellent work, this structure introduces an additional layer between the customer and the platform vendor. When issues span multiple layers, including broker, hypervisor, and storage, responsibility can become fragmented.

With NCI-VDI, customers running Horizon or Citrix on AHV engage directly with Nutanix for the infrastructure layer. Compute, storage, and virtualization are owned by a single support organization with a Net Promoter Score (NPS) consistently above 90.

Fewer handoffs, faster root-cause analysis, and clearer accountability directly improve operational efficiency and ROI.

Compliance Without Disruption – A Public-Sector Perspective

For healthcare organizations and federal agencies, licensing compliance is a continuity topic. Clinical systems and public services cannot be interrupted because of a licensing issue.

With NCI-VDI, license enforcement preserves operational continuity. Existing workloads continue to run even if a customer temporarily falls out of compliance. There is no forced shutdown and no service interruption.

Restrictions apply elsewhere, such as cluster expansion, access to support, management UIs, or upgrades and patches. Compliance remains enforceable, but without turning it into an operational incident. For public sector environments, this behavior is essential.

Closing Thought

VDI is no longer just about delivering desktops and virtual applications. It has become a platform decision that directly affects cost control, resilience, compliance, and long-term autonomy. Combined offerings like VVF for VDI may simplify procurement in the short term, but they also increase dependency at the most critical layer of the stack, a layer that recent changes have shown can shift regionally, commercially, and strategically.

Nutanix does not force customers into a single broker strategy: Horizon runs on AHV, and Citrix remains a long-standing partner. The broker is important, but it is not where most long-term cost, risk, and complexity accumulate. The real differentiation lies below the broker layer.

When compute, storage, virtualization, security, and availability are delivered as one integrated platform, TCO drops almost naturally. Fewer vendors reduce dependency risk, and fewer dependencies reduce roadmap uncertainty. Lastly, fewer handoffs reduce operational friction. Together, these effects compound over time and translate directly into a higher return on investment.

By giving customers the freedom to decouple the broker choice from hypervisor dependency, embedding security and availability into the platform, and aligning licensing with how VDI is actually used in production, Nutanix NCI-VDI reduces TCO in ways that only become fully visible over multiple renewal cycles.

Multi-cloud is normal in public cloud. Why is “single-cloud” still normal in private cloud?

If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management and a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organizations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organizations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervisors, for example, treat cloud as an outsourcing risk and expect institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons. Jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors, they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor's strategic direction. What does it mean for a nation when 80 or 90 percent of its public sector depends on a single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are being evaluated again where simplicity and cost control dominate. Proxmox is discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is being re-examined where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years, and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Nutanix should not be viewed primarily as a replacement for VMware

Public sector organizations rarely change infrastructure platforms lightly. Stability, continuity, and operational predictability matter more than shiny, modern solutions. Virtual machines became the dominant abstraction because they allowed institutions to standardize operations, separate applications from hardware, and professionalize IT operations over the long term.

Over many years, VMware became synonymous with this VM-centric operating model, as it provided a coherent, mature, and widely adopted implementation of virtualized infrastructure. Choosing VMware was, for a long time, a rational and defensible decision.

Crucially, the platform was modular. Organizations could adopt it incrementally, integrate it with existing tools, and shape their own operating models on top of it. This modularity translated into operational freedom. Institutions retained the ability to decide how far they wanted to go, which components to use, and which parts of their environment should remain under their direct control. These characteristics explain why VMware became the default choice for so many public institutions. It aligned well with the values of stability, proportionality, and long-term accountability.

The strategic question public institutions face today is not whether that decision was wrong, but whether they can learn from it. We need to ask whether the context around that decision has changed and whether continuing along the same platform path still preserves long-term control, optionality, and state capability.

From VM-centric to platform-path dependent

It is important to be precise in terminology. Most public sector IT environments are not VMware-centric by design. They are VM-centric. Virtual machines are the core operational unit, deeply embedded in processes, tooling, skills, and governance models. This distinction is very important. A VM-centric organization can, in principle, operate on different platforms without redefining its entire operating model. A VMware-centric organization, by contrast, has often moved further down a specific architectural path by integrating tightly with proprietary platform services, management layers, and bundled stacks that are difficult to disentangle later.

This is where the strategic divergence begins.

Over time, VMware’s platform has evolved from a modular virtualization layer into an increasingly integrated software-defined data center (SDDC) and VCF-oriented (VMware Cloud Foundation) stack. That evolution is not inherently negative. Integrated platforms can deliver efficiencies and simplified operations, but they also introduce path dependency. Decisions made today shape which options remain viable tomorrow.

The decisive factor, then, is not pricing; prices change. For public institutions, this is a governance issue, not a technical one.

There is a significant difference between organizations that adopted VMware primarily as a hypervisor platform and those that fully embraced the SDDC or VCF vision.

Institutions that did not fully commit to VMware’s integrated SDDC approach often still retain architectural freedom. Their environments are typically characterized by:

  • A strong focus on virtual machines rather than tightly coupled platform services
  • Limited dependency on proprietary automation, networking, or lifecycle tooling
  • Clear separation between infrastructure, operations, and higher-level services

For these organizations, the operational model remains transferable. Skills, processes, and governance structures are not irreversibly bound to a single vendor-defined stack. This has two important consequences.

First, technical lock-in can still be actively managed. The platform does not yet dictate the future architecture. Second, the total cost of change remains realistic. Migration becomes a controlled evolution rather than a disruptive transformation.

In other words, the window for strategic choice is still open.

Why this moment matters for the public sector

Public institutions operate under conditions that differ fundamentally from those of private enterprises. Their mandate is not limited to efficiency, competitiveness, or short-term optimization. Instead, they are entrusted with continuity, legality, and accountability over long time horizons. Infrastructure decisions made today must still be explainable years later, often to different audiences and under very different political circumstances. They must withstand audits, parliamentary inquiries, regulatory reviews, and shifts in leadership without losing their legitimacy.

This requirement fundamentally changes how technology choices must be evaluated. In the public sector, infrastructure is an integral part of the institutional framework that enables the state to function effectively. Decisions are therefore judged not only by their technical benefits and performance, but by their long-term defensibility. A solution that is efficient today but difficult to justify tomorrow represents a latent risk, even if it performs flawlessly in day-to-day operations.

It is within this context that the concept of digital sovereignty has moved from abstraction to obligation. Governments increasingly define digital sovereignty not as isolation or technological nationalism, but as the capacity to maintain control over, and freedom of action within, their digital environments. This includes the ability to reassess vendor relationships, adapt sourcing strategies, and respond to geopolitical, legal, or economic shifts without being forced into reactive or crisis-driven decisions.

Digital sovereignty, in this sense, is closely tied to governance and control. It is about ensuring that institutions retain the ability to make informed, deliberate choices over time. That ability depends less on individual technologies and more on the structural properties of the platforms on which those technologies are built. When platforms are designed in ways that limit flexibility, they quietly constrain future options, regardless of their current performance or feature set.

Platform architectures that reduce reversibility are particularly problematic in the public sector. Reversibility does not imply constant change, nor does it require frequent platform switches. It simply means that change remains possible without disproportionate disruption. When an architecture makes it technically or organizationally prohibitive to adjust course, it creates a form of lock-in that extends beyond commercial dependency into the realm of institutional risk.

Even technically advanced platforms can become liabilities if they harden decisions that should remain open. Tight coupling between components, inflexible operational models, or vendor-defined evolution paths may simplify operations in the short term, but they do so at the cost of long-term flexibility. In public institutions, where the ability to adapt is inseparable from democratic accountability and legal responsibility, this trade-off must be examined with particular care.

Ultimately, digital sovereignty in the public sector is about ensuring that those dependencies remain governable. Platforms that preserve reversibility support this goal by allowing institutions to evolve deliberately, rather than react under pressure. Platforms that erode it may function well today, but they quietly accumulate strategic risk that only becomes visible when options have already narrowed.

Seen through this lens, digital sovereignty is a core governance requirement, embedded in the responsibility of public institutions to remain capable, accountable, and in control of their digital future.

Nutanix as a strategic inflection point

This is why Nutanix should not be viewed primarily as a replacement for VMware. Framing it as such immediately steers the discussion in the wrong direction. Replacements imply disruption, sunk costs, and, perhaps most critically in public-sector and enterprise contexts, an implicit critique of past decisions. Infrastructure choices, especially those made years ago, were often rational, well-founded, and appropriate for their time. Suggesting that they now need to be “replaced” risks triggering defensiveness and obscures the real strategic question.

More importantly, the replacement narrative fails to capture what Nutanix actually represents for VM-centric organizations. Nutanix does not demand a wholesale change in operating philosophy. It does not require institutions to abandon virtual machines, rewrite operational playbooks, or dismantle existing governance structures. On the contrary, it deliberately aligns with the VM-centric operating model that many public institutions and enterprises have refined over years of practice.

For this reason, Nutanix is better understood as a strategic inflection point. It marks a moment at which organizations can reassess their platform trajectory without invalidating the past. Virtual machines remain first-class citizens, operational practices remain familiar, and roles, responsibilities, and control mechanisms continue to function as before. The day-to-day reality of running infrastructure does not need to change.

What does change is the organization’s strategic posture.

In essence, Nutanix is about restoring the ability to choose. In public-sector (and enterprise) environments, that ability is often more valuable than any individual feature or performance metric.

The cost of change versus the cost of waiting

A persistent misconception in infrastructure strategy is the assumption that platform change is, by definition, prohibitively expensive. This belief is understandable. Large-scale IT transformations are often associated with complex migration projects, organizational disruption, and unpredictable outcomes. These associations create a strong incentive to delay any discussion of change for as long as possible.

Yet this intuition is misleading. In practice, the cost of change does not remain constant over time. It increases the longer the architectural lock-in is allowed to deepen.

Platform lock-in is rarely an intentional choice; it accumulates gradually. Additional services are adopted for convenience, tooling becomes more tightly integrated, and operational processes begin to assume the presence of a specific platform. Over time, what was once a flexible foundation hardens into an implicit dependency. At that point, changing direction no longer means replacing a component; it means changing an entire operating model.

Organizations that remain primarily VM-centric and act early are in a very different position. When virtual machines remain the dominant abstraction and higher-level platform services have not yet become deeply embedded, transitions can be managed incrementally. Workloads can be evaluated in stages. Skills can be developed alongside existing operations. Governance and procurement processes can adapt without being forced into emergency decisions.

In these cases, the cost of change is not trivial, but it is proportionate. It reflects the effort required to introduce an alternative (modular) platform, not the effort required to escape a tightly coupled ecosystem.
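The intuition that waiting raises the bill can be made concrete with a toy model. The sketch below uses invented numbers (they are illustrative, not benchmarks): migration cost is modeled as a per-VM effort plus a fixed untangling penalty for every deeply integrated platform service the environment has come to depend on.

```python
def migration_cost(vm_count, integrated_services, per_vm=1.0, per_service_penalty=50.0):
    """Toy model of platform-change cost.

    Moving VMs scales linearly, but every deeply integrated platform
    service adds a fixed untangling penalty. All numbers are
    illustrative assumptions, not measured data.
    """
    return vm_count * per_vm + integrated_services * per_service_penalty

# Same 500 VMs; the only thing that changed while waiting is the
# number of platform services the environment silently depends on.
early = migration_cost(vm_count=500, integrated_services=2)
late = migration_cost(vm_count=500, integrated_services=12)
print(early, late)  # 600.0 1100.0
```

The per-VM effort barely changes over time; what grows is the accumulated weight of integrations, which is exactly where the cost explosion described below comes from.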

VMware to Nutanix Windows

By contrast, organizations that postpone evaluation until platform constraints become explicit often find themselves facing a very different reality. When licensing changes, product consolidation, or strategic shifts expose the depth of dependency, the room for change has already narrowed. Timelines become compressed, options shrink, and decisions that should have been strategic become reactive.

The cost explosion in these situations is rarely caused by the complexity of the alternative platform. It is caused by the accumulated weight of the existing one. Deep integration, bespoke operational tooling, and platform-specific governance models all add friction to any attempt at change. What might have been a manageable transition years earlier becomes a high-risk transformation project.

This leads to a paradox that many institutions only recognize in hindsight. The best time to evaluate change is precisely when there is no immediate pressure to do so. Early evaluation is a way to preserve choice. It allows organizations to understand their true dependencies, test assumptions, and (perhaps) maintain negotiation leverage.

Waiting, by contrast, does not preserve stability. It often preserves only the illusion of stability, while the cost of future change continues to rise in the background.

For public institutions in particular, this distinction is critical. Their mandate demands foresight, not just reaction. Evaluating platform alternatives before change becomes unavoidable is an act of taking responsibility.

A window that will not stay open forever

Nutanix should not be framed as a rejection of VMware, nor as a corrective to past decisions. It should be understood as an opportunity for VM-centric public institutions to reassess their strategic position while they still have the flexibility to do so.

Organizations that did not fully adopt VMware’s SDDC approach are in a particularly strong position. Their operational models are portable, their technical lock-in is still manageable, and their total cost of change remains proportionate.

For them, the question is whether they want to preserve the ability to decide tomorrow.

And in the public sector, preserving that ability is a governance responsibility.

Nutanix Is Quietly Redrawing the Boundaries of What an Infrastructure Platform Can Be

Real change happens when a platform evolves in ways that remove old constraints, open new economic paths, and give IT teams strategic room to maneuver. Nutanix has introduced enhancements that, taken individually, appear to be technical refinements, but observed together, they represent something more profound: the transition of the Nutanix Cloud Platform (NCP) into a fabric of compute, storage, and mobility that behaves as one system, no matter where it runs.

This is the dismantling of long-standing architectural trade-offs, and the business impact is far greater than the technical headlines suggest.

In this article, I want to explore four developments that signal this shift:

  • Elastic VM Storage across Nutanix clusters
  • Disaggregated compute and storage scaling
  • General availability of NC2 on Google Cloud
  • The strategic partnership between Nutanix and Pure Storage

Individually, these solve real operational challenges. Combined, they create an infrastructure model that moves away from fixed constructs and toward an adaptable, cost-efficient, cloud-operating fabric.

Elastic VM Storage – The End of Cluster-Bound Thinking

Nutanix introduced Elastic VM Storage: the ability for one AHV cluster to consume storage from another Nutanix HCI cluster within the same Prism Central domain. It breaks one of the oldest implicit assumptions in on-premises virtualization, namely that compute and storage must live together in tightly coupled units.

By allowing VMs to be deployed on compute in one cluster while consuming storage from another, Nutanix gives IT teams a new level of elasticity and resource distribution.

It introduces an operational freedom that enterprises have never truly had:

  1. Capacity can be added where it is cheapest. If storage economics favor one site and compute expansion is easier or cheaper in another, Nutanix allows you to make decisions based on cost, not on architectural constraints.
  2. It reduces stranded resources. Every traditional environment suffers from imbalanced clusters. Some run out of storage, others out of CPU, and upgrading often means over-investing on both sides. Elastic VM Storage dissolves those silos.
  3. It prepares organizations for multi-cluster private cloud architectures. Enterprises increasingly distribute workloads across data centers, edge locations, and cloud-adjacent sites. Being able to pool resources across clusters is foundational for this future.

Nutanix is erasing the historical boundary of the cluster as a storage island.
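The stranded-resources argument can be sketched in a few lines of code. The model below is purely illustrative (cluster names, sizes, and costs are invented, and this is not the Nutanix API): it picks compute placement and storage placement independently, which is exactly the decoupling that cluster-bound storage forbids.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_cpu_cores: int
    free_storage_tb: float
    storage_cost_per_tb: float  # illustrative monthly cost, not real pricing

def place_vm(clusters, cpu_needed, storage_needed_tb):
    """Choose compute and storage clusters independently.

    With cluster-bound storage, both resources must come from one
    cluster; with elastic VM storage they can be decoupled.
    """
    compute_ok = [c for c in clusters if c.free_cpu_cores >= cpu_needed]
    storage_ok = [c for c in clusters if c.free_storage_tb >= storage_needed_tb]
    if not compute_ok or not storage_ok:
        return None
    compute = max(compute_ok, key=lambda c: c.free_cpu_cores)        # most headroom
    storage = min(storage_ok, key=lambda c: c.storage_cost_per_tb)   # cheapest capacity
    return compute.name, storage.name

clusters = [
    Cluster("edge-a", free_cpu_cores=64, free_storage_tb=2,   storage_cost_per_tb=40.0),
    Cluster("dc-b",   free_cpu_cores=8,  free_storage_tb=200, storage_cost_per_tb=12.0),
]

# A VM needing 16 cores and 5 TB fits in neither cluster on its own,
# but fits once compute and storage placement are decoupled.
print(place_vm(clusters, cpu_needed=16, storage_needed_tb=5))  # ('edge-a', 'dc-b')
```

In the cluster-bound model, this VM would fail placement in both clusters: one is out of storage, the other out of CPU. Decoupling the two decisions is what dissolves the silo.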

Disaggregated Compute and Storage Scaling

For years, Nutanix’s HCI architecture was built on the elegant simplicity of shared-nothing clusters, where compute and storage scale together. Many customers still want this. In fact, for greenfield deployments, it probably is the cleanest architecture. But enterprises also operate in a world full of legacy arrays, refresh cycles that rarely align, strict licensing budgets, and specialized workload patterns.

With support for disaggregated compute and storage scaling, Nutanix allows:

  • AHV compute-only clusters with external storage (currently supported are Dell PowerFlex and Pure Storage – more to follow)
  • Mixed configurations combining HCI nodes and compute-only nodes
  • Day-0 simplicity for disaggregated deployments

This is a statement from Nutanix, whose DNA was always HCI: The Nutanix Cloud Platform can operate across heterogeneous infrastructure models without making the environment harder to manage.

  1. Customers can modernize at their own pace. If storage arrays still have years of depreciation left, Nutanix allows you to modernize compute now and storage later instead of forcing a full rip-and-replace.
  2. It eliminates unnecessary VMware licensing. Many organizations want to exit expensive hypervisor stacks while continuing to utilize their storage investments. AHV compute-only clusters make this transition significantly cheaper.
  3. It supports high-density compute for new workloads. AI training, GPU farms, and data pipelines often require disproportionate compute relative to storage. Disaggregation aligns the platform with the economics of modern workloads.

This is the kind of flexibility enterprises have asked for during the last few years, and Nutanix has now delivered it without compromising simplicity.
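The economics behind point 3 can be illustrated with a small sizing sketch. The node shapes and the simplifying rule that storage capacity comes only from HCI nodes (or an external array) are assumptions for illustration, not Nutanix sizing guidance.

```python
import math

def recommend_expansion(cpu_deficit_cores, storage_deficit_tb,
                        hci_node=(32, 40.0), compute_node=(64, 0.0)):
    """Return how many nodes of each type cover the deficits.

    hci_node / compute_node are (cores, storage_tb) per node --
    invented shapes, not actual Nutanix SKUs. Storage is assumed to
    come only from HCI nodes (or an external array).
    """
    hci = math.ceil(storage_deficit_tb / hci_node[1]) if storage_deficit_tb > 0 else 0
    remaining_cpu = cpu_deficit_cores - hci * hci_node[0]
    compute = math.ceil(remaining_cpu / compute_node[0]) if remaining_cpu > 0 else 0
    return {"hci_nodes": hci, "compute_only_nodes": compute}

# AI-style workload: lots of CPU needed, storage already covered.
print(recommend_expansion(cpu_deficit_cores=200, storage_deficit_tb=0))
# {'hci_nodes': 0, 'compute_only_nodes': 4}

# Storage-heavy growth: HCI nodes alone cover both deficits.
print(recommend_expansion(cpu_deficit_cores=40, storage_deficit_tb=60))
# {'hci_nodes': 2, 'compute_only_nodes': 0}
```

Without compute-only nodes, the first scenario would force the purchase of storage nobody needs; disaggregation lets the expansion follow the workload’s actual ratio.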

Nutanix and Pure Storage

One of the most significant shifts in Nutanix’s evolution is its move beyond traditional HCI boundaries. This began when Nutanix introduced support for Dell PowerFlex as the first officially validated external storage integration, a clear signal to the market that the Nutanix platform was opening itself to disaggregated architectures. With Pure Storage FlashArray now becoming the second external storage platform to be fully supported through NCI for External Storage, that early signal has turned into a strategy and an ecosystem.

Nutanix NCI with Pure Storage

Nutanix now enables customers to run AHV compute clusters using enterprise-grade storage arrays while retaining the operational simplicity of Prism, AHV, and NCM. Pure Storage’s integration builds on the foundation established with PowerFlex, but expands the addressable market significantly by bringing a leading flash platform into the Nutanix operating model.

Why is this strategically important?

  • It confirms that Nutanix is committed to disaggregated architectures, not just compatible with them. What began with Dell PowerFlex as a single integration has matured into a structured approach. Nutanix will support multiple external storage ecosystems while providing a consistent compute and management experience.
  • It gives customers real choice in storage without fragmenting operations. With Pure Storage joining PowerFlex, Nutanix now supports two enterprise storage platforms that are widely deployed in existing environments. Customers can keep their existing tier-1 arrays and still modernize compute, hypervisor, and operations around AHV and Prism.
  • It creates an on-ramp for VMware exits with minimal disruption. Many VMware customers own Pure FlashArray deployments or run PowerFlex at scale. With these integrations, they can adopt Nutanix AHV without replatforming storage. The migration becomes a compute and virtualization change and not a full infrastructure overhaul.
  • It positions Nutanix as the control plane above heterogeneous infrastructure. The combination of NCI with PowerFlex and now Pure Storage shows that Nutanix is building an operational layer that unifies disparate architectures.
  • It aligns modernization with financial reality. Storage refreshes and compute refreshes rarely align. Supporting multiple external arrays allows Nutanix customers to modernize compute operations first, defer storage investment, and transition into HCI only when it makes sense.

Nutanix has moved from a tightly defined HCI architecture to an extensible compute platform that can embrace best-in-class storage from multiple vendors.

Nutanix Cloud Clusters on Google Cloud – A Third Strategic Hyperscaler Joins the Story

The general availability of NC2 on Google Cloud completes a strategic triangle. With AWS, Azure, and now Google Cloud all supporting Nutanix Cloud Clusters (NC2), Nutanix becomes one of the very few platforms capable of delivering a consistent private cloud operating model across all three major hyperscalers. This fundamentally changes how enterprises can think about cloud architecture, mobility, and strategic independence.

Running NC2 on Google Cloud creates a new kind of optionality. Workloads that previously needed to be refactored or painfully migrated can now move into GCP without rewriting, without architectural compromises, and without inheriting a completely different operational paradigm. For many organizations, especially those leaning into Google’s strengths in analytics, AI, and data services, this becomes a powerful pattern. Keep the operational DNA of your private cloud, but situate workloads closer to the native cloud services that accelerate innovation.

NC2 on Google Cloud

When an enterprise can run the same platform – the same hypervisor, the same automation, the same governance model – across multiple hyperscalers, the risk of cloud lock-in can be reduced. Workload mobility and cloud-exit strategies become a reality.

NC2 on Google Cloud is a sign of how Nutanix envisions the future of hybrid multi-cloud. Not as a patchwork of different platforms stitched together, but a unified operating fabric that runs consistently across every environment. With Google now joining the story, that fabric becomes broader, more flexible, and significantly more strategic.

Conclusion

Nutanix is removing the trade-offs that enterprises once accepted as inevitable.

Most IT leaders aren’t searching for (new) features. They are searching for ways to reduce risk, control cost, simplify operations, and maintain autonomy while the world around them becomes more complex. Nutanix’s recent enhancements are structural. They chip away at the constraints that made traditional infrastructure inflexible and expensive.

The platform is becoming more open, more flexible, more distributed, and more sovereign by design.

A Primer on Nutanix Cloud Clusters (NC2)

If you strip cloud strategy down to its essentials, you quickly notice that IT leaders are protecting three things: continuity, autonomy, and freedom of movement. Yet most clouds, private or public, quietly erode at least one of these freedoms. You can gain elasticity but lose portability. You get managed services but have to accept immobility. And you can gain efficiency, but introduce concentration risk. Once the first workloads are deployed on a hyperscaler, many organizations underestimate the difficulty of reversing that decision later. And in some cases, they are aware of it and call it a strategic decision.

Nutanix Cloud Clusters (NC2) repositions control. It extends your existing Nutanix Cloud Platform (NCP) directly into the hyperscaler of your choice (AWS, Azure, or Google Cloud) without requiring you to rewrite applications or adopt a new operational model. NC2 runs the same Nutanix stack on hyperscaler bare metal. Think of it as extending your private cloud to someone else’s cloud.

Workload Mobility

Most cloud migrations fail not because the target cloud is inadequate, but because the friction of moving virtual machines (VMs) is underestimated. Every dependency, every network pattern, every stored image becomes an anchor that slows down the migration. NC2 removes most of these anchors. Because the target environment is still Nutanix, your VM format, storage layout, operational tooling, and lifecycle management remain identical.

NC2 on AWS

This creates a kind of reversible migration (aka repatriation). You are no longer forced to commit to one direction. You can burst, repatriate, or rebalance depending on business needs, not platform constraints. The psychological barrier of “this migration better be worth it because we cannot undo it” disappears.

Cloud Exit

Cloud exit is a topic we have been discussing in our industry for some time now. IT decision-makers want to know if and how they could exit a cloud if necessary. Cost shocks, sovereignty concerns, regulatory pressure, or simple risk diversification can all trigger a reassessment.

What happens if our cloud dependency becomes a risk? What if we need to move? Do we have an exit plan?

NC2 is one of the few architectures where an exit is not a complicated multi-year re-architecture effort. Workloads running on NC2 can be moved back to an on-premises Nutanix cluster without replatforming and without importing cloud-native dependencies that are difficult to untangle. Platform symmetry makes the exit not only thinkable, but executable.

When your workloads run on NC2 in AWS or Azure, they do not inherit the hyperscaler’s native VM formats, storage layouts, or proprietary IAM constructs. They run inside the same Nutanix Cloud Platform you already operate on-prem. This means that the workloads you run in the cloud are the same as those you can run in your data center.

In many organizations, repatriation is seen as a point of failure. Something you only do when the cloud strategy “didn’t work out”. That framing is outdated. Repatriation is increasingly a proactive governance mechanism:

  • Sovereignty changes? Move workloads home.
  • Cost pressure rises? Bring certain workloads back on-prem during peak cost cycles.
  • Predictable costs? Run static workloads privately but scale elastically via NC2.
  • Vendor terms change? Shift to a different infrastructure model.
  • GPU scarcity? Temporarily run training or inference workloads where you have capacity.
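The triggers above amount to a small decision table. Sketched in code (the trigger names and actions are invented for illustration; nothing here is a Nutanix or hyperscaler API), repatriation stops being an emergency and becomes a lookup against a policy the organization defined in advance.

```python
# Governance triggers mapped to placement actions, mirroring the list above.
# All keys and values are hypothetical labels, not product features.
PLACEMENT_POLICY = {
    "sovereignty_change": "repatriate_to_onprem",
    "cost_spike": "repatriate_static_workloads",
    "vendor_terms_change": "shift_infrastructure_model",
    "gpu_scarcity": "run_where_capacity_exists",
}

def next_action(trigger: str, default: str = "stay") -> str:
    """Return the pre-agreed response to a trigger; unknown events change nothing."""
    return PLACEMENT_POLICY.get(trigger, default)

print(next_action("cost_spike"))       # repatriate_static_workloads
print(next_action("routine_refresh"))  # stay
```

The point of the sketch is that platform symmetry turns each of these responses into an operational task rather than a re-architecture project.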

Nutanix Hybrid Multi-Cloud Operations

The cloud world has become multipolar. Many organizations are no longer choosing between “on-prem vs cloud”, but among multiple clouds: hyperscalers, European sovereign clouds, vertical-specific clouds, and dedicated regions.

Repatriation used to mean going home. With NC2, it can also mean going sideways:

  • From Azure to a sovereign cloud provider
  • From a hyperscaler to a private cloud built on NCP
  • From one hyperscaler to another when commercial, regulatory, or technical factors shift
  • From cloud to edge
  • From cloud to hosted private infrastructure via a service provider (OVH for example)

In other words, it allows organizations to move workloads to the location that makes sense right now, not the one that made sense during a six-year-old strategy cycle.

Note: NC2 is fundamentally a sovereignty mechanism because it makes long-term commitments reversible.

Operational Relief for Small IT Teams

Every new stack, platform, or cloud demands new knowledge, new operational patterns, new tooling, and new troubleshooting domains. When a team of five suddenly needs to understand the details of AWS, Azure, Nutanix, Kubernetes, storage arrays, hypervisors, and cloud-native services, hybrid cloud becomes an unmanageable landscape.

Even though NC2 is not a managed service, it behaves like a consolidation layer that collapses the operational surface. The team does not need to master the specifics of hyperscaler virtualization models, instance families, cloud-native block storage semantics, or proprietary IAM patterns; instead, they operate the same Nutanix environment everywhere. The public cloud stops being an alien planet with its own physics and becomes an extension of the data center they already know.

For small teams, the value is immense. They no longer split their attention between incompatible worlds. They do not require deep AWS or Azure certifications to run VMs in the cloud, nor do they need a dedicated cloud operations squad. There is no need to maintain multiple monitoring stacks, patching processes, or network topologies. They simply work through Prism, with the same lifecycle management, upgrade workflows, automation, and storage patterns, regardless of where the hardware resides.

In short, efficiency increases as complexity decreases.

Conclusion

Ultimately, NC2 is not just a technical extension of Nutanix into public cloud regions. Think of it as a structural correction to a decade of cloud decisions shaped by lock-in, fragmentation, and asymmetrical dependencies. It gives organizations the right to change their mind without paying a penalty for it. It reduces operational noise instead of amplifying it. It allows teams to stay focused on outcomes rather than infrastructure politics.

 

When “Staying” Becomes a Journey And Why Nutanix Lets You Take Back Control

There are moments in IT where the real disruption is not the change you choose, but the change that quietly happens around you. Many VMware customers find themselves in exactly such a moment. On the surface, everything feels familiar. The same hypervisor, the same vendors, the same vocabulary. But underneath that surface, something more fundamental is shifting: Broadcom’s new licensing and product model has turned VMware’s future into a one-way street, a gradual but unmistakable movement toward VMware Cloud Foundation (VCF).

What makes this moment so tricky is the illusion it creates. Because the names look the same, many organizations convince themselves that staying with VMware means avoiding change. They assume the path ahead is simply the continuation of the path behind. Yet the platform they are moving toward does not behave like the platform they came from. VCF 9 is a different way of running private cloud: a different architecture, a different operational model, and a different set of dependencies and constraints.

Once you see this clearly, the situation becomes easier to understand. Even if you stay with VMware, you are moving. The absence of physical distance does not mean the absence of migration. What changes is not the location of your workloads, but the world those workloads inhabit.

And that world looks much more like a cloud transition than like a traditional upgrade.

This is the first truth enterprises need to accept: it is still a migration.

The Subtle Shift From Upgrade to Replatforming

VCF 9 carries its own gravity. It reshapes how the environment must be designed, how networking is stitched together, how lifecycle management works, how domains are laid out, how automation behaves, and how operations are structured. It forces full-stack adoption, even if your organization only needs part of the stack. And once the platform becomes prescriptive, you must either adopt its assumptions or fight against them.

If this exact level of change were introduced by a hyperscaler, nobody would hesitate to call it a cloud migration. It would come with discovery workshops, architecture reviews, dependency mapping, proof-of-concepts, testing phases, retraining, risk assessments, and new governance. But because the new platform still carries the VMware name, some organizations treat it as a large patch, which it clearly is not.

This is where many stumble. An upgrade assumes continuity. A migration assumes transformation. VCF 9 sits firmly on the transformation side of that spectrum. Treating it as anything less increases risk, cost, and frustration.

In other words, the work is the same work you would do for a cloud move. Only the destination changes.

Complexity You Did Not Ask For

One of the most overlooked consequences of this shift is the gradual increase in complexity. The move to a full-stack VCF world comes with the same architectural side effects you would expect when adopting any complex platform. More components, more integration points, more rules, more interdependencies, more expertise required to keep things stable.

Organizations want simplicity, but this transition delivers the opposite. You pay for it in architecture that becomes harder to evolve, in operations that require more coordination, in outages that take longer to troubleshoot, in people who must maintain increasingly fragile mental maps, and in costs that rise simply because the platform demands it.

And this is where the forced nature of the move becomes visible. You are inheriting complexity because the vendor has decided the portfolio must move in that direction. This is the difference between transformation that serves your strategy and transformation that serves someone else’s.

One Migration You Cannot Avoid, One Migration You Can Choose

At some point, every organization reaches a moment where movement is no longer a matter of preference but of circumstance. The transition to VCF 9 is exactly that kind of moment. Once this becomes clear, the nature of the decision changes. You stop focusing on how to avoid disruption and start asking a more strategic question: If we are investing the time, energy, and attention anyway, where should this effort lead?

VCF 9 is one possible destination. And it may very well be the right choice for some enterprises. But the key is that it should be a choice and not an automatic continuation of the past.

Customers need a model where the effort you invest in migration pays you back in reduced complexity rather than increased dependency.

Nutanix can be an option and a different operating model.

Yes, the interesting truth is that both paths require work. Both involve change. Both need planning, testing, and careful execution. The difference lies in what you get once the work is done. One migration leaves you with a platform that is heavier and more prescriptive than the one you had before. The other leaves you with an environment that is lighter, simpler, and easier to operate.

The Real Choice in a Moment of Unwanted Movement

When change arrives from the outside, it rarely feels fair. It interrupts plans, forces attention onto things you didn’t choose, and demands energy you would rather spend somewhere else. Nobody asked for it. Nobody scheduled it. Yet here it is, reshaping the future architecture of your private cloud, whether you feel ready or not.

A different model of infrastructure can offer a way to use this forced moment of movement to your advantage, to turn a vendor-driven transition into an opportunity to simplify, to regain autonomy, and to design an infrastructure model that supports your next ten years rather than constraining them.

You may not have chosen the timing of this transition. But you can choose the shape of the destination. And in many ways, that is the most meaningful form of control an organization can exercise in a moment where the outside world tries to dictate the path ahead.