Beyond the Price Tag – Why Organizations Choose Nutanix

In many customer conversations today, the discussion about Nutanix starts in a very pragmatic place: price.

Before we get the chance to talk about architecture, automation, or hybrid cloud strategies, most organizations first want to answer a simpler question: Can we even afford this option? Only once that hurdle is cleared does the real conversation begin. That is the moment when customers start asking a different question: Is it worth spending our time on this platform?

And that shift in perspective is important, because the current market situation is very different from just a few years ago.

For more than a decade, the virtualization market followed a relatively stable pattern. Many organizations standardized on a single hypervisor/platform and built their operational models, processes, and skill sets around it. The question was rarely which hypervisor to choose but more about which edition or which bundle to buy. The platform decision itself was largely settled.

That stability is gone.

Since the licensing and pricing changes in the VMware ecosystem in 2024, many organizations have been forced to rethink assumptions that had been in place for years. Renewal discussions suddenly became strategic decisions, and budget forecasts were no longer predictable. In some cases, the cost increases were large enough to trigger board-level attention, and sometimes even political attention.

But price is only one part of the story.

Many customers also question the long-term direction of the platform on which they built their data centers. They are asking whether the vendor’s strategic priorities still align with their own. Looking at consolidation in the industry, reduced product portfolios, and new licensing models, they wonder what all of this means for their own autonomy.

As a result, the conversation has shifted from optimization to re-evaluation.

Instead of fine-tuning an existing environment, many organizations are now exploring a wide spectrum of alternatives: Hyper-V, HPE VM Essentials, Proxmox, Scale Computing, open-source stacks, niche hypervisors, and even container-first approaches. The list is long, and in many cases, the evaluation is driven less by feature comparisons and more by strategic considerations.

What is interesting in these discussions is the level of pragmatism.

Most customers are very clear about one thing: they know that VMware still offers one of the most mature and feature-rich stacks on the market, but they also admit that they do not actually use all of those features. In some environments, large parts of the advanced functionality have been sitting idle for years.

So the goal is no longer to replicate the past environment in every technical detail.

Customers are willing to accept trade-offs. They do not need the most sophisticated dashboards, nor do they need every integration or advanced automation capability. If they can move 80 or 90 percent of their workloads to a new platform, that is already a success. The remaining cases can be handled separately.

This is where a new mindset becomes visible: fail fast, fail forward.

The objective is not to design the perfect architecture on paper. It is to make progress, to reduce dependency, to regain control over costs and strategic direction, and to move to a platform that is predictable, supportable, and aligned with the organization’s own priorities. Even if that means innovation stalls for a short time.

In that context, price becomes the first filter, not the final decision criterion.

If a platform is clearly unaffordable, the conversation ends there. But if the numbers are within reach, customers start to look deeper. They begin to evaluate operational simplicity, architectural consistency, support quality, and long-term flexibility.

That is usually the point where the Nutanix conversation truly starts.

The Perception Problem

For years, a certain sentence has circulated in the market: “Nutanix is expensive”. It became one of those beliefs that many people repeat without necessarily remembering where it originally came from.

In some organizations, this perception is based on very old benchmarks. In others, it comes from comparisons where different functionality levels were evaluated against each other. And in some cases, it is simply a narrative that persisted over time.

Recently, I have revisited this perception through real customer scenarios. Not theoretical models, but practical environments with realistic configurations, conservative assumptions, and sometimes even with standard (pre-approved) discount levels. What I found was not a universal truth, but a context-dependent story.

In several scenarios, Nutanix was not only competitive but significantly cheaper.

Disclaimer: The scenarios shown here are based on realistic configurations, standard architectures, and pre-approved discount levels. They are meant to illustrate typical outcomes, not to serve as official quotes or universally applicable price promises. Actual pricing will always depend on the specific environment, commercial terms, hardware choices, and contractual conditions of each individual customer.

Scenario 1: 500 VDI Users

Assume a VDI environment with 500 users. The infrastructure is built on 2×32-core nodes and designed with an n+2 resilience model. This is a typical production setup, where spare capacity is included so that the environment can tolerate failures without affecting user sessions.

In this configuration, you end up with around 1’152 physical cores that need to be licensed at the platform level. For the baseline comparison, I used this number together with a price of $140 per core. This reflects a very common way the market still thinks about platform costs – total cores multiplied by a unit price. In this baseline, no disaster recovery site is included yet.

With Nutanix, I modeled the environment using the NCI-VDI edition, which is purpose-built for virtual desktop use cases with platforms like Citrix or Omnissa (or Parallels, Dizzion etc.). In this model, I am not licensing 1’152 cores. Instead, I am licensing 500 concurrent users (CCU).

The difference in licensing logic alone already changes the economics of the environment, but there is another aspect that often surprises customers.

There is no additional licensing cost for a disaster recovery site. You can add hosts, refresh hardware, or build a secondary VDI site with the same number of cores, and from a Nutanix licensing perspective, the price remains exactly the same. The licensing is tied to the number of concurrent users, not to the amount of infrastructure standing behind them.

To keep the scenario fully realistic, I calculated three Nutanix options using only pre-approved discounts, meaning price levels that can typically be offered without extraordinary approvals.

  • The first option combined NCI Pro with NCM Starter – Representing a balanced configuration for standard VDI environments.
  • The second option used NCI Ultimate with NCM Starter – For scenarios where additional capabilities such as microsegmentation are required.
  • The third option was the full stack – Combining NCI Ultimate with NCM Ultimate, providing the complete feature set across both infrastructure and management layers.

All three options came out significantly below the core-based baseline, even the highest edition. And then there is the red bar in the comparison chart.

That red bar represents the same platform model as the baseline, but with the price per core increasing from $140 to $200, which is not an unrealistic assumption for a future renewal. The architecture stays the same, the number of cores stays the same, the resilience model stays the same, but only the unit price changes. Staying with the current platform vendor would result in a massive increase in total cost of ownership, without adding a single new capability to the environment.
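The baseline and the red bar are simple arithmetic, and it helps to see the two licensing logics side by side. Below is a minimal sketch using the figures stated above ($140 and $200 per core, 1'152 cores); the per-CCU price is deliberately left as a parameter, since actual NCI-VDI pricing depends on edition and discount level and is not quoted here:

```python
# Illustrative cost model for the 500-user VDI scenario.
# Figures from the text: 2x32-core nodes with n+2 resilience,
# 1,152 cores total (implying 18 such nodes), baseline at $140/core,
# renewal scenario ("red bar") at $200/core.
# The per-CCU price below is a placeholder, NOT an actual list price.

CORES_PER_NODE = 2 * 32
NODES = 18
TOTAL_CORES = CORES_PER_NODE * NODES   # 1,152 cores

def core_based_cost(price_per_core: float) -> float:
    """Traditional platform licensing: every physical core is licensed."""
    return TOTAL_CORES * price_per_core

def ccu_based_cost(users: int, price_per_ccu: float) -> float:
    """NCI-VDI style licensing: tied to concurrent users, not cores.
    Adding a DR site or more hosts does not change this number."""
    return users * price_per_ccu

baseline = core_based_cost(140)   # $161,280
renewal  = core_based_cost(200)   # $230,400 -- the "red bar"
print(f"Baseline ({TOTAL_CORES:,} cores @ $140): ${baseline:,.0f}")
print(f"Renewal  ({TOTAL_CORES:,} cores @ $200): ${renewal:,.0f}")
print(f"Increase from unit price alone: {renewal / baseline - 1:.0%}")
```

The point of the sketch is the structural difference: `core_based_cost` grows with every host added for resilience or DR, while `ccu_based_cost` only moves when the number of concurrent users changes.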

[Chart: Nutanix pricing comparison – NCI VDI]

This scenario is not meant to claim that Nutanix is always cheaper. That would be just another oversimplified narrative. But it does show that Nutanix can be more predictable, more scalable, and economically superior, especially in VDI environments where user-based licensing aligns better with how the platform is actually consumed.

Scenario 2: Microsegmented Data Center

In another environment, the discussion was not about VDI or edge sites, but about security.

The customer had a clear, non-negotiable requirement. They wanted to limit lateral movement inside the network and enforce strict communication policies between workloads. This is becoming increasingly common, especially in regulated industries and public sector environments where zero-trust principles are becoming operational requirements.

In the past, microsegmentation was often tied to premium software bundles. Organizations that needed this capability had little choice but to move into higher-tier licensing models, even if they did not require many of the additional features included in those bundles. The security requirement effectively forced them into a more expensive edition, regardless of their actual needs.

In this scenario, the customer was already using microsegmentation and wanted to retain that capability in the target architecture. The comparison was therefore not between a basic and a premium edition, but between two functionally equivalent setups. Both sides had to include network security features.

To make the comparison more realistic and representative of different customer sizes, three Nutanix options were modeled. All three were based on the NCI Ultimate edition, which includes microsegmentation capabilities, but they reflected different customer profiles and corresponding discount levels.

  • The first option represented a large enterprise environment. In this case, the customer had a high core count and a larger overall deal size, which typically qualifies for higher discount tiers. This option assumed a larger-scale deployment and the kind of commercial conditions that are common in enterprise agreements. It illustrated how the platform behaves economically when deployed at a significant scale.
  • The second option represented a mid-sized environment. Here, the core count and overall deal size were more moderate, leading to medium discount levels. This scenario is often closer to what many regional enterprises, healthcare providers, or mid-sized public sector organizations experience. It provided a balanced view between large enterprise conditions and smaller deployments.
  • The third option reflected a smaller environment, with a lower core count and standard discount levels. This was designed to show what the platform looks like in more typical, smaller-scale deployments, where customers operate under normal commercial conditions without large enterprise agreements.

Across all three options, the architectural assumptions remained consistent. The same security requirements applied, the same functionality was included, and the comparison remained technically equivalent. The only real differences were the scale of the environment and the corresponding commercial terms.

[Chart: Nutanix pricing comparison – NCI Ultimate with microsegmentation]

In each of the three scenarios, the Nutanix configuration remained competitive, and in several cases came out lower in total software cost.

Scenario 3: Distributed Edge Environment

Instead of running a few large clusters in central data centers, some organizations suddenly find themselves operating dozens or even hundreds of small sites. Each location may only host a limited number of virtual machines (VMs), but the number of sites creates a very different licensing footprint.

In this scenario, the customer planned to run around 3’000 virtual machines distributed across roughly 250 edge locations. Each site consisted of only a small number of hosts, designed for local workloads and basic resilience – assume 3 hosts à 32 cores per site = 24’000 cores in total.

In traditional per-core licensing models, these kinds of distributed environments can become expensive very quickly. Even lightly utilized sites still require a certain number of cores to maintain resilience and availability. Multiply that by hundreds of locations, and the software cost grows faster than the actual workload.

Nutanix Cloud Infrastructure – Edge (NCI-Edge) provides a distributed infrastructure platform for small edge deployments. NCI-Edge provides the same capabilities as NCI, combining compute, storage, and networking resources from a cluster of servers into a single logical pool with integrated resiliency, security, performance, and simplified administration. NCI-Edge is limited to a maximum of 25 VMs in a cluster, with each VM being limited to a maximum of 96GB of memory. With NCI-Edge, organizations can efficiently extend the Nutanix platform to remote office/branch office (ROBO) and other edge use cases.
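The footprint described above can be checked in a few lines. This is only an illustration of how per-core licensing scales with site count; the $140-per-core figure is reused from the first scenario as an assumed baseline, not an actual quote:

```python
# Illustrative footprint check for the distributed edge scenario.
# Figures from the text: 250 sites, 3 hosts x 32 cores per site,
# ~3,000 VMs in total; NCI-Edge allows up to 25 VMs per cluster.
SITES = 250
HOSTS_PER_SITE, CORES_PER_HOST = 3, 32
TOTAL_VMS = 3000

cores_per_site = HOSTS_PER_SITE * CORES_PER_HOST   # 96 cores per site
total_cores = SITES * cores_per_site               # 24,000 cores overall
vms_per_site = TOTAL_VMS / SITES                   # ~12 VMs per site

print(f"Cores to license in a per-core model: {total_cores:,}")
print(f"Average VMs per site: {vms_per_site:.0f} (NCI-Edge limit: 25)")

# In a per-core model, cost scales with infrastructure (24,000 cores),
# even though each site only runs about 12 VMs. A model scoped to the
# edge cluster tracks the workload instead of the hardware.
per_core_baseline = total_cores * 140   # assumed $140/core baseline
print(f"Per-core baseline at $140/core: ${per_core_baseline:,}")
```

Note that the average of roughly 12 VMs per site sits comfortably under the 25-VM NCI-Edge cluster limit, which is why the edge edition fits this profile.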

When we modeled this scenario with a Nutanix-based architecture, using conservative assumptions and standard pricing, the outcome was different. The total software cost across all 250 sites was lower than the comparable alternative.

[Chart: Nutanix pricing comparison – NCI Edge]

Edge licensing is all about predictability. The licensing model aligned more closely with the operational reality of the environment. Instead of being penalized for running many small sites, the customer could scale their footprint without unexpected increases in costs. The economics made sense for a distributed architecture.

For organizations with large retail networks, industrial edge scenarios, transportation systems, or geographically spread infrastructures, this predictability can be just as important as the absolute price. It allows them to plan growth, roll out new sites, and standardize operations without constantly renegotiating their licensing model.

Scenario 4: From Amazon EVS to Nutanix NC2

Many organizations that moved, or are planning to move, to VMware environments in the public cloud have a very practical reason. They want to keep their existing operational model, their tools, and their skill sets, while shifting the physical infrastructure into a cloud provider’s (Azure, GCP, AWS) data center. The promise is always continuity without disruption.

At first glance, this approach makes sense. You avoid large migration projects, keep your processes intact, and simply relocate the environment. But the economics of these environments have started to change.

I am currently working with an organization that operates a full-stack private cloud at roughly $150 per core. On paper, that stack includes a wide range of capabilities. In reality, however, they only use a small portion of it: the core virtualization layer and basic monitoring and logging. No vSAN, no NSX. Just vSphere and Aria Operations.

Today, they run around 1’920 physical cores on-premises. As part of their cloud strategy, they are considering migrating to Amazon’s Elastic VMware Service (EVS) to exit their own data centers and align with a cloud-first approach. Because the EVS baremetal instances offer higher density, they expect to consolidate their environment to roughly 1’000 cores. Fewer cores, better utilization, same workloads.

Because Amazon EVS is a self-managed service, you are responsible for the lifecycle management and maintenance of the VMware software used in the Amazon EVS environment, such as ESX, vSphere, vSAN, NSX, and SDDC Manager. 

Note: Amazon EVS does not support VMware Cloud Foundation 9 at this time. Currently, the only supported VCF version is VCF 5.2.2 on i4i.metal instances.

That sounds like a straightforward cost-saving exercise, right? But the renewal dynamics tell a different story. Their Broadcom renewal is scheduled for summer 2027, and two scenarios are being discussed:

  • In the first scenario, a typical price increase of around 33 percent is assumed. That would move them from $150 to approximately $200 per core.
  • In the second scenario, the total contract value remains the same despite the reduced core count. In practical terms, that would mean $288 per core, an increase of about 92 percent compared to today.

In other words, even if they cut their footprint almost in half, their effective price per core could nearly double. This is where the discussion turned toward alternatives.
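Worked through, the two renewal scenarios look like this (figures as stated above; a sketch of the arithmetic, not a quote):

```python
# The two Broadcom renewal scenarios from the text, worked through.
current_cores, current_price = 1920, 150.0
consolidated_cores = 1000        # expected density gain on EVS bare metal

current_spend = current_cores * current_price     # $288,000 today

# Scenario 1: a typical ~33% per-core price increase
price_s1 = current_price * 1.33                   # ~$200/core

# Scenario 2: contract value stays flat despite fewer cores
price_s2 = current_spend / consolidated_cores     # $288/core
increase_s2 = price_s2 / current_price - 1        # +92% vs. today

print(f"Current spend: ${current_spend:,.0f} "
      f"({current_cores} cores @ ${current_price:.0f})")
print(f"Scenario 1 price/core: ~${price_s1:.0f}")
print(f"Scenario 2 price/core: ${price_s2:.0f} ({increase_s2:.0%} vs. today)")
```

The second scenario is the one that surprises people: halving the footprint while the contract value stays flat nearly doubles the effective unit price.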

We modeled the same environment using the Nutanix Cloud Platform (NCP) running as NC2 on AWS. It is important to clarify one common misconception here: NC2 is not a separate product with a different architecture. It is the same Nutanix software stack, NCI combined with NCM, deployed on baremetal instances in the public cloud. Operationally, it behaves exactly like an on-premises Nutanix environment.

[Diagram: NC2 on AWS]

To reflect different functional needs, I modeled three options:

  • The first option was NCI Pro combined with NCM Starter. This configuration mirrors the customer’s current feature usage, avoiding unnecessary capabilities or “shelfware”. It represents a like-for-like replacement of the existing functionality.
  • The second option used NCI Ultimate with NCM Starter. This added more advanced storage and data services, along with microsegmentation capabilities, giving the customer a richer feature set than they have today.
  • The third option was the full Nutanix Cloud Platform Ultimate stack, including the complete set of infrastructure, automation, and advanced platform services.

Even with these different configurations, the results were consistent. All three Nutanix options came in significantly below the expected VMware renewal costs.

Compared to a VMware renewal at $200 per core, the estimated savings looked roughly as follows:

  • NCI Pro + NCM Starter: About 33 percent lower
  • NCI Ultimate + NCM Starter: About 18 percent lower
  • NCP Ultimate: About 24 percent lower (higher discount for full-stack approach)

If the worst-case scenario of $288 per core were to materialize, the savings would be even higher, ranging from approximately 43 to 54 percent per year!
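That worst-case range follows directly from the savings quoted against the $200 renewal. A small check, under the simplifying assumption that each Nutanix option can be expressed as an equivalent price per core:

```python
# Check: translate the savings vs. a $200/core renewal into savings
# vs. the $288/core worst case. Assumes each option maps to an
# equivalent per-core cost (an illustration, not a quote).
renewal, worst_case = 200.0, 288.0
savings_vs_renewal = {
    "NCI Pro + NCM Starter": 0.33,
    "NCI Ultimate + NCM Starter": 0.18,
    "NCP Ultimate": 0.24,
}

for option, s in savings_vs_renewal.items():
    nutanix_equiv = renewal * (1 - s)        # implied $/core
    s_worst = 1 - nutanix_equiv / worst_case
    print(f"{option}: {s_worst:.0%} lower vs. $288/core")
# Results land between roughly 43% and 53%, consistent with the
# quoted range of approximately 43 to 54 percent.
```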

[Chart: Nutanix pricing comparison – EVS to NC2]

As in the other scenarios, the interesting part was not just the price difference. It was the combination of cost predictability and architectural flexibility. With NC2, the customer could run the same platform on-premises and in the cloud, move workloads between locations, and avoid being tied to a single proprietary cloud virtualization stack.

To support the transition from VMware to Nutanix on NC2, migrations are typically handled with Nutanix Move. This tool allows customers to replicate and migrate virtual machines from existing VMware environments into Nutanix clusters with minimal disruption, reducing the complexity of the platform shift.

In this scenario, the outcome once again challenged the old perception. When modeled with realistic assumptions and current pricing dynamics, Nutanix was very (cost-)competitive. It offered both a lower platform cost and a more flexible long-term architecture.

Scenario 5: Updated Benchmarks, Different Results

Perhaps one of the most revealing examples was not a technical scenario at all, but a simple conversation.

In one engagement, a partner mentioned that their internal Nutanix benchmark was more than two years old. Those numbers had shaped their perception of the platform and influenced how they positioned Nutanix in front of customers. Over time, the benchmark had become an accepted reference point, even though no one had revisited the assumptions underlying it.

When we recalculated the scenario (VCF vs. NCI Pro with the Advanced Replication add-on) using current licensing models, realistic configurations, and today’s pricing structures, the outcome was very different from what they expected: the Nutanix solution turned out to be cheaper.

The important insight here was not the percentage difference or the exact numbers on the spreadsheet. It was the realization that the entire perception had been built on outdated data. The conclusion they had carried forward for years no longer reflected the reality of the current market.

This experience is not unique. Many organizations still rely on benchmarks, cost models, or architectural assumptions that were created several years ago. Since then, licensing structures have evolved, bundles have changed, and the economics of different platforms have shifted. But the original perception often remains untouched.

In conversations with customers and partners, I frequently hear a similar sentence: “Our Nutanix benchmark might be outdated”. That simple realization often marks the turning point in the discussion. Because once the numbers are recalculated with current data, the story tends to change and the outcome is no longer predetermined. 

Addressing the Renewal Myth

Another concern that often surfaces in conversations is the idea that Nutanix offers an attractive entry price, only to significantly increase costs at renewal time.

This narrative circulates in online forums, informal discussions, and peer-to-peer exchanges. In a market where many organizations have recently experienced unexpected price increases from other vendors, it is understandable that customers approach any new platform with a certain level of skepticism. Trust in licensing models has been shaken, and nobody wants to repeat the same experience a few years down the road.

But in practice, this perception does not reflect how most Nutanix engagements actually unfold. In many cases, Nutanix is able to provide multi-year price guarantees, giving customers clarity not only about the initial investment, but also about what they can expect over the next several years. Instead of treating pricing as a short-term negotiation, the conversation often shifts toward long-term planning and predictability.

This does not mean that prices will remain frozen forever. No software vendor can realistically promise that. Over time, platforms evolve, new features are introduced, innovation continues, and inflation affects the cost structure. It is normal for software pricing to adjust over a multi-year horizon.

The difference lies in transparency.

Rather than hiding future changes behind complex contracts or vague terms, Nutanix is often willing to put the long-term numbers on the table early in the process. Customers can see not only what they pay today, but also how the platform is expected to evolve financially over time. That creates a different kind of conversation – one based on planning and predictability instead of uncertainty.

For many organizations, especially those in regulated industries or the public sector, that predictability is more important than the absolute entry price. It allows them to align budgets, procurement cycles, and strategic roadmaps without the fear of sudden surprises at renewal time.

What Customers Actually Value

Once the initial price discussion is out of the way, the tone of the conversation usually changes. The focus shifts from raw numbers to what the platform actually delivers in day-to-day operations.

At this stage, customers are asking whether it fits their architecture, their processes, and their long-term strategy. And across many conversations, certain themes tend to appear again and again.

One of the most frequently mentioned aspects is the modularity of the platform. Customers appreciate that Nutanix does not force them into a single, monolithic bundle for every use case. A large data center, a VDI environment, and a small edge site may not require the same software edition. With Nutanix, these environments can be licensed differently based on their actual requirements. This flexibility allows customers to align their licensing model with their architecture, instead of reshaping their architecture to fit the licensing.

Another recurring theme is the architectural simplicity of hyperconverged infrastructure itself. Many customers value a distributed system that integrates compute and storage, builds resilience into the platform, and reduces external dependencies. There is no separate SAN to manage, no complex compatibility matrix between multiple storage and compute components. For teams that want to reduce operational overhead and complexity, this design principle often resonates more strongly than any individual feature.

Support quality is another topic that comes up regularly. Nutanix consistently achieves a Net Promoter Score (NPS) above 90, which is unusually high in the enterprise infrastructure space. Customers often describe the support experience as direct and focused, with engineers who stay engaged until the issue is resolved. For organizations that have struggled with multi-vendor support models in the past, this can be a significant improvement.

The ecosystem also plays an important role. Nutanix continues to work closely with major OEM partners such as Dell, Lenovo, HPE, and Cisco. For many customers, especially in the public sector, this is more than a technical detail. It means they can procure hardware through existing framework contracts, trusted suppliers, and established procurement channels, while still running a modern, consistent software platform.

In addition, the platform is gradually opening up to more flexible architectures. Nutanix has introduced support for external storage integrations, starting with platforms from Dell and Pure Storage, with further options expected over time. This gives customers more freedom in how they design their environments, especially if they want to reuse existing storage investments or follow a disaggregated approach for certain workloads.

Taken together, these themes paint a clear picture. Once the price question is answered, the decision is rarely about a single feature or a benchmark number. It becomes a broader evaluation of architecture, operational simplicity, support experience, and long-term flexibility.

And in many of those discussions, that combination of qualities is what makes the platform stand out.

Price Opens the Door. Value Closes the Deal.

If you look across all the scenarios and customer discussions, a consistent pattern begins to emerge.

Price is almost always the starting point. It determines whether a platform even makes it onto the shortlist. In today’s market, where many organizations are under pressure to control costs and justify every investment, that first filter has become more important than ever. If a solution is clearly out of budget, the conversation usually ends before it truly begins.

But we all know that price is rarely the final decision factor.

Once customers see that Nutanix is within their financial reach, or in some cases even cheaper than the alternatives, the focus shifts. The discussion shifts from license metrics and discount levels to the day-to-day realities of running the platform. This is the moment when the conversation moves from procurement to platform strategy.

Customers begin to consider how much time they spend on upgrades, how complex their current environment has become, how many vendors they have to coordinate during incidents, and how predictable their infrastructure roadmap really is. They start to evaluate not just what the platform costs today, but what it means for their operations over the next five or ten years.

And that is often where Nutanix stands out!

The platform may not always be the absolute cheapest option in every possible scenario. No serious technology decision should be based on a single number alone. But the blanket statement that Nutanix is inherently expensive does not hold up when you look at real environments with current data. 

Cloud Repatriation and the Growth Paradox of Public Cloud IaaS

Over the past two years, a new narrative has taken hold in the cloud market. No, it is not always about sovereign cloud. 🙂 Headlines talk about cloud repatriation – nothing really new, but it is still out there. CIOs speak openly about pulling some workloads back on-premises. Analysts write about organizations “correcting” some earlier cloud decisions to optimize cloud spend. In parallel, hyperscalers themselves now acknowledge that not every workload belongs in the public cloud.

And yet, when you look at the data, you will find a paradox.

IDC and Gartner both project strong, sustained growth in public cloud IaaS spending over the next five years. Not marginal growth or a sign of stagnation, but a market that continues to expand at scale, absorbing more workloads, more budgets, and more strategic relevance every year.

At first glance, these two trends appear contradictory. If organizations are repatriating workloads, why does public cloud IaaS continue to grow so aggressively? The answer lies in understanding what is actually being repatriated, what continues to move to the cloud, and how infrastructure constraints are reshaping decision-making in ways that are often misunderstood.

Cloud Repatriation Is Real, but Narrower Than the Narrative Suggests

Cloud repatriation is not a myth. It is happening, but it is also frequently misinterpreted.

Most repatriation initiatives are highly selective. They focus on predictable, steady-state workloads that were lifted into the public cloud under assumptions that no longer hold. Cost transparency has improved, egress fees are better understood, and operating models have matured. What once looked flexible and elastic is now seen as expensive and operationally inflexible for certain classes of workloads.

What is rarely discussed is that repatriation does not mean “leaving the cloud”. It means rebalancing: organizations are not abandoning public cloud IaaS as a concept. They are simply refining their usage of it.

At the same time, some new workloads continue to flow into public cloud environments. Digital-native applications, analytics platforms, some AI pipelines, globally distributed services, and short-lived experimental environments still align extremely well with public cloud economics and operating models. These workloads were not part of the original repatriation debate, and they seem to be growing faster than traditional workloads are being pulled back.

This is how both statements can be true at the same time. Cloud repatriation exists, and public cloud IaaS continues to grow.

The Structural Drivers Behind Continued IaaS Growth

Public cloud IaaS growth is not driven by blind enthusiasm anymore. It is driven by structural forces that have little to do with fashion and everything to do with constraints.

One of the most underestimated factors is time. Building infrastructure takes time, procuring hardware takes time, and scaling data centers takes time. Many organizations today are not choosing public cloud because it is cheaper or “better”, but because it is available now.

This becomes even more apparent when looking at the hardware market right now.

Hardware Shortages and Rising Server Prices Change the Equation

The infrastructure layer beneath private clouds has suddenly become a bottleneck. Server lead times have increased, GPU availability is constrained and prices for enterprise-grade hardware continue to rise, driven by supply chain pressures, higher component costs, and growing demand from AI workloads.

For organizations running large environments, this introduces a new type of risk. Capacity planning is now a logistical problem, no longer just a financial exercise. Even when budgets are approved, hardware may not arrive in time. That is the new reality.

In this context, public cloud data centers represent something extremely valuable: pre-existing capacity. Hyperscalers have already made the capital investments and they already operate at scale. From the customer perspective, infrastructure suddenly looks abundant again.

This is why many organizations are currently considering shifting workloads to public cloud IaaS, even if they were previously skeptical. It has become a pragmatic response to scarcity.

The Flawed Assumption: “Just Use Public Cloud Instead of Buying Servers”

However, this line of thinking often glosses over a critical distinction.

If we are being honest, many of these organizations do not actually want “cloud-native” infrastructure. What they want is physical capacity: compute, storage, and networking with predictable performance characteristics. In other words, they want VMs and bare metal.

Buying servers allows organizations to retain architectural freedom. It allows them to choose their operating system or virtualization stack, their security model, their automation tooling, and their lifecycle strategy. Public cloud IaaS, by contrast, delivers abstraction, but at the cost of dependency.

When organizations consume IaaS services from hyperscalers, they implicitly accept constraints around instance types, networking semantics, storage behavior, and pricing models. Over time, this shapes application architectures and operational processes. Using such services quietly becomes a form of lock-in.

Bare Metal in the Public Cloud Is Not a Contradiction

Interestingly, the industry has started to converge on a hybrid answer to this dilemma: bare metal in the public cloud.

Hyperscalers themselves offer bare-metal services. This is an acknowledgment that not all customers want fully abstracted IaaS. Some want physical control without owning physical assets. It is as simple as that.

But bare metal alone is not enough. Without a consistent cloud platform on top, bare-metal in the public cloud becomes just another silo. You gain performance and isolation, but you lose portability and operational consistency.

Nutanix Cloud Clusters and the Reframing of IaaS

Nutanix Cloud Platform running on AWS, Azure, and Google Cloud through NC2 (Nutanix Cloud Clusters) introduces a different interpretation of public cloud IaaS.

Instead of consuming hyperscaler-native IaaS primitives, customers deploy a full private cloud stack on bare-metal instances in public cloud data centers. From an architectural perspective, this is a subtle but profound difference.

Customers still benefit from the hyperscaler’s global footprint and hardware availability, and they still avoid long procurement cycles, but they do not surrender control of their cloud operating model. The same Nutanix stack runs on-premises and in public cloud, with the same APIs, the same tooling, and the same governance constructs.

Workload Mobility as the Missing Dimension

The most underappreciated benefit of this approach is workload mobility.

In a cloud-native bare-metal deployment tied directly to hyperscaler services, workloads tend to become anchored, migration becomes complex, and exit strategies are theoretical at best.

With NC2, workloads are portable by design. Virtual machines and applications can move between on-premises environments and public cloud (or a service provider cloud) bare-metal clusters without refactoring. In practical terms, this means organizations can use public cloud capacity tactically rather than strategically committing to it. Capacity shortages, temporary demand spikes, regional requirements, or regulatory constraints can be addressed without redefining the entire infrastructure strategy.

This is something traditional IaaS does not offer, and something pure bare-metal consumption does not solve on its own.

Reconciling the Two Trends

When viewed through this lens, the contradiction between cloud repatriation and public cloud IaaS growth disappears.

Public cloud is growing because it solves real problems: availability, scale, and speed. Repatriation is happening because not all problems require abstraction, and not all workloads benefit from cloud-native constraints.

The future is not a reversal of cloud adoption. It is a maturation of it.

Organizations are asking how to use public clouds without losing control. Platforms that allow them to consume cloud capacity while preserving architectural independence are not an alternative to IaaS growth; they are one of the reasons that growth can continue without triggering the next wave of regret-driven repatriation.

What complicates this picture further is that even where public cloud continues to grow, many of its original economic promises are now being questioned again.

The Broken Promise of Economies of Scale

One of the foundational assumptions behind public cloud adoption was economies of scale. The logic seemed sound. Hyperscalers operate at a scale no enterprise could ever match. Massive data centers, global procurement power, highly automated operations. All of this was expected to translate into continuously declining unit costs, or at least stable pricing over time.

As we know by now, that assumption has not materialized.

If economies of scale were truly flowing through to customers, we would not be witnessing repeated price increases across compute, storage, networking, and ancillary services. We would not see new pricing tiers, revised licensing constructs, or more aggressive monetization of previously “included” capabilities. The reality is that public cloud pricing has moved in one direction for many workloads, and that direction is up.

This does not mean hyperscalers are acting irrationally. It means the original narrative was incomplete. Yes, scale does reduce certain costs, but it also introduces new ones, and so do new services and innovations. Energy prices, land, specialized hardware, regulatory compliance, security investments, and the operational complexity of running globally distributed platforms all scale accordingly. Add margin expectations from capital markets, and the result is not a race to the bottom, but disciplined price optimization.

For customers, however, this creates a growing disconnect between expectation and reality.

When Forecasts Miss Reality

More than half of organizations report that their public cloud spending diverges significantly from what they initially planned. In many cases, the difference is not marginal. Budgets are exceeded, cost models fail to reflect real usage patterns, and optimization efforts lag behind application growth.

What is often overlooked is the second-order effect of this divergence. Over a third of organizations report that cloud-related cost and complexity issues directly contribute to delayed projects. Migration timelines slip, modernization initiatives stall, and teams slow down not because technology is unavailable, but because financial and operational uncertainty creeps into every decision.

Commitments, Consumption, and a Structural Risk

Most large organizations do not consume public cloud on a purely on-demand basis. They negotiate commitments, reserved capacity, and spend-based discounts. These are strategic agreements designed to lower unit costs in exchange for predictable consumption.

These agreements assume one thing above all else: that workloads will move. They HAVE TO move.

When migrations slow down, a new risk emerges. Organizations fail to reach their committed consumption levels because they cannot move workloads fast enough. Legacy architectures, migration complexity, skill shortages, and governance friction all play a role.

The consequence is subtle but severe. Committed spend still has to be paid, and future negotiations become weaker as a result. The organization enters the next contract cycle with a track record of underconsumption, reduced leverage, and less credibility in forecasting.

In effect, execution risk turns into commercial risk.

This dynamic is rarely discussed publicly, but it is increasingly common in private conversations with CIOs and cloud leaders. The challenge is no longer whether the public cloud can scale, but whether the organization can.

Speed of Migration as an Economic Variable

At this point, migration speed stops being a technical metric and becomes an economic one. The faster workloads can move, the faster negotiated consumption levels can be reached. The slower they move, the more value leaks out of cloud agreements.
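To make this concrete, here is a toy model of that dynamic. All numbers are hypothetical and not taken from any real agreement; the function name and parameters are illustrative only.

```python
# Illustrative sketch (hypothetical numbers): how migration speed turns
# into commercial shortfall against a committed-spend cloud agreement.

def committed_spend_shortfall(commit_per_year, monthly_spend_per_workload,
                              workloads_migrated_per_month, months=12):
    """Return (realized_consumption, shortfall) over one contract year.

    Assumes each migrated workload starts consuming in the month it lands
    and keeps consuming for the remainder of the year.
    """
    realized = 0.0
    migrated = 0
    for month in range(months):
        migrated += workloads_migrated_per_month
        realized += migrated * monthly_spend_per_workload
    shortfall = max(0.0, commit_per_year - realized)
    return realized, shortfall

# The plan assumed 20 workloads migrated per month; reality delivers 10.
planned = committed_spend_shortfall(1_200_000, 1_000, 20)
actual = committed_spend_shortfall(1_200_000, 1_000, 10)
print(planned)  # the planned migration pace meets the commitment
print(actual)   # half the pace leaves part of the commitment unconsumed
```

In this sketch, 20 workloads per month comfortably exceeds the commitment, while 10 per month leaves roughly a third of the committed spend unconsumed, which is exactly the value leakage described above.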

This is where many cloud-native migration approaches struggle. Refactoring takes time and re-architecting applications is expensive. Not every workload is a candidate for transformation under real-world constraints.

As a result, organizations are caught between two pressures. On one side, the need to consume public cloud capacity they have already paid for. On the other, the inability to move workloads quickly without introducing unacceptable risk.

NC2 as a Consumption Accelerator, Not a Shortcut

This is where Nutanix Cloud Platform with NC2 changes the conversation.

By allowing organizations to run the same private cloud stack on bare metal in AWS, Azure, and Google Cloud, NC2 removes one of the biggest bottlenecks in migration programs: the need to change how workloads are built and operated before they can move.

Workloads can be migrated as they are, operating models remain consistent, governance does not have to be reinvented, and teams do not need to learn a new infrastructure paradigm under time pressure. It’s all about efficiency and speed.

Faster migrations mean workloads start consuming public cloud capacity earlier and the negotiated consumption targets suddenly become achievable. Commitments turn into realized value rather than sunk cost, and the organization regains control over both its migration timeline and its commercial position.

Reframing the Role of Public Cloud

In this context, NC2 is not an alternative to public cloud economics, but a mechanism to actually realize them.

Public cloud providers assume customers can move fast. In reality, many customers cannot, not because they resist change, but because change takes time. Platforms that reduce friction between private and public environments do not undermine cloud strategies; they stabilize them.

The uncomfortable truth is that economies of scale alone do not guarantee better outcomes for customers; execution does. And execution, in large enterprises, depends less on ideal architectures and more on pragmatic paths that respect existing realities.

When those paths exist, public cloud growth and cloud repatriation stop being opposing forces. They become two sides of the same maturation process, one that rewards platforms designed not just for scale, but for transition.

Multi-cloud is normal in public cloud. Why is “single-cloud” still normal in private cloud?


If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management and a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organizations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organizations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervision, for example, treats cloud as an outsourcing risk and expects institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons. Jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors; they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor’s strategic direction. What does it mean for a nation when 80 or 90 percent of its public sector depends on one single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are being evaluated again where simplicity and cost control dominate. Proxmox is being discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is being re-examined where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Open source gives you freedom. Nutanix makes that freedom actually usable.


Every organization that wants to modernize its infrastructure eventually arrives at the same question: How open should my cloud be? Not open as in “free and uncontrolled”, but open as in transparent, portable, verifiable. Open as in “I want to reduce my dependencies, regain autonomy and shape my architecture based on principles”.

What most CIOs and architects have realized over time is that sovereignty and openness are not separate ideas. They depend on each other. And this is where Nutanix has become one of the most interesting players in the market. Because while many vendors talk about optionality, Nutanix has built a platform that is assembled from open-source building blocks: curated, hardened, automated, and delivered as a consistent experience.

It’s a structured open-source universe, integrated from day one and continuously maintained at enterprise quality.

In other words, Nutanix operationalizes open source, turning it into something teams can deploy, trust and scale without drowning in complexity.

Operationalizing Open Source

Every architect knows that adopting open source at scale is not trivial. The problem is not the software. The problem is the operational burden:

  • Which projects are stable?
  • Which versions are interoperable?
  • Who patches them?
  • Who maintains the lifecycle?
  • How do you standardize the cluster experience across sites, regions, and teams?
  • How do you avoid configuration drift?
  • How do you keep performance predictable?

Nutanix solves this by curating the stack, integrating the components and automating the entire lifecycle. Nutanix Kubernetes Platform (NKP) is basically a “sovereignty accelerator”. It enables organizations to adopt a fully open ecosystem while maintaining the reliability and simplicity that enterprises require.

A Platform Built on Upstream Open Source

What often gets overlooked in the cloud-native conversation is that open source is not a single entity. There is upstream open source, which can be seen as the pure, community-driven version. And then there are vendor-modified forks, custom APIs, and platforms that quietly redirect you into proprietary interfaces the moment you start building something serious.

Nutanix took a very different path. NKP is built on pure upstream open-source components. Not repackaged, not modified into proprietary variants, not wrapped in a “special” vendor API that locks you in. The APIs exposed to the user are the same APIs used everywhere in the CNCF community.

This matters more than most people realize.

Because the moment a vendor alters an API, you lose portability. And the moment you lose portability, you lose sovereignty.

One of the strongest signals that Nutanix prioritizes sovereignty is its commitment to Cluster API (CAPI). This is what gives NKP deployments the portability many vendors can only talk about.

Nutanix Cluster API

With CAPI, the cluster lifecycle (creation, upgrade, scaling, deletion) is handled through a common, open standard that works:

  • on-premises and bare metal
  • on Nutanix
  • on AWS, Azure, or GCP
  • in public or sovereign cloud regions
  • at the edge

CAPI means your clusters are not married to your infrastructure vendor.
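As an illustration of that portability, here is a minimal sketch of a CAPI cluster declaration. Names and field values are hypothetical; the structure follows the upstream Cluster API conventions and is not taken from any specific NKP deployment.

```yaml
# Illustrative Cluster API manifest (hypothetical names and values).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    # Swapping this reference (e.g. NutanixCluster, AWSCluster, AzureCluster)
    # retargets the cluster to a different infrastructure provider;
    # the declarative lifecycle workflow stays identical.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: NutanixCluster
    name: demo-cluster
```

The point is that only the infrastructure reference is provider-specific; creation, upgrade, scaling, and deletion are driven through the same open API everywhere.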

Nutanix Entered the Gartner MQ for Container Management 2025

Every Gartner Magic Quadrant tells a story. Not just about vendors, but about the direction a market is moving. And the 2025 Magic Quadrant for Container Management is particularly revealing. Not only because Nutanix appears in it for the first time, but because of where Nutanix is positioned and what that position says about the future of cloud-native platforms.

Nutanix made its debut as a Challenger, a rare achievement for a first-time entrant. More importantly, Nutanix is positioned above Broadcom (VMware) on both axes:

  • Ability to execute
  • Completeness of vision

Gartner Magic Quadrant for Container Management June 2025

2025 marks a new landscape – Broadcom fell out of the Leaders quadrant entirely and now lags behind Nutanix in both execution and vision. This reflects a broader transition in customer expectations.

Organizations want portability, sovereign deployment models, and platforms that behave like products rather than collections of components. Nutanix delivered exactly that with NKP and is being recognized for it.

When Openness Becomes Strategy, Sovereignty Becomes Reality

If you step back and look at all the signals, from the rise of sovereign cloud requirements to the changes reflected in Gartner’s latest Magic Quadrant, a clear pattern emerges. The market is moving away from closed ecosystems, inflexible stacks and proprietary abstractions.

Vision today is no longer defined by how many features you can stack on top of Kubernetes. Vision is defined by how well you can make Kubernetes usable, secure, portable and sovereign. In the data center, at the edge, in public clouds, or in fully disconnected/air-gapped environments.

VMware by Broadcom – The Standard of Independence Has Become a Structure of Dependency


There comes a point in every IT strategy where doing nothing becomes the most expensive choice. Many VMware by Broadcom customers know this moment well; they sense that Broadcom’s direction isn’t theirs, but still hesitate to move. The truth is, the real risk isn’t in changing platforms but in waiting too long to reclaim control.

I have worked with VMware products for more than 15 years and even spent part of my career as a VMware solution engineer before Broadcom acquired this company. A company that once had a wonderful culture. A culture that, sadly, no longer exists. Many of my former colleagues no longer trust their leadership. What does this mean for you?

We know that VMware environments are mature, battle-tested, and deeply embedded into how enterprises operate. And that’s exactly the problem. Over the years, VMware became more than a platform. It became the language of enterprise IT with vSphere for compute, vSAN for storage, NSX for networking. It’s how we learned to think about infrastructure. That’s the vision of VMware Cloud Foundation (VCF) and the software-defined data center (SDDC).

Fast forward, even when customers are frustrated by cost increases, licensing restrictions, or shifting support models, they rarely act. Why? Because it feels safer to tolerate pain than to invite uncertainty. But stability is often just an illusion. What feels familiar isn’t necessarily secure.

The Forced Migration Nobody Talks About

The irony is that many customers who think they are avoiding change are actually facing one. Just not by choice. Broadcom’s current direction points toward a future where customers can only consume VMware Cloud Foundation (VCF) as a unified, integrated stack. Which, in general, is a good thing, isn’t it?

As a result, you no longer decide which components you actually need. Even if you only use vSphere, vSAN, and Aria Operations today, you will be licensed for the full stack and forced to deploy it, including NSX and VCF Operations/Automation, whether you need them or not. While that’s still speculation, everything Hock Tan says points in this direction. And many analysts see it the same way.

Broadcom reached VMware’s goal: VCF has become the flagship product, but by force rather than by customer choice. Broadcom applies leverage through discounting that makes VCF the “right” and only choice for customers, even if they don’t want to adopt the full stack.

Paths to VCF 9

What does this mean for your future? In practice, it’s a structural migration disguised as continuity and not just a commercial shift. Moving from a traditional vSphere or HCI-based setup to VCF comes with the same side effects, changes, and costs you would face when adopting a new platform (Nutanix, Red Hat, Azure Local etc.).

Think about it: If you must migrate anyway, why not move toward more control, not less?

Features, Not Products

Broadcom has been clear about its long-term vision. The company now describes VMware Cloud Foundation as its only product name and sees it as the operating system for the data center. That is a great message, but Broadcom wants VMware to operate like Azure, where you don’t “buy” networking or storage. You consume them as built-in features of the platform.

Once this model is fully implemented, you won’t purchase vSphere or NSX. You’ll subscribe to VCF, and those technologies will simply be features. The Aria Suite has already disappeared from the portfolio (for example, Aria Operations became VCF Operations). Eventually, everything except the name VMware Cloud Foundation will vanish.

It’s a clever move for Broadcom, but a dangerous one for customers. Yes, I am looking at you. Because when every capability becomes part of a single subscription, the flexibility to choose or not to use disappears. This means your infrastructure, once hybrid and modular, is now a monolith. Imagine the lock-in of any hyperscaler, but on-premises. That’s the new VMware.

The True Cost of Change

Let’s be honest, migrations are not easy. They require time, expertise, and courage. Yes, courage as well. But the cost of change is not the real problem. The cost of inaction is.

When organizations stay on platforms that no longer align with their strategy, they pay with flexibility, not just money. Every renewal locks in another year, or several, of dependency. Every delay pushes innovation further out of reach. And with Broadcom’s model, the risk isn’t just financial: control over your architecture, your upgrade cadence, your integrations, and even your licensing terms slowly moves away from you, faster than you may think.

VCF SPD November 2025

Broadcom’s new compliance mechanisms amplify that dependency. According to the November 2025 VCF Specific Program Documentation, customers must upload a verified compliance report every 180 days. Failing to do so allows Broadcom to degrade or block management-plane functionality and suspend support entitlements. What was once a perpetual license has become an always-connected control loop: a system that continuously validates, monitors, and enforces usage from the outside. Is that acceptable for a “sovereign” cloud, or for you as the operator?

As Hock Tan, Broadcom’s President and CEO, shared in the general session at VMware Explore Barcelona: European customers want control over their data and processes.

You don’t notice it day by day. But five years later, you realize: Your data center doesn’t belong to you anymore.

Why Change Feels Bigger Than It Is

Still, change is often perceived as a massive technical disruption. In reality, it’s usually a series of small, manageable steps. Modern infrastructure platforms have evolved to make transitions far less painful than before. Today, you can migrate workloads gradually, reuse existing automation scripts, and maintain uptime while transforming the foundation beneath.

What used to be a twelve-month migration project can now be done in phases, with full visibility and reversible checkpoints. The idea is not to replace everything. It’s to regain control, layer by layer.

Freedom as a Strategy

Freedom should be a design principle. It means having a platform that lets you choose, and it also means being able to decide when to upgrade, how to scale, and where your data lives, without waiting for a vendor’s permission.

This is why I joined Nutanix. They don’t force you into a proprietary stack. They abstract complexity instead of hiding it. They allow you to run what you need, and only what you need, whether that’s virtualization, containers, or a mix of both. Yep, and you can also provide DBaaS (NDB) or a private AI platform (NAI).

I’m not telling you to abandon what you know. Take a breath and think about what’s possible when choice returns.

For years, VMware has been the familiar home of enterprise IT. But homes can become cages when you are no longer allowed to move the furniture. The market is moving toward platforms that combine the comfort of virtualization with the agility of cloud, without the loss of control.

This shift is already happening. Many organizations start small – with their disaster recovery site, their dev/test environment, or their EUC workloads. Once the first step is done, confidence grows. They realize that freedom doesn’t come from ripping everything out. It comes from taking back control, one decision at a time.

A Quiet Revolution

The next chapter of enterprise infrastructure will not be written by those who cling to the past, but by those who dare to redesign their foundations. Not because they want to change, but because they must, to stay agile, compliant, and sovereign in a world where autonomy is everything.

The legal fine print makes it clear. What Broadcom calls modernization is, in fact, a redesign of control. And control rarely moves back to the customer once it’s gone.

The question is no longer “Can we afford to change?”

It should be “Can we afford not to?” Can YOU afford not to?

And maybe that’s where your next journey begins. Not with fear, but with the quiet confidence that the time to regain control has finally arrived.

Why I Left Oracle and Joined Nutanix

There are moments in a career when you stop and realize that the path beneath your feet is no longer the path you set out to walk. Sometimes the change is subtle, almost invisible; other times it becomes impossible to ignore. For me, that moment arrived somewhere between large public sector strategy discussions, another round of organizational changes, and one more conversation about “global priorities” that had little connection to the needs of Swiss or European sovereign infrastructure.

I spent a meaningful year at Oracle. I met great people and learned what it means to bring a (dedicated) hyperscale cloud into regulated environments. OCI Dedicated Region is still one of the most interesting and ambitious engineering efforts in the cloud industry. But at some point, I realized that my personal mission of digital sovereignty, open choice, and the empowerment of customers started to diverge from where I felt the company was going.

Not wrong. Not bad. Just different. And that difference grew large enough that it became impossible to pretend we were still walking in the same direction.

Sovereignty has always been my north star

Years before Oracle, long before the idea of sovereign clouds became a political agenda, I cared about the question of who controls technology. My time at VMware shaped that perspective deeply: private cloud, infrastructure independence, and the ability for organizations to define their own architecture rather than renting someone else’s world.

Even during my time at Oracle, I continued to view everything through that sovereignty lens. Dedicated Region was my way of reconciling public cloud innovation with local control, which is a compelling proposition in many cases. But it became increasingly clear to me that the broader industry narrative was drifting toward full-stack centralization. Clouds wanted to become operating systems. Platforms wanted to become monopolies. The idea that customers deserve autonomy was becoming a footnote.

At some point, you have to ask yourself: Are you still aligned with the direction of travel, or are you just trying to keep up even though you know you want something else?

Realizing that it was time to step off the path

There was no single moment that triggered my decision to leave. It was more a slow accumulation of signals. My conversations increasingly shifted from “how do we empower customers?” to “how do we position the stack?”. The freedom and creativity I had in the early days of promoting sovereign cloud initiatives narrowed over time. And internally, I caught myself spending more energy explaining why sovereignty matters than building solutions around it.

If your work becomes a negotiation with your own values, you eventually reach a point where you must choose. Stay and adapt, or step forward and realign.

I chose alignment.

Why private cloud again?

When you think deeply about sovereignty, you eventually come to the simple truth that sovereignty does not happen by accident. It is not a checkbox, a certificate, or a location of a data center. Sovereignty is an architectural stance. A design choice. A commitment to decentralization, reversibility, and customer control.

And that is where private cloud becomes relevant again as the foundation for a new era of controlled autonomy.

The more the world embraces hyperscale convenience, the more valuable real control becomes. The more cloud platforms abstract everything away, the more important it becomes to own the layers that matter. The more AI, data, and national infrastructure rely on cloud services, the more essential locally governed, locally designed, locally operable environments become.

Private cloud, done right, is a rebalancing of power.

Why Nutanix was the logical next chapter

If you want to work on digital sovereignty in a way that is meaningful, credible, and technically grounded, there are only a handful of companies where that mission is more than a marketing line. Nutanix is one of them and arguably the most aligned with the idea of customer freedom.

Nutanix sits in a unique space. It is an infrastructure platform that modernizes private cloud while keeping openness at the center. It doesn’t force customers into a predefined world; instead, it creates the foundation upon which customers can build their own.

Choice becomes real again. Migration paths become optional rather than forced. Hybrid and multi-cloud become strategies instead of slogans. And customers regain something that hyperscale economics has quietly eroded for years. Yep, the right to decide their own future.

What I found at Nutanix is a philosophy that echoes my own. Technology should not dictate. It should enable. It should adapt to the customer, not the other way around. It should enhance sovereignty rather than dilute it behind yet another managed layer. And it should make modernization possible without making independence impossible.

Stepping into a mission, not just a new job

Leaving Oracle was not an escape. It was a conscious return to the principles that have guided me for more than a decade. I joined Nutanix not because it is fashionable, but because it represents the next phase of what the infrastructure world needs: a platform that gives power back to the organizations that increasingly rely on technology for national, economic, and operational resilience.

Modernization should not mean giving up autonomy. Cloud adoption should not mean losing choice. Future architectures should not be dictated by someone else’s business model.

Nutanix brings the balance back. It brings control back. It brings the freedom to design infrastructure on your terms.

And that is where I want to contribute. That is where I want to help customers. That is the path I want to walk.

Final Words

This move marks a realignment with my own principles and with the narrative I want to push into the market. The next decade will belong to the organizations that understand this shift early and build accordingly.

I want to help shape that decade with customers, partners, policymakers, and anyone who believes that the future of infrastructure must be both modern and self-determined.

Leaving Oracle was the end of a chapter. Joining Nutanix is the continuation of a mission.

And for the first time in a long time, I feel like I am walking exactly where I am supposed to be.