Multi-cloud is normal in public cloud. Why is “single-cloud” still normal in private cloud?

If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management combined with a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organizations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organizations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervisors, for example, treat cloud as an outsourcing risk and expect institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons: jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors; they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor’s strategic direction. What does it mean for a nation when 80 or 90 percent of its public sector depends on one single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are evaluated again where simplicity and cost control dominate. Proxmox is discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is re-examined where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years, and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Nutanix should not be viewed primarily as a replacement for VMware

Public sector organizations rarely change infrastructure platforms lightly. Stability, continuity, and operational predictability matter more than shiny, modern solutions. Virtual machines became the dominant abstraction because they allowed institutions to standardize operations, separate applications from hardware, and professionalize IT operations over the long term.

Over many years, VMware became synonymous with this VM-centric operating model because it provided a coherent, mature, and widely adopted implementation of virtualized infrastructure. Choosing VMware was, for a long time, a rational and defensible decision.

Crucially, the platform was modular. Organizations could adopt it incrementally, integrate it with existing tools, and shape their own operating models on top of it. This modularity translated into operational freedom. Institutions retained the ability to decide how far they wanted to go, which components to use, and which parts of their environment should remain under their direct control. These characteristics explain why VMware became the default choice for so many public institutions. It aligned well with the values of stability, proportionality, and long-term accountability.

The strategic question public institutions face today is not whether that decision was wrong, but whether they can learn from it. We need to ask whether the context around that decision has changed and whether continuing along the same platform path still preserves long-term control, optionality, and state capability.

From VM-centric to platform-path dependent

It is important to be precise in terminology. Most public sector IT environments are not VMware-centric by design. They are VM-centric. Virtual machines are the core operational unit, deeply embedded in processes, tooling, skills, and governance models. This distinction is very important. A VM-centric organization can, in principle, operate on different platforms without redefining its entire operating model. A VMware-centric organization, by contrast, has often moved further down a specific architectural path by integrating tightly with proprietary platform services, management layers, and bundled stacks that are difficult to disentangle later.

This is where the strategic divergence begins.

Over time, VMware’s platform has evolved from a modular virtualization layer into an increasingly integrated software-defined data center (SDDC) and VCF-oriented (VMware Cloud Foundation) stack. That evolution is not inherently negative. Integrated platforms can deliver efficiencies and simplified operations, but they also introduce path dependency. Decisions made today shape which options remain viable tomorrow.

So the decisive factor is not pricing; prices change. For public institutions, this is a governance issue, not a technical one.

There is a significant difference between organizations that adopted VMware primarily as a hypervisor platform and those that fully embraced the SDDC or VCF vision.

Institutions that did not fully commit to VMware’s integrated SDDC approach often still retain architectural freedom. Their environments are typically characterized by:

  • A strong focus on virtual machines rather than tightly coupled platform services
  • Limited dependency on proprietary automation, networking, or lifecycle tooling
  • Clear separation between infrastructure, operations, and higher-level services

For these organizations, the operational model remains transferable. Skills, processes, and governance structures are not irreversibly bound to a single vendor-defined stack. This has two important consequences.

First, technical lock-in can still be actively managed. The platform does not yet dictate the future architecture. Second, the total cost of change remains realistic. Migration becomes a controlled evolution rather than a disruptive transformation.

In other words, the window for strategic choice is still open.

Why this moment matters for the public sector

Public institutions operate under conditions that differ fundamentally from those of private enterprises. Their mandate is not limited to efficiency, competitiveness, or short-term optimization. Instead, they are entrusted with continuity, legality, and accountability over long time horizons. Infrastructure decisions made today must still be explainable years later, often to different audiences and under very different political circumstances. They must withstand audits, parliamentary inquiries, regulatory reviews, and shifts in leadership without losing their legitimacy.

This requirement fundamentally changes how technology choices must be evaluated. In the public sector, infrastructure is an integral part of the institutional framework that enables the state to function effectively. Decisions are therefore judged not only by their technical benefits and performance, but by their long-term defensibility. A solution that is efficient today but difficult to justify tomorrow represents a latent risk, even if it performs flawlessly in day-to-day operations.

It is within this context that the concept of digital sovereignty has moved from abstraction to obligation. Governments increasingly define digital sovereignty not as isolation or technological nationalism, but as the capacity to maintain control over, and freedom of action within, their environments. This includes the ability to reassess vendor relationships, adapt sourcing strategies, and respond to geopolitical, legal, or economic shifts without being forced into reactive or crisis-driven decisions.

Digital sovereignty, in this sense, is closely tied to governance and control. It is about ensuring that institutions retain the ability to make informed, deliberate choices over time. That ability depends less on individual technologies and more on the structural properties of the platforms on which those technologies are built. When platforms are designed in ways that limit flexibility, they quietly constrain future options, regardless of their current performance or feature set.

Platform architectures that reduce reversibility are particularly problematic in the public sector. Reversibility does not imply constant change, nor does it require frequent platform switches. It simply means that change remains possible without disproportionate disruption. When an architecture makes it technically or organizationally prohibitive to adjust course, it creates a form of lock-in that extends beyond commercial dependency into the realm of institutional risk.

Even technically advanced platforms can become liabilities if they harden decisions that should remain open. Tight coupling between components, inflexible operational models, or vendor-defined evolution paths may simplify operations in the short term, but they do so at the cost of long-term flexibility. In public institutions, where the ability to adapt is inseparable from democratic accountability and legal responsibility, this trade-off must be examined with particular care.

Ultimately, digital sovereignty in the public sector is about ensuring that those dependencies remain governable. Platforms that preserve reversibility support this goal by allowing institutions to evolve deliberately, rather than react under pressure. Platforms that erode it may function well today, but they quietly accumulate strategic risk that only becomes visible when options have already narrowed.

Seen through this lens, digital sovereignty is a core governance requirement, embedded in the responsibility of public institutions to remain capable, accountable, and in control of their digital future.

Nutanix as a strategic inflection point

This is why Nutanix should not be viewed primarily as a replacement for VMware. Framing it as such immediately steers the discussion in the wrong direction. Replacements imply disruption, sunk costs, and, perhaps most critically in public-sector and enterprise contexts, an implicit critique of past decisions. Infrastructure choices, especially those made years ago, were often rational, well-founded, and appropriate for their time. Suggesting that they now need to be “replaced” risks triggering defensiveness and obscures the real strategic question.

More importantly, the replacement narrative fails to capture what Nutanix actually represents for VM-centric organizations. Nutanix does not demand a wholesale change in operating philosophy. It does not require institutions to abandon virtual machines, rewrite operational playbooks, or dismantle existing governance structures. On the contrary, it deliberately aligns with the VM-centric operating model that many public institutions and enterprises have refined over years of practice.

For this reason, Nutanix is better understood as a strategic inflection point. It marks a moment at which organizations can reassess their platform trajectory without invalidating the past. Virtual machines remain first-class citizens, operational practices remain familiar, and roles, responsibilities, and control mechanisms continue to function as before. The day-to-day reality of running infrastructure does not need to change.

What does change is the organization’s strategic posture.

In essence, Nutanix is about restoring the ability to choose. In public-sector (and enterprise environments), that ability is often more valuable than any individual feature or performance metric.

The cost of change versus the cost of waiting

A persistent misconception in infrastructure strategy is the assumption that platform change is, by definition, prohibitively expensive. This belief is understandable. Large-scale IT transformations are often associated with complex migration projects, organizational disruption, and unpredictable outcomes. These associations create a strong incentive to delay any discussion of change for as long as possible.

Yet this intuition is misleading. In practice, the cost of change does not remain constant over time. It increases the longer the architectural lock-in is allowed to deepen.

Platform lock-in is rarely an intentional choice; it accumulates gradually. Additional services are adopted for convenience, tooling becomes more tightly integrated, and operational processes begin to assume the presence of a specific platform. Over time, what was once a flexible foundation hardens into an implicit dependency. At that point, changing direction no longer means replacing a component; it means changing an entire operating model.

Organizations that remain primarily VM-centric and act early are in a very different position. When virtual machines remain the dominant abstraction and higher-level platform services have not yet become deeply embedded, transitions can be managed incrementally. Workloads can be evaluated in stages. Skills can be developed alongside existing operations. Governance and procurement processes can adapt without being forced into emergency decisions.

In these cases, the cost of change is not trivial, but it is proportionate. It reflects the effort required to introduce an alternative (modular) platform, not the effort required to escape a tightly coupled ecosystem.

[Figure: VMware-to-Nutanix migration windows]

By contrast, organizations that postpone evaluation until platform constraints become explicit often find themselves facing a very different reality. When licensing changes, product consolidation, or strategic shifts expose the depth of dependency, the room for change has already narrowed. Timelines become compressed, options shrink, and decisions that should have been strategic become reactive.

The cost explosion in these situations is rarely caused by the complexity of the alternative platform. It is caused by the accumulated weight of the existing one. Deep integration, bespoke operational tooling, and platform-specific governance models all add friction to any attempt at change. What might have been a manageable transition years earlier becomes a high-risk transformation project.

This leads to a paradox that many institutions only recognize in hindsight. The best time to evaluate change is precisely when there is no immediate pressure to do so. Early evaluation is a way to preserve choice. It allows organizations to understand their true dependencies, test assumptions, and (perhaps) maintain negotiation leverage.

Waiting, by contrast, does not preserve stability. It often preserves only the illusion of stability, while the cost of future change continues to rise in the background.

For public institutions in particular, this distinction is critical. Their mandate demands foresight, not just reaction. Evaluating platform alternatives before change becomes unavoidable is an act of taking responsibility.

A window that will not stay open forever

Nutanix should not be framed as a rejection of VMware, nor as a corrective to past decisions. It should be understood as an opportunity for VM-centric public institutions to reassess their strategic position while they still have the flexibility to do so.

Organizations that did not fully adopt VMware’s SDDC approach are in a particularly strong position. Their operational models are portable, their technical lock-in is still manageable, and their total cost of change remains proportionate.

For them, the question is whether they want to preserve the ability to decide tomorrow.

And in the public sector, preserving that ability is a governance responsibility.

When “Staying” Becomes a Journey, and Why Nutanix Lets You Take Back Control

There are moments in IT where the real disruption is not the change you choose, but the change that quietly happens around you. Many VMware customers find themselves in exactly such a moment. On the surface, everything feels familiar: the same hypervisor, the same vendors, the same vocabulary. But underneath that surface, something more fundamental is shifting. Broadcom’s new licensing and product model has turned VMware’s future into a one-way street: a gradual but unmistakable movement toward VMware Cloud Foundation (VCF).

What makes this moment so tricky is the illusion it creates. Because the names look the same, many organizations convince themselves that staying with VMware means avoiding change. They assume the path ahead is simply the continuation of the path behind. Yet the platform they are moving toward does not behave like the platform they came from. VCF 9 is a different way of running private cloud: a different architecture, a different operational model, and a different set of dependencies and constraints.

Once you see this clearly, the situation becomes easier to understand. Even if you stay with VMware, you are moving. The absence of physical distance does not mean the absence of migration. What changes is not the location of your workloads, but the world those workloads inhabit.

And that world looks much more like a cloud transition than like a traditional upgrade.

This is the first truth enterprises need to accept: it is still a migration.

The Subtle Shift From Upgrade to Replatforming

VCF 9 carries its own gravity. It reshapes how the environment must be designed, how networking is stitched together, how lifecycle management works, how domains are laid out, how automation behaves, and how operations are structured. It forces full-stack adoption, even if your organization only needs part of the stack. And once the platform becomes prescriptive, you must either adopt its assumptions or fight against them.

If this exact level of change were introduced by a hyperscaler, nobody would hesitate to call it a cloud migration. It would come with discovery workshops, architecture reviews, dependency mapping, proofs of concept, testing phases, retraining, risk assessments, and new governance. But because the new platform still carries the VMware name, some organizations treat it as a large patch. Which it clearly is not.

This is where many stumble. An upgrade assumes continuity. A migration assumes transformation. VCF 9 sits firmly on the transformation side of that spectrum. Treating it as anything less increases risk, cost and frustration.

In other words, the work is the same work you would do for a cloud move. Only the destination changes.

Complexity You Did Not Ask For

One of the most overlooked consequences of this shift is the gradual increase in complexity. The move to a full-stack VCF world comes with the same architectural side effects you would expect when adopting any complex platform. More components, more integration points, more rules, more interdependencies, more expertise required to keep things stable.

Organizations want simplicity, but this shift delivers the opposite. You pay for it in architecture that becomes harder to evolve, in operations that require more coordination, in outages that take longer to troubleshoot, in people who must maintain increasingly fragile mental maps, and in costs that rise simply because the platform demands it.

And this is where the forced nature of the move becomes visible. You are inheriting complexity because the vendor has decided the portfolio must move in that direction. This is the difference between transformation that serves your strategy and transformation that serves someone else’s.

One Migration You Cannot Avoid, One Migration You Can Choose

At some point, every organization reaches a moment where movement is no longer a matter of preference but of circumstance. The transition to VCF 9 is exactly that kind of moment. Once this becomes clear, the nature of the decision changes. You stop focusing on how to avoid disruption and start asking a more strategic question: If we are investing the time, energy, and attention anyway, where should this effort lead?

VCF 9 is one possible destination. And it may very well be the right choice for some enterprises. But the key is that it should be a choice and not an automatic continuation of the past.

Customers need a model where the effort you invest in migration pays you back in reduced complexity rather than increased dependency.

Nutanix can be such an option, and a different operating model.

Yes, the interesting truth is that both paths require work. Both involve change. Both need planning, testing, and careful execution. The difference lies in what you get once the work is done. One migration leaves you with a platform that is heavier and more prescriptive than the one you had before. The other leaves you with an environment that is lighter, simpler, and easier to operate.

The Real Choice in a Moment of Unwanted Movement

When change arrives from the outside, it rarely feels fair. It interrupts plans, forces attention onto things you didn’t choose, and demands energy you would rather spend somewhere else. Nobody asked for it. Nobody scheduled it. Yet here it is, reshaping the future architecture of your private cloud, whether you feel ready or not.

A different model of infrastructure can offer a way to use this forced moment of movement to your advantage, to turn a vendor-driven transition into an opportunity to simplify, to regain autonomy, and to design an infrastructure model that supports your next ten years rather than constraining them.

You may not have chosen the timing of this transition. But you can choose the shape of the destination. And in many ways, that is the most meaningful form of control an organization can exercise in a moment where the outside world tries to dictate the path ahead.

VMware by Broadcom – The Standard of Independence Has Become a Structure of Dependency

There comes a point in every IT strategy where doing nothing becomes the most expensive choice. Many VMware by Broadcom customers know this moment well. They sense that Broadcom’s direction isn’t theirs, but they still hesitate to move. The truth is, the real risk isn’t in changing platforms but in waiting too long to reclaim control.

I have worked with VMware products for more than 15 years and even spent part of my career as a VMware solution engineer before Broadcom acquired this company. A company that once had a wonderful culture. A culture that, sadly, no longer exists. Many of my former colleagues no longer trust their leadership. What does this mean for you?

We know that VMware environments are mature, battle-tested, and deeply embedded into how enterprises operate. And that’s exactly the problem. Over the years, VMware became more than a platform. It became the language of enterprise IT: vSphere for compute, vSAN for storage, NSX for networking. It’s how we learned to think about infrastructure. That’s the vision of VMware Cloud Foundation (VCF) and the software-defined data center (SDDC).

Fast forward to today: even when customers are frustrated by cost increases, licensing restrictions, or shifting support models, they rarely act. Why? Because it feels safer to tolerate pain than to invite uncertainty. But stability is often just an illusion. What feels familiar isn’t necessarily secure.

The Forced Migration Nobody Talks About

The irony is that many customers who think they are avoiding change are actually facing one. Just not by choice. Broadcom’s current direction points toward a future where customers can only consume VMware Cloud Foundation (VCF) as a unified, integrated stack. Which, in general, is a good thing, isn’t it?

As a result, you no longer decide which components you actually need. Even if you only use vSphere, vSAN, and Aria Operations today, you will be licensed and forced to deploy the full stack, including NSX and VCF Operations/Automation, whether you need them or not. While that’s still speculation, everything Hock Tan says points in this direction. And many analysts see it the same way.

Broadcom has reached VMware’s long-standing goal: VCF has become the flagship product, but by force rather than by customer choice. Broadcom applies its leverage, choosing the right discounts to make VCF the “right” and only choice for customers, even if they don’t want to adopt the full stack.

[Figure: Paths to VCF 9]

What does this mean for your future? In practice, it’s a structural migration disguised as continuity, not just a commercial shift. Moving from a traditional vSphere or HCI-based setup to VCF comes with the same side effects, changes, and costs you would face when adopting a new platform (Nutanix, Red Hat, Azure Local, etc.).

Think about it: If you must migrate anyway, why not move toward more control, not less?

Features, Not Products

Broadcom has been clear about its long-term vision. The company now describes VMware Cloud Foundation as the only product name and positions it as the operating system for the data center. That is a great message, but Broadcom wants VMware to operate like Azure, where you don’t “buy” networking or storage. You consume them as built-in features of the platform.

Once this model is fully implemented, you won’t purchase vSphere or NSX. You’ll subscribe to VCF, and those technologies will simply be features. The Aria Suite has already disappeared from the portfolio (example: Aria Operations became VCF Operations). The next product to vanish will be everything except the name VMware Cloud Foundation.

It’s a clever move for Broadcom, but a dangerous one for customers. Yes, I am looking at you. Because when every capability becomes part of a single subscription, the flexibility to choose or not to use disappears. This means your infrastructure, once hybrid and modular, is now a monolith. Imagine the lock-in of any hyperscaler, but on-premises. That’s the new VMware.

The True Cost of Change

Let’s be honest, migrations are not easy. They require time, expertise, and courage. Yes, courage as well. But the cost of change is not the real problem. The cost of inaction is.

When organizations stay on platforms that no longer align with their strategy, they pay with flexibility, not just money. Every renewal locks in another year, or several, of dependency. Every delay potentially pushes innovation further out of reach. And with Broadcom’s model, the risk isn’t just financial. The control over your architecture, your upgrade cadence, your integrations, and even your licensing terms slowly moves away from you. And faster than you may think.

[Figure: VCF Specific Program Documentation (SPD), November 2025]

Broadcom’s new compliance mechanisms amplify that dependency. According to the November 2025 VCF Specific Program Documentation, customers must upload a verified compliance report every 180 days. Failing to do so allows Broadcom to degrade or block management-plane functionality and suspend support entitlements. What once was a perpetual license has become an always-connected control loop: a system that continuously validates, monitors, and enforces usage from the outside. Is that acceptable for a “sovereign” cloud, or for you as the operator?
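To make the mechanics tangible, here is a minimal Python sketch of what a 180-day reporting cadence implies for an operator. The dates, function names, and enforcement behavior are illustrative assumptions based on the description above, not Broadcom’s actual implementation.

```python
from datetime import date, timedelta

# Illustrative model of the 180-day reporting cadence described above.
# Dates and enforcement behavior are assumptions for this sketch,
# not Broadcom's actual implementation.
REPORT_INTERVAL = timedelta(days=180)

def next_report_due(last_upload: date) -> date:
    """Latest date by which the next verified compliance report is due."""
    return last_upload + REPORT_INTERVAL

def compliance_status(last_upload: date, today: date) -> str:
    due = next_report_due(last_upload)
    if today <= due:
        return f"compliant (next report due {due})"
    # Per the SPD description, an overdue report can lead to degraded
    # management-plane functionality and suspended support entitlements.
    return f"OVERDUE since {due}: management plane and support at risk"

print(compliance_status(date(2026, 1, 10), date(2026, 6, 1)))  # inside the window
print(compliance_status(date(2026, 1, 10), date(2026, 8, 1)))  # window missed
```

The point of the sketch is the structure, not the dates: your entitlements hang on a recurring external check, forever.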

As Broadcom itself put it: “As Hock Tan, our President and CEO, shared in today’s General Session at VMware Explore Barcelona, European customers want control over their data and processes.”

You don’t notice it day by day. But five years later, you realize: Your data center doesn’t belong to you anymore.

Why Change Feels Bigger Than It Is

Change is often perceived as a massive technical disruption. But in reality, it’s usually a series of small, manageable steps. Modern infrastructure platforms have evolved to make transitions far less painful than before. Today, you can migrate workloads gradually, reuse existing automation scripts, and maintain uptime while transforming the foundation beneath.

What used to be a twelve-month migration project can now be done in phases, with full visibility and reversible checkpoints. The idea is not to replace everything. It’s to regain control, layer by layer.

Freedom as a Strategy

Freedom should be a design principle. It means having a platform that lets you choose, and it also means being able to decide when to upgrade, how to scale, and where your data lives, without waiting for a vendor’s permission.

This is why I joined Nutanix. They don’t force you into a proprietary stack. They abstract complexity instead of hiding it. They allow you to run what you need, and only what you need, whether that’s virtualization, containers, or a mix of both. Yep, and you can also provide DBaaS (NDB) or a private AI platform (NAI).

I’m not telling you to abandon what you know. Take a breath and think about what’s possible when choice returns.

For years, VMware has been the familiar home of enterprise IT. But homes can become cages when you are no longer allowed to move the furniture. The market is moving towards platforms that combine the comfort of virtualization with the agility of cloud without the loss of control.

This shift is already happening. Many organizations start small – with their disaster recovery site, their dev/test environment, or their EUC workloads. Once the first step is done, confidence grows. They realize that freedom doesn’t come from ripping everything out. It comes from taking back control, one decision at a time.

A Quiet Revolution

The next chapter of enterprise infrastructure will not be written by those who cling to the past, but by those who dare to redesign their foundations. Not because they want to change, but because they must, to stay agile, compliant, and sovereign in a world where autonomy is everything.

The legal fine print makes it clear. What Broadcom calls modernization is, in fact, a redesign of control. And control rarely moves back to the customer once it’s gone.

The question is no longer “Can we afford to change?”

It should be “Can we afford not to?” Can YOU afford not to?

And maybe that’s where your next journey begins. Not with fear, but with the quiet confidence that the time to regain control has finally arrived.

It’s Time to Rethink Your Private Cloud Strategy

For over a decade, VMware has been the foundation of enterprise IT. Virtualization was almost synonymous with VMware, and entire operating models were built around it. But every era of technology eventually reaches a turning point. With vSphere 8 approaching its End of General Support in October 2027, followed by the End of Technical Guidance in 2029, customers will most probably be asked to commit to VMware Cloud Foundation (VCF) 9.x and beyond.

On paper, this may look like just another upgrade cycle, but in reality, it forces every CIO and IT leader to pause and ask the harder questions: How much control do we still have? How much flexibility remains? And do we have the freedom to define our own future?

Why This Moment Feels Different

Enterprises are not new to change. Platforms evolve, vendors shift focus, and pricing structures come and go. Normally, these transitions are gradual, with plenty of time to adapt.

What feels different today is the depth of dependency on VMware. Many organizations built their entire data center strategy on one assumption: VMware is the safe choice. VMware became the backbone of operations, the standard on which teams, processes, and certifications were built.

CIOs realize the “safe choice” is no longer guaranteed to be the most secure or sustainable. Instead of incremental adjustments, they face fundamental questions: Do we want to double down, or do we want to rebalance our dependencies?

Time Is Shorter Than It Looks

2027 may sound far away, but IT leaders know that large infrastructure decisions take years. A realistic migration journey involves:

  • Evaluation & Strategy (6 to 12 months) – Assessing alternatives, validating requirements, building a business case.

  • Proof of Concept & Pilots (6 to 12 months) – Testing technology, ensuring integration, training staff.

  • Procurement & Budgeting (3 to 9 months) – Aligning financial approvals, negotiating contracts, securing resources.

  • Migration & Adoption (12 to 24 months) – Moving workloads, stabilizing operations, decommissioning legacy systems.

Put these together, and the timeline shrinks quickly. The real risk is not the change itself, but running out of time to make that change on your terms.
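A quick back-of-the-envelope calculation in Python, using the phase ranges above (and simplifying by assuming the phases run strictly sequentially), shows how little slack remains before October 2027:

```python
# Phase duration ranges (in months), taken from the list above.
phases = {
    "Evaluation & Strategy":     (6, 12),
    "Proof of Concept & Pilots": (6, 12),
    "Procurement & Budgeting":   (3, 9),
    "Migration & Adoption":      (12, 24),
}

best = sum(lo for lo, hi in phases.values())    # 27 months
worst = sum(hi for lo, hi in phases.values())   # 57 months

print(f"Best case (sequential):  {best} months (~{best / 12:.1f} years)")
print(f"Worst case (sequential): {worst} months (~{worst / 12:.1f} years)")
# Even the optimistic case consumes most of the runway to October 2027;
# the pessimistic case overshoots it by years. Phases overlap in practice,
# but the margin for delay is thin either way.
```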

The Pricing Question You Can’t Ignore

Now imagine this scenario:

The list price for VMware Cloud Foundation today sits around $350 per core per year. Let’s say Broadcom adjusts it by +20%, raising it to $420 this year. Then, two years later, just before your next renewal, it increases again to $500 per core per year.
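To see what that scenario means in absolute numbers, here is a small Python sketch. The per-core prices are the hypothetical figures from the scenario above; the 1,000-core estate is an assumed example size.

```python
# Hypothetical price path from the scenario above, in USD per core per year.
# The 1,000-core estate is an assumed example size.
cores = 1_000
price_path = {"today": 350, "after +20% increase": 420, "next renewal": 500}

baseline = cores * price_path["today"]
for label, price in price_path.items():
    annual = cores * price
    delta = (annual / baseline - 1) * 100
    print(f"{label:>20}: ${annual:>9,} per year ({delta:+.0f}% vs. today)")
```

Under these assumptions, the same estate goes from $350,000 to $500,000 per year within two renewals, a cumulative increase of roughly 43 percent with no change in workload.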

Would your situation and thoughts change?

For many enterprises, this is not a theoretical question. Cost predictability is part of operational stability. If your infrastructure platform becomes a recurring cost variable, every budgeting cycle turns into a crisis of confidence.

When platforms evolve faster than budgets, even loyal customers start re-evaluating their dependency. The total cost of ownership is no longer just about what you pay for the software; it is about what it costs to stay locked in.

And this is where strategic foresight matters most: Do you plan your next three years assuming stability, or do you prepare for volatility?

The Crossroads – vSphere 9 or VCF 9

In the short term, many customers will take the most pragmatic route. They upgrade to vSphere 9 to buy time. It’s the logical next step, preserving compatibility and delaying a bigger architectural decision.

But this path comes with an expiration date. Broadcom’s strategic focus is clear: the future of VMware is VCF 9. Over time, standalone vSphere environments will likely receive less development focus and fewer feature innovations. Eventually, organizations will be encouraged, if not forced, to adopt the integrated VCF model, because standalone vSphere or VMware vSphere Foundation (VVF) is going to be more expensive than VMware Cloud Foundation.

For some, this convergence will simplify operations. For others, it will mean even higher costs, reduced flexibility, and tighter coupling with VMware’s lifecycle.

This is the true decision point. Staying on vSphere 9 buys time, but it doesn’t buy independence (think about sovereignty too!). It’s a pause, not a pivot. Sooner or later, every organization will have to decide:

  • Commit fully to VMware Cloud Foundation and accept the new model, or

  • Diversify and build flexibility with platforms that maintain open integration and operational control

Preparing for the Next Decade

The next decade will reshape enterprise IT. Whether through AI adoption, sovereign cloud requirements, or sustainability mandates, infrastructure decisions will have long-lasting impact.

The question is not whether VMware remains relevant – it will. The question is whether your organization wants to let VMware’s roadmap dictate your future.

This moment should be viewed not as a threat but as an opportunity. It’s a chance (again) to reassess dependencies, diversify, and secure true autonomy. For CIOs, the VMware shift is less about technology and more about leadership.

Yes, it’s about ensuring that your infrastructure strategy aligns with your long-term vision, not just with a vendor’s plan.

Sovereignty in the Cloud Is a Matter of Perspective

Sovereignty in the cloud is often treated as a cost: something that slows innovation, complicates operations, and makes infrastructure more expensive. But for governments, critical industries, and regulated enterprises, sovereignty is not a burden. It is the basis of resilience, compliance, and long-term autonomy. The way a provider positions sovereignty reveals a lot about how it sees the balance between global scale and local control.

Some platforms, like Oracle’s EU Sovereign Cloud, show that sovereignty doesn’t have to come at the expense of capability: it delivers the same services at the same pricing and operates entirely with EU-based staff. Nutanix pushes the idea even further with its distributed cloud operating model, proving that sovereignty and value can reinforce each other rather than clash.

Microsoft’s Framing

In Microsoft’s chart, the hyperscale cloud sits on the far left of the spectrum. Standard Azure and Microsoft 365 are presented as delivering only minimal sovereignty, little residency choice, and almost no operational control. The upside, in their telling, is that this model maximizes “cloud value” through global scale, innovation, and efficiency.

[Figure: Microsoft’s sovereignty trade-off chart]

Move further to the right and you encounter Microsoft’s sovereign variants. Here, they place offerings such as Azure Local with M365 Local and national partner clouds like Delos in Germany or Bleu in France. These are designed to deliver more sovereignty and operational control by layering in local staff, isolated infrastructure, and stricter national compliance. Yet the framing is still one of compromise. As you gain sovereignty, you are told that some of the value of the hyperscale model inevitably falls away.

Microsoft’s Sovereign Cloud Portfolio

To reinforce this point, Microsoft presents a portfolio of three models. The first is the Sovereign Public Cloud, which is owned and operated directly by Microsoft. Data remains in Europe, and customers get software-based sovereignty controls such as “Customer Lockbox” or “Confidential Computing”. It runs in Microsoft’s existing datacenters and doesn’t require migration, but it is still, at its core, a hyperscale cloud with policy guardrails added on top.

The second model is the Sovereign Private Cloud. This is customer-owned or partner-operated, running on Azure Local and Microsoft 365 Local inside local data centers. It can be hybrid or even disconnected, and is validated through Microsoft’s traditional on-premises server stack such as Hyper-V, Exchange, or SharePoint. Here, sovereignty increases because customers hold the operational keys, but it is clearly a departure from the hyperscale simplicity.

[Figure: Microsoft’s sovereign cloud portfolio]

Finally, there are the National Partner Clouds, built in cooperation with approved local entities such as SAP for Delos in Germany or Orange and Capgemini for Bleu in France. These clouds are fully isolated, meet the most stringent government standards like SecNumCloud in France, and are aimed at governments and critical infrastructure providers. In Microsoft’s portfolio, this is the most sovereign option, but also the furthest away from the original promise of the hyperscale cloud.

On paper, this portfolio looks broad. But the pattern remains: Microsoft treats sovereignty as something that adds control at the expense of cloud value.

What If We Reframe the Axes From “Cloud Value” to “Business Value”?

That framing makes sense if you are a hyperscaler whose advantage lies in global scale. But it doesn’t reflect how governments, critical infrastructure providers, or regulated enterprises measure success. If we shift the Y-axis away from “cloud value” and instead call it “business value”, the story changes completely. Business value is about resilience, compliance, cost predictability, reliable performance in local contexts, and the flexibility to choose infrastructure and partners that meet strategic needs.

The X-axis also takes on a different character. Instead of seeing sovereignty, residency, and operations as a cost or a burden, they become assets. The more sovereignty an organization can exercise, the more it can align its IT operations with national policies, regulatory mandates, and its own resilience strategies. In this reframing, sovereignty is not a trade-off, but a multiplier.

What the New Landscape Shows

Once you adopt this perspective, the map of cloud providers looks very different.

[Figure: Sovereign cloud analysis chart]

Please note: Exact positions on such a chart are always debatable, depending on whether you weigh ecosystem, scale, or sovereignty highest. 🙂

Microsoft Azure sits in the lower left, offering little in terms of sovereignty or control and, as a result, little real business value for sectors that depend on compliance and resilience. Adding Microsoft’s so-called sovereign controls moves the position slightly upward and to the right, but it still remains closer to enhanced compliance than genuine sovereignty. AWS’s European Sovereign Cloud lands in the middle, reflecting its cautious promises, which are a step toward sovereignty but not yet backed by deep operational independence.

Oracle’s EU Sovereign Cloud moves higher because it combines full service parity with the regular Oracle Cloud, identical pricing, and EU-based operations, making it a credible sovereign choice without hidden compromises. OCI Dedicated Region provides strong business value in a customer’s location, but since operations remain largely in Oracle’s hands, it offers less direct control than something like VMware. VMware by Broadcom sits further to the right thanks to the control it gives customers who run the stack themselves, but its business value is dragged down by complexity, licensing issues, and legacy cost.

The clear outlier is Nutanix, rising toward the top-right corner. Its distributed cloud model, spanning on-prem, edge, and multi-cloud, maximizes control and business value compared to most peers. Yes, Nutanix is not flawless, and yes, Nutanix lacks the massive partner ecosystem and developer gravity of hyperscalers, but for organizations prioritizing sovereignty, it comes closest to the “ideal zone”.
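For readers who want to reproduce this reframed landscape, here is a minimal matplotlib sketch. The coordinates are purely illustrative judgments (as noted above, exact positions are debatable); only the relative ordering follows the discussion.

```python
import matplotlib.pyplot as plt

# Illustrative positions only (0-10 scale). The relative ordering follows
# the discussion above; the exact coordinates are assumptions.
providers = {
    "Microsoft Azure":              (2.0, 2.0),
    "Azure + sovereign controls":   (3.5, 3.0),
    "AWS European Sovereign Cloud": (5.0, 5.0),
    "OCI Dedicated Region":         (6.0, 7.0),
    "Oracle EU Sovereign Cloud":    (7.0, 7.5),
    "VMware by Broadcom":           (8.0, 5.0),
    "Nutanix":                      (8.5, 8.5),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (x, y) in providers.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 4))

ax.set_xlabel("Sovereignty / residency / operational control")
ax.set_ylabel("Business value (resilience, compliance, predictability)")
ax.set_title("Sovereign cloud landscape (illustrative positions)")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
plt.tight_layout()
plt.show()
```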

Conclusion

The lesson here is simple. Sovereignty is always a matter of perspective. For a hyperscaler, it looks like a tax on efficiency. For governments, banks, hospitals, or critical industries, it is the very foundation of value. For enterprises trying to reconcile global ambitions with local obligations, sovereignty is not a drag on innovation but the way to ensure autonomy, resilience, and compliance.

Microsoft’s chart is not technically wrong, but it is incomplete. Once you redefine the axes around real-world business priorities, you see that sovereignty does not reduce value. For many organizations, it is the only way to maximize it – though the exact balance point will differ depending on whether your priority is scale, compliance, or operational autonomy.