Digital Sovereignty and the Broadcom Turning Point – Why October 2027 Will Be Critical for the Public Sector

English Version: https://www.linkedin.com/pulse/broadcoms-october-2027-turning-point-why-public-sector-rebmann-t3vme/

Digital sovereignty has become a central concept in recent years, especially in the public sector. Yet the discussion is often held at a superficial level. It tends to revolve around data locations, European cloud initiatives, or additional security mechanisms. What gets overlooked is the level at which sovereignty is actually won or lost: the architecture of the platforms our IT is built on.

For many years, organizations have built their infrastructure on VMware. Virtualization was the stable core on which modern data centers and, later, private cloud environments evolved. In their original form, these environments were modular. Compute, storage, and networking, including management, could be operated and developed independently of one another. This modularity was a decisive success factor, especially in the SMB segment. It allowed organizations to adapt their architecture step by step, to swap out or add technologies, and to evolve their operating models without having to rebuild the entire foundation every time.

At the same time, a strong market concentration has built up over the years. It is realistic to assume that around 80 percent of the public sector runs on VMware technology today. For a long time, this broad adoption was an advantage because it enabled standardization, skills development, and a strong partner ecosystem. Today, however, this very concentration is becoming a structural risk. If a single vendor fundamentally changes its strategy, it affects not just individual organizations but a large part of the entire ecosystem. Indeed, even a large share of Swiss data centers.

With Broadcom's acquisition of VMware, this starting position has changed fundamentally. The transformation is not happening in a single step but in several clearly recognizable phases.

Phase 1

The first cut was economic. New licensing models and bundles changed the cost structure and, in many cases, increased costs significantly. This already noticeably constrained the economic sovereignty of many organizations. Decisions could no longer be made solely on the basis of actual needs; they increasingly had to follow predefined licensing models.

Phase 2

In parallel, the partner ecosystem has changed. Many VMware partners have disappeared or adjusted their roles. For customers, this means a smaller choice of integrators and service providers, less competition, and thus, indirectly, less influence. Sovereignty shows itself not only in technology but also in the ability to choose between different partners and operating models. When that choice shrinks, so does freedom of action.

Phase 3

The third phase, which is emerging right now, is the technical-structural one. With the strategic focus on VMware Cloud Foundation 9 (VCF) as the dominant target model, the architecture itself becomes an instrument of control. What used to be a flexible toolkit is increasingly turning into an integrated full stack in which individual components can no longer be considered independently of one another.

From a technical point of view, such an approach has its advantages. Standardization reduces complexity, integrated operating models can enable efficiency gains, and a clearly defined stack simplifies operations. But this integration has a consequence that is often underestimated in the current discussion. It changes the fundamental relationship between customer and platform.

Digital sovereignty can be measured by three central capabilities:

  1. The ability to switch,
  2. the ability to shape, and
  3. the ability to exert influence

These three dimensions are decisive because they determine whether an organization can actively steer its IT or whether it increasingly grows into a predefined model.
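To make this framework tangible, here is a minimal sketch of how the three capabilities could be turned into a simple self-assessment score. The dimensions come from the list above; the 0-10 scale, the equal weighting, and the example ratings are purely illustrative assumptions.

```python
# Illustrative sketch: scoring digital sovereignty along the three
# capabilities named above. Scale, weighting, and ratings are hypothetical.
CAPABILITIES = ["ability to switch", "ability to shape", "ability to influence"]

def sovereignty_score(ratings: dict[str, int]) -> float:
    """Average the three capability ratings (each on a 0-10 scale)."""
    return sum(ratings[c] for c in CAPABILITIES) / len(CAPABILITIES)

# Hypothetical self-assessment of an organization on an integrated stack:
example = {
    "ability to switch": 3,     # switching means transforming the whole system
    "ability to shape": 4,      # architecture largely defined by the vendor
    "ability to influence": 2,  # little negotiating power left
}

print(f"Sovereignty score: {sovereignty_score(example):.1f} / 10")  # 3.0 / 10
```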

It is precisely these capabilities that the current development is gradually eroding. The option to switch formally remains but becomes considerably harder in practice, because switching no longer means replacing individual components; it means transforming an entire system. The ability to shape declines because architectural decisions are increasingly defined by the vendor (Broadcom). And influence decreases as well, since negotiating power structurally shrinks as dependency on the integrated stack grows. Much like in the public cloud.

Many organizations have reacted to the first wave of changes. Larger hospitals and cantons, for example, have extended their contracts with Broadcom to gain short-term planning certainty and create operational calm. That decision is understandable. It buys time, stabilizes budgets, and avoids short-term risks.

VCF 9 mandatory from October 2027

But this is exactly where a misunderstanding surfaces in many conversations. These extensions have not created any additional time.

The underlying development continues regardless. The strategic focus on VCF (VCF 9) and the associated transformation of the architecture remain in place. The relevant date does not move because of a contract renewal.

The actual turning point remains: October 2027.

How VCF Operations enforces the target model

With version 9 of VMware Cloud Foundation, not only the architecture changes but also the way compliance is implemented in operations. According to the current license and usage terms, mandatory compliance reporting is introduced for environments from version 9 onwards:

"VCF is sold as a single product; the included components and capabilities can only be utilized on, or for the same physical Cores where the vSphere in VCF Core license is deployed."

Organizations using VCF are therefore obliged to create and submit compliance reports on a regular basis – initially after 180 days and then at recurring intervals.
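As a worked example of what such an obligation means for operations planning, the following sketch computes report due dates. The 180-day initial deadline is taken from the terms described above; the length of the recurring interval is an assumption for illustration only.

```python
# Sketch: computing compliance-report due dates for a VCF 9 environment.
# Initial deadline (180 days) per the cited terms; the recurring interval
# below is an assumption for illustration, not a documented value.
from datetime import date, timedelta

def report_due_dates(deployed: date, count: int, recurring_days: int = 180) -> list[date]:
    """First report 180 days after deployment, then at recurring intervals."""
    first = deployed + timedelta(days=180)
    return [first + timedelta(days=i * recurring_days) for i in range(count)]

for due in report_due_dates(date(2027, 10, 1), count=3):
    print(due.isoformat())
```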

Compliance reporting is handled via VCF Operations. That component thus effectively becomes a prerequisite for compliant operations (as does VCF 9 itself). Without the corresponding integration, full adherence to the terms can no longer be guaranteed.

[Image: VCF 9 compliance reporting]

This creates an additional mechanism that reinforces adoption of the full VCF stack.

Combined with licensing models, architectural requirements, and integrated operational functions, a consistent pattern emerges. The path to the target model is not merely recommended; it is increasingly secured structurally.

Source: https://ftpdocs.broadcom.com/cadocs/0/contentimages/VCF_SPD_July2025.pdf

What actually gets installed with VCF 9

Deploying a VCF environment installs more than just a virtualization platform (the ESX hypervisor). It provisions a complete, integrated stack of infrastructure, network, and operations components.

Specifically, a standard installation comprises several central building blocks:

  • vSphere (ESX & vCenter) as the compute and management layer
  • NSX for networking and security
  • vSAN or alternative storage integrations
  • SDDC Manager and fleet management
  • plus VCF Operations and VCF Automation as the central operations and control layer

The individual components can no longer be operated meaningfully in isolation. They become one coherent system that only delivers its full functionality in the overall model.
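One way to picture this coupling is as a dependency graph: remove any one component from a hypothetical model of the stack, and others that assume its presence stop working. The component names follow the list above; the dependency edges are a deliberate simplification for illustration, not Broadcom's official support matrix.

```python
# Illustrative dependency model of a VCF 9 deployment. Component names
# follow the article; the edges are a simplification, not an official matrix.
DEPENDS_ON = {
    "vCenter": {"ESX"},
    "NSX": {"vCenter"},
    "vSAN": {"ESX", "vCenter"},
    "SDDC Manager": {"vCenter", "NSX"},
    "VCF Operations": {"SDDC Manager"},   # also the compliance-reporting path
    "VCF Automation": {"VCF Operations"},
}

def affected_by_removal(component: str) -> set[str]:
    """Everything that (transitively) breaks if one piece is removed."""
    broken, frontier = set(), {component}
    while frontier:
        frontier = {c for c, deps in DEPENDS_ON.items()
                    if deps & (frontier | broken)} - broken
        broken |= frontier
    return broken

print(affected_by_removal("NSX"))
# -> {'SDDC Manager', 'VCF Operations', 'VCF Automation'}
```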

Source: https://blogs.vmware.com/cloud-foundation/2025/07/03/vcf-9-0-deployment-pathways

New demands on architecture and operations

This change does not stop at the technical level. It directly affects the people who plan and operate these platforms.

Architects and operations teams have to work their way into a significantly broader and more complex system. While many organizations used to focus heavily on the hypervisor and classic virtualization components, additional layers now come into play that are a mandatory part of the operating model.

Organizations must build new competencies, adapt processes, and develop a deeper understanding of how the individual components interact.

The focus shifts away from operating individual technologies toward operating an integrated system. Decisions in one area have an immediate effect on others. Architecture, operations, and automation are more tightly interlinked than ever before.

This development is not unusual. It follows the general trend toward platformization. But it has a clear consequence, and one has to be aware of it.

An information deficit

What makes the situation worse is a structural information deficit. Many customers, and many partners too, are not yet aware of the scope of this change. The move toward an integrated, enforced platform model is often perceived as gradual evolution, not as a fundamental architectural break.

In practice, this means that a large part of the market currently sits in a phase of apparent stability, while a structural change is building up that will take full effect in roughly 18 months.

Sovereignty at risk on a large scale

By then, many organizations will be forced to transform or realign their existing environments. Support cycles are expiring, technological dependencies are deepening, and migrating to integrated models is increasingly becoming a prerequisite for continued operation. What looks like temporary stabilization today is in reality the phase before a structural decision.

Note: The VMware technology is still excellent. However, VMware is no longer "VMware"; it is now Broadcom.

Digital sovereignty is not lost because the technology is bad.

It is lost when decisions lose their reversibility. When architecture, operations, and licensing model are so tightly interlinked that alternatives still exist but are practically no longer feasible to implement, control shifts for good.

For the public sector in Switzerland, this means the starting position is changing fundamentally. Many organizations run VMware-based private clouds today, have built up know-how over years, and have aligned their operating models accordingly. The transition to an integrated model like VCF 9 is therefore not a simple technology step but a strategic course-setting.

It is not unrealistic to assume that, from October 2027, a large part of the public IT landscape will no longer meet the central criteria of digital sovereignty.

Nutanix as an alternative – Back to modularity

In this context, Nutanix is frequently mentioned as an alternative. What is interesting here is less its positioning as a "private cloud provider" than the underlying architectural philosophy.

Nutanix, too, offers a complete private cloud platform today. Infrastructure, automation, data services, and modern platform services can be delivered in an integrated fashion. At first glance, this model resembles what VMware Cloud Foundation pursues. The decisive difference, however, lies not in the scope of functionality but in how it is delivered.

While VMware (by Broadcom) is increasingly moving toward a mandatory, tightly integrated full stack, Nutanix continues to follow a modular approach. Functions can be combined, but they do not have to be. Organizations can decide which components they actually need and to what extent they use them.

This very characteristic was also a key reason for VMware's success in the era before the Broadcom acquisition.

In a way, Nutanix picks up on that principle. The platform can be run as a complete private cloud without turning into a rigid target model. At the same time, it supports different operating models, from the classic data center to service provider environments to hybrid scenarios. What matters is that the operational logic stays consistent. Workloads and operational processes are not bound to a single model but can evolve along with requirements.

Positioning Nutanix as an alternative should nevertheless be done with nuance. It, too, is a commercial platform with its own roadmap, its own (broad) ecosystem, and its own dependencies. Digital sovereignty arises from the interplay of technology, governance, competencies, and strategic decisions.

A question that is asked far too rarely

The risks of the public cloud have been the subject of intense discussion in the public sector for years. Questions about dependencies, price development, geopolitical influence, and lack of control are now part of the standard assessment of any major cloud decision. In the private cloud space, by contrast, no comparable debate is taking place at all.

Yet a structurally similar development is emerging.

From a sovereignty perspective, however, another question arises.

What are the consequences when a large part of the public sector standardizes on a uniform platform architecture whose operating model, license structure, and further development are largely determined by a single vendor?

A comparison with the public cloud helps answer this question.

If a large part of the public administration ran its IT entirely on platforms such as Microsoft Azure, Amazon Web Services, or Google Cloud Platform today, and a significant price increase in the range of 50 to 100 percent followed, the reaction would be predictable. The discussion about dependencies, alternatives, and strategic steerability would immediately intensify.

In the private cloud space, a comparable dynamic is already visible, but it is perceived differently.

While risks in the public cloud were addressed early on, the same development in the private cloud space is often still treated as purely technological evolution. The underlying dependency, however, is comparable.

  • So which measures are being taken today to actively manage this form of dependency?
  • Which strategies exist to realistically preserve switching options?
  • And to what extent are alternatives being evaluated while they can still be implemented with reasonable effort?

These questions can only be answered if customers and partners are aware of the underlying changes in the first place.

A look at procurement

A look at current tenders on simap.ch paints a clear picture. VMware is deeply anchored in the Swiss public sector. Numerous organizations extended or further expanded their existing environments in 2024 and 2025. Contract volumes run into the millions and in many cases span several years – often until 2028, 2029, or beyond.

Many of these decisions were made in a phase in which stability, plannability, and operational continuity were the priorities. Contract extensions offered short-term security, especially against the backdrop of changed licensing models and rising costs. As mentioned above, the aim was presumably to gain planning certainty, without the awareness that a new architecture and a new operating model will be imposed from October 2027.

At the same time, these decisions perpetuated an existing architecture. The result is not an immediate break but a gradual entrenchment.

Over several years, ties develop that become increasingly hard to change, technically and economically. Room for maneuver formally remains, but in practice it narrows.

Sources

Nutanix – The Questions Swiss VMware Customers Ask

In my first five months at Nutanix, I have had dozens of conversations across the Swiss market. From federal organizations to cantonal institutions, from service providers to highly regulated environments. On paper, these discussions look completely different – different architectures, different priorities, and different timelines.

It took me a while to realize it, but there was a clear pattern. Regardless of size or sector, the same underlying questions keep surfacing, whether we were talking about a 1'000-, 4'500-, or 20'000-core infrastructure. And more interestingly, most of these questions are not about features or technical capabilities; those only came up later in the discussions.

Most questions are about risk, cost and control, and sometimes about sovereignty. It all has to do with certainty, doubts, stability and predictability.

So, it's less about the available alternatives per se. Customers are trying to understand what staying actually means and what risk it implies.

1) Isn’t switching too risky?

This is one of the questions that appear very early when meeting prospects, sometimes right after the introduction, before any real discussion has started.

It’s a natural reaction. For a long time, staying on VMware was the safest choice and there was no real reason to reconsider it.

But VMware is not "VMware" anymore; it is Broadcom now. So, what many organizations are experiencing today is not instability in their infrastructure, but in the conditions around it. Not a single customer tells me that VMware is underperforming; everyone agrees it is great technology.

For many customers, especially those in regulated industries, it's about predictability and control. Staying with VMware is therefore no longer automatically the safest option.

What I see in practice is that IT organizations quickly move away from the idea of a disruptive "big bang" migration. Instead, they start thinking in phases and use cases and move workloads step by step. Systems run in parallel, and confidence builds gradually. The projects I have won so far are VDI and edge use cases; larger projects with larger infrastructure take more time.

So, what’s the learning? While I understand why customers ask “isn’t switching too risky”, it’s just the wrong question.

The better question would be: What’s the risk of staying and how do we move without taking unnecessary risk?

From there, the conversation almost inevitably moves to cost.

2) Is Nutanix really cheaper?

Sounds like a simple question, right? A number-to-number comparison, a classic price discussion. It's anything but simple.

Because what most organizations are comparing is not two equivalent scenarios. They are comparing what they used to pay for VMware with what they might pay for something new. And that creates a distorted baseline from the very beginning. With Broadcom, at least in Switzerland, VMware vSphere Foundation (VVF) and standalone vSphere Enterprise Plus are no longer available. You can only get VMware vSphere Standard (VVS) or VMware Cloud Foundation (VCF).

On paper, that sounds like simplification; in practice, it introduces a different kind of complexity. Because suddenly, organizations are not just buying what they need. They are buying what is included.

In many of the discussions I have had, customers admit that they are not using the full breadth of the VCF stack (even though they have the VCF subscription). Many of those VMware customers only use vSphere, some of them vSAN, and most of them Aria Operations. No NSX and no Aria Automation. And if you need advanced security features like micro-segmentation, you need an add-on for $200 (list price).

You can compare Nutanix against the entire VCF bundle. In that case, the question becomes “Can Nutanix replace everything that is included?”.

Or you can compare Nutanix against what you are actually using today. And suddenly, the picture changes. Dramatically.

Both perspectives are valid, but they lead to very different conclusions – commercially and strategically.
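A simple back-of-the-envelope model shows how strongly the chosen baseline shapes the result. All figures below are hypothetical placeholders, not list prices; the point is the comparison logic, not the numbers.

```python
# Hypothetical figures to illustrate the two comparison baselines.
# None of these numbers are real list prices.
CORES = 1_000
VCF_BUNDLE_PER_CORE = 350        # everything included, needed or not

ALT_MODULES_PER_CORE = {         # hypothetical modular pricing
    "virtualization": 150,
    "storage": 60,
    "operations": 40,
    "networking": 60,            # only bought if actually needed
    "automation": 40,
}
ACTUALLY_USED = ["virtualization", "storage", "operations"]

vcf_cost = CORES * VCF_BUNDLE_PER_CORE
alt_full = CORES * sum(ALT_MODULES_PER_CORE.values())
alt_used = CORES * sum(ALT_MODULES_PER_CORE[m] for m in ACTUALLY_USED)

print(f"VCF bundle (everything included): {vcf_cost:,}")   # 350,000
print(f"Alternative, full stack:          {alt_full:,}")   # 350,000
print(f"Alternative, only what is used:   {alt_used:,}")   # 250,000
```

Compared bundle-to-bundle, the two platforms can look nearly identical; compared against what is actually used, the picture changes.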

Let me rephrase the question, which now becomes: Why am I paying for functionality I don’t need?

This is something I explored in more detail in my recent article "Beyond the Price Tag – Why Organizations Choose Nutanix". The core idea is simple. Cost is rarely just about the price per core or the discount level. In the end, it is about how closely your investment aligns with your actual requirements.

With Nutanix, you don't start with everything and try to justify it afterwards; you start with what you actually need. And then you expand, step by step, where it creates value.

It sounds like a small difference, but in practice it changes the entire commercial logic.

3) Don’t we end up paying twice during the migration?

It’s a fair concern. Running two environments in parallel is often unavoidable during a transition. Without specific support, that can mean carrying two full licensing models at the same time.

This is exactly where Nutanix has taken a very pragmatic approach. Through its migration programs, customers can receive up to one year of Nutanix licensing at no additional cost during the transition period.

That doesn’t eliminate the complexity of a migration, but it removes a key barrier. It gives organizations time. And most importantly, it allows them to do this without being penalized financially for taking a careful approach.
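To see what that one-year bridge means financially, consider a simple overlap calculation. The up-to-one-year program detail comes from the paragraph above; the run-rate figures and the parallel-run duration are hypothetical.

```python
# Hypothetical overlap-cost sketch for a parallel run during migration.
# Monthly figures are placeholders, not real quotes.
months_parallel = 12
vmware_monthly = 40_000      # hypothetical remaining VMware run-rate
nutanix_monthly = 30_000     # hypothetical Nutanix licensing run-rate

without_program = months_parallel * (vmware_monthly + nutanix_monthly)
bridged = min(months_parallel, 12)   # program covers up to one year
with_program = (months_parallel * vmware_monthly
                + (months_parallel - bridged) * nutanix_monthly)

print(f"Parallel run, no migration program:  {without_program:,}")  # 840,000
print(f"With one year of bridged licensing:  {with_program:,}")     # 480,000
```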

4) We don’t have Nutanix skills

Over the past months, one pattern has become very clear. Broadcom is not just repositioning VMware commercially; it is also standardizing it architecturally. Everything points in the same direction: VMware Cloud Foundation is no longer an option. It is the only option.

And if you look at the publicly available information, this trajectory becomes even more tangible. Current indications suggest that support for vSphere 8.x and VCF versions not aligned with vSphere 9 will eventually come to an end. This effectively means that, from around October 2027 onwards, unless Broadcom changes course again, customers will only be able to buy and deploy VCF 9.x.

In other words, the path forward is already being defined.

Now, to be fair, there are customers for whom this aligns well. Organizations that have already embraced VCF and invested in NSX, automation, and the broader stack – for them, this is the continuation of a journey they have consciously chosen.

But they are not the majority.

Most environments I see across Switzerland are still far from a fully adopted VCF architecture. They are running vSphere at scale, often with external storage and networking, established operational models, and teams that are deeply skilled in what they do today.

And this is exactly where the concern about “Nutanix skills” usually comes up. “Do we have the people for this?”

The reality is that Nutanix does not require you to throw away everything your teams have learned over the past 10 or 15 years. Quite the opposite.

The fundamental principles remain the same. You are still running virtual machines, designing clusters, ensuring availability, managing storage policies, operating networks, and securing workloads. Concepts like high availability, lifecycle management, capacity planning, and operational governance don’t disappear.

In fact, many VMware engineers adapt to Nutanix much faster than expected. Why? Because Nutanix deliberately simplified the operational model. Instead of stitching together compute, storage, and networking from different layers and tools, Nutanix brings these capabilities into a single, integrated platform.

[Image: Nutanix Prism Central]

So yes, adopting Nutanix requires learning. But let's be honest, so does adopting VCF. Moving to VCF is not just a licensing change; it is an operational transformation. VCF also means new skills, new processes, new dependencies, and a new operational model.

So while Broadcom’s vision is actually quite clear – and, in many ways, understandable – it comes with consequences. The vision is to deliver a private cloud platform and a model where individual product names fade into the background, and what matters are capabilities. Compute, storage, networking, security, and automation are delivered as an integrated service layer, and VMware is becoming more like a public cloud. Conceptually, that makes sense to me.

You are adopting a new operating paradigm. The only real advantage compared to moving to a public cloud like Azure is that your virtual machine format remains the same. Your VMs don't need to be converted. But beyond that, the effort is comparable:

  • You still need to redesign your architecture
  • You still need to rethink networking and security
  • You still need to retrain your teams
  • You still need to plan and execute a structured migration

And this is exactly where the conversation reconnects with the themes we discussed earlier (cost, risk, control).

5) Isn’t Nutanix doing the same as Broadcom?

Yes, Nutanix absolutely offers a private cloud platform that can run in the data center, at the edge, or in the public cloud. So, in terms of vision, both VMware (under Broadcom) and Nutanix are heading towards a similar destination: A cloud-like operating model for on-premises environments.

Before the Broadcom era, VMware was known for something very specific: Modularity

With Nutanix, you can absolutely consume the full private cloud platform. But you don’t have to.

Nutanix continues to deliver a modular set of software building blocks that can be used independently or as a complete stack. The Nutanix Cloud Platform (NCP) includes multiple components such as Nutanix Cloud Infrastructure (NCI), Nutanix Cloud Manager (NCM), Unified Storage (NUS), Database Service (NDB), Nutanix Kubernetes Platform (NKP) and more. Each is available as a separate option depending on customer needs: https://www.nutanix.com/products/cloud-platform/software-options

Organizations can pick and choose exactly what they want to deploy, as the sketch after this list illustrates:

  • A VDI environment? Use NCI‑VDI
  • An edge cluster with minimal footprint? Use NCI‑Edge for small‑scale, distributed deployments
  • A full enterprise platform spanning multiple sites? Deploy NCI Ultimate, NCM, Unified Storage, and Database Service as needed
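Here is a minimal sketch of that pick-and-choose logic, using the component names from the portfolio above. The use-case-to-component pairings are examples, not an official Nutanix sizing guide.

```python
# Illustrative mapping from use case to Nutanix components. The pairings
# are examples for this article, not an official Nutanix sizing guide.
PORTFOLIO = {
    "NCI": "Nutanix Cloud Infrastructure",
    "NCM": "Nutanix Cloud Manager",
    "NUS": "Nutanix Unified Storage",
    "NDB": "Nutanix Database Service",
    "NKP": "Nutanix Kubernetes Platform",
}

DEPLOYMENTS = {
    "vdi": ["NCI"],                           # e.g. the NCI-VDI edition
    "edge": ["NCI"],                          # e.g. NCI-Edge, small footprint
    "enterprise": ["NCI", "NCM", "NUS", "NDB"],
}

def bill_of_materials(use_case: str) -> list[str]:
    """Return only the components a given use case actually needs."""
    return [f"{code}: {PORTFOLIO[code]}" for code in DEPLOYMENTS[use_case]]

for item in bill_of_materials("enterprise"):
    print(item)
```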

6) Is Nutanix enterprise-ready?

A few years ago, that would have been a fair and important concern.

Back then, Nutanix was still perceived by many as a strong challenger. Innovative, yes. Promising, definitely. But not always seen as the default choice for the most critical, large-scale environments.

Interestingly, we rarely ask the same question about other platforms. Is Hyper-V enterprise-ready? Is Azure Local enterprise-ready? What about newer or increasingly popular options like Proxmox?

The answer, in most cases, is simply assumed. And yet, if we take a step back, the question itself is more about perception.

Because Nutanix has been in the market for well over a decade. Its hypervisor, AHV, has been running production workloads for more than ten years. It is not new, it is not experimental, it is not an emerging technology trying to find its place.

It is established!

And that is reflected not only in customer adoption, but also in how the market evaluates the platform. Nutanix has consistently been positioned in the top-right quadrant of the Gartner Magic Quadrant for Distributed Hybrid Infrastructure.

[Image: 2025 Gartner® Magic Quadrant™ for Distributed Hybrid Infrastructure]

By any objective measure, Nutanix has already crossed the “enterprise-ready” threshold a long time ago.

7) Are we just replacing one dependency with another?

It’s a fair question, and probably one of the most important ones in the entire discussion. Because if the last few years have shown anything, it’s that lock-in is no longer an abstract concept.

No platform is completely free of dependencies. There is no such thing as a truly neutral infrastructure stack. Every decision introduces some form of coupling – to a vendor, to an architecture, to an operating model.

Dependencies exist, always. That’s not the important part. It’s about where they sit and how much control you retain over them. And this is exactly where the conversation becomes more interesting.

As discussed earlier, architectures are becoming more opinionated, more predefined, more aligned to a single operating model, which means the dependency moves down into the foundation.

Nutanix, in contrast, shifts that balance towards the application layer. And this is where Kubernetes becomes important.

Because once applications are containerized and orchestrated through Kubernetes, the underlying infrastructure starts to matter less. Not irrelevant, but less dominant. Workloads become more portable, deployment models become more consistent, and the ability to move between environments becomes an option.

Nutanix Kubernetes Platform (NKP) provides an integrated way to run and manage Kubernetes across environments, without forcing customers into a specific cloud or infrastructure model. It aligns with the broader idea of hybrid and multi-cloud, but in a way that keeps operational control with the customer.

Nutanix Kubernetes Platform Open Source

Replacing one platform with another does not inherently solve lock-in. But repositioning where dependencies sit – that is ultimately what many organizations are looking for. Again, it's about having the ability to stay in control. Because NKP is not tied to a single infrastructure backend, as the sketch after this list shows:

  • It can run on Nutanix
  • It can run on VMware infrastructure
  • It can run in public cloud environments
  • It can even run directly on bare metal
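A minimal sketch of what that portability means in practice: the same Deployment definition applied to clusters on different backends via the official Kubernetes Python client. The kubeconfig context names are hypothetical; the point is that nothing in the workload definition references the underlying infrastructure.

```python
# Sketch: one workload definition, three infrastructure backends.
# Requires the official client: pip install kubernetes
# The kubeconfig context names below are hypothetical.
from kubernetes import client, config

def make_deployment() -> client.V1Deployment:
    labels = {"app": "demo"}
    container = client.V1Container(name="demo", image="nginx:1.27")
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

# Identical manifest, different backends (on-prem, public cloud, bare metal):
for ctx in ["nkp-onprem", "nkp-aws", "nkp-baremetal"]:
    config.load_kube_config(context=ctx)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=make_deployment()
    )
```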

Compare that to more tightly integrated approaches like the vSphere Kubernetes Service (VKS). VKS is deeply embedded into the vSphere ecosystem. It works well as long as you remain within that environment. But it is, by design, not portable beyond it. And that brings us back to the core point.

Lock-in is not eliminated by choosing a different vendor. It is reduced when your most critical layers are no longer restricted to a single environment.

How easily can you change tomorrow?

8) What if Nutanix gets acquired as well?

Another question has started to surface more frequently. It usually comes a bit later in the conversation, once the technical fit is understood, and once the commercial discussion has taken shape.

It’s a question that reflects the current mood in the market, and I have to admit it’s a valid one. Because the last few years have shown that ownership changes can have real consequences. They can reshape pricing models, redefine product strategies, and fundamentally alter the relationship between vendor and customer.

This question often leads to the wrong conclusion. We have to understand that the issue with VMware was not the acquisition itself. Acquisitions happen and they are part of how the technology industry evolves. The real issue was the impact that followed:

  • The shift in pricing
  • The restructuring of packaging
  • The reduced flexibility
  • And, ultimately, the feeling among many customers that control has moved away from them

That is what triggered the current wave of re-evaluation. So, when customers ask whether the same could happen elsewhere, they are not really asking about ownership. They are asking about exposure.

If we follow that line of thinking consistently, the question doesn't stop at Nutanix. You could ask the same about almost any platform in the market. What if Proxmox gets acquired? What if a hyperscaler changes its pricing model or service terms? What if an open source project shifts direction because of new commercial backing?

There is no scenario in which a platform is completely immune to change. And that is exactly my point. Trying to eliminate that risk entirely is not realistic.

9) AHV is not open source, is that a risk?

Nutanix’s Acropolis Hypervisor (AHV) is built on KVM, one of the most widely used open-source hypervisors out there. The foundation is open, and what Nutanix does is take that foundation and turn it into something that is actually operable at scale.

Open source sounds like freedom. And in some cases, it absolutely is. But in many real-world environments, it also means something else:

  • More components
  • More integration work
  • More lifecycle management
  • More responsibility on your own teams, especially at the infrastructure layer

Running a fully open source stack often means you are effectively building your own platform. You are combining a hypervisor, storage, networking, automation, and then making sure everything works together, stays updated, remains secure, and is supported when something breaks. That can be the right approach, but only if you actually want to operate like that.

At the infrastructure layer, especially in virtualization, open source rarely creates meaningful strategic advantage. The hypervisor has become a mature, almost commoditized component. Whether it's KVM, AHV, Hyper-V, or ESXi, they all solve the same fundamental problem, and they solve it well.

Open source creates the most value where differentiation happens. And that is not at the bottom of the stack. It's at the top, at the application layer. This could be Kubernetes or building open-source applications (think of OpenDesk or Nextcloud).

10) What about sovereignty?

Sovereignty is not a feature you can simply “add” to a platform. And more importantly, it’s not just a hyperscaler problem (anymore). This is something I already explored in a previous article – the idea that dependency doesn’t suddenly disappear just because infrastructure runs on-premises or in a private cloud. You can still be deeply dependent on a vendor’s licensing model, roadmap, and architectural decisions.

There is one dimension of sovereignty that stands out above all others in current customer conversations: Economic sovereignty.

For many existing Broadcom customers, this has become the most immediate and tangible pain point:

  • Not data residency
  • Not compliance
  • Not even technical capability

But cost predictability – and the loss of it. And that brings us back to the platform.

The ability to maintain economic sovereignty is directly linked to how flexible your architecture is. If your platform enforces a predefined bundle, a fixed operating model, and limited alternatives, then your room to negotiate and adapt becomes smaller over time. If, on the other hand, your platform allows you to scale components independently, choose where workloads run, and avoid unnecessary dependencies, then you retain leverage.

Nutanix runs on-premises and in service provider environments. It also runs in public clouds (Nutanix NC2).

With the Nutanix Elevate Service Provider Program (NESPP), Nutanix enables managed service providers to build and operate sovereign cloud platforms themselves.

If your platform gives you flexibility, technically and commercially, then sovereignty becomes achievable.

Not VMware versus Nutanix

And this is ultimately where the entire discussion converges. Because despite all the technical arguments, the pricing models, the migration paths, and the architectural considerations, this is not a story about VMware versus Nutanix. What I see in the market right now is something different – a shift in how organizations relate to their infrastructure:

  • Control vs. dependency
  • Predictability vs. uncertainty
  • Choice vs. constraint

As I said before, dependency, in this context, is about exposure. Control, on the other hand, is not about owning everything or building everything yourself. And predictability (like trust), once lost, is difficult to rebuild.

If we help customers to ask different questions, the conversations change. It becomes less about selecting a product and more about defining a direction.

So, is your plan to adapt to change or to shape it?

Multi-cloud is normal in public cloud. Why is "single-cloud" still normal in private cloud?

If you ask most large organizations why they use more than one public cloud, the answers are remarkably consistent. It is not fashion, and it is rarely driven by engineering curiosity. It is risk management and a best-of-breed approach.

Enterprises distribute workloads across multiple public clouds to reduce concentration risk, comply with regulatory expectations, preserve negotiation leverage, and remain operationally resilient in the face of outages that cannot be mitigated by adding another availability zone. In regulated industries, especially in Europe, this thinking has become mainstream. Supervisors explicitly expect organizations to understand their outsourcing dependencies, to manage exit scenarios, and to avoid structural lock-in where it can reasonably be avoided.

Now apply the same logic one layer down into the private cloud world, and the picture changes dramatically.

Across industries and geographies, a significant majority of private cloud workloads still run on a single private cloud platform. In practice, this platform is often VMware (by Broadcom). Estimates vary, but the dominance itself is not controversial. In many enterprises, approximately 70 to 80 percent of virtualized workloads reside on the same platform, regardless of sector.

If the same concentration existed in the public cloud, the discussion would be very different. Boards would ask questions, regulators would intervene, architects would be tasked with designing alternatives. Yet in private cloud infrastructure, this concentration is often treated as normal, even invisible.

Why?

Organizations deliberately choose multiple public clouds

Public cloud multi-cloud strategies are often oversimplified as “fear of lock-in”, but that misses the point.

The primary driver is concentration risk. When critical workloads depend on a single provider, certain failure modes become existential. Provider-wide control plane outages, identity failures, geopolitical constraints, or contractual disputes cannot be mitigated by technical architecture alone. Multi-cloud does not eliminate risk, but it limits the blast radius.

Regulation reinforces this logic. European banking supervision, for example, treats cloud as an outsourcing risk and expects institutions to demonstrate governance, exit readiness, and operational resilience. An exit strategy that only exists on paper is increasingly viewed as insufficient. There are also pragmatic reasons. Jurisdictional considerations, data protection regimes, and shifting geopolitical realities make organizations reluctant to anchor everything to a single legal and operational framework. Multi-cloud (or hybrid cloud) becomes a way to keep strategic options open.

And finally, there is negotiation power. A credible alternative changes vendor dynamics. Even if workloads never move, the ability to move matters.

This mindset is widely accepted in the public cloud. It is almost uncontroversial.

How the private cloud monoculture emerged

The dominance of a single private cloud platform did not happen by accident, and it did not happen because enterprises were careless.

VMware earned its position over two decades by solving real problems early and building an ecosystem that reinforced itself. Skills became widely available, tooling matured, and operational processes stabilized. Backup, disaster recovery, monitoring, security controls, and audit practices are all aligned around a common platform. Over time, the private cloud platform evolved into more than just software. It became the operating model.

And once that happens, switching becomes an organizational transformation.

Private cloud decisions are also structurally centralized. Unlike public cloud consumption, which is often decentralized across business units, private cloud infrastructure is intentionally standardized. One platform, one set of guardrails, one way of operating. From an efficiency and governance perspective, this makes sense. From a dependency perspective, it creates a monoculture.

For years, this trade-off was acceptable because the environment was stable, licensing was predictable, and the ecosystem was broad. The rules of the game did not change dramatically.

That assumption is now being tested.

What has changed is not the technology, but the dependency profile

VMware remains a technically strong private cloud platform. That is not in dispute. What has changed under Broadcom is the commercial and ecosystem context in which the platform operates. Infrastructure licensing has shifted from a largely predictable, incremental expense into a strategically sensitive commitment. Renewals are no longer routine events. They become moments of leverage.

At the same time, changes in partner models and go-to-market structures affect how organizations buy, renew, and support their private cloud infrastructure. When the surrounding ecosystem narrows, dependency increases, even if the software itself remains excellent.

This is not a judgment on intent or quality. It is just a structural observation. When one private cloud platform represents the majority of an organization’s infrastructure, any material change in pricing, licensing, or ecosystem access becomes a strategic risk by definition.

The real issue is not lock-in, but the absence of a credible exit

Most decision-makers do not care about hypervisors; they care about exposure. The critical question is not whether an organization plans to leave its existing private cloud platform. The question is whether it could leave, within a timeframe the business could tolerate, if it had to.

In many cases, the honest answer is no.

Economic dependency is the first dimension. When a single vendor defines the majority of your infrastructure cost base, budget flexibility shrinks.

Operational dependency is the second. If tooling, processes, security models, and skills are deeply coupled to one platform, migration timelines stretch into years. That alone is a risk, even if no migration is planned.

Ecosystem dependency is the third. Fewer partners and fewer commercial options reduce competitive pressure and resilience.

Strategic dependency is the fourth. The private cloud platform is increasingly becoming the default landing zone for everything that cannot go to the public cloud. At that point, it is no longer just infrastructure. It is critical organizational infrastructure.

Public cloud regulators have language for this. They call it outsourcing concentration risk. Private cloud infrastructure rarely receives the same attention, even though the consequences can be comparable.

Concentration risk in the public sector – When dependency is financed by taxpayers

In the public sector, concentration risk is not only a technical or commercial question but also a governance question. Public administrations do not invest their own capital. Infrastructure decisions are financed by taxpayers, justified through public procurement, and expected to remain defensible over long time horizons. This fundamentally changes the risk calculus.

When a public institution concentrates the majority of its private cloud infrastructure on a single platform, it is committing public funds, procurement structures, skills development, and long-term dependency to one vendor's strategic direction. What does it mean for a nation when 80 or 90 percent of its public sector depends on one single vendor?

That dependency can last longer than political cycles, leadership changes, or even the original architectural assumptions. If costs rise, terms change, or exit options narrow, the consequences are borne by the public. This is why procurement law and public sector governance emphasize competition, supplier diversity, and long-term sustainability. In theory, these principles apply equally to private cloud platforms. In practice, historical standardization decisions often override them.

There is also a practical constraint. Public institutions cannot move quickly. Budget cycles, tender requirements, and legal processes mean that correcting structural dependency is slow and expensive once it is entrenched.

Seen through this lens, private cloud concentration risk in the public sector is not a hypothetical problem. It is a deferred liability.

Why organizations hesitate to introduce a new or second private cloud platform

If concentration risk is real, why do organizations not simply add a second platform?

Because fragmentation is also a risk.

Enterprises do not want five private cloud platforms. They do not want duplicated tooling, fragmented operations, or diluted skills. Running parallel infrastructures without a coherent operating model creates unnecessary cost and complexity, without addressing the underlying problem. This is why most organizations are not looking for “another hypervisor”. They are seeking a second private cloud platform that preserves the VM-centric operating model, integrates lifecycle management, and can coexist without necessitating a redesign of governance and processes.

The main objective here is credible optionality.

A market correction – Diversity returns to private cloud infrastructure

One unintended consequence of Broadcom’s acquisition of VMware is that it has reopened a market that had been largely closed for years. For a long time, the conversation about private cloud infrastructure felt settled. VMware was the default, alternatives were niche, and serious evaluation was rare. That has changed.

Technologies that existed on the margins are being reconsidered. Xen-based platforms are evaluated again where simplicity and cost control dominate. Proxmox is discussed more seriously in environments that value open-source governance and transparency. Microsoft Hyper-V is re-examined where deep Microsoft integration already exists.

At the same time, vendors are responding. HPE Morpheus VM Essentials reflects a broader trend toward abstraction and lifecycle management that reduces direct dependency on a single virtualization layer.

Nutanix appears in this context not as a disruptive newcomer, but as an established private cloud platform that fits a diversification narrative. For some organizations, it represents a way to introduce a second platform without abandoning existing operations or retraining entire teams from scratch.

None of these options is a universal replacement. That is not the point. The point is that choice has returned.

This diversity is healthy. It forces vendors to compete on clarity, pricing, ecosystem openness, and operational value. It forces customers to revisit assumptions that have gone unchallenged for years and it reintroduces architectural optionality into a layer of infrastructure that had become remarkably static.

This conversation matters now

For years, private cloud concentration risk was theoretical. Today, it is increasingly tangible.

The combination of high platform concentration, shifting commercial models, and narrowing ecosystems forces organizations to re-examine decisions they have not questioned in over a decade. Not because the technology suddenly failed, but because dependency became visible.

The irony is that enterprises already know how to reason about this problem. They apply the same logic every day in public cloud.

The difference is psychological. Private cloud infrastructure feels “owned”. It runs on-premises and it feels sovereign. That feeling can be partially true, but it can also obscure how much strategic control has quietly shifted elsewhere.

A measured conclusion

This is not a call for mass migration away from VMware. That would be reactive and, in many cases, irresponsible.

It is a call to apply the same discipline to private cloud platforms that organizations already apply to public cloud providers. Concentration risk does not disappear because infrastructure runs in a data center.

So, if the terms change, do you have a credible alternative?

Nutanix should not be viewed primarily as a replacement for VMware

Public sector organizations rarely change infrastructure platforms lightly. Stability, continuity, and operational predictability matter more than shiny, modern solutions. Virtual machines became the dominant abstraction because they allowed institutions to standardize operations, separate applications from hardware, and professionalize IT operations over the long term.

Over many years, VMware became synonymous with this VM-centric operating model, as it provided a coherent, mature, and widely adopted implementation of virtualized infrastructure. Choosing VMware was, for a long time, a rational and defensible decision.

Crucially, the platform was modular. Organizations could adopt it incrementally, integrate it with existing tools, and shape their own operating models on top of it. This modularity translated into operational freedom. Institutions retained the ability to decide how far they wanted to go, which components to use, and which parts of their environment should remain under their direct control. These characteristics explain why VMware became the default choice for so many public institutions. It aligned well with the values of stability, proportionality, and long-term accountability.

The strategic question public institutions face today is not whether that decision was wrong, but whether they can learn from it. We need to ask ourselves whether the context around that decision has changed and whether continuing along the same platform path still preserves long-term control, optionality, and state capability.

From VM-centric to platform-path dependent

It is important to be precise in terminology. Most public sector IT environments are not VMware-centric by design. They are VM-centric. Virtual machines are the core operational unit, deeply embedded in processes, tooling, skills, and governance models. This distinction is very important. A VM-centric organization can, in principle, operate on different platforms without redefining its entire operating model. A VMware-centric organization, by contrast, has often moved further down a specific architectural path by integrating tightly with proprietary platform services, management layers, and bundled stacks that are difficult to disentangle later.

This is where the strategic divergence begins.

Over time, VMware’s platform has evolved from a modular virtualization layer into an increasingly integrated software-defined data center (SDDC) and VCF-oriented (VMware Cloud Foundation) stack. That evolution is not inherently negative. Integrated platforms can deliver efficiencies and simplified operations, but they also introduce path dependency. Decisions made today shape which options remain viable tomorrow.

So, the decisive factor is not pricing. Prices change. For public institutions, this is a governance issue (not a technical one).

There is a significant difference between organizations that adopted VMware primarily as a hypervisor platform and those that fully embraced the SDDC or VCF vision.

Institutions that did not fully commit to VMware’s integrated SDDC approach often still retain architectural freedom. Their environments are typically characterized by:

  • A strong focus on virtual machines rather than tightly coupled platform services
  • Limited dependency on proprietary automation, networking, or lifecycle tooling
  • Clear separation between infrastructure, operations, and higher-level services

For these organizations, the operational model remains transferable. Skills, processes, and governance structures are not irreversibly bound to a single vendor-defined stack. This has two important consequences.

First, technical lock-in can still be actively managed. The platform does not yet dictate the future architecture. Second, the total cost of change remains realistic. Migration becomes a controlled evolution rather than a disruptive transformation.

In other words, the window for strategic choice is still open.

Why this moment matters for the public sector

Public institutions operate under conditions that differ fundamentally from those of private enterprises. Their mandate is not limited to efficiency, competitiveness, or short-term optimization. Instead, they are entrusted with continuity, legality, and accountability over long time horizons. Infrastructure decisions made today must still be explainable years later, often to different audiences and under very different political circumstances. They must withstand audits, parliamentary inquiries, regulatory reviews, and shifts in leadership without losing their legitimacy.

This requirement fundamentally changes how technology choices must be evaluated. In the public sector, infrastructure is an integral part of the institutional framework that enables the state to function effectively. Decisions are therefore judged not only by their technical benefits and performance, but by their long-term defensibility. A solution that is efficient today but difficult to justify tomorrow represents a latent risk, even if it performs flawlessly in day-to-day operations.

It is within this context that the concept of digital sovereignty has moved from abstraction to obligation. Governments increasingly define digital sovereignty not as isolation or technological nationalism, but as the capacity to maintain control over, and freedom of action within, their digital environments. This includes the ability to reassess vendor relationships, adapt sourcing strategies, and respond to geopolitical, legal, or economic shifts without being forced into reactive or crisis-driven decisions.

Digital sovereignty, in this sense, is closely tied to governance and control. It is about ensuring that institutions retain the ability to make informed, deliberate choices over time. That ability depends less on individual technologies and more on the structural properties of the platforms on which those technologies are built. When platforms are designed in ways that limit flexibility, they quietly constrain future options, regardless of their current performance or feature set.

Platform architectures that reduce reversibility are particularly problematic in the public sector. Reversibility does not imply constant change, nor does it require frequent platform switches. It simply means that change remains possible without disproportionate disruption. When an architecture makes it technically or organizationally prohibitive to adjust course, it creates a form of lock-in that extends beyond commercial dependency into the realm of institutional risk.

Even technically advanced platforms can become liabilities if they harden decisions that should remain open. Tight coupling between components, inflexible operational models, or vendor-defined evolution paths may simplify operations in the short term, but they do so at the cost of long-term flexibility. In public institutions, where the ability to adapt is inseparable from democratic accountability and legal responsibility, this trade-off must be examined with particular care.

Ultimately, digital sovereignty in the public sector is about ensuring that those dependencies remain governable. Platforms that preserve reversibility support this goal by allowing institutions to evolve deliberately, rather than react under pressure. Platforms that erode it may function well today, but they quietly accumulate strategic risk that only becomes visible when options have already narrowed.

Seen through this lens, digital sovereignty is a core governance requirement, embedded in the responsibility of public institutions to remain capable, accountable, and in control of their digital future.

Nutanix as a strategic inflection point

This is why Nutanix should not be viewed primarily as a replacement for VMware. Framing it as such immediately steers the discussion in the wrong direction. Replacements imply disruption, sunk costs, and, perhaps most critically in public-sector and enterprise contexts, an implicit critique of past decisions. Infrastructure choices, especially those made years ago, were often rational, well-founded, and appropriate for their time. Suggesting that they now need to be “replaced” risks triggering defensiveness and obscures the real strategic question.

More importantly, the replacement narrative fails to capture what Nutanix actually represents for VM-centric organizations. Nutanix does not demand a wholesale change in operating philosophy. It does not require institutions to abandon virtual machines, rewrite operational playbooks, or dismantle existing governance structures. On the contrary, it deliberately aligns with the VM-centric operating model that many public institutions and enterprises have refined over years of practice.

For this reason, Nutanix is better understood as a strategic inflection point. It marks a moment at which organizations can reassess their platform trajectory without invalidating the past. Virtual machines remain first-class citizens, operational practices remain familiar, and roles, responsibilities, and control mechanisms continue to function as before. The day-to-day reality of running infrastructure does not need to change.

What does change is the organization’s strategic posture.

In essence, Nutanix is about restoring the ability to choose. In public-sector and enterprise environments, that ability is often more valuable than any individual feature or performance metric.

The cost of change versus the cost of waiting

A persistent misconception in infrastructure strategy is the assumption that platform change is, by definition, prohibitively expensive. This belief is understandable. Large-scale IT transformations are often associated with complex migration projects, organizational disruption, and unpredictable outcomes. These associations create a strong incentive to delay any discussion of change for as long as possible.

Yet this intuition is misleading. In practice, the cost of change does not remain constant over time. It increases the longer the architectural lock-in is allowed to deepen.

Platform lock-in rarely occurs as an intentional choice; it accumulates gradually. Additional services are adopted for convenience, tooling becomes more tightly integrated, and operational processes begin to assume the presence of a specific platform. Over time, what was once a flexible foundation hardens into an implicit dependency. At that point, changing direction no longer means replacing a component; it means changing an entire operating model.

Organizations that remain primarily VM-centric and act early are in a very different position. When virtual machines remain the dominant abstraction and higher-level platform services have not yet become deeply embedded, transitions can be managed incrementally. Workloads can be evaluated in stages. Skills can be developed alongside existing operations. Governance and procurement processes can adapt without being forced into emergency decisions.

In these cases, the cost of change is not trivial, but it is proportionate. It reflects the effort required to introduce an alternative (modular) platform, not the effort required to escape a tightly coupled ecosystem.

[Image: VMware to Nutanix windows]

By contrast, organizations that postpone evaluation until platform constraints become explicit often find themselves facing a very different reality. When licensing changes, product consolidation, or strategic shifts expose the depth of dependency, the room for change has already narrowed. Timelines become compressed, options shrink, and decisions that should have been strategic become reactive.

The cost explosion in these situations is rarely caused by the complexity of the alternative platform. It is caused by the accumulated weight of the existing one. Deep integration, bespoke operational tooling, and platform-specific governance models all add friction to any attempt at change. What might have been a manageable transition years earlier becomes a high-risk transformation project.

This leads to a paradox that many institutions only recognize in hindsight. The best time to evaluate change is precisely when there is no immediate pressure to do so. Early evaluation is a way to preserve choice. It allows organizations to understand their true dependencies, test assumptions, and, where possible, maintain negotiation leverage.

Waiting, by contrast, does not preserve stability. It often preserves only the illusion of stability, while the cost of future change continues to rise in the background.

For public institutions in particular, this distinction is critical. Their mandate demands foresight, not just reaction. Evaluating platform alternatives before change becomes unavoidable means taking responsibility rather than deferring it.

A window that will not stay open forever

Nutanix should not be framed as a rejection of VMware, nor as a corrective to past decisions. It should be understood as an opportunity for VM-centric public institutions to reassess their strategic position while they still have the flexibility to do so.

Organizations that did not fully adopt VMware’s SDDC approach are in a particularly strong position. Their operational models are portable, their technical lock-in is still manageable, and their total cost of change remains proportionate.

For them, the question is not whether they must change today. It is whether they want to preserve the ability to decide tomorrow.

And in the public sector, preserving that ability is a governance responsibility.

When “Staying” Becomes a Journey – And Why Nutanix Lets You Take Back Control

There are moments in IT where the real disruption is not the change you choose, but the change that quietly happens around you. Many VMware customers find themselves in exactly such a moment. On the surface, everything feels familiar. The same hypervisor, the same vendors, the same vocabulary. But underneath that surface, something more fundamental is shifting: Broadcom’s new licensing and product model has turned VMware’s future into a one-way street, a gradual but unmistakable movement toward VMware Cloud Foundation (VCF).

What makes this moment so tricky is the illusion it creates. Because the names look the same, many organizations convince themselves that staying with VMware means avoiding change. They assume the path ahead is simply the continuation of the path behind. Yet the platform they are moving toward does not behave like the platform they came from. VCF 9 is a different way of running private cloud: a different architecture, a different operational model, and a different set of dependencies and constraints.

Once you see this clearly, the situation becomes easier to understand. Even if you stay with VMware, you are moving. The absence of physical distance does not mean the absence of migration. What changes is not the location of your workloads, but the world those workloads inhabit.

And that world looks much more like a cloud transition than like a traditional upgrade.

This is the first truth enterprises need to accept: it is still a migration.

The Subtle Shift From Upgrade to Replatforming

VCF 9 carries its own gravity. It reshapes how the environment must be designed, how networking is stitched together, how lifecycle management works, how domains are laid out, how automation behaves, and how operations are structured. It forces full-stack adoption, even if your organization only needs part of the stack. And once the platform becomes prescriptive, you must either adopt its assumptions or fight against them.

If this exact level of change were introduced by a hyperscaler, nobody would hesitate to call it a cloud migration. It would come with discovery workshops, architecture reviews, dependency mapping, proof-of-concepts, testing phases, retraining, risk assessments, and new governance. But because the new platform still carries the VMware name, some organizations treat it as a large patch. Which it clearly is not.

This is where many stumble. An upgrade assumes continuity. A migration assumes transformation. VCF 9 sits firmly on the transformation side of that spectrum. Treating it as anything less increases risk, cost and frustration.

In other words, the work is the same work you would do for a cloud move. Only the destination changes.

Complexity You Did Not Ask For

One of the most overlooked consequences of this shift is the gradual increase in complexity. The move to a full-stack VCF world comes with the same architectural side effects you would expect when adopting any complex platform. More components, more integration points, more rules, more interdependencies, more expertise required to keep things stable.

Organizations want simplicity, but what they inherit here is the opposite. You pay for that complexity in architecture that becomes harder to evolve, in operations that require more coordination, in outages that take longer to troubleshoot, in people who must maintain increasingly fragile mental maps, and in costs that rise simply because the platform demands it.

And this is where the forced nature of the move becomes visible. You are inheriting complexity because the vendor has decided the portfolio must move in that direction. This is the difference between transformation that serves your strategy and transformation that serves someone else’s.

One Migration You Cannot Avoid, One Migration You Can Choose

At some point, every organization reaches a moment where movement is no longer a matter of preference but of circumstance. The transition to VCF 9 is exactly that kind of moment. Once this becomes clear, the nature of the decision changes. You stop focusing on how to avoid disruption and start asking a more strategic question: If we are investing the time, energy, and attention anyway, where should this effort lead?

VCF 9 is one possible destination. And it may very well be the right choice for some enterprises. But the key is that it should be a choice and not an automatic continuation of the past.

Customers need a model where the effort invested in migration pays back in reduced complexity rather than increased dependency.

Nutanix can be that option, and with it comes a different operating model.

Yes, the interesting truth is that both paths require work. Both involve change. Both need planning, testing, and careful execution. The difference lies in what you get once the work is done. One migration leaves you with a platform that is heavier and more prescriptive than the one you had before. The other leaves you with an environment that is lighter, simpler, and easier to operate.

The Real Choice in a Moment of Unwanted Movement

When change arrives from the outside, it rarely feels fair. It interrupts plans, forces attention onto things you didn’t choose, and demands energy you would rather spend somewhere else. Nobody asked for it. Nobody scheduled it. Yet here it is, reshaping the future architecture of your private cloud, whether you feel ready or not.

A different model of infrastructure can offer a way to use this forced moment of movement to your advantage, to turn a vendor-driven transition into an opportunity to simplify, to regain autonomy, and to design an infrastructure model that supports your next ten years rather than constraining them.

You may not have chosen the timing of this transition. But you can choose the shape of the destination. And in many ways, that is the most meaningful form of control an organization can exercise in a moment where the outside world tries to dictate the path ahead.

VMware by Broadcom – The Standard of Independence Has Become a Structure of Dependency

There comes a point in every IT strategy where doing nothing becomes the most expensive choice. Many VMware by Broadcom customers know this moment well; they sense that Broadcom’s direction isn’t theirs, but still hesitate to move. The truth is, the real risk isn’t in changing platforms but in waiting too long to reclaim control.

I have worked with VMware products for more than 15 years and even spent part of my career as a VMware solution engineer before Broadcom acquired the company. A company that once had a wonderful culture. A culture that, sadly, no longer exists. Many of my former colleagues no longer trust their leadership. What does this mean for you?

We know that VMware environments are mature, battle-tested, and deeply embedded in how enterprises operate. And that’s exactly the problem. Over the years, VMware became more than a platform. It became the language of enterprise IT: vSphere for compute, vSAN for storage, NSX for networking. It’s how we learned to think about infrastructure. That’s the vision of VMware Cloud Foundation (VCF) and the software-defined data center (SDDC).

Fast forward to today: even when customers are frustrated by cost increases, licensing restrictions, or shifting support models, they rarely act. Why? Because it feels safer to tolerate pain than to invite uncertainty. But stability is often just an illusion. What feels familiar isn’t necessarily secure.

The Forced Migration Nobody Talks About

The irony is that many customers who think they are avoiding change are actually facing one. Just not by choice. Broadcom’s current direction points toward a future where customers can only consume VMware Cloud Foundation (VCF) as a unified, integrated stack. Which, in general, is a good thing, isn’t it?

As a result, you no longer decide which components you actually need. Even if you only use vSphere, vSAN, and Aria Operations today, you will be licensed for the full stack and forced to deploy it, including NSX and VCF Operations/Automation, whether you need them or not. While that’s still speculation, everything Hock Tan says points in this direction. And many analysts see it the same way.

Broadcom has reached VMware’s long-standing goal: VCF has become the flagship product, but by force rather than by customer choice. Broadcom’s leverage lies in discounting: pricing the bundle so that VCF becomes the “right” and only economical choice, even for customers who never wanted the full stack.
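
To make that discounting mechanism concrete, here is a deliberately simplified Python sketch. Every number in it (core count, per-core prices, discount rate) is hypothetical and exists only to illustrate how selective discounting can make the full stack look like the only economical option. None of these figures are actual Broadcom prices.

```python
# Illustrative only: all figures below are hypothetical, not Broadcom list prices.
# The point is the mechanism, not the numbers.
CORES = 512  # example estate size

vsphere_bundle_per_core = 135  # hypothetical price for a smaller, compute-centric bundle
vcf_full_stack_per_core = 350  # hypothetical list price for the full VCF stack
vcf_discount = 0.65            # hypothetical "strategic" discount offered only on VCF

smaller_bundle_cost = CORES * vsphere_bundle_per_core
vcf_cost = CORES * vcf_full_stack_per_core * (1 - vcf_discount)

print(f"Smaller bundle at list price: {smaller_bundle_cost:10,.0f}")
print(f"Full VCF after discount:      {vcf_cost:10,.0f}")
# 69,120 vs. 62,720: the full stack suddenly looks "cheaper" on paper,
# even for customers who never intended to deploy NSX or VCF Operations.
```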

[Image: Paths to VCF 9]

What does this mean for your future? In practice, it’s not just a commercial shift; it’s a structural migration disguised as continuity. Moving from a traditional vSphere or HCI-based setup to VCF comes with the same side effects, changes, and costs you would face when adopting a new platform (Nutanix, Red Hat, Azure Local, etc.).

Think about it: If you must migrate anyway, why not move toward more control, not less?

Features, Not Products

Broadcom has been clear about its long-term vision. The company now describes VMware Cloud Foundation as its one remaining product name and sees it as the operating system for the data center. That is a great message, but Broadcom wants VMware to operate like Azure, where you don’t “buy” networking or storage. You consume them as built-in features of the platform.

Once this model is fully implemented, you won’t purchase vSphere or NSX. You’ll subscribe to VCF, and those technologies will simply be features. The Aria Suite has already disappeared from the portfolio (for example, Aria Operations became VCF Operations). The next thing to vanish will be every product name except VMware Cloud Foundation.

It’s a clever move for Broadcom, but a dangerous one for customers. Yes, I am looking at you. Because when every capability becomes part of a single subscription, the flexibility to choose what to use, and what not to, disappears. Your infrastructure, once hybrid and modular, becomes a monolith. Imagine the lock-in of any hyperscaler, but on-premises. That’s the new VMware.

The True Cost of Change

Let’s be honest, migrations are not easy. They require time, expertise, and courage. Yes, courage as well. But the cost of change is not the real problem. The cost of inaction is.

When organizations stay on platforms that no longer align with their strategy, they pay with flexibility, not just money. Every renewal locks in another year, or several, of dependency. Every delay potentially pushes innovation further out of reach. And with Broadcom’s model, the risk isn’t just financial. The control over your architecture, your upgrade cadence, your integrations, and even your licensing terms slowly moves away from you. And faster than you may think.

[Image: VCF Specific Program Documentation (SPD), November 2025]

Broadcom’s new compliance mechanisms amplify that dependency. According to the November 2025 VCF Specific Program Documentation, customers must upload a verified compliance report every 180 days. Failing to do so allows Broadcom to degrade or block management-plane functionality and suspend support entitlements. What was once a perpetual license has become an always-connected control loop: a system that continuously validates, monitors, and enforces usage from the outside. Is that acceptable for a “sovereign” cloud, or for you as its operator?
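
What that 180-day cadence means operationally can be shown with a short Python sketch. The interval comes from the SPD as cited above; the dates and the enforcement wording in the snippet are illustrative assumptions, not quotes from the document.

```python
from datetime import date, timedelta

REPORTING_INTERVAL = timedelta(days=180)  # reporting cadence described in the SPD

def compliance_status(last_report: date, today: date) -> str:
    """Describe where an operator stands in the 180-day reporting window."""
    deadline = last_report + REPORTING_INTERVAL
    remaining = (deadline - today).days
    if remaining < 0:
        return f"OVERDUE by {-remaining} days: functionality may be degraded or blocked"
    return f"{remaining} days until the next verified report is due ({deadline})"

# Hypothetical dates, purely for illustration:
print(compliance_status(last_report=date(2026, 1, 15), today=date(2026, 6, 20)))
```

A deadline like this sits in the background of day-to-day operations forever: miss it twice a year and, per the SPD, the vendor is entitled to intervene in your management plane.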

You don’t notice it day by day. But five years later, you realize: Your data center doesn’t belong to you anymore.

Why Change Feels Bigger Than It Is

Change is often perceived as a massive technical disruption. But in reality, it’s usually a series of small, manageable steps. Modern infrastructure platforms have evolved to make transitions far less painful than before. Today, you can migrate workloads gradually, reuse existing automation scripts, and maintain uptime while transforming the foundation beneath.

What used to be a twelve-month migration project can now be done in phases, with full visibility and reversible checkpoints. The idea is not to replace everything. It’s to regain control, layer by layer.
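
As a rough illustration of what “phases with reversible checkpoints” can look like at the planning level, here is a minimal Python sketch. The workload names and criticality tiers are hypothetical; the pattern simply mirrors the advice in this article: start with low-risk workloads such as DR, dev/test, or EUC, and treat each wave as a checkpoint before touching anything critical.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    criticality: int  # 1 = dev/test or DR, 2 = internal, 3 = business-critical

# Hypothetical inventory; in practice this would come from your CMDB
# or virtualization manager.
inventory = [
    Workload("dr-site-replica", 1),
    Workload("devtest-ci", 1),
    Workload("euc-pool", 2),
    Workload("intranet-app", 2),
    Workload("erp-db", 3),
]

# Group into waves: migrate low-risk workloads first, critical ones last.
waves: dict[int, list[str]] = {}
for wl in sorted(inventory, key=lambda w: w.criticality):
    waves.setdefault(wl.criticality, []).append(wl.name)

for tier, names in sorted(waves.items()):
    print(f"Wave {tier}: {', '.join(names)}")
    print("  checkpoint: validate, then proceed to the next wave or roll back")
```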

Freedom as a Strategy

Freedom should be a design principle. It means having a platform that lets you choose, and it also means being able to decide when to upgrade, how to scale, and where your data lives, without waiting for a vendor’s permission.

This is why I joined Nutanix. They don’t force you into a proprietary stack. They abstract complexity instead of hiding it. They allow you to run what you need, and only what you need, whether that’s virtualization, containers, or a mix of both. Yep, and you can also provide DBaaS (NDB) or a private AI platform (NAI).

I’m not telling you to abandon what you know. Take a breath and think about what’s possible when choice returns.

For years, VMware has been the familiar home of enterprise IT. But homes can become cages when you are no longer allowed to move the furniture. The market is moving towards platforms that combine the comfort of virtualization with the agility of cloud without the loss of control.

This shift is already happening. Many organizations start small – with their disaster recovery site, their dev/test environment, or their EUC workloads. Once the first step is done, confidence grows. They realize that freedom doesn’t come from ripping everything out. It comes from taking back control, one decision at a time.

A Quiet Revolution

The next chapter of enterprise infrastructure will not be written by those who cling to the past, but by those who dare to redesign their foundations. Not because they want to change, but because they must, to stay agile, compliant, and sovereign in a world where autonomy is everything.

The legal fine print makes it clear. What Broadcom calls modernization is, in fact, a redesign of control. And control rarely moves back to the customer once it’s gone.

The question is no longer “Can we afford to change?”

It should be “Can we afford not to?” Can YOU afford not to?

And maybe that’s where your next journey begins. Not with fear, but with the quiet confidence that the time to regain control has finally arrived.