After Broadcom – How a Market Produces Its Own Uncertainty, and Why Not Every Platform Becomes the Next VMware

When Broadcom completed its acquisition of VMware, the first reaction of many customers was rational. They reviewed contracts, assessed costs, and evaluated alternatives. The second reaction was more emotional and more lasting. A fundamental distrust crept into the market. Not only toward VMware, but toward platform vendors in general.

A recurring thought surfaces in many of my conversations today: even when companies evaluate an alternative, say Nutanix, one question hangs in the room. What if history repeats itself?

The question is understandable, but it is neither the important nor the right one.

The real change – trust has become an architecture question

For a long time, the discussion around virtualization, private cloud, and hybrid cloud was driven by technology. It was about performance, features, integration. Today the focus is shifting. Suddenly it is about control, predictability, and increasingly about structural trust.

Broadcom's approach changed more than prices and licensing models. It created a new perception: platforms can change fundamentally without customers having any influence over it.

The result is a kind of blanket suspicion. Vendors are no longer evaluated purely on technology, but along an implicit risk axis: how likely is it that this vendor will pursue a completely different business model in three to five years, or even be acquired?

Nutanix in the context of this new reality

Nutanix differs structurally from what many customers currently, implicitly fear. The company is publicly traded, broadly anchored in the market, and owned by internationally diversified investors. There is no dominant owner with a strategic agenda who could force fundamental changes of direction on short notice.

That does not mean change is impossible; assuming so would be naive. But it changes the probability and, above all, the dynamics of such change.

The frequently voiced concern that Nutanix could become the next “VMware” describes an unlikely scenario. It projects a specific event onto a completely different structural environment.

Interestingly, the opposite scenario is more realistic. Vendors in the lower or middle market segment that grow fast and win market share are more likely to become acquisition targets. One example that comes up in many discussions is Proxmox. Historically, it is exactly these players that attract strategic consolidation.

The “open source” reflex

In parallel to the skepticism toward commercial platforms, a second trend is emerging: the retreat into open source.

Terms like “independence”, “no vendor lock-in”, and “full control” dominate this discussion. On the technology side, solutions such as OpenStack or Apache CloudStack take center stage.

Here too, the reasoning is understandable, but it is often stated too simply.

Open source does not solve the underlying problem of platform dependency. It merely shifts it.

Because the decisive question is not whether software is open or proprietary, but rather: how easily can I move my workloads?

Cloud exit remains a physical problem

Whether companies bet on Nutanix, open source, or classic virtualization, a platform change almost always means the following (a minimal validation sketch follows the list):

  • Migrating virtual machines
  • Adapting network configurations
  • Rebuilding automation and parts of the operating model
  • Testing and validating the workloads
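
To make that last point concrete, here is a minimal sketch of a post-migration reachability check, using only the Python standard library. The workload inventory, hostnames, and ports are hypothetical placeholders; a real validation plan would also include application-level tests.

```python
import socket

# Hypothetical inventory of migrated workloads: (name, host, port).
# In practice this would come from a CMDB or the migration runbook.
MIGRATED_WORKLOADS = [
    ("erp-app-01", "10.10.20.11", 443),
    ("erp-db-01", "10.10.20.21", 5432),
    ("file-srv-01", "10.10.30.5", 445),
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    failures = []
    for name, host, port in MIGRATED_WORKLOADS:
        ok = is_reachable(host, port)
        print(f"{name:12} {host}:{port:<5} {'OK' if ok else 'FAILED'}")
        if not ok:
            failures.append(name)
    # A non-zero exit code lets a migration pipeline halt on failures.
    raise SystemExit(1 if failures else 0)
```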

Even within open-source ecosystems, interoperability is limited. Moving from OpenStack to CloudStack is not a “lift & shift”; it is closer to a transformation project.

The path to the public cloud changes little here. Workloads have to be adapted, images converted, and dependencies reassessed.

The idea that open source automatically leads to a “frictionless exit” rarely survives a practical test.

Kubernetes as a supposed way out, and its limits

A similar narrative exists around Kubernetes. Containerization is considered the royal road to portability. Modernize once, run anywhere. That, at least, is the assumption. In practice, a different picture emerges. Kubernetes is not a homogeneous standard; every distribution brings its own ecosystem:

  • different network stacks
  • different storage integrations
  • their own security models
  • proprietary extensions and services

A cluster on one platform is not identical to a cluster on another. Moving between Kubernetes environments reduces certain dependencies at the application layer, but shifts the complexity into platform integration.

Here too: portability is possible, but not free. The short audit sketch below makes this tangible.
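
One way to see the differences is to inventory what a cluster actually depends on before planning a move. A minimal sketch, assuming kubectl is installed and pointed at the cluster; the StorageClass provisioners, CSI drivers, and kube-system components it lists are exactly the pieces that differ between distributions.

```python
import json
import subprocess

def kubectl_json(*args: str) -> dict:
    """Run a kubectl command and parse its JSON output."""
    out = subprocess.run(
        ["kubectl", *args, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    # StorageClasses reveal which storage integration is in use.
    scs = kubectl_json("get", "storageclass")
    print("StorageClasses / provisioners:")
    for sc in scs["items"]:
        print(f"  {sc['metadata']['name']}: {sc['provisioner']}")

    # CSI drivers registered in the cluster.
    csis = kubectl_json("get", "csidrivers")
    print("CSI drivers:", [d["metadata"]["name"] for d in csis["items"]])

    # Pods in kube-system usually expose the CNI and other platform add-ons.
    pods = kubectl_json("get", "pods", "-n", "kube-system")
    print("kube-system components:")
    for pod in pods["items"]:
        print(f"  {pod['metadata']['name']}")
```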

What has actually changed

Perhaps the most important insight after Broadcom is not technical but strategic. Companies today have to make platform decisions from two perspectives:

  1. What can the platform deliver today?
  2. How likely is it that its rules of the game change tomorrow?

This second dimension used to be implicit; today it is central.

A first sober conclusion

The market's current reaction is understandable, but not always differentiated or well founded. Not every vendor becomes the next VMware, and not every open-source strategy automatically leads to more control.

The real challenge remains unchanged – complexity does not disappear, it merely relocates.

Anyone deciding on platforms today should therefore think less in categories like “proprietary vs. open source” and more in scenarios:

  • What does a realistic exit look like?
  • How high is the operational cost of a switch?
  • Which dependencies arise – technical, organizational, and economic?

The answers are rarely ideological; they are almost always pragmatic. And that is precisely where the real task lies: not finding the perfect vendor, but the consciously chosen one.

Open source as a foundation, not a countermodel

In the current debate, open source is often positioned as the antithesis of commercial platforms. Yet open source and enterprise platforms have long ceased to be opposites; they are increasingly interwoven.

Nutanix in particular is an example of how these two worlds can be combined.

The Nutanix hypervisor AHV is based on KVM, one of the most established open-source hypervisors in the world. KVM has formed the basis of numerous cloud platforms for years and is also used by hyperscalers. On top of it, Nutanix has built an enterprise layer that integrates lifecycle management, automation, security, and support.

That is a decisive difference from classic open-source consumption. The wheel is not reinvented; a stable, open core is deliberately extended, hardened, and placed into an operable context.

Open source remains, but the operational complexity is abstracted away. Engineering and innovation management are effectively outsourced.

Kubernetes without platform coercion

With the Nutanix Kubernetes Platform (NKP), Nutanix deliberately avoids a proprietary lock-in approach. On the contrary, NKP is designed as a platform composed of a wide range of CNCF projects – exactly the open-source building blocks that shape today's Kubernetes ecosystem.

[Figure: Nutanix Kubernetes Platform built from CNCF open-source components]

The decisive point is not the technology itself but its placement. NKP is not tied to Nutanix's own virtualization platform. Concretely, that means:

  • Kubernetes clusters can run on Nutanix
  • likewise on VMware environments
  • on bare-metal infrastructure
  • or directly in public clouds

While many platform vendors try to bind Kubernetes more tightly to their own infrastructure (e.g., VMware with VKS), Nutanix takes a different approach. Kubernetes should run where it makes sense for the customer, not where licensing or architecture “expects” it.

The underestimated detail – decoupling as a design principle

This architecture leads to a deliberate decoupling of infrastructure and platform.

A company can choose NKP without committing to the entire Nutanix stack at the same time. Conversely, Nutanix infrastructure can be operated without standardizing on Kubernetes on top of it.

This modularity stands in clear contrast to market developments in which platforms are increasingly positioned as closed systems.

Precisely in the context of the current VMware debate, this is relevant. Many customers fear that entering a platform automatically leads to a long-term commitment that is hard to dissolve.

The NKP example shows that it can be done differently.

Open source remains, but not in its raw state

Another point that is often misunderstood in practice: open source alone does not solve operational challenges.

Projects like Kubernetes, KVM, and the various CNCF components are powerful, but they are not “enterprise-ready” per se. They have to be integrated, operated, monitored, secured, and evolved.

This is exactly where Nutanix comes in. The strategy is not to replace open source but to bring it into a consistent operating framework. The result is not a contradiction but a combination of:

  • open technologies as the foundation
  • a commercial platform as the operating model

Conclusion

After the Broadcom experience, many companies are looking for alternatives that are both technologically sound and strategically reliable. Two extremes often emerge:

  1. On one side, the return to “pure” open source
  2. On the other, the search for a new platform vendor

The Nutanix approach sits exactly between these two poles:

  • the openness of established open-source technologies
  • combined with a clearly defined operating model
  • and a deliberately modular architecture

That does not mean dependencies disappear, but they become more transparent and, in many cases, more manageable.

And that is exactly what matters in the current market situation. Not the illusion of complete independence, but the ability to shape dependencies deliberately.

Digital Sovereignty and the Broadcom Turning Point – Why October 2027 Becomes Critical for the Public Sector

English Version: https://www.linkedin.com/pulse/broadcoms-october-2027-turning-point-why-public-sector-rebmann-t3vme/

Digital sovereignty has become a central term in recent years, especially in the public sector. Yet the discussion is often held at the surface. It tends to revolve around data locations, European cloud initiatives, or additional security mechanisms. What gets overlooked is the level at which sovereignty actually emerges or is lost: the architecture of the platforms our IT is built on.

Over many years, organizations built their infrastructure on VMware. Virtualization was the stable core on which modern data centers and, later, private cloud environments developed. In their original form, these environments were modular. Compute, storage, and networking, including management, could be operated and evolved independently of one another. This modularity was a decisive success factor, especially in the SMB segment. It allowed organizations to adapt their architecture step by step, to swap or add technologies, and to evolve operating models without having to rebuild the entire foundation each time.

At the same time, a strong market concentration built up over the years. It is realistic to assume that around 80 percent of the public sector today runs on VMware technology. For a long time this breadth was an advantage, because it enabled standardization, skills development, and a strong partner ecosystem. Today, however, that very concentration is becoming a structural risk. If a single vendor fundamentally changes its strategy, it affects not individual organizations but a large part of the entire ecosystem – indeed, a large share of Swiss data centers.

With Broadcom's acquisition of VMware, this starting position has changed fundamentally. The transformation is not happening in one step, but in several clearly recognizable phases.

Phase 1

The first cut was economic. New licensing models and bundles changed the cost structure and, in many cases, raised it significantly. This already noticeably constrained the economic sovereignty of many organizations. Decisions could no longer be made purely on the basis of actual need; they increasingly had to follow predefined licensing models.

Phase 2

In parallel, the partner ecosystem changed. Many VMware partners have disappeared or redefined their role. For customers, this means a reduced choice of integrators and service providers, less competition, and thus, indirectly, less influence. Sovereignty shows itself not only in technology, but also in the ability to choose between different partners and operating models. When that choice shrinks, so does the freedom to act.

Phase 3

The third phase, now emerging, is the technical-structural one. With the strategic alignment on VMware Cloud Foundation 9 (VCF) as the dominant target model, the architecture itself becomes an instrument of control. What used to be a flexible construction kit is increasingly turning into an integrated full stack in which individual components can no longer be considered independently of one another.

Technically, such an approach has advantages. Standardization reduces complexity, integrated operating models can yield efficiency gains, and a clearly defined stack simplifies operations. But this integration has a consequence that is often underestimated in the current discussion. It changes the fundamental relationship between customer and platform.

Digital sovereignty can be measured along three central capabilities:

  1. the ability to switch,
  2. the ability to shape, and
  3. the ability to exert influence

These three dimensions are decisive because they determine whether an organization can actively steer its IT or whether it increasingly grows into a predefined model.

It is exactly these capabilities that the current development is gradually reducing. The option to switch formally remains, but becomes much harder in practice, because a switch no longer means exchanging individual components but transforming an entire system. The ability to shape declines because architecture decisions are increasingly defined by the vendor (Broadcom). And influence decreases as well, since negotiating power structurally erodes as dependency on the integrated stack grows. Much like with the public cloud.

Many organizations have reacted to the first changes. Larger hospitals and cantons, for example, have extended their contracts with Broadcom to gain short-term planning certainty and create operational calm. That decision is understandable. It buys time, stabilizes budgets, and avoids short-term risks.

VCF 9 mandatory from October 2027

But exactly here lies a misunderstanding that surfaces in many conversations. These extensions have not created any additional time.

The underlying development continues regardless. The strategic alignment on VCF (VCF 9) and the associated transformation of the architecture remain in place. The relevant date does not move because of a contract renewal.

The actual turning point stands: October 2027.

How VCF Operations enforces the target model

With version 9 of VMware Cloud Foundation, not only the architecture changes, but also the way compliance is handled in operations. According to the current licensing and usage terms, mandatory compliance reporting is introduced for environments from version 9 onward.

“VCF is sold as a single product; the included components and capabilities can only be utilized on, or for the same physical Cores where the vSphere in VCF Core license is deployed.”

Organizations running VCF are therefore obliged to create and provide compliance reports on a regular basis – initially after 180 days and then at recurring intervals.
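
As a trivial illustration of that cadence, the sketch below computes due dates under the assumption, made purely for this example, that the recurring interval is also 180 days; the actual interval is defined by the license terms.

```python
from datetime import date, timedelta

# Assumptions for illustration only: a deployment date and a 180-day
# recurring interval; the real cadence comes from the license terms.
DEPLOYED = date(2027, 10, 1)
INTERVAL = timedelta(days=180)

# First report is due 180 days after deployment, then recurring.
due_dates = [DEPLOYED + INTERVAL * n for n in range(1, 5)]
for n, due in enumerate(due_dates, start=1):
    print(f"Compliance report {n} due: {due.isoformat()}")
```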

The compliance reporting is handled through VCF Operations. That makes this component a de facto prerequisite for compliant operation (which in turn presupposes VCF 9). Without the corresponding integration, adherence to the requirements can no longer be fully guaranteed.

[Figure: VCF 9 compliance reporting]

This creates an additional mechanism that reinforces the use of the full VCF stack.

Combined with licensing models, architecture requirements, and integrated operational functions, a consistent pattern emerges. The path into the target model is not merely recommended; it is increasingly secured structurally.

Source: https://ftpdocs.broadcom.com/cadocs/0/contentimages/VCF_SPD_July2025.pdf

What actually gets installed with VCF 9

Deploying a VCF environment installs more than a virtualization platform (the ESX hypervisor). What gets provisioned is a complete, integrated stack of infrastructure, networking, and operations components.

Concretely, a standard installation comprises several central building blocks:

  • vSphere (ESX & vCenter) as the compute and management layer
  • NSX for networking and security
  • vSAN or alternative storage integrations
  • SDDC Manager and fleet management
  • plus VCF Operations and VCF Automation as the central operations and control layer

The individual components can no longer be operated sensibly in isolation. They become one coherent system that only unfolds its full functionality in the overall model.

Source: https://blogs.vmware.com/cloud-foundation/2025/07/03/vcf-9-0-deployment-pathways

New demands on architecture and operations

This change does not stop at the technical level. It directly affects the people who plan and operate these platforms.

Architects and operations teams have to work their way into a considerably broader and more complex system. While many organizations previously focused heavily on the hypervisor and classic virtualization components, additional layers now arrive that are mandatory parts of the operating model.

Organizations have to build new competencies, adapt processes, and develop a deeper understanding of how the individual components interact.

The focus shifts away from operating individual technologies toward operating an integrated system. Decisions in one area immediately affect others. Architecture, operations, and automation are more tightly interlinked than ever before.

This development is not unusual. It follows the general trend toward platformization. But it has a clear consequence, and one needs to be aware of it.

The information deficit

What aggravates the situation is a structural information deficit. Many customers, and many partners too, are not yet aware of the scope of this change. The move toward an integrated, enforced platform model is often perceived as gradual evolution, not as a fundamental architectural break.

In practice, this means that a large part of the market currently sits in a phase of apparent stability, while a structural change is building that will take full effect in roughly 18 months.

Sovereignty at risk on a broad scale

By then, many organizations will be forced to transform or realign their existing environments. Support cycles are expiring, technological dependencies are intensifying, and migration into integrated models is increasingly becoming a precondition for continued operation. What looks like temporary stabilization today is in reality a phase preceding a structural decision.

Note: The VMware technology itself is still excellent. But VMware is no longer “VMware”; it is now Broadcom.

Digital sovereignty is not lost because the technology is bad.

It is lost when decisions lose reversibility. When architecture, operations, and licensing model are so tightly interlinked that alternatives formally exist but are barely achievable in practice, control shifts for good.

For the public sector in Switzerland, this fundamentally changes the starting position. Many organizations run VMware-based private clouds today, have built up know-how over years, and have aligned their operating models accordingly. The transition to an integrated model like VCF 9 is therefore not a simple technology step but a strategic course-setting.

It is not unrealistic to assume that from October 2027 onward, a large part of the public IT landscape will no longer meet the central criteria of digital sovereignty.

Nutanix as an alternative – back to modularity

In this context, Nutanix is frequently mentioned as an alternative. What is interesting is less its positioning as a “private cloud vendor” than the underlying architecture philosophy.

Nutanix, too, offers a complete private cloud platform today. Infrastructure, automation, data services, and modern platform services can be delivered in an integrated way. At first glance, this model resembles what VMware Cloud Foundation pursues. The decisive difference lies not in the scope of functionality, however, but in how it is delivered.

While VMware (by Broadcom) is increasingly moving toward a mandatory, tightly integrated full stack, Nutanix continues to follow a modular approach. Functions can be combined, but they do not have to be. Organizations can decide which components they actually need and to what extent they deploy them.

Exactly this property was a major reason for VMware's success in the era before the Broadcom acquisition.

In a sense, Nutanix picks up that principle. The platform can be operated as a complete private cloud without turning into a rigid target model. At the same time, it supports different operating models, from the classic data center through service provider environments to hybrid scenarios. What matters is that the operational logic stays consistent. Workloads and operational processes are not bound to a single model but can evolve along with requirements.

Positioning Nutanix as an alternative should nevertheless be done in a differentiated way. Here, too, we are dealing with a commercial platform with its own roadmap, its own (broad) ecosystem, and its own dependencies. Digital sovereignty emerges from the interplay of technology, governance, competencies, and strategic decisions.

A question asked far too rarely

The risks of the public cloud have been the subject of intense discussion in the public sector for years. Questions about dependencies, price development, geopolitical influence, and lack of control are now part of the standard assessment of any larger cloud decision. In the private cloud space, by contrast, a comparable debate has not even started.

Yet a structurally similar development is emerging.

From a sovereignty perspective, however, another question arises.

What are the consequences when a large part of the public sector standardizes on a single platform architecture whose operating model, license structure, and evolution are determined largely by one vendor?

A comparison with the public cloud helps answer this question.

If a majority of public administration ran its IT entirely on platforms like Microsoft Azure, Amazon Web Services, or Google Cloud Platform today, and a significant price increase of 50 to 100 percent followed, the reaction would be predictable. The discussion about dependencies, alternatives, and strategic controllability would immediately intensify.

In the private cloud space, a comparable dynamic is already visible, but it is perceived differently.

While risks in the public cloud were addressed early, the same development in the private cloud space is still often viewed as mere technological evolution. The underlying dependency, however, is comparable.

  • So which measures are being taken today to actively manage this form of dependency?
  • Which strategies exist to realistically preserve switching options?
  • And to what extent are alternatives being evaluated while they can still be implemented with reasonable effort?

These questions can only be answered if the underlying changes are clear to customers and partners in the first place.

A look at procurement

A look at current tenders on simap.ch paints a clear picture. VMware is deeply anchored in the Swiss public sector. Numerous organizations extended or further expanded their existing environments in 2024 and 2025. Contract volumes run into the millions and are in many cases laid out over several years – frequently until 2028, 2029, or beyond.

Many of these decisions were made in a phase when stability, predictability, and operational continuity were paramount. Contract extensions offered short-term security, particularly against the backdrop of changed licensing models and rising costs. As mentioned above, the aim was most likely to gain planning certainty, without being aware that from October 2027 a new architecture and a new operating model could be imposed.

At the same time, this committed organizations to an existing architecture. The consequence is not an immediate break, but a gradual entrenchment.

Over several years, ties build up that become technically and economically harder and harder to change. The room for maneuver formally remains, but in practice it narrows.


Nutanix – The Questions Swiss VMware Customers Ask

In my first five months at Nutanix, I have had dozens of conversations across the Swiss market. From federal organizations to cantonal institutions, from service providers to highly regulated environments. On paper, these discussions look completely different – different architectures, different priorities, and different timelines.

It took me a while to realize it, but there was a clear pattern. Regardless of size or sector, the same underlying questions keep surfacing, whether we were talking about a 1’000-, 4’500-, or 20’000-core infrastructure. And more interestingly, most of these questions are not about features or technical capabilities; those came up later in the discussions.

Most questions are about risk, cost and control, and sometimes about sovereignty. It all has to do with certainty, doubts, stability and predictability.

So, it’s less about the available alternatives per se. Customers are trying to understand what staying actually means and what risk it implies.

1) Isn’t switching too risky?

This is one of the questions that appear very early when meeting prospects. Sometimes even right after the introduction, before any real discussion has started.

It’s a natural reaction. For a long time, staying on VMware was the safest choice and there was no real reason to reconsider it.

But VMware is not “VMware” anymore; it is Broadcom now. So, what many organizations are experiencing today is not instability in their infrastructure, but instability in the conditions around it. Not a single customer tells me that VMware is underperforming; the technology is still great.

For many customers, especially those in regulated industries, it’s about predictability and control. Staying with VMware is therefore no longer automatically the safest option.

What I see in practice is that IT organizations quickly move away from the idea of a disruptive “big bang” migration. Instead, they start thinking in phases and use cases, and move workloads step by step. Systems run in parallel and confidence builds gradually. The projects that I have won are VDI and edge use cases. Larger projects with larger infrastructure take more time.

So, what’s the learning? While I understand why customers ask “isn’t switching too risky”, it’s just the wrong question.

The better question would be: What’s the risk of staying and how do we move without taking unnecessary risk?

From there, the conversation almost inevitably moves to cost.

2) Is Nutanix really cheaper?

Sounds like a simple question, right? A number-to-number comparison, a classic price discussion. It’s anything but simple.

Because what most organizations are comparing is not two equivalent scenarios. They are comparing what they used to pay for VMware with what they might pay for something new. And that creates a distorted baseline from the very beginning. With Broadcom, at least in Switzerland, there is no more VMware vSphere Foundation (VVF) or vSphere Enterprise Plus standalone. You can only get VMware vSphere Standard (VVS) or VMware Cloud Foundation (VCF).

On paper, that sounds like simplification; in practice, it introduces a different kind of complexity. Because suddenly, organizations are not just buying what they need. They are buying what is included.

In many of the discussions I have had, customers admit that they are not using the full breadth of the VCF stack, even though they have the VCF subscription. Many of those VMware customers only use vSphere, some of them vSAN, and most of them Aria Operations. No NSX and no Aria Automation. And if you need advanced security features like micro-segmentation, you need an add-on for $200 (list price).

You can compare Nutanix against the entire VCF bundle. In that case, the question becomes “Can Nutanix replace everything that is included?”.

Or you can compare Nutanix against what you are actually using today. And suddenly, the picture changes. Dramatically.

Both perspectives are valid, but they lead to very different conclusions – commercially and strategically.

Let me rephrase the question, which now becomes: Why am I paying for functionality I don’t need?

This is something I explored in more detail in my recent article “Beyond the Price Tag – Why Organizations Choose Nutanix”. The core idea is simple. Cost is rarely just about the price per core or the discount level. In the end it is about how closely your investment aligns with your actual requirements.

With Nutanix, you don’t start with everything and try to justify it afterwards; you start with what you actually need. And then you expand, step by step, where it creates value.

It sounds like a small difference, but in practice it changes the entire commercial logic.

3) Don’t we end up paying twice during the migration?

It’s a fair concern. Running two environments in parallel is often unavoidable during a transition. Without specific support, that can mean carrying two full licensing models at the same time.

This is exactly where Nutanix has taken a very pragmatic approach. Through its migration programs, customers can receive up to one year of Nutanix licensing at no additional cost during the transition period.

That doesn’t eliminate the complexity of a migration, but it removes a key barrier. It gives organizations time. And most importantly, it allows them to do this without being penalized financially for taking a careful approach.

4) We don’t have Nutanix skills

Over the past months, one pattern has become very clear. Broadcom is not just repositioning VMware commercially; it is standardizing it architecturally. Everything points in the same direction: VMware Cloud Foundation is no longer one option among several. It is the only option.

And if you look at the publicly available information, this trajectory becomes even more tangible. Current indications suggest that support for vSphere 8.x and VCF versions not aligned with vSphere 9 will eventually come to an end. Which effectively means that, from around October 2027 onwards, unless Broadcom changes course again, customers will only be able to buy and deploy VCF 9.x.

In other words, the path forward is already being defined.

Now, to be fair, there are customers for whom this aligns well. Organizations that have already embraced VCF and invested in NSX, automation, and the broader stack; for them, this is a continuation of a journey they have consciously chosen.

But they are not the majority.

Most environments I see across Switzerland are still far from a fully adopted VCF architecture. They are running vSphere at scale, often with external storage and networking, established operational models, and teams that are deeply skilled in what they do today.

And this is exactly where the concern about “Nutanix skills” usually comes up. “Do we have the people for this?”

The reality is that Nutanix does not require you to throw away everything your teams have learned over the past 10 or 15 years. Quite the opposite.

The fundamental principles remain the same. You are still running virtual machines, designing clusters, ensuring availability, managing storage policies, operating networks, and securing workloads. Concepts like high availability, lifecycle management, capacity planning, and operational governance don’t disappear.

In fact, many VMware engineers adapt to Nutanix much faster than expected. Why? Because Nutanix deliberately simplified the operational model. Instead of stitching together compute, storage, and networking from different layers and tools, Nutanix brings these capabilities into a single, integrated platform.

[Figure: Nutanix Prism Central]

So yes, adopting Nutanix requires learning. But let’s be honest, so does adopting VCF. You need to be aware that moving to VCF is not just a licensing change. It involves an operational transformation. VCF also means new skills, new processes, new dependencies, and a new operational model.

So while Broadcom’s vision is actually quite clear – and, in many ways, understandable – it comes with consequences. The vision is to deliver a private cloud platform and a model where individual product names fade into the background, and what matters are capabilities. Compute, storage, networking, security, and automation are delivered as an integrated service layer, and VMware is becoming more like a public cloud. Conceptually, that makes sense to me.

You are adopting a new operating paradigm. The only real advantage compared to moving to a public cloud like Azure is that your virtual machine format remains the same. Your VMs don’t need to be converted. But beyond that, the effort is comparable:

  • You still need to redesign your architecture
  • You still need to rethink networking and security
  • You still need to retrain your teams
  • You still need to plan and execute a structured migration

And this is exactly where the conversation reconnects with the themes we discussed earlier (cost, risk, control).

5) Isn’t Nutanix doing the same as Broadcom?

Yes, Nutanix absolutely offers a private cloud platform that can run in the data center, at the edge, or in the public cloud. So, in terms of vision, both VMware (under Broadcom) and Nutanix are heading towards a similar destination: A cloud-like operating model for on-premises environments.

Before the Broadcom era, VMware was known for something very specific: Modularity

With Nutanix, you can absolutely consume the full private cloud platform. But you don’t have to.

Nutanix continues to deliver a modular set of software building blocks that can be used independently or as a complete stack. The Nutanix Cloud Platform (NCP) includes multiple components such as Nutanix Cloud Infrastructure (NCI), Nutanix Cloud Manager (NCM), Unified Storage (NUS), Database Service (NDB), Nutanix Kubernetes Platform (NKP) and more. Each is available as a separate option depending on customer needs: https://www.nutanix.com/products/cloud-platform/software-options

Organizations can pick and choose exactly what they want to deploy:

  • A VDI environment? Use NCI‑VDI
  • An edge cluster with minimal footprint? Use NCI‑Edge for small‑scale, distributed deployments
  • A full enterprise platform spanning multiple sites? Deploy NCI Ultimate, NCM, Unified Storage, and Database Service as needed

6) Is Nutanix enterprise-ready?

A few years ago, that would have been a fair and important concern.

Back then, Nutanix was still perceived by many as a strong challenger. Innovative, yes. Promising, definitely. But not always seen as the default choice for the most critical, large-scale environments.

Interestingly, nobody asks the same about other platforms. Is Hyper-V enterprise-ready? Is Azure Local enterprise-ready? What about newer or increasingly popular options like Proxmox?

The answer, in most cases, is simply assumed. And yet, if we take a step back, the question itself is more about perception.

Because Nutanix has been in the market for well over a decade. Its hypervisor, AHV, has been running production workloads for more than ten years. It is not new, it is not experimental, it is not an emerging technology trying to find its place.

It is established!

And that is reflected not only in customer adoption, but also in how the market evaluates the platform. Nutanix has consistently been positioned in the top-right quadrant of the Gartner Magic Quadrant for Distributed Hybrid Infrastructure.

[Figure: Gartner Magic Quadrant for Distributed Hybrid Infrastructure]

By any objective measure, Nutanix has already crossed the “enterprise-ready” threshold a long time ago.

7) Are we just replacing one dependency with another?

It’s a fair question, and probably one of the most important ones in the entire discussion. Because if the last few years have shown anything, it’s that lock-in is no longer an abstract concept.

No platform is completely free of dependencies. There is no such thing as a truly neutral infrastructure stack. Every decision introduces some form of coupling – to a vendor, to an architecture, to an operating model.

Dependencies exist, always. That’s not the important part. It’s about where they sit and how much control you retain over them. And this is exactly where the conversation becomes more interesting.

As discussed earlier, architectures are becoming more opinionated, more predefined, more aligned to a single operating model. Which means the dependency moves downwards into the foundation.

Nutanix, in contrast, shifts that balance towards the application layer. And this is where Kubernetes becomes important.

Because once applications are containerized and orchestrated through Kubernetes, the underlying infrastructure starts to matter less. Not irrelevant, but less dominant. Workloads become more portable, deployment models become more consistent, and moving between environments becomes a realistic option.

Nutanix Kubernetes Platform (NKP) provides an integrated way to run and manage Kubernetes across environments, without forcing customers into a specific cloud or infrastructure model. It aligns with the broader idea of hybrid and multi-cloud, but in a way that keeps operational control with the customer.

[Figure: Nutanix Kubernetes Platform built from CNCF open-source components]

Replacing one platform with another does not inherently solve lock-in. But repositioning where dependencies sit is ultimately what many organizations are looking for. Again, it’s about having the ability to stay in control. Because NKP is not tied to a single infrastructure backend:

  • It can run on Nutanix
  • It can run on VMware infrastructure
  • It can run in public cloud environments
  • It can even run directly on baremetal

Compare that to more tightly integrated approaches like the vSphere Kubernetes Service (VKS). VKS is deeply embedded into the vSphere ecosystem. It works well as long as you remain within that environment. But it is, by design, not portable beyond it. And that brings us back to the core point.

Lock-in is not eliminated by choosing a different vendor. It is reduced when your most critical layers are no longer restricted to a single environment.

How easily can you change tomorrow?

8) What if Nutanix gets acquired as well?

Another question has started to surface more frequently. It usually comes a bit later in the conversation, once the technical fit is understood, and once the commercial discussion has taken shape.

It’s a question that reflects the current mood in the market, and I have to admit it’s a valid one. Because the last few years have shown that ownership changes can have real consequences. They can reshape pricing models, redefine product strategies, and fundamentally alter the relationship between vendor and customer.

This question often leads to the wrong conclusion. We have to understand that the issue with VMware was not the acquisition itself. Acquisitions happen and they are part of how the technology industry evolves. The real issue was the impact that followed:

  • The shift in pricing
  • The restructuring of packaging
  • The reduced flexibility
  • And, ultimately, the feeling among many customers that control has moved away from them

That is what triggered the current wave of re-evaluation. So, when customers ask whether the same could happen elsewhere, they are not really asking about ownership. They are asking about exposure. If we follow that line of thinking consistently, the question doesn’t stop at Nutanix. You could ask the same about almost any platform in the market. What if Proxmox gets acquired? What if a hyperscaler changes its pricing model or service terms? What if an open source project shifts direction because of new commercial backing?

There is no scenario in which a platform is completely immune to change. And that is exactly my point. Trying to eliminate that risk entirely is not realistic.

9) AHV is not open source, is that a risk?

Nutanix’s Acropolis Hypervisor (AHV) is built on KVM, one of the most widely used open-source hypervisors out there. The foundation is open, and what Nutanix does is take that foundation and turn it into something that is actually operable at scale.

Open source sounds like freedom. And in some cases, it absolutely is. But in many real-world environments, especially at the infrastructure layer, it also means something else:

  • More components
  • More integration work
  • More lifecycle management
  • More responsibility on your own teams

Running a fully open source stack often means you are effectively building your own platform. You are combining a hypervisor, storage, networking, automation, and then making sure everything works together, stays updated, remains secure, and is supported when something breaks. That can be the right approach, but only if you actually want to operate like that.

At the infrastructure layer, especially in virtualization, open source rarely creates meaningful strategic advantage. The hypervisor has become a mature, almost commoditized component. Whether it’s KVM, AHV, Hyper-V, or ESXi, they all solve the same fundamental problem, and they solve it well.

Open source creates the most value where differentiation happens. And that is not at the bottom of the stack. It’s at the top, at the application layer. This could be Kubernetes, or building on open-source applications (think of OpenDesk or Nextcloud).

10) What about sovereignty?

Sovereignty is not a feature you can simply “add” to a platform. And more importantly, it’s not just a hyperscaler problem (anymore). This is something I already explored in a previous article – the idea that dependency doesn’t suddenly disappear just because infrastructure runs on-premises or in a private cloud. You can still be deeply dependent on a vendor’s licensing model, roadmap, and architectural decisions.

There is one dimension of sovereignty that stands out above all others in current customer conversations: Economic sovereignty.

For many existing Broadcom customers, this has become the most immediate and tangible pain point:

  • Not data residency
  • Not compliance
  • Not even technical capability

But cost predictability and the loss of it. And that brings us back to the platform.

The ability to maintain economic sovereignty is directly linked to how flexible your architecture is. If your platform enforces a predefined bundle, a fixed operating model, and limited alternatives, then your room to negotiate and adapt becomes smaller over time. If, on the other hand, your platform allows you to scale components independently, choose where workloads run, and avoid unnecessary dependencies, then you retain leverage.

Nutanix runs on-premises and in service provider environments. It also runs in public clouds (Nutanix NC2).

With the Nutanix Elevate Service Provider Program (NESPP), Nutanix enables managed service providers to build and operate sovereign cloud platforms themselves.

If your platform gives you flexibility, technically and commercially, then sovereignty becomes achievable.

Not VMware versus Nutanix

And this is ultimately where the entire discussion converges. Because despite all the technical arguments, the pricing models, the migration paths, and the architectural considerations, this is not a story about VMware versus Nutanix. What I see in the market right now is something different – a shift in how organizations relate to their infrastructure:

  • Control vs. dependency
  • Predictability vs. uncertainty
  • Choice vs. constraint

As I said before, dependency, in this context, is about exposure. Control, on the other hand, is not about owning everything or building everything yourself. And predictability (like trust), once lost, is difficult to rebuild.

If we help customers to ask different questions, the conversations change. It becomes less about selecting a product and more about defining a direction.

So, is your plan to adapt to change or to shape it?

Beyond the Price Tag – Why Organizations Choose Nutanix

In many customer conversations today, the discussion about Nutanix starts in a very pragmatic place: price.

Before we get the chance to talk about architecture, automation, or hybrid cloud strategies, most organizations first want to answer a simpler question: Can we even afford this option? Only once that hurdle is cleared does the real conversation begin. That is the moment when customers start asking a different question: Is it worth spending our time on this platform?

And that shift in perspective is important, because the current market situation is very different from just a few years ago.

For more than a decade, the virtualization market followed a relatively stable pattern. Many organizations standardized on a single hypervisor/platform and built their operational models, processes, and skill sets around it. The question was rarely which hypervisor to choose but more about which edition or which bundle to buy. The platform decision itself was largely settled.

That stability is gone.

Since the licensing and pricing changes in the VMware ecosystem in 2024, many organizations have been forced to rethink assumptions that had been in place for years. Renewal discussions suddenly became strategic decisions and budget forecasts were no longer predictable. In some cases, the cost increases were large enough to trigger board-level attention, and sometimes even political attention.

But price is only one part of the story.

Many customers also question the long-term direction of the platform on which they built their data centers. They are asking whether the vendor’s strategic priorities still align with their own, looking at industry consolidation, reduced product portfolios, and new licensing models, and wondering what that means for their own autonomy.

As a result, the conversation has shifted from optimization to re-evaluation.

Instead of finetuning an existing environment, many organizations are now exploring a wide spectrum of alternatives. Hyper-V, HPE VM Essentials, Proxmox, Scale Computing, and open-source stacks. Niche hypervisors and even container-first approaches. The list is long, and in many cases, the evaluation is driven less by feature comparisons and more by strategic considerations.

What is interesting in these discussions is the level of pragmatism.

Most customers are very clear about one thing: they know that VMware still offers one of the most mature and feature-rich stacks on the market, but they also admit that they do not actually use all of those features. In some environments, large parts of the advanced functionality have been sitting idle for years.

So the goal is no longer to replicate the past environment in every technical detail.

Customers are willing to accept trade-offs. They do not need the most sophisticated dashboards nor do they need every integration or advanced automation capability. If they can move 80 or 90 percent of their workloads to a new platform, that is already a success. The remaining cases can be handled separately.

This is where a new mindset becomes visible: fail fast, fail forward.

The objective is not to design the perfect architecture on paper. It is to make progress, to reduce dependency, to regain control over costs and strategic direction, and to move to a platform that is predictable, supportable, and aligned with the organization’s own priorities. Even if that means stalling innovation for a short time.

In that context, price becomes the first filter, not the final decision criterion.

If a platform is clearly unaffordable, the conversation ends there. But if the numbers are within reach, customers start to look deeper. They begin to evaluate operational simplicity, architectural consistency, support quality, and long-term flexibility.

That is usually the point where the Nutanix conversation truly starts.

The Perception Problem

For years, a certain sentence has circulated in the market: “Nutanix is expensive”. It became one of those beliefs that many people repeat without necessarily remembering where it originally came from.

In some organizations, this perception is based on very old benchmarks. In others, it comes from comparisons where different functionality levels were evaluated against each other. And in some cases, it is simply a narrative that persisted over time.

Recently, I have revisited this perception through real customer scenarios. Not theoretical models, but practical environments with realistic configurations, conservative assumptions, and sometimes even with standard (pre-approved) discount levels. What I found was not a universal truth, but a context-dependent story.

In several scenarios, Nutanix was not only competitive but significantly cheaper.

Disclaimer: Before we look at the numbers, a short disclaimer is important. The scenarios shown here are based on realistic configurations, standard architectures, and pre-approved discount levels. They are meant to illustrate typical outcomes, not to serve as official quotes or universally applicable price promises. Actual pricing will always depend on the specific environment, commercial terms, hardware choices, and contractual conditions of each individual customer.

Scenario 1: 500 VDI Users

Assume a VDI environment with 500 users. The infrastructure is built on 2×32-core nodes and designed with an n+2 resilience model. This is a typical production setup, where spare capacity is included so that the environment can tolerate failures without affecting user sessions.

In this configuration, you end up with around 1’152 physical cores that need to be licensed at the platform level. For the baseline comparison, I used this number together with a price of $140 per core. This reflects a very common way the market still thinks about platform costs – total cores multiplied by a unit price. In this baseline, no disaster recovery site is included yet.

With Nutanix, I modeled the environment using the NCI-VDI edition, which is purpose-built for virtual desktop use cases with platforms like Citrix or Omnissa (or Parallels, Dizzion etc.). In this model, I am not licensing 1’152 cores. Instead, I am licensing 500 concurrent users (CCU).

The difference in licensing logic alone already changes the economics of the environment, but there is another aspect that often surprises customers.

There is no additional licensing cost for a disaster recovery site. You can add hosts, refresh hardware, or build a secondary VDI site with the same number of cores, and from a Nutanix licensing perspective, the price remains exactly the same. The licensing is tied to the number of concurrent users, not to the amount of infrastructure standing behind them.

To keep the scenario fully realistic, I calculated three Nutanix options using only pre-approved discounts, meaning price levels that can typically be offered without extraordinary approvals.

  • The first option combined NCI Pro with NCM Starter – Representing a balanced configuration for standard VDI environments.
  • The second option used NCI Ultimate with NCM Starter – For scenarios where additional capabilities such as microsegmentation are required.
  • The third option was the full stack – Combining NCI Ultimate with NCM Ultimate, providing the complete feature set across both infrastructure and management layers.

All three options came out significantly below the core-based baseline, even the highest edition. And then there is the red bar in the comparison chart.

That red bar represents the same platform model as the baseline, but with the price per core increasing from $140 to $200, which is not an unrealistic assumption for a future renewal. The architecture stays the same, the number of cores stays the same, the resilience model stays the same, but only the unit price changes. Staying with the current platform vendor would result in a massive increase in total cost of ownership, without adding a single new capability to the environment.
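
For transparency, the baseline arithmetic behind these bars can be reproduced in a few lines. The core counts and the $140/$200 unit prices are taken from this scenario; the per-CCU price is a deliberately hypothetical placeholder, since real NCI-VDI pricing depends on edition and discounts.

```python
# Scenario 1 baseline: per-core platform licensing for a 500-user VDI estate.
CORES_PER_NODE = 2 * 32     # 2-socket nodes, 32 cores per socket
NODES = 18                  # sized with n+2 resilience
TOTAL_CORES = NODES * CORES_PER_NODE    # = 1_152 cores

for price_per_core in (140, 200):       # today vs. assumed future renewal
    print(f"${price_per_core}/core -> ${TOTAL_CORES * price_per_core:,} total")

# Nutanix NCI-VDI licenses 500 concurrent users (CCU) instead of cores,
# so adding a DR site with the same core count changes nothing below.
CCU = 500
PRICE_PER_CCU = 100.0       # hypothetical placeholder, not a list price
print(f"CCU model -> ${CCU * PRICE_PER_CCU:,.0f} (independent of core count)")
```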

[Figure: Cost comparison – per-core baseline vs. Nutanix NCI-VDI options]

This scenario is not meant to claim that Nutanix is always cheaper. That would be just another oversimplified narrative. But it does show that Nutanix can be more predictable, more scalable, and economically superior, especially in VDI environments where user-based licensing aligns better with how the platform is actually consumed.

Scenario 2: Microsegmented Data Center

In another environment, the discussion was not about VDI or edge sites, but about security.

The customer had a clear, non-negotiable requirement. They wanted to limit lateral movement inside the network and enforce strict communication policies between workloads. This is becoming increasingly common, especially in regulated industries and public sector environments where zero-trust principles are becoming operational requirements.

In the past, microsegmentation was often tied to premium software bundles. Organizations that needed this capability had little choice but to move into higher-tier licensing models, even if they did not require many of the additional features included in those bundles. The security requirement effectively forced them into a more expensive edition, regardless of their actual needs.

In this scenario, the customer was already using microsegmentation and wanted to retain that capability in the target architecture. The comparison was therefore not between a basic and a premium edition, but between two functionally equivalent setups. Both sides had to include network security features.

To make the comparison more realistic and representative of different customer sizes, three Nutanix options were modeled. All three were based on the NCI Ultimate edition, which includes microsegmentation capabilities, but they reflected different customer profiles and corresponding discount levels.

  • The first option represented a large enterprise environment. In this case, the customer had a high core count and a larger overall deal size, which typically qualifies for higher discount tiers. This option assumed a larger-scale deployment and the kind of commercial conditions that are common in enterprise agreements. It illustrated how the platform behaves economically when deployed at a significant scale.
  • The second option represented a mid-sized environment. Here, the core count and overall deal size were more moderate, leading to medium discount levels. This scenario is often closer to what many regional enterprises, healthcare providers, or mid-sized public sector organizations experience. It provided a balanced view between large enterprise conditions and smaller deployments.
  • The third option reflected a smaller environment, with a lower core count and standard discount levels. This was designed to show what the platform looks like in more typical, smaller-scale deployments, where customers operate under normal commercial conditions without large enterprise agreements.

Across all three options, the architectural assumptions remained consistent. The same security requirements applied, the same functionality was included, and the comparison remained technically equivalent. The only real differences were the scale of the environment and the corresponding commercial terms.
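A small sketch can illustrate how deal size translates into effective per-core cost. The list price and the discount tiers below are hypothetical placeholders invented for illustration; real values depend entirely on the specific agreement.

```python
# Illustrative only: how customer profile and discount tier shape per-core cost.
LIST_PRICE_PER_CORE = 300  # HYPOTHETICAL NCI Ultimate list price, $/core

profiles = [
    ("large enterprise", 4096, 0.40),  # high core count, higher discount tier
    ("mid-sized",        1024, 0.25),  # moderate core count, medium discounts
    ("smaller",           256, 0.10),  # lower core count, standard discounts
]

for name, cores, discount in profiles:
    effective = LIST_PRICE_PER_CORE * (1 - discount)
    print(f"{name:>16}: {cores:>5} cores @ ${effective:.0f}/core "
          f"-> ${cores * effective:,.0f} total")
```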

[Chart: Nutanix NCI Ultimate price comparison with microsegmentation]

In each of the three scenarios, the Nutanix configuration remained competitive, and in several cases came out lower in total software cost.

Scenario 3: Distributed Edge Environment

Instead of running a few large clusters in central data centers, some organizations suddenly find themselves operating dozens or even hundreds of small sites. Each location may only host a limited number of virtual machines (VMs), but the number of sites creates a very different licensing footprint.

In this scenario, the customer planned to run around 3’000 virtual machines distributed across roughly 250 edge locations. Each site consisted of only a small number of hosts, designed for local workloads and basic resilience – assume 3 hosts with 32 cores each per site, which adds up to 24’000 cores in total.

In traditional per-core licensing models, these kinds of distributed environments can become expensive very quickly. Even lightly utilized sites still require a certain number of cores to maintain resilience and availability. Multiply that by hundreds of locations, and the software cost grows faster than the actual workload.

Nutanix Cloud Infrastructure – Edge (NCI-Edge) is a distributed infrastructure platform for small edge deployments. It provides the same capabilities as NCI, combining compute, storage, and networking resources from a cluster of servers into a single logical pool with integrated resiliency, security, performance, and simplified administration. NCI-Edge is limited to a maximum of 25 VMs per cluster, with each VM limited to a maximum of 96 GB of memory. With NCI-Edge, organizations can efficiently extend the Nutanix platform to remote office/branch office (ROBO) and other edge use cases.
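The difference in licensing footprint can be expressed in a few lines. All quantities below come from the scenario itself; no prices are assumed.

```python
# Licensing footprint for the distributed edge scenario (quantities from the text).
SITES = 250
HOSTS_PER_SITE = 3
CORES_PER_HOST = 32
TOTAL_VMS = 3000

cores_to_license = SITES * HOSTS_PER_SITE * CORES_PER_HOST  # 24,000 cores
vms_per_site = TOTAL_VMS / SITES                            # 12 VMs per site
assert vms_per_site <= 25, "NCI-Edge caps clusters at 25 VMs"

print(f"Per-core model: {cores_to_license:,} cores to license")
print(f"NCI-Edge model: {TOTAL_VMS:,} VMs ({vms_per_site:.0f} per site)")
```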

When we modeled this scenario with a Nutanix-based architecture, using conservative assumptions and standard pricing, the outcome was different. The total software cost across all 250 sites was lower than the comparable alternative.

[Chart: NCI-Edge price comparison]

Edge licensing is all about predictability. The licensing model aligned more closely with the operational reality of the environment. Instead of being penalized for running many small sites, the customer could scale their footprint without unexpected increases in costs. The economics made sense for a distributed architecture.

For organizations with large retail networks, industrial edge scenarios, transportation systems, or geographically spread infrastructures, this predictability can be just as important as the absolute price. It allows them to plan growth, roll out new sites, and standardize operations without constantly renegotiating their licensing model.

Scenario 4: From Amazon EVS to Nutanix NC2

Many organizations that moved, or are planning to move, to VMware environments in the public cloud have a very practical reason. They want to keep their existing operational model, their tools, and their skill sets, while shifting the physical infrastructure into the data center of a cloud provider (AWS, Azure, GCP). The promise is always continuity without disruption.

At first glance, this approach makes sense. You avoid large migration projects, keep your processes intact, and simply relocate the environment. But the economics of these environments have started to change.

I am currently working with an organization that operates a full-stack private cloud at roughly $150 per core. On paper, that stack includes a wide range of capabilities. In reality, however, they only use a small portion of it: the core virtualization layer and basic monitoring and logging. No vSAN, no NSX. Just vSphere and Aria Operations.

Today, they run around 1’920 physical cores on-premises. As part of their cloud strategy, they are considering migrating to Amazon’s Elastic VMware Service (EVS) to exit their own data centers and align with a cloud-first approach. Because the EVS baremetal instances offer higher density, they expect to consolidate their environment to roughly 1’000 cores. Fewer cores, better utilization, same workloads.

Because Amazon EVS is a self-managed service, you are responsible for the lifecycle management and maintenance of the VMware software used in the Amazon EVS environment, such as ESX, vSphere, vSAN, NSX, and SDDC Manager. 

Note: Amazon EVS does not support VMware Cloud Foundation 9 at this time. Currently, the only supported VCF version is VCF 5.2.2 on i4i.metal instances.

That sounds like a straightforward cost-saving exercise, right? But the renewal dynamics tell a different story. Their Broadcom renewal is scheduled for summer 2027, and two scenarios are being discussed:

  • In the first scenario, a typical price increase of around 33 percent is assumed. That would move them from $150 to approximately $200 per core.
  • In the second scenario, the total contract value remains the same despite the reduced core count. In practical terms, that would mean $288 per core, an increase of about 92 percent compared to today.

In other words, even if they cut their footprint almost in half, their effective price per core could nearly double. This is where the discussion turned toward alternatives.
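The renewal dynamics are easy to verify. A short sketch, using only the figures already mentioned in this scenario:

```python
# Renewal math for Scenario 4, using only the figures from the text.
CORES_TODAY = 1920        # on-premises cores today
PRICE_TODAY = 150         # $/core for the current full stack
CORES_ON_EVS = 1000       # expected footprint after consolidation on EVS

contract_value = CORES_TODAY * PRICE_TODAY       # $288,000 per year today

price_a = PRICE_TODAY * 1.33                     # scenario A: ~33% increase
price_b = contract_value / CORES_ON_EVS          # scenario B: flat contract value

print(f"Today:      {CORES_TODAY} cores @ ${PRICE_TODAY}/core = ${contract_value:,}")
print(f"Scenario A: ${price_a:.0f}/core (+{price_a / PRICE_TODAY - 1:.0%})")
print(f"Scenario B: ${price_b:.0f}/core (+{price_b / PRICE_TODAY - 1:.0%})")
```

Scenario B is the less obvious one: keeping the contract value flat while halving the footprint silently turns $150 per core into $288 per core.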

We modeled the same environment using the Nutanix Cloud Platform (NCP) running as NC2 on AWS. It is important to clarify one common misconception here: NC2 is not a separate product with a different architecture. It is the same Nutanix software stack, NCI combined with NCM, deployed on baremetal instances in the public cloud. Operationally, it behaves exactly like an on-premises Nutanix environment.

[Figure: NC2 on AWS]

To reflect different functional needs, I modeled three options:

  • The first option was NCI Pro combined with NCM Starter. This configuration mirrors the customer’s current feature usage, avoiding unnecessary capabilities or “shelfware”. It represents a like-for-like replacement of the existing functionality.
  • The second option used NCI Ultimate with NCM Starter. This added more advanced storage and data services, along with microsegmentation capabilities, giving the customer a richer feature set than they have today.
  • The third option was the full Nutanix Cloud Platform Ultimate stack, including the complete set of infrastructure, automation, and advanced platform services.

Even with these different configurations, the results were consistent. All three Nutanix options came in significantly below the expected VMware renewal costs.

Compared to a VMware renewal at $200 per core, the estimated savings looked roughly as follows:

  • NCI Pro + NCM Starter: About 33 percent lower
  • NCI Ultimate + NCM Starter: About 18 percent lower
  • NCP Ultimate: About 24 percent lower (higher discount for the full-stack approach)

If the worst-case scenario of $288 per core were to materialize, the savings would be even higher, ranging from approximately 43 to 54 percent per year!
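One way to sanity-check that range: since the Nutanix options are fixed in absolute terms, the savings scale with the VMware per-core price. A quick sketch of the arithmetic:

```python
# Sanity check: if the Nutanix cost is fixed, savings scale with the VMware price.
savings_at_200 = {
    "NCI Pro + NCM Starter": 0.33,
    "NCI Ultimate + NCM Starter": 0.18,
    "NCP Ultimate": 0.24,
}

for option, s in savings_at_200.items():
    nutanix_cost = (1 - s) * 200              # $/core equivalent of the option
    savings_at_288 = 1 - nutanix_cost / 288   # same cost vs. the worst case
    print(f"{option}: {s:.0%} at $200/core -> {savings_at_288:.0%} at $288/core")
```

Running this reproduces the roughly 43 to 54 percent range quoted above.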

[Chart: Nutanix price comparison – Amazon EVS to NC2]

As in the other scenarios, the interesting part was not just the price difference. It was the combination of cost predictability and architectural flexibility. With NC2, the customer could run the same platform on-premises and in the cloud, move workloads between locations, and avoid being tied to a single proprietary cloud virtualization stack.

To support the transition from VMware to Nutanix on NC2, migrations are typically handled with Nutanix Move. This tool allows customers to replicate and migrate virtual machines from existing VMware environments into Nutanix clusters with minimal disruption, reducing the complexity of the platform shift.

In this scenario, the outcome once again challenged the old perception. When modeled with realistic assumptions and current pricing dynamics, Nutanix was highly cost-competitive. It offered both a lower platform cost and a more flexible long-term architecture.

Scenario 5: Updated Benchmarks, Different Results

Perhaps one of the most revealing examples was not a technical scenario at all, but a simple conversation.

In one engagement, a partner mentioned that their internal Nutanix benchmark was more than two years old. Those numbers had shaped their perception of the platform and influenced how they positioned Nutanix in front of customers. Over time, the benchmark had become an accepted reference point, even though no one had revisited the assumptions underlying it.

When we recalculated the scenario (VCF vs. NCI Pro with the Advanced Replication add-on) using current licensing models, realistic configurations, and today’s pricing structures, the outcome was very different from what they expected. The Nutanix solution turned out to be cheaper than their old benchmark suggested.

The important insight here was not the percentage difference or the exact numbers on the spreadsheet. It was the realization that the entire perception had been built on outdated data. The conclusion they had carried forward for years no longer reflected the reality of the current market.

This experience is not unique. Many organizations still rely on benchmarks, cost models, or architectural assumptions that were created several years ago. Since then, licensing structures have evolved, bundles have changed, and the economics of different platforms have shifted. But the original perception often remains untouched.

In conversations with customers and partners, I frequently hear a similar sentence: “Our Nutanix benchmark might be outdated”. That simple realization often marks the turning point in the discussion. Because once the numbers are recalculated with current data, the story tends to change and the outcome is no longer predetermined. 

Addressing the Renewal Myth

Another concern that often surfaces in conversations is the idea that Nutanix offers an attractive entry price, only to significantly increase costs at renewal time.

This narrative circulates in online forums, informal discussions, and peer-to-peer exchanges. In a market where many organizations have recently experienced unexpected price increases from other vendors, it is understandable that customers approach any new platform with a certain level of skepticism. Trust in licensing models has been shaken, and nobody wants to repeat the same experience a few years down the road.

But in practice, this perception does not reflect how most Nutanix engagements actually unfold. In many cases, Nutanix is able to provide multi-year price guarantees, giving customers clarity not only about the initial investment, but also about what they can expect over the next several years. Instead of treating pricing as a short-term negotiation, the conversation often shifts toward long-term planning and predictability.

This does not mean that prices will remain frozen forever. No software vendor can realistically promise that. Over time, platforms evolve, new features are introduced, innovation continues, and inflation affects the cost structure. It is normal for software pricing to adjust over a multi-year horizon.

The difference lies in transparency.

Rather than hiding future changes behind complex contracts or vague terms, Nutanix is often willing to put the long-term numbers on the table early in the process. Customers can see not only what they pay today, but also how the platform is expected to evolve financially over time. That creates a different kind of conversation – one based on planning and predictability instead of uncertainty.

For many organizations, especially those in regulated industries or the public sector, that predictability is more important than the absolute entry price. It allows them to align budgets, procurement cycles, and strategic roadmaps without the fear of sudden surprises at renewal time.

What Customers Actually Value

Once the initial price discussion is out of the way, the tone of the conversation usually changes. The focus shifts from raw numbers to what the platform actually delivers in day-to-day operations.

At this stage, customers are asking whether it fits their architecture, their processes, and their long-term strategy. And across many conversations, certain themes tend to appear again and again.

One of the most frequently mentioned aspects is the modularity of the platform. Customers appreciate that Nutanix does not force them into a single, monolithic bundle for every use case. A large data center, a VDI environment, and a small edge site may not require the same software edition. With Nutanix, these environments can be licensed differently based on their actual requirements. This flexibility allows customers to align their licensing model with their architecture, instead of reshaping their architecture to fit the licensing.

Another recurring theme is the architectural simplicity of hyperconverged infrastructure itself. Many customers value a distributed system that integrates compute and storage, builds resilience into the platform, and reduces external dependencies. There is no separate SAN to manage, no complex compatibility matrix between multiple storage and compute components. For teams that want to reduce operational overhead and complexity, this design principle often resonates more strongly than any individual feature.

Support quality is another topic that comes up regularly. Nutanix consistently achieves a Net Promoter Score (NPS) above 90, which is unusually high in the enterprise infrastructure space. Customers often describe the support experience as direct and focused, with engineers who stay engaged until the issue is resolved. For organizations that have struggled with multi-vendor support models in the past, this can be a significant improvement.

The ecosystem also plays an important role. Nutanix continues to work closely with major OEM partners such as Dell, Lenovo, HPE, and Cisco. For many customers, especially in the public sector, this is more than a technical detail. It means they can procure hardware through existing framework contracts, trusted suppliers, and established procurement channels, while still running a modern, consistent software platform.

In addition, the platform is gradually opening up to more flexible architectures. Nutanix has introduced support for external storage integrations, starting with platforms from Dell and Pure Storage, with further options expected over time. This gives customers more freedom in how they design their environments, especially if they want to reuse existing storage investments or follow a disaggregated approach for certain workloads.

Taken together, these themes paint a clear picture. Once the price question is answered, the decision is rarely about a single feature or a benchmark number. It becomes a broader evaluation of architecture, operational simplicity, support experience, and long-term flexibility.

And in many of those discussions, that combination of qualities is what makes the platform stand out.

Price Opens the Door. Value Closes the Deal.

If you look across all the scenarios and customer discussions, a consistent pattern begins to emerge.

Price is almost always the starting point. It determines whether a platform even makes it onto the shortlist. In today’s market, where many organizations are under pressure to control costs and justify every investment, that first filter has become more important than ever. If a solution is clearly out of budget, the conversation usually ends before it truly begins.

But we all know that price is rarely the final decision factor.

Once customers see that Nutanix is within their financial reach, or in some cases even cheaper than the alternatives, the focus changes. The discussion moves from license metrics and discount levels to the day-to-day realities of running the platform. This is the moment when the conversation shifts from procurement to platform strategy.

Customers begin to consider how much time they spend on upgrades, how complex their current environment has become, how many vendors they have to coordinate during incidents, and how predictable their infrastructure roadmap really is. They start to evaluate not just what the platform costs today, but what it means for their operations over the next five or ten years.

And that is often where Nutanix stands out!

The platform may not always be the absolute cheapest option in every possible scenario. No serious technology decision should be based on a single number alone. But the blanket statement that Nutanix is inherently expensive does not hold up when you look at real environments with current data. 

10 Things You Probably Didn’t Know About Nutanix

Nutanix is often described with a single word: HCI. That description is not wrong, but it is incomplete.

Over the last decade, Nutanix has evolved from a hyperconverged infrastructure (HCI) pioneer into a mature enterprise cloud platform that now sits at the center of many VMware replacement strategies, sovereign cloud designs, and edge architectures. Yet much of this evolution remains poorly understood, partly because old perceptions persist longer than technical reality.

Here are ten things about Nutanix that people often don’t know or underestimate.

1. Nutanix’s DNA is HCI, but the architecture has evolved beyond it

Nutanix was built on hyperconverged infrastructure. That heritage is important, because it shaped the platform’s operational model, automation mindset, and lifecycle discipline.

Over the last years, Nutanix deliberately opened its architecture. Today, compute-only nodes are a possibility, enabled through partnerships with external storage vendors – currently Pure Storage and Dell (PowerStore support for Nutanix is expected to enter early access in spring 2026, with general availability in summer 2026). This allows customers to decouple compute and storage where it makes architectural or economic sense, without abandoning the Nutanix control plane.

This is Nutanix acknowledging that real enterprise environments are heterogeneous, and that flexibility matters.

2. A Net Promoter Score above 90

Nutanix has reported an NPS consistently above 90 for several years. In enterprise infrastructure, that number is almost unheard of.

NPS reflects how customers feel after deployment, during operations, upgrades, incidents, and daily use. In a market where infrastructure vendors are often tolerated rather than liked, this level of advocacy is unique and tells a story of its own.

It suggests that Nutanix’s real differentiation is not just technology, but operational experience. That tends to show up only once systems are running at scale.

3. Nutanix Kubernetes Platform runs almost everywhere

Nutanix Kubernetes Platform (NKP) is often misunderstood as “Kubernetes on Nutanix”. That is only partially true.

NKP can run on:

  • Bare metal
  • Nutanix AHV
  • VMware
  • Public cloud infrastructure

[Figure: Nutanix Cloud Native Platform]

NKP was designed to abstract infrastructure differences rather than enforce platform lock-in. For organizations that already operate mixed environments, or that want to transition gradually, this matters far more than ideological purity.

In practice, NKP becomes a control layer for Kubernetes. That is especially relevant in regulated or sovereign environments where infrastructure choices are often political as much as technical.

4. Nutanix has matured from “challenger” to enterprise-grade platform

It’s honest to acknowledge that Nutanix wasn’t always considered enterprise-ready. In its early years, the company was widely admired for innovation and simplicity, but many large organizations hesitated because the platform, like all young software, had feature gaps, stability concerns in some use cases, and a smaller track record with mission-critical workloads.

That landscape has changed significantly. Over the past several years, Nutanix has steadily strengthened every axis of its platform. From virtualization and distributed storage to Kubernetes, security, and operations at scale. The company’s most recent financial results show that this maturity isn’t theoretical. Fiscal 2025 delivered 18 percent year-over-year revenue growth, strong recurring revenue expansion, and thousands of new customers, including over 50 Global 2000 accounts, arguably its strongest annual new-logo performance in years.

What this means in practice is that many enterprises that once saw Nutanix as a “challenger” now see it as a credible and proven alternative to VMware, and not just in smaller or departmental deployments, but across core data center and hybrid cloud estates.

The old maturity gap has largely disappeared. What remains is a difference of philosophy. Nutanix prioritizes operational simplicity, flexibility, and choice, without compromising the robustness that large organizations demand. And with increasing adoption among Global 2000 enterprises, that philosophy is proving not only viable but competitive at the highest levels of IT decision-making.

5. The “Nutanix is expensive” perception is outdated and often wrong

The idea that Nutanix is more expensive than competitors is one of the most persistent myths in the market. It was shaped by early licensing models and by superficial price comparisons that ignored operational and architectural differences.

Today, Nutanix offers multiple licensing models, including options that other vendors simply do not have.

For example, NCI-VDI for Citrix or Omnissa environments is licensed based on concurrent users (CCU) rather than physical CPU cores. That aligns cost directly with usage and not hardware density.

Even more interesting is NCI Edge, which is designed for distributed environments with smaller footprints (aka ROBO). It is licensed per virtual machine, with clear boundaries:

  • A maximum of 25 VMs per cluster
  • A maximum of 96 GB RAM per VM

Consider a realistic example. An organization runs 250 edge sites. Each site has a 3-node cluster with 32 cores per node and hosts 20 VMs:

  • A core-based model would require licensing 24’000 cores
  • With NCI Edge, the customer licenses 5’000 VMs

This fundamentally changes the cost structure of edge and remote deployments. In a traditional core-based licensing model, effective costs might range from $100 to $140 per core for edge nodes. With NCI Edge, the effective per-core cost can drop to $60-80 (illustrative figures). This is not a marginal optimization; it’s huge.
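Using the illustrative per-core figures above, the totals look like this. A sketch only: the per-VM price implied by each effective rate is derived from those figures, not an actual list price.

```python
# Edge example with the text's illustrative figures (not actual list prices).
SITES, HOSTS_PER_SITE, CORES_PER_HOST, VMS_PER_SITE = 250, 3, 32, 20
total_cores = SITES * HOSTS_PER_SITE * CORES_PER_HOST   # 24,000 cores
total_vms = SITES * VMS_PER_SITE                        # 5,000 VMs

for per_core in (100, 140):   # traditional core-based range
    print(f"core-based @ ${per_core}/core: ${total_cores * per_core:,}")

for per_core in (60, 80):     # NCI-Edge effective range from the text
    total = total_cores * per_core
    print(f"NCI-Edge   @ ${per_core}/core: ${total:,} "
          f"(implies ~${total / total_vms:.0f}/VM)")
```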

Note: NCM-Edge is a product that provides the same capabilities as NCM for edge use cases and, like NCI-Edge, is limited to a maximum of 25 VMs per cluster.

6. Almost 90% of Nutanix customers now use AHV

Nutanix has always been fundamentally about HCI and AOS (Acropolis Operating System). From the beginning, the value was never the hypervisor itself, but the distributed storage, data services, and operational model built on top of it. Over time, Nutanix came to a clear conclusion: the hypervisor should be a commodity, not the value anchor of the platform. Out of this thinking emerged the perception, and later the expression, that AHV is “free”.


Today, AHV has become the dominant deployment model in the Nutanix ecosystem, with an adoption rate of 88%. This matters for two important reasons. First, it disproves the assumption that customers need to be pushed or incentivized to move to AHV. Second, it demonstrates that AHV is trusted to run mission-critical workloads at scale, across enterprises and service providers.

7. Nutanix is 100% channel-led

Nutanix does not sell directly to customers (with a few exceptions, of course :)). It is a channel-led vendor by design, and that decision fundamentally shapes how the company operates in the market. Channel commitment at Nutanix is a structural principle.

Partners are not treated as a fulfillment layer or a transactional necessity. They are core to how Nutanix delivers value – from architecture design and implementation to day-two operations, managed services, and long-term customer success. As a result, Nutanix has built one of the strongest partner and service provider ecosystems in the industry, with clear incentives, predictable rules, and room for partners to build sustainable businesses.

This stands in sharp contrast to the current direction of some other infrastructure vendors, where channel models have become more restrictive, less transparent, and increasingly centered around direct control. In that environment, partners often struggle with margin pressure, reduced influence, and uncertainty about their long-term role.

Nutanix takes a different approach. By staying channel-led, it enables local expertise, regional sovereignty, and trusted delivery models, which are especially critical in public sector, regulated industries, and markets where locality and compliance matter as much as technology.

8. MST and Cloud Native AOS show how far Nutanix has moved beyond classic HCI

Most people associate Nutanix AOS with hyperconverged infrastructure and VM-centric deployments. What is far less known is how deeply Nutanix has evolved its data platform to address multi-cloud and cloud-native architectures.

One example is MST (Multi-Cloud Snapshot Technology). MST enables application-consistent snapshots to be replicated across heterogeneous environments, including on-premises infrastructure and public clouds. Unlike traditional disaster-recovery approaches that assume identical infrastructure on both sides, MST is designed for asymmetric, real-world scenarios. This makes it possible to use the public cloud as a recovery or failover target without re-architecting workloads or maintaining a second, identical private environment. 

[Figure: MST diagram]

In parallel, Nutanix has introduced Cloud Native AOS, which brings enterprise-grade storage and data services directly into Kubernetes environments. Instead of tying storage to virtual machines or specific infrastructure stacks, Cloud Native AOS runs as a Kubernetes-native service and can operate across diverse platforms. This allows stateful applications to benefit from Nutanix data services, such as snapshots, replication, and resilience, without forcing teams back into VM-centric models.

Together, MST and Cloud Native AOS illustrate an important point. Nutanix is not simply extending HCI into new form factors. It is re-architecting core data services to work across clouds, infrastructures, and application models. These capabilities are often overlooked, but they are strong indicators of where the platform is heading – toward data mobility, resilience, and consistency across increasingly fragmented environments.

[Figure: EKS cluster]

9. Nutanix SaaS without forcing SaaS

Nutanix offers SaaS-based services such as Data Lens and Nutanix Central. These services are also available on-premises, including for air-gapped environments.

This dual-delivery model recognizes that not all customers can or should consume control planes as public SaaS. 

10. Nutanix has more than a decade of real-world experience replacing VMware

Nutanix has operated alongside VMware for more than ten years, in many cases within the same environments. As a result, replacing vSphere is not a new ambition or a reactive strategy for Nutanix. It is a long-standing and proven reality.

Equally important is the migration experience. Nutanix Move was built specifically to address one of the most critical challenges in any platform transition. It’s about getting workloads across safely, predictably, and at scale. Move supports migrations from vSphere, Hyper-V, AWS, and other environments, enabling phased and low-risk transitions rather than disruptive “big bang” projects. Beyond workload migration, Move can also translate NSX network and security policies into Nutanix Flow, addressing one of the most commonly cited blockers in VMware exit strategies.

Nutanix has spent more than a decade refining these aspects across thousands of customer environments, which is why many organizations today view it as a credible, de-risked alternative for the long term.

Conclusion

For organizations reassessing their infrastructure strategy, whether driven by VMware uncertainty, edge expansion, regulatory pressure, or cloud cost realities, Nutanix should be at the top of your list. It is a proven platform with a clear philosophy, a growing enterprise footprint, and more than a decade of hard-earned experience. If Nutanix is still filed under “HCI” in your mind, it may be time to look again – this time at the full picture! 🙂

Cloud Repatriation and the Growth Paradox of Public Cloud IaaS

Over the past two years, a new narrative has taken hold in the cloud market. No, it is not always about sovereign cloud. 🙂 Headlines talk about cloud repatriation – nothing really new, but it is still out there. CIOs speak openly about pulling some workloads back on-premises. Analysts write about organizations “correcting” some earlier cloud decisions to optimize cloud spend. In parallel, hyperscalers themselves now acknowledge that not every workload belongs in the public cloud.

And yet, when you look at the data, you will find a paradox.

IDC and Gartner both project strong, sustained growth in public cloud IaaS spending over the next five years. Not marginal growth or signs of stagnation, but a market that continues to expand at scale, absorbing more workloads, more budgets, and more strategic relevance every year.

At first glance, these two trends appear contradictory. If organizations are repatriating workloads, why does public cloud IaaS continue to grow so aggressively? The answer lies in understanding what is actually being repatriated, what continues to move to the cloud, and how infrastructure constraints are reshaping decision-making in ways that are often misunderstood.

Cloud Repatriation Is Real, but Narrower Than the Narrative Suggests

Cloud repatriation is not a myth. It is happening, but it is also frequently misinterpreted.

Most repatriation initiatives are highly selective. They focus on predictable, steady-state workloads that were lifted into the public cloud under assumptions that no longer hold. Cost transparency has improved, egress fees are better understood, and operating models have matured. What once looked flexible and elastic is now seen as expensive and operationally inflexible for certain classes of workloads.

What is rarely discussed is that repatriation does not mean “leaving the cloud”. It means rebalancing: organizations are not abandoning public cloud IaaS as a concept, they are refining their usage of it.

At the same time, some new workloads continue to flow into public cloud environments. Digital-native applications, analytics platforms, some AI pipelines, globally distributed services, and short-lived experimental environments still align extremely well with public cloud economics and operating models. These workloads were not part of the original repatriation debate, and they seem to be growing faster than traditional workloads are being pulled back.

This is how both statements can be true at the same time. Cloud repatriation exists, and public cloud IaaS continues to grow.

The Structural Drivers Behind Continued IaaS Growth

Public cloud IaaS growth is not driven by blind enthusiasm anymore. It is driven by structural forces that have little to do with fashion and everything to do with constraints.

One of the most underestimated factors is time. Building infrastructure takes time, procuring hardware takes time, and scaling data centers takes time. Many organizations today are not choosing public cloud because it is cheaper or “better”, but because it is available now.

This becomes even more apparent when looking at the hardware market right now.

Hardware Shortages and Rising Server Prices Change the Equation

The infrastructure layer beneath private clouds has suddenly become a bottleneck. Server lead times have increased, GPU availability is constrained and prices for enterprise-grade hardware continue to rise, driven by supply chain pressures, higher component costs, and growing demand from AI workloads.

For organizations running large environments, this introduces a new type of risk. Capacity planning is now a logistical problem, no longer just a financial exercise. Even when budgets are approved, hardware may not arrive in time. That is the new reality.

In this context, public cloud data centers represent something extremely valuable: pre-existing capacity. Hyperscalers have already made the capital investments and they already operate at scale. From the customer perspective, infrastructure suddenly looks abundant again.

This is why many organizations currently consider shifting workloads to public cloud IaaS, even if they were previously skeptical. It has become a pragmatic response to scarcity.

The Flawed Assumption: “Just Use Public Cloud Instead of Buying Servers”

However, this line of thinking often glosses over a critical distinction.

Many of these organizations do not actually want “cloud-native” infrastructure, if we are being honest here. What they want is physical capacity: compute, storage, and networking with predictable performance characteristics. In other words, they want VMs and bare metal.

Buying servers allows organizations to retain architectural freedom. It allows them to choose their operating system or virtualization stack, their security model, their automation tooling, and their lifecycle strategy. Public cloud IaaS, by contrast, delivers abstraction, but at the cost of dependency.

When organizations consume IaaS services from hyperscalers, they implicitly accept constraints around instance types, networking semantics, storage behavior, and pricing models. Over time, this shapes application architectures and operational processes, and the use of these services quietly becomes a lock-in of its own.

Bare Metal in the Public Cloud Is Not a Contradiction

Interestingly, the industry has started to converge on a hybrid answer to this dilemma: bare metal in the public cloud.

Hyperscalers themselves offer bare-metal services. This is an acknowledgment that not all customers want fully abstracted IaaS. Some want physical control without owning physical assets. It is as simple as that.

But bare metal alone is not enough. Without a consistent cloud platform on top, bare metal in the public cloud becomes just another silo. You gain performance and isolation, but you lose portability and operational consistency.

Nutanix Cloud Clusters and the Reframing of IaaS

Nutanix Cloud Platform running on AWS, Azure, and Google Cloud through NC2 (Nutanix Cloud Clusters) introduces a different interpretation of public cloud IaaS.

Instead of consuming hyperscaler-native IaaS primitives, customers deploy a full private cloud stack on bare-metal instances in public cloud data centers. From an architectural perspective, this is a subtle but profound difference.

Customers still benefit from the hyperscaler’s global footprint and hardware availability and they still avoid long procurement cycles, but they do not surrender control of their cloud operating model. The same Nutanix stack runs on-premises and in public cloud, with the same APIs, the same tooling, and the same governance constructs.

Workload Mobility as the Missing Dimension

The most underappreciated benefit of this approach is workload mobility.

In a cloud-native bare-metal deployment tied directly to hyperscaler services, workloads tend to become anchored, migration becomes complex, and exit strategies are theoretical at best.

With NC2, workloads are portable by design. Virtual machines and applications can move between on-premises environments and public cloud (or a service provider cloud) bare-metal clusters without refactoring. In practical terms, this means organizations can use public cloud capacity tactically rather than strategically committing to it. Capacity shortages, temporary demand spikes, regional requirements, or regulatory constraints can be addressed without redefining the entire infrastructure strategy.

This is something traditional IaaS does not offer, and something pure bare-metal consumption does not solve on its own.

Reconciling the Two Trends

When viewed through this lens, the contradiction between cloud repatriation and public cloud IaaS growth disappears.

Public cloud is growing because it solves real problems: availability, scale, and speed. Repatriation is happening because not all problems require abstraction, and not all workloads benefit from cloud-native constraints.

The future is not a reversal of cloud adoption. It is a maturation of it.

Organizations are asking how to use public clouds without losing control. Platforms that allow them to consume cloud capacity while preserving architectural independence are not an alternative to IaaS growth; they are one of the reasons that growth can continue without triggering the next wave of regret-driven repatriation.

What complicates this picture further is that even where public cloud continues to grow, many of its original economic promises are now being questioned again.

The Broken Promise of Economies of Scale

One of the foundational assumptions behind public cloud adoption was economies of scale. The logic seemed sound. Hyperscalers operate at a scale no enterprise could ever match. Massive data centers, global procurement power, highly automated operations. All of this was expected to translate into continuously declining unit costs, or at least stable pricing over time.

As we know by now, that assumption has not materialized.

If economies of scale were truly flowing through to customers, we would not be witnessing repeated price increases across compute, storage, networking, and ancillary services. We would not see new pricing tiers, revised licensing constructs, or more aggressive monetization of previously “included” capabilities. The reality is that public cloud pricing has moved in one direction for many workloads, and that direction is up.

This does not mean hyperscalers are acting irrationally. It means the original narrative was incomplete. Yes, scale does reduce certain costs, but it also introduces new ones, as do new innovations and services. Energy prices, land, specialized hardware, regulatory compliance, security investments, and the operational complexity of running globally distributed platforms all scale accordingly. Add margin expectations from capital markets, and the result is not a race to the bottom, but disciplined price optimization.

For customers, however, this creates a growing disconnect between expectation and reality.

When Forecasts Miss Reality

More than half of organizations report that their public cloud spending diverges significantly from what they initially planned. In many cases, the difference is not marginal. Budgets are exceeded, cost models fail to reflect real usage patterns, optimization efforts lag behind application growth.

What is often overlooked is the second-order effect of this divergence. Over a third of organizations report that cloud-related cost and complexity issues directly contribute to delayed projects. Migration timelines slip, modernization initiatives stall, and teams slow down not because technology is unavailable, but because financial and operational uncertainty creeps into every decision.

Commitments, Consumption, and a Structural Risk

Most large organizations do not consume public cloud on a purely on-demand basis. They negotiate commitments, reserved capacity, and spend-based discounts. These are strategic agreements designed to lower unit costs in exchange for predictable consumption.

These agreements assume one thing above all else: that workloads will move. They HAVE TO move.

When migrations slow down, a new risk pops up. Organizations fail to reach their committed consumption levels, because they cannot move workloads fast enough. Legacy architectures, migration complexity, skill shortages, and governance friction all play a role.

The consequence is subtle but severe. Committed spend still has to be paid, and future negotiations become weaker because of it. The organization enters the next contract cycle with a track record of underconsumption, reduced leverage, and less credibility in forecasting.

In effect, execution risk turns into commercial risk.

This dynamic is rarely discussed publicly, but it is increasingly common in private conversations with CIOs and cloud leaders. The challenge is no longer whether the public cloud can scale, but whether the organization can.

Speed of Migration as an Economic Variable

At this point, migration speed stops being a technical metric and becomes an economic one. The faster workloads can move, the faster negotiated consumption levels can be reached. The slower they move, the more value leaks out of cloud agreements.
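A toy model makes the point tangible. Every number below is hypothetical; the shape of the outcome is what matters:

```python
# Toy model: migration speed vs. committed cloud spend (all numbers hypothetical).
ANNUAL_COMMIT = 5_000_000       # committed spend per year
SPEND_PER_WORKLOAD = 10_000     # average annual cloud spend per migrated workload

for migrated_per_year in (200, 400, 600):
    consumed = min(migrated_per_year * SPEND_PER_WORKLOAD, ANNUAL_COMMIT)
    unused = ANNUAL_COMMIT - consumed
    print(f"{migrated_per_year} workloads/year -> "
          f"consumed ${consumed:,}, unused commit ${unused:,}")
```

In this toy setup, migrating 200 workloads per year leaves $3 million of the commit unused; at 600 per year the commit is fully consumed. The unit prices are invented, but the mechanism is exactly the one described above.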

This is where many cloud-native migration approaches struggle. Refactoring takes time and re-architecting applications is expensive. Not every workload is a candidate for transformation under real-world constraints.

As a result, organizations are caught between two pressures. On one side, the need to consume public cloud capacity they have already paid for. On the other, the inability to move workloads quickly without introducing unacceptable risk.

NC2 as a Consumption Accelerator, Not a Shortcut

This is where Nutanix Cloud Platform with NC2 changes the conversation.

By allowing organizations to run the same private cloud stack on bare metal in AWS, Azure, and Google Cloud, NC2 removes one of the biggest bottlenecks in migration programs: the need to change how workloads are built and operated before they can move.

Workloads can be migrated as they are, operating models remain consistent, governance does not have to be reinvented, and teams do not need to learn a new infrastructure paradigm under time pressure. It’s all about efficiency and speed.

Faster migrations mean workloads start consuming public cloud capacity earlier and the negotiated consumption targets suddenly become achievable. Commitments turn into realized value rather than sunk cost, and the organization regains control over both its migration timeline and its commercial position.

Reframing the Role of Public Cloud

In this context, NC2 is not an alternative to public cloud economics, but a mechanism to actually realize them.

Public cloud providers assume customers can move fast. In reality, many customers cannot, not because they resist change, but because change takes time. Platforms that reduce friction between private and public environments do not undermine cloud strategies. They are here to stabilize them. And they definitely can!

The uncomfortable truth is that economies of scale alone do not guarantee better outcomes for customers, execution does. And execution, in large enterprises, depends less on ideal architectures and more on pragmatic paths that respect existing realities.

When those paths exist, public cloud growth and cloud repatriation stop being opposing forces. They become two sides of the same maturation process, one that rewards platforms designed not just for scale, but for transition.