Why Workloads Are Really Repatriating to Private Cloud and How to Prepare for AI

In the beginning, renting won. Managed services and elastic capacity let teams move faster than procurement cycles, and the “convenience tax” felt like a bargain. A decade later, many enterprises have discovered what one high-profile cloud exit made clear: the same convenience that speeds delivery can erode margins at scale. That realization is driving a new wave of selective repatriation, moving the right workloads from hyperscale public clouds back to private cloud platforms, while a second force emerges at the same time: AI is changing what a data center needs to look like. Any conversation about bringing workloads home that ignores AI-readiness is incomplete.

What’s really happening (and what isn’t)

Repatriation today is targeted. IDC’s Server and Storage Workloads Survey found that only ~8-9% of companies plan full repatriation. Most enterprises bring back specific components, such as production data, backup pipelines, or compute, where the economics, latency, or exit risk justify it.

Media coverage has sharpened the picture. CIO.com frames repatriation as strategic workload placement rather than a retreat. InfoWorld’s look at 2025 trends notes rising data-center use even as public-cloud spend keeps growing. Forrester’s 2025 predictions echo the coexistence: public cloud expands, and private cloud thrives alongside it. Hybrid is normal. Sovereignty, cost control, and performance are the levers.

And then there are the headline case studies. 37signals (Basecamp/HEY) publicized their journey off AWS – deleting their account in 2025 after moving storage to on-prem arrays and citing seven-figure annual savings on S3 alone. Whether or not your estate looks like theirs, it crystallized the idea that the convenience premium can outgrow its value at scale.

Why the calculus changed

Unit economics at scale. Per-unit cloud pricing that felt fine at 100 TB looks different at multiple PB, especially once you add data egress, cross-AZ traffic, and premium managed services. Well-known examples, such as Dropbox’s earlier move to owned storage, show material savings when high-volume, steady-state workloads move to owned capacity.
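To make the scale effect tangible, here is a minimal back-of-the-envelope sketch. Every rate in it is an illustrative assumption (rough order of magnitude for object storage, egress, and amortized owned capacity), not a quote from any provider, and the 10% monthly egress ratio is invented for the example.

```python
# Back-of-the-envelope comparison of cloud object storage vs. owned capacity.
# All prices are illustrative assumptions, not quotes: roughly $0.022/GB-month
# for standard object storage, $0.09/GB for internet egress, and a fully
# amortized on-prem cost of ~$0.01/GB-month (hardware, power, space, people).

def monthly_cloud_cost(tb_stored: float, tb_egressed: float,
                       storage_per_gb: float = 0.022,
                       egress_per_gb: float = 0.09) -> float:
    """Monthly spend for steady-state object storage plus egress traffic."""
    gb = 1024
    return tb_stored * gb * storage_per_gb + tb_egressed * gb * egress_per_gb

def monthly_owned_cost(tb_stored: float, all_in_per_gb: float = 0.01) -> float:
    """Amortized monthly cost of owned capacity (assumed all-in rate)."""
    return tb_stored * 1024 * all_in_per_gb

for tb in (100, 1_000, 5_000):        # 100 TB, 1 PB, 5 PB
    egress = tb * 0.10                # assume 10% of the data set egresses monthly
    cloud = monthly_cloud_cost(tb, egress)
    owned = monthly_owned_cost(tb)
    print(f"{tb:>6} TB  cloud ~ ${cloud:>10,.0f}/mo   owned ~ ${owned:>9,.0f}/mo")
```

The absolute numbers will differ per estate; the point is that both the cloud and the owned curves scale linearly, so the gap between them grows with every petabyte of steady-state data.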

Performance locality and control. Some migrations lifted and shifted latency-sensitive systems into the wrong place. Round-trip times, noisy neighbors, or throttling can make the public cloud an expensive place to be for chatty, tightly coupled apps. Industry coverage repeatedly points to “the wrong workload in the wrong spot” as a repatriation driver. 

Sovereignty and exit risk. Regulated industries must reconcile GDPR/DORA-class obligations and the US CLOUD Act with how and where data is processed. The mid-market is echoing this too. Surveys show a decisive tilt toward moving select apps for compliance, control, and resilience reasons. 

FinOps maturity. After a few budgeting cycles, many teams have better visibility into cloud variability and the true cost of managed services. Some will optimize in-place, others will re-platform components where private cloud wins over a 3-5 year horizon.

Don’t bring it back to a 2015 data center

Even if you never plan to train frontier models, AI has changed the physical design targets. Racks that once drew 8-12 kW now need to host 30-50 kW routinely and 80-100+ kW for dense GPU nodes. Next-gen AI racks can approach 1 MW per rack in extreme projections.
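A rough capacity check shows why this matters for retrofits. The sketch below uses the density ranges just mentioned plus an assumed 1.2 MW of usable IT power for an existing room; both figures are illustrative, not a site survey.

```python
# Rough capacity check: how many racks fit in a fixed power envelope as
# per-rack density climbs? Densities mirror the ranges cited in the text.

room_power_kw = 1_200          # assumed usable IT power for the room (1.2 MW)

densities_kw = {
    "2015-era virtualization rack": 10,
    "modern mixed/AI-adjacent rack": 40,
    "dense GPU training/inference rack": 90,
}

for label, kw_per_rack in densities_kw.items():
    racks = room_power_kw // kw_per_rack
    print(f"{label:<35} {kw_per_rack:>3} kW/rack -> {racks:>3} racks in the same room")
```

The same room that hosted 120 legacy racks supports barely a dozen dense GPU racks, before you even consider whether the cooling plant can remove that heat.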

Figure: Evolution of power consumption and dissipation per rack (2000-2030). Image credit: Lennox Data Center Solutions.

Air alone won’t be enough. Direct-to-chip or immersion liquid cooling, higher-voltage distribution, and smarter power monitoring become minimum requirements. European sites face grid constraints that make efficiency and modular growth plans essential.

This is the retrofit conversation many teams are missing. If you repatriate analytics, vector databases, or LLM inference and can’t cool them, you’ve just traded one bottleneck for another.

How the analysts frame the decision

A fair reading across recent coverage lands on three points:

  1. Hybrid wins. Public cloud spend grows, and so do private deployments, because each has a place. Use the public cloud for burst, global reach, and cutting-edge managed AI services. Use the private cloud for steady-state, regulated (sovereign), chatty, or data-gravity workloads.
  2. Repatriation is selective. It’s about fit. Data sets with heavy egress, systems with strict jurisdiction rules, or platforms that benefit from tight locality are top candidates.
  3. AI is now a first-order constraint. Power, cooling, and GPU lifecycle management change the platform brief. Liquid cooling and higher rack densities stop being exotic and become practical requirements.

Why Nutanix is the safest private cloud bet for enterprises and the regulated world

If you are going to own part of the stack again, two things matter: operational simplicity and future-proofing. This is where Nutanix stands out.

A single control plane for private, hybrid, and edge. Nutanix Cloud Platform (NCP) lets you run VMs, files/objects, and containers with one operational model across on-prem and public cloud extensions. It’s built for steady-state enterprise workloads and the messy middle of hybrid.

Kubernetes without the operational tax. Nutanix Kubernetes Platform (NKP), born from the D2iQ acquisition, prioritizes day-2 lifecycle management, policy, and consistency across environments. If you are repatriating microservices or building AI micro-stacks close to data, this reduces toil.

AI-ready from the hypervisor up. AHV supports NVIDIA GPU passthrough and vGPU, and Nutanix has published guidance and integrations for NVIDIA AI Enterprise. That means you can schedule, share, and secure GPUs for training or inference alongside classic workloads, instead of creating a special-case island.

Data services with immutability. If you bring data home, protect it. Nutanix Unified Storage (NUS) provides WORM/immutability and integrates with leading cyber-recovery vendors, giving you ransomware-resilient backups and object locks without bolt-on complexity. 

Enterprise AI without lock-in. Nutanix Enterprise AI (NAI) focuses on building and operating model services on any CNCF-certified Kubernetes (on-prem, at the edge, or in cloud) so you keep your data where it belongs while retaining choice over models and frameworks. That aligns directly with sovereignty programs in government and regulated industries.

Figure: A full-stack platform for private AI.

You get a private cloud that behaves like a public cloud where it matters, including lifecycle automation, resilience, and APIs. Under your control and jurisdiction.

Designing the landing zone

On day zero, deploy NCP as your substrate with AHV and Nutanix Unified Storage. Enable GPU pools on hosts that will run inference/training, and integrate NKP for container workloads. Attach immutable backup policies to objects and align with your chosen cyber-recovery stack. As you migrate, standardize on one identity plane and network policy model so VMs and containers are governed the same way. When you are ready to operationalize AI services closer to data, layer NAI to package and run model APIs with the same lifecycle tooling you already know.
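To illustrate the container side of that landing zone, here is a minimal sketch using the standard Kubernetes Python client to place an inference pod on a GPU pool. It assumes a CNCF-conformant cluster (which NKP provides), the NVIDIA device plugin advertising nvidia.com/gpu on GPU hosts, and placeholder names for the namespace, image, and node label.

```python
# Minimal sketch: schedule an inference pod onto a GPU-backed node pool via the
# standard Kubernetes API. Assumes a conformant cluster, the NVIDIA device
# plugin exposing "nvidia.com/gpu", and placeholder names throughout.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-inference-0", namespace="ai-serving"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        node_selector={"example.com/gpu-pool": "inference"},  # hypothetical node label
        containers=[
            client.V1Container(
                name="inference",
                image="registry.example.com/llm-serving:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1", "memory": "32Gi", "cpu": "8"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-serving", body=pod)
```

Because this is plain Kubernetes, the same manifest logic works whether the pod lands on-prem, at the edge, or in a cloud extension, which is the portability the landing zone is designed to preserve.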

The bottom line?

Repatriation is the natural correction after a decade of fast, sometimes indiscriminate, lift-and-shift, and not an anti-cloud movement. The best operators are recalibrating placement. AI turns this from a pure cost exercise into an infrastructure redesign. You can’t bring modern workloads home to a legacy room.

If you want the private side of that hybrid story without rebuilding a platform team from scratch, Nutanix is the safe choice. You get a single control plane for virtualization, storage, and Kubernetes, immutable data services for cyber-resilience, proven GPU support, and an AI stack that respects your sovereignty choices. That’s how you pay for convenience once, not forever, and how you make the next decade less about taxes and more about outcomes. 

How much is it costing you to believe that VMware or public cloud are cheaper?

Every technology leader knows this moment: the procurement team sits across the table and asks the question you’ve heard a hundred times before. “Why is this solution more expensive than we expected?”

When it comes to Nutanix, the honest answer is simple: it’s not cheap. And it shouldn’t be.
Because what you’re paying for is not just software – you’re paying for enterprise-readiness, operational simplicity, support quality, and long-term resilience. And don’t forget freedom and sovereignty.

But let’s put that into perspective.

The Myth of Cheap IT

Many IT strategies start with the illusion of saving money. The public cloud is often positioned as the easy, cost-effective way forward. The first few months even look promising: minimal upfront investments, quick provisioning, instant access to services.

But costs in the public cloud scale differently. What starts as an attractive proof of concept soon becomes a recurring nightmare in the CFO’s inbox. Networking, egress charges, storage tiers, backup, and compliance layers all stack on top of the base infrastructure. Before long, that “cheap” platform becomes one of the most expensive commitments in the entire IT budget.

We don’t have to talk in hypotheticals here. Just look at 37signals, the company behind Basecamp and HEY. Beginning in 2022, they started migrating away from Amazon Web Services (AWS) because of escalating costs. Their AWS bill had ballooned to $3.2 million annually, with $1.5 million of that just for storage. By investing $700,000 in Dell servers and $1.5 million in Pure Storage arrays, they migrated 18 petabytes of data out of AWS and completely shut down their cloud account by summer 2025. The result? Annual savings of more than $2 million, alongside full ownership and visibility into their infrastructure. For 37signals, the math was simple: public cloud had become the expensive choice.
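A quick payback calculation, using only the figures cited above and ignoring power, space, and operations costs for simplicity, shows why that math felt so simple:

```python
# Quick payback check using the figures cited above (rounded, illustrative).
aws_annual_cost = 3_200_000                 # former AWS bill per year
hardware_capex = 700_000 + 1_500_000        # Dell servers + Pure Storage arrays
annual_savings = 2_000_000                  # ">$2 million" cited annual savings

payback_months = hardware_capex / (annual_savings / 12)

print(f"Up-front investment : ${hardware_capex:,.0f}")
print(f"Annual savings      : ${annual_savings:,.0f}")
print(f"Payback period      : ~{payback_months:.0f} months")
```

Roughly thirteen months to recover the hardware investment, with the savings recurring every year after that.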

VMware customers are experiencing something similar, but in a different flavor. Broadcom’s new licensing model has transformed familiar cost structures into something far less predictable and much higher. Organizations that relied on VMware for decades now face steep renewals, mandatory bundles, and less flexibility to optimize spend.

So yes, let’s talk about “expensive”. But let’s be honest about what expensive really looks like.

Paying for Readiness

Let’s talk about Nutanix. At first glance, it may not be the cheapest option on the table. But Nutanix is built from the ground up to deliver enterprise capabilities that reduce hidden costs and avoid painful surprises.

  • What others solve with layers of tools, Nutanix delivers in a single, integrated platform. That means fewer licenses, fewer integration projects, and fewer teams chasing issues across silos.

  • The architecture distributes risk instead of concentrating it. Failures don’t cascade, operations don’t grind to a halt, and recovery doesn’t require a small army.

  • You decide the hardware, the software, and how you extend into the public cloud. That means any lock-in is intentional, and there are no forced upgrades just because a vendor decided to change the rules.

Value is the Real Differentiator

Price is always visible. It’s the line item that everyone sees. But value is often hidden in what doesn’t happen. The outages that rarely occur. The security incidents avoided. The integration projects you don’t need.

When you compare Nutanix against VMware’s new pricing or against runaway public cloud bills, the story shifts. What once looked “expensive” now feels reasonable. Because with Nutanix, you are not paying for legacy baggage or unpredictable consumption models. You are paying for a platform that runs mission-critical workloads in your sovereign environment.

The Real Cost of Cheap

There’s an old truth in enterprise IT: cheap usually ends up being the most expensive choice.
Cutting costs upfront often means sacrificing reliability, adding complexity, or creating new forms of lock-in that limit your future options. And every one of those decisions comes back later as a much bigger invoice. Sometimes in dollars, sometimes in lost trust.

Nutanix is not cheap. But it is predictable. It is proven. And it is built for organizations that cannot afford to compromise on the workloads that matter most.

Final Thought

The question is not whether Nutanix costs money; of course it does. The real question is what you get in return, and how it compares to the alternatives. Against public cloud bills spiraling out of control and VMware contracts that now feel more like ransom notes, Nutanix delivers clarity, control, sovereignty, and enterprise-grade quality.

And today, that is worth every cent.

Why Nutanix Represents the Next Chapter

For more than two decades, VMware has been the backbone of enterprise IT. It virtualized the data center, transformed the way infrastructure was consumed, and defined the operating model of an entire generation of CIOs and IT architects. That era mattered, and it brought incredible efficiency gains. But as much as VMware shaped the last chapter, the story of enterprise infrastructure is now moving on. And the real question for organizations is not “VMware or Nutanix?” but how much control they are willing to keep over their own future.

The Wrong Question

The way the conversation is often framed, Nutanix against VMware, misses the point entirely. Customers are not trying to settle a sports rivalry. They are not interested in cheering for one logo over another. What they are really trying to figure out is whether their infrastructure strategy gives them freedom or creates dependency. It is less about choosing between two vendors and more about choosing how much autonomy they retain.

VMware is still seen as the incumbent, the technology that defined stability and became the default. Nutanix is often described as the challenger. But in reality, the battleground has shifted. It is no longer about virtualization versus hyperconvergence, but about which platform offers true adaptability in a multi-cloud world.

The VMware Era – A Breakthrough That Belongs to the Past

There is no denying VMware’s historical importance. Virtualization was a revolution. It allowed enterprises to consolidate, to scale, and to rethink how applications were deployed. For a long time, VMware was synonymous with progress.

But revolutions have life cycles. Virtualization solved yesterday’s problems, and the challenges today look very different. Enterprises now face hybrid and multi-cloud realities, sovereignty concerns, and the rise of AI workloads that stretch far beyond the boundaries of a hypervisor. VMware’s empire was built for an era where the primary challenge was infrastructure efficiency. That chapter is now closing.

The Nutanix Trajectory – From HCI to a Distributed Cloud OS

Nutanix started with hyperconverged infrastructure. That much is true, but it never stopped there. Over the years, Nutanix has steadily moved towards building a distributed cloud operating system that spans on-premises data centers, public clouds, and the edge.

This evolution matters because it reframes Nutanix not as a competitor in VMware’s world, but as the shaper of a new one. Think about it: the question now is who provides the freedom to run workloads wherever they make the most sense, without being forced into a corner by contracts, licensing, or technical constraints.

The Cost of Inertia

For many customers, staying with VMware feels like the path of least resistance. There are sunk costs, existing skill sets, and the comfort of familiarity, but inertia comes at a price. The longer enterprises delay modernization, the more difficult and expensive it becomes to catch up later.

The Broadcom acquisition has accelerated this reality. Pricing changes, bundled contracts, and ecosystem lock-in are daily conversations in boardrooms. Dependency has become a strategic liability. What once felt like stability now feels like fragility.

Leverage Instead of Lock-In

This is where Nutanix changes the narrative. It is not simply offering an alternative hypervisor or another management tool. It is offering leverage – the ability to simplify operations while keeping doors open.

With Nutanix, customers can run workloads on-premises, in AWS, in Azure, in GCP, or across them all. They can adopt cloud-native services without abandoning existing investments. They can prepare for sovereignty requirements or AI infrastructure needs without being tied to a single roadmap dictated by a vendor’s financial strategy.

That is what leverage looks like. It gives CIOs and IT leaders negotiation power. It ensures that the infrastructure strategy is not dictated by one supplier’s pricing model, but by the customer’s own business needs.

The Next Chapter

VMware defined the last era of enterprise IT. It built the virtualization chapter that will always remain a cornerstone in IT history. But the next chapter is being written by Nutanix. Not because it “beat” VMware, but because it aligned itself with the challenges enterprises are facing today: autonomy, adaptability, and resilience.

This chapter is about who controls the terms of the game. And for organizations that want to stay in charge of their own destiny, Nutanix represents the next chapter.

Why Sovereign Hybrid Multi-Cloud is the Future of Cloud in Europe

When people talk about cloud computing, the conversation almost always drifts toward the hyperscalers. AWS, Azure, and Google Cloud have shaped what we consider a “cloud” today. They offer seemingly endless catalogs of services, APIs for everything, and a global footprint. So why does Nutanix call its Nutanix Cloud Platform (NCP) a private cloud, even though its catalog of IaaS and PaaS services is far more limited?

To answer that, it makes sense to go back to the roots. NIST’s SP 800-145 definition of cloud computing is still the most relevant one. According to it, five essential characteristics make something a cloud: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. NIST then defines four deployment models: public, private, community, and hybrid.

If you look at NCP through that lens, it ticks the boxes. It delivers on-demand infrastructure through Prism and APIs, it abstracts and pools compute and storage across nodes, it scales out quickly, and it gives you metering and reporting on consumption. A private cloud is about the deployment model and the operating characteristics, not about the length of the service catalog. And that’s why NCP rightfully positions itself as a private cloud platform.

Figure: Nutanix Cloud Platform hybrid multi-cloud.

At the same time, it would be wrong to assume that private clouds stop at virtual machines and storage. Modern platforms are extending their scope with built-in capabilities for container orchestration, making Kubernetes a first-class citizen for enterprises that want to modernize their applications without stitching together multiple toolchains. On top of that, AI workloads are no longer confined to the public cloud. Private clouds can now deliver integrated solutions for deploying, managing, and scaling AI and machine learning models, combining GPUs, data services, and lifecycle management in one place. This means organizations are not locked out of next-generation workloads simply because they run on private infrastructure.

A good example is what many European governments are facing right now. Imagine a national healthcare system wanting to explore generative AI to improve medical research or diagnostics. Regulatory pressure dictates that sensitive patient data must never leave national borders, let alone be processed in a global public cloud where data residency and sovereignty are unclear. By running AI services directly on top of their private cloud, with Kubernetes as the orchestration layer, they can experiment with new models, train them on local GPU resources, and still keep complete operational control. This setup allows them to comply with AI regulations, maintain full sovereignty, and at the same time benefit from the elasticity and speed of a modern cloud environment. It’s a model that not only protects sovereignty but also accelerates innovation. Innovation at a different pace, but it’s still innovation.

Now, here’s where my personal perspective comes in. I no longer believe that the hyperscalers’ stretch into the private domain – think AWS Outposts, Azure Local, or even dedicated models like Oracle’s Dedicated Region – represents the future of cloud. In continental Europe especially, I see these as exceptions rather than the rule. The reality now is that most organizations here are far more concerned with sovereignty, control, and independence than with consuming a hyperscaler’s entire catalog in a smaller, local flavor.

What I believe will be far more relevant is the rise of private clouds as the foundation of enterprise IT. Combined with a hybrid multi-cloud strategy, this opens the door to what I would call a sovereign hybrid multi-cloud architecture. The idea is simple: sovereign and sensitive workloads live in a private cloud that is under your control, built to allow quick migration and even a cloud-exit if needed. At the same time, non-critical workloads can live comfortably in a public cloud where an intentional lock-in may even make sense, because you benefit from the deep integration and services that hyperscalers do best.

And this is where the “exit” part becomes critical. Picture a regulator suddenly deciding that certain workloads containing citizen data cannot legally remain in a U.S.-owned public cloud. For an organization without a sovereign hybrid strategy, this could mean months of firefighting, emergency projects, and unplanned costs to migrate or even rebuild applications. But for those who have invested in a sovereign private cloud foundation, with portable workloads across virtual machines and containers, this becomes a controlled process. Data and apps can be moved back under national jurisdiction quickly (or to any other destination), without breaking services or putting compliance at risk. It turns a crisis into a manageable transition.

Figure: VMware sovereign cloud borders.

This two-speed model gives you the best of both worlds. Sovereignty where it matters, and scale where it helps. And it puts private cloud platforms like Nutanix NCP in a much more strategic light. They are not just a “mini AWS” or a simplified on-prem extension, but are the anchor that allows enterprises and governments to build an IT architecture with both freedom of choice and long-term resilience.

While public clouds are often seen as environments where control and sovereignty are limited, organizations can now introduce an abstraction and governance layer on top of hyperscaler infrastructure. By running workloads through this layer, whether virtual machines or containers, enterprises gain consistent security controls independent of the underlying public cloud provider, unified operations and management across private and public deployments, and workload portability that avoids deep dependency on hyperscaler-native tools. Most importantly, sovereignty is enhanced, since governance, compliance, and security frameworks remain under the organization’s control.

This architecture essentially transforms the public cloud into an extension of the sovereign environment, rather than a separate silo. It means that even when workloads reside on hyperscaler infrastructure, they can still benefit from enhanced security, governance, and operational consistency, forming the cornerstone of a true sovereign hybrid multi-cloud.

In short, the question is not whether someone like Nutanix can compete with hyperscalers on the number of services. The real question is whether organizations in Europe want to remain fully dependent on global public clouds or if they want the ability to run sovereign, portable workloads under their own control. From what I see, the latter is becoming the priority.

The Principle of Sovereignty – From the Gods to the Digital Age

Disclaimer: This article reflects my personal interpretation and understanding of the ideas presented by Jordan B. Peterson in his lecture Introduction to the Idea of God. It is based on my own assumptions, reflections, and independent research, and does not claim to represent any theological or academic authority. My intention is not to discuss religion, but to explore an intellectual and philosophical angle on the concept of sovereignty and how it connects to questions of responsibility, order, and autonomy in the digital age.

The views expressed here are entirely my own and written in the spirit of curiosity, personal development, and open dialogue.

For most of modern history, sovereignty was a word reserved for kings, nations, and divine powers. Today, it has migrated into the digital realm. Governments speak of data sovereignty, organisations demand cloud sovereignty, and regulators debate digital autonomy as if it were a new invention. Yet the word itself carries a much deeper heritage.

I have always been drawn to content that challenges me to think deeper. To develop myself, to learn, and to become a more decent human being. That is why I often watch lectures and interviews from thinkers like Jordan B. Peterson. His lecture Introduction to the Idea of God recently caught my attention, not because of its theological angle, but because of how he describes sovereignty. Not as external power, but as an inner principle – the ability to rule responsibly within a domain while remaining subordinate to higher laws and moral order.

That insight provides an unexpected bridge into the digital age. It suggests that sovereignty, whether human or technological, is never just about control. It is about responsible dominion, the ability to govern a domain while remaining answerable to the principles that make such governance legitimate.

When we speak about digital sovereignty, we are therefore not talking about ownership alone. We are talking about whether our digital systems reflect that same hierarchy of responsibility and whether power remains accountable to the principles it claims to serve.

The ancient logic of rule

Long before the language of “data localization” and “cloud independence”, ancient myths wrestled with the same question: who rules, and under what law?

Peterson revisits the Mesopotamian story of Marduk, the god who slays the chaos-monster Tiamat and creates the ordered world from her remains. Marduk becomes king of the gods not because he seizes power, but because he sees clearly (his eyes encircle his head) and speaks truth (his voice creates order). Sovereignty, in this reading, is vision plus articulation, which means that perception and speech transform chaos into structure.

Figure: How Marduk became king of the ancient Babylonian gods. Source: https://mesopotamia.mrdonn.org/marduk.html

That principle later reappears in the Hebrew conception of God as a lawgiver above kings. Even the ruler is subordinate to the law, and even the sovereign must bow to the principle of sovereignty itself. This inversion, that the highest power is bound by what is highest, became the cornerstone of Western political thought and the invisible moral logic behind constitutional democracies.

When power forgets that it is subordinate, it turns into tyranny. When it forgets its responsibility, chaos returns.

From political sovereignty to digital power

Fast-forward to the 21st century. We have entered a world where data centers replace palaces and algorithms rival parliaments. Our digital infrastructures have become the new seat of sovereignty. Whoever controls data, compute, and AI, controls the rhythm of economies and the tempo of societies.

Yet, as Peterson would put it, the psychological pattern remains the same. The temptation for absolute rule, for technical sovereignty without moral subordination, is as old as humanity. When a cloud provider holds the keys to a nation’s data but owes its allegiance to no higher principle than profit, sovereignty decays into dominance. When a government seeks total control over data flows without respecting personal freedom, sovereignty mutates into surveillance.

Therefore, digital sovereignty cannot be achieved by isolation alone. It requires a principle above control and a framework of responsibility, transparency, and trust. Without that, even the most sovereign infrastructure becomes a gilded cage.

Sovereignty as responsibility, not control

Peterson defines sovereignty as rule under law, not rule above it. In the biblical imagination, even God’s power is bound by his word – the principle that creates and limits at the same time. That is the paradox of legitimate sovereignty: its strength comes from restraint.

Applied to technology, this means that a sovereign cloud is sovereign because it binds itself to principle. It protects data privacy, enforces jurisdictional control, and allows auditability not because regulation demands it, but because integrity does.

The European vision of digital sovereignty, from the EU’s Data Act to Gaia-X, is, at its core, an attempt to recreate that ancient alignment between power and rule. It is a collective recognition that technical autonomy without ethical structure leads nowhere. Sovereign infrastructure is not an end in itself but a container for trust.

The danger of absolute sovereignty

Peterson warns that when sovereignty becomes absolute, when a ruler or system recognises no law above itself, catastrophe follows. He points to 20th-century totalitarian regimes where ideology replaced principle and individuals were crushed under the weight of unbounded power.

Our digital landscape carries similar risks. The hyperscaler model, while enabling global innovation, also consolidates an unprecedented concentration of authority. When three or four companies decide how data is stored, how AI models are trained, and which APIs define interoperability, sovereignty becomes fragile. We exchange the chaos of fragmentation for the order of dependence.

Likewise, when nations pursue “data nationalism” without transparency or interoperability, they may protect sovereignty in name but destroy it in spirit. The closed sovereign cloud is the modern equivalent of the paranoid king. Safe behind walls, but blind to the world beyond them.

True sovereignty, Peterson reminds us, requires both vision and speech. You must see beyond your borders and articulate the principles that guide your rule.

Seeing and speaking truth

In Peterson’s cosmology, the sovereign creates order by confronting chaos directly: by seeing clearly and speaking truthfully. That process translates elegantly into the digital domain.

To build responsible digital systems, we must first see where our dependencies lie: in APIs, in hardware supply chains, in foreign jurisdictions, in proprietary formats. Sovereignty begins with visibility, with mapping the landscape of control.

Then comes the speech, the articulation of principle. What do we value? Data integrity? Interoperability? Legal transparency? Without speaking those truths, sovereignty becomes mute. The process of defining digital sovereignty is therefore not only political or technical but also linguistic and moral. It requires the courage to define what the higher principle actually is.

In Europe, this articulation is still evolving. The Gaia-X framework, the Swiss Government Cloud initiative, or France’s “Cloud de Confiance” are all attempts to speak sovereignty into existence by declaring that our infrastructure must reflect our societal values, not merely technical efficiency.

The individual at the root of sovereignty

One of Peterson’s most subtle points is that sovereignty ultimately begins in the individual. You cannot have a sovereign nation composed of individuals who refuse responsibility. Likewise, you cannot have a sovereign digital ecosystem built by actors who treat compliance as theatre and trust as a marketing term.

Every architect, policymaker, and operator holds a fragment of sovereignty. The question is whether they act as tyrants within their domain or as stewards under a higher rule. A cloud platform becomes sovereign when its people behave sovereignly and when they balance autonomy with discipline, ownership with humility.

In that sense, digital sovereignty is a psychological project as much as a technical one. It demands integrity, courage, and self-limitation at every level of the stack.

From divine order to digital order

The story of sovereignty, from the earliest myths to the latest regulations, is the same story told in different languages. It is the human struggle to balance power and principle, freedom and responsibility, chaos and order.

Peterson’s interpretation of sovereignty – the ruler who sees and speaks, the individual who stands upright before the highest good – offers a mirror for our technological age. The digital world is the new frontier of chaos. Our infrastructures are its architecture of order. Whether that order remains free depends on whether we remember the ancient rule: even the sovereign must be subordinate to sovereignty itself.

A sovereign cloud, therefore, is not the digital expression of nationalism, but the continuation of an older and nobler tradition. To govern power with conscience, to build systems that serve rather than dominate, to make technology reflect the values of the societies it empowers.

The true measure of sovereignty, digital or divine, lies not in the scope of control but in the depth of responsibility.

It’s Time to Rethink Your Private Cloud Strategy

For over a decade, VMware has been the foundation of enterprise IT. Virtualization was almost synonymous with VMware, and entire operating models were built around it. But every era of technology eventually reaches a turning point. With vSphere 8 approaching its End of General Support in October 2027, followed by the End of Technical Guidance in 2029, customers will most likely be asked to commit to VMware Cloud Foundation (VCF) 9.x and beyond.

On paper, this may look like just another upgrade cycle, but in reality, it forces every CIO and IT leader to pause and ask the harder questions: How much control do we still have? How much flexibility remains? And do we have the freedom to define our own future?

Why This Moment Feels Different

Enterprises are not new to change. Platforms evolve, vendors shift focus, and pricing structures come and go. Normally, these transitions are gradual, with plenty of time to adapt.

What feels different today is the depth of dependency on VMware. Many organizations built their entire data center strategy on one assumption: VMware is the safe choice. VMware became the backbone of operations, the standard on which teams, processes, and certifications were built.

CIOs realize the “safe choice” is no longer guaranteed to be the most secure or sustainable. Instead of incremental adjustments, they face fundamental questions: Do we want to double down, or do we want to rebalance our dependencies?

Time Is Shorter Than It Looks

2027 may sound far away, but IT leaders know that large infrastructure decisions take years. A realistic migration journey involves:

  • Evaluation & Strategy (6 to 12 months) – Assessing alternatives, validating requirements, building a business case.

  • Proof of Concept & Pilots (6 to 12 months) – Testing technology, ensuring integration, training staff.

  • Procurement & Budgeting (3 to 9 months) – Aligning financial approvals, negotiating contracts, securing resources.

  • Migration & Adoption (12 to 24 months) – Moving workloads, stabilizing operations, decommissioning legacy systems.

Put these together, and the timeline shrinks quickly. The real risk is not the change itself, but running out of time to make that change on your terms.
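A small sketch makes the arithmetic explicit. Phases overlap in practice, so treat the sequential sums as rough bounds rather than a plan:

```python
# Stacking the phase estimates above (in months) against the October 2027
# End of General Support date. Phases overlap in practice, so these sums are
# bounds, not a schedule.
from datetime import date

phases = {
    "Evaluation & Strategy":     (6, 12),
    "Proof of Concept & Pilots": (6, 12),
    "Procurement & Budgeting":   (3, 9),
    "Migration & Adoption":      (12, 24),
}

best_case = sum(lo for lo, hi in phases.values())    # 27 months
worst_case = sum(hi for lo, hi in phases.values())   # 57 months

deadline = date(2027, 10, 1)
today = date.today()
months_left = (deadline.year - today.year) * 12 + (deadline.month - today.month)

print(f"Sequential best case : {best_case} months")
print(f"Sequential worst case: {worst_case} months")
print(f"Months until vSphere 8 End of General Support: {months_left}")
```

Even the best case consumes most of the runway, which is exactly why waiting to start the evaluation is itself a decision.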

The Pricing Question You Can’t Ignore

Now imagine this scenario:

The list price for VMware Cloud Foundation today sits around $350 per core per year. Let’s say Broadcom adjusts it by +20%, raising it to $420 this year. Then, two years later, just before your next renewal, it increases again to $500 per core per year.
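To make the scenario concrete, here is a quick calculation for a hypothetical 2,048-core estate; the core count and the price points are assumptions for illustration only:

```python
# Illustrative annual cost of the pricing scenario above for a mid-sized estate.
# The core count and price points are assumptions for this example only.
cores = 2_048  # e.g. 32 dual-socket hosts with 32 cores per socket

price_per_core_year = {
    "today": 350,
    "after a +20% adjustment": 420,
    "at the next renewal": 500,
}

for label, price in price_per_core_year.items():
    annual = cores * price
    print(f"{label:<25} ${price}/core/year -> ${annual:,.0f} per year")
```

In this example the same estate moves from roughly $717,000 to over $1 million per year without a single new workload being deployed.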

Would your situation and thoughts change?

For many enterprises, this is not a theoretical question. Cost predictability is part of operational stability. If your infrastructure platform becomes a recurring cost variable, every budgeting cycle turns into a crisis of confidence.

When platforms evolve faster than budgets, even loyal customers start re-evaluating their dependency. The total cost of ownership is no longer just about what you pay for the software; it is about what it costs to stay locked in.

And this is where strategic foresight matters most: Do you plan your next three years assuming stability, or do you prepare for volatility?

The Crossroads – vSphere 9 or VCF 9

In the short term, many customers will take the most pragmatic route. They upgrade to vSphere 9 to buy time. It’s the logical next step, preserving compatibility and delaying a bigger architectural decision.

But this path comes with an expiration date. Broadcom’s strategic focus is clear: the future of VMware is VCF 9. Over time, standalone vSphere environments will likely receive less development focus and fewer feature innovations. Eventually, organizations will be encouraged, if not forced, to adopt the integrated VCF model, because standalone vSphere or VMware vSphere Foundation (VVF) is going to be more expensive than VMware Cloud Foundation.

For some, this convergence will simplify operations. For others, it will mean even higher costs, reduced flexibility, and tighter coupling with VMware’s lifecycle.

This is the true decision point. Staying on vSphere 9 buys time, but it doesn’t buy independence (think about sovereignty too!). It’s a pause, not a pivot. Sooner or later, every organization will have to decide:

  • Commit fully to VMware Cloud Foundation and accept the new model, or

  • Diversify and build flexibility with platforms that maintain open integration and operational control

Preparing for the Next Decade

The next decade will reshape enterprise IT. Whether through AI adoption, sovereign cloud requirements, or sustainability mandates, infrastructure decisions will have long-lasting impact.

The question is not whether VMware remains relevant – it will. The question is whether your organization wants to let VMware’s roadmap dictate your future.

This moment should be viewed not as a threat but as an opportunity. It’s a chance (again) to reassess dependencies, diversify, and secure true autonomy. For CIOs, the VMware shift is less about technology and more about leadership.

Yes, it’s about ensuring that your infrastructure strategy aligns with your long-term vision, not just with a vendor’s plan.