What If Cloud Was Never the Destination But Just One Chapter In A Longer Journey

For more than a decade, IT strategies were shaped by a powerful promise that the public cloud was the final destination. Enterprises were told that everything would eventually run there, that the data center would become obsolete, and that the only rational strategy was “cloud-first”. For a time, this narrative worked. It created clarity in a complex world and provided decision-makers with a guiding principle.

Hyperscalers accelerated digital transformation in ways no one else could have. Without their scale and speed, the last decade of IT modernization would have looked very different. But what worked as a catalyst does not automatically define the long-term architecture.

But what if that narrative was never entirely true? What if the cloud was not the destination at all, but only a chapter? A critical accelerator in the broader evolution of enterprise infrastructure? The growing evidence suggests exactly that. Today, we are seeing the limits of mono-cloud thinking and the emergence of something new. A shift towards adaptive platforms that prioritize autonomy over location.

The Rise and Fall of Mono-Cloud Thinking

The first wave of cloud adoption was almost euphoric. Moving everything into a single public cloud seemed not just efficient but inevitable. Infrastructure management became simpler, procurement cycles shorter, and time-to-market faster. For CIOs under pressure to modernize, the benefits were immediate and tangible.

Yet over time, the cost savings that once justified the shift started to erode. What initially looked like operational efficiency turned into long-term operating expenses that grew relentlessly with scale. Data gravity added another layer of friction: while applications were easy to deploy, the vast datasets they relied on were not as mobile. And then came the growing emphasis on sovereignty and compliance. Governments and regulators – as well as citizens and journalists – started asking difficult questions about who ultimately controlled the data and under what jurisdiction.

These realities did not erase the value of the public cloud, but they reframed it. Mono-cloud strategies, while powerful in their early days, increasingly appeared too rigid, too costly, and too dependent on external factors beyond the control of the enterprise.

Multi-Cloud as a Halfway Step

In response, many organizations turned to multi-cloud. If one provider created lock-in, why not distribute workloads across two or three? The reasoning was logical. Diversify risk, improve resilience, and gain leverage in vendor negotiations.

But as the theory met reality, the complexity of multi-cloud began to outweigh its promises. Each cloud provider came with its own set of tools, APIs, and management layers, creating operational fragmentation rather than simplification. Policies around security and compliance became harder to enforce consistently. And the cost of expertise rose dramatically, as teams were suddenly required to master multiple environments instead of one.

Multi-cloud, in practice, became less of a strategy and more of a compromise. It revealed the desire for autonomy, but without providing the mechanisms to truly achieve it. What emerged was not freedom, but another form of dependency. This time, on the ability of teams to stitch together disparate environments at great cost and complexity.

The Adaptive Platform Hypothesis

If mono-cloud was too rigid and multi-cloud too fragmented, then what comes next? The hypothesis that is now emerging is that the future will be defined not by a place – cloud, on-premises, or edge – but by the adaptability of the platform that connects them.

Adaptive platforms are designed to eliminate friction, allowing workloads to move freely when circumstances change. They bring compute to the data rather than forcing data to move to compute, which becomes especially critical in the age of AI. They make sovereignty and compliance part of the design rather than an afterthought, ensuring that regulatory shifts do not force expensive architectural overhauls. And most importantly, they allow enterprises to retain operational autonomy even as vendors merge, licensing models change, or new technologies emerge.
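
To make this less abstract, here is a minimal sketch in Python – with entirely hypothetical workload attributes and target names – of what it means when placement becomes a policy decision driven by data gravity, sovereignty, and elasticity rather than a fixed location:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_location: str          # where the bulk of the data lives, e.g. "on-prem-eu"
    sovereignty_required: bool  # must stay under local jurisdiction
    bursty: bool                # elastic, short-lived capacity needs

# Hypothetical target environments an adaptive platform could schedule onto.
TARGETS = ["on-prem-eu", "sovereign-cloud-eu", "public-cloud"]

def choose_target(w: Workload) -> str:
    """Placement by policy, not by habit: data gravity and sovereignty first,
    elasticity second, public cloud as the default for everything else."""
    if w.sovereignty_required:
        # bring compute to the regulated data instead of moving the data out
        return w.data_location if w.data_location in TARGETS else "sovereign-cloud-eu"
    if w.bursty:
        return "public-cloud"
    return w.data_location

print(choose_target(Workload("ai-training", "on-prem-eu", True, False)))       # on-prem-eu
print(choose_target(Workload("marketing-site", "public-cloud", False, True)))  # public-cloud
```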

This idea reframes the conversation entirely. Instead of asking where workloads should run, the more relevant question becomes how quickly and easily they can be moved, scaled, and adapted. Autonomy, not location, becomes the decisive metric of success.

Autonomy as the New Metric?

The story of the cloud is not over, but the chapter of cloud as a final destination is closing. The public cloud was never the endpoint; it was a powerful catalyst that changed how we think about IT consumption. The next stage is already being written, and it is less about destinations than about options.

Certain workloads will always thrive in a hyperscale cloud – think collaboration tools, globally distributed apps, or burst capacity. Others, especially those tied to sovereignty, compliance, or AI data proximity, demand a different approach. Adaptive platforms are emerging to fill that gap.

Enterprises that build for autonomy will be better positioned to navigate an unpredictable future. They will be able to shift workloads without fear of vendor lock-in, place AI infrastructure close to where data resides, and comply with sovereignty requirements without slowing down innovation.

The emerging truth is simple: Cloud was never the destination. It was only one chapter in a much longer journey. The next chapter belongs to adaptive platforms and to organizations bold enough to design for freedom rather than dependency.

Moving into Any Cloud Is Easy. Leaving Is the Hard Part

For more than a decade, the industry has been focused on one direction. Yes, into the cloud. Migration projects, cloud-first strategies, and transformation initiatives all pointed the way toward a future where workloads would move out of data centers and into public platforms. Success was measured in adoption speed and the number of applications migrated. Very few people stopped to ask a more uncomfortable question: What if one day we needed to move out again?

This question, long treated as hypothetical, has now become a real consideration for many organizations. Cloud exit strategies, once discussed only at the margins of risk assessments, are entering boardroom conversations. They are no longer about distrust or resistance to cloud services, but about preparedness and strategic flexibility.

Part of the challenge is perception. In the early years, the cloud was often viewed as a one-way street. Once workloads were migrated, it was assumed they would stay there indefinitely. The benefits were obvious (agility, global reach, elastic scale, and a steady stream of innovation). Under such conditions, why would anyone think about leaving? But reality is rarely that simple. Over time, enterprises discovered that circumstances change. Costs, which in the beginning looked predictable, began to rise, especially for workloads that run continuously. Regulations evolved, sometimes requiring that data be handled differently or stored in new ways. Geopolitical factors entered the discussion, adding new dimensions of risk and dependency. What once felt like a permanent destination started to look more like another stop in a longer journey.

Exiting the cloud, however, is rarely straightforward. Workloads are not just applications; they are deeply tied to the data they use. Moving terabytes or petabytes across environments is slow, expensive, and operationally challenging. The same is true for integrations. Applications are connected to identity systems, monitoring frameworks, CI/CD pipelines, and third-party APIs. Each of these dependencies creates another anchor that makes relocation harder. Licensing and contracts add another layer of complexity, where the economics or even the legal terms of use can discourage or delay migration. And finally, there are the human and process elements. Teams adapt their ways of working to a given platform, build automation around its services, and shape their daily operations accordingly. Changing environments means changing habits, retraining staff, and, in some cases, restructuring teams.

Despite these obstacles, exit strategies are becoming more important. Rising costs are one reason, particularly for predictable workloads, where running them elsewhere might be more economical. Compliance and sovereignty requirements are another. New rules can suddenly make a deployment non-compliant, forcing organizations to rethink their choices. A third driver is the need for strategic flexibility. Many leaders want to ensure they are not overly dependent on a single provider or operating model. Having the ability to relocate workloads when circumstances demand it has become a necessity.

This is why exit strategies should be seen less as a technical exercise and more as a strategic discipline. The goal is not to duplicate everything or keep environments constantly synchronized, which would be wasteful and unrealistic. Instead, the goal is to maintain options. Options to repatriate workloads when economics dictate, options to move when compliance requires, and options to expand when innovation opportunities emerge. The best exit strategies are not documents that sit on a shelf. They are capabilities built into the way an enterprise designs, operates, and governs its IT landscape.

History in IT shows why this matters. Mainframes, proprietary UNIX systems and even some early virtualization platforms all created situations of deep dependency. At the time, those technologies delivered enormous value. But eventually, organizations needed to evolve and often found themselves constrained. The lesson is not to avoid new technologies, but to adopt them with foresight, knowing that change is inevitable. Exit strategies are part of that foresight.

Looking ahead, enterprises can prepare by building in certain principles. Workloads that are critical to the business should be designed with portability in mind, even if not every application needs that level of flexibility. Data should be separated from compute wherever possible, because data gravity is one of the biggest barriers to mobility. And governance should be consistent across environments, so that compliance, security, and cost management follow workloads rather than being tied to a single location. These principles do not mean abandoning the cloud or holding it at arm’s length. On the contrary, they make the cloud more sustainable as a strategic choice.
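
As a rough illustration of those principles – and nothing more than that – the sketch below separates the data reference from the workload definition and lets a governance policy travel with the workload. All names and fields are invented for this example:

```python
from dataclasses import dataclass, field

@dataclass
class DataSet:
    name: str
    location: str              # data stays addressable on its own, e.g. "eu-dc-1"

@dataclass
class GovernancePolicy:
    encryption_at_rest: bool = True
    allowed_jurisdictions: tuple = ("EU",)
    max_monthly_cost: int = 10_000      # illustrative cost guardrail

@dataclass
class WorkloadSpec:
    name: str
    image: str                 # container image, independent of any provider
    data: DataSet              # referenced, not embedded: data gravity stays explicit
    policy: GovernancePolicy = field(default_factory=GovernancePolicy)

def can_run_in(spec: WorkloadSpec, jurisdiction: str) -> bool:
    """Governance travels with the workload instead of living in one cloud's console."""
    return jurisdiction in spec.policy.allowed_jurisdictions

billing = WorkloadSpec("billing", "registry.example.com/billing:1.4",
                       DataSet("invoices", "eu-dc-1"))
print(can_run_in(billing, "EU"))   # True
print(can_run_in(billing, "US"))   # False
```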

Cloud services will continue to play a central role in modern IT. The benefits are well understood, and the pace of innovation will ensure that they remain attractive. But adaptability has become just as important as adoption. Having an exit strategy is not a sign of mistrust. It is a recognition that circumstances can change, and that organizations should be prepared. In the end, the key question is no longer only how fast you can move into the cloud, but also how easily you can move out again if you ever need to. And this includes the private cloud as well.

Can a Unified Multi-Cloud Inventory Transform Cloud Management?

When we spread our workloads across clouds like Oracle Cloud, AWS, Azure, Google Cloud, maybe even IBM, or smaller niche players, we knowingly accept complexity. Each cloud speaks its own language, offers its own services, and maintains its own console. What if there were a central place where we could see everything: every resource, every relationship, across every cloud? A place that lets us truly understand how our distributed architecture lives and breathes?

I find myself wondering if we could one day explore a tool or approach that functions as a multi-cloud inventory, keeping track of every VM, container, database, and permission – regardless of the platform. Not because it’s a must-have today, but because the idea sparks curiosity: what would it mean for cloud governance, cost transparency, and risk reduction if we had this true single pane of glass?
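
Purely as a thought experiment, here is what a normalized inventory record and a provider-agnostic collector could look like. The adapter shown is a stand-in; a real one would wrap each provider's actual SDK:

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass(frozen=True)
class Resource:
    provider: str     # "aws", "azure", "gcp", "oci", ...
    kind: str         # "vm", "container", "database", "permission", ...
    resource_id: str  # the provider-native identifier
    region: str
    tags: tuple       # normalized (key, value) pairs

class FakeAwsAdapter:
    """Stand-in for a per-cloud collector that would wrap the provider's SDK."""
    def list_resources(self):
        yield Resource("aws", "bucket", "arn:aws:s3:::logs", "eu-central-1",
                       (("env", "prod"),))

def collect_inventory(adapters: Iterable) -> List[Resource]:
    """One shared model, filled by one adapter per cloud."""
    inventory: List[Resource] = []
    for adapter in adapters:
        inventory.extend(adapter.list_resources())
    return inventory

print(collect_inventory([FakeAwsAdapter()]))
```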

Who feels triggered now because I said “single pane of glass”? 😀 Let’s move on!

Could a Multi-Cloud Command Center Change How We Visualize Our Environment?

Let’s imagine it: a clean interface, showing not just lists of resources, but the relationships between them. Network flows across cloud boundaries. Shared secrets between apps on “cloud A” and databases on “cloud B”. Authentication tokens moving between clouds.

What excites me here isn’t the dashboard itself, but the possibility of visualizing the hidden links across clouds. Instead of troubleshooting blindly, or juggling a dozen consoles, we could zoom out for a bird’s-eye view. Seeing in one place how data and services crisscross providers.
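
A tiny sketch of that idea, using nothing but a dictionary as the graph – the resource names and relationships are made up, but they show how the hidden cross-cloud links could be modeled and traversed:

```python
from collections import defaultdict

# Directed edges: "this resource talks to / depends on that resource".
# Node names are hypothetical, in "provider:kind:name" form.
edges = [
    ("aws:app:frontend",          "azure:auth:token-service"),
    ("aws:app:frontend",          "oci:db:orders"),
    ("azure:auth:token-service",  "aws:secrets:shared-signing-key"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def dependencies(node: str) -> set:
    """Everything a resource reaches, across cloud boundaries."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(dependencies("aws:app:frontend"))  # the auth service, the database, the shared secret
```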

Multi-Cloud Insights

I don’t know if we’ll get there anytime soon (or if such a solution already exists), but exploring the idea of a unified multi-cloud visualization tool feels like an adventure worth considering.

Multi-Cloud Search and Insights

When something breaks, when we are chasing a misconfiguration, or when we want to understand where we might be exposed, it often starts with a question: Where is this resource? Where is that permission open?

What if we could type that question once and get instant answers across clouds? A global search bar that could return every unencrypted public bucket or every server with a certain tag, no matter which provider it’s on.

Multi-Cloud Graph Query

Wouldn’t it be interesting if that search also showed contextual information: connected resources, compliance violations, or cost impact? It’s a thought I keep returning to because the journey toward proactive multi-cloud operations might start with simple, unified answers.
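
Assuming a normalized inventory like the one sketched earlier, such a “global search” could boil down to a single filter over one data set. The attribute names here are hypothetical:

```python
def find_exposed_buckets(inventory):
    """One question, asked once, answered across every provider."""
    return [r for r in inventory
            if r["kind"] == "bucket"
            and r["attributes"].get("public")
            and not r["attributes"].get("encrypted")]

inventory = [
    {"provider": "aws", "kind": "bucket", "id": "logs",
     "attributes": {"public": True, "encrypted": False}},
    {"provider": "gcp", "kind": "bucket", "id": "exports",
     "attributes": {"public": False, "encrypted": True}},
]
print(find_exposed_buckets(inventory))   # only the AWS "logs" bucket matches
```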

Could a True Multi-Cloud App Require This Kind of Unified Lens?

Some teams are already building apps that stretch across clouds: an API front-end in one provider, authentication in another, ML workloads on specialized platforms, and data lakes somewhere else entirely. These aren’t cloud-agnostic apps, they are “cloud-diverse” apps. Purpose-built to exploit best-of-breed services from different providers.

That makes me wonder: if an app inherently depends on multiple clouds, doesn’t it deserve a control plane that’s just as distributed? Something that understands the unique role each cloud plays, and how they interact, in one coherent operational picture?

I don’t have a clear answer, but I can’t help thinking about how multi-cloud-native apps might need true multi-cloud-native management.

VMware Aria Hub and Graph – Was It a Glimpse of the Future?

Not so long ago, VMware introduced Aria Hub and Aria Graph with an ambitious promise: a single place to collect and normalize resource data from all major clouds, connect it into a unified graph, and give operators a true multi-cloud inventory and control plane. It was one of the first serious attempts to address the challenge of understanding relationships between cloud resources spread across different providers.

VMware Aria Hub Dashboard

The idea resonated with anyone who has struggled to map sprawling cloud estates or enforce consistent governance policies in a multi-cloud world. A central graph of every resource, dependency, and configuration sounded like a game-changer. Not only for visualization, but also for powerful queries, security insights, and cost management.

But when Broadcom acquired VMware, they shifted focus away from VMware’s SaaS portfolio. Many SaaS-based offerings were sunset or sidelined, including Aria Hub and Aria Graph, effectively burying the vision of a unified multi-cloud inventory platform along with them.

I still wonder: did VMware Aria Hub and Graph show us a glimpse of what multi-cloud operations could look like if we dared to standardize resource relationships across clouds? Or did it simply arrive before its time, in an industry not yet ready to embrace such a radical approach?

Either way, it makes me even more curious about whether we might one day revisit this idea and how much value a unified resource graph could unlock in a world where multi-cloud complexity continues to grow.

Final Thoughts

I don’t think there’s a definitive answer yet to whether we need a unified multi-cloud inventory or command center today. Some organizations already have mature processes and tooling that work well enough, even if they are built on scripts, spreadsheets, or point solutions glued together. But as multi-cloud strategies evolve, and as more teams start building apps that intentionally spread across multiple providers, I find myself increasingly curious about whether we will see renewed demand for a shared data model of our entire cloud footprint.

Because with each new cloud we adopt, complexity grows exponentially. Our assets scatter, our identities and permissions multiply, and our ability to keep track of everything by memory or siloed dashboards fades. Even something simple, like understanding “what resources talk to this database?” becomes a detective story across clouds.
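
If that footprint lived in one graph, the detective story could become a simple reverse lookup. Again, only a sketch with invented resource names:

```python
def talks_to(graph: dict, target: str) -> set:
    """Reverse lookup: every resource with a path to `target`.
    `graph` maps a resource to the set of resources it calls."""
    callers = set()
    changed = True
    while changed:
        changed = False
        for src, dsts in graph.items():
            if src not in callers and (target in dsts or dsts & callers):
                callers.add(src)
                changed = True
    return callers

graph = {
    "aws:lambda:invoice-export": {"oci:db:orders"},
    "azure:app:shop-frontend":   {"aws:lambda:invoice-export"},
    "gcp:job:nightly-report":    {"oci:db:orders"},
}
print(talks_to(graph, "oci:db:orders"))  # all three callers, directly or indirectly
```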

A solution that offers unified visibility, context, and even policy controls feels almost inevitable if multi-cloud architectures continue to accelerate. And yet, I’m also aware of how hard this problem is to solve. Each cloud provider evolves quickly, their APIs change, and mapping their semantics into a single, consistent model is an enormous challenge.

That’s why, for now, I see this more as a hypothesis. An idea to keep exploring rather than a clear requirement. I’m fascinated by the thought of what a central multi-cloud “graph” could unlock: faster investigations, smarter automation, tighter security, and perhaps a simpler way to make sense of our expanding environments.

Whether we build it ourselves, wait for a vendor to try again, or discover a new way to approach the problem, I’m eager to see how the industry experiments with this space in the years ahead. Because in the end, the more curious we stay, the better prepared we’ll be when the time comes to simplify the complexity we’ve created.

Open Source in the Cloud Era – Still Free, but Never Cheap?

This article continues the conversation started in “Open source can help with portability and lock-in – but it is not a silver bullet”, where we explored how open source technologies can reduce cloud lock-in, but aren’t a universal fix. Now we go one step further.

Open source software (OSS) is the unsung hero behind much of the innovation we see in the cloud today. From container runtimes powering serverless workloads to the databases running mission-critical apps, OSS is everywhere. But now the question arises: how do we make open source sustainable and what role do the cloud providers play?

Some say the hyperscalers are the villains in this story. I see it differently.

I believe the major cloud platforms – including AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure (OCI) – are not undermining open source. On the contrary, they are expanding its reach, accelerating its maturity, and making it more accessible than ever before.

Open Source Is the Backbone of the Cloud

The most exciting thing about cloud platforms today is how accessible open source technology has become. Technologies like Kubernetes, Prometheus, MySQL, Redis, and Postgres are no longer just community-maintained stacks. They are global services delivered with enterprise reliability. What hyperscalers such as AWS, Azure, and Oracle Cloud have done is operationalize these tools at scale, offering managed services that developers trust – without having to care about patching, high availability, or backups. The result is remarkable: global systems running OSS as a service.

In other words, hyperscalers have turned OSS into mainstream infrastructure. That should not be understated.

Running Open Source at Scale Is Hard (And Expensive)

Yes, open source is free to use. But it’s not free to run.

Anyone can deploy an open source application. Running it at scale, though? That’s a different story. It takes discipline, expertise, and relentless operational focus:

  • high availability setups,
  • automatic failover,
  • performance tuning,
  • deep telemetry,
  • continuous patching,
  • secure configurations,
  • IAM integration,
  • versioning strategy,
  • backup orchestration,
  • and regular upgrades.

They are day-to-day realities for teams operating at scale.

That’s why managed services from hyperscalers exist and why they are so widely adopted. Platforms like Amazon RDS, Azure Database for PostgreSQL, Google Cloud Memorystore, or Oracle MySQL HeatWave take the core of a powerful open source engine and remove the heavy lifting. You are not just getting hosted software; you are getting resilience, automation, and accountability.
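
For illustration, this is roughly what the consumer side of such a managed open source engine looks like – here Amazon RDS for PostgreSQL via boto3, with placeholder identifiers and credentials. The interesting part is everything you do not have to write: failover, patching, and backup plumbing.

```python
import boto3

# A sketch of consuming a managed OSS engine; identifiers are placeholders.
rds = boto3.client("rds", region_name="eu-central-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",                 # the open source engine underneath
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="app",
    MasterUserPassword="change-me",    # use a secrets manager in real life
    MultiAZ=True,                      # HA and failover handled by the service
    BackupRetentionPeriod=7,           # backups handled by the service
    StorageEncrypted=True,
)
```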

When you consume Google’s GKE or Oracle Kubernetes Engine (OKE), you are effectively outsourcing operations. You gain predictability and uptime without building a 24/7 SRE team. That’s not lock-in. It’s operational leverage!

Hyperscalers aren’t restricting choice. They are offering a second path. One designed for teams that need focus, speed, and as little downtime as possible.

A Fair Critique – OSS Creators Left Behind?

Of course, there’s another side to this story. One that deserves attention.

Some open source creators and maintainers feel left behind in this cloud-powered success story. Their argument is simple: hyperscalers are monetizing open source projects at massive scale, often without contributing back in proportion – either in engineering resources, funding, or visibility.

And they have a point. Popular tools like MongoDB, Redis, and Elasticsearch were widely adopted, then productized by cloud platforms without formal partnerships. As a response, these projects changed their licenses to restrict commercial use by cloud providers. That, in turn, led to forks like OpenSearch (from Elasticsearch), Valkey (from Redis), or OpenTofu (from Terraform).

But this isn’t really a cloud problem, it’s an economic problem.

Open source used to be a side project or a contribution model. Today, it powers mission-critical infrastructure. That shift from volunteer-based innovation to always-on enterprise backbone created a funding gap. It’s no longer enough to push code to GitHub and wait for donations. Projects need full-time maintainers, security audits, documentation, roadmap planning, and long-term governance. That requires sustainable business models.

Cloud providers, on the other hand, rely on open source for customer value and velocity. Innovation doesn’t just come from inside hyperscaler walls, it flows in from the OSS community as well. The relationship is symbiotic. And it must evolve.

Yes, cloud vendors benefit from open ecosystems. But many are starting to give back – through engineering contributions, visibility programs, upstream engagement, and community funding. Oracle, for example, contributes to OpenJDK, GraalVM, and Helidon, and backs Linux Foundation efforts. Microsoft sponsors maintainers through GitHub Sponsors and supports dozens of OSS projects. Even AWS, long seen as an outsider, is now actively involved in maintaining forks like OpenSearch.

The path forward isn’t about choosing sides. It’s about redefining the balance: between freedom and funding, between platform and project. OSS maintainers need economic models that work. Hyperscalers need the trust and innovation open source brings. Everyone benefits when the relationship is healthy. Right?

Cloud and Open Source – Not a Rivalry, But a Partnership

The old “cloud versus open source” debate is no longer useful, because it no longer reflects reality.

We are not watching a rivalry unfold. We are witnessing mutual acceleration. Open source is the engine that drives much of today’s cloud innovation. And cloud platforms are the distribution channels that scale it to the world. One without the other? Still powerful, but far less impactful.

Today’s enterprise IT landscape is built on this pairing. We have Kubernetes running on managed clusters, OpenTelemetry pipelines feeding cloud-native observability, and Linux, Postgres, Redis, and Java – all delivered as secure, scalable, managed services.

As you can see, behind the scenes, hyperscalers are contributing more than compute. They are actively investing in the open source ecosystem. And these aren’t isolated contributions, they signal a larger trend: cloud and OSS are no longer separate spheres. They are interdependent, each shaping the roadmap of the other.

And the real winners? Customers.

Enterprises benefit when innovation from open communities meets the scale, automation, and security of cloud platforms. You get the openness you want, and the reliability you need. You gain velocity without sacrificing visibility. You build on open standards while delivering business outcomes.

When cloud providers and OSS communities collaborate (and not compete), modern IT gets better for everyone.

Sustainable Collaboration

So, where does this go from here?

We are entering a phase where co-evolution between open source and cloud platforms becomes the norm. Sustainability is no longer just a community conversation. It’s becoming a core pillar of enterprise architecture and vendor strategy.

We will likely see a continued rise in permissive-but-protective licenses with models like Polyform, BSL, or even custom usage clauses that allow free adoption but limit monetization without contribution. These licenses won’t solve every conflict, but they are a step toward fairness by keeping projects open while preserving the creator’s ability to fund long-term development.

On the cloud provider side, we will see more intentional programs designed to give back. That could mean upstream engineering contributions, visibility via marketplace integration, or funding through sponsorships.

Meanwhile, OSS vendors and maintainers are moving beyond “just licenses” toward hybrid monetization. Some go SaaS-first. Some offer premium support or managed versions of their tools. We will also likely see more partnerships between OSS projects and cloud platforms, where integration, co-marketing, and joint roadmaps replace conflict with alignment.

And the payoff?

Enterprises will benefit the most. They will be able to build with the freedom and transparency of open source, while still consuming services with the resilience, automation, and support that modern business demands. No one wants to reinvent patching pipelines, build observability stacks from scratch, or manage HA for distributed databases. Managed services let teams focus on value, not plumbing.

The future isn’t about choosing between “cloud” or “open”, it’s about building systems that are both open and operable, both innovative and sustainable.

Because that’s the direction modern IT is already moving. Whether we plan for it or not.

Final Thoughts

Cloud platforms took tools from hobby projects and universities and turned them into the foundation of global infrastructure. That’s something worth acknowledging, even celebrating!

Of course, the discussion isn’t over. Sustainability matters. Transparency matters. But painting cloud providers as the problem risks missing the bigger opportunity.

Let us focus on building systems that are both open and operable. Let’s support OSS maintainers, not just in code, but in business. And let’s keep the conversation moving – not from a place of blame, but from a vision of shared success.

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation 9.0 (VCF) introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future and vCD to be retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap, five years ago.

And even now, many of the services (add-ons) in VCF 9.0 – like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services – are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

Oracle Compute Cloud@Customer – The Sovereign Cloud Platform Europe Has Been Waiting For

Europe has always taken data privacy, neutrality, and independence seriously. Whether you are operating in government, healthcare, banking, or energy, the message is clear: sensitive workloads need to stay within national borders. However, sovereignty shouldn’t come at the expense of innovation, agility, or cost efficiency. This is exactly where Oracle Compute Cloud@Customer (C3) steps in.

With C3, you are not forced to choose between the benefits of public cloud and the control of on-prem infrastructure. You get both. Oracle brings a consistent, fully managed OCI experience directly into your data center or trusted hosting environment.

This is cloud designed for data residency and regulatory alignment, without compromise. Customers retain full operational control thanks to Oracle’s secure Operator Control and disconnected operating model, giving you full autonomy over who can access what and when. If you don’t want Oracle to touch it, they won’t.

But this isn’t just about compliance, it’s about enabling innovation. With C3, organizations can develop once and run anywhere. You can build modern applications on OCI using containers, Kubernetes, or virtual machines (VMs), and then deploy them on-prem with C3, in a public OCI region, or any hybrid setup. This gives developers and architects freedom, without forcing the business into compliance headaches.
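
As a simple illustration of that “develop once, run anywhere” idea: with the OCI Python SDK, the code stays the same and only the configuration profile decides whether it talks to a public region or to a C3 rack on-premises. The profile names and compartment OCID below are placeholders:

```python
import oci

def list_compute(profile: str, compartment_id: str):
    # Reads ~/.oci/config; the profile determines region/endpoint, not the code.
    config = oci.config.from_file(profile_name=profile)
    compute = oci.core.ComputeClient(config)
    return [i.display_name for i in compute.list_instances(compartment_id).data]

# Same call, different placement: a public OCI region or the C3 rack on-prem.
print(list_compute("PUBLIC_EU", "ocid1.compartment.oc1..example"))
print(list_compute("C3_ONPREM", "ocid1.compartment.oc1..example"))
```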

Even more compelling: C3 is priced the same as the public OCI regions. No “on-prem premium.” Unlike other hyperscalers that charge more for bringing cloud services into your data center, Oracle keeps the economics consistent. That means you can deploy at scale wherever you need it, without blowing your IT budget. And because OCI is up to 60% cheaper than competitors – especially for IaaS-heavy workloads and managed Kubernetes – C3 becomes not just a compliance play, but a strategic cost advantage.

For organizations already running Exadata Cloud@Customer (ExaCC), the transition to C3 is seamless. You extend the same OCI architecture from your Oracle Database infrastructure to your full application landscape – compute, storage, network, containers, and more – all under one public OCI control plane. One architecture, one operational model, full sovereignty.

And for those looking to modernize full application stacks from databases to middleware to frontend services, C3 provides the flexibility to run both Oracle and open-source technologies.

Note: For those requiring the full breadth of OCI services in a sovereign, connected environment, Oracle also offers OCI Dedicated Region.

Oracle Compute Cloud@Customer Isolated – The Next Level of Sovereignty

Oracle has taken the concept of sovereign cloud one step further. With Oracle Compute Cloud@Customer Isolated (C3I), organizations can now run cloud-native workloads in a fully air-gapped environment, without any operational dependency on Oracle. No outbound connections. No Oracle-managed control plane. No shared infrastructure. Just full autonomy and local control. C3I is Oracle-owned and customer/partner-managed.

It’s a real, production-ready deployment model for mission-critical and highly regulated environments. Designed specifically for governments, defense, intelligence, and critical infrastructure operators such as telcos, Compute Cloud@Customer Isolated addresses scenarios where even a standard sovereign cloud isn’t enough.

The platform runs the same core OCI services (compute, storage, networking, Kubernetes) but is completely disconnected from Oracle’s global cloud infrastructure. Everything is deployed on-premises in your trusted facility, and operated entirely by your own team or a national partner under your control. Oracle is not in the loop. No telemetry is sent back. No patching happens unless you initiate it.

For Europe, this matters. Regulations are tightening. Risk tolerance is dropping. And cloud decisions now sit under the spotlight of data strategy, digital self-determination, and public trust. With C3I, organizations don’t need to compromise. You can modernize legacy infrastructure, run secure workloads, and meet the strictest data protection laws without handing over operational control to a foreign hyperscaler.

Oracle Compute Cloud@Customer Isolated

So if you’re building for maximum sovereignty, whether for a national security project, a classified analytics platform, or a regulated healthcare system, C3I gives you the control you need, without the complexity of building it all from scratch.

Note: For those requiring the full breadth of OCI services in a sovereign, air-gapped environment, Oracle also offers an Isolated Region. It delivers the complete OCI stack, including advanced PaaS and data services, fully disconnected and deployed inside your own data center. It’s the natural next step when C3I isn’t enough.

Cloud-Native at Home – Modernizing Legacy Workloads on C3

Whether you are building microservices, deploying containers with Kubernetes, or refactoring legacy applications, C3 gives you the flexibility and tools to modernize at your own pace without sending data to the public cloud.

For many organizations, this is especially relevant when looking at existing on-premises environments. C3 opens a new path for modernizing applications without a full lift-and-shift. You can gradually move critical services from traditional virtual machines into containers, adopt infrastructure-as-code practices, and standardize on CI/CD pipelines. All within a compliant, in-country environment that mirrors public OCI.

Using OCI services like OKE (Oracle Kubernetes Engine) on C3, teams can deploy cloud-native apps alongside traditional workloads. It is entirely possible to run a legacy database VM next to containerized microservices, with consistent networking, storage, and security policies across both. This hybrid model is ideal for customers who want to modernize existing applications incrementally, without taking unnecessary risks.
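
For example, a standard Kubernetes Deployment applied with the official Python client works the same way against OKE on C3 as against OKE in a public OCI region; only the kubeconfig context (the name below is invented) changes:

```python
from kubernetes import client, config

# The manifest does not care whether the cluster is OKE on C3 in your data
# center or OKE in a public region; only the kubeconfig context differs.
config.load_kube_config(context="oke-on-c3")

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="api",
                                   image="registry.example.com/orders-api:1.0"),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```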

For VMware and Nutanix customers, C3 provides a future-ready landing zone. You can continue to run VM-based workloads on OCI-compatible compute shapes and use that as the foundation to containerize where it makes sense. This avoids expensive rewrites or disruptive replatforming. Instead, C3 supports a phased modernization strategy.

Note: OKE on C3 is free. Standard OCI pricing for VM nodes applies. 

Oracle Compute Cloud@Customer Supports Red Hat OpenShift

Oracle Compute Cloud@Customer (C3) keeps expanding its capabilities for customers, and a key recent addition is support for Red Hat OpenShift.

Artificial Intelligence on Compute Cloud@Customer

With Oracle’s announcement in February 2025, customers can add Nvidia GPUs to C3 deployments with the following key features:

  • Independent scaling of GPUs, compute, and storage: up to 48 L40S NVIDIA GPUs, 6,624 OCPUs with 80.4 TB of memory, and a mix of up to 3.65 PB of high-capacity storage and 1.2 PB of high-performance storage.
  • Powerful GPU VMs: up to four NVIDIA L40S GPUs, 108 Intel Xeon 8480+ CPU cores, 800-GB DDR5 memory, and 400 Gbps network bandwidth for the most demanding workloads.
  • Ultra-fast network connectivity: 800-Gbps data center connectivity that can directly connect an Exadata Cloud@Customer Machine to combine the power of GPUs with Oracle Database 23ai’s integrated AI Vector Search.

EU Sovereign Operations for Oracle Compute Cloud@Customer

In May 2025, Oracle announced the availability of Oracle EU Sovereign Operations for C3. This means that C3 can now also be operated from the EU Sovereign Cloud, with the same pricing and the same service you know from commercial OCI regions.

Previously, operations and automation for Compute Cloud@Customer were handled via global OCI control planes. With EU Sovereign Operations, that changes:

  • All automation and admin services now reside within Oracle’s EU Sovereign Cloud regions.
  • Operations are managed by Oracle teams based in the EU, ensuring compliance.
  • Hardware deployment and support are delivered by personnel authorized to work in the customer’s country.

EU Sovereign Operations for Compute Cloud@Customer is offered with the control plane located in one of Oracle’s EU Sovereign Cloud regions, currently either Madrid, Spain, or Frankfurt, Germany. The service is available in European Union member countries and other select countries in Europe, and it delivers the same features, functions, value, and service level objectives (SLOs) as Compute Cloud@Customer operated from public OCI regions.

Last Comments

In short, Oracle Compute Cloud@Customer is not just a cloud, it’s your sovereign cloud. It gives enterprises the tools they need to stay compliant, stay competitive, and stay in control. And that is what the next generation of digital sovereignty should look like.