The Cloud Isn’t Eating Everything. And That’s a Good Thing

A growing number of experts warn that governments and enterprises are being “digitally colonized” by U.S. cloud giants. A provocative claim and a partial truth. It’s an emotionally charged view, and while it raises valid concerns around sovereignty and strategic autonomy, it misses the full picture.

Because here’s the thing. Some (if not most) workloads in enterprise and public sector IT environments are still hosted on-premises. This isn’t due to resistance or stagnation. It’s the result of deliberate decisions made by informed IT leaders. Leaders who understand their business, compliance landscape, operational risks, and technical goals.

We are no longer living in a world where the public cloud is the default. We are living in a world where “cloud” is a choice and is used strategically. This is not failure. It’s maturity.

A decade ago, “cloud-first” was often a mandate. CIOs and IT strategists were encouraged, sometimes pressured, to move as much as possible to the public cloud. It was seen as the only way forward. The public cloud was marketed as cheaper, faster, and more innovative by default.

But that narrative didn’t survive contact with reality. As migrations progressed, enterprises quickly discovered that not every workload belongs in the cloud. The benefits were real, but so were the costs, complexities, and trade-offs.

Today, most organizations operate with a much more nuanced perspective. They take the time to evaluate each application or service based on its characteristics. Questions like: Is this workload latency-sensitive? What are the data sovereignty requirements? Can we justify the ongoing operational cost at scale? Is this application cloud-native or tightly coupled to legacy infrastructure? What are the application’s dependencies?
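To make that kind of per-workload evaluation a little more tangible, here is a minimal sketch of how such a checklist could be expressed in code. The criteria, field names, and decision rules are simplified assumptions for illustration, not an established framework.

```python
from dataclasses import dataclass

# Illustrative sketch only: the criteria, fields, and rules below are
# assumptions made up for this example, not an established framework.

@dataclass
class Workload:
    name: str
    latency_sensitive: bool        # needs close proximity to users or machines?
    sovereignty_constrained: bool  # data must stay in a specific jurisdiction?
    cloud_native: bool             # loosely coupled, containerized?
    legacy_dependencies: int       # number of tightly coupled legacy systems
    monthly_cloud_cost: float      # rough estimate, public cloud
    monthly_onprem_cost: float     # rough estimate, existing infrastructure

def suggest_placement(w: Workload) -> str:
    """Return a starting point for discussion, not a verdict."""
    if w.sovereignty_constrained or w.latency_sensitive:
        return "on-premises / sovereign region"
    if w.cloud_native and w.legacy_dependencies == 0 and w.monthly_cloud_cost <= w.monthly_onprem_cost:
        return "public cloud"
    return "hybrid candidate - needs a deeper assessment"

erp = Workload("erp-core", latency_sensitive=True, sovereignty_constrained=True,
               cloud_native=False, legacy_dependencies=7,
               monthly_cloud_cost=42_000, monthly_onprem_cost=30_000)
print(erp.name, "->", suggest_placement(erp))
```

The value isn’t in the code itself. It’s in forcing those questions to be answered explicitly for every workload instead of defaulting to one platform.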

This is what maturity looks like. It’s not about saying “yes” or “no” to the cloud in general. It’s about using the right tool for the right job. Public cloud remains an incredibly powerful option. But it is no longer a one-size-fits-all solution. And that’s a good thing.

On-Premises Infrastructure Is Still Valid

There is this persistent myth that running your own datacenter, or even part of your infrastructure, is a sign that you are lagging behind. That if you are not in the cloud, you are missing out on agility, speed, and innovation. That view simply doesn’t hold up.

In reality, on-premises infrastructure is still a valid, modern, and strategic choice for many enterprises, especially in regulated industries like healthcare, finance, manufacturing, and public services. These sectors often have clear, non-negotiable requirements around data locality, compliance, and performance. In many of these cases, operating infrastructure locally is not just acceptable. It’s the best option available.

Modern on-prem environments are nothing like the datacenters of the past. Thanks to advancements in software-defined infrastructure, automation, and platform engineering, on-prem can offer many of the same cloud-like capabilities: self-service provisioning, infrastructure-as-code, and full-stack observability. When properly built and maintained, on-prem can be just as agile as the public cloud.

That said, it’s important to acknowledge a key difference. While private infrastructure gives you full control, it can take longer to introduce new services and capabilities. You are not tapping into a global marketplace of pre-integrated services and APIs like you would with Oracle Cloud or Microsoft Azure. You are depending on your internal teams to evaluate, integrate, and manage each new component.

And that’s totally fine, if your CIO’s focus is stability, compliance, and predictable innovation cycles. For many organizations, that’s (still) exactly what’s needed. But if your business thrives on emerging technologies, needs instant access to the latest AI or analytics platforms, or depends on rapid go-to-market execution, then public cloud innovation cycles might offer an advantage that’s hard to replicate internally.

Every Enterprise Can Still Build Their Own Data Center Stack

It’s easy to assume that the era of enterprises building and running their own cloud-like platforms is over. After all, hyperscalers move faster, operate at massive scale (think about the thousands of engineers and product managers), and offer integrated services that are hard to match. For many organizations, especially those lacking deep infrastructure expertise or working with limited budgets, the public cloud is the most practical and cost-effective option.

But that doesn’t mean enterprises can’t or shouldn’t build their own platforms, especially when they have strong reasons to do so. Many still do, and do it effectively. With the right people, architecture, and operational discipline, it’s entirely possible to build private or hybrid environments that are tailored, secure, and strategically aligned.

The point isn’t to compete with hyperscalers on scale, it’s to focus on fit. Enterprises that understand their workloads, compliance requirements, and business goals can create infrastructure that’s more focused and more integrated with their internal systems.

Yes, private platforms may evolve more slowly. They may require more upfront investment and long-term commitment. But in return, they offer control, transparency, and alignment. Advantages that can outweigh speed in the right contexts!

And critically, the tooling has matured. Today’s internal platforms aren’t legacy silos but are built with the same modern engineering principles: Kubernetes, GitOps, telemetry, CI/CD, and self-service automation.

Note: If a customer wants the best of both worlds, there are options like OCI Dedicated Region.

The Right to Choose the Right Cloud

One of the most important shifts we are seeing in enterprise IT is the move away from single-platform thinking. No one-size-fits-all platform exists. And that’s precisely why the right to choose the right cloud matters.

Public cloud makes sense in many scenarios. Organizations might choose Azure because of its tight integration with Microsoft tools. They might select Oracle Cloud for better pricing or AI capabilities. At the same time, they continue to operate significant workloads on-premises, either by design or necessity.

This is the real world of enterprise IT: mixed environments, tailored solutions, and pragmatic trade-offs. These aren’t poor decisions or “technical debt”. Often, they are deliberate architectural choices made with a full understanding of the business and operational landscape. 

What matters most is flexibility. Organizations need the freedom to match workloads to the environments that best support them, without being boxed in by ideology, procurement bias, or compliance roadblocks. And that flexibility is what enables long-term resilience.

What the Cloud Landscape Actually Looks Like

Step into any enterprise IT environment today, and you will find a blend of technologies, platforms, and operational models. And the mix varies based on geography, industry, compliance rules, and historical investments.

The actual landscape is not black or white. It’s a continuum of choices. Some services live in hyperscale clouds. Others are hosted in sovereign, regional datacenters. Many still run in private infrastructure owned and operated by the organization itself.

This hybrid approach isn’t messy. It’s intentional and reflects the complexity of enterprise IT and the need to balance agility with governance, innovation with stability, and cost with performance.

What defines modern IT today is the operating model. The cloud is not a place. It’s a way of working. Whether your infrastructure is on-prem, in the public cloud, or somewhere in between, the key is how it’s automated, how it’s managed, how it integrates with developers and operations, and how it evolves with the business.

Conclusion: Strategy Over Hype – And Over Emotion

There’s no universal right or wrong when it comes to cloud strategy. Only what works for your organization based on risk, requirements, talent, and timelines. But we also can’t ignore the reality of the current market landscape.

Today, U.S. hyperscalers control over 70% of the European cloud market. Across infrastructure layers like compute, storage, networking, and software stacks, Europe’s digital economy relies on U.S. technologies for 85 to 90% of its foundational capabilities. 

But these numbers didn’t appear out of nowhere.

Let’s be honest: it’s not the fault of hyperscalers that enterprises and public sector organizations chose to adopt their platforms. Those were decisions made by people – CIOs, procurement teams, IT strategists – driven by valid business goals: faster time-to-market, access to innovation, cost modeling, availability of talent, or vendor consolidation.

These choices might deserve reevaluation, yes. But they don’t deserve emotional blame.

We need to stop framing the conversation as if U.S. cloud providers “stole” the European market. That kind of narrative doesn’t help anyone. The reality is more complex and far more human. Companies chose platforms that delivered, and hyperscalers were ready with the talent, services, and vision to meet that demand.

If we want alternatives, if we want European options to succeed, we need to stop shouting at the players and start changing the rules of the game. That means building competitive offerings, investing in skills, aligning regulation with innovation, and making sovereignty a business advantage, not just a political talking point.

Can a Unified Multi-Cloud Inventory Transform Cloud Management?

When we spread our workloads across clouds like Oracle Cloud, AWS, Azure, Google Cloud, maybe even IBM, or smaller niche players, we knowingly accept complexity. Each cloud speaks its own language, offers its own services, and maintains its own console. What if there were a central place where we could see everything: every resource, every relationship, across every cloud? A place that lets us truly understand how our distributed architecture lives and breathes?

I find myself wondering if we could one day explore a tool or approach that functions as a multi-cloud inventory, keeping track of every VM, container, database, and permission – regardless of the platform. Not because it’s a must-have today, but because the idea sparks curiosity: what would it mean for cloud governance, cost transparency, and risk reduction if we had this true single pane of glass?
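Just to make the thought experiment concrete: a normalized inventory record could look something like the sketch below. Everything in it is hypothetical. The field names, the provider strings, and the idea of storing relationships as related_ids are assumptions, not any existing product’s data model.

```python
from dataclasses import dataclass, field

# Purely hypothetical data model to make the idea tangible.

@dataclass
class CloudResource:
    resource_id: str                       # provider-native ID (OCID, ARN, ...)
    provider: str                          # "oci", "aws", "azure", "gcp", ...
    resource_type: str                     # normalized type: "vm", "bucket", "db", ...
    region: str
    tags: dict = field(default_factory=dict)
    related_ids: list = field(default_factory=list)  # edges to other resources

@dataclass
class Inventory:
    resources: dict = field(default_factory=dict)

    def add(self, resource: CloudResource) -> None:
        self.resources[resource.resource_id] = resource

    def by_provider(self, provider: str) -> list:
        return [r for r in self.resources.values() if r.provider == provider]

inv = Inventory()
inv.add(CloudResource("ocid1.instance.oc1..example", "oci", "vm", "eu-frankfurt-1",
                      tags={"env": "prod"}, related_ids=["arn:aws:s3:::raw-data"]))
inv.add(CloudResource("arn:aws:s3:::raw-data", "aws", "bucket", "eu-central-1"))
print(len(inv.resources), "resources,", len(inv.by_provider("oci")), "on OCI")
```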

Who feels triggered now because I said “single pane of glass”? 😀 Let’s move on!

Could a Multi-Cloud Command Center Change How We Visualize Our Environment?

Let’s imagine it: a clean interface, showing not just lists of resources, but the relationships between them. Network flows across cloud boundaries. Shared secrets between apps on “cloud A” and databases on “cloud B”. Authentication tokens moving between clouds.

What excites me here isn’t the dashboard itself, but the possibility of visualizing the hidden links across clouds. Instead of troubleshooting blindly, or juggling a dozen consoles, we could zoom out for a bird’s-eye view. Seeing in one place how data and services crisscross providers.

Multi-Cloud Insights

I don’t know if we’ll get there anytime soon (or if such a solution already exists) but exploring the idea of a unified multi-cloud visualization tool feels like an adventure worth considering.

Multi-Cloud Search and Insights

When something breaks, when we are chasing a misconfiguration, or when we want to understand where we might be exposed, it often starts with a question: Where is this resource? Where is that permission open?

What if we could type that question once and get instant answers across clouds? A global search bar that could return every unencrypted public bucket or every server with a certain tag, no matter which provider it’s on.
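Sticking with the thought experiment, here is a sketch of what that single question could look like against a normalized inventory. The records and the public/encrypted flags are invented; a real tool would have to populate them from each provider’s own APIs.

```python
# Hypothetical inventory: flags and records are invented for illustration.

inventory = [
    {"id": "bucket-raw",  "provider": "aws",   "type": "bucket", "public": True,  "encrypted": False, "tags": {"env": "prod"}},
    {"id": "bucket-logs", "provider": "oci",   "type": "bucket", "public": False, "encrypted": True,  "tags": {"env": "dev"}},
    {"id": "vm-frontend", "provider": "azure", "type": "vm",     "public": False, "encrypted": True,  "tags": {"env": "prod"}},
]

def find(resources, **criteria):
    """Return every resource matching all criteria, regardless of provider."""
    return [r for r in resources if all(r.get(k) == v for k, v in criteria.items())]

# "Show me every unencrypted public bucket, on any cloud."
print(find(inventory, type="bucket", public=True, encrypted=False))

# "Show me everything tagged env=prod."
print([r for r in inventory if r["tags"].get("env") == "prod"])
```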

Multi-Cloud Graph Query

Wouldn’t it be interesting if that search also showed contextual information: connected resources, compliance violations, or cost impact? It’s a thought I keep returning to because the journey toward proactive multi-cloud operations might start with simple, unified answers.
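As a rough illustration of what that context could mean, here is a tiny, invented relationship graph and a lookup that returns a resource together with everything connected to it within a few hops.

```python
# Invented edges between resources across clouds, for illustration only.

edges = {
    "orders-db (oci)":   ["api-gateway (aws)", "etl-job (azure)"],
    "api-gateway (aws)": ["orders-db (oci)", "auth-service (azure)"],
    "etl-job (azure)":   ["orders-db (oci)", "datalake-bucket (gcp)"],
}

def with_context(resource, depth=1):
    """Return the resource plus everything connected to it within `depth` hops."""
    seen, frontier = {resource}, {resource}
    for _ in range(depth):
        frontier = {n for r in frontier for n in edges.get(r, [])} - seen
        seen |= frontier
    return {"resource": resource, "connected": sorted(seen - {resource})}

print(with_context("orders-db (oci)", depth=2))
```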

Could a True Multi-Cloud App Require This Kind of Unified Lens?

Some teams are already building apps that stretch across clouds: an API front-end in one provider, authentication in another, ML workloads on specialized platforms, and data lakes somewhere else entirely. These aren’t cloud-agnostic apps, they are “cloud-diverse” apps. Purpose-built to exploit best-of-breed services from different providers.

That makes me wonder: if an app inherently depends on multiple clouds, doesn’t it deserve a control plane that’s just as distributed? Something that understands the unique role each cloud plays, and how they interact, in one coherent operational picture?

I don’t have a clear answer, but I can’t help thinking about how multi-cloud-native apps might need true multi-cloud-native management.

VMware Aria Hub and Graph – Was It a Glimpse of the Future?

Not so long ago, VMware introduced Aria Hub and Aria Graph with an ambitious promise: a single place to collect and normalize resource data from all major clouds, connect it into a unified graph, and give operators a true multi-cloud inventory and control plane. It was one of the first serious attempts to address the challenge of understanding relationships between cloud resources spread across different providers.

VMware Aria Hub Dashboard

The idea resonated with anyone who has struggled to map sprawling cloud estates or enforce consistent governance policies in a multi-cloud world. A central graph of every resource, dependency, and configuration sounded like a game-changer. Not only for visualization, but also for powerful queries, security insights, and cost management.

But when Broadcom acquired VMware, they shifted focus away from VMware’s SaaS portfolio. Many SaaS-based offerings were sunset or sidelined, including Aria Hub and Aria Graph, effectively burying the vision of a unified multi-cloud inventory platform along with them.

I still wonder: did VMware Aria Hub and Graph show us a glimpse of what multi-cloud operations could look like if we dared to standardize resource relationships across clouds? Or did it simply arrive before its time, in an industry not yet ready to embrace such a radical approach?

Either way, it makes me even more curious about whether we might one day revisit this idea and how much value a unified resource graph could unlock in a world where multi-cloud complexity continues to grow.

Final Thoughts

I don’t think there’s a definitive answer yet to whether we need a unified multi-cloud inventory or command center today. Some organizations already have mature processes and tooling that work well enough, even if they are built on scripts, spreadsheets, or point solutions glued together. But as multi-cloud strategies evolve, and as more teams start building apps that intentionally spread across multiple providers, I find myself increasingly curious about whether we will see renewed demand for a shared data model of our entire cloud footprint.

Because with each new cloud we adopt, complexity grows exponentially. Our assets scatter, our identities and permissions multiply, and our ability to keep track of everything by memory or siloed dashboards fades. Even something simple, like understanding “what resources talk to this database?” becomes a detective story across clouds.

A solution that offers unified visibility, context, and even policy controls feels almost inevitable if multi-cloud architectures continue to accelerate. And yet, I’m also aware of how hard this problem is to solve. Each cloud provider evolves quickly, their APIs change, and mapping their semantics into a single, consistent model is an enormous challenge.

That’s why, for now, I see this more as a hypothesis. An idea to keep exploring rather than a clear requirement. I’m fascinated by the thought of what a central multi-cloud “graph” could unlock: faster investigations, smarter automation, tighter security, and perhaps a simpler way to make sense of our expanding environments.

Whether we build it ourselves, wait for a vendor to try again, or discover a new way to approach the problem, I’m eager to see how the industry experiments with this space in the years ahead. Because in the end, the more curious we stay, the better prepared we’ll be when the time comes to simplify the complexity we’ve created.

Sovereign Clouds and the VMware Earthquake: Dependency Isn’t Just a Hyperscaler Problem

The concept of “sovereign cloud” has been making waves across Europe and beyond. Politicians talk about it. Regulators push for it. Enterprises (re-)evaluate it. On the surface, it sounds like a logical evolution: regain control, keep data within national borders, reduce exposure to foreign jurisdictions, and while you are at it, maybe finally break free from the gravitational pull of the U.S. hyperscalers.

After all, hyperscaler dependency is seen as the big bad wolf. If your workloads live in AWS, Azure, Google Cloud or Oracle Cloud Infrastructure, you are automatically exposed to price increases, data sovereignty concerns, U.S. legal reach (hello, CLOUD Act), and a sense of vendor lock-in that seems harder to escape with every commit to infrastructure-as-code.

So, the solution appears simple: go local, go sovereign, go safe.

But if only it were that easy.

The truth is: sovereignty isn’t something you can just buy off the shelf. It’s not a matter of switching cloud logos or picking the provider that wraps their marketing in your national flag. Because even within your own datacenter, even with platforms that have long been considered “sovereign” and independent, the same risks apply.

The best example? VMware.

What happened in the VMware ecosystem over the past year should be a wake-up call for anyone who thinks sovereignty equals control. Because, as we have now seen, control can vanish. Fast. Very fast.

VMware’s Rapid Fall from Grace

Take VMware. For years, it was the go-to platform for building secure, sovereign private clouds. Whether in your own datacenter or hosted by a trusted service provider in your region, VMware felt like the safe, stable choice. No vendor lock-in (allegedly), no forced cloud-native rearchitecture, and full control over your workloads. Rainbows, unicorns, and that warm fuzzy feeling of sovereignty.

Then came the Broadcom acquisition, and with it, a cold splash of reality.

Practically overnight, prices shot up. In some cases, more than doubled. Features were suddenly stripped out or repackaged into higher-priced bundles. Longstanding partner agreements were shaken, if not broken. Products disappeared or were drastically repositioned. Customers and partners were caught off guard. Not just by the changes, but by how quickly they hit.

And just like that, a platform once seen as a cornerstone of sovereign IT became a textbook example of how fragile that sovereignty really is.

Sovereignty Alone Doesn’t Save You

The VMware story exposes a hard truth: so-called “sovereign” infrastructure isn’t immune to disruption. Many assume risk only lives in the public cloud under the branding of AWS, Azure, or Oracle Cloud. But in reality, the triggers for a “cloud exit” or forced platform shift can be found anywhere. Including on-premises!

A sudden licensing change. An unexpected acquisition. A new product strategy that leaves your current setup stranded. None of these things care whether your workloads are in a public cloud region or a private rack in your basement. Dependency is dependency, and it doesn’t always come with a hyperscaler logo.

It’s Not About Picking the Right Vendor. It’s About Being Ready for the Wrong One.

That’s why sovereignty, in the real world, isn’t something you just buy. It’s something you design for.

Note: Some hyperscalers now offer “sovereign by design” solutions, but even these require deeper architectural thinking.

Sure, a Greenfield build on a sovereign cloud stack sounds great. Fresh start, full control, compliance checkboxes all ticked. But the reality for most organizations is very different. They have already invested years into specific platforms, tools, and partnerships. There are skill gaps, legacy systems, ongoing projects, and plenty of inertia. Ripping it all out for the sake of “clean” sovereignty just isn’t feasible.

That’s what makes architecture, flexibility, and diversification so critical. A truly resilient IT strategy isn’t just about where your data lives or which vendor’s sticker is on the server. It’s about being ready (structurally, operationally, and contractually) for things to change.

Because they will change.

Open Source ≠ Sovereign by Default

Spoiler: Open source won’t save you either

Let’s address another popular idea in the sovereignty debate. The belief that open source is the magic solution. The holy grail. The thinking goes: “If it’s open, it’s sovereign”. You have the source code, you can run it anywhere, tweak it however you like, and you are free from vendor lock-in. Yeah, right.

Sounds great. But in practice? It’s not that simple.

Yes, open source can enable sovereignty, but it doesn’t guarantee it. Just because something is open doesn’t mean it’s free of risk. Most open-source projects rely on a global contributor base, and many are still controlled, governed, or heavily influenced by large commercial vendors – often headquartered in the same jurisdictions we are supposedly trying to avoid. Yes, that’s good and bad at the same time, isn’t it?

And let’s be honest: having the source code doesn’t mean you suddenly have a DevOps army to maintain it, secure it, patch it, integrate it, scale it, monitor it, and support it 24/7. In most cases, you will need commercial support, managed services, or skilled specialists. And with that, new dependencies emerge.

So what have you really achieved? Did you eliminate risk or just shift it?

Open source is a fantastic ingredient in a sovereign architecture – in any cloud architecture. But it’s not a silver bullet.

Behind the Curtain – Complexity, Not Simplicity

From the outside, especially for non-IT people, the sovereign cloud debate can look like a clear binary: US hyperscaler = risky, local provider = safe. But behind the curtain, it’s much more nuanced. You are dealing with a web of relationships, existing contracts, integrated platforms, and real-world limitations.

The Broadcom-VMware shake-up was a loud and very public reminder that disruption can come from any direction. Even the platforms we thought were untouchable can suddenly become liabilities.

So the question isn’t: “How do we go sovereign?”

It’s: “How do we stay in control, no matter what happens?”

That’s the real sovereignty.

Open Source in the Cloud Era – Still Free, but Never Cheap?

This article continues the conversation started in “Open source can help with portability and lock-in – but it is not a silver bullet”, where we explored how open source technologies can reduce cloud lock-in, but aren’t a universal fix. Now we go one step further.

Open source software (OSS) is the unsung hero behind much of the innovation we see in the cloud today. From container runtimes powering serverless workloads to the databases running mission-critical apps, OSS is everywhere. But now the question arises: how do we make open source sustainable and what role do the cloud providers play?

Some say the hyperscalers are the villains in this story. I see it differently.

I believe the major cloud platforms including AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure (OCI) are not undermining open source. On the contrary, they are expanding its reach, accelerating its maturity, and making it more accessible than ever before.

Open Source Is The Backbone of the Cloud

The most exciting thing about cloud platforms today is how accessible open source technology has become. Technologies like Kubernetes, Prometheus, MySQL, Redis, and Postgres are no longer just community-maintained stacks. They are global services delivered with enterprise reliability. What hyperscalers such as AWS, Azure, and Oracle Cloud have done is operationalize these tools at scale, offering managed services that developers trust, without having to worry about patching, HA, or backups. The result is remarkable: global systems running OSS as a service.

In other words, they have turned OSS into mainstream infrastructure. That should not be understated.

Running Open Source at Scale Is Hard (And Expensive)

Yes, open source is free to use. But it’s not free to run.

Anyone can deploy an open source application. Running it at scale, though? That’s a different story. It takes discipline, expertise, and relentless operational focus:

  • high availability setups,
  • automatic failover,
  • performance tuning,
  • deep telemetry,
  • continuous patching,
  • secure configurations,
  • IAM integration,
  • versioning strategy,
  • backup orchestration,
  • and regular upgrades.

These are day-to-day realities for teams operating at scale. The small sketch below shows just one of them.
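Here is one small, illustrative slice of that list: when you run the stack yourself, even the question “are our backups recent and restore-tested?” becomes code you own. The record format and the 24-hour threshold are assumptions made up for the example.

```python
import datetime

# Assumed record format and threshold, for illustration only.
MAX_BACKUP_AGE = datetime.timedelta(hours=24)

backups = [
    {"db": "orders-postgres", "finished_at": datetime.datetime(2025, 6, 1, 2, 15), "restore_tested": True},
    {"db": "sessions-redis",  "finished_at": datetime.datetime(2025, 5, 28, 3, 0), "restore_tested": False},
]

def backup_findings(records, now):
    findings = []
    for b in records:
        if now - b["finished_at"] > MAX_BACKUP_AGE:
            findings.append(f"{b['db']}: last backup older than {MAX_BACKUP_AGE}")
        if not b["restore_tested"]:
            findings.append(f"{b['db']}: restore has never been verified")
    return findings

for issue in backup_findings(backups, now=datetime.datetime(2025, 6, 1, 9, 0)):
    print("WARN:", issue)
```

Multiply that by failover, patching, telemetry, IAM, and upgrades, and the appeal of a managed service becomes obvious.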

That’s why managed services from hyperscalers exist and why they are so widely adopted. Platforms like Amazon RDS, Azure Database for PostgreSQL, Google Cloud Memorystore, or Oracle MySQL HeatWave take the core of a powerful open source engine and remove the heavy lifting. You are not just getting hosted software, you are getting resilience, automation, and accountability.

When you consume Google’s GKE or Oracle Kubernetes Engine (OKE), you are effectively outsourcing operations. You gain predictability and uptime without building a 24/7 SRE team. That’s not lock-in. It’s operational leverage!

Hyperscalers aren’t restricting choice. They are offering a second path. One designed for teams that need focus, speed, and as little downtime as possible.

A Fair Critique – OSS Creators Left Behind?

Of course, there’s another side to this story. One that deserves attention.

Some open source creators and maintainers feel left behind in this cloud-powered success story. Their argument is simple: hyperscalers are monetizing open source projects at massive scale, often without contributing back in proportion – either in engineering resources, funding, or visibility.

And they have a point. Popular tools like MongoDB, Redis, and Elasticsearch were widely adopted, then productized by cloud platforms without formal partnerships. As a response, these projects changed their licenses to restrict commercial use by cloud providers. That, in turn, led to forks like OpenSearch (from Elasticsearch), Valkey (from Redis), or OpenTofu (from Terraform).

But this isn’t really a cloud problem, it’s an economic problem.

Open source used to be a side project or a contribution model. Today, it powers mission-critical infrastructure. That shift from volunteer-based innovation to always-on enterprise backbone created a funding gap. It’s no longer enough to push code to GitHub and wait for donations. Projects need full-time maintainers, security audits, documentation, roadmap planning, and long-term governance. That requires sustainable business models.

Cloud providers, on the other hand, rely on open source for customer value and velocity. Innovation doesn’t just come from inside hyperscaler walls, it flows in from the OSS community as well. The relationship is symbiotic. And it must evolve.

Yes, cloud vendors benefit from open ecosystems. But many are starting to give back – through engineering contributions, visibility programs, upstream engagement, and community funding. Oracle, for example, contributes to OpenJDK, GraalVM, and Helidon, and backs Linux Foundation efforts. Microsoft sponsors maintainers through GitHub Sponsors and supports dozens of OSS projects. Even AWS, long seen as an outsider, is now actively involved in maintaining forks like OpenSearch.

The path forward isn’t about choosing sides. It’s about redefining the balance: between freedom and funding, between platform and project. OSS maintainers need economic models that work. Hyperscalers need the trust and innovation open source brings. Everyone benefits when the relationship is healthy. Right?

Cloud and Open Source – Not a Rivalry, But a Partnership

The old “cloud versus open source” debate is no longer useful, because it no longer reflects reality.

We are not watching a rivalry unfold. We are witnessing mutual acceleration. Open source is the engine that drives much of today’s cloud innovation. And cloud platforms are the distribution channels that scale it to the world. One without the other? Still powerful, but far less impactful.

Today’s enterprise IT landscape is built on this pairing. We have Kubernetes running on managed clusters. OpenTelemetry pipelines feeding cloud-native observability. And Linux, Postgres, Redis, and Java, all delivered as secure, scalable, managed services.

As you can see, behind the scenes, hyperscalers are contributing more than compute. They are actively investing in the open source ecosystem. And these aren’t isolated contributions, they signal a larger trend: cloud and OSS are no longer separate spheres. They are interdependent, each shaping the roadmap of the other.

And the real winners? Customers.

Enterprises benefit when innovation from open communities meets the scale, automation, and security of cloud platforms. You get the openness you want, and the reliability you need. You gain velocity without sacrificing visibility. You build on open standards while delivering business outcomes.

When cloud providers and OSS communities collaborate (and not compete), modern IT gets better for everyone.

Sustainable Collaboration

So, where does this go from here?

We are entering a phase where co-evolution between open source and cloud platforms becomes the norm. Sustainability is no longer just a community conversation. It’s becoming a core pillar of enterprise architecture and vendor strategy.

We will likely see a continued rise in permissive-but-protective licenses with models like Polyform, BSL, or even custom usage clauses that allow free adoption but limit monetization without contribution. These licenses won’t solve every conflict, but they are a step toward fairness by keeping projects open while preserving the creator’s ability to fund long-term development.

On the cloud provider side, we will see more intentional programs designed to give back. That could mean upstream engineering contributions, visibility via marketplace integration, or funding through sponsorships.

Meanwhile, OSS vendors and maintainers are moving beyond “just licenses” toward hybrid monetization. Some go SaaS-first. Some offer premium support or managed versions of their tools. We will also likely see more partnerships between OSS projects and cloud platforms, where integration, co-marketing, and joint roadmaps replace conflict with alignment.

And the payoff?

Enterprises will benefit the most. They will be able to build with the freedom and transparency of open source, while still consuming services with the resilience, automation, and support that modern business demands. No one wants to reinvent patching pipelines, build observability stacks from scratch, or manage HA for distributed databases. Managed services let teams focus on value, not plumbing.

The future isn’t about choosing between “cloud” or “open”, it’s about building systems that are both open and operable, both innovative and sustainable.

Because that’s the direction modern IT is already moving. Whether we plan for it or not.

Final Thoughts

Cloud platforms took tools from hobby projects and universities and turned them into the foundation of global infrastructure. That’s something worth acknowledging, even celebrating!

Of course, the discussion isn’t over. Sustainability matters. Transparency matters. But painting cloud providers as the problem risks missing the bigger opportunity.

Let us focus on building systems that are both open and operable. Let’s support OSS maintainers, not just in code, but in business. And let’s keep the conversation moving – not from a place of blame, but from a vision of shared success.

 

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation (VCF) 9.0 introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future, with vCD being retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap, five years ago.

And even now, many of the services (add-ons) in VCF 9.0, like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

Open-Source Can Help With Portability And Lock-In But It Is Not A Silver Bullet

We have spent years chasing cloud portability and warning against vendor lock-in. And yet, every enterprise I have worked with is more locked in today than ever. Not because they failed to use open-source software (OSS). Not because they made bad decisions, but because real-world architecture, scale, and business momentum don’t care about ideals. They care about outcomes.

The public cloud promised freedom. APIs, managed services, and agility. Open-source added hope. Kubernetes, Terraform, Postgres. Tools that could, in theory, run anywhere. And so we bought into the idea that we were building “portable” infrastructure. That one day, if pricing changed or strategy shifted, we could pack up our workloads and move. But now, many enterprises are finding out the truth:

Portability is not a feature. It is a myth, and for most large organizations it is a unicorn: beautiful in theory, elusive in reality.

Let me explain. But before I do, let’s talk about interclouds again.

Remember Interclouds?

Interclouds, once hyped as the answer to cloud portability (and lock-in), promised a seamless way to abstract infrastructure across providers, enabling workloads to move freely between clouds. In theory, they would shield enterprises from vendor dependency by creating a uniform control plane and protocols across AWS, Azure, GCP, OCI and beyond.

David Bernstein Intercloud

Note: An idea and concept that was discussed in 2012. It is 2025, and not much has happened since then.

But in practice, intercloud platforms failed to solve the lock-in problem because they only masked it, not removed it. Beneath the abstraction layer, each provider still has its own APIs, services, network behaviors, and operational peculiarities.

Enterprises quickly discovered that you can’t abstract your way out of data gravity, compliance policies, or deeply integrated PaaS services. Instead of enabling true portability, interclouds just delayed the inevitable realization: you still have to commit somewhere.

The Trigger Nobody Plans For

Imagine you are running a global enterprise with 500 or 1,000 applications. They span two public clouds. Some are modern, containerized, and well-defined in Terraform. Others are legacy and fragile, lifted and shifted years ago in a hurry. A few run in third-party SaaS platforms.

Then the call comes: “We need to exit one of our clouds. Legal, compliance, pricing. Doesn’t matter why. It has to go.”

Suddenly, that portability you thought you had? It is smoke. The Kubernetes clusters are portable in theory, but the CI/CD tooling, monitoring stack, and security policies are not. Dozens of apps use PaaS services tightly coupled to their original cloud. Even the apps that run in containers still need to be re-integrated, re-tested, and re-certified in the new environment.

This isn’t theoretical. I have seen it firsthand. The dream of being “cloud neutral” dies the moment you try to move production workloads – at scale, with real dependencies, under real deadlines.

Open-Source – Freedom with Strings Attached

It is tempting to think that open-source will save you. After all, it is portable, right? It is not tied to any vendor. You can run it anywhere. And that is true on paper.

But the moment you run it in production, at enterprise scale, a new reality sets in. You need observability, governance, upgrades, SLAs. You start relying on managed services for these open-source tools. Or you run them yourself, and now your internal teams are on the hook for uptime, performance, and patching.

You have simply traded one form of lock-in for another: the operational lock-in of owning complexity.

So yes, open-source gives you options. But it doesn’t remove friction. It shifts it.

The Other Lock-Ins No One Talks About

When we talk about “avoiding lock-in”, we usually mean avoiding proprietary APIs or data formats. But in practice, most enterprises are locked in through completely different vectors:

Data gravity makes it painful to move large volumes of information, especially when compliance and residency rules come into play. The real issues are the latency, synchronization, and duplication challenges that come with moving data between clouds.

Tooling ecosystems create invisible glue. Your CI/CD pipelines, security policies, alerting, cost management. These are all tightly coupled to your cloud environment. Even if the core app is portable, rebuilding the ecosystem around it is expensive and time-consuming.

Skills and culture are rarely discussed, but they are often the biggest blockers. A team trained to build in cloud A doesn’t instantly become productive in cloud B. Tooling changes. Concepts shift. You have to retrain, re-hire, or rely on partners.

So, the question becomes: is lock-in really about technology or inertia (of an enterprise’s IT team)?

Data Gravity

Data gravity is one of the most underestimated forces in cloud architecture. Whether you are using proprietary services or open-source software. The idea is simple: as data accumulates, everything else like compute, analytics, machine learning, and governance, tends to move closer to it.

In practice, this means that once your data reaches a certain scale or sensitivity, it becomes extremely hard to move, regardless of whether it is stored in a proprietary cloud database or an open-source solution like PostgreSQL or Kafka.

With proprietary platforms, the pain comes from API compatibility, licensing, and high egress costs. With open-source tools, it is about operational entanglement: complex clusters, replication lag, security hardening, and integration sprawl.

Either way, once data settles, it anchors your architecture, creating a gravitational pull that resists even the most well-intentioned portability efforts.

The Cost of Chasing Portability

Portability is often presented as a best practice. But there is a hidden cost.

To build truly portable applications, you need to avoid proprietary features, abstract your infrastructure, and write for the lowest common denominator. That often means giving up performance, integration, and velocity. You are paying an “insurance premium” for a theoretical future event, like a cloud exit or vendor failure, that may never come.

Worse, in some cases, over-engineering for portability can slow down innovation. Developers spend more time writing glue code or dealing with platform abstraction layers than delivering business value.

If the business needs speed and differentiation, this trade-off rarely holds up.

So… What Should We Do?

Here is the hard truth: lock-in is not the problem. Lack of intention is.

Lock-in is unavoidable. Whether it is a cloud provider, a platform, a SaaS tool, or even an open-source ecosystem. You are always choosing dependencies. What matters is knowing what you are committing to, why you are doing it, and what the exit cost will be. That is where most enterprises fail.

And let us be honest for a moment. A lot of enterprises call it lock-in because their past strategic decision doesn’t feel right anymore. And then they blame their “strategic” partner.

The better strategy? Accept lock-in, but make it intentional. Know your critical workloads. Understand where your data lives. Identify which apps are migration-ready and which ones never will be. And start building the muscle of exit-readiness. Not for all 1,000 apps, but for the ones that matter most.
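To sketch what that exit-readiness muscle could look like in practice, here is a simplified, hypothetical scoring of a few apps. The factors, weights, and app data are illustrative assumptions, not a standard methodology.

```python
# Illustrative assumptions only: factors, weights, and apps are made up.
WEIGHTS = {
    "containerized": 3,           # already packaged and loosely coupled
    "iac_managed": 2,             # infrastructure defined as code
    "uses_proprietary_paas": -4,  # tightly bound to one provider's services
    "per_10_tb_of_data": -1,      # rough proxy for data gravity
}

apps = [
    {"name": "customer-portal", "containerized": True,  "iac_managed": True,  "uses_proprietary_paas": False, "data_tb": 2},
    {"name": "billing-legacy",  "containerized": False, "iac_managed": False, "uses_proprietary_paas": True,  "data_tb": 40},
]

def exit_readiness(app):
    score = 0
    if app["containerized"]:
        score += WEIGHTS["containerized"]
    if app["iac_managed"]:
        score += WEIGHTS["iac_managed"]
    if app["uses_proprietary_paas"]:
        score += WEIGHTS["uses_proprietary_paas"]
    score += WEIGHTS["per_10_tb_of_data"] * min(app["data_tb"] // 10, 5)
    return score

for app in sorted(apps, key=exit_readiness, reverse=True):
    print(f"{app['name']}: exit-readiness score {exit_readiness(app)}")
```

Even a rough score like this forces the right conversation: which apps are worth keeping migration-ready, and which ones simply are not.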

True portability isn’t binary. And in most large enterprises, it only applies to the top 10–20% of apps that are already modernized, loosely coupled, and containerized. The rest? They are staying where they are until there is a budget, a compliance event, or a crisis.

Avoiding U.S. Public Clouds And The Illusion of Independence

While independence from the U.S. hyperscalers and the potential risks associated with the CLOUD Act may seem like a compelling reason to adopt open-source solutions, it is not always the silver bullet it appears to be. The idea is appealing: running your infrastructure on open-source tools in order to avoid being dependent on any single cloud provider, especially those based in the U.S., whose data may be subject to foreign government access under the CLOUD Act.

However, this approach introduces its own set of challenges.

First, by attempting to cut ties with US providers, organizations often overlook the global nature of the cloud. Most open-source tools still rely on cloud providers for deployment, support, and scalability. Even if you host your open-source infrastructure on non-U.S. clouds, the reality is that many key components of your stack, like databases, messaging systems, or AI tools, may still be indirectly influenced by U.S.-based tech giants.

Second, operational complexity increases as you move away from managed services, requiring more internal resources to manage security, compliance, and performance. Rather than providing true sovereignty, the focus on avoiding U.S. hyperscalers may result in an unintended shift of lock-in from the provider to the infrastructure itself, where the trade-off is a higher cost in complexity and operational overhead.

Top Contributors To Key Open-Source Projects

U.S. public cloud providers like Google, Amazon, Microsoft, Oracle and others are not just spectators in this space. They’re driving the innovation and development of key projects:

  1. Kubernetes remains the flagship project of the CNCF, offering a robust container orchestration platform that has become essential for cloud-native architectures. The project has been significantly influenced by a variety of contributors, with Google being the original creator.
  2. Prometheus, the popular monitoring and alerting toolkit, was created by SoundCloud and is now widely adopted in cloud-native environments. The project has received significant contributions from major players, including Google, Amazon, Facebook, IBM, Lyft, and Apple. 
  3. Envoy, a high-performance proxy and communication bus for microservices, was developed by Lyft, with broad support from Google, Amazon, VMware, and Salesforce.
  4. Helm is the Kubernetes package manager, designed to simplify the deployment and management of applications on Kubernetes. It has a strong community with contributions from Microsoft (via Deis, which they acquired), Google, and other cloud providers.
  5. OpenTelemetry provides a unified standard for distributed tracing and observability, ensuring applications are traceable across multiple systems. The project has seen extensive contributions from Google, Microsoft, Amazon, Red Hat, and Cisco, among others. 

While these projects are open-source and governed by the CNCF (Cloud Native Computing Foundation), the influence of these tech companies cannot be overstated. They not only provide the tools and resources necessary to drive innovation but also ensure that the technologies powering modern cloud infrastructures remain at the cutting edge of industry standards.

Final Thoughts

Portability has become the rallying cry of modern cloud architecture. But real-world enterprises aren’t moving between clouds every year. They are digging deeper into ecosystems, relying more on managed services, and optimizing for speed.

So maybe the conversation shouldn’t be about avoiding lock-in but about managing it. Perhaps more about understanding it. And, above all, owning it. The problem isn’t lock-in itself. The problem is treating lock-in like a disease, rather than what it really is: an architectural and strategic trade-off.

This is where architects and technology leaders have a critical role to play. Not in pretending we can design our way out of lock-in, but in navigating it intentionally. That means knowing where you can afford to be tightly coupled, where you should invest in optionality, and where it is simply not worth the effort to abstract away.