Can a Unified Multi-Cloud Inventory Transform Cloud Management?


When we spread our workloads across clouds like Oracle Cloud, AWS, Azure, Google Cloud, maybe even IBM, or smaller niche players, we knowingly accept complexity. Each cloud speaks its own language, offers its own services, and maintains its own console. What if there were a central place where we could see everything: every resource, every relationship, across every cloud? A place that lets us truly understand how our distributed architecture lives and breathes?

I find myself wondering if we could one day explore a tool or approach that functions as a multi-cloud inventory, keeping track of every VM, container, database, and permission – regardless of the platform. Not because it’s a must-have today, but because the idea sparks curiosity: what would it mean for cloud governance, cost transparency, and risk reduction if we had this true single pane of glass?

Who feels triggered now because I said “single pane of glass”? 😀 Let’s move on!

Could a Multi-Cloud Command Center Change How We Visualize Our Environment?

Let’s imagine it: a clean interface, showing not just lists of resources, but the relationships between them. Network flows across cloud boundaries. Shared secrets between apps on “cloud A” and databases on “cloud B”. Authentication tokens moving between clouds.

What excites me here isn’t the dashboard itself, but the possibility of visualizing the hidden links across clouds. Instead of troubleshooting blindly, or juggling a dozen consoles, we could zoom out for a bird’s-eye view. Seeing in one place how data and services crisscross providers.

Multi-Cloud Insights

I don’t know if we’ll get there anytime soon (or if such a solution already exists) but exploring the idea of a unified multi-cloud visualization tool feels like an adventure worth considering.

Multi-Cloud Search and Insights

When something breaks, when we are chasing a misconfiguration, or when we want to understand where we might be exposed, it often starts with a question: Where is this resource? Where is that permission left open?

What if we could type that question once and get instant answers across clouds? A global search bar that could return every unencrypted public bucket or every server with a certain tag, no matter which provider it’s on.
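
To make this concrete, here is a minimal sketch of what such a cross-cloud query could look like under the hood, using the AWS and OCI Python SDKs directly. It is an illustration only, not a product: the compartment OCID is a placeholder, and "no public access block configured" is used as a rough stand-in for "potentially public".

```python
# Minimal sketch of a cross-cloud "find potentially public buckets" query.
# Assumes boto3 and the oci SDK are installed and configured locally.
import boto3
import oci
from botocore.exceptions import ClientError


def public_aws_buckets():
    """Yield S3 buckets without a public access block (a rough proxy for 'public')."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.get_public_access_block(Bucket=bucket["Name"])
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                yield f"aws:s3:{bucket['Name']}"


def public_oci_buckets(compartment_id):
    """Yield OCI Object Storage buckets whose public_access_type allows reads."""
    config = oci.config.from_file()
    client = oci.object_storage.ObjectStorageClient(config)
    namespace = client.get_namespace().data
    for summary in client.list_buckets(namespace, compartment_id).data:
        bucket = client.get_bucket(namespace, summary.name).data
        if bucket.public_access_type != "NoPublicAccess":
            yield f"oci:objectstorage:{bucket.name}"


if __name__ == "__main__":
    compartment = "ocid1.compartment.oc1..example"  # placeholder OCID
    for resource in list(public_aws_buckets()) + list(public_oci_buckets(compartment)):
        print(resource)
```

The interesting part is not the ten lines per provider; it is that a real inventory would have to normalize hundreds of such checks into one data model and keep them fresh.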

Multi-Cloud Graph Query

Wouldn’t it be interesting if that search also showed contextual information: connected resources, compliance violations, or cost impact? It’s a thought I keep returning to because the journey toward proactive multi-cloud operations might start with simple, unified answers.
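
And to illustrate the contextual part, here is a tiny, hypothetical sketch that uses networkx as a stand-in for a real multi-cloud resource graph. All resource names are invented; the point is only that questions like "what talks to this database?" become simple graph lookups once the relationships live in one place.

```python
# Hypothetical multi-cloud resource graph; names and relations are made up.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("aws:lambda:checkout", "oci:adb:orders", relation="reads_from")
graph.add_edge("azure:vm:reporting", "oci:adb:orders", relation="reads_from")
graph.add_edge("oci:adb:orders", "oci:bucket:exports", relation="writes_to")
graph.nodes["oci:bucket:exports"]["public"] = True

# "What talks to this database?" is a one-line predecessor lookup.
print(list(graph.predecessors("oci:adb:orders")))

# Context: anything downstream of the database that is publicly exposed.
exposed = [node for node in nx.descendants(graph, "oci:adb:orders")
           if graph.nodes[node].get("public")]
print(exposed)
```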

Could a True Multi-Cloud App Require This Kind of Unified Lens?

Some teams are already building apps that stretch across clouds: an API front-end in one provider, authentication in another, ML workloads on specialized platforms, and data lakes somewhere else entirely. These aren’t cloud-agnostic apps; they are “cloud-diverse” apps, purpose-built to exploit best-of-breed services from different providers.

That makes me wonder: if an app inherently depends on multiple clouds, doesn’t it deserve a control plane that’s just as distributed? Something that understands the unique role each cloud plays, and how they interact, in one coherent operational picture?

I don’t have a clear answer, but I can’t help thinking about how multi-cloud-native apps might need true multi-cloud-native management.

VMware Aria Hub and Graph – Was It a Glimpse of the Future?

Not so long ago, VMware introduced Aria Hub and Aria Graph with an ambitious promise: a single place to collect and normalize resource data from all major clouds, connect it into a unified graph, and give operators a true multi-cloud inventory and control plane. It was one of the first serious attempts to address the challenge of understanding relationships between cloud resources spread across different providers.

VMware Aria Hub Dashboard

The idea resonated with anyone who has struggled to map sprawling cloud estates or enforce consistent governance policies in a multi-cloud world. A central graph of every resource, dependency, and configuration sounded like a game-changer. Not only for visualization, but also for powerful queries, security insights, and cost management.

But when Broadcom acquired VMware, they shifted focus away from VMware’s SaaS portfolio. Many SaaS-based offerings were sunset or sidelined, including Aria Hub and Aria Graph, effectively burying the vision of a unified multi-cloud inventory platform along with them.

I still wonder: did VMware Aria Hub and Graph show us a glimpse of what multi-cloud operations could look like if we dared to standardize resource relationships across clouds? Or did it simply arrive before its time, in an industry not yet ready to embrace such a radical approach?

Either way, it makes me even more curious about whether we might one day revisit this idea and how much value a unified resource graph could unlock in a world where multi-cloud complexity continues to grow.

Final Thoughts

I don’t think there’s a definitive answer yet to whether we need a unified multi-cloud inventory or command center today. Some organizations already have mature processes and tooling that work well enough, even if they are built on scripts, spreadsheets, or point solutions glued together. But as multi-cloud strategies evolve, and as more teams start building apps that intentionally spread across multiple providers, I find myself increasingly curious about whether we will see renewed demand for a shared data model of our entire cloud footprint.

Because with each new cloud we adopt, complexity grows exponentially. Our assets scatter, our identities and permissions multiply, and our ability to keep track of everything by memory or siloed dashboards fades. Even something simple, like understanding “what resources talk to this database?” becomes a detective story across clouds.

A solution that offers unified visibility, context, and even policy controls feels almost inevitable if multi-cloud architectures continue to accelerate. And yet, I’m also aware of how hard this problem is to solve. Each cloud provider evolves quickly, their APIs change, and mapping their semantics into a single, consistent model is an enormous challenge.

That’s why, for now, I see this more as a hypothesis. An idea to keep exploring rather than a clear requirement. I’m fascinated by the thought of what a central multi-cloud “graph” could unlock: faster investigations, smarter automation, tighter security, and perhaps a simpler way to make sense of our expanding environments.

Whether we build it ourselves, wait for a vendor to try again, or discover a new way to approach the problem, I’m eager to see how the industry experiments with this space in the years ahead. Because in the end, the more curious we stay, the better prepared we’ll be when the time comes to simplify the complexity we’ve created.

5 Strategic Paths from VMware to Oracle Cloud Infrastructure (OCI)


We all know that the future of existing VMware customers has become more complicated and less certain. Many enterprises are reevaluating their reliance on VMware as their core infrastructure stack. So, where to go next?

For enterprises already invested in Oracle technology, or simply those looking for a credible, flexible, and enterprise-grade alternative, Oracle Cloud Infrastructure (OCI) offers a comprehensive set of paths forward. Whether you want to modernize, rehost, or run hybrid workloads, OCI doesn’t force you to pick a single direction. Instead, it gives you a range of options: from going cloud-native, to running your existing VMware stack unchanged, to building your own sovereign cloud footprint.

Here are five realistic strategies for VMware customers considering OCI and how to migrate from VMware to Oracle Cloud Infrastructure. It doesn’t need to be an either-or decision; it can also be an “and” approach.

1. Cloud-Native with OCI – Start Fresh, Leave VMware Behind

For organizations ready to move beyond traditional infrastructure altogether, the cloud-native route is the cleanest break you can make. This is where you don’t just move workloads; you rearchitect them. You replace VMs with containers where possible, and perhaps lift and shift some of the existing workloads. You replace legacy service dependencies with managed cloud services. And most importantly, you replace static, manually operated environments with API-driven infrastructure.

OCI supports this approach with a robust portfolio: you get Compute instances that scale on demand, Oracle Kubernetes Engine (OKE) for container orchestration, OCI Functions for serverless workloads, and Autonomous Database for data platforms that patch and tune themselves. The tooling is modern, open, and mature – Terraform, Ansible, and native SDKs are all available and well-documented.
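
As a small taste of that tooling, here is a minimal sketch with the OCI Python SDK that lists the compute instances in one compartment. It assumes a configured ~/.oci/config file; the compartment OCID is a placeholder.

```python
# Minimal sketch: list OCI compute instances in one compartment.
# Assumes ~/.oci/config exists; the compartment OCID below is a placeholder.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder
for instance in compute.list_instances(compartment_id=compartment_id).data:
    print(instance.display_name, instance.lifecycle_state, instance.shape)
```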

This isn’t a quick VMware replacement. It requires a DevOps mindset, application refactoring, and an investment in automation and CI/CD. It is not something you do in a weekend. But it’s the only path that truly lets you leave the baggage behind and design infrastructure the way it should work in 2025.

2. OCVS – Run VMware As-Is, Without the Hardware

If cloud-native is the clean break, then Oracle Cloud VMware Solution (OCVS) is the strategic pause. This is the lift-and-shift strategy for enterprises that need continuity now, but don’t want to double down on on-prem investment.

With OCVS, you’re not running a fully managed service (unlike the VMware offerings on AWS, Azure, and Google Cloud). You get the full vSphere, vSAN, NSX, and vCenter stack deployed on Oracle bare-metal infrastructure in your own OCI tenancy. You’re the admin. You manage the lifecycle. You patch and control access. But you don’t have to worry about hardware procurement, power and cooling, or supply chain delays. And you can integrate natively with OCI services: backup to OCI Object Storage, peer with Exadata, and extend IAM policies across the board.

Oracle Cloud VMware Solution

The migration is straightforward. You can replicate your existing environment (with HCX), run staging workloads side-by-side, and move VMs with minimal friction. You keep your operational model, your monitoring stack, and your tools. The difference is, you get out of your data center contract and stop burning time and money on hardware lifecycle management.

This isn’t about modernizing right now. It’s about escaping VMware hardware and licensing lock-in without losing operational control.

3. Hybrid with OCVS, Compute Cloud@Customer, and Exadata Cloud@Customer

Now we’re getting into enterprise-grade architecture. This is the model where OCI becomes a platform, not just a destination. If you’re in a regulated industry and you can’t run everything in the public cloud, but you still want the same elasticity, automation, and control, this hybrid model makes a lot of sense.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Here’s how it works: you run OCVS in the OCI public region for DR, or workloads that have to stay on vSphere. But instead of moving everything to the cloud, you deploy Compute Cloud@Customer (C3) and Exadata Cloud@Customer (ExaCC) on-prem. That gives you a private cloud footprint with the same APIs and a subset of OCI IaaS/PaaS services but physically located in your own facility, behind your firewall, under your compliance regime.

You manage workloads on C3 using the exact same SDKs, CLI tools, and Terraform modules as the public cloud. You can replicate between on-prem and cloud, burst when needed, or migrate in stages. And with ExaCC running in the same data center, your Oracle databases benefit from the same SLA and performance guarantees, with none of the data residency headaches.
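
To illustrate what “the exact same SDKs” means in practice, here is a hypothetical sketch: the same list_instances call from the cloud-native section, issued once against the public region and once against a Compute Cloud@Customer endpoint. The C3 endpoint URL and the compartment OCID are placeholders that would come from your own installation.

```python
# Hypothetical sketch: one SDK, two control planes (public OCI region and C3 on-prem).
# The C3 service endpoint and compartment OCID are placeholders.
import oci

config = oci.config.from_file()
compartment_id = "ocid1.compartment.oc1..example"  # placeholder

clients = {
    "public-region": oci.core.ComputeClient(config),
    "c3-onprem": oci.core.ComputeClient(
        config,
        service_endpoint="https://iaas.c3.example.internal",  # placeholder C3 endpoint
    ),
}

for label, compute in clients.items():
    for instance in compute.list_instances(compartment_id=compartment_id).data:
        print(label, instance.display_name, instance.lifecycle_state)
```

The same idea applies to the CLI and Terraform: you point the tooling at a different endpoint, while the workflows, policies, and modules stay the same.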

This model is ideal if you’re trying to modernize without breaking compliance. It keeps you in control, avoids migration pain, and still gives you access to the full OCI ecosystem when and where you need it.

4. OCI Dedicated Region – A Public Cloud That Lives On-Prem

When public cloud is not an option, OCI Dedicated Region becomes the answer.

This isn’t a rack. It is an entire cloud region. You get all OCI services like compute, storage, OCVS, OKE, Autonomous DB, identity, even SaaS, deployed inside your own facility. You retain data sovereignty and you control physical access. You also enforce local compliance rules and operate everything with the same OCI tooling and automation used in Oracle’s own hyperscale regions.

“Oracle Dedicated Region 25” announced: your own Oracle Cloud region in three server racks (source: Publickey)

What makes Dedicated Region different from C3 is the scale and service parity. While C3 delivers core IaaS and some PaaS capabilities, Dedicated Region is literally the full stack. You can run OCVS in there, connect it to your enterprise apps, and have a fully isolated VMware environment that never leaves your perimeter.

For VMware customers, it means you don’t have to choose between control and modernization. You get both.

5. Oracle Alloy – Cloud Infrastructure for Telcos and VMware Service Providers

If you’re a VMware Cloud Director customer or a telco/provider building cloud services for others, then Oracle just handed you an entirely new business model. Oracle Alloy allows you to offer your own cloud under your brand, with your pricing, and your operational control based on the same OCI technology stack Oracle runs themselves.

This is not just reselling; it is operating your own OCI cloud.

Becoming an Oracle Alloy partner (diagram)

As a VMware-based cloud provider, Alloy gives you a path to modernize your platform and expand your services without abandoning your customer base. You can run your own VMware environment (OCVS), offer cloud-native services (OKE, DBaaS, Identity, Monitoring), and transition your customers at your own pace. All of it on a single platform, under your governance.

What makes Alloy compelling is that it doesn’t force you to pick between VMware and OCI, it lets you host both side by side. You keep your high-value B2B workloads and add modern, cloud-native services that attract new tenants or internal business units.

For providers caught in the middle of the VMware licensing storm, Alloy might be the most strategic long-term play available right now.


Open-Source Can Help With Portability And Lock-In But It Is Not A Silver Bullet


We have spent years chasing cloud portability and warning against vendor lock-in. And yet, every enterprise I have worked with is more locked in today than ever. Not because they failed to use open-source software (OSS). Not because they made bad decisions, but because real-world architecture, scale, and business momentum don’t care about ideals. They care about outcomes.

The public cloud promised freedom. APIs, managed services, and agility. Open-source added hope. Kubernetes, Terraform, Postgres. Tools that could, in theory, run anywhere. And so we bought into the idea that we were building “portable” infrastructure. That one day, if pricing changed or strategy shifted, we could pack up our workloads and move. But now, many enterprises are finding out the truth:

Portability is not a feature. It is a myth, and for most large organizations it is a unicorn: appealing in theory, but elusive in reality.

Let me explain. But before I do, let’s talk about interclouds again.

Remember Interclouds?

Interclouds, once hyped as the answer to cloud portability (and lock-in), promised a seamless way to abstract infrastructure across providers, enabling workloads to move freely between clouds. In theory, they would shield enterprises from vendor dependency by creating a uniform control plane and protocols across AWS, Azure, GCP, OCI and beyond.

David Bernstein Intercloud

Note: an idea that was already being discussed back in 2012. It is 2025, and not much has happened since then.

But in practice, intercloud platforms failed to solve the lock-in problem because they only masked it rather than removing it. Beneath the abstraction layer, each provider still has its own APIs, services, network behaviors, and operational peculiarities.

Enterprises quickly discovered that you can’t abstract your way out of data gravity, compliance policies, or deeply integrated PaaS services. Instead of enabling true portability, interclouds just delayed the inevitable realization: you still have to commit somewhere.

The Trigger Nobody Plans For

Imagine you are running a global enterprise with 500 or 1’000 applications. They span two public clouds. Some are modern, containerized, and well-defined in Terraform. Others are legacy and fragile, lifted and shifted years ago in a hurry. A few run in third-party SaaS platforms.

Then the call comes: “We need to exit one of our clouds. Legal, compliance, pricing. Doesn’t matter why. It has to go.”

Suddenly, that portability you thought you had? It is smoke. The Kubernetes clusters are portable in theory, but the CI/CD tooling, monitoring stack, and security policies are not. Dozens of apps use PaaS services tightly coupled to their original cloud. Even the apps that run in containers still need to be re-integrated, re-tested, and re-certified in the new environment.

This isn’t theoretical. I have seen it firsthand. The dream of being “cloud neutral” dies the moment you try to move production workloads – at scale, with real dependencies, under real deadlines.

Open-Source – Freedom with Strings Attached

It is tempting to think that open-source will save you. After all, it is portable, right? It is not tied to any vendor. You can run it anywhere. And that is true on paper.

But the moment you run it in production, at enterprise scale, a new reality sets in. You need observability, governance, upgrades, SLAs. You start relying on managed services for these open-source tools. Or you run them yourself, and now your internal teams are on the hook for uptime, performance, and patching.

You have simply traded one form of lock-in for another: the operational lock-in of owning complexity.

So yes, open-source gives you options. But it doesn’t remove friction. It shifts it.

The Other Lock-Ins No One Talks About

When we talk about “avoiding lock-in”, we usually mean avoiding proprietary APIs or data formats. But in practice, most enterprises are locked in through completely different vectors:

Data gravity makes it painful to move large volumes of information, especially when compliance and residency rules come into play. The real issue is the latency, synchronization, and duplication challenges that come with moving data between clouds.

Tooling ecosystems create invisible glue. Your CI/CD pipelines, security policies, alerting, cost management. These are all tightly coupled to your cloud environment. Even if the core app is portable, rebuilding the ecosystem around it is expensive and time-consuming.

Skills and culture are rarely discussed, but they are often the biggest blockers. A team trained to build in cloud A doesn’t instantly become productive in cloud B. Tooling changes. Concepts shift. You have to retrain, re-hire, or rely on partners.

So, the question becomes: is lock-in really about technology or inertia (of an enterprise’s IT team)?

Data Gravity

Data gravity is one of the most underestimated forces in cloud architecture, whether you are using proprietary services or open-source software. The idea is simple: as data accumulates, everything else, like compute, analytics, machine learning, and governance, tends to move closer to it.

In practice, this means that once your data reaches a certain scale or sensitivity, it becomes extremely hard to move, regardless of whether it is stored in a proprietary cloud database or an open-source solution like PostgreSQL or Kafka.

With proprietary platforms, the pain comes from API compatibility, licensing, and high egress costs. With open-source tools, it is about operational entanglement: complex clusters, replication lag, security hardening, and integration sprawl.

Either way, once data settles, it anchors your architecture, creating a gravitational pull that resists even the most well-intentioned portability efforts.

The Cost of Chasing Portability

Portability is often presented as a best practice. But there is a hidden cost.

To build truly portable applications, you need to avoid proprietary features, abstract your infrastructure, and write for the lowest common denominator. That often means giving up performance, integration, and velocity. You are paying an “insurance premium” for a theoretical future event, such as a cloud exit or vendor failure, that may never come.

Worse, in some cases, over-engineering for portability can slow down innovation. Developers spend more time writing glue code or dealing with platform abstraction layers than delivering business value.

If the business needs speed and differentiation, this trade-off rarely holds up.

So… What Should We Do?

Here is the hard truth: lock-in is not the problem. Lack of intention is.

Lock-in is unavoidable, whether it is a cloud provider, a platform, a SaaS tool, or even an open-source ecosystem. You are always choosing dependencies. What matters is knowing what you are committing to, why you are doing it, and what the exit cost will be. That is where most enterprises fail.

And let us be honest for a moment. A lot of enterprises call it lock-in because their past strategic decision doesn’t feel right anymore. And then they blame their “strategic” partner.

The better strategy? Accept lock-in, but make it intentional. Know your critical workloads. Understand where your data lives. Identify which apps are migration-ready and which ones never will be. And start building the muscle of exit-readiness. Not for all 1’000 apps, but for the ones that matter most.

True portability isn’t binary. And in most large enterprises, it only applies to the top 10–20% of apps that are already modernized, loosely coupled, and containerized. The rest? They are staying where they are until there is a budget, a compliance event, or a crisis.

Avoiding U.S. Public Clouds And The Illusion of Independence

While independence from the U.S. hyperscalers and the potential risks associated with the CLOUD Act may seem like a compelling reason to adopt open-source solutions, it is not always the silver bullet it appears to be. The idea is appealing: running your infrastructure on open-source tools in order to avoid being dependent on any single cloud provider, especially those based in the U.S., whose data may be subject to foreign government access under the CLOUD Act.

However, this approach introduces its own set of challenges.

First, by attempting to cut ties with U.S. providers, organizations often overlook the global nature of the cloud. Most open-source tools still rely on cloud providers for deployment, support, and scalability. Even if you host your open-source infrastructure on non-U.S. clouds, the reality is that many key components of your stack, like databases, messaging systems, or AI tools, may still be indirectly influenced by U.S.-based tech giants.

Second, operational complexity increases as you move away from managed services, requiring more internal resources to manage security, compliance, and performance. Rather than providing true sovereignty, the focus on avoiding U.S. hyperscalers may result in an unintended shift of lock-in from the provider to the infrastructure itself, where the trade-off is a higher cost in complexity and operational overhead.

Top Contributors To Key Open-Source Projects

U.S. public cloud providers like Google, Amazon, Microsoft, Oracle and others are not just spectators in this space. They’re driving the innovation and development of key projects:

  1. Kubernetes remains the flagship project of the CNCF, offering a robust container orchestration platform that has become essential for cloud-native architectures. The project has been significantly influenced by a variety of contributors, with Google being the original creator.
  2. Prometheus, the popular monitoring and alerting toolkit, was created by SoundCloud and is now widely adopted in cloud-native environments. The project has received significant contributions from major players, including Google, Amazon, Facebook, IBM, Lyft, and Apple. 
  3. Envoy, a high-performance proxy and communication bus for microservices, was developed by Lyft, with broad support from Google, Amazon, VMware, and Salesforce.
  4. Helm is the Kubernetes package manager, designed to simplify the deployment and management of applications on Kubernetes. It has a strong community with contributions from Microsoft (via Deis, which they acquired), Google, and other cloud providers.
  5. OpenTelemetry provides a unified standard for distributed tracing and observability, ensuring applications are traceable across multiple systems. The project has seen extensive contributions from Google, Microsoft, Amazon, Red Hat, and Cisco, among others. 

While these projects are open-source and governed by the CNCF (Cloud Native Computing Foundation), the influence of these tech companies cannot be overstated. They not only provide the tools and resources necessary to drive innovation but also ensure that the technologies powering modern cloud infrastructures remain at the cutting edge of industry standards.

Final Thoughts

Portability has become the rallying cry of modern cloud architecture. Yet real-world enterprises aren’t moving between clouds every year. They are digging deeper into ecosystems, relying more on managed services, and optimizing for speed.

So maybe the conversation shouldn’t be about avoiding lock-in but about managing it. Perhaps more about understanding it. And, above all, owning it. The problem isn’t lock-in itself. The problem is treating lock-in like a disease, rather than what it really is: an architectural and strategic trade-off.

This is where architects and technology leaders have a critical role to play. Not in pretending we can design our way out of lock-in, but in navigating it intentionally. That means knowing where you can afford to be tightly coupled, where you should invest in optionality, and where it is simply not worth the effort to abstract away.

From Monolithic Data Centers to Modern Private Clouds


Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions, it is trust in the executive vision, and the willingness to break from the past.

Monolithic data centers stall innovation

But here is the problem: monoliths do not scale in a modern world (or cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers with the monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us make this clear: hypervisors are not platforms! They are just silos and one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it, they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools, it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by replacing one virtualization solution with another vendor. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. A modern infrastructure is about building a cloud that does not need micromanagement.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust as well:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. With the current geopolitical situation in the world, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation doesn’t work that way. You can’t buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and it is not about perfection. A system is only “perfect” until you need to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency is the first ingredient of trust. You can’t fix what you won’t name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months
  • Reduce friction for both IT and internal users
  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example:

  • Implement policy-as-code: rules embedded into the platform, not buried in documents (see the sketch after this list)
  • Automate security scans, RBAC, and drift detection
  • Give teams safe, constrained freedom instead of needing to ask for access
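
To show what “rules embedded into the platform” can mean, here is a deliberately simple, hypothetical policy-as-code sketch in plain Python. In practice you would use a policy engine such as OPA or the native cloud policy services; the resources and rules below are invented.

```python
# Hypothetical policy-as-code sketch: rules are code, violations are data.
from dataclasses import dataclass


@dataclass
class Resource:
    name: str
    tags: dict
    public: bool


# Each policy is a named predicate; adding a rule means adding a line, not a meeting.
POLICIES = [
    ("owner tag required", lambda r: "owner" in r.tags),
    ("no public exposure", lambda r: not r.public),
]


def evaluate(resources):
    """Return (resource, rule) pairs for every violated policy."""
    return [(r.name, rule_name)
            for r in resources
            for rule_name, rule in POLICIES
            if not rule(r)]


# Example run with made-up resources.
inventory = [
    Resource("vm-reporting", {"owner": "finance"}, public=False),
    Resource("bucket-exports", {}, public=True),
]
for name, rule in evaluate(inventory):
    print(f"{name}: violates '{rule}'")
```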

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).

And lastly, make trust measurable. Not just with uptime numbers or dashboards but with real signals:

  • Are teams delivering faster?
  • Are incidents down?
  • etc.

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more. 

Why OCI Dedicated Region and Oracle Cloud VMware Solution are a Winning Combination


In this article, we will explore what makes OCI Dedicated Region and Oracle Cloud VMware Solution (OCVS) a unique and powerful combination, cover their core features, explain how they address key IT challenges, and show why CIOs should consider this pairing as a strategic investment for a future-proof IT environment.

What is OCI Dedicated Region?

OCI Dedicated Region is Oracle’s fully managed public cloud region that is deployed directly in a customer’s data center. It provides all of Oracle’s public cloud services (including Oracle Autonomous Database, and AI/ML capabilities) while meeting strict data residency, latency, and regulatory requirements. This allows organizations to enjoy the benefits of a public cloud while retaining physical control over data and infrastructure:

  • Data Residency and Compliance: By deploying cloud services in a customer’s data center, OCI Dedicated Region ensures data remains within the organization’s control, meeting data residency and compliance requirements critical in industries like finance, healthcare, and government.
  • Operational Consistency: Organizations get access to the same tools, APIs, and SLAs as Oracle’s public cloud, which ensures a consistent operational experience across on-premises and cloud environments.
  • Scalability and Flexibility: OCI Dedicated Region provides elastic scaling for workloads without the need for substantial capital expenditure on hardware. 
  • Cost-Effective: By consolidating on-premises and cloud infrastructure, OCI Dedicated Region reduces operational complexity and costs associated with data center management, disaster recovery, and infrastructure procurement.

What is Oracle Cloud VMware Solution?

For many enterprises, VMware is a cornerstone of their infrastructure, powering mission-critical applications and handling sensitive workloads. Migrating these workloads to the cloud has the potential to unlock new efficiencies, but it also brings challenges related to compatibility, risk, and cost.

Oracle Cloud VMware Solution (OCVS) is an answer to these challenges, enabling organizations to extend or migrate VMware environments to Oracle Cloud Infrastructure (OCI) without re-architecting applications:

  • Minimal Disruption: Since OCVS is a VMware-certified solution, applications continue running as they did on-premises, ensuring continuity.
  • Reduced Risk: By leveraging familiar VMware tools and processes, the learning curve is minimized, reducing operational risk.
  • Lower Migration Costs: Avoiding re-architecting means lower costs and faster time-to-value.
  • Enhanced Security: OCVS inherits OCI’s strong security posture, ensuring that data is safeguarded at every layer, from infrastructure to application.
  • Reduced Hardware Spending: Since OCVS runs on OCI, there’s no need to invest in new data center hardware.
  • Disaster Recovery: Allowing enterprises to establish OCI as a disaster recovery site, reducing capital costs on duplicate infrastructure.

The Synergy Between OCI Dedicated Region and OCVS

Using OCVS as part of an OCI Dedicated Region brings a unique set of advantages to private clouds. Together, they provide a solution that addresses the pressing demands for data sovereignty, cloud flexibility, and seamless application modernization.

OCI Dedicated Region and OCVS enable operational consistency across cloud and on-premises environments. Teams familiar with Oracle’s public cloud or VMware’s suite of tools can manage both environments with ease. This consistency allows CIOs to retain talent by providing a familiar technology landscape and reduces the need for retraining, thereby improving productivity.

Additionally, this combination allows the creation of a hybrid cloud architecture that seamlessly integrates on-premises infrastructure with cloud resources. OCI Dedicated Region provides a cloud environment within the customer’s data center, while OCVS allows existing VMware workloads to shift to this region without disruptions.

Conclusion

OCI Dedicated Region and Oracle Cloud VMware Solution together offer a powerful, flexible, and compliant infrastructure that empowers CIOs to meet the complex demands of modern enterprise IT. By combining the control of on-premises with the agility/flexibility of the cloud, this combined solution helps organizations achieve operational excellence, reduce risk, and accelerate digital transformation.

For decision-makers looking to strike a balance between legacy infrastructure and future-oriented cloud solutions, OCI Dedicated Region and OCVS represent a strategic investment that brings immediate and long-term value to the enterprise. This combination is not just about technology – it is about enabling business growth, operational resilience, and competitive advantage in a digital-first world.

Distributed Hybrid Infrastructure Offerings Are The New Multi-Cloud


Since VMware became part of Broadcom, there has been less focus and messaging on multi-cloud or supercloud architectures. Broadcom has drastically changed the available offerings, and VMware Cloud Foundation is becoming the new vSphere. Additionally, we have seen big changes regarding the partnerships with hyperscalers (the Azures and AWSes of this world) and the VMware Cloud partners and providers. So, what happened to multi-cloud, and how come nobody (at Broadcom) talks about it anymore?

What is going on?

I do not know if it’s only me, but I do not see the term “multi-cloud” that often anymore. Do you? My LinkedIn feed is full of news about artificial intelligence (AI) and how Nvidia employees got rich. So, I have to admit that I lost track of hybrid clouds, multi-clouds, or hybrid multi-cloud architectures. 

Cloud-Inspired and Cloud-Native Private Clouds

It seems to me that the initial idea of multi-cloud has changed in the meantime and that private clouds are becoming platforms with features. Let me explain.

Organizations have built monolithic private clouds in their data centers for a long time. In software engineering, the word “monolithic” describes an application whose components are tightly coupled and form one larger unit. To build data centers, we followed the same approach by combining different components like compute, storage, and networking. And over time, IT teams started to think about automation and security, and the integration of different solutions from different vendors.

The VMware messaging was always pointing in the right direction: They want to provide a cloud operating system for any hardware and any cloud (by using VMware Cloud Foundation). On top of that, build abstraction layers and leverage a unified control plane (aka consistent automation and operations).

And since 2020, I have told all my customers that they need to think like a cloud service provider, get rid of silos, implement new processes, and define a new operating model. That is VMware by Broadcom’s messaging today, and this is where they and other vendors are headed: a platform with features that provide cloud services.

In other words, and this is my opinion, VMware Cloud Foundation is today a platform with different components like vSphere, vSAN, NSX, Aria, and so on. Tomorrow, it is still called VMware Cloud Foundation, a platform that includes compute, storage, networking, automation, operations, and other features. No more separate product names, just capabilities and services like IaaS, CaaS, DRaaS, or DBaaS. You just choose the specs of the underlying hardware and networking, deploy your private clouds, and then start to build and consume your services.

Replace the name “VMware Cloud Foundation” in the last paragraph with AWS Outposts or Azure Stack. Do you see it now? Distributed unmanaged and managed hybrid cloud offerings with a (service) consumption interface on top.

That is the shift from monolithic data centers to cloud-native private clouds.

From Intercloud to Multi-Cloud

This is not the first time I have written about interclouds, a concept not many of us know about. In 2012, there was this idea that different clouds and vendors need to be interoperable and agree on certain standards and protocols. Think about interconnected private and public clouds, which allow you to provide VM mobility or application portability. Can you see the picture in front of you? What is the difference today in 2024?

In 2023, I truly believed that VMware had figured it out when they announced VMware Cloud on Equinix Metal (VMC-E). To me, VMC-E was different and special because of Equinix, which is capable of interconnecting different clouds and, at the same time, could provide a bare-metal-as-a-service (BMaaS) offering.

Workload Mobility and Application Portability

Almost 2 years ago, I started to write a book about this topic, because I wanted to figure out if workload mobility and application portability are things that enterprises are really looking for. I interviewed many CIOs, CTOs, chief architects and engineers around the globe, and it became VERY clear: it seems nobody was changing anything to make app portability a design requirement.

Almost all of the people I spoke to told me that a lot of things would have to happen to trigger a cloud exit, and therefore they see this as a nice-to-have capability that would help them move virtual machines or applications faster from one cloud to another.

VMware Workload Mobility

And almost all of them also told me that a lift-and-shift approach does not provide them any value.

But when I talked to developers and operations teams, the answers changed. Most of them did not know that a vendor could provide mobility or portability. Anyway, what has changed now?

Interconnected Multi-Clouds and Distributed Hybrid Clouds

As mentioned before, some vendors have realized that they need to deliver a unified and integrated programmable platform with a control plane. Ideally, this control plane can be used on-premises, as a SaaS solution, or both. And according to Gartner, these are the leaders in this area (Magic Quadrant for Distributed Hybrid Infrastructure):

Gartner Magic-Quadrant-for-Distributed-Hybrid-Infrastructure

In my opinion, VMware and Nutanix are providing a hybrid multi-cloud approach.

AWS and Microsoft are providing hybrid cloud solutions. In Microsoft’s case, we see Azure Stack HCI, Azure Kubernetes Service (AKS incl. Hybrid AKS) and Azure Arc extending Microsoft’s Azure services to on-premises data centers and edge locations.

The only vendor that currently offers true multi-cloud capabilities is Oracle. Oracle has Dedicated Region Cloud@Customer (DRCC) and Roving Edge, but also partnerships with Microsoft and Google that allow customers to host Oracle databases in Azure and Google Cloud data centers. Both partnerships come with a cross-cloud interconnection.

That is one of the big differences and changes for me at the moment. Multi-cloud has become less about mobility or portability, a single global control plane, or the same Kubernetes distribution in all the clouds, and more about bringing different services from different cloud providers closer together.

This is the image I created for the VMC-E blog. Replace the words “AWS” and “Equinix” with “Oracle” and suddenly you have something that was not there before: an interconnected multi-cloud.

What’s Next?

Based on the conversations with my customers, it does not feel like public cloud migrations are happening faster than in 2020 or 2022, and we still see between 70 and 80% of the workloads hosted on-premises. While we see customers who are interested in a cloud-first approach, we see many following a hybrid multi-cloud and/or multi-cloud approach. It is still about putting the right applications in the right cloud based on the right decisions. This has not changed.

But the narrative of such conversations has changed. We will see more conversations about data residency, privacy, security, gravity, proximity, and regulatory requirements. Then there are sovereign clouds.

Lastly, enterprises are going to deploy new platforms for AI-based workloads. But that could still take a while.

Final Thoughts

As enterprises continue to navigate the above-mentioned complexities, the need for flexible, scalable, and secure infrastructure solutions will only grow. There are a few compelling solutions that bridge the gap between traditional on-premises systems and modern cloud environments.

And since most enterprises are still hosting their workloads on-premises, they have to decide if they want to stretch the private cloud to the public cloud, or the other way around. Both options can co-exist, but combining them would make the environment too big and too complex. What’s your conclusion?