Why One Customer Chose PostgreSQL on Oracle Cloud Infrastructure and Still Saved on Oracle Costs

A few days ago, I had a really insightful conversation with a colleague who shared a customer story that perfectly captures a pattern showing up more and more across industries: organizations looking to modernize, reduce complexity, and take control of their database strategy without being locked into one model or one engine.

The story revolved around a customer who decided to move away from their on-premises Oracle databases. At first, it might sound like the classic “cloud migration” narrative everyone has heard before, but this one is different. They weren’t switching cloud providers, and they weren’t turning their back on Oracle either. In fact, they stayed with Oracle. Just in a different way.

They migrated a significant portion of their Oracle workloads to PostgreSQL, but they ran that PostgreSQL on Oracle Cloud Infrastructure (OCI). And even more interesting: they took advantage of Oracle Support Rewards to reduce their overall cloud spend by leveraging the Oracle support contracts they were already paying for on-premises.

It’s one of those use cases that highlights how important flexibility has become in modern enterprise IT. Customers want to modernize, but they also want the freedom to choose the right tools for each workload. With OCI continuing to evolve and expand, Oracle has positioned itself to support both traditional enterprise use cases and modern, open-source-first strategies.

This particular customer story is a strong example of that: moving from on-premises Oracle databases to PostgreSQL, but doing it in a way that continues to benefit from Oracle’s infrastructure, incentives, and long-term roadmap.

A Familiar Starting Point

The customer, a mid-sized company in the DACH region, was running several Oracle database instances on physical hardware in their own data center. The environment was stable, but due to ongoing sovereignty discussions in the market and in order to meet their CIO’s strategic directive to adopt more open-source technologies, the customer decided to move a subset of their Oracle databases to PostgreSQL.

This wasn’t about a full migration or a complete cloud exit. The approach was pragmatic. Only the databases that were suitable for PostgreSQL and could be decoupled from on-prem dependencies were selected for migration. Other Oracle workloads either remained on-premises or weren’t ideal candidates to move at this time. But the customer is also evaluating Oracle Exadata Cloud@Customer as a way to maintain a consistent cloud operating model while keeping certain workloads close to home.

What stood out was that the team didn’t move these PostgreSQL workloads to another cloud provider. They chose to run them on OCI, which allowed them to benefit from modern open-source technologies while remaining inside Oracle’s ecosystem.

PostgreSQL on OCI

PostgreSQL continues to grow in popularity among developers and architects, especially in cloud-native environments. Oracle recognized this demand and responded with OCI Database with PostgreSQL, a fully managed, enterprise-grade PostgreSQL service designed to meet the needs of both modern application development and mission-critical workloads.

In this case, the customer saw OCI PostgreSQL as a natural extension of their existing cloud investments. They were already using OCI for other workloads and appreciated the operational consistency across services. Running PostgreSQL within OCI meant they avoided the complexity and overhead of introducing another cloud provider, ensuring centralized governance, billing, and support across both Oracle and PostgreSQL workloads.

The service itself offers built-in high availability, automated backups, patching, monitoring, and tight integration with OCI’s broader security and identity services. These features gave the customer the reliability they needed without the overhead of managing infrastructure themselves.

Oracle Support Rewards – A Hidden Advantage

One of the most compelling parts of this story wasn’t just technical. It was financial.

Despite moving certain workloads away from Oracle database, the customer continued to benefit from Oracle’s ecosystem through the Oracle Support Rewards (OSR) program. This initiative allows organizations with existing Oracle on-premises support contracts to earn rewards when consuming OCI services.

With Oracle Support Rewards, you earn rewards as you consume Oracle Cloud Infrastructure services, and those rewards can be applied to the support contracts you hold for other eligible on-premises Oracle products.

For example, a customer can be an OCI customer and also hold an on-premises Oracle Database enterprise license with a support contract that is invoiced monthly. Their OCI usage earns rewards that they can apply to that Oracle Database support contract invoice. These invoices are paid in a separate Billing Center system outside of OCI. As a result, Oracle Support Rewards can help lower the overall support bill.

The mechanics are simple: for every dollar spent on OCI, you accrue $0.25 in Oracle Support Rewards. If you’re an Unlimited License Agreement (ULA) customer, you accrue $0.33 for every dollar spent.
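To make the math concrete, here’s a minimal sketch (plain Python, with illustrative spend figures of my own, not an official calculator) of how that accrual adds up:

```python
def support_rewards(monthly_oci_spend: float, ula_customer: bool = False) -> float:
    """Estimate monthly Oracle Support Rewards accrual.

    Rates per the program description above: $0.25 per OCI dollar,
    or $0.33 per dollar for ULA customers.
    """
    rate = 0.33 if ula_customer else 0.25
    return monthly_oci_spend * rate


# Illustrative example: $40,000/month OCI spend, no ULA.
monthly_spend = 40_000
rewards = support_rewards(monthly_spend)        # $10,000 in rewards
print(f"Accrued rewards: ${rewards:,.2f}")

# The accrued rewards are then applied against the on-premises
# support invoice, which is billed outside of OCI.
```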

That’s right: even though the workloads had moved to open-source PostgreSQL, the customer still used Oracle’s infrastructure and Oracle rewarded them for it.

The Bigger Picture

This migration wasn’t about replacing Oracle. It was about evolving the database landscape while staying on a trusted platform.

For the PostgreSQL use cases, the customer chose to modernize and go cloud-native. But for mission-critical Oracle workloads that require high performance and data residency, they are actively considering Oracle Exadata Cloud@Customer. This would allow them to extend OCI’s cloud operating model into their own data center – delivering the same automation, APIs, and support model as OCI, without compromising on location or control.

This hybrid approach ensures operational consistency across environments while supporting both open-source and enterprise-grade workloads.

OCI provided the customer with the flexibility to adopt open-source technologies, modernize legacy systems, and still benefit from Oracle’s performance, security, and enterprise infrastructure. And with OSR, Oracle helped reduce the cost of that transformation even when it involved PostgreSQL.

Final Thoughts

This use case was a great reminder of how cloud strategy isn’t always black and white. Sometimes the right answer isn’t “migrate everything to X”. It’s about giving customers choices while offering incentives that make those choices easier.

In this case, the customer successfully modernized part of their database environment, adopted open-source technology, and avoided a complex multi-cloud scenario, all while staying within Oracle’s ecosystem and leveraging its financial and operational benefits.

For any organization thinking about PostgreSQL, data sovereignty, or maintaining architectural flexibility with tools like Exadata Cloud@Customer, this is a story worth looking at more closely.

OCI Network Firewall Powered by Palo Alto Networks

Enterprises running workloads in Oracle Cloud Infrastructure (OCI) often look for deeper inspection, granular control, and consistent policy enforcement: capabilities that extend beyond basic security groups and route tables.

Historically, many teams implemented these capabilities by deploying virtual appliances or building complex service chains using native components. While functional, those approaches introduced operational complexity, created scaling challenges, and often led to inconsistent enforcement across environments.

To address this, Oracle introduced the OCI Network Firewall in May 2022: a fully managed, stateful, Layer 7-aware firewall service built into the OCI platform and powered by Palo Alto Networks’ virtual firewall technology. Unlike virtual appliance deployments, this service is provisioned and managed like any other OCI resource, offering deep packet inspection and policy enforcement without requiring the management of underlying infrastructure.

How the OCI Network Firewall Works

The OCI Network Firewall is deployed as a centralized, stateful firewall instance inside a Virtual Cloud Network (VCN). It operates as an inline traffic inspection point that you can insert into the flow of traffic between subnets, between VCNs, or between a VCN and the public internet. Instead of managing a virtual machine with a firewall image, you provision the firewall through the OCI console or Terraform, like any other managed service.

Under the hood, the firewall is powered by the Palo Alto Networks VM-Series engine. This enables deep packet inspection and supports policies based on application identity, user context, and known threat signatures – not just on IP addresses and ports. Once provisioned, the firewall is inserted into traffic paths using routing rules. You define which traffic flows should be inspected by updating route tables, either in subnets or at the Dynamic Routing Gateway (DRG) level. OCI automatically forwards the specified traffic to the firewall, inspects it based on your defined policies, and then forwards it to its destination.
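To illustrate the idea (a conceptual sketch in plain Python, not actual OCI route table syntax or APIs), insertion really is just a routing decision: whichever rule matches a packet’s destination with the longest prefix determines whether traffic is steered through the firewall endpoint before continuing on.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class RouteRule:
    destination: str   # CIDR the rule matches, e.g. "0.0.0.0/0"
    next_hop: str      # e.g. "local", "firewall-endpoint", "drg"

# Hypothetical spoke-subnet route table: all outbound traffic is steered
# through the firewall endpoint before leaving the VCN.
route_table = [
    RouteRule("10.0.0.0/16", "local"),             # intra-VCN traffic stays local
    RouteRule("0.0.0.0/0", "firewall-endpoint"),   # everything else gets inspected
]

def next_hop_for(dst_ip: str) -> str:
    """Longest-prefix match, the way VCN routing selects a rule."""
    candidates = [
        r for r in route_table
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r.destination)
    ]
    best = max(candidates, key=lambda r: ipaddress.ip_network(r.destination).prefixlen)
    return best.next_hop

print(next_hop_for("10.0.3.7"))     # -> local
print(next_hop_for("52.95.120.1"))  # -> firewall-endpoint
```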

OCI Network Firewall use case diagram, description below

Logging and telemetry from the firewall are natively integrated with OCI Logging and Monitoring services, making it possible to ingest logs, trigger alerts, or forward data to external tools. Throughput can scale up to 4 Gbps per firewall instance, and high availability is built into the service, with deployments spanning fault domains to ensure resiliency.

OCI Network Firewall is a highly available and scalable instance that you create in a subnet. The firewall applies the business logic specified in an attached firewall policy to the network traffic. Routing in the VCN is used to direct traffic to and from the firewall. OCI Network Firewall provides a throughput of 4 Gbps, but you can request an increase up to 25 Gbps. The first 10 TB of data is processed at no additional charge.

The firewall supports advanced features such as intrusion detection and prevention (IDS/IPS), URL and DNS filtering, and decryption of TLS traffic. Threat signature updates are managed automatically by the service, and policy configuration can be handled via the OCI console, APIs, or integrated with Panorama for centralized policy management across multi-cloud or hybrid environments.
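As a rough illustration of what “beyond IP addresses and ports” means (a conceptual sketch in plain Python, not the actual OCI Network Firewall policy schema), rules can match on application and URL category and fall back to default-deny:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    source: str           # label for the source, not just an IP address
    application: str      # e.g. "web-browsing", "ssl" in NGFW terms
    url_category: str     # e.g. "business", "malware", "any"
    action: str           # "allow" or "deny"

# Conceptual policy: the point is that matching goes beyond IP:port tuples.
policy = [
    Rule("allow-web",   source="spoke-app", application="web-browsing",
         url_category="business", action="allow"),
    Rule("block-risky", source="any",       application="any",
         url_category="malware",  action="deny"),
]

def evaluate(source: str, application: str, url_category: str) -> str:
    for rule in policy:
        if (rule.source in (source, "any")
                and rule.application in (application, "any")
                and rule.url_category in (url_category, "any")):
            return rule.action
    return "deny"   # default-deny when nothing matches

print(evaluate("spoke-app", "web-browsing", "business"))  # allow
print(evaluate("spoke-app", "web-browsing", "malware"))   # deny
```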

Integration with Existing OCI Networking

OCI Network Firewall fits well into the kinds of network topologies most commonly used in OCI, particularly hub-and-spoke architectures or centralized inspection designs. In many deployments, customers set up multiple workload VCNs (the spokes) that connect to a central hub VCN via local peering. The hub typically hosts shared services such as logging, DNS, internet access via a NAT gateway, and now, the network firewall.

By configuring route tables in the spoke VCNs to forward traffic to the firewall endpoint in the hub, organizations can centralize inspection and policy enforcement across multiple VCNs. This model avoids having to deploy and manage firewalls in each spoke and enables consistent rules for outbound internet traffic, inter-VCN communication, or traffic destined for on-premises networks over the DRG.

OCI Network Firewall Hub and Spoke

Source: https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/learn-oci-network-firewall-with-examples.pdf 

The insertion of the firewall is purely routing-based, meaning there’s no need for IPsec tunnels or overlays to redirect traffic. As a result, latency is minimal and the setup remains relatively simple. With high availability built in and native integration into OCI’s monitoring and logging stack, the firewall becomes a part of the existing infrastructure.

Design Considerations

While the OCI Network Firewall removes much of the operational burden of managing security appliances, there are still architectural choices that influence how effective the deployment is. One key consideration is routing. Since traffic inspection is based on routing decisions, the firewall only sees traffic that has been explicitly directed through it. That means route tables must be carefully designed to forward the correct flows.

Firewall placement also plays a role in the overall design. Centralized deployment in a hub VCN can simplify management and enforce consistent policy, but it might introduce additional hops in the network path. Depending on the traffic patterns and latency sensitivity of your applications, you may need to evaluate whether east-west inspection should also be enabled or limited to specific flows.

Monitoring and visibility should be planned from the start. Logging is not retained indefinitely, so integrating logs with OCI Logging Analytics, pushing them to a SIEM, or exporting them to object storage for long-term retention is a best practice. You should also account for log volume and potential cost, especially in high-throughput environments.

Throughput limits are another factor. As mentioned before, a single firewall instance supports up to 4 Gbps, so if your architecture requires higher performance, you may need to segment inspection points or scale up to 25 Gbps upon request. Understanding these thresholds is important when designing for resilience and future growth.

Finally, while the firewall is managed, policy complexity still needs governance. Deep inspection features can be powerful but require thoughtful rule design to avoid unnecessary latency or policy overlap.

Final Thoughts

The OCI Network Firewall adds a much-needed layer of native, centralized security enforcement to Oracle Cloud. It brings next-generation firewall capabilities into the platform in a way that aligns with how modern infrastructure is built: with automation, centralized control, and reduced operational overhead. For teams that have struggled to scale or standardize firewalling in OCI using virtual appliances, this service simplifies a lot of that work.

That said, it’s still a tool. Its effectiveness depends on how well it’s integrated into your network design and how clearly your security policies are defined. For organizations moving toward more distributed architectures or running regulated workloads in OCI, it’s a step in the right direction – a platform-native way to improve visibility and control without losing flexibility.

Note: Please be aware that OCI Network Firewall design concepts and configurations are part of the Oracle Cloud Infrastructure 2025 Networking Professional exam.

From Castles to Credentials – Why Identities Are the New Perimeter

The security world has outgrown its castle. For decades, enterprise networks operated on the principle of implicit trust: if a device or user could connect from inside the perimeter, they were granted access. Firewalls and VPNs acted as moats and drawbridges, controlling what entered the fortress. But the rise of cloud services, remote work, and APIs has broken down those walls, replacing physical boundaries with something far more fluid: identity.

This shift has led to the emergence of Zero Trust Architecture (ZTA), which flips the traditional model. Instead of trusting users based on their location or device, we now assume that no actor should be trusted by default, whether inside or outside the network. Every access request must be verified, every time, using contextual signals like identity, posture, behavior, and intent.

But “Zero Trust” isn’t just a philosophical change; it’s also a matter of practical design. Many organizations start their Zero Trust journey by microsegmenting networks or rolling out identity-aware proxies. That’s a step in the right direction, but a true transformation goes deeper. It redefines identity as the central pillar of security architecture: not just a gatekeeper, but the control plane through which access decisions are made, enforced, and monitored.

The Inherent Weakness of Place-Based Trust

The traditional security model depends on a dangerous assumption: if you are inside the network, you are trustworthy. That might have worked when workforces were centralized and systems were isolated. With hybrid work, multi-cloud adoption, and third-party integrations, physical locations mean very little nowadays.

Attackers know this. Once a single user account is compromised via phishing, credential stuffing, or social engineering, it can be used to move laterally across the environment, exploiting flat networks and overprovisioned access. The rise of ransomware, supply chain attacks, and insider threats all originate from this misplaced trust in location.

This is where identity-based security becomes essential. Instead of relying on IP addresses or subnet ranges, access policies are tied to who or what is making the request and under what conditions. For example, a user might only get access if their device is healthy, they are connecting from a trusted region, and they pass MFA.
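As a minimal illustration (plain Python with made-up signal names, not any specific IAM product’s API), a context-aware access decision might look like this:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str
    device_compliant: bool   # posture signal, e.g. from an MDM/EDR agent
    region: str              # where the connection originates
    mfa_passed: bool

TRUSTED_REGIONS = {"eu-frankfurt", "eu-zurich"}  # illustrative values

def evaluate(request: AccessRequest) -> str:
    """Grant access only when all contextual signals check out."""
    if not request.mfa_passed:
        return "deny: MFA required"
    if not request.device_compliant:
        return "deny: device posture not compliant"
    if request.region not in TRUSTED_REGIONS:
        return "deny: untrusted region"
    return "allow"

print(evaluate(AccessRequest("alice", True, "eu-zurich", True)))  # allow
print(evaluate(AccessRequest("alice", True, "ap-sydney", True)))  # deny: untrusted region
```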

By decoupling access decisions from the network and basing them on identity context, organizations can stop granting more access than necessary and prevent compromised actors from gaining a foothold.

Identities Take Center Stage

Identities are multiplying rapidly: not just users, but also workloads, devices, APIs, and service accounts. This explosion of non-human identities creates a massive attack surface. Yet, in many organizations, these identities are poorly managed, barely monitored, and rarely governed.

Identity-Centric Zero Trust changes that. It places identity at the center of every access flow, ensuring that each identity, human or machine, is:

  • Properly authenticated

  • Authorized for just what it needs

  • Continuously monitored for unusual behavior

Example: A CI/CD pipeline deploys an app into production. With traditional models, that pipeline might have persistent credentials with broad permissions. In an identity-centric model, the deployment service authenticates via workload identity, receives just-in-time credentials, and is granted only the permissions needed for that task.
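A conceptual sketch of that flow (plain Python with hypothetical helper and scope names, not any particular cloud’s workload identity API) could look like this:

```python
import time
from dataclasses import dataclass

@dataclass
class ShortLivedCredential:
    principal: str
    scopes: tuple            # only what this task needs
    expires_at: float        # epoch seconds

def issue_jit_credential(workload_identity: str, task_scopes: tuple,
                         ttl_seconds: int = 900) -> ShortLivedCredential:
    """Hypothetical broker: authenticate the workload, then mint a
    short-lived, narrowly scoped credential instead of a static secret."""
    # In a real system this is where the platform would verify the
    # workload's identity (e.g. via a signed instance or pod identity token).
    return ShortLivedCredential(
        principal=workload_identity,
        scopes=task_scopes,
        expires_at=time.time() + ttl_seconds,
    )

cred = issue_jit_credential(
    workload_identity="ci-pipeline/deploy-prod",
    task_scopes=("deploy:app-frontend",),   # least privilege: just this task
)
print(cred.scopes, "valid for ~15 minutes")
```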

This model reduces privilege sprawl, limits the blast radius of compromised credentials, and provides clear visibility and accountability. It’s about embedding least privilege, lifecycle management, and continuous validation into the DNA of how access is handled.

Routing With Intent

Zero Trust doesn’t mean the network no longer matters; it means the network must evolve. Today’s networks need to understand and enforce identity, just like the access layer.

A good example of this is Oracle Cloud Infrastructure’s Zero Trust Packet Routing (ZPR). With ZPR, packets are only routed if the source and destination identities are explicitly authorized to communicate. It’s not just about firewall rules or ACLs but also about intent-based networking, where identity and policy guide the flow of traffic. A backend service won’t even see packets from an unauthorized frontend. Routing decisions happen only after both parties are authenticated and authorized.
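Conceptually (a plain Python sketch with illustrative labels, not ZPR’s actual policy language or implementation), the routing decision collapses to a check against explicitly authorized identity pairs:

```python
# Conceptual model: packets are forwarded only if an explicit intent
# authorizes the (source identity, destination identity) pair.
ALLOWED_INTENTS = {
    ("app:frontend", "app:backend"),
    ("app:backend", "db:orders"),
}

def deliver(dst_identity: str, payload: bytes) -> None:
    print(f"delivered {len(payload)} bytes to {dst_identity}")

def forward_packet(src_identity: str, dst_identity: str, payload: bytes) -> bool:
    """Drop anything not explicitly authorized, regardless of IP addresses,
    subnets, or firewall rules further down the path."""
    if (src_identity, dst_identity) not in ALLOWED_INTENTS:
        return False          # the backend never even sees the packet
    deliver(dst_identity, payload)
    return True

print(forward_packet("app:frontend", "app:backend", b"GET /orders"))  # True
print(forward_packet("app:unknown", "db:orders", b"SELECT *"))        # False
```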

This is part of a bigger trend. Across the industry, cloud providers and SDN platforms are starting to embed identity metadata into network-level decisions, and routing and access enforcement are being infused with contextual awareness and identity-driven policies.

For architects and security teams, this opens new possibilities for building secure-by-design cloud networks, where you can enforce who talks to what, when, and under what conditions, down to the packet level.

Identity as the Control Plane of Modern Security

If Zero Trust has taught us anything, it’s that identity is the new perimeter and that it’s the control plane for the entire security architecture.

When identity becomes the central decision point, everything changes:

  • Network segmentation is enforced via identity-aware rules

  • Application access is governed by contextual IAM policies

  • Monitoring and detection pivot around behavioral baselines tied to identity

  • Automation and response are triggered by anomalies in identity behavior

This model allows for granular, adaptive, and scalable control, without relying on fixed infrastructure or fragile perimeters. It also provides a better experience for users: access becomes more seamless when trust is built dynamically based on real-time signals, rather than static rules.

Importantly, this approach doesn’t require a big bang overhaul. Organizations can start small by maturing IAM hygiene, implementing least privilege, or onboarding apps into SSO and MFA, and build toward more advanced use cases like workload identity, CIEM (Cloud Infrastructure Entitlement Management), and ITDR (Identity Threat Detection and Response).

Concluding Thoughts

Perimeters no longer define trust. Location is no longer a proxy for legitimacy. And static controls are no match for dynamic threats – it’s like using static IPs when working with Kubernetes and containers. We need a security model that reflects that reality.

Identity-Centric Zero Trust offers a modern foundation and a strategy. One that weaves together people, processes, and technologies to ensure that every access decision is intentional, contextual, and revocable.

Whether you are modernizing a legacy environment or building greenfield in the cloud, start by asking the right question.

Not “where is this request coming from?” but “who is making the request, and should they be allowed?”.

The Cloud Isn’t Eating Everything. And That’s a Good Thing

A growing number of experts warn that governments and enterprises are “digitally colonized” by U.S. cloud giants. A provocative claim and a partial truth. It’s an emotionally charged view, and while it raises valid concerns around sovereignty and strategic autonomy, it misses the full picture.

Because here’s the thing. Some (if not most) workloads in enterprise and public sector IT environments are still hosted on-premises. This isn’t due to resistance or stagnation. It’s the result of deliberate decisions made by informed IT leaders. Leaders who understand their business, compliance landscape, operational risks, and technical goals.

We are no longer living in a world where the public cloud is the default. We are living in a world where “cloud” is a choice and is used strategically. This is not failure. It’s maturity.

A decade ago, “cloud-first” was often a mandate. CIOs and IT strategists were encouraged, sometimes pressured, to move as much as possible to the public cloud. It was seen as the only way forward. The public cloud was marketed as cheaper, faster, and more innovative by default.

But that narrative didn’t survive contact with reality. As migrations progressed, enterprises quickly discovered that not every workload belongs in the cloud. The benefits were real, but so were the costs, complexities, and trade-offs.

Today, most organizations operate with a much more nuanced perspective. They take the time to evaluate each application or service based on its characteristics. Questions like: Is this workload latency-sensitive? What are the data sovereignty requirements? Can we justify the ongoing operational cost at scale? Is this application cloud-native or tightly coupled to legacy infrastructure? What are the application’s dependencies?

This is what maturity looks like. It’s not about saying “yes” or “no” to the cloud in general. It’s about using the right tool for the right job. Public cloud remains an incredibly powerful option. But it is no longer a one-size-fits-all solution. And that’s a good thing.

On-Premises Infrastructure Is Still Valid

There is this persistent myth that running your own datacenter, or even part of your infrastructure, is a sign that you are lagging behind. That if you are not in the cloud, you are missing out on agility, speed, and innovation. That view simply doesn’t hold up.

In reality, on-premises infrastructure is still a valid, modern, and strategic choice for many enterprises, especially in regulated industries like healthcare, finance, manufacturing, and public services. These sectors often have clear, non-negotiable requirements around data locality, compliance, and performance. In many of these cases, operating infrastructure locally is not just acceptable. It’s the best option available.

Modern on-prem environments are nothing like the datacenters of the past. Thanks to advancements in software-defined infrastructure, automation, and platform engineering, on-prem can offer many of the same cloud-like capabilities: self-service provisioning, infrastructure-as-code, and full-stack observability. When properly built and maintained, on-prem can be just as agile as the public cloud.

That said, it’s important to acknowledge a key difference. While private infrastructure gives you full control, it can take longer to introduce new services and capabilities. You are not tapping into a global marketplace of pre-integrated services and APIs like you would with Oracle Cloud or Microsoft Azure. You are depending on your internal teams to evaluate, integrate, and manage each new component.

And that’s totally fine, if your CIO’s focus is stability, compliance, and predictable innovation cycles. For many organizations, that’s (still) exactly what’s needed. But if your business thrives on emerging technologies, needs instant access to the latest AI or analytics platforms, or depends on rapid go-to-market execution, then public cloud innovation cycles might offer an advantage that’s hard to replicate internally.

Every Enterprise Can Still Build Their Own Data Center Stack

It’s easy to assume that the era of enterprises building and running their own cloud-like platforms is over. After all, hyperscalers move faster, operate at massive scale (think about the thousands of engineers and product managers), and offer integrated services that are hard to match. For many organizations, especially those lacking deep infrastructure expertise or working with limited budgets, the public cloud is the most practical and cost-effective option.

But that doesn’t mean enterprises can’t or shouldn’t build their own platforms, especially when they have strong reasons to do so. Many still do, and do it effectively. With the right people, architecture, and operational discipline, it’s entirely possible to build private or hybrid environments that are tailored, secure, and strategically aligned.

The point isn’t to compete with hyperscalers on scale; it’s to focus on fit. Enterprises that understand their workloads, compliance requirements, and business goals can create infrastructure that’s more focused and more integrated with their internal systems.

Yes, private platforms may evolve more slowly. They may require more upfront investment and long-term commitment. But in return, they offer control, transparency, and alignment. Advantages that can outweigh speed in the right contexts!

And critically, the tooling has matured. Today’s internal platforms aren’t legacy silos but are built with the same modern engineering principles: Kubernetes, GitOps, telemetry, CI/CD, and self-service automation.

Note: If a customer wants the best of both worlds, there are options like OCI Dedicated Region.

The Right to Choose the Right Cloud

One of the most important shifts we are seeing in enterprise IT is the move away from single-platform thinking. No one-size-fits-all platform exists. And that’s precisely why the right to choose the right cloud matters.

Public cloud makes sense in many scenarios. Organizations might choose Azure because of its tight integration with Microsoft tools. They might select Oracle Cloud for better pricing or AI capabilities. At the same time, they continue to operate significant workloads on-premises, either by design or necessity.

This is the real world of enterprise IT: mixed environments, tailored solutions, and pragmatic trade-offs. These aren’t poor decisions or “technical debt”. Often, they are deliberate architectural choices made with a full understanding of the business and operational landscape. 

What matters most is flexibility. Organizations need the freedom to match workloads to the environments that best support them, without being boxed in by ideology, procurement bias, or compliance roadblocks. And that flexibility is what enables long-term resilience.

What the Cloud Landscape Actually Looks Like

Step into any enterprise IT environment today, and you will find a blend of technologies, platforms, and operational models. And the mix varies based on geography, industry, compliance rules, and historical investments.

The actual landscape is not black or white. It’s a continuum of choices. Some services live in hyperscale clouds. Others are hosted in sovereign, regional datacenters. Many still run in private infrastructure owned and operated by the organization itself.

This hybrid approach isn’t messy. It’s intentional and reflects the complexity of enterprise IT and the need to balance agility with governance, innovation with stability, and cost with performance.

What defines modern IT today is the operating model. The cloud is not a place. It’s a way of working. Whether your infrastructure is on-prem, in the public cloud, or somewhere in between, the key is how it’s automated, how it’s managed, how it integrates with developers and operations, and how it evolves with the business.

Conclusion: Strategy Over Hype – And Over Emotion

There’s no universal right or wrong when it comes to cloud strategy. Only what works for your organization based on risk, requirements, talent, and timelines. But we also can’t ignore the reality of the current market landscape.

Today, U.S. hyperscalers control over 70% of the European cloud market. Across infrastructure layers like compute, storage, networking, and software stacks, Europe’s digital economy relies on U.S. technologies for 85 to 90% of its foundational capabilities. 

But these numbers didn’t appear out of nowhere.

Let’s be honest: it’s not the fault of hyperscalers that enterprises and public sector organizations chose to adopt their platforms. Those were decisions made by people – CIOs, procurement teams, IT strategists – driven by valid business goals: faster time-to-market, access to innovation, cost modeling, availability of talent, or vendor consolidation.

These choices might deserve reevaluation, yes. But they don’t deserve emotional blame.

We need to stop framing the conversation as if U.S. cloud providers “stole” the European market. That kind of narrative doesn’t help anyone. The reality is more complex and far more human. Companies chose platforms that delivered, and hyperscalers were ready with the talent, services, and vision to meet that demand.

If we want alternatives, if we want European options to succeed, we need to stop shouting at the players and start changing the rules of the game. That means building competitive offerings, investing in skills, aligning regulation with innovation, and making sovereignty a business advantage, not just a political talking point.

Can a Unified Multi-Cloud Inventory Transform Cloud Management?

When we spread our workloads across clouds like Oracle Cloud, AWS, Azure, Google Cloud, maybe even IBM, or smaller niche players, we knowingly accept complexity. Each cloud speaks its own language, offers its own services, and maintains its own console. What if there were a central place where we could see everything: every resource, every relationship, across every cloud? A place that lets us truly understand how our distributed architecture lives and breathes?

I find myself wondering if we could one day explore a tool or approach that functions as a multi-cloud inventory, keeping track of every VM, container, database, and permission – regardless of the platform. Not because it’s a must-have today, but because the idea sparks curiosity: what would it mean for cloud governance, cost transparency, and risk reduction if we had this true single pane of glass?

Who feels triggered now because I said “single pane of glass”? 😀 Let’s move on!

Could a Multi-Cloud Command Center Change How We Visualize Our Environment?

Let’s imagine it: a clean interface, showing not just lists of resources, but the relationships between them. Network flows across cloud boundaries. Shared secrets between apps on “cloud A” and databases on “cloud B”. Authentication tokens moving between clouds.

What excites me here isn’t the dashboard itself, but the possibility of visualizing the hidden links across clouds. Instead of troubleshooting blindly, or juggling a dozen consoles, we could zoom out for a bird’s-eye view. Seeing in one place how data and services crisscross providers.

Multi-Cloud Insights

I don’t know if we’ll get there anytime soon (or if such a solution already exists), but exploring the idea of a unified multi-cloud visualization tool feels like an adventure worth considering.

Multi-Cloud Search and Insights

When something breaks, when we are chasing a misconfiguration, or when we want to understand where we might be exposed, it often starts with a question: Where is this resource? Where is that permission open?

What if we could type that question once and get instant answers across clouds? A global search bar that could return every unencrypted public bucket or every server with a certain tag, no matter which provider it’s on.

Multi-Cloud Graph Query

Wouldn’t it be interesting if that search also showed contextual information: connected resources, compliance violations, or cost impact? It’s a thought I keep returning to because the journey toward proactive multi-cloud operations might start with simple, unified answers.
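As a thought experiment (plain Python over a made-up, normalized resource model, not any existing product’s API), such a unified query might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    provider: str                    # "oci", "aws", "azure", ...
    kind: str                        # normalized type, e.g. "bucket", "vm"
    name: str
    public: bool = False
    encrypted: bool = True
    tags: dict = field(default_factory=dict)

# A tiny, hand-built inventory standing in for data collected from each cloud's API.
inventory = [
    Resource("oci",   "bucket", "exports",  public=True,  encrypted=False),
    Resource("aws",   "bucket", "logs",     public=False, encrypted=True),
    Resource("azure", "vm",     "batch-01", tags={"env": "prod"}),
    Resource("oci",   "vm",     "app-02",   tags={"env": "prod"}),
]

# "Every unencrypted public bucket, no matter which provider it's on."
risky_buckets = [r for r in inventory
                 if r.kind == "bucket" and r.public and not r.encrypted]

# "Every server with a certain tag."
prod_vms = [r for r in inventory if r.kind == "vm" and r.tags.get("env") == "prod"]

print([(r.provider, r.name) for r in risky_buckets])  # [('oci', 'exports')]
print([(r.provider, r.name) for r in prod_vms])       # [('azure', 'batch-01'), ('oci', 'app-02')]
```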

Could a True Multi-Cloud App Require This Kind of Unified Lens?

Some teams are already building apps that stretch across clouds: an API front-end in one provider, authentication in another, ML workloads on specialized platforms, and data lakes somewhere else entirely. These aren’t cloud-agnostic apps; they are “cloud-diverse” apps, purpose-built to exploit best-of-breed services from different providers.

That makes me wonder: if an app inherently depends on multiple clouds, doesn’t it deserve a control plane that’s just as distributed? Something that understands the unique role each cloud plays, and how they interact, in one coherent operational picture?

I don’t have a clear answer, but I can’t help thinking about how multi-cloud-native apps might need true multi-cloud-native management.

VMware Aria Hub and Graph – Was It a Glimpse of the Future?

Not so long ago, VMware introduced Aria Hub and Aria Graph with an ambitious promise: a single place to collect and normalize resource data from all major clouds, connect it into a unified graph, and give operators a true multi-cloud inventory and control plane. It was one of the first serious attempts to address the challenge of understanding relationships between cloud resources spread across different providers.

VMware Aria Hub Dashboard

The idea resonated with anyone who has struggled to map sprawling cloud estates or enforce consistent governance policies in a multi-cloud world. A central graph of every resource, dependency, and configuration sounded like a game-changer. Not only for visualization, but also for powerful queries, security insights, and cost management.

But when Broadcom acquired VMware, they shifted focus away from VMware’s SaaS portfolio. Many SaaS-based offerings were sunset or sidelined, including Aria Hub and Aria Graph, effectively burying the vision of a unified multi-cloud inventory platform along with them.

I still wonder: did VMware Aria Hub and Graph show us a glimpse of what multi-cloud operations could look like if we dared to standardize resource relationships across clouds? Or did it simply arrive before its time, in an industry not yet ready to embrace such a radical approach?

Either way, it makes me even more curious about whether we might one day revisit this idea and how much value a unified resource graph could unlock in a world where multi-cloud complexity continues to grow.

Final Thoughts

I don’t think there’s a definitive answer yet to whether we need a unified multi-cloud inventory or command center today. Some organizations already have mature processes and tooling that work well enough, even if they are built on scripts, spreadsheets, or point solutions glued together. But as multi-cloud strategies evolve, and as more teams start building apps that intentionally spread across multiple providers, I find myself increasingly curious about whether we will see renewed demand for a shared data model of our entire cloud footprint.

Because with each new cloud we adopt, complexity grows exponentially. Our assets scatter, our identities and permissions multiply, and our ability to keep track of everything by memory or siloed dashboards fades. Even something simple, like understanding “what resources talk to this database?”, becomes a detective story across clouds.

A solution that offers unified visibility, context, and even policy controls feels almost inevitable if multi-cloud architectures continue to accelerate. And yet, I’m also aware of how hard this problem is to solve. Each cloud provider evolves quickly, their APIs change, and mapping their semantics into a single, consistent model is an enormous challenge.

That’s why, for now, I see this more as a hypothesis. An idea to keep exploring rather than a clear requirement. I’m fascinated by the thought of what a central multi-cloud “graph” could unlock: faster investigations, smarter automation, tighter security, and perhaps a simpler way to make sense of our expanding environments.

Whether we build it ourselves, wait for a vendor to try again, or discover a new way to approach the problem, I’m eager to see how the industry experiments with this space in the years ahead. Because in the end, the more curious we stay, the better prepared we’ll be when the time comes to simplify the complexity we’ve created.

Sovereign Clouds and the VMware Earthquake: Dependency Isn’t Just a Hyperscaler Problem

The concept of “sovereign cloud” has been making waves across Europe and beyond. Politicians talk about it. Regulators push for it. Enterprises (re-)evaluate it. On the surface, it sounds like a logical evolution: regain control, keep data within national borders, reduce exposure to foreign jurisdictions, and while you are at it, maybe finally break free from the gravitational pull of the U.S. hyperscalers.

After all, hyperscaler dependency is seen as the big bad wolf. If your workloads live in AWS, Azure, Google Cloud or Oracle Cloud Infrastructure, you are automatically exposed to price increases, data sovereignty concerns, U.S. legal reach (hello, CLOUD Act), and a sense of vendor lock-in that seems harder to escape with every commit to infrastructure-as-code.

So, the solution appears simple: go local, go sovereign, go safe.

But if only it were that easy.

The truth is: sovereignty isn’t something you can just buy off the shelf. It’s not a matter of switching cloud logos or picking the provider that wraps their marketing in your national flag. Because even within your own datacenter, even with platforms that have long been considered “sovereign” and independent, the same risks apply.

The best example? VMware.

What happened in the VMware ecosystem over the past year should be a wake-up call for anyone who thinks sovereignty equals control. Because, as we have now seen, control can vanish. Fast. Very fast.

VMware’s Rapid Fall from Grace

Take VMware. For years, it was the go-to platform for building secure, sovereign private clouds. Whether in your own datacenter or hosted by a trusted service provider in your region, VMware felt like the safe, stable choice. No vendor lock-in (allegedly), no forced cloud-native rearchitecture, and full control over your workloads. Rainbows, unicorns, and that warm fuzzy feeling of sovereignty.

Then came the Broadcom acquisition, and with it, a cold splash of reality.

Image: Broadcom post from VMware Explore Barcelona – Hock Tan, President and CEO, on European customers wanting control over their data and processes.

Practically overnight, prices shot up; in some cases, they more than doubled. Features were suddenly stripped out or repackaged into higher-priced bundles. Longstanding partner agreements were shaken, if not broken. Products disappeared or were drastically repositioned. Customers and partners were caught off guard. Not just by the changes, but by how quickly they hit.

And just like that, a platform once seen as a cornerstone of sovereign IT became a textbook example of how fragile that sovereignty really is.

Sovereignty Alone Doesn’t Save You

The VMware story exposes a hard truth: so-called “sovereign” infrastructure isn’t immune to disruption. Many assume risk only lives in the public cloud under the branding of AWS, Azure, or Oracle Cloud. But in reality, the triggers for a “cloud exit” or forced platform shift can be found anywhere. Also on-premises!

A sudden licensing change. An unexpected acquisition. A new product strategy that leaves your current setup stranded. None of these things care whether your workloads are in a public cloud region or a private rack in your basement. Dependency is dependency, and it doesn’t always come with a hyperscaler logo.

It’s Not About Picking the Right Vendor. It’s About Being Ready for the Wrong One.

That’s why sovereignty, in the real world, isn’t something you just buy. It’s something you design for.

Note: Some hyperscalers now offer “sovereign by design” solutions but even these require deeper architectural thinking.

Sure, a Greenfield build on a sovereign cloud stack sounds great. Fresh start, full control, compliance checkboxes all ticked. But the reality for most organizations is very different. They have already invested years into specific platforms, tools, and partnerships. There are skill gaps, legacy systems, ongoing projects, and plenty of inertia. Ripping it all out for the sake of “clean” sovereignty just isn’t feasible.

That’s what makes architecture, flexibility, and diversification so critical. A truly resilient IT strategy isn’t just about where your data lives or which vendor’s sticker is on the server. It’s about being ready (structurally, operationally, and contractually) for things to change.

Because they will change.

Open Source ≠ Sovereign by Default

Spoiler: Open source won’t save you either

Let’s address another popular idea in the sovereignty debate. The belief that open source is the magic solution. The holy grail. The thinking goes: “If it’s open, it’s sovereign”. You have the source code, you can run it anywhere, tweak it however you like, and you are free from vendor lock-in. Yeah, right.

Sounds great. But in practice? It’s not that simple.

Yes, open source can enable sovereignty, but it doesn’t guarantee it. Just because something is open doesn’t mean it’s free of risk. Most open-source projects rely on a global contributor base, and many are still controlled, governed, or heavily influenced by large commercial vendors – often headquartered in the same jurisdictions we are supposedly trying to avoid. Yes, that’s good and bad at the same time, isn’t it?

And let’s be honest: having the source code doesn’t mean you suddenly have a DevOps army to maintain it, secure it, patch it, integrate it, scale it, monitor it, and support it 24/7. In most cases, you will need commercial support, managed services, or skilled specialists. And with that, new dependencies emerge.

So what have you really achieved? Did you eliminate risk or just shift it?

Open source is a fantastic ingredient in a sovereign architecture – in any cloud architecture. But it’s not a silver bullet.

Behind the Curtain – Complexity, Not Simplicity

From the outside, especially for non-IT people, the sovereign cloud debate can look like a clear binary: US hyperscaler = risky, local provider = safe. But behind the curtain, it’s much more nuanced. You are dealing with a web of relationships, existing contracts, integrated platforms, and real-world limitations.

The Broadcom-VMware shake-up was a loud and very public reminder that disruption can come from any direction. Even the platforms we thought were untouchable can suddenly become liabilities.

So the question isn’t: “How do we go sovereign?”

It’s: “How do we stay in control, no matter what happens?”

That’s the real sovereignty.