5 Challenges in Building National Cloud Infrastructures and How to Solve Them

Governments around the world are facing increasing pressure to assert control over their digital infrastructure. Whether driven by regulatory mandates, national security concerns, or political developments, the concept of a national cloud or sovereign cloud is gaining serious traction.

But building a national cloud infrastructure is far from straightforward. It is a complex balancing act between innovation, control, compliance, and risk management. Based on my work in the cloud space across Oracle and VMware, and through conversations with customers in the public sector, I have seen the same set of challenges come up again and again.

In this post, I want to walk through five of the biggest challenges governments and regulated industries face when building sovereign cloud environments, and explore some practical ways to solve them.

1. The Data Sovereignty Dilemma

One of the most fundamental challenges is ensuring data remains under the control of the nation that owns it. Most global cloud providers are headquartered in the US and are subject to extraterritorial laws, such as the CLOUD Act. That’s a serious concern for countries in the EU, the Middle East, and Asia-Pacific that require sensitive data to remain on national or regional soil, with no foreign access.

Saying that “data is stored in Frankfurt” doesn’t automatically mean it’s sovereign. True data sovereignty requires not only residency, but also legal and operational separation from foreign jurisdictions. This is where traditional hyperscale models fall short.

To address this, vendors like Oracle have started offering sovereign cloud regions – such as the Oracle EU Sovereign Cloud – which are operated and supported entirely from within the EU, by EU-based personnel. That is a major step forward. But ultimately, sovereignty isn’t just a location, it’s an operating model. You need to design the cloud platform from day one with jurisdictional independence and compliance in mind.

2. Securing National-Scale Cloud Platforms

Security is always important in cloud architecture, but when you are talking about a national cloud, the stakes are even higher. You are dealing with mission-critical applications, citizen data, defense information, or classified intelligence systems. A breach or compromise isn’t just a technical issue, it’s a national event.

Unfortunately, many government environments still rely on legacy perimeter models and lack deep cloud-native security architecture. The challenge is how to build a cloud environment that meets zero-trust standards, supports high-assurance workloads, and integrates with national cybersecurity frameworks.

The answer lies in combining hardened cloud regions, private connectivity, data encryption with customer-controlled keys, and isolation mechanisms such as dedicated tenancy or confidential computing. Platforms like Oracle’s National Security Regions (NSRs) offer this level of separation and assurance. But even then, security isn’t just about tools. It’s about governance. Governments must define strict policies and enforce them consistently across cloud and on-prem environments.
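To make one of these controls concrete, here is a minimal sketch of a customer-controlled-key audit, assuming the OCI Python SDK (pip install oci) and a configured local profile. The pass/fail rule is an illustrative assumption, not an official Oracle control.

```python
# A minimal sketch of a customer-managed-key audit, assuming the OCI Python
# SDK and a configured ~/.oci/config profile.
import oci

config = oci.config.from_file()  # loads region, tenancy OCID, and signing keys
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data
compartment_id = config["tenancy"]  # illustrative: audit the root compartment

for bucket in object_storage.list_buckets(namespace, compartment_id).data:
    details = object_storage.get_bucket(namespace, bucket.name).data
    # Buckets without a kms_key_id fall back to Oracle-managed encryption keys.
    if details.kms_key_id is None:
        print(f"NON-COMPLIANT: {bucket.name} is not using a customer-managed key")
```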

3. Operational Control and Cloud Autonomy

A common concern I hear from public sector architects is the fear of losing operational control. Many cloud services are abstracted to a point where customers can’t dictate how and where they run. For governments, that’s not always acceptable, especially when they want to run critical workloads or classified systems.

There’s a growing demand for operational autonomy: the ability to manage, monitor, and maintain the infrastructure independently or through trusted local entities. This is where concepts like “sovereign operations” come into play.

In a sovereign cloud model, operations – including support, monitoring, and incident response – are handled within the national boundary, by vetted personnel. Oracle has implemented this model in its EU Sovereign Cloud, ensuring no foreign nationals are involved in the operational chain. It is this level of people-based sovereignty, not just technology, that defines real national cloud infrastructure.

4. Keeping Up with the Compliance Maze

Compliance is one of the biggest drivers behind national cloud initiatives, and also one of the most frustrating challenges. The regulatory landscape is constantly evolving. Governments must comply with GDPR, national data protection laws, critical infrastructure regulations, defense policies, and sector-specific standards.

But cloud platforms evolve faster than laws do. It’s hard to maintain compliance across services, especially when new features are released weekly and spread across different regions.

One way to address this is by using compliance automation frameworks. Cloud providers like Oracle offer templates and reference architectures that help you deploy workloads in a compliant-by-default manner. Some even include compliance-as-code, which automates controls and checks during the deployment process.
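As a hedged illustration of the compliance-as-code idea, the toy gate below validates a workload spec before anything is provisioned. The spec format, tag name, and allowed-region list are hypothetical stand-ins for your actual regulatory controls.

```python
# A toy "compliance-as-code" gate: block deployments that violate sovereignty
# rules. Rule set and spec format are hypothetical examples.
ALLOWED_REGIONS = {"eu-frankfurt-1", "eu-madrid-1"}  # assumption: EU-only policy

def validate(spec: dict) -> list[str]:
    violations = []
    if spec.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {spec.get('region')} is outside the EU boundary")
    if not spec.get("customer_managed_keys", False):
        violations.append("encryption must use customer-managed keys")
    if "data-classification" not in spec.get("tags", {}):
        violations.append("missing mandatory data-classification tag")
    return violations

workload = {"region": "us-ashburn-1", "customer_managed_keys": True, "tags": {}}
for v in validate(workload):
    print("BLOCKED:", v)  # fail the pipeline instead of deploying non-compliant
```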

But even the best frameworks won’t help unless your cloud provider aligns its service roadmap with local regulations. That’s why it’s essential to work with vendors who treat compliance not as a checkbox, but as a core part of their product design and go-to-market strategy.

5. Innovation vs. Risk Aversion

The final challenge is cultural, not technical.

Most public sector organizations know they need to modernize, but they operate in environments where risk is avoided at all costs. Innovation often takes a backseat to auditability and procurement processes. As a result, cloud transformation projects get stuck in POCs or never leave the pilot phase.

Ironically, sovereign clouds are often seen as “less capable” than commercial regions, reinforcing this hesitance. But that perception is changing. Today, sovereign cloud offerings are increasingly on par with global platforms. And in some cases, they offer more control and greater visibility.

To overcome internal resistance, governments need to create safe innovation spaces. That means using pre-certified landing zones, sandbox environments, and trusted architectural patterns. It also means investing in cloud fluency across teams, so that risk management and agility aren’t mutually exclusive.

Note: “Cloud fluency” refers to the ability of individuals or organizations to understand, use, and make informed decisions about cloud technologies, confidently and effectively.

Final Thoughts

Building a national cloud infrastructure isn’t just a technical project. It’s a long-term strategic effort that combines technology, law, policy, and trust. The challenges are significant, but solvable, especially if they’re tackled early and with the right partners.

Whether it’s data sovereignty, security assurance, operational control, or compliance, governments need platforms that are sovereign-by-design, not just sovereign in name. And vendors need to step up with credible solutions that support national priorities without compromising cloud innovation.

Sovereign cloud is no longer a niche requirement. It’s a mainstream architectural model and one that will shape the next decade of public sector IT strategy.

Oracle Compute Cloud@Customer Isolated – Sovereign Public Sector Hosting for Oracle Partners

Across Europe, public sector organisations are under increasing pressure to modernise their IT environments while maintaining full control over data, infrastructure, and operations. This is where Oracle partners can step in. With Oracle Compute Cloud@Customer Isolated (C3I), they now have the opportunity to offer sovereign cloud hosting services tailored to the needs of governments and regulated industries.

Oracle’s approach to digital sovereignty is not abstract. It is based on clearly defined principles that are embedded in the platform itself. With C3I, data – whether user data, metadata, or telemetry – remains entirely within the customer’s environment. Nothing is transmitted back to Oracle. The complete OCI control plane runs locally, fully disconnected from Oracle’s global infrastructure. This ensures that compliance requirements can be met without compromise.

Transparency and control are fundamental. There is no ongoing operator access to the system because C3I is an air-gapped, disconnected solution. Once installed, Oracle has no remote access to the environment. The installation and activation – including any expansion, such as GPU or storage racks – is handled on-site by Oracle’s field team. Ongoing operations, monitoring, and support are managed entirely by the hosting service provider (HSP), not by Oracle. Customers define their access policies, manage their own encryption keys, and control every layer of the platform.

Unlike traditional hosted solutions, C3I delivers the full Oracle Cloud Infrastructure (OCI) IaaS portfolio, along with key platform services such as Oracle Kubernetes Engine (OKE), all deployed within the HSP’s own data centre. This empowers Oracle Partners to offer modern, cloud-native infrastructure and container services to public-sector tenants, while keeping everything firmly under local control and governance.

What Makes C3I a Game‑Changer?

Alongside OCI Dedicated Region, Alloy, and Oracle Isolated Cloud Region, C3I is one of Oracle’s most secure and sovereign cloud deployment models. One of the main drivers for adopting Oracle Compute Cloud@Customer Isolated is the need to run classified workloads in fully isolated environments. In this context, governments with strict regulations, ministries of defense, and intelligence services represent the key targeted customers.

What sets C3I apart is its architecture: the entire control plane, the brain of OCI, is deployed inside the partner’s (or customer’s) premises. Again, there is no connection to Oracle’s public cloud regions, no shared management layer, and no external operator access. Once the system is installed, Oracle no longer has access. There is no remote telemetry, no persistent administrator credentials, and no automated updates. Every action, including patching, must be initiated and approved by the partner’s operators.

Despite its strict isolation, C3I delivers the same developer experience as the public cloud. Users can work with the same APIs, tools, and automation workflows. All core OCI services are available, from compute and storage to networking and IAM. This makes it possible to run modern applications, automate deployments, and enforce security policies. Just like in the public cloud, but with full control.
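A small sketch of what that identical developer experience could look like in practice, assuming the OCI Python SDK; the service endpoint shown is a hypothetical placeholder for a locally hosted C3I control plane.

```python
# Same tools, different target: the standard OCI Python SDK pointed at a
# locally hosted control plane instead of a public region. The endpoint URL
# below is a hypothetical placeholder for a C3I deployment.
import oci

config = oci.config.from_file(profile_name="DEFAULT")
compute = oci.core.ComputeClient(
    config,
    service_endpoint="https://compute.c3i.internal.example.gov",  # hypothetical
)

# The calls and automation are identical to the public cloud; only the
# endpoint (and the people operating it) change.
for inst in compute.list_instances(compartment_id=config["tenancy"]).data:
    print(inst.display_name, inst.lifecycle_state)
```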

For Oracle partners, this opens new doors.

Hosting Multiple Tenants with IAM and Compartment Isolation

To serve multiple tenants on shared C3I infrastructure, Oracle relies on the strength of its Identity and Access Management (IAM) framework. Each tenant is hosted in a dedicated compartment, which acts as a logical and administrative boundary. Resources are isolated, policies are scoped, and access is strictly defined. IAM ensures that each tenant sees only what they are supposed to see and nothing more.

With compartments, policies, and groups, providers can implement fine-grained access control while still maintaining a clear operational model.
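A minimal sketch of that tenant-onboarding pattern, assuming the OCI Python SDK; compartment, group, and policy names are illustrative.

```python
# A minimal per-tenant isolation sketch: one compartment per tenant, with a
# policy scoping the tenant's admin group to that compartment only.
import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

tenant = identity.create_compartment(
    oci.identity.models.CreateCompartmentDetails(
        compartment_id=config["tenancy"],  # parent: the root compartment
        name="tenant-ministry-a",          # illustrative tenant name
        description="Isolated compartment for tenant Ministry A",
    )
).data

# Statements are scoped so the tenant's admins manage resources only inside
# their own compartment and see nothing else.
identity.create_policy(
    oci.identity.models.CreatePolicyDetails(
        compartment_id=config["tenancy"],
        name="tenant-ministry-a-admins",
        description="Tenant admins manage only their own compartment",
        statements=[
            "Allow group ministry-a-admins to manage all-resources in compartment tenant-ministry-a"
        ],
    )
)
```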

[Figure: Oracle Compute Cloud@Customer hosting service provider model]

On the networking side, Virtual Cloud Networks (VCNs) are provisioned per tenant. If connectivity is required between VCNs – let’s say, for shared services or for intercommunication – Dynamic Routing Gateways (DRGs) are used to establish secure and controlled interconnections. This approach allows for scalable, tenant-aware architectures without compromising performance or sovereignty.
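The same pattern can be sketched for the network layer, again assuming the OCI Python SDK; the CIDR block and compartment OCID are hypothetical.

```python
# Per-tenant networking sketch: a dedicated VCN plus a DRG as the controlled
# interconnection point (e.g., toward a shared-services VCN).
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)
compartment_id = "ocid1.compartment.oc1..tenantA"  # hypothetical tenant compartment

vcn = vnet.create_vcn(oci.core.models.CreateVcnDetails(
    cidr_block="10.10.0.0/16",
    compartment_id=compartment_id,
    display_name="tenant-a-vcn",
)).data

drg = vnet.create_drg(oci.core.models.CreateDrgDetails(
    compartment_id=compartment_id,
    display_name="tenant-a-drg",
)).data

vnet.create_drg_attachment(oci.core.models.CreateDrgAttachmentDetails(
    drg_id=drg.id,
    vcn_id=vcn.id,
    display_name="tenant-a-vcn-attachment",
))
```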

C3I is Ready for AI – GPU Expansion Racks

C3I is not just built for traditional workloads. It is also designed to support next-generation applications, including those that require hardware acceleration. Currently, through dedicated GPU expansion racks, Oracle partners can add up to 48 NVIDIA L40S GPUs to a single C3I deployment. These GPUs are integrated into the system’s high-speed network and storage architecture, making them available to tenants just like any other OCI resource.

This capability allows Hosting Service Providers to offer GPU-as-a-Service directly to public-sector clients – ideal for AI, ML, and data analytics workloads that must remain within national borders. All resources are managed through the same local OCI control plane, keeping everything under the same compliance and operational framework.
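As a rough sketch of what provisioning such GPU capacity for a tenant might look like via the standard SDK; the shape name, availability domain, and OCIDs are hypothetical placeholders, since actual L40S shape names depend on the deployment.

```python
# GPU-as-a-Service sketch: launching a GPU shape for a tenant through the
# local OCI control plane. Shape name, AD, image, and subnet are hypothetical.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

compute.launch_instance(oci.core.models.LaunchInstanceDetails(
    availability_domain="AD-1",                        # hypothetical
    compartment_id="ocid1.compartment.oc1..tenantA",   # hypothetical
    shape="VM.GPU.L40S.4",                             # hypothetical shape name
    display_name="tenant-a-ml-training",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..gpu-image",         # hypothetical
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..tenantA",         # hypothetical
    ),
))
```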

The sensitive nature of government data demands absolute sovereignty. With Oracle C3I, sovereign AI becomes a reality.

Red Hat OpenShift Support

For Oracle Partners hosting public sector tenants on C3I, delivering enterprise-grade container platforms is critical. That’s why C3I fully supports Red Hat OpenShift, enabling end-customers to run their containerized workloads with confidence and flexibility.

OpenShift brings a comprehensive Kubernetes-based platform with advanced features like developer tools, integrated CI/CD pipelines, and robust security controls. By running OpenShift on C3I, customers benefit from a sovereign, isolated environment that meets strict regulatory demands, while leveraging the rich ecosystem and productivity of Red Hat’s market-leading container platform.

A Sovereign Platform That Grows With You

C3I starts with a strong baseline: 552 cores, 6.7 TB of RAM, and 150 TB of storage. But it doesn’t stop there. The platform can scale to 6,072 cores, 73.7 TB of memory, 3.65 PB of high-capacity storage, and 1.2 PB of high-performance storage.

Unlocking a New Business Model for Oracle Partners

For Oracle Partners, C3I creates a new type of service opportunity. Instead of simply reselling cloud subscriptions, they can operate a sovereign cloud environment, offering secure, isolated, and scalable hosting to public sector clients. It is a cloud environment you can trust, built for those who need to guarantee data residency and operational autonomy.

With C3I, Oracle provides the tools. Now it is time for partners to build the services.

Rethinking Digital Sovereignty – A Response to the Public Cloud Critique

I understand it, believe me. The public cloud promised a lot: speed, scale, flexibility. But over time, cracks have appeared. Bills grow faster than workloads and compliance becomes harder, not easier. And some applications never really fit, especially those that demand low latency or strict control.

So, we are told, companies are pulling workloads back from the public cloud. These reverse cloud migrations are also known as cloud repatriation. You have to understand, it is not a reversal of digital transformation and the abandoning of public cloud – it’s a correction. A realignment based on experience, governance needs, and financial pressure.

But the answer isn’t to go backward just because your expectations have not been met. The challenge is to retain the benefits of the cloud – automation, elasticity, operational efficiency – while regaining the control that is often lost in the public model.

Moving away from the public cloud doesn’t have to mean giving up the cloud. The real question is: how do we keep what worked and fix what didn’t?

Oracle Compute Cloud@Customer (C3) was built precisely for that purpose. It brings Oracle’s public cloud infrastructure and tooling into your data center, under your governance, with the same APIs, security, and operational model. What follows is how C3 directly addresses the core reasons driving repatriation and why many enterprises are choosing a more strategic hybrid path forward. C3 changes everything.

Cost Control Without Surprises

Ask any IT leader what drove their move to the cloud, and chances are “cost savings” is on the list. Ask them what drove them back, and they will likely say “cost surprises.” 🙂

Public cloud can scale, but it also scales your bill. Between data egress fees, idle VM costs, and unpredictable licensing, many organizations find their cloud TCO spiraling. Oracle Cloud Infrastructure – in this scenario, Oracle Compute Cloud@Customer – changes the equation. It delivers the OCI experience on-premises, in a consumption-based OpEx model, but with predictability built in. No data egress. No hidden costs. No guessing. Just clear, auditable resource usage within your own data center.

Performance Without Compromise

Latency is often a business risk. Trading platforms, AI inference, and high-speed transaction systems all demand millisecond or sub-millisecond responsiveness. But in the public cloud, compute and data are often separated across zones or regions.

With C3, you bring compute right to where your data lives. Ultra-low latency, high-throughput workloads no longer need to be shoehorned into far-off regions. The cloud comes to you, backed by high-performance storage, native GPU options, and OCI’s virtual cloud networking.

Data Sovereignty, Security & Compliance – Rebuilt for Reality

Oracle C3 provides on-premises infrastructure, fully managed by Oracle, but entirely controlled by you. Data never leaves your facility unless you allow it. Access is managed through Operator Access Control, which gives you precise control over who can log in, when, and for what. Encryption at rest, in motion, and during access? Built in. Full audit trails? Native. That is the level of control regulators expect and enterprises now demand.

Governance, Visibility & Control

One of the hidden challenges of public cloud? Shadow IT. Teams spin up services without oversight, leading to risks in compliance, billing, and security posture.

With Oracle C3, everything runs within the bounds of your governance framework. You control IAM, compartmentalization, policy enforcement, tagging, metering, and quotas. It is the same control plane as OCI, so your security posture doesn’t depend on where the workload runs.

Operational Resilience You Actually Own

Let’s be honest: handing over infrastructure management can reduce operational overhead, but it can also mean giving up visibility, scheduling flexibility, and recovery control.

Oracle Compute Cloud@Customer delivers the best of both worlds. Oracle manages the infrastructure lifecycle, from firmware updates to patching. But you define the maintenance windows and the failover behaviour. DR scenarios, backup policies, hardware separation – they are yours to orchestrate.

What Is Operator Access Control?

Oracle Operator Access Control (OpCtl) is a feature used in products like Oracle Compute Cloud@Customer (C3) and Exadata Cloud@Customer, designed to give customers:

  • Explicit approval over Oracle’s administrative access

  • Time-bound, purpose-specific access windows

  • Comprehensive logging and session recording

  • Segregation of duties and multi-party authorization

So, before any Oracle operator can access the C3 environment for maintenance, updates, or troubleshooting, the customer must approve the request, define the time window, and scope the level of access permitted. All sessions are fully audited, with logs available to the customer for compliance and security reviews. This ensures that sensitive workloads and data remain under strict governance, aligning with zero-trust principles and regulatory requirements. 
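To illustrate the semantics (not the actual OCI API), here is a conceptual model of that approval flow in plain Python; class and field names are invented for illustration only.

```python
# A conceptual model of the Operator Access Control flow described above.
# This is NOT the OCI SDK API; it only illustrates the approval semantics:
# time-bound, purpose-scoped, customer-approved, fully logged.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    operator: str
    purpose: str
    window: timedelta
    approved: bool = False
    audit_log: list = field(default_factory=list)

    def approve(self, approver: str):
        self.approved = True
        self.audit_log.append(f"{datetime.utcnow()}: approved by {approver} "
                              f"for '{self.purpose}', window {self.window}")

    def can_access(self, started: datetime, now: datetime) -> bool:
        # Access requires prior approval and must fall inside the window.
        return self.approved and now - started <= self.window

req = AccessRequest("oracle-operator-17", "apply firmware patch", timedelta(hours=4))
req.approve("customer-security-officer")
print(req.can_access(datetime.utcnow(), datetime.utcnow()))  # True, inside window
```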

[Diagram: your tenancy in an OCI region and how it connects to Compute Cloud@Customer in your data center]

So, in practice, you can say:

“No one from Oracle can access my infrastructure unless I approve it, for a specific task, at a specific time.”

This is an excellent feature and tool for operational governance, auditability, and security assurance.

If you think about the U.S. CLOUD Act, then OpCtl, in my opinion, strengthens your legal and practical posture since you control the external access to the C3 systems. Additionally, you can provide proof and logs that no access occurred without your approval.

Let’s Think Differently. Give It A Try!

A Swiss professor recently outlined four conditions for digital sovereignty in the public cloud. The assumptions are valid, but they are also rooted in a narrow view of how the cloud has to work: if you want cloud, you have to give up control; and if you want sovereignty, you have to give up most of the cloud (services).

That binary thinking doesn’t hold up anymore. And it never should have. 

Let’s be clear: digital sovereignty is not about avoiding cloud, it’s about deploying it on your terms. And that’s exactly what Oracle Compute Cloud@Customer (C3) enables as a third path (besides public cloud and repatriation).

Let’s take the arguments one by one.

1. “Only unmodified open source software ensures sovereignty”

Yes, I agree, open standards matter. But sovereignty isn’t just about code transparency. It’s about control over where software runs, how it’s operated, and who has access.

With C3, you run any open-source stack you want, inside your own data center. But more importantly, you also control the platform it runs on. Compute, storage, and networking stay within your facility, under your governance. You decide the architecture, the patch cycle, and the integrations. And you do it without giving up cloud automation, elasticity, or DevOps tooling.

2. “Internal know-how must be retained”

Agreed. Sovereignty without competence is meaningless.

C3 supports the same APIs, SDKs, Terraform modules, and CLI as the Oracle public cloud. That means your teams build skills once and apply them everywhere – on-premises, in the public cloud, or across hybrid landscapes.

You keep operational knowledge in-house. You train on real cloud-native patterns. And you run them on infrastructure that belongs to you.

3. “Avoid proprietary, specialized services”

This is where things get nuanced.

Most enterprises don’t want to avoid modern services. They just want freedom of movement (aka portability). With C3, you are not locked into proprietary ecosystems. You get the full Oracle Cloud Infrastructure stack but deployed in your data center, on infrastructure fully under your legal and physical control.

Because the environment is API-compatible with OCI, you are not locked in – you are portable by design. Move workloads to Oracle public regions. Or any other cloud. Or don’t. It is your choice. I would call that leverage.

4. “SaaS without data export is unacceptable”

Right again. Exit strategy matters.

C3 isn’t SaaS. It’s IaaS and PaaS delivered as a service inside your firewall. And because you control the storage, the networking, and the OS stack, you always retain the ability to export your data by using open formats, standard tools, and your own access policies.

Want to back up to another system? Build cross-platform failover? Disconnect from Oracle entirely? No problem. Your data stays in your hands.
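For example, a data-export path can be as plain as streaming objects out with the standard SDK to storage you control; the bucket and object names below are illustrative.

```python
# Exit-path sketch: streaming data out of OCI Object Storage with the
# standard SDK, to any target you control. Names are illustrative.
import oci
import shutil

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

resp = object_storage.get_object(namespace, "sovereign-backups", "export.tar.gz")
with open("/secure/offsite/export.tar.gz", "wb") as f:
    shutil.copyfileobj(resp.data.raw, f)  # open format, your storage, your policy
```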

Final Thought

Cloud repatriation is happening for good reasons. But walking away from cloud entirely isn’t the answer. The better move is to rethink where the cloud belongs and who’s in control of it.

Oracle Compute Cloud@Customer gives you the cloud experience your teams want, with the sovereignty your business needs.

And today, that may be the single most strategic infrastructure choice you can make (besides Oracle’s EU Sovereign Cloud and Dedicated Cloud offerings).

If you are working in the public sector, have a look at this article: Enabling Public Sector Unity – How Oracle Alloy Could Power a Government Cloud and Cross-Agency Collaboration

Oracle Compute Cloud@Customer – The Sovereign Cloud Platform Europe Has Been Waiting For

Europe has always taken data privacy, neutrality, and independence seriously. Whether you are operating in government, healthcare, banking, or energy, the message is clear: sensitive workloads need to stay within national borders. However, sovereignty shouldn’t come at the expense of innovation, agility, or cost efficiency. This is exactly where Oracle Compute Cloud@Customer (C3) steps in.

With C3, you are not forced to choose between the benefits of public cloud and the control of on-prem infrastructure. You get both. Oracle brings a consistent, fully managed OCI experience directly into your data center or trusted hosting environment.

This is cloud designed for data residency and regulatory alignment, without compromise. Customers retain full operational control thanks to Oracle’s secure Operator Access Control and disconnected operating model, giving you full autonomy over who can access what and when. If you don’t want Oracle to touch it, they won’t.

But this isn’t just about compliance, it’s about enabling innovation. With C3, organizations can develop once and run anywhere. You can build modern applications on OCI using containers, Kubernetes, or virtual machines (VMs), and then deploy them on-prem with C3, in a public OCI region, or any hybrid setup. This gives developers and architects freedom, without forcing the business into compliance headaches.
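One way to picture this develop-once-run-anywhere model: the same automation function targets public OCI or an on-prem C3 rack purely through configuration. The profile names and the C3 endpoint below are hypothetical.

```python
# Develop once, run anywhere: identical code, different target, selected
# purely by configuration. Profiles and the C3 endpoint are hypothetical.
import oci

def list_instances(profile, endpoint=None):
    config = oci.config.from_file(profile_name=profile)
    kwargs = {"service_endpoint": endpoint} if endpoint else {}
    compute = oci.core.ComputeClient(config, **kwargs)
    return compute.list_instances(compartment_id=config["tenancy"]).data

public = list_instances("PUBLIC_OCI")                                      # public region
onprem = list_instances("C3_ONPREM", "https://compute.c3.dc.example.com")  # hypothetical
```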

Even more compelling: C3 is priced the same as the public OCI regions. No “on-prem premium.” Unlike other hyperscalers that charge more for bringing cloud services into your data center, Oracle keeps the economics consistent. That means you can deploy at scale wherever you need it, without blowing your IT budget. And because OCI is up to 60% cheaper than competitors – especially for IaaS-heavy workloads and managed Kubernetes – C3 becomes not just a compliance play, but a strategic cost advantage.

For organizations already running Exadata Cloud@Customer (ExaCC), the transition to C3 is seamless. You extend the same OCI architecture from your Oracle Database infrastructure to your full application landscape – compute, storage, network, containers, and more – all under one public OCI control plane. One architecture, one operational model, full sovereignty.

And for those looking to modernize full application stacks from databases to middleware to frontend services, C3 provides the flexibility to run both Oracle and open-source technologies.

Note: For those requiring the full breadth of OCI services in a sovereign, connected environment, Oracle also offers OCI Dedicated Region.

Oracle Compute Cloud@Customer Isolated – The Next Level of Sovereignty

Oracle has taken the concept of sovereign cloud one step further. With Oracle Compute Cloud@Customer Isolated (C3I), organizations can now run cloud-native workloads in a fully air-gapped environment, without any operational dependency on Oracle. No outbound connections. No Oracle-managed control plane. No shared infrastructure. Just full autonomy and local control. C3I is Oracle-owned and customer/partner-managed.

It’s a real, production-ready deployment model for mission-critical and highly regulated environments. Designed specifically for governments, defense, intelligence, and critical infrastructure operators like telcos, Compute Cloud@Customer Isolated addresses scenarios where even a standard sovereign cloud isn’t enough.

The platform runs the same core OCI services (compute, storage, networking, Kubernetes) but is completely disconnected from Oracle’s global cloud infrastructure. Everything is deployed on-premises in your trusted facility, and operated entirely by your own team or a national partner under your control. Oracle is not in the loop. No telemetry is sent back. No patching happens unless you initiate it.

For Europe, this matters. Regulations are tightening. Risk tolerance is dropping. And cloud decisions now sit under the spotlight of data strategy, digital self-determination, and public trust. With C3I, organizations don’t need to compromise. You can modernize legacy infrastructure, run secure workloads, and meet the strictest data protection laws without handing over operational control to a foreign hyperscaler.

[Image: Oracle Compute Cloud@Customer Isolated]

So if you’re building for maximum sovereignty, whether for a national security project, a classified analytics platform, or a regulated healthcare system, C3I gives you the control you need, without the complexity of building it all from scratch.

Note: For those requiring the full breadth of OCI services in a sovereign, air-gapped environment, Oracle also offers an Isolated Region. It delivers the complete OCI stack, including advanced PaaS and data services, fully disconnected and deployed inside your own data center. It’s the natural next step when C3I isn’t enough.

Cloud-Native at Home – Modernizing Legacy Workloads on C3

Whether you are building microservices, deploying containers with Kubernetes, or refactoring legacy applications, C3 gives you the flexibility and tools to modernize at your own pace without sending data to the public cloud.

For many organizations, this is especially relevant when looking at existing on-premises environments. C3 opens a new path for modernizing applications without a full lift-and-shift. You can gradually move critical services from traditional virtual machines into containers, adopt infrastructure-as-code practices, and standardize on CI/CD pipelines. All within a compliant, in-country environment that mirrors public OCI.

Using OCI services like OKE (Oracle Kubernetes Engine) on C3, teams can deploy cloud-native apps alongside traditional workloads. It is entirely possible to run a legacy database VM next to containerized microservices, with consistent networking, storage, and security policies across both. This hybrid model is ideal for customers who want to modernize existing applications incrementally, without taking unnecessary risks.
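A brief sketch of standing up such an OKE cluster through the same control plane, assuming the OCI Python SDK; the OCIDs and Kubernetes version are hypothetical.

```python
# OKE-on-C3 sketch: creating a cluster via the local OCI control plane.
# OCIDs and the Kubernetes version are hypothetical placeholders.
import oci

config = oci.config.from_file()
ce = oci.container_engine.ContainerEngineClient(config)

ce.create_cluster(oci.container_engine.models.CreateClusterDetails(
    name="modernization-cluster",
    compartment_id="ocid1.compartment.oc1..apps",  # hypothetical
    vcn_id="ocid1.vcn.oc1..appnet",                # hypothetical
    kubernetes_version="v1.29.1",                  # hypothetical version
))
# Legacy VMs and these containerized services then share the same VCN,
# IAM policies, and storage constructs on the same rack.
```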

For VMware and Nutanix customers, C3 provides a future-ready landing zone. You can continue to run VM-based workloads on OCI-compatible compute shapes and use that as the foundation to containerize where it makes sense. This avoids expensive rewrites or disruptive replatforming. Instead, C3 supports a phased modernization strategy.

Note: OKE on C3 is free. Standard OCI pricing for VM nodes applies. 

Oracle Compute Cloud@Customer Supports Red Hat OpenShift

Oracle Compute Cloud@Customer (C3) keeps expanding its capabilities for customers, and a key recent addition is support for Red Hat OpenShift.

Artificial Intelligence on Compute Cloud@Customer

With Oracle’s announcement in February 2025, customers can add NVIDIA GPUs to C3 deployments with the following key features:

  • Independent scaling of GPUs, compute, and storage: up to 48 L40S NVIDIA GPUs, 6,624 OCPUs with 80.4 TB of memory, and a mix of up to 3.65 PB of high-capacity storage and 1.2 PB of high-performance storage.
  • Powerful GPU VMs: up to four NVIDIA L40S GPUs, 108 Intel Xeon 8480+ CPU cores, 800-GB DDR5 memory, and 400 Gbps network bandwidth for the most demanding workloads.
  • Ultra-fast network connectivity: 800-Gbps data center connectivity that can directly connect an Exadata Cloud@Customer Machine to combine the power of GPUs with Oracle Database 23ai’s integrated AI Vector Search.

EU Sovereign Operations for Oracle Compute Cloud@Customer

In May 2025, Oracle announced the availability of Oracle EU Sovereign Operations for C3. This means that C3 can now also be operated from the EU Sovereign Cloud, with the same pricing and the same service you know from commercial OCI regions.

Previously, operations and automation for Compute Cloud@Customer were handled via global OCI control planes. With EU Sovereign Operations, that changes:

  • All automation and admin services now reside within Oracle’s EU Sovereign Cloud regions

  • Operations are managed by Oracle teams based in the EU, ensuring compliance

  • Hardware deployment and support is delivered by personnel authorized to work in the customer’s country

EU Sovereign Operations for Compute Cloud@Customer is offered with the control plane located in one of the Oracle EU Sovereign Cloud regions, currently either Madrid, Spain, or Frankfurt, Germany. The service is offered in European Union member countries and other select countries in Europe, and it delivers the same features, functions, value, and service level objectives (SLOs) as the Compute Cloud@Customer service operated with control planes in OCI public regions.

Last Comments

In short, Oracle Compute Cloud@Customer is not just a cloud, it’s your sovereign cloud. It gives enterprises the tools they need to stay compliant, stay competitive, and stay in control. And that is what the next generation of digital sovereignty should look like.

Sovereignty Without Stagnation And The Real Cost of Operational Autonomy

Everyone talks about sovereignty. But few talk about the trade-offs.
Across Europe, especially in Germany and Switzerland, operational autonomy is often seen as the gold standard for digital sovereignty. The idea: full control, no external dependencies, no surprises.

In theory, it’s a strong posture.
In practice? It can easily slow you down.

For highly regulated industries, it’s tempting to build walls around your systems to reduce exposure. But when operational autonomy becomes the central design principle, innovation suffers. You are no longer building for performance or scalability. You are building to minimize risk. And over time, that architecture becomes hard to evolve.

This is the balance we need to strike: Sovereignty without stagnation.

Autonomy Comes at a Cost

Operational autonomy/sovereignty means exactly what it says. It is the ability to run your digital environment independently, without reliance on foreign entities, external support teams, or global platforms. In regulated markets, that’s attractive. It means you control access, processes, and ultimately, risk.

But here’s the thing: autonomy isolates.

To maintain autonomy, many institutions move to self-managed stacks, siloed environments, or custom platforms that minimize external control, but also block external innovation.

Security updates? Slower.
Platform upgrades? Riskier.
Integration with modern SaaS or AI services? Most probably not.

In Germany and Switzerland, I have seen several projects stall for months. Not because the technology wasn’t ready, but because the operational model couldn’t support agile change. Teams were so focused on controlling every layer that they lost the ability to adopt new capabilities at speed.

Autonomy must not come at the cost of adaptability!

What really matters is who controls your operations:

  • Who can push updates to your systems?

  • Who manages escalation paths during outages?

  • Whose legal jurisdiction governs your support team?

This is the level of detail that regulators (and boards) now care about.
And yes, achieving this depth of control is hard. That is why many organizations default to “isolation”: they lock down their stack and cut themselves off (disconnect) from global services.

But this model only works for a while. Eventually, innovation pressure builds. AI, automation, cloud-native services – none of that fits cleanly into a closed system. Without a platform to safely absorb innovation, operational autonomy becomes a bottleneck, not a strength.

The Open Source Conversation – Freedom With Limits

Open source has always played an important role in reducing lock-in and increasing transparency. It gives you flexibility, choice, and in many cases even real control.

But we also need to acknowledge its limits, especially in enterprise environments.

Take the example of a Swiss industrial company. They run over 400 applications – a mix of off-the-shelf software, legacy platforms, and newer cloud-native solutions. They have adopted Kubernetes, Grafana, Prometheus, and open-source databases where it made sense. But they also rely on integrated enterprise systems for finance, HR, procurement, and logistics.

Could they replace every component with open source?
Maybe. But at what cost?

Who supports the platform during an audit?
Who integrates change management and compliance controls?
Who signs off on operational resilience?

This is where the promise of open source meets the reality of enterprise IT: not everything can or should be rebuilt just to reduce dependency. Open source is an important ingredient. But sovereignty also means being able to make informed choices, not ideological ones.

What I am seeing is this: teams spend months assembling monitoring stacks, security tools, compliance scripts, and so on, only to realize they have created something fragile, difficult to maintain, and sometimes completely undocumented for auditors.

The irony? In chasing autonomy, some organizations built systems less resilient than the platforms they were trying to avoid.

This is where pre-built sovereign cloud platforms can help. Not by locking you in, but by giving you compliance-aligned services that still let you move fast. With built-in logging, encryption, incident management, and support under local legal control, the platform handles the regulatory foundation. So your team can focus on what matters.

Isolation vs. Informed Independence

So, to summarize it, there are two paths organizations typically choose:

1. The Isolation Model

Control everything, self-manage infrastructure, and avoid foreign providers. This delivers maximum autonomy but at the cost of agility. Teams fall behind on updates, and integration becomes painful. Yep, innovation slows. Eventually, autonomy becomes a form of isolation.

2. The Informed Independence Model

Use a sovereign cloud platform with built-in compliance, local operations, and enterprise-grade services. Maintain flexibility and adopt open standards. But don’t reinvent what is already secure and certified. This lets you meet regulatory requirements without stalling digital progress. An example would be the EU Sovereign Cloud from Oracle.

Control Matters – But So Does Momentum

Sovereignty is about control. But let’s not forget: innovation needs momentum.

You can’t afford to build static systems in a dynamic world.
Yes, autonomy protects you, but only if you can also evolve, scale, and adapt.

The real challenge in sovereign cloud isn’t just achieving control.
It is doing it without losing your ability to build and innovate.

And that’s the future we need to design for: Sovereignty, without stagnation.

Open-Source Can Help With Portability And Lock-In But It Is Not A Silver Bullet

We have spent years chasing cloud portability and warning against vendor lock-in. And yet, every enterprise I have worked with is more locked in today than ever. Not because they failed to use open-source software (OSS). Not because they made bad decisions, but because real-world architecture, scale, and business momentum don’t care about ideals. They care about outcomes.

The public cloud promised freedom. APIs, managed services, and agility. Open-source added hope. Kubernetes, Terraform, Postgres. Tools that could, in theory, run anywhere. And so we bought into the idea that we were building “portable” infrastructure. That one day, if pricing changed or strategy shifted, we could pack up our workloads and move. But now, many enterprises are finding out the truth:

Portability is not a feature. It is a myth: for most large organizations, portability is a unicorn, appealing in theory but elusive in reality.

Let me explain. But before I do, let’s talk about interclouds again.

Remember Interclouds?

Interclouds, once hyped as the answer to cloud portability (and lock-in), promised a seamless way to abstract infrastructure across providers, enabling workloads to move freely between clouds. In theory, they would shield enterprises from vendor dependency by creating a uniform control plane and protocols across AWS, Azure, GCP, OCI and beyond.

[Figure: David Bernstein’s intercloud concept]

Note: An idea and concept that was discussed in 2012. It is 2025, and not much has happened since then.

But in practice, intercloud platforms failed to solve the lock-in problem because they only masked it, not removed it. Beneath the abstraction layer, each provider still has its own APIs, services, network behaviors, and operational peculiarities.

Enterprises quickly discovered that you can’t abstract your way out of data gravity, compliance policies, or deeply integrated PaaS services. Instead of enabling true portability, interclouds just delayed the inevitable realization: you still have to commit somewhere.

The Trigger Nobody Plans For

Imagine you are running a global enterprise with 500 or 1,000 applications. They span two public clouds. Some are modern, containerized, and well-defined in Terraform. Others are legacy, fragile, lifted and shifted years ago in a hurry. A few run in third-party SaaS platforms.

Then the call comes: “We need to exit one of our clouds. Legal, compliance, pricing. Doesn’t matter why. It has to go.”

Suddenly, that portability you thought you had? It is smoke. The Kubernetes clusters are portable in theory, but the CI/CD tooling, monitoring stack, and security policies are not. Dozens of apps use PaaS services tightly coupled to their original cloud. Even the apps that run in containers still need to be re-integrated, re-tested, and re-certified in the new environment.

This isn’t theoretical. I have seen it firsthand. The dream of being “cloud neutral” dies the moment you try to move production workloads – at scale, with real dependencies, under real deadlines.

Open-Source – Freedom with Strings Attached

It is tempting to think that open-source will save you. After all, it is portable, right? It is not tied to any vendor. You can run it anywhere. And that is true on paper.

But the moment you run it in production, at enterprise scale, a new reality sets in. You need observability, governance, upgrades, SLAs. You start relying on managed services for these open-source tools. Or you run them yourself, and now your internal teams are on the hook for uptime, performance, and patching.

You have simply traded one form of lock-in for another: the operational lock-in of owning complexity.

So yes, open-source gives you options. But it doesn’t remove friction. It shifts it.

The Other Lock-Ins No One Talks About

When we talk about “avoiding lock-in”, we usually mean avoiding proprietary APIs or data formats. But in practice, most enterprises are locked in through completely different vectors:

Data gravity makes it painful to move large volumes of information, especially when compliance and residency rules come into play. The real issue is the latency, synchronization, and duplication challenges that come with moving data between clouds.

Tooling ecosystems create invisible glue. Your CI/CD pipelines, security policies, alerting, cost management. These are all tightly coupled to your cloud environment. Even if the core app is portable, rebuilding the ecosystem around it is expensive and time-consuming.

Skills and culture are rarely discussed, but they are often the biggest blockers. A team trained to build in cloud A doesn’t instantly become productive in cloud B. Tooling changes. Concepts shift. You have to retrain, re-hire, or rely on partners.

So, the question becomes: is lock-in really about technology or inertia (of an enterprise’s IT team)?

Data Gravity

Data gravity is one of the most underestimated forces in cloud architecture, whether you are using proprietary services or open-source software. The idea is simple: as data accumulates, everything else – compute, analytics, machine learning, and governance – tends to move closer to it.

In practice, this means that once your data reaches a certain scale or sensitivity, it becomes extremely hard to move, regardless of whether it is stored in a proprietary cloud database or an open-source solution like PostgreSQL or Kafka.

With proprietary platforms, the pain comes from API compatibility, licensing, and high egress costs. With open-source tools, it is about operational entanglement: complex clusters, replication lag, security hardening, and integration sprawl.

Either way, once data settles, it anchors your architecture, creating a gravitational pull that resists even the most well-intentioned portability efforts.

The Cost of Chasing Portability

Portability is often presented as a best practice. But there is a hidden cost.

To build truly portable applications, you need to avoid proprietary features, abstract your infrastructure, and write for the lowest common denominator. That often means giving up performance, integration, and velocity. You are paying an “insurance premium” for a theoretical future event, like a cloud exit or vendor failure, that may never come.

Worse, in some cases, over-engineering for portability can slow down innovation. Developers spend more time writing glue code or dealing with platform abstraction layers than delivering business value.

If the business needs speed and differentiation, this trade-off rarely holds up.

So… What Should We Do?

Here is the hard truth: lock-in is not the problem. Lack of intention is.

Lock-in is unavoidable. Whether it is a cloud provider, a platform, a SaaS tool, or even an open-source ecosystem. You are always choosing dependencies. What matters is knowing what you are committing to, why you are doing it, and what the exit cost will be. That is where most enterprises fail.

And let us be honest for a moment. A lot of enterprises call it lock-in because their past strategic decision doesn’t feel right anymore. And then they blame their “strategic” partner.

The better strategy? Accept lock-in, but make it intentional. Know your critical workloads. Understand where your data lives. Identify which apps are migration-ready and which ones never will be. And start building the muscle of exit-readiness. Not for all 1,000 apps, but for the ones that matter most.
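A toy sketch of what that exit-readiness muscle could look like as code; the criteria and app inventory are invented purely for illustration.

```python
# Toy "intentional lock-in" classifier: score apps by exit readiness so the
# portability effort goes where it matters. Criteria are illustrative.
apps = [
    {"name": "payments-api",  "containerized": True,  "uses_proprietary_paas": False, "critical": True},
    {"name": "legacy-erp",    "containerized": False, "uses_proprietary_paas": True,  "critical": True},
    {"name": "marketing-cms", "containerized": True,  "uses_proprietary_paas": True,  "critical": False},
]

def exit_readiness(app: dict) -> str:
    if app["containerized"] and not app["uses_proprietary_paas"]:
        return "migration-ready"
    if app["containerized"]:
        return "portable-with-rework"  # re-integrate, re-test, re-certify
    return "staying-put"               # moves only under budget/compliance pressure

for app in sorted(apps, key=lambda a: not a["critical"]):  # critical apps first
    print(f'{app["name"]:15} {exit_readiness(app)}')
```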

True portability isn’t binary. And in most large enterprises, it only applies to the top 10–20% of apps that are already modernized, loosely coupled, and containerized. The rest? They are staying where they are until there is a budget, a compliance event, or a crisis.

Avoiding U.S. Public Clouds And The Illusion of Independence

While independence from the U.S. hyperscalers and the potential risks associated with the CLOUD Act may seem like a compelling reason to adopt open-source solutions, it is not always the silver bullet it appears to be. The idea is appealing: running your infrastructure on open-source tools in order to avoid being dependent on any single cloud provider, especially those based in the U.S., whose data may be subject to foreign government access under the CLOUD Act.

However, this approach introduces its own set of challenges.

First, by attempting to cut ties with U.S. providers, organizations often overlook the global nature of the cloud. Most open-source tools still rely on cloud providers for deployment, support, and scalability. Even if you host your open-source infrastructure on non-U.S. clouds, the reality is that many key components of your stack, like databases, messaging systems, or AI tools, may still be indirectly influenced by U.S.-based tech giants.

Second, operational complexity increases as you move away from managed services, requiring more internal resources to manage security, compliance, and performance. Rather than providing true sovereignty, the focus on avoiding U.S. hyperscalers may result in an unintended shift of lock-in from the provider to the infrastructure itself, where the trade-off is a higher cost in complexity and operational overhead.

Top Contributors To Key Open-Source Projects

U.S. public cloud providers like Google, Amazon, Microsoft, Oracle and others are not just spectators in this space. They’re driving the innovation and development of key projects:

  1. Kubernetes remains the flagship project of the CNCF, offering a robust container orchestration platform that has become essential for cloud-native architectures. The project has been significantly influenced by a variety of contributors, with Google being the original creator.
  2. Prometheus, the popular monitoring and alerting toolkit, was created by SoundCloud and is now widely adopted in cloud-native environments. The project has received significant contributions from major players, including Google, Amazon, Facebook, IBM, Lyft, and Apple. 
  3. Envoy, a high-performance proxy and communication bus for microservices, was developed by Lyft, with broad support from Google, Amazon, VMware, and Salesforce.
  4. Helm is the Kubernetes package manager, designed to simplify the deployment and management of applications on Kubernetes. It has a strong community with contributions from Microsoft (via Deis, which they acquired), Google, and other cloud providers.
  5. OpenTelemetry provides a unified standard for distributed tracing and observability, ensuring applications are traceable across multiple systems. The project has seen extensive contributions from Google, Microsoft, Amazon, Red Hat, and Cisco, among others. 

While these projects are open-source and governed by the CNCF (Cloud Native Computing Foundation), the influence of these tech companies cannot be overstated. They not only provide the tools and resources necessary to drive innovation but also ensure that the technologies powering modern cloud infrastructures remain at the cutting edge of industry standards.

Final Thoughts

Portability has become the rallying cry of modern cloud architecture. But real-world enterprises aren’t moving between clouds every year. They are digging deeper into ecosystems, relying more on managed services, and optimizing for speed.

So maybe the conversation shouldn’t be about avoiding lock-in but about managing it. Perhaps more about understanding it. And, above all, owning it. The problem isn’t lock-in itself. The problem is treating lock-in like a disease, rather than what it really is: an architectural and strategic trade-off.

This is where architects and technology leaders have a critical role to play. Not in pretending we can design our way out of lock-in, but in navigating it intentionally. That means knowing where you can afford to be tightly coupled, where you should invest in optionality, and where it is simply not worth the effort to abstract away.