Cloud Exit Triggers – What Happens When the Exit Button Isn’t Optional?

It is becoming clearer by the day: geopolitical realities are forcing CIOs and regulators to revisit their cloud strategy, not just for performance or innovation, but for continuity, legal control, and sovereignty. The past few years have been a story of cloud-first, then cloud-smart, then cloud repatriation. The next chapter is about cloud control. And with the growing influence of U.S. legislation like the CLOUD Act, many in Europe’s regulated sectors are starting to ask: what happens when we need to exit?

Now add another layer: what if your cloud provider is still technically and legally subject to a foreign jurisdiction, even when the cloud lives in your own country and your own data centers?

That’s the fundamental tension with models like Oracle Alloy (or OCI Dedicated Region): a promising construct that brings full public cloud capabilities into local hands, but with a control plane and infrastructure still operated by Oracle itself. So what if something changes (for example, politically) and you need to exit?

Let’s explore what that exit could look like in practice, and whether Oracle’s broader portfolio provides a path forward for such a scenario.

Local Control – How Far Does Oracle Alloy Really Go?

Oracle Alloy introduces a compelling model for delivering public cloud services with local control. For providers like cloud13 (that’s the fictitious company I am using for this article), this means the full OCI service catalogue can run under the cloud13 brand, with customer relationships, onboarding, and support all handled locally. Critically, the Alloy control plane itself is deployed on-premises in cloud13’s own data center, not remotely managed from an Oracle facility. This on-site architecture ensures that operational control, including provisioning, monitoring, and lifecycle management, remains firmly within Swiss borders.

But while the infrastructure and control plane are physically hosted and operated by cloud13, Oracle still provides and maintains the software stack. The source code, system updates, telemetry architecture, and core service frameworks are still Oracle-owned IP, and subject to Oracle’s global governance and legal obligations. 

Please note: Even in disconnected Alloy scenarios, update mechanisms or security patches may require periodic Oracle coordination. Understanding how these touchpoints are logged and audited will be crucial in high-compliance sectors.

Oracle Alloy

So, while cloud13 ensures data residency, operational proximity, and sovereign service branding, the legal perimeter around the software stack itself may still inherit external jurisdictional influence.

For some sectors, this hybrid control model strikes the right balance. But for others, particularly those anticipating geopolitical triggers (however unlikely) or regulatory shifts, it raises a question: what if you need to exit Alloy entirely?

What a Cloud Exit Really Costs – From Oracle to Anywhere

Let’s be honest and realistic: moving cleanly from Oracle Cloud Infrastructure (OCI) to a hyperscaler like AWS or Azure is anything but simple. OCI’s services are deeply intertwined. If you are running Oracle-native PaaS or database services, you are looking at significant rework – sometimes a full rebuild – to get those workloads running smoothly in a different cloud ecosystem.

On top of that, data egress fees can quickly pile up, and when you add the cost and time of re-certification, adapting security policies, and retraining your teams on new tools, the exit suddenly becomes expensive and drawn out.
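To make these cost drivers concrete, here is a minimal sketch of how an exit-cost estimate might be assembled. All rates, the per-GB egress price, the rework effort per workload, the day rate, and the re-certification figure, are purely illustrative assumptions, not vendor list prices.

```python
# Illustrative cloud-exit cost model. Every rate below is a placeholder
# assumption, not an Oracle, AWS, or Azure list price.

def estimate_exit_cost(data_tb, workloads, rebuild_share,
                       egress_per_gb=0.05, rework_days_per_workload=20,
                       day_rate=1200, recert_fixed=250_000):
    """Rough one-off exit cost in the same currency as the rates."""
    egress = data_tb * 1024 * egress_per_gb            # one-off data transfer
    rework = workloads * rebuild_share * rework_days_per_workload * day_rate
    return egress + rework + recert_fixed              # plus re-certification

# Example: 200 TB of data, 80 workloads, 60% needing significant rework.
total = estimate_exit_cost(data_tb=200, workloads=80, rebuild_share=0.6)
print(f"Estimated exit cost: {total:,.0f}")
```

Even with these modest placeholder numbers, the rework term dominates the egress fees by two orders of magnitude, which matches the point above: the migration effort, not the data transfer, is usually what makes an exit expensive.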

That brings us to the critical question: if you are already running workloads in Oracle Alloy, what are your realistic exit paths, especially on-premises?

Going the VMware, Nutanix, or Platform9 route doesn’t solve the problem much either. Sure, they offer a familiar infrastructure layer, but they don’t come close to the breadth of integrated platform services Oracle provides. Every native service dependency you have will need to be rebuilt or replaced.

Then There’s Azure Local and Google Distributed Cloud

Microsoft and Google both offer sovereign cloud variants that come in connected and disconnected flavours.

While Azure Local and Google Distributed Cloud are potential alternatives, they behave much like public cloud platforms. If your workloads already live in Azure or Google Cloud, these services might offer a regulatory bridge. But if you are not already in those ecosystems, and like in our case, are migrating from an Oracle-based platform, you are still facing a full cloud migration.

Yes, that’s rebuilding infrastructure, reconfiguring services, and potentially rearchitecting dozens or even hundreds of applications.

And it’s not just about code. Legacy apps often depend on specific runtimes, custom integrations, or licensed software that doesn’t map easily into a new stack. Even containerised workloads need careful redesign to match new orchestration, security, and networking models. Multiply that across your application estate, and you are no longer talking about a pivot.

You are talking about a multi-year transformation programme.

That’s before you even consider the physical reality. To run such workloads locally (imagine repatriation or a dual-vendor strategy), you would need enough data center space, power, cooling, network integration, and a team that can operate it all at scale. These alternatives aren’t just expensive to build. They also require a mature operational model and skills that most enterprises simply don’t have ready.

One cloud is already challenging enough. Now, imagine a multi-cloud setup and pressure to migrate.

From Alloy to Oracle Compute Cloud@Customer Isolated – An Exit Without Downtime

Oracle’s architecture allows customers to move their cloud workloads from Alloy into an Oracle Compute Cloud@Customer Isolated environment (C3I) with minimal disruption. Because these environments use the exact same software stack and APIs as the public OCI cloud, workloads don’t need to be rewritten or restructured. You maintain the same database services, the same networking constructs, and the same automation frameworks.

This makes the transition more of a relocation than a rebuild. Everything stays intact – your code, your security model, your SLAs. The only thing that changes is the control boundary. In the case of C3I, Oracle has no remote access. All infrastructure remains physically isolated, and operational authority rests entirely with the customer.

Oracle Compute Cloud@Customer Isolated

By contrast, shifting to another public or private cloud requires rebuilding and retesting. And while VMware or similar platforms might accommodate general-purpose workloads, they still lack the integrated cloud service experience.

Note: Oracle Compute Cloud@Customer offers OCI’s full IaaS and a subset of PaaS services.

While C3I doesn’t yet deliver the full OCI portfolio, it includes essential services like Oracle Linux, Autonomous Database, Vault, IAM, and Observability & Management, making it viable for most regulated use cases.

Alloy as a Strategic Starting Point

So, should cloud13 even start with Alloy?

That depends on the intended path. For some, Alloy is a fast way to enter the market, leveraging OCI’s full capabilities with local branding and customer intimacy. But it should never be a one-way road. The exit path, no matter what the destination is, must be designed, validated, and ready before geopolitical conditions force a decision.

This isn’t a question of paranoia. It’s good cloud design. You want to have an answer for the regulators. You want to be prepared and feel safe.

The customer experience must remain seamless. And when required, the workloads should ideally move within the same cloud logic, with the same automation and as many of the same platform services as possible.

Could VMware Be Enough?

For some customers, VMware might remain a logical choice, particularly where traditional applications and operational familiarity dominate. It enables a high degree of portability, and for infrastructure-led workloads, it’s an acceptable solution. But VMware environments lack integrated PaaS. You don’t get Autonomous DB. You get limited monitoring, logging, or modern analytics services. You don’t get out-of-the-box identity federation or application delivery pipelines.

Ultimately, you are buying infrastructure, not a cloud.

The Sovereign Stack – C3I and Exadata Cloud@Customer

That’s why Oracle’s C3I, especially when paired with Exadata Cloud@Customer (ExaCC) or a future isolated variant of it, offers a more complete solution. It delivers the performance, manageability, and sovereignty that today’s regulated industries demand. It lets you operate a true cloud on your own terms – local, isolated, yet fully integrated with Oracle’s broader cloud ecosystem.

C3I may not yet fit every use case. Its scale and deployment model must match customer expectations. But for highly regulated workloads, and especially for organizations planning for long-term legal and geopolitical shifts, it represents the most strategic exit vector available.

Final Thought

Cloud exit should never be a last-minute decision. In an IT landscape where laws, alliances, and risks shift quickly, exit planning is not a sign of failure. It’s a mark of maturity.

Oracle’s unique ecosystem, from Alloy to C3I, is one of the few that lets you build with that maturity from day one.

Whether you are planning a sovereign cloud, or are already deep into a regulated workload strategy, now is the time to assess exit options before they are needed. Make exit architecture part of your initial cloud blueprint.

Why OCI Dedicated Region Is the Missing Piece for Agentic Workloads

In my last blog post, I explored how OCI Dedicated Region helps enterprises retrofit AI workloads into their existing data centers. We discussed how bringing Oracle’s cloud infrastructure on-premises addresses challenges such as GPU availability, latency, and data sovereignty, thereby removing many barriers to AI adoption.

Today, I want to take this further and explore the next wave of AI evolution, agentic AI, which not only responds to prompts but also takes autonomous actions. This isn’t just about having powerful models, it’s about embedding intelligence where it counts most: right next to your critical legacy systems.

The Rise of Agentic AI and Why It’s Different

Agentic AI represents a shift from passive AI tools to systems that can observe, decide, and act independently. Imagine AI agents that don’t just answer questions but manage workflows, orchestrate cloud resources, or automate incident response. This means giving AI the ability to interact with APIs, monitor real-time data streams, and adjust systems dynamically without human intervention.
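The observe-decide-act loop described above can be sketched in a few lines. This is a toy illustration of the control pattern only, not any specific agent framework; the metric source and the remediation hook are stand-ins for real telemetry streams and APIs.

```python
# Minimal observe-decide-act loop illustrating the agentic pattern.
# The observe/act callables are stand-ins for real telemetry and APIs.

def run_agent(observe, act, threshold, steps):
    """Poll a metric; trigger remediation whenever it crosses the threshold."""
    actions = []
    for _ in range(steps):
        value = observe()               # observe: read a live metric
        if value > threshold:           # decide: compare against policy
            actions.append(act(value))  # act: call a remediation hook
    return actions

# Example with a canned metric stream standing in for real telemetry.
stream = iter([10, 55, 30, 80])
result = run_agent(observe=lambda: next(stream),
                   act=lambda v: f"scale-out triggered at {v}",
                   threshold=50, steps=4)
print(result)  # remediations fire at 55 and at 80
```

The hard part in practice is not this loop; it is wiring `observe` and `act` to real enterprise systems with acceptable latency and auditability, which is exactly the integration problem discussed next.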

The challenge? Most organizations’ critical data and applications still live in legacy platforms or tightly controlled environments. These environments were never built with autonomous AI in mind. Simply putting agentic AI in the public cloud and hoping it will integrate smoothly is not realistic. The physical and architectural distance creates latency, security risks, and compliance headaches that slow down adoption.

Legacy Systems and the Limits of Retrofitting

In my previous article, I described how OCI Dedicated Region helps organizations retrofit their existing infrastructure to support AI workloads by providing cloud-native GPU compute and AI services on-premises. While this approach is a game changer for many pilot projects and inference jobs, agentic AI demands something more foundational.

Agentic AI needs to be deeply integrated into the operational fabric of an enterprise. It requires direct, low-latency connections to databases, enterprise resource planning systems, and mission-critical applications that govern day-to-day business. Integrating AI compute into existing traditional infrastructure is a good first step, but it frequently results in complicated networks and security setups that raise operational risks.

Beyond Retrofit – OCI Dedicated Region as a Fully-Integrated AI Platform

OCI Dedicated Region is not just an add-on for AI, it’s a cloud region deployed inside your data center, delivering the same cloud services and infrastructure as Oracle’s public cloud, but physically under your control. This means you get a fully operational cloud region with high-performance computing, GPU acceleration, storage, networking, and AI services – all seamlessly integrated and ready to connect with your existing systems.

This is a fundamental shift. Instead of adapting your legacy environment to AI, you now place a full cloud region right next to your workloads. The AI agents you deploy can access real-time data, interact with legacy applications through native APIs, and operate within your strict security and compliance boundaries.

This close proximity eliminates latency and trust issues that come with remote public cloud AI deployments. It also reduces the need for complex VPNs or data synchronization layers, making agentic AI not just possible but practical.

Why Proximity Matters for Autonomous AI

Agentic AI thrives on context and immediacy. The closer it is to the systems it manages, the better decisions it can make and the faster it can act. For instance, if an AI agent detects a fault in a manufacturing control system or a spike in financial transaction anomalies, it must respond quickly to minimize disruption.

Running these AI systems in a public cloud region thousands of miles away adds delays and potential security risks, which can be unacceptable in regulated industries or mission-critical environments. OCI Dedicated Region removes those barriers by bringing the cloud to you.

By combining cloud agility with on-premises control, you get a hybrid environment where agentic AI can operate with the speed, reliability, and security enterprises demand.

The Strategic Advantage of OCI Dedicated Region

Most organizations aren’t looking for AI experiments, they want to operationalize AI at scale and embed it within their core processes. OCI Dedicated Region provides the infrastructure foundation to do just that.

It offers enterprise-ready cloud services inside your data center, enabling agentic AI to interact naturally with legacy systems without requiring costly or risky migrations. This means AI-powered automation, orchestration, and decision-making become achievable realities instead of distant goals.

If you want to move beyond retrofitting and truly modernize your AI journey, keeping the cloud close to your data, and your data close to the cloud, is essential. OCI Dedicated Region delivers exactly that.

Retrofitting AI Workloads with OCI Dedicated Region

As AI adoption becomes a strategic priority across nearly every industry, enterprises are discovering that scaling these workloads isn’t as simple as adding more GPUs to their cloud bill. Public cloud platforms like AWS and Azure offer extensive AI infrastructure, but many organizations are now facing steep costs, unpredictable pricing models, and growing concerns about data sovereignty, compliance, long-term scalability, and operational complexity. There are also physical challenges: most enterprise data centers were never designed for the high power, cooling, and interconnect demands of AI infrastructure.

A typical GPU rack can draw between 40 and 100 kW, far beyond the 5-10 kW that traditional racks can handle. Retrofitting a legacy data center to support such density requires high-density power delivery, advanced cooling, reinforced flooring, low-latency networking, and highly parallel storage systems. The investment often ranges from $4-8M per megawatt for retrofits and up to $15M for greenfield builds. Even with this capital outlay, organizations still face integration complexity, deployment delays, and fragmented operations.
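A quick back-of-the-envelope calculation shows why this density gap matters. The sketch below simply applies the figures quoted in this article (10 kW traditional racks, mid-range GPU rack densities, $4-8M per MW retrofit, $11-15M per MW greenfield); the 2 MW facility size is an arbitrary example.

```python
# Back-of-the-envelope density and cost math using the article's figures.

def racks_for_budget(power_budget_kw, kw_per_rack):
    """How many racks of a given density fit into a fixed power budget."""
    return power_budget_kw // kw_per_rack

megawatt_kw = 1000
print(racks_for_budget(megawatt_kw, 10))   # traditional 10 kW racks per MW
print(racks_for_budget(megawatt_kw, 50))   # mid-range GPU racks per MW

# Retrofit vs greenfield cost for an example 2 MW facility, in $M.
mw = 2
retrofit = (4 * mw, 8 * mw)      # $4-8M per MW (retrofit)
greenfield = (11 * mw, 15 * mw)  # $11-15M per MW (greenfield)
print(f"Retrofit: ${retrofit[0]}-{retrofit[1]}M, "
      f"greenfield: ${greenfield[0]}-{greenfield[1]}M")
```

The same megawatt that feeds a hundred traditional racks feeds only a few dozen GPU racks, which is why the facility, not the hardware, is so often the limiting factor.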

This creates a challenging question: how can enterprises gain the agility, scale, and services of the public cloud for AI, without incurring its spiraling costs or rebuilding their entire infrastructure?

Oracle Cloud Infrastructure (OCI) Dedicated Region presents a compelling answer. It delivers the full OCI public cloud experience, including GPU compute, AI services, and cloud-native tooling, within your own data center. Oracle operates and manages the region, while you maintain full control. The result: public cloud performance and capabilities, delivered on-premises, without the compromises.

The Infrastructure Challenge of AI at Scale

AI workloads are no longer experimental, they are driving real business impact. Whether it’s training foundation models, deploying LLMs, or powering advanced search capabilities, these workloads require specialized infrastructure.

Unlike traditional enterprise IT, AI workloads place massive demands on power density, cooling, networking, and storage. GPU racks housing Nvidia H100 or A100 units can exceed 100 kW. Air cooling becomes ineffective, and liquid or hybrid cooling systems become essential. High-throughput, low-latency networks, such as 100/400 Gbps Ethernet or InfiniBand, are needed to connect compute clusters efficiently. AI workloads also rely heavily on large datasets and require high-bandwidth storage located close to compute.

In many enterprise data centers, this level of performance is simply out of reach. The facilities can’t provide the power or cooling, the racks can’t carry the weight, and the legacy networks can’t keep up.

The High Cost of Retrofitting for AI

For organizations considering bringing AI workloads back on-premises to manage costs, retrofitting is often seen as the obvious next step. But it rarely delivers the value expected.

Upgrading power infrastructure alone demands new transformers, PDUs, backup systems, and complex energy management. Cooling must shift from traditional air-based systems to liquid-based cooling loops or immersion techniques, requiring structural and spatial changes. Enterprise-grade racks are often too lightweight or densely packed for GPU servers, which can weigh over a ton each. Existing data center floors may need reinforcement.

Meanwhile, storage and networking systems must evolve to support I/O-intensive workloads. Parallel file systems, NVMe arrays, and tightly coupled fabrics are all essential, but rarely available in legacy environments. On top of that, most traditional data centers lack the cloud-native software stack needed for orchestration, security, observability, and automation.

Retrofits cost between $4-8M per megawatt. A greenfield build costs $11-15M per megawatt. These figures exclude operational overhead, integration timelines, training, and change management. For many, this is a non-starter.

OCI Dedicated Region – A True Public Cloud in Your Data Center

OCI Dedicated Region sidesteps these challenges. Oracle delivers a complete public cloud region, fully managed and operated by Oracle, inside your own facility. You get all the same infrastructure, services, and APIs as OCI’s public regions, with no loss of capability.

This includes GPU-accelerated compute (think of any Nvidia GPU), AI Services (like Data Science, Generative AI, and Vector Search), high-performance block and object storage, Oracle Autonomous Database, Exadata, analytics, low-latency networking, and full DevOps toolchains.

You also benefit from service mesh, load balancing, Kubernetes (OKE), serverless, observability, and zero-trust security services. From a developer perspective, it’s the same OCI experience – tools, SDKs, Terraform modules, and management consoles all work identically.

Importantly, data locality and sovereignty remain fully under your control. You manage access policies, audit trails, physical security, and compliance workflows.

Shifting from Capital Investment to Operational Efficiency

OCI Dedicated Region transforms infrastructure investment into an operating model. Rather than pouring capital into facilities, power systems, and integration, enterprises consume cloud resources on a predictable subscription basis. This eliminates hidden costs. No GPU spot market pricing, no surprise egress fees, no peak-hour surcharges.

Deployment is significantly faster compared to building or retrofitting infrastructure. Oracle delivers the region as a turnkey service, with pre-integrated compute, storage, AI, networking, and security. This minimizes integration complexity and accelerates time to value.

Operations are also simplified. OCI Dedicated Region maintains service parity with public OCI, which means your teams don’t need to adapt to different environments for hybrid or multi-cloud strategies. Everything runs on a consistent stack, which reduces friction and operational risk.

This model is particularly well-suited to highly regulated industries that require absolute control over data and infrastructure without losing access to modern AI tools.

Built for the Future of AI

OCI Dedicated Region supports a broad range of next-generation AI architectures and operational models. It enables federated AI, edge inference, and hybrid deployment strategies, allowing enterprises to place workloads where they make the most sense, without sacrificing consistency.

For instance, organizations can run real-time inference close to data sources at the edge (for example with Oracle Compute Cloud@Customer connected to your OCI Dedicated Region), while managing training and orchestration centrally. Workloads can burst into the public cloud when needed, leveraging OCI’s public regions without migrating entire stacks. Container-based scaling through Kubernetes ensures policy-driven elasticity and workload portability.
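The placement logic behind such a hybrid strategy can be sketched as a simple policy function. The tier names and thresholds below are illustrative assumptions for this sketch, not OCI configuration or product terminology.

```python
# Illustrative workload-placement policy for a hybrid edge / dedicated /
# public setup. Tier names and thresholds are assumptions, not OCI settings.

def place_workload(max_latency_ms, data_must_stay_local, needs_burst):
    """Pick a deployment tier from simple latency/sovereignty/burst rules."""
    if max_latency_ms < 10:
        return "edge"                 # e.g. compute placed at the data source
    if data_must_stay_local:
        return "dedicated-region"     # sovereignty outweighs elasticity
    if needs_burst:
        return "public-region"        # burst out when data rules allow it
    return "dedicated-region"         # default: keep it close to home

print(place_workload(5, True, False))    # latency-critical -> edge
print(place_workload(50, True, True))    # sovereign data -> dedicated-region
print(place_workload(50, False, True))   # burstable -> public-region
```

Real placement engines weigh many more dimensions (cost, GPU availability, data gravity), but the principle is the same: encode the constraints once, then let policy, not ad-hoc decisions, route each workload.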

As power and cooling demands continue to rise, most enterprise data centers will be unable to keep pace. OCI Dedicated Region is designed to absorb these demands, both technically and operationally.

Conclusion – Cloud Economics and Control Without Compromise

AI is quickly becoming a core part of enterprise infrastructure, and it’s exposing the limitations of both traditional data centers and conventional cloud models. Public cloud offers scale and agility, but often at unsustainable cost. On-prem retrofits are slow, expensive, and hard to manage.

OCI Dedicated Region offers a balanced alternative. It provides a complete cloud experience, GPU-ready and AI-optimized, within your own facility. You get the innovation, scale, and flexibility of public cloud, without losing control over data, compliance, or budget.

If your cloud bills are climbing and your infrastructure can’t keep up with the pace of AI innovation, OCI Dedicated Region is worth a serious look.

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation (VCF) 9.0 introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, VCF Automation is the future, and vCD will likely be retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.
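The subnet carve-out itself is plain CIDR arithmetic, which Python’s standard `ipaddress` module can illustrate. This is a generic sketch of the concept, not VCF’s actual provisioning API, and the address ranges and tier names are made up for the example.

```python
# Carving per-tier subnets out of a VPC address block: generic CIDR
# arithmetic with the stdlib, not VCF's actual provisioning API.
import ipaddress

vpc = ipaddress.ip_network("10.20.0.0/16")    # the VPC's address space
subnets = list(vpc.subnets(new_prefix=24))    # 256 possible /24 subnets

# Hand the first few /24s to workload tiers, e.g. web/app/db.
for name, subnet in zip(["web", "app", "db"], subnets):
    print(f"{name}: {subnet} ({subnet.num_addresses} addresses)")
```

What platforms like VCF 9.0 (or any hyperscaler VPC) add on top of this arithmetic is the automation: attaching NAT, firewall rules, and load balancers to each carved-out segment from one interface.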

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap, five years ago.

And even now, many of the services (add-ons) in VCF 9.0, like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

Why Switzerland Needs a Different Kind of Sovereign Cloud

Switzerland doesn’t follow. It observes, evaluates, and decides on its own terms. In tech, in policy, and especially in how it protects its data. That’s why the typical EU sovereign cloud model won’t work here. It solves a different problem, for a different kind of political union.

But what if we could go further? What if the right partner, one that understands vertical integration, local control, and legal separation, could build something actually sovereign?

That partner might just be Oracle.

Everyone is talking about the EU’s digital sovereignty push and Oracle responded with a serious answer: the EU Sovereign Cloud, which celebrated its second anniversary a few weeks ago. It’s a legally ring-fenced, EU-operated, independently staffed infrastructure platform. Built for sovereignty, not just compliance.

That’s the right instinct. But Switzerland is not the EU. And sovereignty here means more than “EU-only.” It means operations bound by Swiss law, infrastructure operated on Swiss soil, and decisions made by Swiss entities.

Oracle Alloy and OCI Dedicated Region – Sovereignty by Design

Oracle’s OCI Dedicated Region and the newer Alloy model were designed with decentralization in mind. Unlike traditional hyperscaler zones, these models bring the entire control plane on-premises, not just the data.

That allows for policy enforcement, tenant isolation, and lifecycle management to happen within the customer’s boundaries, without default exposure to centralized cloud control towers. In short, the foundation for digital sovereignty is already there.

But Switzerland, especially the public sector, expects more.

What Still Needs to Be Solved for Switzerland

Switzerland doesn’t just care about where data sits. It cares about who holds the keys, who manages the lifecycle, and under which jurisdiction they operate.

While OCI Dedicated Region and Alloy keep the control plane local, certain essential services, such as telemetry, patch delivery, and upgrade mechanisms, still depend on Oracle’s global backbone. In the Swiss context, even a low-level dependency can raise concerns about jurisdictional risk, including exposure to laws like the U.S. CLOUD Act.

Support must remain within Swiss borders. Sovereign regions that rely on non-Swiss teams or legal entities to resolve incidents still carry legal and operational exposure, even if support data is anonymized. Sovereignty includes not only local infrastructure, but also patch transparency, cryptographic root trust, and full legal separation from foreign jurisdictions.

Operational teams must be Swiss-based, with exceptions possible only at the tier 2 or tier 3 support level.

Avaloq Is Already Leading the Way

This isn’t just theory. Switzerland already has a working example: Avaloq, the Swiss financial technology provider, is running core workloads on OCI Dedicated Region.

These are not edge apps or sandbox environments. Avaloq supports mission-critical platforms for regulated financial institutions. If they trust Oracle’s architecture with that responsibility, the model is clearly feasible from a sovereignty, security, and compliance perspective.

Avaloq’s deployment shows that Swiss-regulated workloads can run securely, locally, and independently. And if one of Switzerland’s most finance-sensitive firms went down this path, others across government, healthcare, and infrastructure should be paying attention.

Sovereignty doesn’t mean reinventing everything. It means learning from those already building it.

The Bottom Line

Switzerland doesn’t need more cloud. It needs a cloud built for Swiss values: neutrality, autonomy, and legal independence.

Oracle is closer to that model than most. Its architecture is already designed for local control. Its EU Sovereign Cloud shows it understands the legal and operational dimensions of sovereignty. And with Avaloq already in production on OCI Dedicated Region, the proof is there.

The technology is ready. The reference customer is live.

What comes next is a question of commitment.

Why Sovereignty Needs Both Centralization and Decentralization

Europe’s cloud strategy is at a crossroads. There’s pressure to take control, to define sovereignty in infrastructure terms, and to reduce exposure to non-European hyperscalers. But somewhere along the way, the conversation fell into a binary trap: either centralize for control or decentralize for autonomy.

That’s not how modern systems work. And it’s not how sovereignty is going to work either.

Europe’s reliance on U.S. hyperscalers remains a security and sovereignty risk. While countries like France, Germany, and Italy invest in local providers, their efforts remain siloed. Without alignment, Europe is left with patchwork sovereignty.

I have said it many times already, but let me repeat it: sovereignty in the cloud isn’t a matter of choosing one path or the other. It’s about building an architecture that combines centralized coordination with decentralized execution. Anything else either fragments too fast or becomes too rigid to function in a sovereign context.

Centralization Is About Structure, Not Control

Centralization plays a role. A central governance framework helps define what sovereignty means: where data can move, who operates the infrastructure, how compliance is enforced, how identity is managed. Without some level of central alignment, sovereignty collapses into 27 different interpretations with no common ground.

This is where Europe still struggles. It is good at setting principles – GDPR, data residency, trusted cloud labels – but it lacks shared operational infrastructure. What it does have is fragmented: agencies, member states, and industry sectors each building their own versions of sovereignty with little interoperability between them.

Centralization brings order, but order alone isn’t resilience.

Sovereign Microzones: Fragmented by Design

The current trajectory in many parts of Europe is toward what could be called sovereign microzones: individually defined, locally controlled, often incompatible cloud environments built by national or sector-specific entities.

These microzones are born out of good intentions: protect critical data, maintain legal oversight, reduce dependency. But most of them don’t scale well. Each implementation introduces its own stack, its own compliance logic, and often its own interpretation of what “sovereign” really means.

In practice, this results in technical fragmentation and governance friction. Cross-border collaboration becomes harder. Data sharing between sectors stalls. Innovation slows as cloud-native capabilities are stripped back to meet narrow compliance targets.

Sovereignty was never meant to be a bunker. If these microzones can’t interoperate, they may become silos. Silos with flags on top.

Operational Autonomy vs. Strategic Realism

One idea gaining momentum is operational autonomy: ensuring that European workloads can continue to run even in the event of a political embargo or legal dispute with a non-EU provider. It’s a serious concept, grounded in real geopolitical concerns. The fear is that U.S. cloud vendors could, under extraordinary circumstances, be forced to restrict services in Europe. I get it.

But let’s be honest: the probability of a coordinated embargo cutting off hundreds of cloud regions across Europe is extremely low. Not zero. But close. The legal, economic, and political blowback would be enormous. The U.S. has more to lose than gain by treating Europe as an adversary in this space.

Still, operational autonomy has value. It’s about having options: the ability to reassign workloads, shift operational control, and decouple governance when needed. But building that autonomy doesn’t mean rejecting foreign technology outright. It means investing in layered sovereignty: trusted deployment models, contractual separation, technical isolation when necessary, and above all, control over the control plane.

Note: Operational autonomy is not the same as autarky.

What Oracle Cloud Enables

Oracle Cloud fits this model in ways that are often overlooked. Not because it’s European (it isn’t), but because it supports the technical and operational diversity that sovereignty requires.

With OCI Dedicated Region and the Cloud@Customer portfolio, institutions can run a full-featured Oracle Cloud environment, or a subset of it, inside their own data centers, with partial to complete control over operations, updates, and access.

Oracle’s EU Sovereign Cloud further separates European workloads from global infrastructure, with EU-based personnel and compliance boundaries. And unlike some providers, Oracle doesn’t require full-stack standardization to make it work. It’s open, modular, and designed for interoperability.

A good example of this approach is the new partnership between Oracle and Nextcloud. It brings together a leading European open-source collaboration platform with Oracle’s sovereign cloud infrastructure. The result is a deployment model where public sector organizations can run Nextcloud in a scalable, cloud-native environment while maintaining full data control and legal jurisdiction within the EU. It’s an antidote to sovereign fragmentation: a solution that respects both European values and operational pragmatism.

This kind of flexibility matters. It respects the complexity of European sovereignty rather than trying to erase it.

Conclusion: Not Either/Or – Both

Europe shouldn’t have to choose between centralization and decentralization. In fact, it can’t. Real sovereignty (political, technical, operational) lives in the tension between the two.

The false choice only leads to false confidence. The reality is more difficult, but also more durable: structure without rigidity, autonomy without fragmentation.

Sovereignty isn’t built in isolation. It’s coordinated. Together.