What If Cloud Was Never the Destination But Just One Chapter In A Longer Journey


For more than a decade, IT strategies were shaped by a powerful promise that the public cloud was the final destination. Enterprises were told that everything would eventually run there, that the data center would become obsolete, and that the only rational strategy was “cloud-first”. For a time, this narrative worked. It created clarity in a complex world and provided decision-makers with a guiding principle.

Hyperscalers accelerated digital transformation in ways no one else could have. Without their scale and speed, the last decade of IT modernization would have looked very different. But what worked as a catalyst does not automatically define the long-term architecture.

But what if that narrative was never entirely true? What if the cloud was not the destination at all, but only a chapter, a critical accelerator in the broader evolution of enterprise infrastructure? The growing evidence suggests exactly that. Today, we are seeing the limits of mono-cloud thinking and the emergence of something new: a shift toward adaptive platforms that prioritize autonomy over location.

The Rise and Fall of Mono-Cloud Thinking

The first wave of cloud adoption was almost euphoric. Moving everything into a single public cloud seemed not just efficient but inevitable. Infrastructure management became simpler, procurement cycles shorter, and time-to-market faster. For CIOs under pressure to modernize, the benefits were immediate and tangible.

Yet over time, the cost savings that once justified the shift started to erode. What initially looked like operational efficiency transformed into long-term operating expenses that grew relentlessly with scale. Data gravity added another layer of friction. While applications were easy to deploy, the vast datasets they relied on were not as mobile. And then came the growing emphasis on sovereignty and compliance. Governments, regulators, and even citizens and journalists started asking difficult questions about who ultimately controlled the data and under what jurisdiction.

These realities did not erase the value of the public cloud, but they reframed it. Mono-cloud strategies, while powerful in their early days, increasingly appeared too rigid, too costly, and too dependent on external factors beyond the control of the enterprise.

Multi-Cloud as a Halfway Step

In response, many organizations turned to multi-cloud. If one provider created lock-in, why not distribute workloads across two or three? The reasoning was logical. Diversify risk, improve resilience, and gain leverage in vendor negotiations.

But as the theory met reality, the complexity of multi-cloud began to outweigh its promises. Each cloud provider came with its own set of tools, APIs, and management layers, creating operational fragmentation rather than simplification. Policies around security and compliance became harder to enforce consistently. And the cost of expertise rose dramatically, as teams were suddenly required to master multiple environments instead of one.

Multi-cloud, in practice, became less of a strategy and more of a compromise. It revealed the desire for autonomy, but without providing the mechanisms to truly achieve it. What emerged was not freedom, but another form of dependency. This time, on the ability of teams to stitch together disparate environments at great cost and complexity.

The Adaptive Platform Hypothesis

If mono-cloud was too rigid and multi-cloud too fragmented, then what comes next? The hypothesis that is now emerging is that the future will be defined not by a place – cloud, on-premises, or edge – but by the adaptability of the platform that connects them.

Adaptive platforms are designed to eliminate friction, allowing workloads to move freely when circumstances change. They bring compute to the data rather than forcing data to move to compute, which becomes especially critical in the age of AI. They make sovereignty and compliance part of the design rather than an afterthought, ensuring that regulatory shifts do not force expensive architectural overhauls. And most importantly, they allow enterprises to retain operational autonomy even as vendors merge, licensing models change, or new technologies emerge.

This idea reframes the conversation entirely. Instead of asking where workloads should run, the more relevant question becomes how quickly and easily they can be moved, scaled, and adapted. Autonomy, not location, becomes the decisive metric of success.

Autonomy as the New Metric?

The story of the cloud is not over, but the chapter of cloud as a final destination is closing. The public cloud was never the endpoint; it was a powerful catalyst that changed how we think about IT consumption. The next stage is already being written, and it is less about destinations than about options.

Certain workloads will always thrive in a hyperscale cloud – think collaboration tools, globally distributed apps, or burst capacity. Others, especially those tied to sovereignty, compliance, or AI data proximity, demand a different approach. Adaptive platforms are emerging to fill that gap.

Enterprises that build for autonomy will be better positioned to navigate an unpredictable future. They will be able to shift workloads without fear of vendor lock-in, place AI infrastructure close to where data resides, and comply with sovereignty requirements without slowing down innovation.

The emerging truth is simple: Cloud was never the destination. It was only one chapter in a much longer journey. The next chapter belongs to adaptive platforms and to organizations bold enough to design for freedom rather than dependency.

Finally, People Start Realizing Sovereignty Is a Spectrum


For years, the discussion about cloud and digital sovereignty has been dominated by absolutes. It was framed as a black-and-white choice. Either you are sovereign, or you are not. Either you trust hyperscalers, or you don’t. Either you build everything yourself, or you hand it all over. But over the past two years, organizations, governments, and even the vendors themselves have started to recognize that this way of thinking doesn’t reflect reality. Sovereignty is now seen as a spectrum.

When I look at the latest Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure (DHI), this shift becomes even more visible. In the Leaders quadrant, we find AWS, Microsoft, Oracle, Broadcom (VMware), and Nutanix. Each of them is positioned differently, but they all share one thing in common. They now operate somewhere along this sovereignty spectrum. Some of them are “fully” sovereign and some of them are dependent. The truth lies in between, and it is about how much control you want to retain versus how much you are willing to outsource. But it’s also possible to have multiple vendors and solutions co-existing.

Gartner MQ DHI 2025

The Bandwidth of Sovereignty

To make this shift more tangible, think of sovereignty as a bandwidth rather than a single point. On the far left, you give up almost all control and rely fully on global hyperscalers, following their rules, jurisdictions, and technical standards. On the far right, you own and operate everything in your data center, with full control but also full responsibility. Most organizations today are somewhere in between (using a mix of different vendors and clouds).

This bandwidth allows us to rate the leaders in the MQ not as sovereign or non-sovereign, but according to where they sit on the spectrum:

  • AWS stretches furthest toward global reach and scalability. They are still in the process of building a sovereign cloud, and until that becomes reality, none of their extensions (Outposts, Wavelength, Local Zones) can truly be seen as sovereign (please correct me if I am wrong). Their new Dedicated Local Zones bring infrastructure closer, but AWS continues to run the show, meaning sovereignty is framed through compliance, not operational autonomy.

  • Microsoft sits closer to the middle. With Microsoft’s Sovereign Cloud initiatives in Europe, they acknowledge the political and regulatory reality. Customers gain some control over data residency and compliance, but the operational steering remains with Microsoft (except for their “Sovereign Private Cloud” offering, which consists of Azure Local + Microsoft 365 Local).

  • Oracle has its EU Sovereign Cloud, which is already available today, and offerings like OCI Dedicated Region and Alloy that push sovereignty closer to customers. Still, these don’t offer operational autonomy, as Oracle continues to manage much of the infrastructure. For full isolation, Oracle provides Oracle Cloud Isolated Region and the smaller Oracle Compute Cloud@Customer Isolated (C3I). These are unique in the hyperscaler landscape and move Oracle further to the right.

  • Broadcom (VMware) operates in a different zone of the spectrum. With VMware’s Cloud Foundation stack, customers can indeed build sovereign clouds with operational autonomy in their own data centers. This puts them further right than most hyperscalers. But Gartner and recent market realities also show that dependency risks are not exclusive to AWS or Azure. VMware customers face uncertainty tied to Broadcom’s licensing models and strategic direction, which tempers that autonomy.

  • Google does not appear in the Leaders quadrant yet, but their Google Distributed Cloud (GDC) deserves mention. Gartner highlights how GDC is strategically advancing, winning sovereign cloud projects with governments and partners, and embedding AI capabilities on-premises. Their trajectory is promising, even if their current market standing hasn’t brought them into the top right yet.

  • Nutanix stands out by offering a comprehensive single product – the Nutanix Cloud Platform (NCP). Gartner underlines that NCP is particularly suited for sovereign workloads, hybrid infrastructure management, and edge multi-cloud deployments. Unlike most hyperscalers, Nutanix delivers one unified stack, including its “own hypervisor as a credible ESXi alternative”. That makes it possible to run a fully sovereign private cloud with operational autonomy, without sacrificing cloud-like agility and elasticity.

Why the Spectrum Matters

This sovereignty spectrum changes how CIOs and policymakers make decisions. Instead of asking “Am I sovereign or not?”, the real question becomes:

How far along the spectrum do I want to be and how much am I willing to compromise for flexibility, cost, or innovation?

It is no longer about right or wrong. Choosing AWS does not make you naive. Choosing Nutanix does not make you paranoid. Choosing Oracle does not make you old-fashioned. Choosing Microsoft doesn’t make you a criminal. Each decision reflects an organization’s position along the bandwidth, balancing risk, trust, cost, and control.

Where We Go From Here

The shift to this spectrum-based view has major consequences. Vendors will increasingly market not only their technology but also their place on the sovereignty bandwidth. Governments will stop asking for absolute sovereignty and instead demand clarity about where along the spectrum a solution sits. And organizations will begin to treat sovereignty not as a one-time decision but as a dynamic posture that can move left or right over time, depending on regulation, innovation, and geopolitical context.

The Gartner MQ shows that the leaders are already converging around this reality. The differentiation now lies in how transparent they are about it and how much choice they give their customers to slide along the spectrum. Sovereignty, in the end, is not a fixed state. It is a journey.

What Is Sovereign Enough for You?


Digital sovereignty has become one of the defining topics for governments, regulators, and organizations that operate with sensitive data. Everyone wants to know how much control they truly have over their cloud environment. But the question that should be asked first is: what is sovereign enough for you? Because sovereignty is not a binary choice. It comes in different layers, with different levels of autonomy, control, and dependencies on global vendors like Oracle.

Oracle has designed its portfolio with exactly this variety in mind. Not every government or regulated organization needs or wants a fully isolated environment. Some are fine with Oracle managing the service end-to-end, while others require absolute control down to operations and staffing. Let’s walk through the different operating models and connectivity dependencies, so you can decide where your needs for sovereignty fit in.

1) Building a Full Sovereign Cloud With Local Personnel

At the very top of the sovereignty spectrum sits the option to build a national or regional sovereign cloud that is completely separated from Oracle’s global public regions (separate realms). These environments are staffed and operated by locally cleared personnel, and the legal entity running the cloud sits within the country or region itself.

A graphic depicting OCI realms with separation.

Examples today are the UK Government & Defence Cloud and the Oracle EU Sovereign Cloud. Here, sovereignty is not only a technical matter but also an organizational one. Governments get the guarantee that operations, compliance, and support are entirely bound by local regulations and cannot be accessed or influenced from outside.

Full operational autonomy by design. The control plane, the data plane, and the people managing the systems are all local. Oracle provides the technology, but the control and day-to-day operations are delegated to the sovereign entity.

From a policy-maker’s perspective, this is the gold standard. Independence from foreign jurisdictions, complete local control of staff, audits, and processes, and guaranteed resilience even without global connectivity. It comes with the highest costs and commitments, but also the strongest assurance.

2) OCI Dedicated Region

OCI Dedicated Region is a full public cloud region deployed directly into a customer’s data center. It includes all Oracle Cloud services, runs on dedicated infrastructure, and provides the same service catalog as Oracle’s global regions.

Diagram of an OCI Dedicated Region deployment.

From a sovereignty perspective, this model ensures that data never leaves the country if the customer so desires. The region is connected to Oracle’s global backbone (still separate realms), which allows it to remain consistent in updates, operations, and service integration. However, the control plane still depends on Oracle. Updates, patches, and lifecycle management (~15’000 changes per month) are performed by Oracle engineers, who operate under strict contracts and compliance rules.
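Because a Dedicated Region exposes the same APIs as Oracle’s public regions, existing automation usually only has to be pointed at a different region or endpoint. The sketch below uses the standard OCI Python SDK; the region identifier, endpoint, and compartment OCID are placeholders, since the actual values depend on the realm the Dedicated Region belongs to.

```python
# Minimal sketch: the same OCI Python SDK works against a Dedicated Region,
# only the region (or service endpoint) in the configuration changes.
# The region name, endpoint, and OCID below are placeholders, not real values.
import oci

config = oci.config.from_file("~/.oci/config", "DEFAULT")
config["region"] = "my-dedicated-region-1"  # hypothetical Dedicated Region identifier

compute = oci.core.ComputeClient(config)
# Alternatively, point the client at the region's own endpoint explicitly:
# compute = oci.core.ComputeClient(config, service_endpoint="https://iaas.my-dedicated-region-1.example.com")

instances = compute.list_instances(compartment_id="ocid1.compartment.oc1..example")
for instance in instances.data:
    print(instance.display_name, instance.lifecycle_state)
```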

Examples in practice:

  • Vodafone has deployed six Dedicated Regions across Europe (Ireland, Italy, Germany) to modernize core systems, ensure compliance, and keep data inside the EU while leveraging Oracle’s global innovation cycle.

  • Avaloq, the Swiss banking software leader, runs its own Dedicated Region to provide compliant, modern infrastructure for financial services, combining local control of data with Oracle’s managed operations.

Dedicated Region is often sovereign enough: sensitive workloads stay in-country, under national regulation, while Oracle ensures consistency, security, and ongoing modernization. Operational autonomy is not fully local, but the trade-off is efficiency and scale.

3) Oracle Alloy

Oracle Alloy takes sovereignty one step further by introducing a cloud-as-a-service model for partners. Service providers, system integrators, or governments can license Oracle’s technology stack and operate their own branded cloud. Alloy provides the full OCI service catalog but allows the partner to take over customer relationships, billing, compliance, and front-line operations.

Becoming an Oracle Alloy partner (diagram)

This is highly attractive for countries and organizations that want to build their own cloud business or national platforms without developing hyperscale technology themselves. Still, Alloy maintains a technical tether to Oracle. While the partner gains control over operations and branding, Oracle remains in charge of certain aspects of lifecycle management, support escalation (tier 2 and tier 3 support), and roadmap alignment.

Examples in practice:

  • In Saudi Arabia, telecom leader stc is using Alloy to deliver sovereign cloud services that meet local data residency and regulatory requirements.

  • In Japan, Fujitsu uses Oracle Alloy to provide sovereign cloud and AI capabilities tailored to Japanese compliance needs.

  • In Italy, the Polo Strategico Nazionale (PSN) leverages Alloy as part of its managed public cloud offering to deliver secure and compliant cloud services to public administrations.

Alloy strikes a strong balance: It empowers local ecosystems to run a nationally branded sovereign cloud, keeping control of data and compliance, while Oracle provides the innovation foundation. It is not full independence, but it delivers sovereignty in practice for many use cases.

4) Oracle Compute Cloud@Customer

Oracle Compute Cloud@Customer (C3) is a private cloud appliance that delivers OCI services inside your data center. It is ideal for customers who want cloud elasticity and API compatibility with OCI, but who also need workloads to run locally due to latency, compliance, or data residency requirements.

However, the control plane is still managed by Oracle (in a public OCI region or connected to the EU Sovereign Cloud). This means patching, upgrades, and critical operations are carried out by Oracle teams, even though the infrastructure is located in your facility. Connectivity back to Oracle is also required for normal lifecycle operations.

An additional strength of C3 is its seamless integration with OCI Dedicated Region. Customers can connect their local C3 instances to their Dedicated Region, effectively combining on-premises elasticity with the scale and service catalog of a full cloud region running inside their country. This creates a flexible architecture where workloads can be placed optimally on C3 for local control and performance, or on Dedicated Region for broader cloud capabilities.

Oracle Compute Cloud@Customer key capabilities

This model is a pragmatic compromise. It guarantees data sovereignty by keeping workloads in-country, but governments or regulated organizations don’t need to staff or manage the complexity of lifecycle operations. With the option to connect to a Dedicated Region, C3 also opens the door to a multi-layer sovereign cloud strategy, blending local control with the breadth of a national-scale cloud.

5) Oracle Cloud Isolated Region (OCIR)

Oracle Cloud Isolated Region (OCIR) is a specialized deployment model designed for environments that require enhanced security, autonomy, and in-country governance at national or organizational scale. Like a Dedicated Region, it is a full OCI cloud region deployed on-premises, but it is operated in a more restricted and isolated mode, with minimized dependency on Oracle’s global backbone.

Oracle Cloud Isolated Region

Example in practice: Singapore’s Defence Science and Technology Agency (DSTA) has selected OCIR to support the Ministry of Defence (MINDEF) and the Singapore Armed Forces (SAF). The deployment provides an air-gapped, sovereign cloud with high-performance compute and AI capabilities to strengthen C4 (Command, Control, Communications, and Computers) systems. It ensures secure operations, scalability, and rapid decision-making, all within a fully national framework.

OCIR is particularly suited to government ministries, defense, or intelligence organizations that want a nationally hosted cloud hub with strong autonomy, while still retaining the scalability and service richness of a hyperscale platform.

OCIR represents a strategic anchor, providing a sovereign cloud backbone for an entire country or authority that combines national-scale control with the innovation and reliability of Oracle Cloud.

6) Oracle Compute Cloud@Customer Isolated (C3I)

Oracle Compute Cloud@Customer Isolated (C3I) is the solution for organizations that need the highest degree of operational autonomy without running a full sovereign cloud program.

C3I is deployed into a customer’s data center and runs in an air-gapped configuration. The control plane is hosted locally and does not rely on Oracle’s global backbone. This means the customer or the designated authority is in charge of lifecycle operations, updates, and connectivity policies. Oracle provides the technology stack and ensures that the platform can evolve, but operations and control are fully handed to the customer/partner.

Scenario for defense organizations: Imagine a defense ministry operating an Oracle Cloud Isolated Region (OCIR) as its central sovereign cloud hub. Non-sensitive workloads such as HR, logistics, or training could run on regular C3 instances connected to OCIR’s control plane. At the same time, highly sensitive or tactical workloads, such as battlefield data analysis, mission planning, or classified operations, could be deployed on C3I instances. These isolated instances would be managed by local defense teams in the field, operating autonomously even in disconnected or hostile environments. This dual approach allows governments to combine centralized governance through OCIR with operational independence in mission-critical scenarios.

Oracle Compute Cloud@Customer Isolated

C3I represents the pinnacle of autonomy: The ability to maintain full local control for sensitive workloads while integrating into a broader sovereign cloud architecture anchored by OCIR.
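To make this dual approach more concrete, here is a small, purely illustrative placement sketch in Python. The classification labels and routing rules are assumptions chosen for this example; they are not an Oracle mechanism or an official policy.

```python
# Illustrative only: a toy placement policy that routes workloads to an OCIR hub,
# connected C3 instances, or air-gapped C3I, based on sensitivity and connectivity.
# The categories and rules are assumptions for this sketch, not Oracle features.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    classification: str          # "public", "restricted", or "classified"
    must_run_disconnected: bool  # e.g. tactical or field deployments

def place(workload: Workload) -> str:
    if workload.classification == "classified" or workload.must_run_disconnected:
        return "C3I (air-gapped, operated by local teams)"
    if workload.classification == "restricted":
        return "C3 (in-country, connected to the OCIR control plane)"
    return "OCIR (central sovereign hub)"

for w in [
    Workload("hr-portal", "public", False),
    Workload("logistics", "restricted", False),
    Workload("mission-planning", "classified", True),
]:
    print(f"{w.name:<18} -> {place(w)}")
```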

Where Do You Stand on the Spectrum?

When thinking about sovereignty, it is essential to recognize that not every organization needs the same level of control. For some, a Dedicated Region is already sovereign enough, as it keeps all data within national borders while still benefiting from Oracle’s global expertise. For others, Alloy provides the right balance of local branding, compliance, and ecosystem building. For governments requiring national-scale autonomy, OCIR acts as a sovereign cloud hub. And for those with the most demanding tactical requirements, C3I ensures full local independence.

So the question remains: What is sovereign enough for you? The answer depends on your data sensitivity, regulatory environment, budget and strategic goals. Oracle has built a portfolio that allows you to choose. The challenge is to define your threshold and then pick the model that aligns with it.

Swiss Government Cloud – Possible Paths, Components and Vendors Compared


Disclaimer: This article reflects my personal opinion and not that of my employer.

The discussion around a Swiss Government Cloud (SGC) has gained significant momentum in recent months. Digital sovereignty has become a political and technological necessity with immediate relevance. Government, politics, and industry are increasingly asking how a national cloud infrastructure could look. This debate is not just about technology, but also about governance, funding, and the ability to innovate.

Today’s starting point is relatively homogeneous. A significant portion of Switzerland’s public sector IT infrastructure still relies on VMware. This setup is stable but not designed for the future. When we talk about the Swiss Government Cloud, the real question is how this landscape evolves from pragmatic use of the public cloud to the creation of a fully sovereign cloud, operated under Swiss control.

For clarity: I use the word “component” (Stufe), in line with the Federal Council’s report, to describe the different maturity levels and deployment models.

A Note on the Federal Council’s Definition of Digital Sovereignty

According to the official report, digital sovereignty includes:

  • Data and information sovereignty: Full control over how data is collected, stored, processed, and shared.

  • Operational autonomy: The ability of the Federal Office of Information Technology (FOITT) to run systems with its own staff for extended periods, without external partners.

  • Jurisdiction and governance: Ensuring that Swiss law, not foreign regulation, defines control and access.

This definition is crucial when assessing whether a cloud model is “sovereign enough.”

Component 1 – Public Clouds Standard

The first component describes the use of the public cloud in its standard form. Data can be stored anywhere in the world, not necessarily in Switzerland. Hyperscalers such as AWS, Microsoft Azure, or Oracle Cloud Infrastructure (OCI) offer virtually unlimited scalability, the highest pace of innovation, and a vast portfolio of services.

Amazon Web Services (AWS) is the broadest platform by far, providing access to an almost endless variety of services and already powering workloads from MeteoSwiss.

Microsoft Azure integrates deeply with Microsoft environments, making it especially attractive for administrations that are already heavily invested in Microsoft 365 and MS Teams. This ecosystem makes Azure a natural extension for many public sector IT landscapes.

Oracle Cloud Infrastructure (OCI), on the other hand, emphasizes efficiency and infrastructure, particularly for databases, and is often more transparent and predictable in terms of cost.

However, component 1 must be seen as an exception. For sovereign and compliant workloads, there is little reason to rely on a global public cloud without local residency or control. This component should not play a meaningful role, except in cases of experimental or non-critical workloads.

And if there really is a need for workloads to run outside of Switzerland, I would recommend using alternatives like the Oracle EU Sovereign Cloud instead of a regular public region, as it offers stricter compliance, governance, and isolation for European customers.

Component 2a – Public Clouds+

Here, hyperscalers offer public clouds with Swiss data residency, combined with additional technical restrictions to improve sovereignty and governance.

AWS, Azure, and Oracle already operate cloud regions in Switzerland today. They promise that data will not leave the country, often combined with features such as tenant isolation or extra encryption layers.

The advantage is clear. Administrations can leverage hyperscaler innovation while meeting legal requirements for data residency.

Component 2b – Public Cloud Stretched Into FOITT Data Centers

This component goes a step further. Here, the public cloud is stretched directly into the local data centers. In practice, this means solutions like AWS Outposts, Azure Local (formerly Azure Stack), Oracle Exadata & Compute Cloud@Customer (C3), or Google Distributed Cloud deploy their infrastructure physically inside FOITT facilities.

Diagram showing an Outpost deployed in a customer data center and connected back to its anchor AZ and parent Region

This creates a hybrid cloud. APIs and management remain identical to the public cloud, while data is hosted directly by the Swiss government. For critical systems that need cloud benefits but under strict sovereignty (which differs between solutions), this is an attractive compromise.

The vendors differ significantly in what they offer here.

Costs and operations are predictable since most of these models are subscription-based.

Component 3 – Federally Owned Private Cloud Standard

Not every path to the Swiss Government Cloud requires a leap into hyperscalers. In many cases, continuing with VMware (Broadcom) represents the least disruptive option, providing the fastest and lowest-risk migration route. It lets institutions build on existing infrastructure and leverage known tools and expertise.

Still, the cloud dependency challenge isn’t exclusive to hyperscalers. Relying solely on VMware also creates a dependency. Whether with VMware or a hyperscaler, dependency remains dependency and requires careful planning.

Another option is Nutanix, which offers a modern hybrid multi-cloud solution with its AHV hypervisor. However, migrating from VMware to AHV is in practice as complex as moving to another public cloud. You need to replatform, retrain staff, and become experts again.

Hybrid Cloud Management diagram

Both VMware and Nutanix offer valid hybrid strategies:

  • VMware: Minimal disruption, continuity, familiarity, low migration risk. Can run VMware Cloud Foundation (VCF) on AWS, Azure, Google Cloud and OCI.

  • Nutanix: Flexibility and hybrid multi-cloud potential, but migration is similar to changing cloud providers.

Crucially, changing platforms or introducing a new cloud is harder than it looks. Organizations are built around years of tailored tooling, integrations, automation, security frameworks, governance policies, and operational familiarity. Adding or replacing another cloud often multiplies complexity rather than reduces it.

Gartner Magic Quadrant for Distributed Hybrid Infrastructure

Source: https://www.oracle.com/uk/cloud/distributed-cloud/gartner-leadership-report/ 

But sometimes it is just about costs, new requirements, or new partnerships, which result in a big platform change. Let’s see how the Leaders Magic Quadrant for “Distributed Hybrid Infrastructure” changes in the coming weeks or months.

Why Not Host A “Private Public Cloud” To Combine Components 2 and 3?

This possibility (let’s call it component 3.5) goes beyond the traditional framework of the Federal Council. In my view, a “private public cloud” represents the convergence of public cloud innovation with private cloud sovereignty.

“The hardware needed to build on-premises solutions should therefore be obtained through a ‘pay as you use’ model rather than purchased. A strong increase in data volumes is expected across all components.”

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.4 (translated from German)

Solutions like OCI Dedicated Region or Oracle Alloy can deliver the entire Oracle Cloud stack – IaaS, PaaS, databases, analytics – on Swiss soil. Unlike AWS Outposts or Azure Local, these solutions do not come with a limited set of cloud services; they are identical to a full public cloud region with all cloud services, only hosted and operated locally in a customer’s data center.

OCI Dedicated Region Overview

Additionally, Oracle offers a compelling hybrid path. With OCVS (Oracle Cloud VMware Solution) on OCI Dedicated Region or Alloy, organizations can lift-and-shift VMware workloads unchanged to this local public cloud based on Oracle Cloud Infrastructure, then gradually modernize selected applications using Oracle’s PaaS services like databases, analytics, or AI.

OCI Dedicated Region and OCVS

This “lift-and-modernize” journey helps balance risk and innovation while ensuring continuous operational stability.

And this makes component 3.5 unique. It provides the same public-cloud experience (continuous updates, innovation pace; 300-500 changes per day, 10’000-15’000 per month) but under sovereign control. With Alloy, the FOITT could even act as a Cloud Service Provider for cantons, municipalities, and hospitals, creating a federated model without each canton building its own cloud.

Real-world Swiss example: Avaloq was the first customer in Switzerland to deploy an OCI Dedicated Region, enabling it to migrate clients and offer cloud services to other banks in Switzerland. This shows how a private public cloud can serve highly regulated industries while keeping data and operations in-country.

In this model:

  • Component 2b becomes redundant: Why stretch a public cloud into FOITT data centers if a full cloud can already run there?

  • Component 2 can also be partly obsolete: At least for workloads running in a local OCI Dedicated Region (or Alloy) deployment, there’s no need for parallel deployments in public regions, since everything can run sovereignly on a local dedicated OCI region in Switzerland (while workloads from MeteoSwiss on AWS remain in component 2).

A private public cloud thus merges sovereignty with innovation while reducing the need for parallel cloud deployments.

Another Possibility – Combine Capabilities From A Private Public Cloud With Component 2b

Another possibility could be to combine elements of components 3.5 and 2b into a federated architecture: Using OCI Dedicated Region or Oracle Alloy as the central control plane in Switzerland, while extending the ecosystem with Oracle Compute Cloud@Customer (C3) for satellite or edge use cases.

Oracle Compute Cloud@Customer – Key Capabilities

This creates a hub-and-spoke model:

  • The Dedicated Region or Alloy in Switzerland acts as the sovereign hub for governance and innovation.

  • C3 appliances extend cloud services into distributed or remote locations, tightly integrated but optimized for local autonomy.

A practical example would be Swiss embassies around the world. Each embassy could host a lightweight C3 edge environment, connected securely back to the central sovereign infrastructure in Switzerland. This ensures local applications run reliably, even with intermittent connectivity, while remaining part of the overall sovereign cloud ecosystem.
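The resilience argument for such edge sites boils down to a store-and-forward pattern: work is accepted and processed locally, and synchronization with the central hub happens whenever a link is available. The sketch below models that pattern in plain Python; it is a generic illustration, not an Oracle feature or API.

```python
# Generic store-and-forward sketch for an edge site (e.g. an embassy running C3):
# events are buffered locally and forwarded to the central hub when the link is up.
import queue

class EdgeBuffer:
    def __init__(self):
        self._pending = queue.Queue()

    def record(self, event: dict) -> None:
        """Always succeeds locally, regardless of WAN availability."""
        self._pending.put(event)

    def flush(self, link_up: bool, send) -> int:
        """Forward buffered events to the hub while the link is up."""
        sent = 0
        while link_up and not self._pending.empty():
            send(self._pending.get())
            sent += 1
        return sent

buffer = EdgeBuffer()
buffer.record({"type": "consular-case", "id": 42})
buffer.record({"type": "status-report", "id": 43})
forwarded = buffer.flush(link_up=True, send=lambda e: print("forwarding", e))
print(f"{forwarded} events synchronized with the hub")
```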

By combining these capabilities, the FOITT could:

  • Use component 2b’s distributed deployment model (cloud stretched into local facilities)

  • Leverage component 3.5’s VMware continuity path with OCVS for easy migrations

  • Rely on component 3.5’s sovereignty and innovation model (private public cloud with full cloud parity)

This blended approach would allow Switzerland to centralize sovereignty while extending it globally to wherever Swiss institutions operate.

What If A Private Public Cloud Isn’t Sovereign Enough?

It is possible that policymakers or regulators could conclude that solutions such as OCI Dedicated Region or Oracle Alloy are not “sovereign enough”, given that their operational model still involves vendor-managed updates and tier-2/3 support from Oracle.

In such a scenario, the FOITT would have fallback options:

Maintain a reduced local VMware footprint: The FOITT could continue to run a smaller, fully local VMware infrastructure that is managed entirely by Swiss staff. This would not deliver the breadth of services or pace of innovation of a hyperscale-based sovereign model, but it would provide the maximum possible operational autonomy and align closely with Switzerland’s definition of sovereignty.

Leverage component 4 from the FDJP: Using the existing “Secure Private Cloud” (SPC) from the Federal Department of Justice and Police (FDJP). But I am not sure if the FDJP wants to become a cloud service provider for other departments.

That said, these fallback strategies don’t necessarily exclude the use of OCI. Dedicated Region or Alloy could co-exist with a smaller VMware footprint, giving the FOITT both innovation and control. Alternatively, Oracle could adapt its operating model to meet Swiss sovereignty requirements – for example, by transferring more operational responsibility (tier-2 support) to certified Swiss staff.

This highlights the core dilemma: The closer Switzerland moves to pure operational sovereignty, the more it risks losing access to the innovation and agility that modern cloud architectures bring. Conversely, models like Dedicated Region or Alloy can deliver a full public cloud experience on Swiss soil, but they require acceptance that some operational layers remain tied to the vendor.

Ultimately, the Swiss Government Cloud must strike a balance between autonomy and innovation, and the decision about what “sovereign enough” means will shape the entire strategy.

Conclusion

The Swiss Government Cloud will not be a matter of choosing one single path. A hybrid approach is realistic: Public cloud for agility and non-critical workloads, stretched models for sensitive workloads, VMware or Nutanix for hybrid continuity, sovereign or “private public cloud” infrastructures for maximum control, and federated extensions for edge cases.

But it is important to be clear: Hybrid cloud or even hybrid multi-cloud does not automatically mean sovereign. What it really means is that a sovereign cloud must co-exist with other clouds – public or private.

To make this work, Switzerland (and each organization within it) must clearly define what sovereignty means in practice. Is it about data and information sovereignty? Is it really about operational autonomy? Or about jurisdiction and governance?

Only with such a definition can a sovereign cloud strategy deliver on its promise, especially when it comes to on-premises infrastructure, where control, operations, and responsibilities need to be crystal clear.

PS: Of course, the SGC is about much more than infrastructure. The Federal Council’s plan also touches networking, cybersecurity, automation and operations, commercial models, as well as training, consulting, and governance. In this article, I deliberately zoomed in on the infrastructure side, because it’s already big, complex, and critical enough to deserve its own discussion. And just imagine sitting on the other side, having to go through all the answers of the upcoming tender. Not an easy task.

Why Exadata Cloud@Customer and Compute Cloud@Customer Together Are a Game-Changer


When enterprises first adopted Oracle Exadata Cloud@Customer (ExaCC), the message was clear: you can bring the power of the world’s most advanced database platform into your own data center, fully managed by Oracle, but still under your control. It gave organizations the engineered system they were looking for. A system where all components are built to work together, coordinated and optimized as one. The difference was that this time, Oracle took care of the underlying infrastructure, so customers no longer had to manage the basics themselves.

For many organizations, ExaCC was the first step into a true cloud operating model without having to give up sovereignty, control, or the performance they had come to expect. But ExaCC is only one part of the story. When you add Oracle Compute Cloud@Customer (C3) into the mix, the value multiplies. Suddenly, enterprises gain the flexibility to modernize far beyond the database layer, and that combination is proving to be a game-changer.

From Database Engineered System to Cloud-Native Foundation

ExaCC was always about consolidation and optimization for Oracle Databases. Customers know it as a system that just works (designed, tuned, managed as a whole). With Compute Cloud@Customer, that same philosophy now extends to general-purpose workloads. It is not just another stack of compute and storage but a fully managed cloud service delivered in your data center, giving enterprises the same consistency and operational simplicity they know from OCI public regions.

The real surprise for many customers is what they can actually run on C3. It’s not just Linux VMs. You can spin up Windows VMs, move workloads seamlessly between OCI and C3, and deploy Kubernetes or OpenShift clusters with full support. For customers already invested in container platforms, this means adoption is straightforward. No retraining, no complex integration, just a continuation of their existing journey.
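As a rough sketch of what that consistency looks like in practice, the snippet below launches a VM through the standard OCI Python SDK. Because C3 is driven by the same APIs as a public region, the identical call can target a C3 system once the configuration points at the region that hosts its control plane. The shape, availability domain, and all OCIDs are placeholders.

```python
# Sketch: launching a VM with the standard OCI Python SDK. The same call works
# whether the target is an OCI public region or a Compute Cloud@Customer system,
# since C3 is managed through the same APIs. All identifiers are placeholders.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    display_name="app-vm-01",
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="AD-1",                      # placeholder
    shape="VM.Standard3.Flex",                       # placeholder shape
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(ocpus=2, memory_in_gbs=32),
    create_vnic_details=oci.core.models.CreateVnicDetails(subnet_id="ocid1.subnet.oc1..example"),
    source_details=oci.core.models.InstanceSourceViaImageDetails(image_id="ocid1.image.oc1..example"),
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```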

Integration Without the Overhead

A major concern for enterprises has always been integration costs. How do you connect database systems with compute environments? How do you manage networking and security across hybrid deployments? With ExaCC and C3, Oracle does the heavy lifting. C3 integrates directly with the ExaCC networking. Customers get a single environment where databases and applications are natively connected, all managed and supported by Oracle.

Oracle Compute Cloud@Customer

Flexibility in the Cloud Journey

One of the most underestimated advantages of C3 is the flexibility it brings to cloud adoption. Enterprises can move VMs from OCI public regions to C3, or the other way around, depending on regulatory requirements, performance needs, or cost considerations. This portability gives CIOs the confidence that they can adapt their strategy over time without locking themselves into specific models.


Oracle Operator Access Control

To further support enterprise-grade security and governance, Oracle Compute Cloud@Customer includes Oracle Operator Access Control (OpCtl), a system designed to manage and audit privileged access to your on-premises infrastructure by Oracle personnel. Unlike traditional support models, where the scope of vendor access can be blurred or overly permissive, OpCtl gives customers explicit control over every support interaction.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Before any Oracle operator can access the C3 environment for maintenance, updates, or troubleshooting, the customer must approve the request, define the time window, and scope the level of access permitted. All sessions are fully audited, with logs available to the customer for compliance and security reviews. This ensures that sensitive workloads and data remain under strict governance, aligning with zero-trust principles and regulatory requirements.
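Conceptually, the flow looks like the sketch below: a named operator requests access for a stated reason, the customer approves or denies it with an explicit scope and time window, and every decision lands in an audit trail. This is a plain-Python model of the workflow for illustration, not the Oracle OpCtl API.

```python
# Conceptual model of the OpCtl approval flow: scoped, time-boxed, fully audited.
# Plain Python for illustration only; this is not the Oracle API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    operator: str
    reason: str
    requested_scope: str   # e.g. "infrastructure-patching"
    duration: timedelta

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, message: str) -> None:
        self.entries.append(f"{datetime.utcnow().isoformat()}Z {message}")

def review(request: AccessRequest, approve: bool, audit: AuditTrail) -> str:
    decision = "APPROVED" if approve else "DENIED"
    window_end = datetime.utcnow() + request.duration if approve else None
    audit.record(f"{decision}: {request.operator} / {request.requested_scope} "
                 f"({request.reason}); access window ends {window_end}")
    return decision

audit = AuditTrail()
request = AccessRequest("oracle-operator-17", "quarterly firmware update",
                        "infrastructure-patching", timedelta(hours=4))
print(review(request, approve=True, audit=audit))
print(audit.entries[-1])
```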

Platform-as-a-Service (PaaS) Offerings on C3

For organizations adopting microservices and containerized applications, Oracle Kubernetes Engine (OKE) on C3 provides a fully managed Kubernetes environment. Developers can deploy and manage Kubernetes clusters using the same cloud-native tooling and APIs as in OCI, while operators benefit from lifecycle automation, integrated logging, and metrics collection. OKE on C3 is ideal for hybrid deployments where containers may span on-prem and cloud environments.
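Because OKE on C3 is managed through the same control plane APIs, the standard SDK and CLI can enumerate and operate clusters regardless of where they physically run. A minimal sketch with the OCI Python SDK follows; the compartment OCID is a placeholder.

```python
# Sketch: listing OKE clusters with the standard OCI Python SDK. The same call
# returns clusters whether they run in a public region or on Compute Cloud@Customer.
# The compartment OCID is a placeholder.
import oci

config = oci.config.from_file()
oke = oci.container_engine.ContainerEngineClient(config)

clusters = oke.list_clusters(compartment_id="ocid1.compartment.oc1..example").data
for cluster in clusters:
    print(cluster.name, cluster.kubernetes_version, cluster.lifecycle_state)
```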

Red Hat OpenShift Support

Many enterprises have built their journey around Red Hat OpenShift. With C3, that journey continues smoothly. As of March 2025, Oracle officially supports Red Hat OpenShift versions 4.18+ on Compute Cloud@Customer, ensuring enterprises can bring their standardized container platform into a fully managed, on-premises cloud environment.

This means existing OpenShift workflows, tooling, and operations can extend into the C3 environment with minimal friction. Enterprises benefit from continuity in developer experience, consistent pipelines, and governance models while gaining the integration, performance, and control of Cloud@Customer infrastructure.

Available GPU Options on Compute Cloud@Customer

As enterprises aim to run AI, machine learning, digital twins, and graphics-intensive applications on-premises, Oracle introduced GPU expansion for Compute Cloud@Customer. This enhancement brings NVIDIA L40S GPU power directly into your data center.

Each GPU expansion node in the C3 environment is equipped with four NVIDIA L40S GPUs, and up to six of these nodes can be added to a single rack. For larger deployments, a second expansion rack can be connected, enabling support for a total of 12 nodes and up to 48 GPUs within a C3 deployment.

Compute Cloud@Customer GPU expansion

Oracle engineers deliver and install these GPU racks pre-configured, ensuring seamless integration with the base C3 system. These nodes connect to the existing compute and storage infrastructure over a high-speed spine-leaf network topology and are fully integrated with Oracle’s ZFS storage platform.

EU Sovereign Operations for Oracle Compute Cloud@Customer

In May 2025, Oracle announced the availability of Oracle EU Sovereign Operations for C3. This means that C3 can now also be operated from the EU Sovereign Cloud, with the same pricing and the same service you know from commercial OCI regions.

Previously, operations and automation for Compute Cloud@Customer were handled via global OCI control planes. With EU Sovereign Operations, that changes:

  • All automation and admin services now reside within Oracle’s EU Sovereign Cloud regions

  • Operations are managed by Oracle teams based in the EU, ensuring compliance

  • Hardware deployment and support are delivered by personnel authorized to work in the customer’s country

EU Sovereign Operations for Compute Cloud@Customer is offered with the control plane located in one of the Oracle EU Sovereign Cloud regions, currently either Madrid, Spain, or Frankfurt, Germany. The service is available in European Union member countries and other select countries in Europe, and it delivers the same features, functions, value, and service level objectives (SLOs) as Compute Cloud@Customer deployments with control planes in OCI public regions.

Compute Cloud@Customer Isolated – For the Most Secure Environments

Not every organization can connect to a public cloud or even allow a managed service to reach out for updates. For government agencies, defense, critical infrastructure, and highly regulated industries, air-gapped environments are a strict requirement.

Oracle has addressed this with Compute Cloud@Customer Isolated (C3I), a new deployment model that delivers the same C3 capabilities but in a fully disconnected form. With C3I, there is no dependency on external connectivity. The control plane, updates, and management are all designed for isolated operation. This makes it possible to run cloud workloads in environments that demand the highest levels of security and sovereignty while retaining the same familiar OCI operational model.

Why This Combination Matters

On their own, ExaCC and C3 are strong offerings. Together, they create a foundation that combines the best of both worlds. The unmatched performance of Exadata for mission-critical databases and the agility of a modern compute platform for applications, containers, and VMs. With GPU support, OpenShift compatibility, and seamless networking, enterprises can consolidate workloads, accelerate AI initiatives, and modernize applications. All while maintaining the control and sovereignty they require.

The message for CIOs is simple. You no longer need to choose between performance, control, and flexibility. With Oracle Exadata Cloud@Customer and Compute Cloud@Customer, you can have all three in your data center, on your terms, and fully integrated by design.

Why Changing Platforms or Adding a New Cloud Is Harder Than It Sounds


Switching enterprise platforms isn’t like swapping cars; it’s more like deciding to change the side of the road you drive on while keeping all the existing roads, intersections, traffic lights, and vehicles in motion. The complexity comes from rethinking the entire system around you, from driver behaviour to road markings to the way the city is mapped.

That’s exactly what it feels like when an enterprise tries to replace a long-standing platform or add a new one to an already complex mix. On slides, it’s about agility, portability, and choice. In reality, it’s about untangling years of architectural decisions, retraining teams who have mastered the old way of working, and keeping the business running without any trouble while the shift happens.

Every enterprise carries its own technological history. Architectures have been shaped over years by the tools, budgets, and priorities available at the time, and those choices have solidified into layers of integrations, automations, operational processes, and compliance frameworks. Introducing a new platform or replacing an old one isn’t just a matter of deploying fresh infrastructure. Monitoring tools may only talk to one API. Security controls might be tightly bound to a specific identity model. Ticketing workflows could be designed entirely around the behaviour of the current environment. All of this has to be unpicked and re-stitched before a new platform can run production workloads with confidence.

The human element plays an equally large role. Operational familiarity is one of the most underestimated forms of lock-in. Teams know the details of the current environment, how it fails, and the instinctive fixes that work. Those instincts don’t transfer overnight, and the resulting learning curve can look like a productivity dip in the short term. Something most business leaders are reluctant to accept. Applications also rarely run in isolation. They rely on the timing, latency, and behaviours of the platform beneath them. A database tuned perfectly in one environment might misbehave in another because caching works differently or network latency is just a little higher. Automation scripts often carry hidden assumptions too, which only surface when they break.

Adding another cloud provider or platform to the mix can multiply complexity rather than reduce it. Each one brings its own operational model, cost reporting, security framework, and logging format. Without a unified governance approach, you risk fragmentation – silos that require separate processes, tools, and people to manage. And while the headline price per CPU or GB (or per physical core) might look attractive, the true cost includes migration effort, retraining, parallel operations, and licensing changes. All of this competes with other strategic projects for budget and attention.

Making Change Work in Practice

No migration is ever frictionless, but it doesn’t have to be chaotic. The most effective approach is to stop treating it as a single, monolithic event. Break the journey into smaller stages, each delivering its own value.

Begin with workloads that are low in business risk but high in operational impact. This category often includes development and test environments, analytics platforms processing non-sensitive data, or departmental applications that are important locally but not mission-critical for the entire enterprise. Migrating these first provides a safe but meaningful proving ground. It allows teams to validate network designs, security controls, and operational processes in the new platform without jeopardising critical systems.

Early wins in this phase build confidence, uncover integration challenges while the stakes are lower, and give leadership tangible evidence that the migration strategy is working. They also provide a baseline for performance, cost, and governance that can be refined before larger workloads move.

Build automation, governance, and security policies to be platform-agnostic from the start, using open standards, modular designs, and tooling that works across multiple environments. And invest early in people: cross-train them, create internal champions, and give them the opportunity to experiment without the pressure of immediate production deadlines. If the new environment is already familiar by the time it goes live, adoption will feel less like a disruption and more like a natural progression.
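One way to make "platform-agnostic from the start" tangible is to express governance rules as data and evaluate them uniformly, no matter which platform reports the resource. The sketch below is a deliberately simple illustration; the tag names and resource records are invented for the example.

```python
# Illustrative sketch: a required-tag policy defined once and checked against
# resources from different platforms. Tag names and records are made up for the example.
REQUIRED_TAGS = {"owner", "data-classification", "cost-center"}

resources = [
    {"platform": "vmware", "name": "erp-db-01", "tags": {"owner": "finance", "cost-center": "4711"}},
    {"platform": "oci", "name": "oke-prod", "tags": {"owner": "platform", "data-classification": "internal", "cost-center": "4712"}},
]

for resource in resources:
    missing = REQUIRED_TAGS - resource["tags"].keys()
    status = "OK" if not missing else f"missing tags: {sorted(missing)}"
    print(f"[{resource['platform']}] {resource['name']}: {status}")
```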

From Complex Private Clouds to a Unified Platform

Consider an enterprise running four separate private clouds: three based on VMware (two with vSphere Foundation, one with VMware Cloud Foundation including vSphere Kubernetes Services, VKS) and one running Red Hat OpenShift on bare metal. Each has its own tooling, integrations, and operational culture.

Migrating the VMware workloads to Oracle Cloud VMware Solution (OCVS) inside an OCI Dedicated Region allows workloads to be moved with vMotion or replication, without rewriting automation or retraining administrators. It’s essentially another vSphere cluster, but one that scales on demand and is hosted in facilities under your control.

OCI Dedicated Region and OCVS

The OpenShift platform can also move into the same Dedicated Region, either as a direct lift-and-shift onto OCI bare metal or VMs, or as part of a gradual transition toward OCI’s native Kubernetes service (OKE). This co-location enables VMware, OpenShift, and OCI-native workloads to run side by side, within the same security perimeter, governance model, and high-speed network. It reduces fragmentation, simplifies scaling, and allows modernisation to happen gradually instead of in a high-risk “big bang” migration.

Unifying Database Platforms

In most large enterprises, databases are distributed across multiple platforms and technologies. Some are Oracle, some are open source like PostgreSQL or MySQL, and others might be Microsoft SQL Server. Each comes with its own licensing, tooling, and operational characteristics.

An OCI Dedicated Region can consolidate all of them. Oracle databases can move to Exadata Database Service or Autonomous Database for maximum performance, scalability, and automation. Non-Oracle databases can run on OCI Compute or as managed services, benefitting from the same network, security, and governance as everything else.

This consolidation creates operational consistency with one backup strategy, one monitoring approach, one identity model across all database engines. It reduces hardware sprawl, improves utilisation, and can lower costs through programmes like Oracle Support Rewards. Just as importantly, it positions the organisation for gradual application modernization, without being forced into rushed timelines.

A Business Case Built on More Than Cost

The value in this approach comes from the ability to run new workloads, whether VM-based, containerized, or fully cloud-native, on the same infrastructure without waiting for new procurement cycles. It’s about strategic control by keeping infrastructure and data within your own facilities, under your jurisdiction, while still accessing the full capabilities of a hyperscale cloud. And it’s about flexibility. Moving VMware first, then OpenShift, then databases, and modernizing over time, with each step delivering value and building confidence for the next.

When done well, this isn’t just a platform change. It’s a simplification of the entire environment, an upgrade in capabilities, and a long-term investment in agility and control.