A Primer On Oracle Compute Cloud@Customer

Enterprises across regulated industries, such as banking, healthcare, and the public sector, often find themselves caught in a dilemma: they want the scale and innovation of the public cloud, but they can’t move their data off-premises due to regulatory, latency, or sovereignty concerns. The answer is not one-size-fits-all, and the market reflects that through several deployment models:

  1. Public cloud vendors extending to on-premises (AWS Outposts, Azure Local + Azure Arc, Google Distributed Cloud Edge)
  2. Software vendors offering a “private cloud” (Nutanix, VMware by Broadcom)
  3. Hardware vendors offering “cloud-like” experiences (HPE GreenLake, Dell APEX, Lenovo TruScale)

Oracle Compute Cloud@Customer (C3) bridges the best of all three worlds:

  • Runs the OCI control plane on-premises, with native compute, storage, GPU, and PaaS services
  • Keeps data resident in your data center while Oracle manages the hardware, software, updates, and lifecycle
  • Integrates with Oracle Exadata and Autonomous Database
  • Uses the same APIs, SDKs, CLI, and DevOps tools as OCI

Architecture

The Cloud Control Plane is an advanced software platform that operates within Oracle Cloud Infrastructure (OCI). It serves as the central management interface for deploying and operating resources, including those running on Oracle Compute Cloud@Customer. Customers access the Cloud Control Plane securely via a web browser, command-line interface (CLI), REST APIs, or language-specific SDKs, enabling flexible integration into existing IT and DevOps workflows.

At the heart of the platform is the identity and access management (IAM) system that allows multiple teams or departments to share a single OCI tenancy while maintaining strict control over access. Using compartments, organizations can logically organize and isolate resources such as Compute Cloud@Customer instances, and enforce granular access policies across the environment.
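On C3, these access controls use the same IAM policy statement syntax as public OCI. As an illustrative sketch (the group and compartment names below are hypothetical), a tenancy might scope a team's control over C3 resources to its own compartment like this:

```text
Allow group C3-Admins to manage instance-family in compartment C3-Prod
Allow group C3-Developers to use instance-family in compartment C3-Dev
Allow group NetworkOps to manage virtual-network-family in compartment C3-Prod
```

The verbs (inspect, read, use, manage) and resource-type families are the standard OCI policy building blocks, so policies written for public OCI carry over unchanged.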

Communication between the Cloud Control Plane and the on-premises C3 system is established through a dedicated, secure tunnel. This encrypted tunnel is hosted by specialized management nodes within the rack. These nodes function as a gateway to the infrastructure, handling all control plane communications. In addition to maintaining the secure connection, they also:

  • Orchestrate cloud automation within the on-premises environment
  • Aggregate and route telemetry and diagnostic data to Oracle Support Services
  • Host software images and updates used for patching and maintenance

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Important: Even if connectivity between the Cloud Control Plane and the on-premises system is temporarily lost, virtual machines (VMs) and applications continue running uninterrupted on C3. This ensures high availability and operational continuity, even in isolated or restricted network environments.

Beyond deployment and orchestration, the Cloud Control Plane also handles essential lifecycle operations such as provisioning, patching, backup, and monitoring, and supports usage metering and billing.

Core Capabilities & Services

When you sign in to Oracle Compute Cloud@Customer, you gain access to the same types of core infrastructure resources available in the public Oracle Cloud Infrastructure (OCI). Here is what you can create and manage on C3:

  • Compute Instances. You can launch virtual machines (instances) tailored to your application requirements. Choose from various instance shapes based on CPU count, memory size, and network performance. Instances can be deployed using Oracle-provided platform images or custom images you bring yourself.
  • Virtual Cloud Networks (VCNs). A VCN is a software-defined, private network that replicates the structure of traditional physical networks. It includes subnets, route tables, internet/NAT gateways, and security rules. Every compute instance must reside within a VCN. On C3, you can configure the Load Balancing service (LBaaS) to automatically distribute network traffic.
  • Storage. Block Volumes, File Storage, and Object Storage, covering both capacity- and performance-oriented workloads.

Oracle Operator Access Control

To further support enterprise-grade security and governance, Oracle Compute Cloud@Customer includes Oracle Operator Access Control (OpCtl), a system for managing and auditing privileged access to your on-premises infrastructure by Oracle personnel. Unlike traditional support models, where vendor access can be blurred or overly permissive, OpCtl gives customers explicit control over every support interaction.

Before any Oracle operator can access the C3 environment for maintenance, updates, or troubleshooting, the customer must approve the request, define the time window, and scope the level of access permitted. All sessions are fully audited, with logs available to the customer for compliance and security reviews. This ensures that sensitive workloads and data remain under strict governance, aligning with zero-trust principles and regulatory requirements. 

Available GPU Options on Compute Cloud@Customer

As enterprises aim to run AI, machine learning, digital twins, and graphics-intensive applications on-premises, Oracle introduced GPU expansion for Compute Cloud@Customer. This enhancement brings NVIDIA L40S GPU power directly into your data center.

Each GPU expansion node in the C3 environment is equipped with four NVIDIA L40S GPUs, and up to six of these nodes can be added to a single rack. For larger deployments, a second expansion rack can be connected, enabling support for a total of 12 nodes and up to 48 GPUs within a C3 deployment.
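The maximum GPU capacity follows directly from these figures; a quick sanity check of the arithmetic:

```python
# Maximum NVIDIA L40S capacity in a C3 deployment, per the figures above.
GPUS_PER_NODE = 4    # L40S GPUs per expansion node
NODES_PER_RACK = 6   # expansion nodes per rack
MAX_RACKS = 2        # base rack plus one expansion rack

max_nodes = NODES_PER_RACK * MAX_RACKS  # 12 nodes
max_gpus = max_nodes * GPUS_PER_NODE    # 48 GPUs

print(max_nodes, max_gpus)  # 12 48
```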

Oracle engineers deliver and install these GPU racks pre-configured, ensuring seamless integration with the base C3 system. These nodes connect to the existing compute and storage infrastructure over a high-speed spine-leaf network topology and are fully integrated with Oracle’s ZFS storage platform.

Platform-as-a-Service (PaaS) Offerings on C3

For organizations adopting microservices and containerized applications, Oracle Kubernetes Engine (OKE) on C3 provides a fully managed Kubernetes environment. Developers can deploy and manage Kubernetes clusters using the same cloud-native tooling and APIs as in OCI, while operators benefit from lifecycle automation, integrated logging, and metrics collection. OKE on C3 is ideal for hybrid deployments where containers may span on-prem and cloud environments.

The Logical Next Step After Compute Cloud@Customer?

Typically, organizations choose to move to OCI Dedicated Region when their cloud needs outgrow what C3 currently offers. As companies expand their cloud adoption, they require a richer set of PaaS capabilities, more advanced integration and analytics tools, and cloud-native services like AI and DevOps platforms that are not fully available in C3 yet. OCI Dedicated Region is designed to meet these demands by providing a comprehensive, turnkey cloud environment that is fully managed by Oracle but physically isolated within your data center.

I consider OCI Dedicated Region the next-generation private cloud. If you are a VMware by Broadcom customer looking for alternatives, have a look at 5 Strategic Paths from VMware to Oracle Cloud Infrastructure.

Final Thought – Choose the Right Model for Your Journey

Every organization is on its own digital transformation journey. For some, that means moving aggressively into the public cloud. For others, it’s about modernizing existing infrastructure or complying with tight regulations. If you need cloud-native services, enterprise-grade compute, and strong data sovereignty, Oracle Compute Cloud@Customer is one of the most complete and future-proof options available.

5 Strategic Paths from VMware to Oracle Cloud Infrastructure (OCI)

We all know that the future of existing VMware customers has become more complicated and less certain. Many enterprises are reevaluating their reliance on VMware as their core infrastructure stack. So, where to go next?

For enterprises already invested in Oracle technology, or simply those looking for a credible, flexible, and enterprise-grade alternative, Oracle Cloud Infrastructure (OCI) offers a comprehensive set of paths forward. Whether you want to modernize, rehost, or run hybrid workloads, OCI doesn’t force you to pick a single direction. Instead, it gives you a range of options: from going cloud-native, to running your existing VMware stack unchanged, to building your own sovereign cloud footprint.

Here are five realistic strategies for VMware customers considering a migration to OCI. It doesn’t need to be an either-or decision; it can also be an “and” approach.

1. Cloud-Native with OCI – Start Fresh, Leave VMware Behind

For organizations ready to move beyond traditional infrastructure altogether, the cloud-native route is the cleanest break you can make. This is where you don’t just move workloads; you rearchitect them. You replace VMs with containers where possible, and perhaps lift and shift some of the existing workloads. You replace legacy service dependencies with managed cloud services. And most importantly, you replace static, manually operated environments with API-driven infrastructure.

OCI supports this approach with a robust portfolio: compute instances that scale on demand, Oracle Kubernetes Engine (OKE) for container orchestration, OCI Functions for serverless workloads, and Autonomous Database for data platforms that patch and tune themselves. The tooling is modern, open, and mature – Terraform, Ansible, and native SDKs are all available and well-documented.

This isn’t a quick VMware replacement. It requires a DevOps mindset, application refactoring, and an investment in automation and CI/CD. It is not something you do in a weekend. But it’s the only path that truly lets you leave the baggage behind and design infrastructure the way it should work in 2025.

2. OCVS – Run VMware As-Is, Without the Hardware

If cloud-native is the clean break, then Oracle Cloud VMware Solution (OCVS) is the strategic pause. This is the lift-and-shift strategy for enterprises that need continuity now, but don’t want to double down on on-prem investment.

With OCVS, you’re not running a fully managed service (unlike the VMware offerings on AWS, Azure, and GCP). You get the full vSphere, vSAN, NSX, and vCenter stack deployed on Oracle bare-metal infrastructure in your own OCI tenancy. You’re the admin. You manage the lifecycle. You patch and control access. But you don’t have to worry about hardware procurement, power and cooling, or supply chain delays. And you can integrate natively with OCI services: backup to OCI Object Storage, peer with Exadata, and extend IAM policies across the board.

Oracle Cloud VMware Solution

The migration is straightforward. You can replicate your existing environment (with HCX), run staging workloads side-by-side, and move VMs with minimal friction. You keep your operational model, your monitoring stack, and your tools. The difference is, you get out of your data center contract and stop burning time and money on hardware lifecycle management.

This isn’t about modernizing right now. It’s about escaping VMware hardware and licensing lock-in without losing operational control.

3. Hybrid with OCVS, Compute Cloud@Customer, and Exadata Cloud@Customer

Now we’re getting into enterprise-grade architecture. This is the model where OCI becomes a platform, not just a destination. If you’re in a regulated industry and you can’t run everything in the public cloud, but you still want the same elasticity, automation, and control, this hybrid model makes a lot of sense.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Here’s how it works: you run OCVS in the OCI public region for DR, or workloads that have to stay on vSphere. But instead of moving everything to the cloud, you deploy Compute Cloud@Customer (C3) and Exadata Cloud@Customer (ExaCC) on-prem. That gives you a private cloud footprint with the same APIs and a subset of OCI IaaS/PaaS services but physically located in your own facility, behind your firewall, under your compliance regime.

You manage workloads on C3 using the exact same SDKs, CLI tools, and Terraform modules as the public cloud. You can replicate between on-prem and cloud, burst when needed, or migrate in stages. And with ExaCC running in the same data center, your Oracle databases benefit from the same SLA and performance guarantees, with none of the data residency headaches.

This model is ideal if you’re trying to modernize without breaking compliance. It keeps you in control, avoids migration pain, and still gives you access to the full OCI ecosystem when and where you need it.

4. OCI Dedicated Region – A Public Cloud That Lives On-Prem

When public cloud is not an option, OCI Dedicated Region becomes the answer.

This isn’t a rack. It is an entire cloud region. You get all OCI services like compute, storage, OCVS, OKE, Autonomous DB, identity, even SaaS, deployed inside your own facility. You retain data sovereignty and you control physical access. You also enforce local compliance rules and operate everything with the same OCI tooling and automation used in Oracle’s own hyperscale regions.

“Oracle Dedicated Region 25” announced: get your own Oracle Cloud region with just three server racks - Publickey

What makes Dedicated Region different from C3 is the scale and service parity. While C3 delivers core IaaS and some PaaS capabilities, Dedicated Region is literally the full stack. You can run OCVS in there, connect it to your enterprise apps, and have a fully isolated VMware environment that never leaves your perimeter.

For VMware customers, it means you don’t have to choose between control and modernization. You get both.

5. Oracle Alloy – Cloud Infrastructure for Telcos and VMware Service Providers

If you’re a VMware Cloud Director customer or a telco/provider building cloud services for others, then Oracle just handed you an entirely new business model. Oracle Alloy allows you to offer your own cloud under your brand, with your pricing, and your operational control based on the same OCI technology stack Oracle runs themselves.

This is not just reselling; it is operating your own OCI cloud.

A diagram describing how to become an Oracle Alloy partner.

As a VMware-based cloud provider, Alloy gives you a path to modernize your platform and expand your services without abandoning your customer base. You can run your own VMware environment (OCVS), offer cloud-native services (OKE, DBaaS, Identity, Monitoring), and transition your customers at your own pace. All of it on a single platform, under your governance.

What makes Alloy compelling is that it doesn’t force you to pick between VMware and OCI, it lets you host both side by side. You keep your high-value B2B workloads and add modern, cloud-native services that attract new tenants or internal business units.

For providers caught in the middle of the VMware licensing storm, Alloy might be the most strategic long-term play available right now.

 

Disaster Recovery With OCI Dedicated Region

While studying for the OCI 2025 Network Professional exam, I ran into something that is easy to miss at first: OCI Realms. They define boundaries between regions, and they matter a lot, especially when working with OCI Dedicated Regions. One of the most asked questions during technical workshops for OCI Dedicated Region is: “Can I run a single OCI Dedicated Region deployment in my data center and use the public (commercial) OCI region as a secondary site?”

To answer this question, we have to understand the basic concept of realms first.

What are realms?

Oracle Cloud Infrastructure (OCI) regions are organized into separate cloud realms for customers with differing security and compliance needs. Realms are isolated from each other and share no physical infrastructure, resources, data, accounts, or network connections. OCI has multiple realms, including commercial, government, and dedicated realms. You can’t access regions that aren’t in your realm.

OCI Realms

Customer tenancies exist in a single realm and can access only regions that belong to that realm.

Example: The regions Paris, Frankfurt, Madrid, Stockholm, and Zurich all have the same realm key “OC1” and therefore belong to the same realm. The Serbian region has the realm key “OC20” and belongs to a different realm.
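The realm rule can be expressed as a simple lookup. A minimal sketch, covering only the regions from the example above (the mapping is illustrative, not a complete realm table):

```python
# Illustrative region-to-realm mapping (subset, based on the example above).
REALM_BY_REGION = {
    "eu-paris-1": "OC1",
    "eu-frankfurt-1": "OC1",
    "eu-madrid-1": "OC1",
    "eu-stockholm-1": "OC1",
    "eu-zurich-1": "OC1",
    "eu-jovanovac-1": "OC20",  # Serbia
}

def same_realm(region_a: str, region_b: str) -> bool:
    """Tenancies can only reach regions in their own realm."""
    return REALM_BY_REGION[region_a] == REALM_BY_REGION[region_b]

print(same_realm("eu-zurich-1", "eu-frankfurt-1"))  # True
print(same_realm("eu-zurich-1", "eu-jovanovac-1"))  # False
```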

Dedicated Regions are public regions assigned to a single organization. Region-specific details, such as region ID and region key are not available in public documentation. You need to ask your Oracle representative for this information for your OCI Dedicated Region.

OCI Realms - Public and Dedicated Region 

Note: Please be aware that I took this screenshot from the OCI 2025 Network Professional course’s student guide. Check Oracle’s public list of cloud regions for an up-to-date view of the currently available regions.

Physical and logical isolation between realms

Yes, this also means that the EU Sovereign Cloud realm is completely isolated from the commercial public cloud realm.

So, we have learned that by default, all commercial OCI regions live in the same realm. That means they can talk to each other using native OCI services like VCN peering, object storage replication, IAM policies, etc. 

What else should you know?

Let us come back to our question “Can I only run a single OCI Dedicated Region deployment in my data center and use the public (commercial) OCI region as a secondary site?”.

First of all, we have to ask ourselves: Why do we want to connect a Dedicated (private) Region to a public commercial region? Just because of money? 

Most customers already have two data center locations anyway. What is stopping you from deploying two OCI Dedicated Regions?

Second, what is the impact if I host my primary site locally in an OCI Dedicated Region and, for disaster recovery purposes, use a commercial region?

Connect an OCI Dedicated Region to another commercial region

Oracle does not provide the tools to connect regions across a realm boundary via its network backbone. In such cases, however, it is still possible to leverage OCI FastConnect. From my understanding, you would need Virtual Cloud Networks (VCNs) with non-overlapping CIDR blocks. You would then use Dynamic Routing Gateways (DRGs), one in your local Dedicated Region and one in the commercial region (for example, Zurich), to allow traffic between both VCNs.
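The non-overlap requirement can be checked up front. A minimal sketch using Python's standard ipaddress module, with hypothetical CIDR blocks for the two VCNs:

```python
import ipaddress

# Hypothetical CIDR blocks for the two VCNs to be connected.
dedicated_region_vcn = ipaddress.ip_network("10.10.0.0/16")   # local Dedicated Region
commercial_region_vcn = ipaddress.ip_network("10.20.0.0/16")  # e.g. Zurich

# DRG-routed traffic between the VCNs only works if the CIDRs do not overlap.
if dedicated_region_vcn.overlaps(commercial_region_vcn):
    raise ValueError("VCN CIDR blocks overlap; re-plan the address space")

print("CIDRs are disjoint, safe to route between the VCNs")
```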

The DRG is a virtual router that provides a path for private network traffic between VCNs in the same region, between a VCN and a network outside the region, such as a VCN in another Oracle Cloud Infrastructure region, an on-premises network, or a network in another cloud provider.

This image shows the basic layout of two VCNs that are remotely peered, each with a remote peering connection on the DRG

Disaster recovery across realms: Not recommended

Oracle recommends configuring disaster recovery (DR) within the same realm due to the isolation between realms.

Some OCI services might support manual DR between realms, others require custom scripts or tools (like rsync, Data Pump, or GoldenGate), and some services (like Autonomous DB or native Object Storage replication) just won’t work across realms. No replication. No failover.

Oracle’s best practices are clear: If you need disaster recovery, keep both OCI (Dedicated) Regions in the same realm.

When you cross realms, you are building everything manually: replication, IAM, automation, and failover.

The result? You are unsupported by some OCI services (make sure you validate your architecture, requirements, and configuration). And nobody wants a manual, high-risk, and unsupported path. Right? 🙂 

OCI Dedicated Region Is A Strategic Enabler Of Transformation

Many enterprises are reaching a tipping point. Rather than continuing to extend and maintain aging legacy systems, they are taking a bolder path: building new IT foundations from the ground up. This greenfield approach reflects a desire to move faster, innovate with fewer constraints, and finally free the organization from years of accumulated technical debt. But while the opportunity is clear, the execution is complex. Enterprises need a way to modernize without compromising compliance, performance, or control, especially in industries where data sensitivity and regulatory oversight are non-negotiable. Oracle Cloud Infrastructure (OCI) Dedicated Region meets this challenge head-on by offering a full public cloud experience delivered inside the enterprise’s own environment, behind its firewall, under its governance.

Build a Modern Foundation Without Constraints

When organizations choose to start fresh with a greenfield architecture, they typically aim to embrace cloud-native design patterns, modernize their application stack, and implement automation from day one. However, many enterprise-grade solutions still force trade-offs between control and capability. Either you give up data residency by using a public cloud, or you sacrifice functionality by deploying a limited private cloud or hybrid solution.

OCI Dedicated Region removes this dilemma. It provides access to the entire suite of Oracle’s cloud services, including high-performance compute, autonomous databases, machine learning, analytics, integration tools, and more. All deployed inside your own data center. This means organizations no longer need to compromise. They can build a modern, scalable, cloud-native platform that meets both their business and regulatory needs, and all without data ever leaving their premises.

OCI Dedicated Region Overview

Minimize Risk While Transforming

Enterprise transformation is rarely about a single cutover. The reality is that legacy systems and new platforms must often coexist for months and sometimes even years during migration. OCI Dedicated Region makes it possible to build your future-state environment in parallel with your current one. This decouples the pace of innovation from the constraints of legacy systems. You can test, iterate, and scale new workloads without immediately touching the systems that still keep the business running.

And because OCI Dedicated Region is operated and managed by Oracle as a service, even though it runs on your premises, your internal teams are freed from much of the operational overhead. This hybrid approach significantly reduces transformation risk, making it easier to modernize core systems without the “big bang” stress that often derails large-scale IT initiatives.

Enabling Organizational Agility

Technology transformation alone isn’t enough. Enterprises also need to rethink how they operate, how teams collaborate, make decisions, and deliver value faster. In traditional environments, IT processes are centralized and slow-moving. Provisioning new infrastructure, accessing secure data sets, or deploying applications often involves multiple layers of approval and coordination, which limits agility.

OCI changes that dynamic. With built-in support for self-service, DevOps workflows, and on-demand resource provisioning, technical and interdisciplinary teams gain the freedom to act quickly within a structured governance model. Whether it’s a development team testing a new product feature or a data team running a machine learning pipeline, OCI Dedicated Region provides the tooling to move fast without waiting. More importantly, these capabilities are consistent whether you’re running in the public OCI cloud or in your own Dedicated Region.

Autonomy with Governance

As organizations move toward more distributed operating models, where decisions are pushed closer to the edges of the business, the need for robust governance becomes even more critical. Teams must have the autonomy to act quickly, but within well-defined boundaries. OCI addresses this balance through a rich set of identity, access, and policy management features that let enterprises define who can do what, with which resources, and under what conditions.

With tools like compartments, quotas, tagging policies, and integrated audit logging, IT teams can enforce operational controls without creating friction for teams. OCI Dedicated Region applies these same governance tools locally, ensuring that even when infrastructure is deployed on-premises, the same policies and oversight models can be maintained. This allows organizations to scale innovation across teams and departments while maintaining a consistent approach to security, compliance, and resource management.

Application Portability and Workload Mobility

One of the key advantages of this consistent infrastructure, using OCI and OCI Dedicated Region, is application portability and workload mobility. In many cloud environments, moving workloads between regions, clouds, or on-premises data centers often requires significant re-architecture or compromises in functionality.

OCI takes a fundamentally different approach by ensuring consistency across environments at both the infrastructure and platform levels. Whether you’re running in the public OCI cloud, a Dedicated Region in your data center, or even a hybrid deployment that spans both, the same APIs, services, management tools, and SLAs apply. This makes it much easier to build once and deploy anywhere – without rewriting code, changing dependencies, or retraining staff.

For regulated industries or global enterprises, this enables a flexible deployment strategy where applications and data can move based on changing legal, cost, or performance requirements, and not because of vendor limitations. The result is a true “portable cloud” model where you control the placement of your workloads, not your provider.

While multi-cloud strategies are touted for their potential to mitigate vendor lock-in, they introduce significant operational complexities:

  • Diverse APIs and Management Tools: Managing different cloud platforms requires teams to learn and maintain multiple sets of tools and interfaces.

  • Inconsistent Security Models: Each cloud provider has its own security protocols, complicating unified security management.

  • Fragmented Compliance Postures: Ensuring compliance across multiple clouds can be challenging due to varying standards and certifications.

  • Increased Operational Overhead: Coordinating between different providers can lead to inefficiencies and increased costs.

These challenges often lead organizations to opt for a single cloud provider, accepting the trade-off of potential lock-in for the sake of operational simplicity.

Conclusion

What enterprises need today is not just new infrastructure, they need a platform for change. A platform that enables both IT and business transformation, that reduces friction while increasing security, and that empowers teams to deliver results faster. OCI Dedicated Region provides exactly that. It combines the agility of the public cloud with the control and assurance of on-premises deployment. It supports greenfield initiatives that demand flexibility, coexistence with legacy systems, and scalable governance. And it does all of this in a way that aligns with the realities of large, complex organizations.

Whether you’re reimagining core platforms, enabling AI-driven use cases, or simply creating a future-ready digital foundation, OCI Dedicated Region delivers the architecture, the tools, and the flexibility to move with confidence.

It’s more than an infrastructure choice: it’s a strategic enabler for long-term, enterprise-grade transformation.

From Monolithic Data Centers to Modern Private Clouds

Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions, it is trust in the executive vision, and the willingness to break from the past.

Monolithic data centers stall innovation

But here is the problem: monoliths do not scale in a modern world (or cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers with the monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us make this clear: hypervisors are not platforms! They are just silos, one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it, they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools, it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by replacing one virtualization solution with another vendor. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. Modern infrastructure is about building a cloud that does not need micro-management.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust as well:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. Given the current geopolitical situation, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation doesn’t work that way. You can’t buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and resilience is not about perfection. A system is only “perfect” until you need to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency is the foundation of trust. You can’t fix what you won’t name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months
  • Reduce friction for both IT and internal users
  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example:

  • Implement policy-as-code: rules embedded into the platform, not buried in documents
  • Automate security scans, RBAC, and drift detection
  • Give teams safe, constrained freedom instead of needing to ask for access

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).
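To make the guardrail idea concrete, here is a minimal policy-as-code sketch in Python. It is purely illustrative and not tied to any specific tool (real platforms would typically use something like OPA/Rego or the cloud provider’s native policy engine); the resource fields and rule names are assumptions for the example.

```python
# Minimal policy-as-code sketch: rules live in the platform as data plus
# code and are evaluated automatically, instead of waiting for a human
# approval step. All field names (owner, encrypted, open_ports) are
# hypothetical and for illustration only.

POLICIES = [
    ("must-have-owner", lambda r: bool(r.get("owner"))),
    ("disks-encrypted", lambda r: r.get("encrypted", False)),
    ("no-public-ssh",   lambda r: 22 not in r.get("open_ports", [])),
]

def evaluate(resource):
    """Return the names of all policies the resource violates."""
    return [name for name, check in POLICIES if not check(resource)]

# A requested VM is checked at deploy time, not in a ticket queue.
vm = {"owner": "team-a", "encrypted": True, "open_ports": [443, 22]}
print(evaluate(vm))  # ['no-public-ssh']
```

The point of the sketch is the shape, not the rules: good behavior is enforced by default, and a violation is immediate, explainable feedback rather than a rejected change request.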

And lastly, make trust measurable. Not just with uptime numbers or dashboards, but with real signals:

  • Are teams delivering faster?
  • Are incidents down?

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.
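As a sketch of what “measurable” could look like, the following Python snippet derives one such signal from deployment records. The data shape and the metric (change failure rate, loosely inspired by DORA-style delivery metrics) are assumptions for illustration, not a prescribed reporting format.

```python
# Hypothetical delivery records: each deployment notes whether it
# caused an incident. The structure is an assumption for this example.

deployments = [
    {"service": "portal", "caused_incident": False},
    {"service": "portal", "caused_incident": True},
    {"service": "api",    "caused_incident": False},
    {"service": "api",    "caused_incident": False},
]

def change_failure_rate(records):
    """Share of deployments that caused an incident (0.0 .. 1.0)."""
    if not records:
        return 0.0
    failures = sum(1 for r in records if r["caused_incident"])
    return failures / len(records)

print(f"{change_failure_rate(deployments):.0%}")  # 25%
```

Tracked over time, a falling change failure rate is exactly the kind of visible, repeatable signal that turns “trust us” into “here is the trend.”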

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more. 

Private Cloud Autarky – You Are Safe Until The World Moves On

I believe it was 2023 when the term “autarky” first came up in my conversations with several customers who maintain their own data centers and private clouds. Interestingly, the word popped up again recently at work, although until then I only knew it from photovoltaic systems. And it kept my mind busy for several weeks.

What is autarky?

To understand autarky in the IT world and its implications for private clouds, an analogy from the photovoltaic (solar power) system world offers a clear parallel. Just as autarky in IT means a private cloud that is fully self-sufficient, autarky in photovoltaics refers to an “off-grid” solar setup that powers a home or facility without relying on the external electrical grid or outside suppliers.

Imagine a homeowner aiming for total energy independence – an autarkic photovoltaic system. Here is what it looks like:

  • Solar Panels: The homeowner installs panels to capture sunlight and generate electricity.
  • Battery: Excess power is stored in batteries (e.g., lithium-ion) for use at night or on cloudy days.
  • Inverter: A device converts solar DC power to usable AC power for appliances.
  • Self-Maintenance: The homeowner repairs panels, replaces batteries, and manages the system without calling a utility company or buying parts. 

This setup cuts ties with the power grid – no monthly bills, no reliance on power plants. It is a self-contained energy ecosystem, much like an autarkic private cloud aims to be a self-contained digital ecosystem.

Question: which installation partner keeps enough spare parts in stock, and how many homeowners can repair the whole system themselves?

Let’s align this with autarky in IT:

  • Solar Panels = Servers and Hardware: Just as panels generate power, servers (compute, storage, networking) generate the cloud’s processing capability. Theoretically, an autarkic private cloud requires the organization to build its own servers, similar to crafting custom solar panels instead of buying them from a vendor.
  • Battery = Spares and Redundancy: Batteries store energy for later; spare hardware (e.g., extra servers, drives, networking equipment) keeps the cloud running when parts fail. 
  • Inverter = Software Stack: The inverter transforms raw power into usable energy, like how a software stack (OS, hypervisor) turns hardware into a functional cloud.
  • Self-Maintenance = Internal Operations: Fixing a solar system solo parallels maintaining a cloud without vendor support – both need in-house expertise to troubleshoot and repair everything.

Let me repeat it: both need in-house expertise to troubleshoot and repair everything. Everything.

The goal is self-sufficiency and independence. So, what are companies doing?

An autarkic private cloud might stockpile Dell servers or Nvidia GPUs upfront, but that first purchase ties you to external vendors. True autarky would mean mining silicon and forging chips yourself – impractical, just like growing your own silicon crystals for panels.

The problem

In practice, autarky for private clouds is an extreme goal. It promises maximum control, which is ideal for scenarios like military secrecy, regulatory isolation, or distrust of global supply chains, but it clashes with the realities of modern IT:

  • Once the last spare dies, you are done. No new tech without breaking autarky.
  • Autarky trades resilience for stagnation. Your cloud stays alive but grows irrelevant.
  • Autarky’s price tag limits it to tiny, niche clouds – not hyperscale rivals.
  • Future workloads are a guessing game. Stockpile too few servers, and you can’t expand. Too many, and you have wasted millions. A 2027 AI boom or quantum shift could make your equipment useless.

But where is this idea of self-sufficiency or sovereign operations coming from? Nowadays? Geopolitical resilience.

Sanctions or trade wars will not starve your cloud. A private (hyperscale) cloud that answers to no one, free from external risks or influence. That is the whole idea.

What is the probability of such sanctions? Who knows… but this is a number that has to be defined for each case depending on the location/country, internal and external customers, and requirements.

If it happens, is it foreseeable, and what does it force you to do? Does it trigger a cloud-exit scenario?

I just know that if there are sanctions, any hyperscaler in your country has the same problems. No matter if it is a public or dedicated region. That is the blast radius. It is not only about you and your infrastructure anymore.

What about private disconnected hyperscale clouds?

When hosting workloads in the public cloud, organizations care more about data residency, regulations, and the US CLOUD Act than about autarky.

Hyperscale clouds like Microsoft Azure and Oracle Cloud Infrastructure (OCI) are built to deliver massive scale, flexibility, and performance but they rely on complex ecosystems that make full autarky impossible. Oracle offers solutions like OCI Dedicated Region and Oracle Alloy to address sovereignty needs, giving customers more control over their data and operations. However, even these solutions fall short of true autarky and absolute sovereign operations due to practical, technical, and economic realities.

A short explanation from Microsoft gives us a hint why that is the case:

“Additionally, some operational sovereignty requirements, like Autarky (for example, being able to run independently of external networks and systems) are infeasible in hyperscale cloud-computing platforms like Azure, which rely on regular platform updates to keep systems in an optimal state.”

So, what are customers asking for when they are interested in hosting their own dedicated cloud region in their data centers? Disconnected hyperscale clouds.

But hosting an OCI Dedicated Region in your data center does not change the underlying architecture of Oracle Cloud Infrastructure (OCI). Nor does it change the upgrade or patching process, or the whole operating model.

Hyperscale clouds do not exist in a vacuum. They lean on a web of external and internal dependencies to work:

  • Hardware Suppliers. For example, most public clouds use Nvidia’s GPUs for AI workloads. Without these vendors, hyperscalers could not keep up with the demand.
  • Global Internet Infrastructure. Hyperscalers need massive bandwidth to connect users worldwide. They rely on telecom giants and undersea cables for internet backbone, plus partnerships with content delivery networks (CDNs) like Akamai to speed things up.
  • Software Ecosystems. Open-source tools like Linux and Kubernetes are part of the backbone of hyperscale operations.
  • Operations. Think about telemetry data and external health monitoring.

Innovation depends on ecosystems

The tech world moves fast. Open-source software and industry standards let hyperscalers innovate without reinventing the wheel. OCI’s adoption of Linux or Azure’s use of Kubernetes shows they thrive by tapping into shared knowledge, not by isolating themselves. Going it alone would make costs skyrocket: designing custom chips, giving away or sharing operational control, or skipping partnerships would drain billions – money better spent on new features, services, or lower prices.

Hyperscale clouds are global by nature, and this includes Oracle Dedicated Region and Alloy. In return, you get:

  • Innovation
  • Scalability
  • Cybersecurity
  • Agility
  • Reliability
  • Integration and Partnerships

Again, by nature and design, hyperscale clouds – even those hosted in your data center as private clouds (OCI Dedicated Region and Alloy) – are still tied to a hyperscaler’s software repositories, third-party hardware, operations personnel, and global infrastructure.

Sovereignty is real, autarky is a dream

Autarky sounds appealing: a hyperscale cloud that answers to no one, free from external risks or influence. Imagine OCI Dedicated Region or Oracle Alloy as self-contained kingdoms, untouchable by global chaos.

Autarky sacrifices expertise for control, and the result would be a weaker, slower, and probably less secure cloud. Self-sufficiency is not cheap: hyperscalers spend billions of dollars yearly on infrastructure, leaning on economies of scale and vendor deals. Tech moves at lightning speed. New GPUs drop yearly, and software patches roll out daily (think of roughly 1’000 updates/patches a month). Autarky means falling behind. It would turn your hyperscale cloud into a relic.

Please note that there are other solutions, like air-gapped isolated cloud regions, but those target a specific industry and set of customers.