Oracle Cloud Infrastructure – The Cloud That Actually Delivers

In 2024, I wrote an article about cloud repatriation (aka reverse cloud migrations) and how businesses are adopting a more nuanced, workload-centric strategy rather than simply bringing some workloads back to on-premises data centers. That article did not focus primarily on costs, but now seems to be the right time to take a closer look at cloud economics and why Oracle Cloud Infrastructure (OCI) is a path worth investigating.

Disclaimer: The views and opinions are my own, not my employer’s.

Why Enterprises Are Bringing Workloads Back and Why They Are Still Missing the Obvious Answer

Let us be honest, cloud repatriation is happening because customers feel let down. The promised land of cloud economics has not delivered. CFOs and CIOs are looking at their monthly cloud bills and asking the same question: “Where are the savings we were promised?”

Many jumped in expecting linear cost efficiency, which means that you pay for what you use, scale when you need, and save when you do not. But instead, they found themselves locked into complex pricing models, surprise data egress charges, and massive bills that just do not line up with actual value. It is no wonder so many companies are taking a second look at their on-prem strategies.

While the cloud offers a ton of advantages, not all clouds are created equal. Most enterprises default to the big two or three providers, thinking that is the safest, smartest choice. But over time, they realize their workloads – especially those that are I/O-heavy, latency-sensitive, or simply require consistent performance – are not running efficiently, and they are costing a fortune.

Is cloud economics more myth than math? The bigger question we should all be asking is: why are we ignoring better options, especially when they are cheaper and more performant?

Geopolitics – The New Cloud Strategy Trigger

Beyond cost and performance, geopolitical uncertainty is now also a major factor shaping cloud decisions. From shifting data sovereignty laws to rising tensions between global superpowers, enterprises are realizing that overreliance on a single hyperscaler can expose them to regulatory, compliance, and supply chain risks.

Guess what… the problem was always there. It is nothing new. And “the problem” is much bigger than just public clouds. But let us park this topic for a while.

So, why are we ignoring better options?

OCI was architected from the ground up with a modern, high-performance, cost-predictable mindset. And yet, for reasons that are still not 100% clear to me, it gets overlooked far too often.

Let us talk numbers and say that OCI delivers 30–50% lower TCO than other hyperscalers. Let us assume that this is not fluff but reality: compute, storage, and networking are simply cheaper on OCI, and they come with better performance.

What is the Problem?

Here is what I do not get: if cost and performance are the two biggest drivers of cloud repatriation, and OCI excels at both, why are more customers not seriously considering Oracle Cloud? Does it have something to do with the KPIs decision-makers are measured on? Is it about not losing face? The sunk cost fallacy? All of the above? Is it because Oracle is not shouting as loudly as the others? Is it inertia, sticking with what is familiar even when it is not working? Or is it about repeating the same mistake everyone else is making?

Whatever the reason, it is time for a reset. Enterprises need to look beyond the usual suspects and start asking themselves some hard questions:

  • Are we really getting value from our cloud provider?
  • Are we paying a premium for subpar performance?
  • Are we repatriating workloads simply because we chose the wrong cloud in the first place?

The Real Value of OCI – Look Beyond Price

Everyone loves talking about cloud cost savings, and yes, OCI delivers big on that front. But the conversation should not stop there. In fact, some of the biggest reasons customers stay with OCI long term have nothing to do with price tags and everything to do with how it actually feels to run real workloads on a platform built the right way.

Let us talk about security. With OCI, it is not something you bolt on after the fact or pay extra for; it is built in from the ground up. Encryption is always on. Patching is automated and proactive. The way tenancy and network isolation work in OCI gives you far less attack surface to worry about.

Performance is another big one. It is not just fast, it is predictably fast. You do not have to overprovision or tweak the system to get the speed you need. The network is high-bandwidth and low-latency by default. Compute and storage are designed to handle enterprise-grade workloads without tuning. It just works, out of the box.

But here is the part that is harder to quantify and just as important: operational peace of mind. With OCI, teams spend less time firefighting and more time building. You are not constantly chasing down unpredictable behaviour or buried in billing surprises (SLAs and prices are the same for ALL regions!).

And if control and flexibility are part of your strategy, then OCI goes even further. With OCI Dedicated Region, you can bring the full power of public cloud into your own data center, fully managed by Oracle, with no compromises. For partners and service providers, Oracle Alloy offers the ability to build and brand your own cloud, running on OCI’s infrastructure, while retaining control over operations and customer relationships. And when data residency, air-gapping, or national security are non-negotiable, Oracle Cloud Isolated Region delivers a physically and logically isolated cloud, fully disconnected from the public internet. No other cloud provider comes close to offering this kind of architectural flexibility at this level of maturity.

Final Thoughts

But now back to the topic. 🙂 Cloud repatriation is much more complex and bigger than most of us think. It should not just be a retreat; it should be a recalibration. And that recalibration should include a serious look at OCI. If you want predictable costs, industry-leading performance, flexibility, and real cloud economics, not just marketing slides, then it is time to rethink what cloud success really looks like.

Because at the end of the day, the cloud should not be about what is trendy. It should be about what works and what pays off. And about who can execute your strategy.

Becoming an Oracle Cloud Infrastructure Certified DevOps Professional Part 2 – DevSecOps with OCI

DevSecOps is the backbone of modern software delivery. Whether you are a fast-moving digital startup or a large enterprise modernizing legacy systems, having an automated, secure CI/CD pipeline is what separates high-performing teams from those always stuck firefighting. Most teams stitch together DevSecOps pipelines using a mix of open-source tools, third-party platforms, and scripts.

OCI gives you a clean, enterprise-grade stack for DevSecOps that is ready out of the box. We are talking source control, automated builds and deployments, secret management, container orchestration, and real-time monitoring – everything tightly integrated, deeply secure, and easy to use.

So, the second part of this blog series is about OCI’s developer services. If you missed the first part about Oracle Kubernetes Engine (OKE), click here.

OCI Developer Services

Why Enterprises and Digital Natives Should Look at OCI

Let’s break it down:

  • Enterprises get the compliance, SLAs, and governance they need with a cloud-native platform that integrates with existing Oracle workloads and mission-critical systems.

  • Digital natives and startups get a modern, developer-first experience without juggling 15 different tools. OCI’s pay-as-you-go model and generous free tier also help teams stay lean while scaling.

And it is built for hybrid and multicloud from the start. OCI works whether you are running greenfield Kubernetes apps or still managing monoliths.

How to Build a Complete DevSecOps Pipeline on OCI

As part of the journey of becoming a certified OCI DevOps Professional, you need to understand how to build a complete and secure pipeline using Oracle Cloud Infrastructure’s native services. Think of this as your blueprint for DevSecOps: secure, scalable, and automated from code to production. The following diagram illustrates this reference architecture:

DevSecOps deployment pipeline in OCI.

Plan, Collaborate & Set Up Infrastructure

OCI DevOps Code Repositories

Private Git repositories hosted by the DevOps service. You can store, manage, and develop source code in OCI DevOps Code Repositories, create your own private repositories, or connect to external repositories such as GitHub, GitLab, Bitbucket Cloud, Visual Builder Studio, Bitbucket Server, and GitLab Server. It is perfect for managing application code, Terraform configurations, and CI/CD definitions all in one place.

OCI Resource Manager (Terraform as a Service)

Automate infrastructure provisioning and lifecycle using Oracle’s managed Terraform service:

  • Write declarative infrastructure as code
  • Apply it across multiple compartments with consistent governance
  • Integrate with Vault, IAM, and tagging for full automation

This lets you define environments (dev/stage/prod) as code and roll them out safely and repeatably.
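Stack definitions for Resource Manager are ordinary Terraform. As a minimal, hypothetical sketch (the variable and display name below are placeholders, not values from any real tenancy), a stack might declare a single VCN:

```hcl
# Minimal Terraform sketch for an OCI Resource Manager stack.
# compartment_ocid is supplied as a stack variable at apply time.
variable "compartment_ocid" {}

resource "oci_core_vcn" "dev_vcn" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "dev-vcn" # placeholder name
}
```

You would upload this configuration as a stack, then run plan and apply jobs from the Resource Manager console, CLI, or API.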

The following image represents a generalized view of the Resource Manager workflow:

OCI Vault

Every DevSecOps pipeline needs a central place for secrets and encryption keys. Vaults are logical entities where the Key Management Service creates and durably stores vault keys and secrets.

  • Store passwords, API tokens, certs, and encryption keys securely
  • Integrated with KMS (Key Management Service) for encryption at rest and in transit
    • Integrates encryption with other OCI services such as storage, database, and Fusion Applications for protecting data stored in these services
  • Automate access via IAM policies and code
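To make the Vault integration concrete: the Secrets service returns secret content base64-encoded (for example via `oci.secrets.SecretsClient.get_secret_bundle` in the Python SDK). The sketch below shows only the decoding step, with a locally encoded stand-in for a fetched bundle so it runs without credentials:

```python
import base64

def decode_secret_content(b64_content: str) -> str:
    """Decode the base64 payload of a secret bundle into plaintext."""
    return base64.b64decode(b64_content).decode("utf-8")

# Stand-in for the content field of a fetched secret bundle:
fake_bundle_content = base64.b64encode(b"db-password-123").decode("ascii")
print(decode_secret_content(fake_bundle_content))  # db-password-123
```

In a real pipeline you would fetch the bundle by secret OCID and inject the decoded value into a build or deploy step instead of printing it.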

Develop, Build, and Test Code

OCI DevOps Build Pipelines

A build pipeline takes a commit ID from your source code repository and uses that source code to run your build instructions. Build pipelines define a set of stages for the build process – building, testing, and compiling software artifacts, delivering artifacts to OCI repositories, and optionally triggering a deployment. You define the flow and instructions of your build run in a build spec file, and you can define build pipelines using YAML or the console:

  • Automate Java, Python, Node.js, Docker, and Go builds
  • Customize steps for unit tests, code quality scans (e.g., SonarQube)
  • Connect directly to OCI repos or GitHub

The Application Dependency Management (ADM) service provides you with an integrated vulnerability knowledge base that you can use from the Oracle Cloud Infrastructure (OCI) DevOps build pipelines to detect vulnerabilities in the packages used for the build.
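A minimal build spec file might look like the following sketch. The step names, commands, and artifact name are hypothetical; the overall layout (version, component, steps, outputArtifacts) follows the documented build spec format:

```yaml
version: 0.1
component: build
timeoutInSeconds: 600
shell: bash

steps:
  - type: Command
    name: "Run unit tests"          # hypothetical step
    command: |
      python -m pytest tests/

  - type: Command
    name: "Build container image"   # hypothetical step
    command: |
      docker build -t my-app:latest .

outputArtifacts:
  - name: app_image                 # hypothetical artifact name
    type: DOCKER_IMAGE
    location: my-app:latest
```

The output artifact can then be delivered to the OCI Container Registry in a Deliver Artifacts stage and picked up by a deployment pipeline.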

OCI Cloud Shell

A browser-based Linux shell with common developer tools preinstalled:

  • OCI CLI
  • Git, Docker, Terraform, kubectl, Helm, and more

Ideal for quick testing, debugging, or managing your pipeline without installing tools locally.

OCI Application Performance Monitoring (APM)

Do not wait until production to spot performance issues:

  • Distributed tracing across microservices
  • Real User Monitoring (RUM)
  • Availability Monitoring
  • Server Monitoring

Shift-Left Security from the Start

OCI Vault (again, because security is never just one step)

Use Vault throughout your pipeline to securely inject secrets into build/deploy steps.

OCI Cloud Guard

Cloud Guard examines your Oracle Cloud Infrastructure resources for security weaknesses related to configuration, and your operators and users for risky activities. Upon detection, Cloud Guard can suggest, assist, or take corrective actions, based on your configuration.

  • Monitors for risky configurations (open ports, unused keys, misconfigured buckets)
  • Uses rules and detectors to flag and respond to threats
  • Integrates with other OCI services for automated remediation

Perfect for enforcing security baselines as part of your CI/CD process.

OCI Security Zones

Apply guardrails with security policies baked into compartments:

  • Blocks risky actions (e.g., public DBs)
  • Ensures workloads meet compliance and governance standards automatically

In the reference architecture, resources in a region are organized into two compartments; one compartment is associated with a security zone, a security zone recipe, and a security zone target in Cloud Guard.

Security Zones let you be confident that your resources in Oracle Cloud Infrastructure, including Compute, Networking, Object Storage, Block Volume, and Database resources, comply with your security policies.

Deploy Automatically (and Confidently)

OCI DevOps Deployment Pipelines

A deployment pipeline is a sequence of steps for delivering and deploying a set of artifacts to a target environment. You control the flow and logic of your software release by defining stages that run serially or in parallel. This is the delivery side of CI/CD:

  • Create multi-stage pipelines with approval gates, rollbacks, and parallel deployments
  • Deploy to OKE, Functions, Compute, or custom targets
  • Track deployment history and success/failure per environment

Works seamlessly with build pipelines for full Git-to-production automation.

OCI Functions

Event-driven, serverless compute built on Fn Project:

  • Write functions in Java, Python, Node.js, Go
  • Scale automatically based on events or triggers
  • Deploy from build artifacts or container images

Great for microservices, APIs, scheduled jobs, or glue logic in your pipeline.
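To sketch what such a function looks like: a real OCI function is invoked by the Fn runtime and typically uses the Python FDK (`fdk`) to build its response; the version below keeps the handler as plain Python (no FDK import) so it runs anywhere, and the payload field `name` is a made-up example:

```python
import io
import json

def handler(ctx, data: io.BytesIO = None):
    """Parse an event payload and return a greeting dict.

    In a real OCI function, ctx is the Fn invocation context and the
    return value would be wrapped with fdk.response.Response.
    """
    name = "world"
    if data is not None:
        try:
            payload = json.loads(data.getvalue())
            name = payload.get("name", name)
        except (ValueError, AttributeError):
            pass  # fall back to the default on malformed input
    return {"message": f"Hello, {name}!"}

print(handler(None, io.BytesIO(b'{"name": "OCI"}')))  # {'message': 'Hello, OCI!'}
```

The same handler logic could be triggered by an API Gateway route, an Events rule, or a deployment-pipeline hook.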

Oracle Kubernetes Engine (OKE)

The reference architecture deploys the OKE cluster as one of the target environments. The worker nodes run Oracle Linux. This architecture uses three worker nodes in the cluster, but you can create up to 5,000 nodes per cluster. Managed Kubernetes, Oracle-style:

  • CNCF-compliant, fully managed clusters
  • Integrated with IAM, Container Registry, Load Balancers, and Logging
  • Auto-scaling, node pools, and lifecycle management

Perfect for teams building containerized applications or adopting GitOps practices.

OCI Container Registry

This architecture deploys the registry as a private Docker registry for internal use; Docker images are pushed to and pulled from it. You can also use the registry as a public Docker registry, enabling any user with internet access and knowledge of the appropriate URL to pull images from public repositories in OCI.

  • Push/pull images securely
  • Scan images with third-party security tools
  • Deploy directly into OKE or Functions

Acts as the bridge between build and deploy stages in your pipeline.

Observability – Monitor, Operate, and Optimize

OCI Logging

The OCI Logging service stores logs related to the deployment; the deployment runtime output and the final results of the deployment are shown as log entries. The OCI Notifications service provides visibility into the latest state of the deployment project and its resources so that you can take any necessary action. For example, you are notified when an important event occurs, such as a stage in a deployment pipeline waiting for approval; when you receive the notification message, you can go to the DevOps deployment pipeline and approve the stage. Logging gives you centralized logging across all OCI services and custom apps:

  • Collect, search, and filter logs in real time
  • Create custom queries and alerts
  • Export logs to Object Storage or third-party SIEMs

Feeds directly into security tools and helps debug issues post-deployment.

OCI Monitoring

Metric collection and alerting at every level:

  • Out-of-the-box metrics for compute, load balancers, databases, Kubernetes, and more
  • Custom metrics via SDKs
  • Alarms with notifications (e-mail, Slack, etc.)

This image shows metrics and alarms as used in the Monitoring service.
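The custom-metrics bullet can be made concrete. The sketch below builds a payload shaped like the objects the OCI Python SDK posts via the Monitoring API (`PostMetricDataDetails` / `MetricDataDetails` / `Datapoint`), but it uses plain dicts so it runs without the SDK or credentials; the namespace, compartment OCID, metric name, and dimensions are all made up:

```python
from datetime import datetime, timezone

def build_metric_payload(value: float) -> dict:
    """Assemble a single custom-metric datapoint (illustrative values only)."""
    return {
        "namespace": "custom_app_metrics",                # hypothetical namespace
        "compartmentId": "ocid1.compartment.oc1..example", # placeholder OCID
        "name": "checkout_latency_ms",                    # hypothetical metric
        "dimensions": {"service": "checkout", "env": "dev"},
        "datapoints": [
            {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "value": value,
            }
        ],
    }

payload = build_metric_payload(42.5)
print(payload["name"], payload["datapoints"][0]["value"])
```

In a real setup you would pass the equivalent SDK model objects to the Monitoring client and then define an alarm on the metric.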

Events

Oracle Cloud Infrastructure Events enables you to create automation based on the state changes of resources throughout your tenancy. Use Events to allow your development teams to automatically respond when a resource changes its state.

Here are some examples of how you might use Events:

  • Send a notification to a DevOps team when a database backup completes.
  • Convert files of one format to another when files are uploaded to an Object Storage bucket.
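Conceptually, an Events rule matches the type of each incoming event and routes it to an action such as a Notifications topic or a function. The sketch below mimics that matching against CloudEvents-style JSON; the `eventType` strings are assumptions based on the service's naming scheme, not verified values:

```python
import json

# Event types this hypothetical rule subscribes to (assumed strings):
RULE_EVENT_TYPES = {
    "com.oraclecloud.objectstorage.createobject",
    "com.oraclecloud.databaseservice.autonomous.database.backup.end",
}

def rule_matches(event_json: str) -> bool:
    """Return True if the event's type is one the rule subscribes to."""
    event = json.loads(event_json)
    return event.get("eventType") in RULE_EVENT_TYPES

sample = json.dumps({
    "eventType": "com.oraclecloud.objectstorage.createobject",
    "data": {"resourceName": "report.csv"},
})
print(rule_matches(sample))  # True
```

The real service evaluates these conditions for you; you only declare the condition and the action in the rule.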

Final Thoughts

Oracle Cloud Infrastructure might not always be the flashiest name in DevOps circles, but when it comes to building a secure, scalable, all-in-one DevSecOps pipeline, it delivers.

Whether you are modernizing a legacy stack or building cloud-native microservices, OCI gives you the tools to:

  • Automate everything

  • Bake in security and governance

  • Monitor, understand, and optimize

You just need the right foundation, and OCI makes it possible.

Becoming an Oracle Cloud Infrastructure Certified DevOps Professional Part 1 – Introduction to Oracle Kubernetes Engine

I have just started diving into the OCI DevOps Professional certification course, so why not share some lessons and important information I gathered from the official Oracle course as part of my preparation? My goal? To pass the exam in the next few weeks. In this first part, I am covering the core concepts of Oracle Kubernetes Engine (OKE). Please note that I am also copy-pasting parts of the official documentation.

What is Oracle Kubernetes Engine?

Oracle Kubernetes Engine (OKE) is Oracle Cloud Infrastructure’s managed Kubernetes service. It is designed to let you deploy, manage, and scale containerized applications using Kubernetes, but without the heavy lifting of setting up and maintaining the control plane yourself.

OKE is:

  • Certified by the CNCF (Cloud Native Computing Foundation)

  • Fully integrated with OCI services like networking, load balancing, and IAM

  • Designed for production workloads, with a choice between traditional VM-based clusters or serverless options.

OKE Integration with other OCI Services

You get the flexibility and power of Kubernetes, while Oracle handles the control plane: updates, availability, and scaling.

Kubernetes Clusters

OKE supports two types of Kubernetes clusters: basic and enhanced.

  • Enhanced cluster: Enhanced clusters support all available features, including features not supported by basic clusters (such as virtual nodes, cluster add-on management, workload identity, and additional worker nodes per cluster). Enhanced clusters come with a financially-backed service level agreement (SLA).
    • Cluster add-ons: In an enhanced cluster, you can use Kubernetes Engine to manage both essential add-ons and a growing portfolio of optional add-ons. You can enable or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, and manage add-on specific customizations.
  • Basic cluster: Basic clusters support all the core functionality provided by Kubernetes and Kubernetes Engine, but none of the enhanced features that Kubernetes Engine provides. Basic clusters come with a service level objective (SLO), but not a financially-backed service level agreement (SLA).
    • Cluster add-ons: In a basic cluster, you have more responsibility and less flexibility when managing cluster add-ons. You are responsible for upgrading essential add-ons, but you cannot install or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, or manage add-on specific customizations. In addition, you are responsible for installing, managing, and maintaining any optional add-ons you want in the cluster.

If you are aiming to build scalable, secure, and production-ready apps, enhanced clusters are the way to go.

Note: A new cluster created using the Console is an enhanced cluster by default. If you use the CLI or API to create a cluster, it is created as a basic cluster by default.

Kubernetes Cluster Control Plane

The Kubernetes cluster control plane implements core Kubernetes functionality. It runs on compute instances (known as ‘control plane nodes’) in the Kubernetes Engine service tenancy. The cluster control plane is fully managed by Oracle.

The cluster control plane runs a number of processes, including:

  • kube-apiserver to support Kubernetes API operations requested from the Kubernetes command line tool (kubectl) and other command line tools, as well as from direct REST calls. The kube-apiserver includes admissions controllers required for advanced Kubernetes operations.
  • kube-controller-manager to manage different Kubernetes components (for example, replication controller, endpoints controller, namespace controller, and serviceaccounts controller)
  • kube-scheduler to control where in the cluster to run jobs
  • etcd to store the cluster’s configuration data
  • cloud-controller-manager to update and delete worker nodes (using the node controller), to create load balancers when Kubernetes services of type: LoadBalancer are created (using the service controller), and to set up network routes (using the route controller). The oci-cloud-controller-manager also implements a container-storage-interface, a flexvolume driver, and a flexvolume provisioner (for more information, see the OCI Cloud Controller Manager (CCM) documentation on GitHub).

Kubernetes Data Plane and Worker Nodes

Worker nodes are where you run the applications that you deploy in a cluster.

Each worker node runs a number of processes, including:

  • kubelet to communicate with the cluster control plane
  • kube-proxy to maintain networking rules

The cluster control plane processes monitor and record the state of the worker nodes and distribute requested operations between them.

OKE Kubernetes Worker Nodes

A node pool is a subset of worker nodes within a cluster that all have the same configuration. Node pools enable you to create pools of machines within a cluster that have different configurations. For example, you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have a minimum of one node pool, but a node pool need not contain any worker nodes.

Worker nodes in a node pool are connected to a worker node subnet in your VCN.

Supported Images and Shapes for Worker Nodes

When creating a node pool with Kubernetes Engine, you specify that the worker nodes in the node pool are to be created as one or the other of the following:

  • Virtual nodes, fully managed by Oracle. Virtual nodes provide a ‘serverless’ Kubernetes experience, enabling you to run containerized applications at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters. You can only create virtual nodes in enhanced clusters.
  • Managed nodes, running on compute instances (either bare metal or virtual machine) in your tenancy, and at least partly managed by you. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity. You can create managed nodes in both basic clusters and enhanced clusters.

OKE Managed Nodes and Virtual Nodes

Note: You can choose to upgrade the basic cluster to an enhanced cluster later, but you cannot downgrade an enhanced cluster to a basic cluster.

Supported Images for Managed Nodes

OKE supports the provisioning of worker nodes (managed nodes only) using some, but not all, of the latest Oracle Linux images provided by Oracle Cloud Infrastructure.

Platform Images:

  • Provided by Oracle and only contain an Oracle Linux operating system
  • The managed nodes’ initial boot triggers a software download and setup by OKE

OKE Images:

  • Built on platform images
  • OKE images are optimized for use as managed node base images, with all the necessary configurations and required software
  • For faster managed node provisioning during cluster creation and updates

Custom images:

  • Can be built on supported platform images and OKE images
  • Custom images contain Oracle Linux OSes with customizations, configurations and software that were present when you created the image.

Shapes for Managed Nodes and Virtual Nodes

OKE supports the provisioning of worker nodes (both managed nodes and virtual nodes) using many, but not all, of the shapes provided by Oracle Cloud Infrastructure. More specifically:

  • Managed Nodes
    • Supported for managed nodes:
      • Flexible shapes, except flexible shapes used to create burstable instances (for example, VM.Standard.E3.Flex)
      • Bare metal shapes, including standard shapes and GPU shapes
      • HPC shapes, except on bare metal instances in RDMA networks
      • VM shapes, including standard shapes and GPU shapes
      • Dense I/O shapes
      • For the list of supported GPU shapes, see GPU shapes supported by Kubernetes Engine (OKE).
    • Not supported:
      • Dedicated VM host shapes
      • Micro VM shapes
      • HPC shapes on bare metal instances in RDMA networks
      • Flexible shapes used to create burstable instances (for example, VM.Standard.E3.Flex)
  • Virtual Nodes
    • Supported for virtual nodes:
      • Pod.Standard.A1.Flex, Pod.Standard.E3.Flex, Pod.Standard.E4.Flex.
    • Not Supported: All other shapes.

Self-Managed Nodes

A self-managed node is a worker node hosted on a compute instance (or instance pool) that you have created yourself in Compute service, rather than on a compute instance that Kubernetes Engine has created for you. Self-managed nodes are often referred to as Bring Your Own Nodes (BYON). Unlike managed nodes and virtual nodes (which are grouped into managed node pools and virtual node pools respectively), self-managed nodes are not grouped into node pools.

Using the Compute service enables you to configure compute instances for specialized workloads, including compute shape and image combinations that are not available for managed nodes and virtual nodes.

Note: You can only add self-managed nodes to enhanced clusters.

Supported Images and Shapes for Self-Managed Nodes

Kubernetes Engine supports the provisioning of self-managed nodes using some, but not all, of the Oracle Linux images and shapes provided by Oracle Cloud Infrastructure. More specifically:

  • Images supported for self-managed nodes: The image you select for the compute instance hosting a self-managed node must be one of the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) images, and the image must have a Release Date of March 28, 2023 or later. See Image Requirements.
  • Shapes supported for self-managed nodes: The shape you can select for the compute instance hosting a self-managed node is determined by the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) image you select for the compute instance.

Prerequisites to create an OKE Cluster

Before you can use Kubernetes Engine to create a Kubernetes cluster, you must meet several prerequisites. The list can be found here: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengprerequisites.htm

Part 2: https://www.cloud13.ch/2025/04/29/becoming-an-oracle-cloud-infrastructure-certified-devops-professional-part-2-devsecops-with-oci/ 

 

Oracle Cloud Infrastructure 2025 Architect Associate Study Guide

Are you preparing for the Oracle Cloud Infrastructure (OCI) 2025 Architect Associate Exam? Me too. 🙂

Whether you are just starting your cloud journey or leveling up your OCI skills, the 2025 Architect Associate exam is designed to test your understanding of core OCI services across compute, networking, storage, IAM, and more. It is about knowing how to build and manage scalable, secure, high-performing infrastructure on Oracle Cloud Infrastructure.

In this guide, I have broken down everything you need to know and mapped it directly to Oracle’s official documentation.

The following list shows the exam objectives and their weightings:

  • Compute – 20%
  • Networking – 35%
  • Storage – 25%
  • Identity and Access Management – 20%

Reminder – Oracle courses are free: https://mylearn.oracle.com/ou/learning-path/become-an-oci-architect-associate-2025/147631 

Last year, when I was studying for the 2024 version of the exam without any prior knowledge of OCI, I only used the online course and the official documentation to pass the exam.

Good luck! 🙂

 


1. Compute


2. Networking


3. VCN Connectivity


4. DNS and Traffic Management


5. Load Balancing


6. Network Command Center Services


7. Storage

Block Storage

Object Storage

File Storage


8. Identity and Access Management (IAM)

 

Private Cloud Autarky – You Are Safe Until The World Moves On

I believe it was 2023 when the term “autarky” came up in my conversations with several customers who maintain their own data centers and private clouds. Interestingly, this word recently popped up again at work. I only knew it from photovoltaic systems, and it kept my mind busy for several weeks.

What is autarky?

To understand autarky in the IT world and its implications for private clouds, an analogy from the photovoltaic (solar power) system world offers a clear parallel. Just as autarky in IT means a private cloud that is fully self-sufficient, autarky in photovoltaics refers to an “off-grid” solar setup that powers a home or facility without relying on the external electrical grid or outside suppliers.

Imagine a homeowner aiming for total energy independence – an autarkic photovoltaic system. Here is what it looks like:

  • Solar Panels: The homeowner installs panels to capture sunlight and generate electricity.
  • Battery: Excess power is stored in batteries (e.g., lithium-ion) for use at night or on cloudy days.
  • Inverter: A device converts solar DC power to usable AC power for appliances.
  • Self-Maintenance: The homeowner repairs panels, replaces batteries, and manages the system without calling a utility company or buying parts. 

This setup cuts ties with the power grid – no monthly bills, no reliance on power plants. It is a self-contained energy ecosystem, much like an autarkic private cloud aims to be a self-contained digital ecosystem.

Question: Which partner (installation company) has enough spare parts and how many homeowners can repair the whole system by themselves?

Let’s align this with autarky in IT:

  • Solar Panels = Servers and Hardware: Just as panels generate power, servers (compute, storage, networking) generate the cloud’s processing capability. Theoretically, an autarkic private cloud requires the organization to build its own servers, similar to crafting custom solar panels instead of buying from any vendor.
  • Battery = Spares and Redundancy: Batteries store energy for later; spare hardware (e.g., extra servers, drives, networking equipment) keeps the cloud running when parts fail. 
  • Inverter = Software Stack: The inverter transforms raw power into usable energy, like how a software stack (OS, hypervisor) turns hardware into a functional cloud.
  • Self-Maintenance = Internal Operations: Fixing a solar system solo parallels maintaining a cloud without vendor support – both need in-house expertise to troubleshoot and repair everything.

Let me repeat it: both need in-house expertise to troubleshoot and repair everything. Everything.

The goal is self-sufficiency and independence. So, what are companies doing?

An autarkic private cloud might stockpile Dell servers or Nvidia GPUs upfront, but that first purchase ties you to external vendors. True autarky would mean mining silicon and forging chips yourself – impractical, just like growing your own silicon crystals for panels.

The problem

In practice, autarky for private clouds is an extreme goal. It promises maximum control – ideal for scenarios like military secrecy, regulatory isolation, or distrust of global supply chains – but it clashes with the realities of modern IT:

  • Once the last spare dies, you are done. No new tech without breaking autarky.
  • Autarky trades resilience for stagnation. Your cloud stays alive but grows irrelevant.
  • Autarky’s price tag limits it to tiny, niche clouds – not hyperscale rivals.
  • Future workloads are a guessing game. Stockpile too few servers, and you can’t expand. Too many, and you have wasted millions. A 2027 AI boom or quantum shift could make your equipment useless.
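The stockpiling dilemma above can be made concrete with a back-of-the-envelope model. The fleet size, failure rate, and horizon below are hypothetical placeholders, not vendor figures – a minimal sketch of how many spare servers an isolated cloud would need on day one to survive a multi-year horizon without any resupply:

```python
import math

def spares_needed(fleet_size: int, annual_failure_rate: float,
                  years: int, confidence: float = 0.99) -> int:
    """Smallest spare count s with P(total failures <= s) >= confidence,
    modeling total failures over the horizon as Poisson-distributed."""
    lam = fleet_size * annual_failure_rate * years  # expected failure count
    cumulative, k = 0.0, 0
    while cumulative < confidence:
        # Poisson pmf computed in log space to avoid overflow for large k
        cumulative += math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
        k += 1
    return k - 1

# Hypothetical fleet: 1,000 servers, 3% annual failure rate, 5-year horizon.
print(spares_needed(1000, 0.03, 5))
```

Even this toy model shows the asymmetry: the expected failure count sets a floor, but covering the statistical tail forces you to buy well beyond it – capital that sits idle if the failures never come, and a hard expiry date on your cloud if they exceed it.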

But where is this idea of self-sufficiency or sovereign operations coming from? Nowadays? Geopolitical resilience.

The whole idea is that sanctions or trade wars will not starve your cloud: a private (hyperscale) cloud that answers to no one, free from external risks or influence.

What is the probability of such sanctions? Who knows… but it is a number that has to be defined for each case, depending on the location/country, internal and external customers, and requirements.

If it happens, is it foreseeable, and what does it force you to do? Does it trigger a cloud-exit scenario?
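One way to turn that "number that has to be defined" into a decision is a simple expected-loss comparison. All figures below are hypothetical assumptions, purely for illustration – the point is the shape of the calculation, not the values:

```python
def expected_loss(p_event_per_year: float, impact: float, years: int) -> float:
    """Expected loss over a planning horizon: probability the event occurs
    at least once, times the (hypothetical) business impact if it does."""
    p_at_least_once = 1 - (1 - p_event_per_year) ** years
    return p_at_least_once * impact

# Hypothetical: 1% annual sanction probability, $50M impact, 10-year horizon,
# weighed against a mitigation (e.g., a rehearsed cloud-exit plan) at $1M/year.
risk = expected_loss(0.01, 50_000_000, 10)
mitigation_cost = 10 * 1_000_000
print(f"expected loss ${risk:,.0f} vs mitigation ${mitigation_cost:,.0f}")
```

Depending on the (assumed) inputs, the mitigation can easily cost more than the risk it hedges – which is exactly why the probability has to be estimated per case rather than assumed to be high everywhere.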

I just know that if sanctions hit, any hyperscaler in your country has the same problems, no matter whether it is a public or a dedicated region. That is the blast radius. It is not only about you and your infrastructure anymore.

What about private disconnected hyperscale clouds?

When hosting workloads in the public clouds, organizations care more about data residency, regulations, and the US CLOUD Act than about autarky.

Hyperscale clouds like Microsoft Azure and Oracle Cloud Infrastructure (OCI) are built to deliver massive scale, flexibility, and performance, but they rely on complex ecosystems that make full autarky impossible. Oracle offers solutions like OCI Dedicated Region and Oracle Alloy to address sovereignty needs, giving customers more control over their data and operations. However, even these solutions fall short of true autarky and absolute sovereign operations due to practical, technical, and economic realities.

A short explanation from Microsoft gives us a hint why that is the case:

“Additionally, some operational sovereignty requirements, like Autarky (for example, being able to run independently of external networks and systems) are infeasible in hyperscale cloud-computing platforms like Azure, which rely on regular platform updates to keep systems in an optimal state.”

So, what are customers asking for when they are interested in hosting their own dedicated cloud region in their data centers? Disconnected hyperscale clouds.

But hosting an OCI Dedicated Region in your data center does not change the underlying architecture of Oracle Cloud Infrastructure (OCI). Nor does it change the upgrade or patching process, or the whole operating model.

Hyperscale clouds do not exist in a vacuum. They lean on a web of external and internal dependencies to work:

  • Hardware Suppliers. For example, most public clouds use Nvidia’s GPUs for AI workloads. Without these vendors, hyperscalers could not keep up with the demand.
  • Global Internet Infrastructure. Hyperscalers need massive bandwidth to connect users worldwide. They rely on telecom giants and undersea cables for internet backbone, plus partnerships with content delivery networks (CDNs) like Akamai to speed things up.
  • Software Ecosystems. Open-source tools like Linux and Kubernetes are part of the backbone of hyperscale operations.
  • Operations. Think about telemetry data and external health monitoring.

Innovation depends on ecosystems

The tech world moves fast. Open-source software and industry standards let hyperscalers innovate without reinventing the wheel. OCI’s adoption of Linux or Azure’s use of Kubernetes shows they thrive by tapping into shared knowledge, not isolating themselves. Going it alone would skyrocket costs: designing custom chips, giving away or sharing operational control, or skipping partnerships would drain billions – money better spent on new features, services, or lower prices.

Hyperscale clouds are global by nature, and this includes Oracle Dedicated Region and Alloy. In return, you get:

  • Innovation
  • Scalability
  • Cybersecurity
  • Agility
  • Reliability
  • Integration and Partnerships

Again, by nature and design, hyperscale clouds – even those hosted in your data center as private clouds (OCI Dedicated Region and Alloy) – are still tied to a hyperscaler’s software repositories, third-party hardware, operations personnel, and global infrastructure.

Sovereignty is real, autarky is a dream

Autarky sounds appealing: a hyperscale cloud that answers to no one, free from external risks or influence. Imagine OCI Dedicated Region or Oracle Alloy as self-contained kingdoms, untouchable by global chaos.

Autarky sacrifices expertise for control, and the result would be a weaker, slower, and probably less secure cloud. Self-sufficiency is not cheap: hyperscalers spend billions of dollars yearly on infrastructure, leaning on economies of scale and vendor deals. Tech moves at lightning speed – new GPUs drop yearly, and software patches roll out daily (think 1,000 updates/patches a month). Autarky means falling behind; it would turn your hyperscale cloud into a relic.

Please note, there are other solutions like air-gapped isolated cloud regions, but those are for a specific industry and set of customers.

Why OCI Dedicated Region and Oracle Cloud VMware Solution are a Winning Combination


In this article, we will explore what makes OCI Dedicated Region and Oracle Cloud VMware Solution (OCVS) a unique and powerful combination, covering their core features, how they address key IT challenges, and why CIOs should consider this pairing a strategic investment in a future-proof IT environment.

What is OCI Dedicated Region?

OCI Dedicated Region is Oracle’s fully managed public cloud region that is deployed directly in a customer’s data center. It provides all of Oracle’s public cloud services (including Oracle Autonomous Database and AI/ML capabilities) while meeting strict data residency, latency, and regulatory requirements. This allows organizations to enjoy the benefits of a public cloud while retaining physical control over data and infrastructure:

  • Data Residency and Compliance: By deploying cloud services in a customer’s data center, OCI Dedicated Region ensures data remains within the organization’s control, meeting data residency and compliance requirements critical in industries like finance, healthcare, and government.
  • Operational Consistency: Organizations get access to the same tools, APIs, and SLAs as Oracle’s public cloud, which ensures a consistent operational experience across on-premises and cloud environments.
  • Scalability and Flexibility: OCI Dedicated Region provides elastic scaling for workloads without the need for substantial capital expenditure on hardware. 
  • Cost-Effective: By consolidating on-premises and cloud infrastructure, OCI Dedicated Region reduces operational complexity and costs associated with data center management, disaster recovery, and infrastructure procurement.

What is Oracle Cloud VMware Solution?

For many enterprises, VMware is a cornerstone of their infrastructure, powering mission-critical applications and handling sensitive workloads. Migrating these workloads to the cloud has the potential to unlock new efficiencies, but it also brings challenges related to compatibility, risk, and cost.

Oracle Cloud VMware Solution (OCVS) is an answer to these challenges, enabling organizations to extend or migrate VMware environments to Oracle Cloud Infrastructure (OCI) without re-architecting applications:

  • Minimal Disruption: Since OCVS is a VMware-certified solution, applications continue running as they did on-premises, ensuring continuity.
  • Reduced Risk: By leveraging familiar VMware tools and processes, the learning curve is minimized, reducing operational risk.
  • Lower Migration Costs: Avoiding re-architecting means lower costs and faster time-to-value.
  • Enhanced Security: OCVS inherits OCI’s strong security posture, ensuring that data is safeguarded at every layer, from infrastructure to application.
  • Reduced Hardware Spending: Since OCVS runs on OCI, there’s no need to invest in new data center hardware.
  • Disaster Recovery: Enterprises can establish OCI as a disaster recovery site, reducing capital spending on duplicate infrastructure.

The Synergy Between OCI Dedicated Region and OCVS

Using OCVS as part of an OCI Dedicated Region brings a unique set of advantages to private clouds. Together, they provide a solution that addresses the pressing demands for data sovereignty, cloud flexibility, and seamless application modernization.

OCI Dedicated Region and OCVS enable operational consistency across cloud and on-premises environments. Teams familiar with Oracle’s public cloud or VMware’s suite of tools can manage both environments with ease. This consistency allows CIOs to retain talent by providing a familiar technology landscape and reduces the need for retraining, thereby improving productivity.

Additionally, this combination allows the creation of a hybrid cloud architecture that seamlessly integrates on-premises infrastructure with cloud resources. OCI Dedicated Region provides a cloud environment within the customer’s data center, while OCVS allows existing VMware workloads to shift to this region without disruptions.

Conclusion

OCI Dedicated Region and Oracle Cloud VMware Solution together offer a powerful, flexible, and compliant infrastructure that empowers CIOs to meet the complex demands of modern enterprise IT. By combining the control of on-premises infrastructure with the agility and flexibility of the cloud, this combined solution helps organizations achieve operational excellence, reduce risk, and accelerate digital transformation.

For decision-makers looking to strike a balance between legacy infrastructure and future-oriented cloud solutions, OCI Dedicated Region and OCVS represent a strategic investment that brings immediate and long-term value to the enterprise. This combination is not just about technology – it is about enabling business growth, operational resilience, and competitive advantage in a digital-first world.