Becoming an Oracle Cloud Infrastructure Certified DevOps Professional Part 2 – DevSecOps with OCI

DevSecOps is the backbone of modern software delivery. Whether you are a fast-moving digital startup or a large enterprise modernizing legacy systems, having an automated, secure CI/CD pipeline is what separates high-performing teams from those always stuck firefighting. Most teams stitch together DevSecOps pipelines using a mix of open-source tools, third-party platforms, and scripts.

OCI gives you a clean, enterprise-grade stack for DevSecOps that is ready out of the box: source control, automated builds and deployments, secret management, container orchestration, and real-time monitoring, all tightly integrated, deeply secure, and easy to use.

So, the second part of this blog series is about OCI’s developer services. If you missed the first part, it covers Oracle Kubernetes Engine (OKE).

OCI Developer Services

Why Enterprises and Digital Natives Should Look at OCI

Let’s break it down:

  • Enterprises get the compliance, SLAs, and governance they need with a cloud-native platform that integrates with existing Oracle workloads and mission-critical systems.

  • Digital natives and startups get a modern, developer-first experience without juggling 15 different tools. OCI’s pay-as-you-go model and generous free tier also help teams stay lean while scaling.

And it is built for hybrid and multicloud from the start. OCI works whether you are running greenfield Kubernetes apps or still managing monoliths.

How to Build a Complete DevSecOps Pipeline on OCI

As part of the journey of becoming a certified OCI DevOps Professional, you need to understand how you can build a complete and secure pipeline using Oracle Cloud Infrastructure’s native services. Think of this as your blueprint for DevSecOps: secure, scalable and automated from code to production. The following diagram illustrates this reference architecture:

DevSecOps deployment pipeline in OCI.

Plan, Collaborate & Set Up Infrastructure

OCI DevOps Code Repositories

Private Git repositories hosted by the DevOps service. With OCI DevOps Code Repositories you can store, manage, and develop source code, create your own private repositories, or connect to external ones such as GitHub, GitLab, Bitbucket Cloud, Visual Builder Studio, Bitbucket Server, and GitLab Server. They are perfect for managing application code, Terraform configurations, and CI/CD definitions in one place.

OCI Resource Manager (Terraform as a Service)

Automate infrastructure provisioning and lifecycle using Oracle’s managed Terraform service:

  • Write declarative infra-as-code
  • Apply it across multiple compartments with consistent governance
  • Integrate with Vault, IAM, and tagging for full automation

This lets you define environments (dev/stage/prod) as code and roll them out safely and repeatably.

The following image represents a generalized view of the Resource Manager workflow:

This image shows the workflow for provisioning infrastructure using Resource Manager.
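Because Resource Manager runs standard Terraform, and Terraform also accepts a JSON variant of its configuration syntax (*.tf.json), a minimal stack can even be sketched as plain data. The compartment variable, VCN resource, and CIDR below are illustrative placeholders, not a recommended layout:

```python
import json

# Illustrative only: a minimal Terraform configuration in JSON syntax,
# defining one variable and one VCN resource. Values are placeholders.
stack = {
    "variable": {"compartment_ocid": {"type": "string"}},
    "resource": {
        "oci_core_vcn": {
            "dev_vcn": {
                "compartment_id": "${var.compartment_ocid}",
                "cidr_block": "10.0.0.0/16",
                "display_name": "dev-vcn",
            }
        }
    },
}

# Written to a *.tf.json file, this could be uploaded as a Resource Manager stack.
print(json.dumps(stack, indent=2))
```

The same definition, applied per compartment with different variable values, is how dev/stage/prod environments stay consistent.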

OCI Vault

Every DevSecOps pipeline needs a central place for secrets and encryption keys. Vaults are logical entities where the Key Management Service creates and durably stores vault keys and secrets.

  • Store passwords, API tokens, certs, and encryption keys securely
  • Integrated with KMS (Key Management Service) for encryption at rest and in transit
    • Integrates encryption with other OCI services such as storage, database, and Fusion Applications for protecting data stored in these services
  • Automate access via IAM policies and code
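The shift-left idea behind Vault can be sketched in a few lines: secrets are resolved at runtime instead of being hardcoded. In this illustrative Python sketch an environment variable stands in for the Vault lookup, and the secret name and value are made up:

```python
import base64
import os

def get_secret(name: str) -> str:
    """Resolve a secret at runtime instead of hardcoding it.

    In a real pipeline this would be a Vault lookup (secret bundles are
    returned base64-encoded); here an environment variable stands in for it.
    """
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} not provisioned")
    return value

# Simulate the pipeline injecting a secret, as a build or deploy stage would.
os.environ["DB_PASSWORD"] = base64.b64encode(b"s3cret").decode()

# The application decodes the injected value; the secret never lives in code.
password = base64.b64decode(get_secret("DB_PASSWORD")).decode()
```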

Develop, Build, and Test Code

OCI DevOps Build Pipelines

A build pipeline takes a commit ID from your source code repository and uses that source code to run your build instructions. Build pipelines define a set of stages for the build process: building, testing, and compiling software artifacts, delivering artifacts to OCI repositories, and optionally triggering a deployment. You define the flow and instructions of your build run in the build spec file, and you can define build pipelines using YAML or the console:

  • Automate Java, Python, Node.js, Docker, and Go builds
  • Customize steps for unit tests, code quality scans (e.g., SonarQube)
  • Connect directly to OCI repos or GitHub
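The build spec itself is a YAML file; its general shape, a list of steps whose commands run in order, can be mirrored as plain data. The following Python sketch is illustrative only (the step names and commands are made up, and this is not the official build spec schema):

```python
import subprocess

# Illustrative mirror of a build spec: versioned metadata plus ordered
# command steps. In OCI the real file is YAML and is executed by the
# build runner, not by your own code.
build_spec = {
    "version": "0.1",
    "component": "build",
    "steps": [
        {"type": "Command", "name": "Run unit tests", "command": "echo tests passed"},
        {"type": "Command", "name": "Package app", "command": "echo packaged"},
    ],
}

results = []
for step in build_spec["steps"]:
    # Each step runs in a shell, one after the other, like a serial build run.
    proc = subprocess.run(step["command"], shell=True, capture_output=True, text=True)
    results.append((step["name"], proc.returncode, proc.stdout.strip()))

for name, code, out in results:
    print(f"{name}: exit={code} output={out!r}")
```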

The Application Dependency Management (ADM) service provides you with an integrated vulnerability knowledge base that you can use from the Oracle Cloud Infrastructure (OCI) DevOps build pipelines to detect vulnerabilities in the packages used for the build.

OCI Cloud Shell

A browser-based Linux shell with common tools preinstalled:

  • OCI CLI
  • Git, Docker, Terraform, kubectl, Helm, and more

Ideal for quick testing, debugging, or managing your pipeline without installing tools locally.

OCI Application Performance Monitoring (APM)

Do not wait until production to spot performance issues:

  • Distributed tracing across microservices
  • Real User Monitoring (RUM)
  • Availability Monitoring
  • Server Monitoring

Shift-Left Security from the Start

OCI Vault (again, because security is never just one step)

Use Vault throughout your pipeline to securely inject secrets into build/deploy steps.

OCI Cloud Guard

Cloud Guard examines your Oracle Cloud Infrastructure resources for security weaknesses related to configuration, and your operators and users for risky activities. Upon detection, Cloud Guard can suggest, assist, or take corrective actions, based on your configuration.

  • Monitors for risky configurations (open ports, unused keys, misconfigured buckets)
  • Uses rules and detectors to flag and respond to threats
  • Integrates with other OCI services for automated remediation

Perfect for enforcing security baselines as part of your CI/CD process.
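The detector/responder idea can be illustrated with a toy rule: flag any security rule that leaves SSH open to the internet. This is a conceptual sketch, not an actual Cloud Guard recipe or API:

```python
# Conceptual sketch of Cloud Guard's model: a detector flags risky
# configuration and a responder suggests or takes a corrective action.
# The rule data and recommendation text below are illustrative.
def detect_open_ssh(security_rules):
    findings = []
    for rule in security_rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] == 22:
            findings.append({
                "problem": "SSH open to the internet",
                "rule": rule,
                "recommendation": "Restrict the source CIDR or use a bastion",
            })
    return findings

rules = [
    {"source": "10.0.0.0/16", "port": 22},  # internal only: fine
    {"source": "0.0.0.0/0", "port": 22},    # risky: flagged
    {"source": "0.0.0.0/0", "port": 443},   # public HTTPS: not this detector's concern
]

findings = detect_open_ssh(rules)
for finding in findings:
    print(finding["problem"], finding["rule"])
```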

OCI Security Zones

Apply guardrails with security policies baked into the compartments:

  • Blocks risky actions (e.g., public DBs)
  • Ensures workloads meet compliance and governance standards automatically

In the reference architecture, resources in a region are organized into two compartments. One of the compartments is associated with a security zone, a security zone recipe, and a security zone target in Cloud Guard.

Security Zones let you be confident that your resources in Oracle Cloud Infrastructure, including Compute, Networking, Object Storage, Block Volume and Database resources, comply with your security policies.

Deploy Automatically (and Confidently)

OCI DevOps Deployment Pipelines

A deployment pipeline is a sequence of steps for delivering and deploying a set of artifacts to a target environment. The flow and logic of your software release can be controlled by defining stages that run in serial or parallel. This is the delivery side of CI/CD:

  • Create multi-stage pipelines with approval gates, rollbacks, and parallel deployments
  • Deploy to OKE, Functions, Compute, or custom targets
  • Track deployment history and success/failure per environment

Works seamlessly with build pipelines for full Git-to-production automation.
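The serial-versus-parallel stage model can be sketched as follows; the stage names and the two-target fan-out are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_stage(name: str) -> str:
    # A real stage would deliver artifacts to a target environment;
    # here we just report completion.
    return f"{name}: succeeded"

log = []

# Serial stages: each must finish before the next starts.
for stage in ["Approval gate", "Deploy to staging"]:
    log.append(run_stage(stage))

# Parallel stages: fan out across targets, then join before continuing.
with ThreadPoolExecutor() as pool:
    log.extend(pool.map(run_stage, ["Deploy to OKE", "Deploy to Functions"]))

print("\n".join(log))
```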

OCI Functions

Event-driven, serverless compute built on Fn Project:

  • Write functions in Java, Python, Node.js, Go
  • Scale automatically based on events or triggers
  • Deploy from build artifacts or container images

Great for microservices, APIs, scheduled jobs, or glue logic in your pipeline.
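On OCI, Python functions are normally wrapped by the Fn FDK; the plain-Python stand-in below just shows the request-in/response-out shape of a function (the greeting logic is made up):

```python
import json

# Stand-in for an Fn-style function: bytes in, bytes out. The real OCI
# Functions Python runtime wraps a handler with the Fn FDK; this sketch
# only illustrates the shape of the contract.
def handler(body: bytes) -> bytes:
    payload = json.loads(body or b"{}")
    name = payload.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"}).encode()

response = handler(b'{"name": "OCI"}')
print(response.decode())
```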

Oracle Kubernetes Engine (OKE)

The reference architecture deploys the OKE cluster as one of the target environments. The worker nodes run on Oracle Linux. This architecture uses three worker nodes in the cluster, but you can create up to 5,000 nodes per cluster. Managed Kubernetes, Oracle-style:

  • CNCF-compliant, fully managed clusters
  • Integrated with IAM, Container Registry, Load Balancers, and Logging
  • Auto-scaling, node pools, and lifecycle management

Perfect for teams building containerized applications or adopting GitOps practices.

OCI Container Registry

This architecture deploys Container Registry as a private Docker registry for internal use. Docker images are pushed to and pulled from the registry. You can also use Container Registry as a public Docker registry, enabling any user with internet access and knowledge of the appropriate URL to pull images from public repositories in OCI.

  • Push/pull images securely
  • Scan images with third-party security tools
  • Deploy directly into OKE or Functions

Acts as the bridge between build and deploy stages in your pipeline.

Observability – Monitor, Operate, and Optimize

OCI Logging

The OCI Logging service stores logs related to the deployment; the deployment runtime output and the final results of the deployment are shown as log entries. The OCI Notifications service provides visibility into the latest state of the deployment project and its resources so you can take any necessary action. For example, you are notified when an important event occurs, such as a stage in a deployment pipeline waiting for approval. When you receive the notification message, you can go to the DevOps deployment pipeline and approve the stage. Centralized logging across all OCI services and custom apps:

  • Collect, search, and filter logs in real time
  • Create custom queries and alerts
  • Export logs to Object Storage or third-party SIEMs

Feeds directly into security tools and helps debug issues post-deployment.
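A log search boils down to filtering structured entries by fields and a time window; the sketch below illustrates the idea with made-up field names, not the actual Logging schema or query language:

```python
from datetime import datetime, timedelta, timezone

# Conceptual version of a log search: filter structured entries by level
# and recency. Entry fields and messages are illustrative.
now = datetime.now(timezone.utc)
entries = [
    {"time": now - timedelta(minutes=2), "level": "ERROR", "message": "deploy stage failed"},
    {"time": now - timedelta(hours=3), "level": "ERROR", "message": "old failure"},
    {"time": now - timedelta(minutes=1), "level": "INFO", "message": "deploy succeeded"},
]

def search(entries, level, within):
    cutoff = datetime.now(timezone.utc) - within
    return [e for e in entries if e["level"] == level and e["time"] >= cutoff]

recent_errors = search(entries, "ERROR", timedelta(hours=1))
for e in recent_errors:
    print(e["message"])
```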

OCI Monitoring

Metric collection and alerting at every level:

  • Out-of-the-box metrics for compute, load balancers, databases, Kubernetes, and more
  • Custom metrics via SDKs
  • Alarms with notifications (e-mail, Slack, etc.)

This image shows metrics and alarms as used in the Monitoring service.
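A custom metric is essentially a named, timestamped datapoint with dimensions. The sketch below builds such a payload as plain data; the field names are modeled on the Monitoring API, and the namespace, dimensions, and threshold are illustrative:

```python
from datetime import datetime, timezone

# Sketch of a custom metric datapoint as you might post it via an SDK.
# Namespace, metric name, dimensions, and values are placeholders.
metric = {
    "namespace": "custom_pipeline",
    "name": "deploy_duration_seconds",
    "dimensions": {"environment": "staging", "pipeline": "web-app"},
    "datapoints": [
        {"timestamp": datetime.now(timezone.utc).isoformat(), "value": 42.0}
    ],
}

# An alarm would then evaluate a query against such datapoints,
# e.g. "fire when the value exceeds a threshold".
threshold = 300.0
breached = any(dp["value"] > threshold for dp in metric["datapoints"])
print(f"alarm breached: {breached}")
```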

Events

Oracle Cloud Infrastructure Events enables you to create automation based on the state changes of resources throughout your tenancy. Use Events to allow your development teams to automatically respond when a resource changes its state.

Here are some examples of how you might use Events:

  • Send a notification to a DevOps team when a database backup completes.
  • Convert files of one format to another when files are uploaded to an Object Storage bucket.
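Event-driven automation boils down to matching an event's type and invoking an action. OCI events follow the CloudEvents format; in the sketch below, the event type string is modeled on OCI's naming and the handler logic is illustrative:

```python
# Sketch of an Events rule: match on event type, then invoke an action
# (in OCI the action could be a Function, a Notifications topic, or Streaming).
actions_taken = []

def on_object_created(event):
    # Illustrative action: queue the uploaded file for format conversion.
    actions_taken.append(f"convert {event['data']['resourceName']}")

rules = [
    {"eventType": "com.oraclecloud.objectstorage.createobject", "action": on_object_created},
]

def dispatch(event):
    for rule in rules:
        if event["eventType"] == rule["eventType"]:
            rule["action"](event)

dispatch({
    "eventType": "com.oraclecloud.objectstorage.createobject",
    "data": {"resourceName": "report.csv"},
})
print(actions_taken)
```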

Final Thoughts

Oracle Cloud Infrastructure might not always be the flashiest name in DevOps circles, but when it comes to building a secure, scalable, all-in-one DevSecOps pipeline, it delivers.

Whether you are modernizing a legacy stack or building cloud-native microservices, OCI gives you the tools to:

  • Automate everything

  • Bake in security and governance

  • Monitor, understand, and optimize

You just need the right foundation, and OCI makes it possible.

Becoming an Oracle Cloud Infrastructure Certified DevOps Professional Part 1 – Introduction to Oracle Kubernetes Engine

I have just started diving into the OCI DevOps Professional certification course, so why not share some lessons and important information I gathered from the official Oracle course as part of my preparation? My goal? To pass the exam in the next few weeks. In this first part, I’m covering the core concept of Oracle Kubernetes Engine (OKE). Please note that I am also copy-pasting parts of the official documentation.

What is Oracle Kubernetes Engine?

Oracle Kubernetes Engine (OKE) is Oracle Cloud Infrastructure’s managed Kubernetes service. It is designed to let you deploy, manage, and scale containerized applications using Kubernetes, but without the heavy lifting of setting up and maintaining the control plane yourself.

OKE is:

  • Certified by the CNCF (Cloud Native Computing Foundation)

  • Fully integrated with OCI services like networking, load balancing, and IAM

  • Designed for production workloads, with a choice between traditional VM-based clusters or serverless options.

OKE Integration with other OCI Services

You get the flexibility and power of Kubernetes, but Oracle handles the control plane: updates, availability, and scaling.

Kubernetes Clusters

OKE supports two cluster types: Basic and Enhanced.

  • Enhanced cluster: Enhanced clusters support all available features, including features not supported by basic clusters (such as virtual nodes, cluster add-on management, workload identity, and additional worker nodes per cluster). Enhanced clusters come with a financially-backed service level agreement (SLA).
    • Cluster add-ons: In an enhanced cluster, you can use Kubernetes Engine to manage both essential add-ons and a growing portfolio of optional add-ons. You can enable or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, and manage add-on specific customizations.
  • Basic cluster: Basic clusters support all the core functionality provided by Kubernetes and Kubernetes Engine, but none of the enhanced features that Kubernetes Engine provides. Basic clusters come with a service level objective (SLO), but not a financially-backed service level agreement (SLA).
    • Cluster add-ons: In a basic cluster, you have more responsibility and less flexibility when managing cluster add-ons. You are responsible for upgrading essential add-ons, but you cannot install or disable specific add-ons, select add-on versions, opt into and out of automatic updates by Oracle, or manage add-on specific customizations. In addition, you are responsible for installing, managing, and maintaining any optional add-ons you want in the cluster.

If you are aiming to build scalable, secure, and production-ready apps, enhanced clusters are the way to go.

Note: A new cluster using the console is created as an enhanced cluster by default. If you are using the CLI or API to create a cluster, a new cluster is created as a basic cluster by default.

Kubernetes Cluster Control Plane

The Kubernetes cluster control plane implements core Kubernetes functionality. It runs on compute instances (known as ‘control plane nodes’) in the Kubernetes Engine service tenancy. The cluster control plane is fully managed by Oracle.

The cluster control plane runs a number of processes, including:

  • kube-apiserver to support Kubernetes API operations requested from the Kubernetes command line tool (kubectl) and other command line tools, as well as from direct REST calls. The kube-apiserver includes admissions controllers required for advanced Kubernetes operations.
  • kube-controller-manager to manage different Kubernetes components (for example, replication controller, endpoints controller, namespace controller, and serviceaccounts controller)
  • kube-scheduler to control where in the cluster to run jobs
  • etcd to store the cluster’s configuration data
  • cloud-controller-manager to update and delete worker nodes (using the node controller), to create load balancers when Kubernetes services of type: LoadBalancer are created (using the service controller), and to set up network routes (using the route controller). The oci-cloud-controller-manager also implements a container-storage-interface, a flexvolume driver, and a flexvolume provisioner (for more information, see the OCI Cloud Controller Manager (CCM) documentation on GitHub).

Kubernetes Data Plane and Worker Nodes

Worker nodes are where you run the applications that you deploy in a cluster.

Each worker node runs a number of processes, including:

  • kubelet to communicate with the cluster control plane
  • kube-proxy to maintain networking rules

The cluster control plane processes monitor and record the state of the worker nodes and distribute requested operations between them.

OKE Kubernetes Worker Nodes

A node pool is a subset of worker nodes within a cluster that all have the same configuration. Node pools enable you to create pools of machines within a cluster that have different configurations. For example, you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have a minimum of one node pool, but a node pool need not contain any worker nodes.

Worker nodes in a node pool are connected to a worker node subnet in your VCN.

Supported Images and Shapes for Worker Nodes

When creating a node pool with Kubernetes Engine, you specify that the worker nodes in the node pool are to be created as one or the other of the following:

  • Virtual nodes, fully managed by Oracle. Virtual nodes provide a ‘serverless’ Kubernetes experience, enabling you to run containerized applications at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters. You can only create virtual nodes in enhanced clusters.
  • Managed nodes, running on compute instances (either bare metal or virtual machine) in your tenancy, and at least partly managed by you. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity. You can create managed nodes in both basic clusters and enhanced clusters.

OKE Managed Nodes and Virtual Nodes

Note: You can choose to upgrade the basic cluster to an enhanced cluster later, but you cannot downgrade an enhanced cluster to a basic cluster.

Supported Images for Managed Nodes

OKE supports the provisioning of worker nodes (managed nodes only) using some, but not all, of the latest Oracle Linux images provided by Oracle Cloud Infrastructure.

Platform Images:

  • Provided by Oracle and only contain an Oracle Linux operating system
  • The managed nodes’ initial boot triggers a software download and setup by OKE

OKE Images:

  • Built on platform images
  • OKE images are optimized for use as managed node base images, with all the necessary configurations and required software
  • For faster managed node provisioning during cluster creation and updates

Custom images:

  • Can be built on supported platform images and OKE images
  • Custom images contain Oracle Linux OSes with customizations, configurations and software that were present when you created the image.

Shapes for Managed Nodes and Virtual Nodes

OKE supports the provisioning of worker nodes (both managed nodes and virtual nodes) using many, but not all, of the shapes provided by Oracle Cloud Infrastructure. More specifically:

  • Managed Nodes
    • Supported for managed nodes:
      • Flexible shapes (for example, VM.Standard.E3.Flex), except when used to create burstable instances
      • Bare Metal shapes, including standard shapes and GPU shapes
      • HPC shapes, except in RDMA networks
      • VM shapes, including standard shapes and GPU shapes
      • Dense I/O shapes
      • For the list of supported GPU shapes, see GPU shapes supported by Kubernetes Engine (OKE).
    • Not Supported:
      • Dedicated VM host shapes
      • Micro VM shapes
      • HPC shapes on Bare Metal instances in RDMA networks
      • Flexible shapes used to create burstable instances
  • Virtual Nodes
    • Supported for virtual nodes:
      • Pod.Standard.A1.Flex, Pod.Standard.E3.Flex, Pod.Standard.E4.Flex.
    • Not Supported: All other shapes.
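The virtual-node constraint above can be captured as a simple check; the allowed list is copied from the shapes listed here, and the validation helper itself is illustrative:

```python
# Shapes supported for virtual nodes, per the list above; everything else
# (including all managed-node shapes) is rejected for virtual node pools.
VIRTUAL_NODE_SHAPES = {
    "Pod.Standard.A1.Flex",
    "Pod.Standard.E3.Flex",
    "Pod.Standard.E4.Flex",
}

def is_valid_virtual_node_shape(shape: str) -> bool:
    return shape in VIRTUAL_NODE_SHAPES

print(is_valid_virtual_node_shape("Pod.Standard.E4.Flex"))  # supported
print(is_valid_virtual_node_shape("VM.Standard.E3.Flex"))   # managed nodes only
```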

Self-Managed Nodes

A self-managed node is a worker node hosted on a compute instance (or instance pool) that you have created yourself in Compute service, rather than on a compute instance that Kubernetes Engine has created for you. Self-managed nodes are often referred to as Bring Your Own Nodes (BYON). Unlike managed nodes and virtual nodes (which are grouped into managed node pools and virtual node pools respectively), self-managed nodes are not grouped into node pools.

Using the Compute service enables you to configure compute instances for specialized workloads, including compute shape and image combinations that are not available for managed nodes and virtual nodes.

Note: You can only add self-managed nodes to enhanced clusters.

Supported Images and Shapes for Self-Managed Nodes

Kubernetes Engine supports the provisioning of self-managed nodes using some, but not all, of the Oracle Linux images and shapes provided by Oracle Cloud Infrastructure. More specifically:

  • Images supported for self-managed nodes: The image you select for the compute instance hosting a self-managed node must be one of the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) images, and the image must have a Release Date of March 28, 2023 or later. See Image Requirements.
  • Shapes supported for self-managed nodes: The shape you can select for the compute instance hosting a self-managed node is determined by the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) image you select for the compute instance.

Prerequisites to create an OKE Cluster

Before you can use Kubernetes Engine to create a Kubernetes cluster, you must meet several prerequisites. The list can be found here: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengprerequisites.htm


Oracle Cloud Infrastructure 2025 Architect Associate Study Guide

Are you preparing for the Oracle Cloud Infrastructure (OCI) 2025 Architect Associate Exam? Me too. 🙂

Whether you are just starting your cloud journey or leveling up your OCI skills, the 2025 Architect Associate exam is designed to test your understanding of core OCI services across compute, networking, storage, IAM, and more. It is about knowing how to build and manage scalable, secure, high-performing infrastructure on Oracle Cloud Infrastructure.

In this guide, I have broken down everything you need to know and mapped it directly to Oracle’s official documentation.

The following table lists the exam objectives and their weightings.

Objective                          % of Exam
Compute                            20%
Networking                         35%
Storage                            25%
Identity and Access Management     20%

Reminder – Oracle courses are free: https://mylearn.oracle.com/ou/learning-path/become-an-oci-architect-associate-2025/147631 

Last year, when I was studying for the 2024 version of the exam without any prior knowledge of OCI, I only used the online course and the official documentation to pass the exam.

Good luck! 🙂



1. Compute


2. Networking


3. VCN Connectivity


4. DNS and Traffic Management


5. Load Balancing


6. Network Command Center Services


7. Storage

Block Storage

Object Storage

File Storage


8. Identity and Access Management (IAM)


Private Cloud Autarky – You Are Safe Until The World Moves On

I believe it was 2023 when the term “autarky” was mentioned during my conversations with several customers, who maintained their own data centers and private clouds. Interestingly, this word popped up again recently at work, but I only knew it from photovoltaic systems. And it kept my mind busy for several weeks.

What is autarky?

To understand autarky in the IT world and its implications for private clouds, an analogy from the photovoltaic (solar power) system world offers a clear parallel. Just as autarky in IT means a private cloud that is fully self-sufficient, autarky in photovoltaics refers to an “off-grid” solar setup that powers a home or facility without relying on the external electrical grid or outside suppliers.

Imagine a homeowner aiming for total energy independence – an autarkic photovoltaic system. Here is what it looks like:

  • Solar Panels: The homeowner installs panels to capture sunlight and generate electricity.
  • Battery: Excess power is stored in batteries (e.g., lithium-ion) for use at night or on cloudy days.
  • Inverter: A device converts solar DC power to usable AC power for appliances.
  • Self-Maintenance: The homeowner repairs panels, replaces batteries, and manages the system without calling a utility company or buying parts. 

This setup cuts ties with the power grid – no monthly bills, no reliance on power plants. It is a self-contained energy ecosystem, much like an autarkic private cloud aims to be a self-contained digital ecosystem.

Question: Which partner (installation company) has enough spare parts and how many homeowners can repair the whole system by themselves?

Let’s align this with autarky in IT:

  • Solar Panels = Servers and Hardware: Just as panels generate power, servers (compute, storage, networking) generate the cloud’s processing capability. Theoretically, an autarkic private cloud requires the organization to build its own servers, similar to crafting custom solar panels instead of buying from any vendor.
  • Battery = Spares and Redundancy: Batteries store energy for later; spare hardware (e.g., extra servers, drives, networking equipment) keeps the cloud running when parts fail. 
  • Inverter = Software Stack: The inverter transforms raw power into usable energy, like how a software stack (OS, hypervisor) turns hardware into a functional cloud.
  • Self-Maintenance = Internal Operations: Fixing a solar system solo parallels maintaining a cloud without vendor support – both need in-house expertise to troubleshoot and repair everything.

Let me repeat it: both need in-house expertise to troubleshoot and repair everything. Everything.

The goal is self-sufficiency and independence. So, what are companies doing?

An autarkic private cloud might stockpile Dell servers or Nvidia GPUs upfront, but that first purchase ties you to external vendors. True autarky would mean mining silicon and forging chips yourself – impractical, just like growing your own silicon crystals for panels.

The problem

In practice, autarky for private clouds sounds like an extreme goal. It promises maximum control. Ideal for scenarios like military secrecy, regulatory isolation, or distrust of global supply chains but clashes with the realities of modern IT:

  • Once the last spare dies, you are done. No new tech without breaking autarky.
  • Autarky trades resilience for stagnation. Your cloud stays alive but grows irrelevant.
  • Autarky’s price tag limits it to tiny, niche clouds – not hyperscale rivals.
  • Future workloads are a guessing game. Stockpile too few servers, and you can’t expand. Too many, and you have wasted millions. A 2027 AI boom or quantum shift could make your equipment useless.

But where is this idea of self-sufficiency or sovereign operations coming from? Nowadays? Geopolitical resilience.

Sanctions or trade wars will not starve your cloud. A private (hyperscale) cloud that answers to no one, free from external risks or influence. That is the whole idea.

What is the probability of such sanctions? Who knows… but this is a number that has to be defined for each case depending on the location/country, internal and external customers, and requirements.

If it happens, is it foreseeable, and what does it force you to do? Does it trigger a cloud-exit scenario?

I just know that if there are sanctions, any hyperscaler in your country has the same problems. No matter if it is a public or dedicated region. That is the blast radius. It is not only about you and your infrastructure anymore.

What about private disconnected hyperscale clouds?

When hosting workloads in the public clouds, organizations care more about data residency, regulations, the US Cloud Act, and less about autarky.

Hyperscale clouds like Microsoft Azure and Oracle Cloud Infrastructure (OCI) are built to deliver massive scale, flexibility, and performance but they rely on complex ecosystems that make full autarky impossible. Oracle offers solutions like OCI Dedicated Region and Oracle Alloy to address sovereignty needs, giving customers more control over their data and operations. However, even these solutions fall short of true autarky and absolute sovereign operations due to practical, technical, and economic realities.

A short explanation from Microsoft gives us a hint why that is the case:

Additionally, some operational sovereignty requirements, like Autarky (for example, being able to run independently of external networks and systems) are infeasible in hyperscale cloud-computing platforms like Azure, which rely on regular platform updates to keep systems in an optimal state.

So, what are customers asking for when they are interested in hosting their own dedicated cloud region in their data centers? Disconnected hyperscale clouds.

But hosting an OCI Dedicated Region in your data center does not change the underlying architecture of Oracle Cloud Infrastructure (OCI). Nor does it change the upgrade or patching process, or the whole operating model.

Hyperscale clouds do not exist in a vacuum. They lean on a web of external and internal dependencies to work:

  • Hardware Suppliers. For example, most public clouds use Nvidia’s GPUs for AI workloads. Without these vendors, hyperscalers could not keep up with the demand.
  • Global Internet Infrastructure. Hyperscalers need massive bandwidth to connect users worldwide. They rely on telecom giants and undersea cables for internet backbone, plus partnerships with content delivery networks (CDNs) like Akamai to speed things up.
  • Software Ecosystems. Open-source tools like Linux and Kubernetes are part of the backbone of hyperscale operations.
  • Operations. Think about telemetry data and external health monitoring.

Innovation depends on ecosystems

The tech world moves fast. Open-source software and industry standards let hyperscalers innovate without reinventing the wheel. OCI’s adoption of Linux or Azure’s use of Kubernetes shows they thrive by tapping into shared knowledge, not isolating themselves. Going it alone would skyrocket costs. Designing custom chips, giving away or sharing operational control or skipping partnerships would drain billions – money better spent on new features, services or lower prices.

Hyperscale clouds are global by nature, this includes Oracle Dedicated Region and Alloy. In return you get:

  • Innovation
  • Scalability
  • Cybersecurity
  • Agility
  • Reliability
  • Integration and Partnerships

Again, by nature and design, hyperscale clouds – even those hosted in your data center as private Clouds (OCI Dedicated Region and Alloy) – are still tied to a hyperscaler’s software repositories, third-party hardware, operations personnel, and global infrastructure.

Sovereignty is real, autarky is a dream

Autarky sounds appealing: a hyperscale cloud that answers to no one, free from external risks or influence. Imagine OCI Dedicated Region or Oracle Alloy as self-contained kingdoms, untouchable by global chaos.

Autarky sacrifices expertise for control, and the result would be a weaker, slower, and probably less secure cloud. Self-sufficiency is not cheap. Hyperscalers spend billions of dollars yearly on infrastructure, leaning on economies of scale and vendor deals. Tech moves at lightning speed. New GPUs drop yearly, and software patches roll out daily (think about 1,000 updates/patches a month). Autarky means falling behind. It would turn your hyperscale cloud into a relic.

Please note, there are other solutions like air-gapped isolated cloud regions, but those are for a specific industry and set of customers.

Why OCI Dedicated Region and Oracle Cloud VMware Solution are a Winning Combination


In this article, we will explore what makes OCI Dedicated Region and Oracle Cloud VMware Solution (OCVS) a unique and powerful combination, cover their core features, how they address key IT challenges, and why CIOs should consider this pairing as a strategic investment for a future-proof IT environment.

What is OCI Dedicated Region?

OCI Dedicated Region is Oracle’s fully managed public cloud region that is deployed directly in a customer’s data center. It provides all of Oracle’s public cloud services (including Oracle Autonomous Database and AI/ML capabilities) while meeting strict data residency, latency, and regulatory requirements. This allows organizations to enjoy the benefits of a public cloud while retaining physical control over data and infrastructure:

  • Data Residency and Compliance: By deploying cloud services in a customer’s data center, OCI Dedicated Region ensures data remains within the organization’s control, meeting data residency and compliance requirements critical in industries like finance, healthcare, and government.
  • Operational Consistency: Organizations get access to the same tools, APIs, and SLAs as Oracle’s public cloud, which ensures a consistent operational experience across on-premises and cloud environments.
  • Scalability and Flexibility: OCI Dedicated Region provides elastic scaling for workloads without the need for substantial capital expenditure on hardware. 
  • Cost-Effective: By consolidating on-premises and cloud infrastructure, OCI Dedicated Region reduces operational complexity and costs associated with data center management, disaster recovery, and infrastructure procurement.
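One way to picture the "operational consistency" point above: the same client code targets a public OCI region or a Dedicated Region, differing only in the region identifier. The sketch below is a minimal illustration of that idea; the endpoint pattern follows OCI's public convention (`https://<service>.<region>.oraclecloud.com`), but the Dedicated Region identifier shown is a hypothetical placeholder, not a real region name.

```python
# Conceptual sketch: with full public cloud parity, tooling does not change
# between a public OCI region and a Dedicated Region - only the region
# identifier does. The endpoint pattern mirrors OCI's public convention;
# "acme-dr-zurich-1" is a made-up Dedicated Region name for illustration.

def service_endpoint(service: str, region: str,
                     domain: str = "oraclecloud.com") -> str:
    """Build an OCI-style service endpoint for a given region."""
    return f"https://{service}.{region}.{domain}"

# Same code path, different region identifier:
public = service_endpoint("iaas", "eu-frankfurt-1")       # public OCI region
dedicated = service_endpoint("iaas", "acme-dr-zurich-1")  # hypothetical Dedicated Region

print(public)     # https://iaas.eu-frankfurt-1.oraclecloud.com
print(dedicated)  # https://iaas.acme-dr-zurich-1.oraclecloud.com
```

In practice you would use the OCI SDKs or CLI, which resolve these endpoints for you; the point is simply that scripts, pipelines, and operational knowledge carry over unchanged.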

What is Oracle Cloud VMware Solution?

For many enterprises, VMware is a cornerstone of their infrastructure, powering mission-critical applications and handling sensitive workloads. Migrating these workloads to the cloud has the potential to unlock new efficiencies, but it also brings challenges related to compatibility, risk, and cost.

Oracle Cloud VMware Solution (OCVS) is an answer to these challenges, enabling organizations to extend or migrate VMware environments to Oracle Cloud Infrastructure (OCI) without re-architecting applications:

  • Minimal Disruption: Since OCVS is a VMware-certified solution, applications continue running as they did on-premises, ensuring continuity.
  • Reduced Risk: By leveraging familiar VMware tools and processes, the learning curve is minimized, reducing operational risk.
  • Lower Migration Costs: Avoiding re-architecting means lower costs and faster time-to-value.
  • Enhanced Security: OCVS inherits OCI’s strong security posture, ensuring that data is safeguarded at every layer, from infrastructure to application.
  • Reduced Hardware Spending: Since OCVS runs on OCI, there’s no need to invest in new data center hardware.
  • Disaster Recovery: Enterprises can establish OCI as a disaster recovery site, reducing capital costs on duplicate infrastructure.

The Synergy Between OCI Dedicated Region and OCVS

Using OCVS as part of an OCI Dedicated Region brings a unique set of advantages to private clouds. Together, they provide a solution that addresses the pressing demands for data sovereignty, cloud flexibility, and seamless application modernization.

OCI Dedicated Region and OCVS enable operational consistency across cloud and on-premises environments. Teams familiar with Oracle’s public cloud or VMware’s suite of tools can manage both environments with ease. This consistency allows CIOs to retain talent by providing a familiar technology landscape and reduces the need for retraining, thereby improving productivity.

Additionally, this combination allows the creation of a hybrid cloud architecture that seamlessly integrates on-premises infrastructure with cloud resources. OCI Dedicated Region provides a cloud environment within the customer’s data center, while OCVS allows existing VMware workloads to shift to this region without disruptions.

Conclusion

OCI Dedicated Region and Oracle Cloud VMware Solution together offer a powerful, flexible, and compliant infrastructure that empowers CIOs to meet the complex demands of modern enterprise IT. By combining the control of on-premises with the agility and flexibility of the cloud, this combined solution helps organizations achieve operational excellence, reduce risk, and accelerate digital transformation.

For decision-makers looking to strike a balance between legacy infrastructure and future-oriented cloud solutions, OCI Dedicated Region and OCVS represent a strategic investment that brings immediate and long-term value to the enterprise. This combination is not just about technology – it is about enabling business growth, operational resilience, and competitive advantage in a digital-first world.

OCI Dedicated Region – The Next-Generation Private Cloud

Private clouds and IT infrastructures deployed in on-premises data centers are going through their next evolution. We see vendors and solutions shifting from siloed private clouds towards a platform approach: a platform that no longer consists of different solutions (products) and components, but rather provides the right foundation, a feature set, and interfaces that let you expose and consume services such as IaaS, PaaS, DBaaS, and DRaaS.

If we talk about a platform, we usually mean something that is unified, not just “integrated” or stitched together. Integrated would imply that we still have different products (possibly even from the same vendor), and this approach is becoming less popular – unless a best-of-breed strategy is your way to attract talent. Do not forget: it massively increases your technical debt and hence your complexity.

This article highlights a private cloud platform that brings true public cloud characteristics to private clouds. As a matter of fact, it brings the public cloud to your on-premises data center: OCI Dedicated Region.

The Cloud Paradox

We could start an endless discussion about technical debt, the so-called public cloud sprawl, and the wish for cloud repatriation. Many people believe that “the” public cloud has failed to deliver its promise. Organizations and decision-makers are still figuring out the optimal way for their business to operate in a multi-cloud world.

In my opinion, the challenge today is that you have so many more topics to consider than ever before. New technologies, new vendors, new solutions, new regulations, and in general so many new possibilities for how to deliver a solution.

IT organizations have invested a lot of money, time, and resources over the past few years to familiarize themselves with these possibilities: hybrid cloud, multi-cloud, application modernization, security, data management, and artificial intelligence.

The public cloud has not failed – it is just failing forward, which means it is still maturing as well!

Other (private) cloud and virtualization companies grew by developing homegrown products and by acquiring companies to close feature gaps, which led to heavy integration efforts. Since private cloud vendors are also still evolving and maturing – while still trying to fix the technical debt they have delivered to their customers and partners – there seems to be no single private cloud vendor in the market that can provide a truly unified platform for on-premises data centers.

Interoperability, Portability, Data Gravity

Back in 2010, various companies and researchers were looking for ways to make private and public clouds more interoperable. The idea was a so-called “intercloud” that would allow organizations to move applications securely and freely between clouds at an acceptable cost. While this cost problem has not been solved yet, the following illustration from 2023 (the figures may not be fully accurate, so please verify) should give you an idea of where we stand:

Source: https://medium.com/@alexandre_43174/the-surprising-truth-about-cloud-egress-costs-d1be3f70d001 

Constantly moving applications and their data between clouds is not something that CIOs and application owners want. Do not forget: We are still figuring out how to move applications to the right cloud based on the right reasons.

Thought: AI/ML-based workload mobility and cost optimization could become a reality, but that is still far away.

That brings us to interoperability. The idea almost 15 years ago was based on standardized protocols and common standards that would allow VM and application mobility – which can be seen as cloud interoperability.

So, how are cloud providers trying to solve this challenge? By providing their proprietary solutions in other clouds.

While these hybrid or hybrid multi-cloud solutions bring advantages and solve some of the problems, depending on an organization’s strategy and partnerships, we face the next obstacle called data gravity.

The larger a dataset or database is, the more difficult it is to move. This incentivizes organizations to bring computing resources and applications closer to the data, rather than moving the data to where the processing is done. That is why organizations use different database solutions and DBaaS offerings in their private and public cloud(s).
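The data gravity point can be made concrete with some back-of-the-envelope arithmetic. The sketch below estimates the egress bill and transfer time for moving a large dataset between clouds; the egress rate and link bandwidth are illustrative assumptions, not any provider's actual pricing.

```python
# Back-of-the-envelope illustration of data gravity: moving a large dataset
# between clouds costs real money and takes real time. The $0.08/GB egress
# rate and the 10 Gbit/s link below are assumed figures for illustration.

def egress_cost_usd(dataset_gb: float, rate_per_gb: float) -> float:
    """Egress bill for moving dataset_gb out of a cloud at rate_per_gb."""
    return dataset_gb * rate_per_gb

def transfer_days(dataset_gb: float, bandwidth_gbps: float) -> float:
    """Ideal transfer time in days over a link of bandwidth_gbps (Gbit/s)."""
    seconds = (dataset_gb * 8) / bandwidth_gbps  # GB -> Gbit, then Gbit / (Gbit/s)
    return seconds / 86_400                      # seconds per day

dataset_gb = 500_000  # a 500 TB data warehouse
print(egress_cost_usd(dataset_gb, 0.08))  # 40000.0 USD at an assumed $0.08/GB
print(transfer_days(dataset_gb, 10))      # ~4.6 days over an assumed 10 Gbit/s link
```

Numbers like these explain why it is usually cheaper to move the compute to the data than the other way around.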

Distributed Cloud Architecture

Oracle’s distributed cloud architecture enables customers to run their workloads in geographically diverse locations while maintaining a consistent operational model across different environments:

  • Oracle Cloud Infrastructure (OCI). Oracle has built OCI to deliver high-performance computing and enterprise-grade cloud services with global availability across its various regions.
  • Hybrid Cloud and Interoperability. Oracle’s hybrid cloud capabilities, such as Exadata Cloud@Customer and OCI Dedicated Region, enable organizations to run Oracle Cloud services in their own data center. These services give customers the full benefits of Oracle Cloud Infrastructure while maintaining data within their data centers, which is ideal for industries with strict data residency or security policies.
  • Multi-Cloud. Oracle is the first hyperscaler to offer its databases in all the major public clouds (Azure, Google Cloud, and AWS). Then there is HeatWave MySQL on AWS, as well as the different interconnect options (Google Cloud, Azure).

These offerings address the mobility, interoperability, egress cost, and data gravity challenges mentioned above. In my opinion, no other vendor has yet achieved the same level of partnerships and integrations, bringing us closer to cloud interoperability.

This is the Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure from August 2023:

Gartner Magic-Quadrant-for-Distributed-Hybrid-Infrastructure

I do not know when the next MQ for Distributed Hybrid Infrastructure comes out (update: the 2024 Gartner MQ for DHI came out on October 10), but I guess that Oracle will be positioned even better then, because of the Oracle CloudWorld 2024 announcements and the future release of OCI Dedicated Region 25. If you missed the Dedicated Region 25 announcement, have a look at this interview:

Let us park OCI Dedicated Region for a minute and talk about data centers quickly.

Monolithic Data Centers for Modern Applications

As many of us know, the word “monolithic” describes something very large and difficult to change. Something inflexible.

It is very interesting to see that so many organizations talk about modern applications but are still managing and maintaining what one could call a “monolithic” data center. I had customers discussing a modern infrastructure for their modern (or to-be-modernized) applications – and with “modern infrastructure” they meant the public cloud.

So, it still surprises me that almost nobody talks about monolithic infrastructures or monolithic private clouds. Perhaps this has something to do with the (still mostly) monolithic applications, which implies that these workloads are running on a “legacy” or monolithic infrastructure.

So, what happens to the applications that have to stay in your data center, because you cannot or do not want to migrate them to the public cloud?

Some of those apps are for sure still important to the business, need to be lifecycled and patched, and some of them need to be modernized for you to stay competitive with the market.

What about a modern private cloud?

If your goal is to put modern applications on a modern platform, what is stopping you from investing in a more modern platform that can host not only your modern apps, but also your legacy apps and anything that might come in the future?

Where do you deploy your AI-based workloads and data services if such applications/workloads and their data have to stay in your private cloud?

And what is Gartner saying about the trend for public services spend?

All segments of the cloud market are expected to see growth in 2024. Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service (PaaS) at 20.6%…

Why do I mention this?

Because some people think that virtual machines and IaaS are legacy, and then come to the false conclusion that an on-premises cloud is obsolete. If that were true, why does Gartner regularly forecast the highest spending growth for IaaS? And wouldn’t it mean that the modern public cloud is hosting a huge number of legacy workloads and hence becoming obsolete as well?

I do not think so. 😀

The Next Generation

One of the main challenges with existing private clouds is the operating model. Imagine how organizations have built data centers in the past. You started to virtualize compute, then networking, and then storage. A little bit later you had to figure out how you automate, deploy, integrate, and maintain these components without forgetting security in the end.

A few years later, you had to start thinking about container orchestration and go through the same process again: how to build, run, connect, and secure container-based workloads.

Why? Because people believe that on-premises data centers will disappear and that applications must be cloud-native, containerized, and therefore orchestrated with Kubernetes. That is the very short and extremely simplified version of 20 years of virtualization history.

So, suddenly, you are stuck in both worlds, the monolithic data center and the modern public cloud, with different people (engineering, architecture, operations), processes, and technologies. Different integrations (ecosystem), strategic partnerships and operating models for different clouds.

What are the options at this point? Well, there are not so many:

  1. Stretch the private cloud to the public cloud (e.g., VMware Cloud Foundation, Nutanix)
  2. Stretch the public cloud to your data center (AWS Outposts, Azure Stack, OCI Dedicated Region or Oracle’s Cloud@Customer offerings)
  3. Leave all as it is and try to abstract the underlying infrastructure, but build a control plane on top for everything (Azure Arc, Google Anthos, VMware Tanzu)

The existing private cloud will always be seen as the legacy, outdated private cloud if nobody changes the processes and the capabilities that the data center platform can deliver.

Note: That might be okay, depending on an organization’s size and requirements.

What am I trying to say here? It is not only the operating model that has to change but also how the private cloud services are consumed by developers and operators. Some of the key features and “characteristics” they seek include:

  • Elastic scalability: The ability to automatically scale resources up and down based on demand, without the need for manual intervention or hardware provisioning.
  • Cost transparency and efficiency: Pay-as-you-go pricing models that align costs with actual resource consumption, improving financial efficiency.
  • Cloud-native services: Access to a wide range of managed services, such as databases, AI/ML tools, and serverless platforms, that accelerate application development and deployment.
  • Low operational overhead: Outsourcing the management of underlying infrastructure to reduce operational complexity and allow teams to focus on business outcomes.
  • Compliance and data sovereignty: The ability to meet strict regulatory requirements while ensuring that data and workloads remain under the enterprise’s control.
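The first characteristic in the list above, elastic scalability, can be sketched as a simple threshold-based scaling decision – the kind of logic a cloud platform automates so that no manual intervention or hardware provisioning is needed. The thresholds and node counts below are illustrative and not tied to any specific OCI service.

```python
# Minimal sketch of "elastic scalability": a threshold-based autoscaling
# decision. A real platform measures utilization continuously and applies
# cooldowns; the 80%/30% thresholds here are illustrative assumptions.

def desired_nodes(current: int, cpu_utilization: float,
                  scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                  min_nodes: int = 1, max_nodes: int = 10) -> int:
    """Return the node count after one scaling decision."""
    if cpu_utilization > scale_up_at:
        return min(current + 1, max_nodes)   # demand spike: scale out
    if cpu_utilization < scale_down_at:
        return max(current - 1, min_nodes)   # idle capacity: scale in
    return current                           # within the healthy band

print(desired_nodes(3, 0.92))  # 4: scale out under load
print(desired_nodes(3, 0.10))  # 2: scale in when idle
```

The point for consumers of a private cloud platform is that this loop runs behind a service interface; developers only see capacity that follows demand.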

This brings me to option number 2 and OCI Dedicated Region, because Oracle is the only public cloud provider that can bring the same set of public cloud services to an enterprise data center.

What is OCI Dedicated Region?

OCI Dedicated Region (previously known as Oracle Dedicated Region Cloud@Customer aka DRCC) provides the full suite of Oracle cloud services (IaaS, PaaS, and SaaS) for deployment in one or more customer-specified physical locations. This solution allows customers to maintain complete control over their data and applications, addressing the strictest security, regulatory, low latency, and data residency requirements. It is ideal for mission-critical workloads that may not move to the public cloud.

Diagram of OCI in a dedicated region

OCI Dedicated Region provides the same services available in Oracle’s public cloud regions. It is also certified to run Oracle SaaS applications, including ERP, Financials, HCM, and SCM, making it the only solution that delivers a fully integrated cloud experience for IaaS, PaaS, and SaaS directly on-premises.

Key features of DRCC:

  • Full Public Cloud Parity: DRCC offers the same services, APIs, and operational experience as Oracle’s public cloud. This includes Oracle Autonomous Database, Exadata, high-performance computing (HPC), Kubernetes, and more.
  • Private Cloud: The infrastructure is deployed within the customer’s data center, meaning all data stays on-premises, which is ideal for industries with strict data privacy or residency requirements.
  • Managed by Oracle: Oracle is responsible for managing, monitoring, updating, and securing the infrastructure, ensuring it operates with the same level of service as Oracle’s public cloud.
  • Pay-as-you-go: DRCC operates under a consumption-based pricing model, similar to public cloud services, where customers pay based on the resources they use.
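The pay-as-you-go point above boils down to metered billing: usage is measured per resource and multiplied by a unit rate. The sketch below illustrates that model; all rates, units, and usage figures are hypothetical and do not reflect actual Oracle pricing.

```python
# Sketch of consumption-based pricing: sum metered usage times unit rates.
# The resource names, quantities, and rates below are made up for
# illustration and are not Oracle's actual rate card.

def monthly_bill(usage: dict[str, float], rates: dict[str, float]) -> float:
    """Total bill: each metered quantity times its per-unit rate."""
    return sum(qty * rates[resource] for resource, qty in usage.items())

usage = {"ocpu_hours": 2_000, "storage_gb_months": 5_000, "egress_gb": 100}
rates = {"ocpu_hours": 0.05, "storage_gb_months": 0.025, "egress_gb": 0.0}

print(monthly_bill(usage, rates))  # 2000*0.05 + 5000*0.025 + 0 = 225.0
```

The contrast with classic on-premises infrastructure is that nothing is billed until it is consumed – no upfront hardware purchase sets the cost floor.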

Oracle Alloy

Oracle Alloy is a cloud infrastructure platform designed to allow service providers, independent software vendors (ISVs), and enterprises to build and operate their own customized cloud environments based on Oracle Cloud Infrastructure.

Becoming an Oracle Alloy partner diagram

Some key features of Oracle Alloy:

  • Customizable Cloud: Oracle Alloy allows organizations to brand, customize, and offer their own cloud services to customers using Oracle’s OCI technology. This enables service providers and enterprises to create tailored cloud environments for specific industries or regional needs.
  • Full Control: Unlike DRCC, which is managed entirely by Oracle, Alloy provides organizations with full control (of operations) over the infrastructure. They can operate, manage, and upgrade the environment as they see fit.
  • White-label Cloud Services: Oracle Alloy allows organizations to build and offer cloud services under their own brand. This is especially useful for telcos, financial institutions, governments or regional service providers who want to become cloud providers themselves.

In addition, partners can set their own pricing, rate cards, account types, and discount schedules. They can also define support structure and service levels. With embedded financial management capabilities from the Oracle Fusion Cloud ERP offering, Oracle Alloy enables partners to manage the customer lifecycle, including invoicing and billing their customers.

Final Words

Just because organizations call the combination of their data center solutions a private cloud (even if the components come from the same vendor) does not mean that they have the right capabilities (people, processes, technology – not only technology!) and private cloud maturity to enable business transformations.

So, if you want to bring your on-premises environment to the next level with a true private cloud and a cloud operating model, why don’t you bring a complete public cloud region into your data center? 🙂