Oracle Cloud Infrastructure 2025 Networking Professional Study Guide

When I first stepped into the world of cloud networking, it wasn’t through Oracle, AWS, or Azure. It was about 13 years ago, working at a small cloud service provider that ran its own infrastructure stack. We didn’t use hyperscale magic; we built everything ourselves.

Our cloud was stitched together with VMware vCloud Director, Cisco Nexus 1000v, physical Cisco routers and switches, and a good amount of BGP. We managed our own IP transits, IP peerings, created interconnects, configured static and dynamic routing, and deployed site-to-site VPNs for customers.

Years later, after moving into cloud-native networking and skilling up on Oracle Cloud Infrastructure (OCI), I realized how many of the same concepts apply, but with better tools, faster provisioning, and scalable security. OCI offers powerful services for building modern network topologies: Dynamic Routing Gateways, Service Gateways, FastConnect, Network Firewalls, and Zero Trust Packet Routing (ZPR).

This study guide is for anyone preparing for the OCI 2025 Networking Professional certification. 

Exam Objectives

Review the exam topics:

  • Design and Deploy OCI Virtual Cloud Networks (VCN)
  • Plan and Design OCI Networking Solutions and App Services
  • Design for Hybrid Networking Architectures
  • Transitive Routing
  • Implement and Operate Secure OCI Networking and Connectivity Solutions
  • Migrate Workloads to OCI
  • Troubleshoot OCI Networking and Connectivity Issues

VCN – Your Virtual Cloud Network

Think of a VCN as your private, software-defined data center in the cloud. It is where everything begins. Subnets, whether public or private, live inside it. You control IP address ranges (CIDRs), route tables, and security lists, which together determine who can talk to what and how. Every other networking component in OCI connects back to the VCN, making it the central nervous system of your cloud network.
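
To make this concrete, here is a minimal sketch using the OCI Python SDK that creates a VCN and carves a subnet out of its CIDR range. The compartment OCID and CIDR blocks are placeholder values, and error handling is omitted.

```python
import oci

# Load credentials from the default config file (~/.oci/config)
config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder

# Create the VCN with its overall CIDR range
vcn = network.create_vcn(
    oci.core.models.CreateVcnDetails(
        compartment_id=compartment_id,
        cidr_block="10.0.0.0/16",
        display_name="demo-vcn",
        dns_label="demovcn",
    )
).data

# Subnets carve the VCN CIDR into smaller ranges
subnet = network.create_subnet(
    oci.core.models.CreateSubnetDetails(
        compartment_id=compartment_id,
        vcn_id=vcn.id,
        cidr_block="10.0.1.0/24",
        display_name="demo-subnet",
    )
).data
```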

Internet Gateway – Letting the Outside World In (and Out)

If your VCN needs to connect to the public internet – say, to allow inbound HTTP traffic to a web server or to allow your compute instances to fetch updates – you’ll need an Internet Gateway. It attaches to your VCN and enables this connectivity.

This image shows a simple layout of a VCN with a public subnet that uses an internet gateway.

But it is just one piece of the puzzle. You still need to configure route tables and security rules correctly. Otherwise, traffic won’t flow. 
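
As a rough illustration of that last point, the following OCI Python SDK sketch attaches an internet gateway and then adds the default route rule without which traffic still won’t flow. All OCIDs are placeholders, and security rules are assumed to be configured separately.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder
vcn_id = "ocid1.vcn.oc1..example"                  # placeholder
route_table_id = "ocid1.routetable.oc1..example"   # placeholder

# 1) Attach an internet gateway to the VCN
igw = network.create_internet_gateway(
    oci.core.models.CreateInternetGatewayDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        is_enabled=True,
        display_name="demo-igw",
    )
).data

# 2) Point the default route at the gateway. Note that route_rules
# replaces the route table's entire rule list, not just one entry.
network.update_route_table(
    route_table_id,
    oci.core.models.UpdateRouteTableDetails(
        route_rules=[
            oci.core.models.RouteRule(
                destination="0.0.0.0/0",
                destination_type="CIDR_BLOCK",
                network_entity_id=igw.id,
            )
        ]
    ),
)
```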

Local Peering Gateway – Talking Across VCNs (in the Same Region)

When you have multiple VCNs in the same OCI region – maybe for environment isolation or organizational structure – a Local Peering Gateway (LPG) allows them to communicate privately. No internet, no extra costs. Just fast, internal traffic. It’s especially useful when designing multi-VCN architectures that require secure east-west traffic flow within a single region.

This image shows the basic layout of two VCNs that are locally peered, each with a local peering gateway.
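
A minimal local peering sketch with the OCI Python SDK (placeholder OCIDs, no error handling): create an LPG in each VCN, then connect them. You still need route rules in each VCN pointing the peer’s CIDR at the local LPG.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder

# One LPG per VCN (both VCNs live in the same region)
lpg_a = network.create_local_peering_gateway(
    oci.core.models.CreateLocalPeeringGatewayDetails(
        compartment_id=compartment_id,
        vcn_id="ocid1.vcn.oc1..vcn-a",  # placeholder
        display_name="lpg-vcn-a",
    )
).data
lpg_b = network.create_local_peering_gateway(
    oci.core.models.CreateLocalPeeringGatewayDetails(
        compartment_id=compartment_id,
        vcn_id="ocid1.vcn.oc1..vcn-b",  # placeholder
        display_name="lpg-vcn-b",
    )
).data

# Establish the peering between the two gateways (initiated from one side)
network.connect_local_peering_gateways(
    lpg_a.id,
    oci.core.models.ConnectLocalPeeringGatewaysDetails(peer_id=lpg_b.id),
)
```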

Dynamic Routing Gateway – The Multi-Path Hub

The Dynamic Routing Gateway (DRG) is like the border router for your VCN. If you want to connect to on-prem via VPN, FastConnect, or peer across regions, you’re doing it through the DRG. It supports advanced routing, enables transitive routing, and connects you to just about everything external. It’s your ticket to hybrid and multi-region topologies.

Remote Peering Connection – Cross-Region VCN Peering

Remote Peering Connections (RPCs) let you extend your VCN communication across regions. Let’s say you have a primary environment in US East and DR in Germany: you’ll need a DRG in each region and an RPC between them. It’s all private, secure, and highly performant. And it’s one of the foundations for multi-region, global OCI architectures.

This image shows the basic layout of two VCNs that are remotely peered, each with a remote peering connection on the DRG.

Note: Without peering, a given VCN would need an internet gateway and public IP addresses for the instances that need to communicate with another VCN in a different region. 
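
Here is a rough cross-region sketch with the OCI Python SDK, assuming US East (Ashburn) and Frankfurt as example regions. OCIDs are placeholders, and waiting for the DRGs and RPCs to reach an available lifecycle state is omitted for brevity.

```python
import oci

config = oci.config.from_file()

# One client per region; the region names here are just examples
client_ash = oci.core.VirtualNetworkClient({**config, "region": "us-ashburn-1"})
client_fra = oci.core.VirtualNetworkClient({**config, "region": "eu-frankfurt-1"})

compartment_id = "ocid1.compartment.oc1..example"  # placeholder

def rpc_on_new_drg(client):
    # Create a DRG, then hang a remote peering connection off it.
    # In real code, wait for each resource to become AVAILABLE first.
    drg = client.create_drg(
        oci.core.models.CreateDrgDetails(compartment_id=compartment_id)
    ).data
    return client.create_remote_peering_connection(
        oci.core.models.CreateRemotePeeringConnectionDetails(
            compartment_id=compartment_id, drg_id=drg.id
        )
    ).data

rpc_ash = rpc_on_new_drg(client_ash)
rpc_fra = rpc_on_new_drg(client_fra)

# Connect the two RPCs; the connection is initiated from one side only
client_ash.connect_remote_peering_connections(
    rpc_ash.id,
    oci.core.models.ConnectRemotePeeringConnectionsDetails(
        peer_id=rpc_fra.id,
        peer_region_name="eu-frankfurt-1",
    ),
)
```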

Service Gateway – OCI Services Without Public Internet

The Service Gateway is gold! It allows your VCN to access OCI services like Object Storage or Autonomous Database without going over the public internet. Traffic stays on the Oracle backbone, meaning better performance and tighter security. No internet gateway or NAT gateway is required to reach those specific services.

This image shows the basic layout of a VCN with a service gateway
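
A hedged sketch with the OCI Python SDK: look up the regional “All Services in Oracle Services Network” service label, attach a service gateway, and note that the corresponding route rule uses the SERVICE_CIDR_BLOCK destination type rather than a plain CIDR. OCIDs are placeholders.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder
vcn_id = "ocid1.vcn.oc1..example"                  # placeholder

# Pick the "All <region> Services in Oracle Services Network" entry
all_services = next(
    s for s in network.list_services().data if "All" in s.name
)

sgw = network.create_service_gateway(
    oci.core.models.CreateServiceGatewayDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        services=[
            oci.core.models.ServiceIdRequestDetails(service_id=all_services.id)
        ],
    )
).data

# The route rule targets the service CIDR label, not an IP range;
# apply it with update_route_table as in the internet gateway example.
rule = oci.core.models.RouteRule(
    destination=all_services.cidr_block,
    destination_type="SERVICE_CIDR_BLOCK",
    network_entity_id=sgw.id,
)
```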

NAT Gateway – Outbound Internet Access

A NAT Gateway allows outbound internet access for private subnets, while keeping those instances hidden from unsolicited inbound traffic. When a host in the private network initiates an internet-bound connection, the NAT device’s public IP address becomes the source IP address for the outbound traffic. The response traffic from the internet therefore uses that public IP address as the destination IP address. The NAT device then routes the response to the host in the private network that initiated the connection.

This image shows the basic layout of a VCN with a NAT gateway and internet gateway
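
For illustration, a minimal OCI Python SDK sketch (placeholder OCIDs): create the NAT gateway and build the default route rule you would add to the private subnet’s route table.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder
vcn_id = "ocid1.vcn.oc1..example"                  # placeholder

natgw = network.create_nat_gateway(
    oci.core.models.CreateNatGatewayDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        display_name="demo-natgw",
    )
).data

# The private subnet's route table sends internet-bound traffic to the
# NAT gateway; return traffic is translated back to the originating
# private IP. Apply with update_route_table as shown earlier.
rule = oci.core.models.RouteRule(
    destination="0.0.0.0/0",
    destination_type="CIDR_BLOCK",
    network_entity_id=natgw.id,
)
```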

Private Endpoints – Lock Down Your Services

With Private Endpoints, you can expose services like OKE, Functions, or Object Storage only within a VCN or peered network. It’s the cloud-native way to implement zero trust within your OCI environment, making sure services aren’t reachable over the public internet unless you explicitly want them to be. You can think of the private endpoint as just another VNIC in your VCN. You can control access to it like you would for any other VNIC: by using security rules.

This diagram shows a VCN with a private endpoint for a resource.

The private endpoint gives hosts within your VCN and your on-premises network access to a single resource within the Oracle service of interest (for example, one database in Autonomous Database Serverless). Compare that private access model with a service gateway (explained before):

If you created five Autonomous Databases for a given VCN, all five would be accessible through a single service gateway by sending requests to a public endpoint for the service. However, with the private endpoint model, there would be five separate private endpoints: one for each Autonomous Database, and each with its own private IP address.

The list of services supported by a service gateway can be found in the OCI documentation.

Oracle Services Network (OSN) – The Private Path to Oracle

The Oracle Services Network is the internal highway for communication between your VCN and Oracle-managed services. It underpins things like the Service Gateway and ensures your service traffic doesn’t touch the public internet. When someone says “use OCI’s backbone,” this is what they’re talking about.

Network Load Balancer – Lightweight, Fast, Private

Network Load Balancer is a load balancing service that operates at Layers 3 and 4 of the Open Systems Interconnection (OSI) model. The service provides high availability and high throughput while maintaining ultra-low latency. A Network Load Balancer can operate in three modes:

  • Full Network Address Translation (NAT) mode 
  • Source Preservation mode
  • Transparent (Source/Destination Preservation) mode

The Network Load Balancer service supports three primary network load balancer policy types:

  1. 5-Tuple Hash: Routes incoming traffic based on 5-Tuple (source IP and port, destination IP and port, protocol) Hash. This is the default network load balancer policy.
  2. 3-Tuple Hash: Routes incoming traffic based on 3-Tuple (source IP, destination IP, protocol) Hash.
  3. 2-Tuple Hash: Routes incoming traffic based on 2-Tuple (source IP, destination IP) Hash.
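
As a rough sketch with the OCI Python SDK, here is how the policy shows up in practice when creating a private NLB: the backend set’s policy field takes FIVE_TUPLE, THREE_TUPLE, or TWO_TUPLE. Names, ports, and OCIDs are placeholders.

```python
import oci

config = oci.config.from_file()
nlb_client = oci.network_load_balancer.NetworkLoadBalancerClient(config)

details = oci.network_load_balancer.models.CreateNetworkLoadBalancerDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    display_name="demo-nlb",
    subnet_id="ocid1.subnet.oc1..example",            # placeholder
    is_private=True,  # keep it internal, no public IP
    backend_sets={
        "tcp-backends": oci.network_load_balancer.models.BackendSetDetails(
            policy="FIVE_TUPLE",  # default; or THREE_TUPLE / TWO_TUPLE
            health_checker=oci.network_load_balancer.models.HealthCheckerDetails(
                protocol="TCP", port=22
            ),
        )
    },
    listeners={
        "tcp-listener": oci.network_load_balancer.models.ListenerDetails(
            name="tcp-listener",
            default_backend_set_name="tcp-backends",
            protocol="TCP",
            port=22,
        )
    },
)

nlb = nlb_client.create_network_load_balancer(details).data
```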

Site-to-Site VPN – The Hybrid Gateway

Connecting your on-premises network to OCI? The Site-to-Site VPN offers a quick, secure way to do it. It uses IPSec tunnels, and while it’s great for development and backup connectivity, you might find bandwidth a bit constrained for production workloads. That’s where FastConnect steps in.

When you set up Site-to-Site VPN, you get two redundant IPSec tunnels. Oracle encourages you to configure your CPE device to use both tunnels (if your device supports it).

This image shows Scenario B: a VCN with a regional private subnet and a VPN IPSec connection.
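
A minimal sketch with the OCI Python SDK (placeholder OCIDs and example IPs): register your CPE, create the IPSec connection against a DRG, and list the two tunnels that Oracle provisions for you.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder

# Register the on-premises VPN device (CPE) by its public IP
cpe = network.create_cpe(
    oci.core.models.CreateCpeDetails(
        compartment_id=compartment_id,
        ip_address="203.0.113.10",  # example on-prem public IP
    )
).data

# One IPSec connection = two redundant tunnels, created for you
ipsec = network.create_ip_sec_connection(
    oci.core.models.CreateIPSecConnectionDetails(
        compartment_id=compartment_id,
        cpe_id=cpe.id,
        drg_id="ocid1.drg.oc1..example",   # placeholder
        static_routes=["192.168.0.0/16"],  # on-prem CIDR (static routing)
    )
).data

# Inspect both tunnels (and configure your CPE to use both)
for tunnel in network.list_ip_sec_connection_tunnels(ipsec.id).data:
    print(tunnel.vpn_ip, tunnel.status)
```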

FastConnect – Dedicated, Predictable Connectivity

FastConnect gives you a private, dedicated connection between your data center and OCI. It’s the go-to solution when you need stable, high-throughput performance. It is available via Oracle partners, third-party providers, or colocation, and bypasses the public internet entirely. In hybrid setups, FastConnect is the gold standard.

This image shows a colocation setup where you have two physical connections and virtual circuits to the FastConnect location.

Have a look at the FastConnect Redundancy Best Practices!

IPsec over FastConnect

You can also layer IPSec encryption over FastConnect, giving you the security of VPN and the performance of FastConnect. This is especially useful for compliance or regulatory scenarios that demand encryption at every hop, even over private circuits.

Diagram showing the termination ends of both the virtual circuit and the IPSec tunnel.

Note: IPSec over FastConnect is available for all three connectivity models (partner, third-party provider, colocation with Oracle) and multiple IPSec tunnels can exist over a single FastConnect virtual circuit.

FastConnect – MACsec Encryption

With MACsec, FastConnect natively supports line-rate encryption between the FastConnect edge device and your CPE, without the cryptographic overhead associated with other encryption methods such as IPsec VPNs. MACsec lets customers secure and protect all their traffic between on-premises and OCI from threats such as intrusions, eavesdropping, and man-in-the-middle attacks.

Border Gateway Protocol (BGP) – The Routing Protocol of the Internet

If you are using FastConnect, Site-to-Site VPN, or any complex DRG routing scenario, you are likely working with BGP. OCI uses BGP to dynamically exchange routes between your on-premises network and your DRG.

BGP enables route prioritization, failover, and smarter traffic engineering. You’ll need to understand concepts like ASNs, route advertisements, and local preference.

BGP is also essential in multi-DRG and transitive routing topologies, where path selection and traffic symmetry matter.
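
To illustrate, a hedged OCI Python SDK sketch that switches an existing IPSec tunnel from static routing to BGP. The ASN and the /31 tunnel interface addresses are example values you would align with your CPE configuration.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

ipsec_id = "ocid1.ipsecconnection.oc1..example"  # placeholder
tunnel = network.list_ip_sec_connection_tunnels(ipsec_id).data[0]

# Enable BGP on the first tunnel; all values below are examples
network.update_ip_sec_connection_tunnel(
    ipsec_id,
    tunnel.id,
    oci.core.models.UpdateIPSecConnectionTunnelDetails(
        routing="BGP",
        bgp_session_config=oci.core.models.UpdateIPSecTunnelBgpSessionDetails(
            customer_bgp_asn="65010",             # your on-prem ASN
            customer_interface_ip="10.0.0.1/31",  # tunnel interface, your side
            oracle_interface_ip="10.0.0.0/31",    # tunnel interface, Oracle side
        ),
    ),
)
```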

Transitive Routing

You can have a VCN that acts as a hub, routing traffic between spokes. This is crucial for building scalable, shared-services architectures. Using DRG attachments and route rules, you can create full-mesh or hub-and-spoke topologies with total control. Transit routing can also be used to carry traffic from one OCI region to another over the OCI backbone.

The three primary transit routing scenarios are:

  • Access between several networks through a single DRG with a firewall between networks
  • Access to several VCNs in the same region
  • Private access to Oracle services

This image shows the basic hub and spoke layout of VCNs along with the gateways required.
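
A minimal hub-and-spoke sketch with the OCI Python SDK (placeholder OCIDs, lifecycle waits omitted): one DRG acts as the hub and each spoke VCN gets a DRG attachment. Steering spoke-to-spoke traffic through a firewall VCN is then a matter of DRG route table configuration.

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder

# One DRG acts as the hub...
drg = network.create_drg(
    oci.core.models.CreateDrgDetails(compartment_id=compartment_id)
).data

# ...and each spoke VCN is attached to it. In real code, wait for the
# DRG to become AVAILABLE before creating attachments.
for vcn_id in ["ocid1.vcn.oc1..spoke-a", "ocid1.vcn.oc1..spoke-b"]:  # placeholders
    network.create_drg_attachment(
        oci.core.models.CreateDrgAttachmentDetails(
            drg_id=drg.id,
            vcn_id=vcn_id,
        )
    )
```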

Inter-Tenancy Connectivity – Across Tenants

In multi-tenant scenarios, for example between business units or regions, inter-tenancy connectivity allows you to securely link VCNs across OCI accounts. This might involve shared DRGs or peering setups. It’s increasingly relevant for large enterprises where cloud governance splits resources across different tenancies but still needs seamless interconnectivity.

Network Firewall – Powered by Palo Alto Networks

The OCI Network Firewall is a managed, cloud-native network security service. It acts as a stateful, Layer 3 to 7 firewall that inspects and filters network traffic at a granular, application-aware level. You can think of it as a fully integrated, Oracle-managed instance of Palo Alto’s firewall technology with all the power of Palo Alto, but integrated into OCI’s networking fabric.

In this example, routing is configured from an on-premises network through a dynamic routing gateway (DRG) to the firewall. Traffic is routed from the DRG, through the firewall, and then from the firewall subnet to a private subnet.

Diagram of routing from a DRG through a firewall, and then to a private subnet.

In this example, routing is configured from the internet to the firewall. Traffic is routed from the internet gateway (IGW), through the firewall, and then from the firewall subnet to a public subnet.

This diagram shows routing from the internet, through a firewall, and then to a public subnet.

In this example, routing is configured from a subnet to the firewall. Traffic is routed from Subnet A, through the firewall, and then from the firewall subnet to Subnet B.

This diagram shows routing from Subnet A, through a firewall, and then to Subnet B.

Zero Trust Packet Routing (ZPR)

Oracle Cloud Infrastructure Zero Trust Packet Routing (ZPR) protects sensitive data from unauthorized access through intent-based security policies that you write for OCI resources to which you assign security attributes. Security attributes are labels that ZPR uses to identify and organize OCI resources. ZPR enforces policy at the network level each time access is requested, regardless of potential network architecture changes or misconfigurations.

ZPR works on top of existing network security group (NSG) and security list rules. For a packet to reach a target, it must pass all NSG and security list rules as well as ZPR policy. If any NSG rule, security list rule, or ZPR policy doesn’t allow the traffic, the request is dropped.

Wrapping Up

OCI’s networking stack is deep, flexible, and modern. Whether you are an enterprise architect, a security specialist, or a hands-on cloud engineer, mastering these building blocks is key. Not just to pass the OCI 2025 Network Professional certification, but to design secure, scalable, and resilient cloud networks. 🙂

5 Strategic Paths from VMware to Oracle Cloud Infrastructure (OCI)

We all know that the future of existing VMware customers has become more complicated and less certain. Many enterprises are reevaluating their reliance on VMware as their core infrastructure stack. So, where to go next?

For enterprises already invested in Oracle technology, or simply those looking for a credible, flexible, and enterprise-grade alternative, Oracle Cloud Infrastructure (OCI) offers a comprehensive set of paths forward. Whether you want to modernize, rehost, or run hybrid workloads, OCI doesn’t force you to pick a single direction. Instead, it gives you a range of options: from going cloud-native, to running your existing VMware stack unchanged, to building your own sovereign cloud footprint.

Here are five realistic strategies for VMware customers considering OCI and how to migrate from VMware to Oracle Cloud Infrastructure. It doesn’t need to be an either-or decision; it can also be an “and” approach.

1. Cloud-Native with OCI – Start Fresh, Leave VMware Behind

For organizations ready to move beyond traditional infrastructure altogether, the cloud-native route is the cleanest break you can make. This is where you don’t just move workloads; you rearchitect them. You replace VMs with containers where possible, and perhaps lift and shift some of the existing workloads. You replace legacy service dependencies with managed cloud services. And most importantly, you replace static, manually operated environments with API-driven infrastructure.

OCI supports this approach with a robust portfolio: you have compute instances that scale on demand, Oracle Kubernetes Engine (OKE) for container orchestration, OCI Functions for serverless workloads, and Autonomous Database for data platforms that patch and tune themselves. The tooling is modern, open, and mature – Terraform, Ansible, and native SDKs are all available and well-documented.

This isn’t a quick VMware replacement. It requires a DevOps mindset, application refactoring, and an investment in automation and CI/CD. It is not something you do in a weekend. But it’s the only path that truly lets you leave the baggage behind and design infrastructure the way it should work in 2025.

2. OCVS – Run VMware As-Is, Without the Hardware

If cloud-native is the clean break, then Oracle Cloud VMware Solution (OCVS) is the strategic pause. This is the lift-and-shift strategy for enterprises that need continuity now, but don’t want to double down on on-prem investment.

With OCVS, you’re not running a fully managed service (in contrast to the VMware offerings on AWS, Azure, and GCP). You get the full vSphere, vSAN, NSX, and vCenter stack deployed on Oracle bare-metal infrastructure in your own OCI tenancy. You’re the admin. You manage the lifecycle. You patch and control access. But you don’t have to worry about hardware procurement, power and cooling, or supply chain delays. And you can integrate natively with OCI services: back up to OCI Object Storage, peer with Exadata, and extend IAM policies across the board.

Oracle Cloud VMware Solution

The migration is straightforward. You can replicate your existing environment (with HCX), run staging workloads side-by-side, and move VMs with minimal friction. You keep your operational model, your monitoring stack, and your tools. The difference is, you get out of your data center contract and stop burning time and money on hardware lifecycle management.

This isn’t about modernizing right now. It’s about escaping VMware hardware and licensing lock-in without losing operational control.

3. Hybrid with OCVS, Compute Cloud@Customer, and Exadata Cloud@Customer

Now we’re getting into enterprise-grade architecture. This is the model where OCI becomes a platform, not just a destination. If you’re in a regulated industry and you can’t run everything in the public cloud, but you still want the same elasticity, automation, and control, this hybrid model makes a lot of sense.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Here’s how it works: you run OCVS in the OCI public region for DR, or workloads that have to stay on vSphere. But instead of moving everything to the cloud, you deploy Compute Cloud@Customer (C3) and Exadata Cloud@Customer (ExaCC) on-prem. That gives you a private cloud footprint with the same APIs and a subset of OCI IaaS/PaaS services but physically located in your own facility, behind your firewall, under your compliance regime.

You manage workloads on C3 using the exact same SDKs, CLI tools, and Terraform modules as the public cloud. You can replicate between on-prem and cloud, burst when needed, or migrate in stages. And with ExaCC running in the same data center, your Oracle databases benefit from the same SLA and performance guarantees, with none of the data residency headaches.

This model is ideal if you’re trying to modernize without breaking compliance. It keeps you in control, avoids migration pain, and still gives you access to the full OCI ecosystem when and where you need it.

4. OCI Dedicated Region – A Public Cloud That Lives On-Prem

When public cloud is not an option, OCI Dedicated Region becomes the answer.

This isn’t a rack. It is an entire cloud region. You get all OCI services like compute, storage, OCVS, OKE, Autonomous DB, identity, even SaaS, deployed inside your own facility. You retain data sovereignty and you control physical access. You also enforce local compliance rules and operate everything with the same OCI tooling and automation used in Oracle’s own hyperscale regions.

“Oracle Dedicated Region 25” announced: your own Oracle Cloud region in just three server racks – Publickey

What makes Dedicated Region different from C3 is the scale and service parity. While C3 delivers core IaaS and some PaaS capabilities, Dedicated Region is literally the full stack. You can run OCVS in there, connect it to your enterprise apps, and have a fully isolated VMware environment that never leaves your perimeter.

For VMware customers, it means you don’t have to choose between control and modernization. You get both.

5. Oracle Alloy – Cloud Infrastructure for Telcos and VMware Service Providers

If you’re a VMware Cloud Director customer or a telco/provider building cloud services for others, then Oracle just handed you an entirely new business model. Oracle Alloy allows you to offer your own cloud under your brand, with your pricing and your operational control, based on the same OCI technology stack Oracle runs itself.

This is not just reselling; it is operating your own OCI cloud.

Diagram: becoming an Oracle Alloy partner.

As a VMware-based cloud provider, Alloy gives you a path to modernize your platform and expand your services without abandoning your customer base. You can run your own VMware environment (OCVS), offer cloud-native services (OKE, DBaaS, Identity, Monitoring), and transition your customers at your own pace. All of it on a single platform, under your governance.

What makes Alloy compelling is that it doesn’t force you to pick between VMware and OCI, it lets you host both side by side. You keep your high-value B2B workloads and add modern, cloud-native services that attract new tenants or internal business units.

For providers caught in the middle of the VMware licensing storm, Alloy might be the most strategic long-term play available right now.

Open-Source Can Help With Portability And Lock-In But It Is Not A Silver Bullet

We have spent years chasing cloud portability and warning against vendor lock-in. And yet, every enterprise I have worked with is more locked in today than ever. Not because they failed to use open-source software (OSS). Not because they made bad decisions, but because real-world architecture, scale, and business momentum don’t care about ideals. They care about outcomes.

The public cloud promised freedom. APIs, managed services, and agility. Open-source added hope. Kubernetes, Terraform, Postgres. Tools that could, in theory, run anywhere. And so we bought into the idea that we were building “portable” infrastructure. That one day, if pricing changed or strategy shifted, we could pack up our workloads and move. But now, many enterprises are finding out the truth:

Portability is not a feature. It is a myth; for most large organizations, it is a unicorn: much sought after, but elusive in reality.

Let me explain. But before I do, let us talk about interclouds again.

Remember Interclouds?

Interclouds, once hyped as the answer to cloud portability (and lock-in), promised a seamless way to abstract infrastructure across providers, enabling workloads to move freely between clouds. In theory, they would shield enterprises from vendor dependency by creating a uniform control plane and protocols across AWS, Azure, GCP, OCI and beyond.

David Bernstein Intercloud

Note: An idea and concept that was discussed in 2012. It is 2025, and not much has happened since then.

But in practice, intercloud platforms failed to solve the lock-in problem because they only masked it rather than removing it. Beneath the abstraction layer, each provider still has its own APIs, services, network behaviors, and operational peculiarities.

Enterprises quickly discovered that you can’t abstract your way out of data gravity, compliance policies, or deeply integrated PaaS services. Instead of enabling true portability, interclouds just delayed the inevitable realization: you still have to commit somewhere.

The Trigger Nobody Plans For

Imagine you are running a global enterprise with 500 or 1,000 applications. They span two public clouds. Some are modern, containerized, and well-defined in Terraform. Others are legacy, fragile, lifted and shifted years ago in a hurry. A few run in third-party SaaS platforms.

Then the call comes: “We need to exit one of our clouds. Legal, compliance, pricing. Doesn’t matter why. It has to go.”

Suddenly, that portability you thought you had? It is smoke. The Kubernetes clusters are portable in theory, but the CI/CD tooling, monitoring stack, and security policies are not. Dozens of apps use PaaS services tightly coupled to their original cloud. Even the apps that run in containers still need to be re-integrated, re-tested, and re-certified in the new environment.

This isn’t theoretical. I have seen it firsthand. The dream of being “cloud neutral” dies the moment you try to move production workloads – at scale, with real dependencies, under real deadlines.

Open-Source – Freedom with Strings Attached

It is tempting to think that open-source will save you. After all, it is portable, right? It is not tied to any vendor. You can run it anywhere. And that is true on paper.

But the moment you run it in production, at enterprise scale, a new reality sets in. You need observability, governance, upgrades, SLAs. You start relying on managed services for these open-source tools. Or you run them yourself, and now your internal teams are on the hook for uptime, performance, and patching.

You have simply traded one form of lock-in for another: the operational lock-in of owning complexity.

So yes, open-source gives you options. But it doesn’t remove friction. It shifts it.

The Other Lock-Ins No One Talks About

When we talk about “avoiding lock-in”, we usually mean avoiding proprietary APIs or data formats. But in practice, most enterprises are locked in through completely different vectors:

Data gravity makes it painful to move large volumes of information, especially when compliance and residency rules come into play. The real issue is the latency, synchronization, and duplication challenges that come with moving data between clouds.

Tooling ecosystems create invisible glue. Your CI/CD pipelines, security policies, alerting, cost management. These are all tightly coupled to your cloud environment. Even if the core app is portable, rebuilding the ecosystem around it is expensive and time-consuming.

Skills and culture are rarely discussed, but they are often the biggest blockers. A team trained to build in cloud A doesn’t instantly become productive in cloud B. Tooling changes. Concepts shift. You have to retrain, re-hire, or rely on partners.

So, the question becomes: is lock-in really about technology or inertia (of an enterprise’s IT team)?

Data Gravity

Data gravity is one of the most underestimated forces in cloud architecture, whether you are using proprietary services or open-source software. The idea is simple: as data accumulates, everything else – compute, analytics, machine learning, and governance – tends to move closer to it.

In practice, this means that once your data reaches a certain scale or sensitivity, it becomes extremely hard to move, regardless of whether it is stored in a proprietary cloud database or an open-source solution like PostgreSQL or Kafka.

With proprietary platforms, the pain comes from API compatibility, licensing, and high egress costs. With open-source tools, it is about operational entanglement: complex clusters, replication lag, security hardening, and integration sprawl.

Either way, once data settles, it anchors your architecture, creating a gravitational pull that resists even the most well-intentioned portability efforts.

The Cost of Chasing Portability

Portability is often presented as a best practice. But there is a hidden cost.

To build truly portable applications, you need to avoid proprietary features, abstract your infrastructure, and write for the lowest common denominator. That often means giving up performance, integration, and velocity. You are paying an “insurance premium” for a theoretical future event like cloud exit or vendor failure, that may never come.

Worse, in some cases, over-engineering for portability can slow down innovation. Developers spend more time writing glue code or dealing with platform abstraction layers than delivering business value.

If the business needs speed and differentiation, this trade-off rarely holds up.

So… What Should We Do?

Here is the hard truth: lock-in is not the problem. Lack of intention is.

Lock-in is unavoidable. Whether it is a cloud provider, a platform, a SaaS tool, or even an open-source ecosystem. You are always choosing dependencies. What matters is knowing what you are committing to, why you are doing it, and what the exit cost will be. That is where most enterprises fail.

And let us be honest for a moment. A lot of enterprises call it lock-in because their past strategic decision doesn’t feel right anymore. And then they blame their “strategic” partner.

The better strategy? Accept lock-in, but make it intentional. Know your critical workloads. Understand where your data lives. Identify which apps are migration-ready and which ones never will be. And start building the muscle of exit-readiness. Not for all 1,000 apps, but for the ones that matter most.

True portability isn’t binary. And in most large enterprises, it only applies to the top 10–20% of apps that are already modernized, loosely coupled, and containerized. The rest? They are staying where they are until there is a budget, a compliance event, or a crisis.

Avoiding U.S. Public Clouds And The Illusion of Independence

While independence from the U.S. hyperscalers and the potential risks associated with the CLOUD Act may seem like a compelling reason to adopt open-source solutions, it is not always the silver bullet it appears to be. The idea is appealing: running your infrastructure on open-source tools in order to avoid being dependent on any single cloud provider, especially those based in the U.S., whose data may be subject to foreign government access under the CLOUD Act.

However, this approach introduces its own set of challenges.

First, by attempting to cut ties with U.S. providers, organizations often overlook the global nature of the cloud. Most open-source tools still rely on cloud providers for deployment, support, and scalability. Even if you host your open-source infrastructure on non-U.S. clouds, the reality is that many key components of your stack, like databases, messaging systems, or AI tools, may still be indirectly influenced by U.S.-based tech giants.

Second, operational complexity increases as you move away from managed services, requiring more internal resources to manage security, compliance, and performance. Rather than providing true sovereignty, the focus on avoiding U.S. hyperscalers may result in an unintended shift of lock-in from the provider to the infrastructure itself, where the trade-off is a higher cost in complexity and operational overhead.

Top Contributors To Key Open-Source Projects

U.S. public cloud providers like Google, Amazon, Microsoft, Oracle and others are not just spectators in this space. They’re driving the innovation and development of key projects:

  1. Kubernetes remains the flagship project of the CNCF, offering a robust container orchestration platform that has become essential for cloud-native architectures. The project has been significantly influenced by a variety of contributors, with Google being the original creator.
  2. Prometheus, the popular monitoring and alerting toolkit, was created by SoundCloud and is now widely adopted in cloud-native environments. The project has received significant contributions from major players, including Google, Amazon, Facebook, IBM, Lyft, and Apple. 
  3. Envoy, a high-performance proxy and communication bus for microservices, was developed by Lyft, with broad support from Google, Amazon, VMware, and Salesforce.
  4. Helm is the Kubernetes package manager, designed to simplify the deployment and management of applications on Kubernetes. It has a strong community with contributions from Microsoft (via Deis, which they acquired), Google, and other cloud providers.
  5. OpenTelemetry provides a unified standard for distributed tracing and observability, ensuring applications are traceable across multiple systems. The project has seen extensive contributions from Google, Microsoft, Amazon, Red Hat, and Cisco, among others. 

While these projects are open-source and governed by the CNCF (Cloud Native Computing Foundation), the influence of these tech companies cannot be overstated. They not only provide the tools and resources necessary to drive innovation but also ensure that the technologies powering modern cloud infrastructures remain at the cutting edge of industry standards.

Final Thoughts

Portability has become the rallying cry of modern cloud architecture. But real-world enterprises aren’t moving between clouds every year. They are digging deeper into ecosystems, relying more on managed services, and optimizing for speed.

So maybe the conversation shouldn’t be about avoiding lock-in but about managing it. Perhaps more about understanding it. And, above all, owning it. The problem isn’t lock-in itself. The problem is treating lock-in like a disease, rather than what it really is: an architectural and strategic trade-off.

This is where architects and technology leaders have a critical role to play. Not in pretending we can design our way out of lock-in, but in navigating it intentionally. That means knowing where you can afford to be tightly coupled, where you should invest in optionality, and where it is simply not worth the effort to abstract away.

Disaster Recovery With OCI Dedicated Region

While studying for the OCI 2025 Network Professional exam, I ran into something that is easy to miss at first: OCI Realms. They define boundaries between regions, and they matter a lot, especially when working with OCI Dedicated Regions. One of the most frequently asked questions during technical workshops for OCI Dedicated Region is: “Can I run only a single OCI Dedicated Region deployment in my data center and use a public (commercial) OCI region as a secondary site?”

To answer this question, we have to understand the basic concept of realms first.

What are realms?

Oracle Cloud Infrastructure (OCI) regions are organized into separate cloud realms for customers with differing security and compliance needs. Realms are isolated from each other and share no physical infrastructure, resources, data, accounts, or network connections. OCI has multiple realms, including commercial, government, and dedicated realms. You can’t access regions that aren’t in your realm.

OCI Realms

Customer tenancies exist in a single realm and can access only regions that belong to that realm.

Example: The regions Paris, Frankfurt, Madrid, Stockholm, and Zurich all have the same realm key “OC1” and therefore belong to the same realm. The Serbian region has the realm key “OC20” and belongs to a different realm.

Dedicated Regions are public regions assigned to a single organization. Region-specific details, such as region ID and region key are not available in public documentation. You need to ask your Oracle representative for this information for your OCI Dedicated Region.

OCI Realms - Public and Dedicated Region 

Note: Please be aware that I took this screenshot from the OCI 2025 Network Professional course’s student guide. Refer to the OCI documentation for a current view of the available cloud regions.

Physical and logical isolation between realms

Yes, this also means that the EU Sovereign Cloud realm is completely isolated from the commercial public cloud realm.

So, we have learned that by default, all commercial OCI regions live in the same realm. That means they can talk to each other using native OCI services like VCN peering, object storage replication, IAM policies, etc. 

What else should you know?

Let us come back to our question: “Can I run only a single OCI Dedicated Region deployment in my data center and use a public (commercial) OCI region as a secondary site?”

First of all, we have to ask ourselves: why do we want to connect a Dedicated (private) Region to a public commercial region? Is it just about cost?

Most customers already have two data center locations anyway. What is stopping you from deploying two OCI Dedicated Regions?

Second, what is the impact if I host my primary site locally in an OCI Dedicated Region and, for disaster recovery purposes, use a commercial region?

Connect an OCI Dedicated Region to another commercial region

Oracle does not provide the tools to connect regions across a realm boundary via its network backbone. But in such cases, it is still possible to leverage OCI FastConnect. From my understanding, we would need Virtual Cloud Networks (VCNs) with non-overlapping CIDR blocks. We would then use Dynamic Routing Gateways (DRGs), one in your local Dedicated Region and one in the commercial region (Zurich, for example), to allow traffic between both VCNs.

The DRG is a virtual router that provides a path for private network traffic between VCNs in the same region, between a VCN and a network outside the region, such as a VCN in another Oracle Cloud Infrastructure region, an on-premises network, or a network in another cloud provider.

This image shows the basic layout of two VCNs that are remotely peered, each with a remote peering connection on the DRG

Disaster recovery across realms: Not recommended

Oracle recommends configuring disaster recovery (DR) within the same realm due to the isolation between realms.

Some OCI services might support manual DR between realms; others require custom scripts or tools (like rsync, Data Pump, or GoldenGate); and some services (like Autonomous DB or native Object Storage replication) just won’t work across realms. No replication. No failover.

Oracle’s best practices are clear: If you need disaster recovery, keep both OCI (Dedicated) Regions in the same realm.

When you cross realms, you are building everything manually: replication, IAM, automation, failover.

The result? You are unsupported by some OCI services (make sure you validate your architecture, requirements, and configuration). And nobody wants a manual, high-risk, and unsupported path. Right? 🙂 

OCI Dedicated Region Is A Strategic Enabler Of Transformation

Many enterprises are reaching a tipping point. Rather than continuing to extend and maintain aging legacy systems, they are taking a bolder path: building new IT foundations from the ground up. This greenfield approach reflects a desire to move faster, innovate with fewer constraints, and finally free the organization from years of accumulated technical debt. But while the opportunity is clear, the execution is complex. Enterprises need a way to modernize without compromising compliance, performance, or control. Especially in industries where data sensitivity and regulatory oversight are non-negotiable. Oracle Cloud Infrastructure (OCI) Dedicated Region meets this challenge head-on by offering a full public cloud experience delivered inside the enterprise’s own environment, behind its firewall, under its governance.

Build a Modern Foundation Without Constraints

When organizations choose to start fresh with a greenfield architecture, they typically aim to embrace cloud-native design patterns, modernize their application stack, and implement automation from day one. However, many enterprise-grade solutions still force trade-offs between control and capability. Either you give up data residency by using a public cloud, or you sacrifice functionality by deploying a limited private cloud or hybrid solution.

OCI Dedicated Region removes this dilemma. It provides access to the entire suite of Oracle’s cloud services, including high-performance compute, autonomous databases, machine learning, analytics, integration tools, and more. All deployed inside your own data center. This means organizations no longer need to compromise. They can build a modern, scalable, cloud-native platform that meets both their business and regulatory needs, and all without data ever leaving their premises.

OCI Dedicated Region Overview

Minimize Risk While Transforming

Enterprise transformation is rarely about a single cutover. The reality is that legacy systems and new platforms must often coexist for months and sometimes even years during migration. OCI Dedicated Region makes it possible to build your future-state environment in parallel with your current one. This decouples the pace of innovation from the constraints of legacy systems. You can test, iterate, and scale new workloads without immediately touching the systems that still keep the business running.

And because OCI Dedicated Region is operated and managed by Oracle as a service, even though it runs on your premises, your internal teams are freed from much of the operational overhead. This hybrid approach significantly reduces transformation risk, making it easier to modernize core systems without the “big bang” stress that often derails large-scale IT initiatives.

Enabling Organizational Agility

Technology transformation alone isn’t enough. Enterprises also need to rethink how they operate, how teams collaborate, make decisions, and deliver value faster. In traditional environments, IT processes are centralized and slow-moving. Provisioning new infrastructure, accessing secure data sets, or deploying applications often involves multiple layers of approval and coordination, which limits agility.

OCI changes that dynamic. With built-in support for self-service, DevOps workflows, and on-demand resource provisioning, technical and interdisciplinary teams gain the freedom to act quickly within a structured governance model. Whether it’s a development team testing a new product feature or a data team running a machine learning pipeline, OCI Dedicated Region provides the tooling to move fast without waiting. More importantly, these capabilities are consistent whether you’re running in the public OCI cloud or in your own Dedicated Region.

Autonomy with Governance

As organizations move toward more distributed operating models, where decisions are pushed closer to the edges of the business, the need for robust governance becomes even more critical. Teams must have the autonomy to act quickly, but within well-defined boundaries. OCI addresses this balance through a rich set of identity, access, and policy management features that let enterprises define who can do what, with which resources, and under what conditions.

With tools like compartments, quotas, tagging policies, and integrated audit logging, IT teams can enforce operational controls without creating friction for teams. OCI Dedicated Region applies these same governance tools locally, ensuring that even when infrastructure is deployed on-premises, the same policies and oversight models can be maintained. This allows organizations to scale innovation across teams and departments while maintaining a consistent approach to security, compliance, and resource management.

Application Portability and Workload Mobility

One of the key advantages of this consistent infrastructure, using OCI and OCI Dedicated Region, is application portability and workload mobility. In many cloud environments, moving workloads between regions, clouds, or on-premises data centers often requires significant re-architecture or compromises in functionality.

OCI takes a fundamentally different approach by ensuring consistency across environments at both the infrastructure and platform levels. Whether you’re running in the public OCI cloud, a Dedicated Region in your data center, or even a hybrid deployment that spans both, the same APIs, services, management tools, and SLAs apply. This makes it much easier to build once and deploy anywhere – without rewriting code, changing dependencies, or retraining staff.

For regulated industries or global enterprises, this enables a flexible deployment strategy where applications and data can move based on changing legal, cost, or performance requirements, and not because of vendor limitations. The result is a true “portable cloud” model where you control the placement of your workloads, not your provider.

While multi-cloud strategies are touted for their potential to mitigate vendor lock-in, they introduce significant operational complexities:

  • Diverse APIs and Management Tools: Managing different cloud platforms requires teams to learn and maintain multiple sets of tools and interfaces.

  • Inconsistent Security Models: Each cloud provider has its own security protocols, complicating unified security management.

  • Fragmented Compliance Postures: Ensuring compliance across multiple clouds can be challenging due to varying standards and certifications.

  • Increased Operational Overhead: Coordinating between different providers can lead to inefficiencies and increased costs.

These challenges often lead organizations to opt for a single cloud provider, accepting the trade-off of potential lock-in for the sake of operational simplicity.

Conclusion

What enterprises need today is not just new infrastructure, they need a platform for change. A platform that enables both IT and business transformation, that reduces friction while increasing security, and that empowers teams to deliver results faster. OCI Dedicated Region provides exactly that. It combines the agility of the public cloud with the control and assurance of on-premises deployment. It supports greenfield initiatives that demand flexibility, coexistence with legacy systems, and scalable governance. And it does all of this in a way that aligns with the realities of large, complex organizations.

Whether you’re reimagining core platforms, enabling AI-driven use cases, or simply creating a future-ready digital foundation, OCI Dedicated Region delivers the architecture, the tools, and the flexibility to move with confidence.

It’s more than an infrastructure choice: it’s a strategic enabler for long-term, enterprise-grade transformation.

The State of Application Modernization 2025

Every few weeks, I find myself in a conversation with customers or colleagues where the topic of application modernization comes up. Everyone agrees that modernization is more important than ever. The pressure to move faster, build more resilient systems, and increase operational efficiency is not going away.

But at the same time, when you look at what has actually changed since 2020… it is surprising how much has not.

We are still talking about the same problems: legacy dependencies, unclear ownership, lack of platform strategy, organizational silos. New technologies have emerged, sure. AI is everywhere, platforms have matured, and cloud-native patterns are no longer new. And yet, many companies have not even started building the kind of modern on-premises or cloud platforms needed to support next-generation applications.

It is like we are stuck between understanding why we need to modernize and actually being able to do it.

Remind me, why do we need to modernize?

When I joined Oracle in October 2024, some people reminded me that most of us do not know why we are where we are. One could say that it is not important to know that. In my opinion, it very much is. Something has fundamentally changed in the past that has led us to our situation.

In the past, when we moved from physical servers to virtual machines (VMs), apps did not need to change. You could lift and shift a legacy app from bare metal to a VM and it would still run the same way. The platform changed, but the application did not care. It was an infrastructure-level transformation without rethinking the app itself. So, the transition (P2V) of an application was very smooth and not complicated.

But now? The platform demands change.

Cloud-native platforms like Kubernetes, serverless runtimes, or even fully managed cloud services do not just offer a new home. They offer a whole new way of doing things. To benefit from them, you often have to re-architect how your application is built and deployed.

That is the reason why enterprises have to modernize their applications.

What else is different?

User expectations, business needs, and competitive pressure have exploded as well. Companies need to:

  • Ship features faster
  • Scale globally
  • Handle variable load
  • Respond to security threats instantly
  • Reduce operational overhead

A Quick Analogy

Think of it like this: moving from physical servers to VMs was like transferring your VHS tapes to DVDs. Same content, just a better format.

But app modernization? That is like going from DVDs to Netflix. You do not just change the format, but you rethink the whole delivery model, the user experience, the business model, and the infrastructure behind it.

Why Is Modernization So Hard?

If application modernization is so powerful, why isn’t everyone done with it already? The truth is, it is complex, disruptive, and deeply intertwined with how a business operates. Organizations often underestimate how much effort it takes to replatform systems that have evolved over decades. Here are six common challenges companies face during modernization:

  1. Legacy Complexity – Many existing systems are tightly coupled, poorly documented, and full of business logic buried deep in spaghetti code. 
  2. Skill Gaps – Moving to cloud-native tech like Kubernetes, microservices, or DevOps pipelines requires skills many organizations do not have in-house. Upskilling or hiring takes time and money.
  3. Cultural Resistance – Modernization often challenges organizational norms, team structures, and approval processes. People do not always welcome change, especially if it threatens familiar workflows.
  4. Data Migration & Integration – Legacy apps are often tied to on-prem databases or batch-driven data flows. Migrating that data without downtime is a massive undertaking.
  5. Security & Compliance Risks – Introducing new tech stacks can create blind spots or security gaps. Modernizing without violating regulatory requirements is a balancing act.
  6. Cost Overruns – It is easy to start a cloud migration or container rollout only to realize the costs (cloud bills, consultants, delays) are far higher than expected.

Modernization is not just a technical migration. It’s a transformation of people, process, and platform (technology). That is why it is hard and why doing it well is such a competitive advantage!

Technical Debt Is Also Slowing Things Down

Also known as the silent killer of velocity and innovation: technical debt.

Technical debt is the cost of choosing a quick solution now instead of a better one that would take longer. We have all seen/done it. 🙂 Sometimes it is intentional (you needed to hit a deadline), sometimes it is unintentional (you did not know better back then). Either way, it is a trade-off. And just like financial debt, it accrues interest over time.

Here is the tricky part: technical debt usually doesn’t hurt you right away. You ship the feature. The app runs. Management is happy.

But over time, debt compounds:

  • New features take longer because the system is harder to change

  • Bugs increase because no one understands the code

  • Every change becomes risky because there is no test safety net

Eventually, you hit a wall where your team is spending more time working around the system than building within it. That is when people start whispering: “Maybe we need to rewrite it.” Or they just leave your company.

Let me say it: Cloud Can Also Introduce New Debt

Cloud-native architectures can reduce technical debt, but only if used thoughtfully.

You can still:

  • Over-complicate microservices

  • Abuse Kubernetes without understanding it

  • Ignore costs and create “cost debt”

  • Rely on too many services and lose track

Use the cloud to eliminate debt by simplifying, automating, and replacing legacy patterns, not just lifting them into someone else’s data center.

It Is More Than Just Moving to the Cloud 

Modernization is about upgrading how your applications are built, deployed, run, and evolved, so they are faster, cheaper, safer, and easier to change. Here are some core areas where I have seen organizations make real progress:

  • Improving CI/CD. You can’t build modern applications if your delivery process is stuck in 2010.
  • Data Modernization. Migrate from monolithic databases to cloud-native, distributed ones.
  • Automation & Infrastructure as Code. It is the path to resilience and scale.
  • Serverless Computing. It is the “don’t worry about servers” mindset and ideal for many modern workloads.
  • Containerizing Workloads. Containers are a stepping stone to microservices, Kubernetes, and real DevOps maturity.
  • Zero-Trust Security & Cybersecurity Posture. One of the biggest priorities at the moment.
  • Cloud Migration. It is not about where your apps run; it is about how well they run there. “The cloud” should make you faster, safer, and leaner.

As you can see, application modernization is not one thing; it is many things. You do not have to do all of these at once. But if you are serious about modernizing, these points (and more) must be part of your blueprint. Modernization is a mindset.

Why (replatforming) now?

There are a few reasons why application modernization projects are increasing:

  • The maturity of cloud-native platforms: Kubernetes, managed databases, and serverless frameworks have matured to the point where they can handle serious production workloads. It is no longer “bleeding edge”.
  • DevOps and Platform Engineering are mainstream: We have shifted from siloed teams to collaborative, continuous delivery models. But that only works if your platform supports it.
  • AI and automation demand modern infrastructure: To leverage modern AI tools, event-driven data, and real-time analytics, your backend can’t be a 2004-era database with a web front-end duct-taped to it.

Conclusion

There is no longer much debate: (modern) applications are more important than ever. Yet despite all the talk around cloud-native technologies and modern architectures, the truth is that many organizations are still trying to catch up and work hard to modernize not just their applications, but also the infrastructure and processes that support them.

The current progress is encouraging, and many companies have learned from the experience of their first modernization projects.

One thing that is becoming harder to ignore is how much the geopolitical situation is starting to shape decisions around application modernization and cloud adoption. Concerns around data sovereignty, digital borders, national cloud regulations, and supply chain security are no longer just legal or compliance issues. They are shaping architecture choices.

Some organizations are rethinking their cloud and modernization strategies, looking at multi-cloud or hybrid models to mitigate risk. Others are delaying cloud adoption due to regional uncertainty, while a few are doubling down on local infrastructure to retain control. It is not just about performance or cost anymore, but also about resilience and autonomy.

The global context (suddenly) matters, and it is influencing how platforms are built, where data lives, and who organizations choose to partner with. If anything, it makes the case even stronger for flexible, portable, cloud-native architectures, so you are not locked into a single region or provider.