Secure Cloud Networking in OCI – Zero Trust Packet Routing

Zero Trust Packet Routing (ZPR) is Oracle Cloud Infrastructure’s (OCI) move to bring the principles of zero trust to the packet level. In simple terms, it allows you to control exactly which workloads can communicate with each other, based not on IP addresses, ports, or subnets, but on high-level, intent-based labels.

Think of it as network segmentation for the cloud-native era, done without reshuffling subnet layouts, maintaining static security lists, or untangling hard-to-follow firewall rules.

ZPR allows you to define policies that are explicit, least-privilege, auditable, and decoupled from network topology. It provides an additional layer of protection on top of existing OCI security primitives, such as NSGs, Security Lists, and IAM.

Protection against internet exfiltration with Zero Trust Packet Routing (ZPR)

Key Concepts Behind ZPR

To really understand ZPR, let’s break it into four essential building blocks:

1. Security Attribute Namespaces & Attributes

These are labels that describe your cloud resources in human-readable, intent-focused terms.

  • A Namespace is a grouping mechanism for attributes (e.g. app, env, sensitivity).

  • An Attribute is a key-value pair like app:frontend, env:prod, sensitivity:high.

ZPR lets you tag resources with up to 3 attributes (1 for VCNs), and policies reference those attributes to determine which communication flows are permitted.

This is powerful because it enables semantic security policies. Instead of relying on IP- or port-based rules, you use logic that's closer to your business model.

2. ZPR Policy Language (ZPL)

ZPR policies are written in ZPL, Oracle’s purpose-built policy language for defining allowed connections. ZPL statements follow a clear syntax:

in networks:<VCN-name> allow <source-attribute> endpoints to connect to <destination-attribute> endpoints with protocol='<proto/port>'

Example:

in networks:prod-vcn allow app:frontend endpoints to connect to app:backend endpoints with protocol='tcp/443'

This policy allows frontend workloads to reach backend workloads over HTTPS (tcp/443), and only within the prod-vcn.

This type of human-readable policy is easy to reason about, easy to audit, and matches well with how teams think about their systems (by role, not IP).
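Because the grammar is so regular, even a few lines of code can pull the intent out of a statement. The sketch below is illustrative only: it handles just the single allow form shown above, not Oracle's full ZPL grammar.

```python
import re

# Simplified pattern for the ZPL 'allow' statement shown above.
# This is an illustrative sketch, not Oracle's actual grammar.
ZPL_PATTERN = re.compile(
    r"in networks:(?P<vcn>\S+) "
    r"allow (?P<src>\S+) endpoints "
    r"to connect to (?P<dst>\S+) endpoints "
    r"with protocol='(?P<proto>[^']+)'"
)

def parse_zpl(statement: str) -> dict:
    """Extract the VCN, source/destination attributes, and protocol."""
    match = ZPL_PATTERN.fullmatch(statement.strip())
    if match is None:
        raise ValueError(f"not a valid allow statement: {statement!r}")
    return match.groupdict()

policy = parse_zpl(
    "in networks:prod-vcn allow app:frontend endpoints "
    "to connect to app:backend endpoints with protocol='tcp/443'"
)
print(policy)
# {'vcn': 'prod-vcn', 'src': 'app:frontend', 'dst': 'app:backend', 'proto': 'tcp/443'}
```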

More policy examples can be found in the OCI documentation.

3. Enforcement and Evaluation Logic

ZPR does not replace OCI's native security tools; it layers on top of them. Every packet that passes through your VCN is evaluated against:

  1. Network Security Groups (NSGs)
  2. Security Lists (for subnets)
  3. ZPR Policies

A packet is allowed only if all three layers permit it.

This makes ZPR defense-in-depth rather than a replacement for traditional controls.
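The AND-of-all-layers evaluation can be sketched in a few lines. The layer contents below are invented for illustration; in OCI each layer is of course evaluated by the platform, not by user code.

```python
# Conceptual model of OCI's layered evaluation: a flow must be
# permitted by NSGs, Security Lists, AND ZPR policies.
# The rule sets below are made up for illustration.

def packet_permitted(flow, layers):
    """A flow is allowed only if every security layer permits it."""
    return all(layer(flow) for layer in layers)

# Each layer is a predicate over (src, dst, protocol) tuples.
nsg = lambda f: f in {("app:frontend", "app:backend", "tcp/443")}
security_list = lambda f: f[2].startswith("tcp/")
zpr = lambda f: f in {("app:frontend", "app:backend", "tcp/443")}

flow = ("app:frontend", "app:backend", "tcp/443")
print(packet_permitted(flow, [nsg, security_list, zpr]))        # True

# If ZPR has no matching policy, the packet is dropped even though
# NSGs and Security Lists would have allowed it.
zpr_empty = lambda f: False
print(packet_permitted(flow, [nsg, security_list, zpr_empty]))  # False
```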

It’s also worth noting:

  • ZPR policies are enforced only within a single VCN.

    • Inter-VCN communication still relies on other mechanisms like DRG and route tables.

  • ZPR policies are evaluated at packet routing time, before any connection is established.

4. Resource Support & Scope

ZPR is currently supported on a growing list of OCI resources, including:

  • VCNs

  • Compute Instances

  • Load Balancers

  • DB Systems (Autonomous/Exadata)

Also important:

  • ZPR can be enabled only in the home region of a tenancy

  • Enabling ZPR in a tenancy creates a default Oracle-ZPR security attribute namespace

  • Changes to ZPR policies in the Console might take up to five minutes to apply

How to Use ZPR

Step 1: Create Namespaces and Attributes

You start by creating Security Attribute Namespaces (e.g., env, app, tier) and assigning Attributes (e.g., env:prod, app:frontend) to your resources.

You can do this via:

  • OCI Console

  • CLI (oci zpr security-attribute create)

  • Terraform (via oci_zpr_security_attribute resource)

  • REST API or SDKs

You can assign up to 3 attributes per resource (except VCNs, which allow only 1).
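As a hedged sketch, Step 1 might look like this in Terraform. The resource names follow the article's mention of oci_zpr_security_attribute plus a companion namespace resource; the exact argument names are assumptions and should be verified against the OCI provider documentation.

```hcl
# Illustrative only: resource and argument names may differ from the
# current OCI Terraform provider schema — check the provider docs.

resource "oci_zpr_security_attribute_namespace" "app" {
  compartment_id = var.compartment_ocid
  name           = "app"
  description    = "Application-role attributes for ZPR policies"
}

resource "oci_zpr_security_attribute" "frontend" {
  security_attribute_namespace_id = oci_zpr_security_attribute_namespace.app.id
  name                            = "frontend"
  description                     = "Frontend workloads"
}
```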

Step 2: Write ZPR Policies Using ZPL

Once your attributes are in place, write policies in ZPL to define who can talk to whom. You can use:

  • Simple Policy Builder – GUI-based and good for basic use cases. It lets you select from prepopulated lists of resources, identified by their security attributes, to express security intent between two endpoints; the builder then generates the policy statement with correct syntax.

  • Policy Template Builder – Uses predefined templates based on common use case scenarios, with prefilled ZPR policy statements that you can then customize to create a ZPR policy.

  • Manual Policy Editor

  • CLI or API – For IaC and automation flows

Example: Allow backend apps in the prod-vcn to reach the database tier on port 1521 (Oracle DB):

in networks:prod-vcn allow app:backend endpoints to connect to app:database endpoints with protocol='tcp/1521'

Step 3: Assign Attributes to Resources

Finally, use the Console or CLI to attach attributes to resources like compute instances, load balancers, and VCNs.

This is the crucial step that links the policy with real workloads.
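A hedged Terraform sketch of this step, attaching an attribute to a compute instance: the security_attributes argument and its key format shown here are assumptions for illustration, so consult the OCI provider documentation for the exact schema.

```hcl
# Illustrative only: the security_attributes argument shape is an
# assumption; verify the exact key format in the provider docs.
resource "oci_core_instance" "frontend_vm" {
  # ... standard instance arguments (shape, subnet, image) elided ...

  security_attributes = {
    "app.frontend.value" = "true"
    "app.frontend.mode"  = "enforce"
  }
}
```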

Security Advantages of ZPR

Zero Trust Packet Routing introduces significant security improvements across Oracle Cloud Infrastructure. Here’s what makes it a standout approach:

  • Identity-Aware Traffic Control
    Policies are based on resource identity and metadata (tags), not just IP addresses, making lateral movement by attackers significantly harder.

  • Micro-segmentation by Design
    Enables granular control between resources such as frontend, backend, and database tiers, aligned with zero trust principles.

  • No Dependency on Subnets or Security Lists
    ZPR policies operate independently of traditional network segmentation, reducing configuration complexity.

  • Simplified Policy Management with ZPL
    Oracle’s purpose-built ZPR Policy Language (ZPL) allows for concise, human-readable security rules, reducing human error.

  • Auditability and Transparency
    All ZPR policies are tracked and auditable via OCI logs and events, supporting compliance and governance needs.

  • Built for Modern Cloud Architectures
    Native support for dynamic and ephemeral cloud resources like managed databases, load balancers, and more.

  • Defense-in-Depth Integration
    ZPR complements other OCI security tools like NSGs, IAM, and Logging, reinforcing a layered security posture.

Summary

Zero Trust Packet Routing marks a pivotal shift in how network security is managed in Oracle Cloud Infrastructure. Traditional security models rely heavily on IP addresses, static network boundaries, and perimeter-based controls. In contrast, ZPR allows you to enforce policies based on the actual identity and purpose of resources by using a policy language that is both readable and precise.

By decoupling security controls from network constructs like subnets and IP spaces, ZPR introduces a modern, identity-centric approach that scales effortlessly with cloud-native workloads. Whether you are segmenting environments in a multitenant architecture, controlling east-west traffic between microservices, or enforcing strict rules for database access, ZPR offers the control and granularity you need without compromising agility.

The real power of ZPR lies not just in its policy engine but in how it integrates with the broader OCI ecosystem. It complements IAM, NSGs, and logging by offering another layer of precision. One that’s declarative and tightly aligned with your operational and compliance requirements.

If you are serious about least privilege, microsegmentation, and secure cloud-native design, ZPR deserves your attention.

Cloud Exit Triggers – What Happens When the Exit Button Isn’t Optional?

It is becoming clearer by the day: geopolitical realities are forcing CIOs and regulators to revisit their cloud strategy, not just for performance or innovation, but for continuity, legal control, and sovereignty. The past few years have been a story of cloud-first, then cloud-smart, then cloud repatriation. The next chapter is about cloud control. And with the growing influence of U.S. legislation like the CLOUD Act, many in Europe’s regulated sectors are starting to ask: what happens when we need to exit?

Now add another layer: what if your cloud provider is still technically and legally subject to a foreign jurisdiction, even when the cloud lives in your own country and your own data centers?

That’s the fundamental tension with models like Oracle Alloy (or OCI Dedicated Region), a promising construct that brings full public cloud capabilities into local hands, but with a control plane and infrastructure still operated by Oracle itself. So what if something changes (for example, politically) and you need to exit?

Let’s explore what that exit could look like in practice, and whether Oracle’s broader portfolio provides a path forward for such a scenario.

Local Control – How Far Does Oracle Alloy Really Go?

Oracle Alloy introduces a compelling model for delivering public cloud services with local control. For providers like cloud13 (that’s the fictitious company I am using for this article), this means the full OCI service catalogue can run under the cloud13 brand, with customer relationships, onboarding, and support all handled locally. Critically, the Alloy control plane itself is deployed on-premises in cloud13’s own data center, not remotely managed from an Oracle facility. This on-site architecture ensures that operational control, including provisioning, monitoring, and lifecycle management, remains firmly within Swiss borders.

But while the infrastructure and control plane are physically hosted and operated by cloud13, Oracle still provides and maintains the software stack. The source code, system updates, telemetry architecture, and core service frameworks are still Oracle-owned IP, and subject to Oracle’s global governance and legal obligations. 

Please note: Even in disconnected Alloy scenarios, update mechanisms or security patches may require periodic Oracle coordination. Understanding how these touchpoints are logged and audited will be crucial in high-compliance sectors.

Oracle Alloy

So, while cloud13 ensures data residency, operational proximity, and sovereign service branding, the legal perimeter around the software stack itself may still inherit external jurisdictional influence.

For some sectors, this hybrid control model strikes the right balance. But for others, particularly those anticipating geopolitical triggers (even highly unlikely!) or regulatory shifts, it raises a question: what if you need to exit Alloy entirely?

What a Cloud Exit Really Costs – From Oracle to Anywhere

Let’s be honest and realistic: moving cleanly from Oracle Cloud Infrastructure (OCI) to a hyperscaler like AWS or Azure is anything but simple. OCI’s services are deeply intertwined. If you are running Oracle-native PaaS or database services, you are looking at significant rework – sometimes a full rebuild – to get those workloads running smoothly in a different cloud ecosystem.

On top of that, data egress fees can quickly pile up, and when you add the cost and time of re-certification, adapting security policies, and retraining your teams on new tools, the exit suddenly becomes expensive and drawn out.

That brings us to the critical question: if you are already running workloads in Oracle Alloy, what are your realistic exit paths, especially on-premises?

Going the VMware, Nutanix, or Platform9 route doesn’t solve much of the problem either. Sure, they offer a familiar infrastructure layer, but they don’t come close to the breadth of integrated platform services Oracle provides. Every native service dependency you have will need to be rebuilt or replaced.

Then There’s Azure Local and Google Distributed Cloud

Microsoft and Google both offer sovereign cloud variants that come in connected and disconnected flavours.

While Azure Local and Google Distributed Cloud are potential alternatives, they behave much like public cloud platforms. If your workloads already live in Azure or Google Cloud, these services might offer a regulatory bridge. But if you are not already in those ecosystems, and like in our case, are migrating from an Oracle-based platform, you are still facing a full cloud migration.

Yes, that’s rebuilding infrastructure, reconfiguring services, and potentially rearchitecting dozens or even hundreds of applications.

And it’s not just about code. Legacy apps often depend on specific runtimes, custom integrations, or licensed software that doesn’t map easily into a new stack. Even containerised workloads need careful redesign to match new orchestration, security, and networking models. Multiply that across your application estate, and you are no longer talking about a pivot.

You are talking about a multi-year transformation programme.

That’s before you even consider the physical reality. To run such workloads locally, you would need enough data center space (imagine repatriation or a dual-vendor strategy), power, cooling, network integration, and a team that can operate it all at scale. These alternatives aren’t just expensive to build. They also require a mature operational model and skills that most enterprises simply don’t have ready.

One cloud is already challenging enough. Now, imagine a multi-cloud setup and pressure to migrate.

From Alloy to Oracle Compute Cloud@Customer Isolated – An Exit Without Downtime

Oracle’s architecture allows customers to move their cloud workloads from Alloy into an Oracle Compute Cloud@Customer environment (known as C3I), with minimal disruption. Because these environments use the exact same software stack and APIs as the public OCI cloud, workloads don’t need to be rewritten or restructured. You maintain the same database services, the same networking constructs, and the same automation frameworks.

This makes the transition more of a relocation than a rebuild. Everything stays intact – your code, your security model, your SLAs. The only thing that changes is the control boundary. In the case of C3I, Oracle has no remote access. All infrastructure remains physically isolated, and operational authority rests entirely with the customer.

Oracle Compute Cloud@Customer Isolated

By contrast, shifting to another public or private cloud requires rebuilding and retesting. And while VMware or similar platforms might accommodate general-purpose workloads, they still lack the cloud experience.

Note: Oracle Compute Cloud@Customer offers OCI’s full IaaS and a subset of PaaS services.

While C3I doesn’t yet deliver the full OCI portfolio, it includes essential services like Oracle Linux, Autonomous Database, Vault, IAM, and Observability & Management, making it viable for most regulated use cases.

Alloy as a Strategic Starting Point

So, should cloud13 even start with Alloy?

That depends on the intended path. For some, Alloy is a fast way to enter the market, leveraging OCI’s full capabilities with local branding and customer intimacy. But it should never be a one-way road. The exit path, no matter what the destination is, must be designed, validated, and ready before geopolitical conditions force a decision.

This isn’t a question of paranoia. It’s good cloud design. You want to have an answer for the regulators. You want to be prepared and feel safe.

The customer experience must remain seamless. And when required, the workloads should ideally move within the same cloud logic, with the same automation and the same (or at least some of the same) platform services.

Could VMware Be Enough?

For some customers, VMware might remain a logical choice, particularly where traditional applications and operational familiarity dominate. It enables a high degree of portability, and for infrastructure-led workloads, it’s an acceptable solution. But VMware environments lack integrated PaaS. You don’t get Autonomous DB. You get only limited monitoring, logging, and analytics services. You don’t get out-of-the-box identity federation or application delivery pipelines.

Ultimately, you are buying infrastructure, not a cloud.

The Sovereign Stack – C3I and Exadata Cloud@Customer

That’s why Oracle’s C3I, especially when paired with Exadata Cloud@Customer (ExaCC) or a future isolated variant of it, offers a more complete solution. It delivers the performance, manageability, and sovereignty that today’s regulated industries demand. It lets you operate a true cloud on your own terms – local, isolated, yet fully integrated with Oracle’s broader cloud ecosystem.

C3I may not yet fit every use case. Its scale and deployment model must match customer expectations. But for highly regulated workloads, and especially for organizations planning for long-term legal and geopolitical shifts, it represents the most strategic exit vector available.

Final Thought

Cloud exit should never be a last-minute decision. In an IT landscape where laws, alliances, and risks shift quickly, exit planning is not a sign of failure. It’s a mark of maturity.

Oracle’s unique ecosystem, from Alloy to C3I, is one of the few that lets you build with that maturity from day one.

Whether you are planning a sovereign cloud, or are already deep into a regulated workload strategy, now is the time to assess exit options before they are needed. Make exit architecture part of your initial cloud blueprint.

Why OCI Dedicated Region Is the Missing Piece for Agentic Workloads

In my last blog post, I explored how OCI Dedicated Region helps enterprises retrofit AI workloads into their existing data centers. We discussed how bringing Oracle’s cloud infrastructure on-premises addresses challenges such as GPU availability, latency, and data sovereignty, thereby removing many barriers to AI adoption.

Today, I want to take this further and explore the next wave of AI evolution, agentic AI, which not only responds to prompts but also takes autonomous actions. This isn’t just about having powerful models; it’s about embedding intelligence where it counts most: right next to your critical legacy systems.

The Rise of Agentic AI and Why It’s Different

Agentic AI represents a shift from passive AI tools to systems that can observe, decide, and act independently. Imagine AI agents that don’t just answer questions but manage workflows, orchestrate cloud resources, or automate incident response. This means giving AI the ability to interact with APIs, monitor real-time data streams, and adjust systems dynamically without human intervention.

The challenge? Most organizations’ critical data and applications still live in legacy platforms or tightly controlled environments. These environments were never built with autonomous AI in mind. Simply putting agentic AI in the public cloud and hoping it will integrate smoothly is not realistic. The physical and architectural distance creates latency, security risks, and compliance headaches that slow down adoption.

Legacy Systems and the Limits of Retrofitting

In my previous article, I described how OCI Dedicated Region helps organizations retrofit their existing infrastructure to support AI workloads by providing cloud-native GPU compute and AI services on-premises. While this approach is a game changer for many pilot projects and inference jobs, agentic AI demands something more foundational.

Agentic AI needs to be deeply integrated into the operational fabric of an enterprise. It requires direct, low-latency connections to databases, enterprise resource planning systems, and mission-critical applications that govern day-to-day business. Integrating AI compute into existing traditional infrastructure is a good first step, but it frequently results in complicated networks and security setups that raise operational risks.

Beyond Retrofit – OCI Dedicated Region as a Fully-Integrated AI Platform

OCI Dedicated Region is not just an add-on for AI, it’s a cloud region deployed inside your data center, delivering the same cloud services and infrastructure as Oracle’s public cloud, but physically under your control. This means you get a fully operational cloud region with high-performance computing, GPU acceleration, storage, networking, and AI services—all seamlessly integrated and ready to connect with your existing systems.

This is a fundamental shift. Instead of adapting your legacy environment to AI, you now place a full cloud region right next to your workloads. The AI agents you deploy can access real-time data, interact with legacy applications through native APIs, and operate within your strict security and compliance boundaries.

This close proximity eliminates latency and trust issues that come with remote public cloud AI deployments. It also reduces the need for complex VPNs or data synchronization layers, making agentic AI not just possible but practical.

Why Proximity Matters for Autonomous AI

Agentic AI thrives on context and immediacy. The closer it is to the systems it manages, the better decisions it can make and the faster it can act. For instance, if an AI agent detects a fault in a manufacturing control system or a spike in financial transaction anomalies, it must respond quickly to minimize disruption.

Running these AI systems in a public cloud region thousands of miles away adds delays and potential security risks, which can be unacceptable in regulated industries or mission-critical environments. OCI Dedicated Region removes those barriers by bringing the cloud to you.

By combining cloud agility with on-premises control, you get a hybrid environment where agentic AI can operate with the speed, reliability, and security enterprises demand.

The Strategic Advantage of OCI Dedicated Region

Most organizations aren’t looking for AI experiments; they want to operationalize AI at scale and embed it within their core processes. OCI Dedicated Region provides the infrastructure foundation to do just that.

It offers enterprise-ready cloud services inside your data center, enabling agentic AI to interact naturally with legacy systems without requiring costly or risky migrations. This means AI-powered automation, orchestration, and decision-making become achievable realities instead of distant goals.

If you want to move beyond retrofitting and truly modernize your AI journey, keeping the cloud close to your data, and your data close to the cloud, is essential. OCI Dedicated Region delivers exactly that.

Retrofitting AI Workloads with OCI Dedicated Region

As AI adoption becomes a strategic priority across nearly every industry, enterprises are discovering that scaling these workloads isn’t as simple as adding more GPUs to their cloud bill. Public cloud platforms like AWS and Azure offer extensive AI infrastructure, but many organizations are now facing steep costs, unpredictable pricing models, and growing concerns about data sovereignty, compliance, long-term scalability, and operational complexity. There are also physical challenges: most enterprise data centers were never designed for the high power, cooling, and interconnect demands of AI infrastructure.

A typical GPU rack can draw between 40 and 100 kW, far beyond the 5-10 kW that traditional racks can handle. Retrofitting a legacy data center to support such density requires high-density power delivery, advanced cooling, reinforced flooring, low-latency networking, and highly parallel storage systems. The investment often ranges from $4-8M per megawatt for retrofits and up to $15M for greenfield builds. Even with this capital outlay, organizations still face integration complexity, deployment delays, and fragmented operations.

This creates a challenging question: how can enterprises gain the agility, scale, and services of the public cloud for AI, without incurring its spiraling costs or rebuilding their entire infrastructure?

Oracle Cloud Infrastructure (OCI) Dedicated Region presents a compelling answer. It delivers the full OCI public cloud experience, including GPU compute, AI services, and cloud-native tooling, within your own data center. Oracle operates and manages the region, while you maintain full control. The result: public cloud performance and capabilities, delivered on-premises, without the compromises.

The Infrastructure Challenge of AI at Scale

AI workloads are no longer experimental; they are driving real business impact. Whether it’s training foundation models, deploying LLMs, or powering advanced search capabilities, these workloads require specialized infrastructure.

Unlike traditional enterprise IT, AI workloads place massive demands on power density, cooling, networking, and storage. GPU racks housing Nvidia H100 or A100 units can exceed 100 kW. Air cooling becomes ineffective, and liquid or hybrid cooling systems become essential. High-throughput, low-latency networks like 100/400 Gbps Ethernet or InfiniBand are needed to connect compute clusters efficiently. AI workloads also rely heavily on large datasets and require high-bandwidth storage located close to compute.

In many enterprise data centers, this level of performance is simply out of reach. The facilities can’t provide the power or cooling, the racks can’t carry the weight, and the legacy networks can’t keep up.

The High Cost of Retrofitting for AI

For organizations considering bringing AI workloads back on-premises to manage costs, retrofitting is often seen as the obvious next step. But it rarely delivers the value expected.

Upgrading power infrastructure alone demands new transformers, PDUs, backup systems, and complex energy management. Cooling must shift from traditional air-based systems to liquid-based cooling loops or immersion techniques, requiring structural and spatial changes. Enterprise-grade racks are often too lightweight or densely packed for GPU servers, which can weigh over a ton each. Existing data center floors may need reinforcement.

Meanwhile, storage and networking systems must evolve to support I/O-intensive workloads. Parallel file systems, NVMe arrays, and tightly coupled fabrics are all essential, but rarely available in legacy environments. On top of that, most traditional data centers lack the cloud-native software stack needed for orchestration, security, observability, and automation.

Retrofits cost between $4M and $8M per megawatt. A greenfield build costs $11M to $15M per megawatt. These figures exclude operational overhead, integration timelines, training, and change management. For many, this is a non-starter.
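To put those ranges in perspective, here is a quick back-of-the-envelope calculation using the article's per-megawatt figures; the 2 MW sizing is an arbitrary example.

```python
# Back-of-the-envelope comparison using the article's cost ranges
# (USD per megawatt of IT load); the 2 MW sizing is an example.
RETROFIT_PER_MW = (4e6, 8e6)      # $4-8M per MW (retrofit)
GREENFIELD_PER_MW = (11e6, 15e6)  # $11-15M per MW (greenfield)

def cost_range(per_mw, megawatts):
    """Scale a (low, high) per-MW cost range to a given capacity."""
    low, high = per_mw
    return (low * megawatts, high * megawatts)

mw = 2  # e.g. twenty 100 kW GPU racks
print("retrofit:  ", cost_range(RETROFIT_PER_MW, mw))    # (8000000.0, 16000000.0)
print("greenfield:", cost_range(GREENFIELD_PER_MW, mw))  # (22000000.0, 30000000.0)
```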

OCI Dedicated Region – A True Public Cloud in Your Data Center

OCI Dedicated Region sidesteps these challenges. Oracle delivers a complete public cloud region, fully managed and operated by Oracle, inside your own facility. You get all the same infrastructure, services, and APIs as OCI’s public regions, with no loss of capability.

This includes GPU-accelerated compute (including Nvidia GPUs), AI Services (like Data Science, Generative AI, and Vector Search), high-performance block and object storage, Oracle Autonomous Database, Exadata, analytics, low-latency networking, and full DevOps toolchains.

You also benefit from service mesh, load balancing, Kubernetes (OKE), serverless, observability, and zero-trust security services. From a developer perspective, it’s the same OCI experience – tools, SDKs, Terraform modules, and management consoles all work identically.

Importantly, data locality and sovereignty remain fully under your control. You manage access policies, audit trails, physical security, and compliance workflows.

Shifting from Capital Investment to Operational Efficiency

OCI Dedicated Region transforms infrastructure investment into an operating model. Rather than pouring capital into facilities, power systems, and integration, enterprises consume cloud resources on a predictable subscription basis. This eliminates hidden costs. No GPU spot market pricing, no surprise egress fees, no peak-hour surcharges.

Deployment is significantly faster compared to building or retrofitting infrastructure. Oracle delivers the region as a turnkey service, with pre-integrated compute, storage, AI, networking, and security. This minimizes integration complexity and accelerates time to value.

Operations are also simplified. OCI Dedicated Region maintains service parity with public OCI, which means your teams don’t need to adapt to different environments for hybrid or multi-cloud strategies. Everything runs on a consistent stack, which reduces friction and operational risk.

This model is particularly well-suited to highly regulated industries that require absolute control over data and infrastructure without losing access to modern AI tools.

Built for the Future of AI

OCI Dedicated Region supports a broad range of next-generation AI architectures and operational models. It enables federated AI, edge inference, and hybrid deployment strategies, allowing enterprises to place workloads where they make the most sense, without sacrificing consistency.

For instance, organizations can run real-time inference close to data sources at the edge (for example with Oracle Compute Cloud@Customer connected to your OCI Dedicated Region), while managing training and orchestration centrally. Workloads can burst into the public cloud when needed, leveraging OCI’s public regions without migrating entire stacks. Container-based scaling through Kubernetes ensures policy-driven elasticity and workload portability.

As power and cooling demands continue to rise, most enterprise data centers will be unable to keep pace. OCI Dedicated Region is designed to absorb these demands, both technically and operationally.

Conclusion – Cloud Economics and Control Without Compromise

AI is quickly becoming a core part of enterprise infrastructure, and it’s exposing the limitations of both traditional data centers and conventional cloud models. Public cloud offers scale and agility, but often at unsustainable cost. On-prem retrofits are slow, expensive, and hard to manage.

OCI Dedicated Region offers a balanced alternative. It provides a complete cloud experience, GPU-ready and AI-optimized, within your own facility. You get the innovation, scale, and flexibility of public cloud, without losing control over data, compliance, or budget.

If your cloud bills are climbing and your infrastructure can’t keep up with the pace of AI innovation, OCI Dedicated Region is worth a serious look.

Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation 9.0 (VCF) introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future, with vCD retired in a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.
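To make the contrast between infrastructure-led and consumption-led models concrete, here is a minimal, purely illustrative Python sketch of the policy-as-code idea: the consumer declares the desired network state, and a control plane validates and reconciles it, instead of anyone clicking through provisioning steps. All names here (`VpcSpec`, `SubnetSpec`, `validate_spec`) are hypothetical and not tied to any VCF, Terraform, or hyperscaler API.

```python
from dataclasses import dataclass, field

@dataclass
class SubnetSpec:
    name: str
    cidr: str
    public: bool = False

@dataclass
class VpcSpec:
    # Declarative intent: what the network should look like,
    # not the steps required to build it.
    name: str
    cidr: str
    subnets: list = field(default_factory=list)
    nat_enabled: bool = True

def validate_spec(vpc: VpcSpec) -> list:
    """Return a list of validation errors (empty if the spec is consumable)."""
    errors = []
    if not vpc.subnets:
        errors.append("a VPC needs at least one subnet")
    names = [s.name for s in vpc.subnets]
    if len(names) != len(set(names)):
        errors.append("subnet names must be unique")
    return errors

spec = VpcSpec(
    name="team-a",
    cidr="10.0.0.0/16",
    subnets=[
        SubnetSpec("web", "10.0.1.0/24", public=True),
        SubnetSpec("db", "10.0.2.0/24"),
    ],
)
print(validate_spec(spec))  # -> []
```

The point of the sketch is the interface, not the implementation: in a cloud service model, this spec is the whole interaction, and capacity planning, patching, and lifecycle work happen behind it.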

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap five years ago.

And even now, many of the services (add-ons) in VCF 9.0, like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components, mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

Why Switzerland Needs a Different Kind of Sovereign Cloud

Switzerland doesn’t follow. It observes, evaluates, and decides on its own terms. In tech, in policy, and especially in how it protects its data. That’s why the typical EU sovereign cloud model won’t work here. It solves a different problem, for a different kind of political union.

But what if we could go further? What if the right partner, one that understands vertical integration, local control, and legal separation, could build something actually sovereign?

That partner might just be Oracle.

Everyone is talking about the EU’s digital sovereignty push, and Oracle has responded with a serious answer: the EU Sovereign Cloud, which celebrated its second anniversary a few weeks ago. It’s a legally ring-fenced, EU-operated, independently staffed infrastructure platform. Built for sovereignty, not just compliance.

That’s the right instinct. But Switzerland is not the EU. And sovereignty here means more than “EU-only.” It means operations bound by Swiss law, infrastructure operated on Swiss soil, and decisions made by Swiss entities.

Oracle Alloy and OCI Dedicated Region – Sovereignty by Design

Oracle’s OCI Dedicated Region and the newer Alloy model were designed with decentralization in mind. Unlike traditional hyperscaler zones, these models bring the entire control plane on-premises, not just the data.

That allows for policy enforcement, tenant isolation, and lifecycle management to happen within the customer’s boundaries, without default exposure to centralized cloud control towers. In short, the foundation for digital sovereignty is already there.

But Switzerland, especially the public sector, expects more.

What Still Needs to Be Solved for Switzerland

Switzerland doesn’t just care about where data sits. It cares about who holds the keys, who manages the lifecycle, and under which jurisdiction they operate.

While OCI Dedicated Region and Alloy keep the control plane local, certain essential services, such as telemetry, patch delivery, and upgrade mechanisms, still depend on Oracle’s global backbone. In the Swiss context, even a low-level dependency can raise concerns about jurisdictional risk, including exposure to laws like the U.S. CLOUD Act.

Support must remain within Swiss borders. Sovereign regions that rely on non-Swiss teams or legal entities to resolve incidents still carry legal and operational exposure, even if the telemetry involved can be anonymized. Sovereignty includes not only local infrastructure, but also patch transparency, cryptographic root trust, and full legal separation from foreign jurisdictions.

And yes, operational teams must be Swiss-based, with exceptions at most at the tier 2 or tier 3 support level.

Avaloq Is Already Leading the Way

This isn’t just theory. Switzerland already has a working example: Avaloq, the Swiss financial technology provider, is running core workloads on OCI Dedicated Region.

These are not edge apps or sandbox environments. Avaloq supports mission-critical platforms for regulated financial institutions. If they trust Oracle’s architecture with that responsibility, the model is clearly feasible from a sovereignty, security, and compliance perspective.

Avaloq’s deployment shows that Swiss-regulated workloads can run securely, locally, and independently. And if one of Switzerland’s most finance-sensitive firms went down this path, others across government, healthcare, and infrastructure should be paying attention.

Sovereignty doesn’t mean reinventing everything. It means learning from those already building it.

The Bottom Line

Switzerland doesn’t need more cloud. It needs a cloud built for Swiss values: neutrality, autonomy, and legal independence.

Oracle is closer to that model than most. Its architecture is already designed for local control. Its EU Sovereign Cloud shows it understands the legal and operational dimensions of sovereignty. And with Avaloq already in production on OCI Dedicated Region, the proof is there.

The technology is ready. The reference customer is live.

What comes next is a question of commitment.