What If Cloud Was Never the Destination But Just One Chapter In A Longer Journey

For more than a decade, IT strategies were shaped by a powerful promise that the public cloud was the final destination. Enterprises were told that everything would eventually run there, that the data center would become obsolete, and that the only rational strategy was “cloud-first”. For a time, this narrative worked. It created clarity in a complex world and provided decision-makers with a guiding principle.

Hyperscalers accelerated digital transformation in ways no one else could have. Without their scale and speed, the last decade of IT modernization would have looked very different. But what worked as a catalyst does not automatically define the long-term architecture.

But what if that narrative was never entirely true? What if the cloud was not the destination at all, but only a chapter, a critical accelerator in the broader evolution of enterprise infrastructure? The growing evidence suggests exactly that. Today, we are seeing the limits of mono-cloud thinking and the emergence of something new: a shift towards adaptive platforms that prioritize autonomy over location.

The Rise and Fall of Mono-Cloud Thinking

The first wave of cloud adoption was almost euphoric. Moving everything into a single public cloud seemed not just efficient but inevitable. Infrastructure management became simpler, procurement cycles shorter, and time-to-market faster. For CIOs under pressure to modernize, the benefits were immediate and tangible.

Yet over time, the cost savings that once justified the shift started to erode. What initially looked like operational efficiency transformed into long-term operating expenses that grew relentlessly with scale. Data gravity added another layer of friction. While applications were easy to deploy, the vast datasets they relied on were not as mobile. And then came the growing emphasis on sovereignty and compliance. Governments and regulators, as well as citizens and journalists, started asking difficult questions about who ultimately controlled the data and under what jurisdiction.

These realities did not erase the value of the public cloud, but they reframed it. Mono-cloud strategies, while powerful in their early days, increasingly appeared too rigid, too costly, and too dependent on external factors beyond the control of the enterprise.

Multi-Cloud as a Halfway Step

In response, many organizations turned to multi-cloud. If one provider created lock-in, why not distribute workloads across two or three? The reasoning was logical. Diversify risk, improve resilience, and gain leverage in vendor negotiations.

But as the theory met reality, the complexity of multi-cloud began to outweigh its promises. Each cloud provider came with its own set of tools, APIs, and management layers, creating operational fragmentation rather than simplification. Policies around security and compliance became harder to enforce consistently. And the cost of expertise rose dramatically, as teams were suddenly required to master multiple environments instead of one.

Multi-cloud, in practice, became less of a strategy and more of a compromise. It revealed the desire for autonomy, but without providing the mechanisms to truly achieve it. What emerged was not freedom, but another form of dependency. This time, on the ability of teams to stitch together disparate environments at great cost and complexity.

The Adaptive Platform Hypothesis

If mono-cloud was too rigid and multi-cloud too fragmented, then what comes next? The hypothesis that is now emerging is that the future will be defined not by a place – cloud, on-premises, or edge – but by the adaptability of the platform that connects them.

Adaptive platforms are designed to eliminate friction, allowing workloads to move freely when circumstances change. They bring compute to the data rather than forcing data to move to compute, which becomes especially critical in the age of AI. They make sovereignty and compliance part of the design rather than an afterthought, ensuring that regulatory shifts do not force expensive architectural overhauls. And most importantly, they allow enterprises to retain operational autonomy even as vendors merge, licensing models change, or new technologies emerge.

This idea reframes the conversation entirely. Instead of asking where workloads should run, the more relevant question becomes how quickly and easily they can be moved, scaled, and adapted. Autonomy, not location, becomes the decisive metric of success.

Autonomy as the New Metric?

The story of the cloud is not over, but the chapter of cloud as a final destination is closing. The public cloud was never the endpoint, yet it was a powerful catalyst that changed how we think about IT consumption. The next stage is already being written, and it is less about destinations than about options.

Certain workloads will always thrive in a hyperscale cloud – think collaboration tools, globally distributed apps, or burst capacity. Others, especially those tied to sovereignty, compliance, or AI data proximity, demand a different approach. Adaptive platforms are emerging to fill that gap.

Enterprises that build for autonomy will be better positioned to navigate an unpredictable future. They will be able to shift workloads without fear of vendor lock-in, place AI infrastructure close to where data resides, and comply with sovereignty requirements without slowing down innovation.

The emerging truth is simple: Cloud was never the destination. It was only one chapter in a much longer journey. The next chapter belongs to adaptive platforms and to organizations bold enough to design for freedom rather than dependency.

Stop Writing About VMware vs. Nutanix

Over the last few months I have noticed something “interesting”. My LinkedIn feed and Google searches are full of posts and blogs that try to compare VMware and Nutanix. Most of them follow the same pattern. They take the obvious features, line them up in two columns, and declare a “winner”. Some even let AI write these comparisons without a single line of lived experience behind them.

The problem? This type of content has no real value for anyone who has actually run these platforms in production. It reduces years of engineering effort, architectural depth, and customer-specific context into a shallow bullet list. Worse, it creates the illusion that such a side-by-side comparison could ever answer the strategic question of “what should I run my business on?”.

The Wrong Question

VMware vs. Nutanix is the wrong question to ask. Both vendors have their advantages, both have strong technology stacks, and both have long histories in enterprise IT. But if you are an IT leader in 2025, your real challenge is not to pick between two virtualization platforms. Your challenge is to define what your infrastructure should enable in the next decade.

Do you need more sovereignty and independence from hyperscalers? Do you need a platform that scales horizontally across the edge, data center, and public cloud with a consistent operating model? Do you need to keep costs predictable and avoid the complexity tax that often comes with layered products and licensing schemes?

Those are the real questions. None of them can be answered by a generic VMware vs. Nutanix LinkedIn post.

The Context Matters

A defense organization in Europe has different requirements than a SaaS startup in Silicon Valley. A government ministry evaluates sovereignty, compliance, and vendor control differently than a commercial bank that cares most about performance and transaction throughput.

The context (regulatory, organizational, and strategic) always matters more than product comparison charts. If someone claims otherwise, they probably have not spent enough time in the field, working with CIOs and architects who wrestle with these issues every day. Yes, (some) features are important and sometimes make the difference, but the big feature war days are over.

It’s About the Partner, Not Just the Platform

At the end of the day, the platform is only one piece of the puzzle. The bigger question is: who do you want as your partner for the next decade?

Technology shifts, products evolve, and roadmaps change. What remains constant is the relationship you build with the vendor or partner behind the platform. Can you trust them to execute your strategy with you? Can you rely on them when things go wrong? Do they share your vision for sovereignty, resilience, and simplicity or are they simply pushing their own agenda?

The answer to these questions matters far more than whether VMware or Nutanix has the upper hand in a feature battle.

A Better Conversation

Instead of writing another VMware vs. Nutanix blog, we should start a different conversation. One that focuses on operating models, trust, innovation, ecosystem integration, and how future-proof your platform is.

Nutanix, VMware, Red Hat, the hyperscalers – all of them are building infrastructure and cloud stacks. The differentiator is not whether vendor A has a slightly faster vMotion or vendor B has one more checkbox in the feature matrix. The differentiator is how these platforms align with your strategy, your people, and your risk appetite, and whether you believe the partner behind it is one you can depend on.

Why This Matters Now

The market is in motion. VMware customers are forced to reconsider their roadmap due to the Broadcom acquisition and the associated licensing changes. Nutanix is positioning itself as a sovereign alternative with strong hybrid cloud credentials. Hyperscalers are pushing local zones and sovereign cloud initiatives.

In such a market, chasing simplistic comparisons is a waste of time. Enterprises should focus on long-term alignment with their cloud and data strategy. They should invest in platforms and partners that give them control, choice, and agility.

Final Thought

So let’s stop writing useless VMware vs. Nutanix comparisons. They don’t help anyone who actually has to make decisions at scale. Let’s raise the bar and bring back thought leadership to this industry. Share real experiences. Talk about strategy and outcomes. Show where platforms fit into the bigger picture of sovereignty, resilience, and execution. And most importantly: choose the partner you can trust to walk this path with you.

That is the conversation worth having. Everything else is just noise and bullshit.

Finally, People Start Realizing Sovereignty Is a Spectrum

For years, the discussion about cloud and digital sovereignty has been dominated by absolutes. It was framed as a black-and-white choice. Either you are sovereign, or you are not. Either you trust hyperscalers, or you don’t. Either you build everything yourself, or you hand it all over. But over the past two years, organizations, governments, and even the vendors themselves have started to recognize that this way of thinking doesn’t reflect reality. Sovereignty is now seen as a spectrum.

When I look at the latest Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure (DHI), this shift becomes even more visible. In the Leaders quadrant, we find AWS, Microsoft, Oracle, Broadcom (VMware), and Nutanix. Each of them is positioned differently, but they all share one thing in common: they now operate somewhere along this sovereignty spectrum. None of them is “fully” sovereign, and none is entirely dependent. The truth lies in between, and it is about how much control you want to retain versus how much you are willing to outsource. But it’s also possible to have multiple vendors and solutions co-existing.

[Figure: Gartner Magic Quadrant for Distributed Hybrid Infrastructure (DHI), 2025]

The Bandwidth of Sovereignty

To make this shift more tangible, think of sovereignty as a bandwidth rather than a single point. On the far left, you give up almost all control and rely fully on global hyperscalers, following their rules, jurisdictions, and technical standards. On the far right, you own and operate everything in your data center, with full control but also full responsibility. Most organizations today are somewhere in between (using a mix of different vendors and clouds).

This bandwidth allows us to rate the leaders in the MQ not as sovereign or non-sovereign, but according to where they sit on the spectrum:

  • AWS stretches furthest toward global reach and scalability. They are still in the process of building a sovereign cloud, and until that becomes reality, none of their extensions (Outposts, Wavelength, Local Zones) can truly be seen as sovereign (please correct me if I am wrong). Their new Dedicated Local Zones bring infrastructure closer, but AWS continues to run the show, meaning sovereignty is framed through compliance, not operational autonomy.

  • Microsoft sits closer to the middle. With Microsoft’s Sovereign Cloud initiatives in Europe, they acknowledge the political and regulatory reality. Customers gain some control over data residency and compliance, but the operational steering remains with Microsoft (except for their “Sovereign Private Cloud” offering, which consists of Azure Local + Microsoft 365 Local).

  • Oracle has its EU Sovereign Cloud, which is already available today, and offerings like OCI Dedicated Region and Alloy push sovereignty closer to customers. Still, these don’t offer operational autonomy, as Oracle continues to manage much of the infrastructure. For full isolation, Oracle provides Oracle Cloud Isolated Region and the smaller Oracle Compute Cloud@Customer Isolated (C3I). These are unique in the hyperscaler landscape and move Oracle further to the right.

  • Broadcom (VMware) operates in a different zone of the spectrum. With VMware’s Cloud Foundation stack, customers can indeed build sovereign clouds with operational autonomy in their own data centers. This puts them further right than most hyperscalers. But Gartner and recent market realities also show that dependency risks are not exclusive to AWS or Azure. VMware customers face uncertainty tied to Broadcom’s licensing models and strategic direction, which balances out their autonomy.

  • Google does not appear in the Leaders quadrant yet, but their Google Distributed Cloud (GDC) deserves mention. Gartner highlights how GDC is strategically advancing, winning sovereign cloud projects with governments and partners, and embedding AI capabilities on-premises. Their trajectory is promising, even if their current market standing hasn’t brought them into the top right yet.

  • Nutanix stands out by offering a comprehensive single product – the Nutanix Cloud Platform (NCP). Gartner underlines that NCP is particularly suited for sovereign workloads, hybrid infrastructure management, and edge multi-cloud deployments. Unlike most hyperscalers, Nutanix delivers one unified stack, including its “own hypervisor as a credible ESXi alternative”. That makes it possible to run a fully sovereign private cloud with operational autonomy, without sacrificing cloud-like agility and elasticity.

Why the Spectrum Matters

This sovereignty spectrum changes how CIOs and policymakers make decisions. Instead of asking “Am I sovereign or not?”, the real question becomes:

How far along the spectrum do I want to be and how much am I willing to compromise for flexibility, cost, or innovation?

It is no longer about right or wrong. Choosing AWS does not make you naive. Choosing Nutanix does not make you paranoid. Choosing Oracle does not make you old-fashioned. Choosing Microsoft doesn’t make you a criminal. Each decision reflects an organization’s position along the bandwidth, balancing risk, trust, cost, and control.

Where We Go From Here

The shift to this spectrum-based view has major consequences. Vendors will increasingly market not only their technology but also their place on the sovereignty bandwidth. Governments will stop asking for absolute sovereignty and instead demand clarity about where along the spectrum a solution sits. And organizations will begin to treat sovereignty not as a one-time decision but as a dynamic posture that can move left or right over time, depending on regulation, innovation, and geopolitical context.

The Gartner MQ shows that the leaders are already converging around this reality. The differentiation now lies in how transparent they are about it and how much choice they give their customers to slide along the spectrum. Sovereignty, in the end, is not a fixed state. It is a journey.

Swiss Government Cloud – Possible Paths, Components and Vendors Compared

Disclaimer: This article reflects my personal opinion and not that of my employer.

The discussion around a Swiss Government Cloud (SGC) has gained significant momentum in recent months. Digital sovereignty has become a political and technological necessity with immediate relevance. Government, politics, and industry are increasingly asking how a national cloud infrastructure could look. This debate is not just about technology, but also about governance, funding, and the ability to innovate.

Today’s starting point is relatively homogeneous. A significant portion of Switzerland’s public sector IT infrastructure still relies on VMware. This setup is stable but not designed for the future. When we talk about the Swiss Government Cloud, the real question is how this landscape evolves from pragmatic use of the public cloud to the creation of a fully sovereign cloud, operated under Swiss control.

For clarity: I use the word “component” (Stufe), in line with the Federal Council’s report, to describe the different maturity levels and deployment models.

A Note on the Federal Council’s Definition of Digital Sovereignty

According to the official report, digital sovereignty includes:

  • Data and information sovereignty: Full control over how data is collected, stored, processed, and shared.

  • Operational autonomy: The ability of the Federal Office of Information Technology (FOITT) to run systems with its own staff for extended periods, without external partners.

  • Jurisdiction and governance: Ensuring that Swiss law, not foreign regulation, defines control and access.

This definition is crucial when assessing whether a cloud model is “sovereign enough.”

Component 1 – Public Clouds Standard

The first component describes the use of the public cloud in its standard form. Data can be stored anywhere in the world, but not necessarily in Switzerland. Hyperscalers such as AWS, Microsoft Azure, or Oracle Cloud Infrastructure (OCI) offer virtually unlimited scalability, the highest pace of innovation, and a vast portfolio of services.

Amazon Web Services (AWS) is the broadest platform by far, providing access to an almost endless variety of services and already powering workloads from MeteoSwiss.

Microsoft Azure integrates deeply with Microsoft environments, making it especially attractive for administrations that are already heavily invested in Microsoft 365 and MS Teams. This ecosystem makes Azure a natural extension for many public sector IT landscapes.

Oracle Cloud Infrastructure (OCI), on the other hand, emphasizes efficiency and infrastructure, particularly for databases, and is often more transparent and predictable in terms of cost.

However, component 1 must be seen as an exception. For sovereign and compliant workloads, there is little reason to rely on a global public cloud without local residency or control. This component should not play a meaningful role, except in cases of experimental or non-critical workloads.

And if there really is a need for workloads to run outside of Switzerland, I would recommend using alternatives like the Oracle EU Sovereign Cloud instead of a regular public region, as it offers stricter compliance, governance, and isolation for European customers.

Component 2a – Public Clouds+

Here, hyperscalers offer public clouds with Swiss data residency, combined with additional technical restrictions to improve sovereignty and governance.

AWS, Azure, and Oracle already operate cloud regions in Switzerland today. They promise that data will not leave the country, often combined with features such as tenant isolation or extra encryption layers.

The advantage is clear. Administrations can leverage hyperscaler innovation while meeting legal requirements for data residency.

Component 2b – Public Cloud Stretched Into FOITT Data Centers

This component goes a step further. Here, the public cloud is stretched directly into the local data centers. In practice, this means solutions like AWS Outposts, Azure Local (formerly Azure Stack), Oracle Exadata & Compute Cloud@Customer (C3), or Google Distributed Cloud deploy their infrastructure physically inside FOITT facilities.

[Diagram: an AWS Outpost deployed in a customer data center, connected back to its anchor Availability Zone and parent Region]

This creates a hybrid cloud. APIs and management remain identical to the public cloud, while data is hosted directly by the Swiss government. For critical systems that need cloud benefits but under strict sovereignty (which differs between solutions), this is an attractive compromise.

The vendors differ significantly in the breadth of services they offer, their operational models, and the degree of control they hand over.

Costs and operations are predictable since most of these models are subscription-based.

Component 3 – Federally Owned Private Cloud Standard

Not every path to the Swiss Government Cloud requires a leap into hyperscalers. In many cases, continuing with VMware (Broadcom) represents the least disruptive option, providing the fastest and lowest-risk migration route. It lets institutions build on existing infrastructure and leverage known tools and expertise.

Still, the cloud dependency challenge isn’t exclusive to hyperscalers. Relying solely on VMware also creates a dependency. Whether with VMware or a hyperscaler, dependency remains dependency and requires careful planning.

Another option is Nutanix, which offers a modern hybrid multi-cloud solution with its AHV hypervisor. However, migrating from VMware to AHV is in practice as complex as moving to another public cloud. You need to replatform, retrain staff, and become experts again.

[Diagram: hybrid cloud management]

Both VMware and Nutanix offer valid hybrid strategies:

  • VMware: Minimal disruption, continuity, familiarity, low migration risk. Can run VMware Cloud Foundation (VCF) on AWS, Azure, Google Cloud and OCI.

  • Nutanix: Flexibility and hybrid multi-cloud potential, but migration is similar to changing cloud providers.

Crucially, changing platforms or introducing a new cloud is harder than it looks. Organizations are built around years of tailored tooling, integrations, automation, security frameworks, governance policies, and operational familiarity. Adding or replacing another cloud often multiplies complexity rather than reduces it.

[Figure: Gartner Magic Quadrant for Distributed Hybrid Infrastructure]

Source: https://www.oracle.com/uk/cloud/distributed-cloud/gartner-leadership-report/ 

But sometimes it is just about costs, new requirements, or new partnerships that result in a big platform change. Let’s see how the Leaders quadrant of the Magic Quadrant for “Distributed Hybrid Infrastructure” changes in the coming weeks and months.

Why Not Host A “Private Public Cloud” To Combine Components 2 and 3?

This possibility (let’s call it component 3.5) goes beyond the traditional framework of the Federal Council. In my view, a “private public cloud” represents the convergence of public cloud innovation with private cloud sovereignty.

“The hardware needed to build on-premises solutions should therefore be obtained through a ‘pay as you use’ model rather than purchased. A strong increase in data volumes is expected across all components.”

Source: https://www.fedlex.admin.ch/eli/fga/2024/1408/de#lvl_2/lvl_2.4 (translated from German)

Solutions like OCI Dedicated Region or Oracle Alloy can deliver the entire Oracle Cloud stack – IaaS, PaaS, databases, analytics – on Swiss soil. Unlike AWS Outposts or Azure Local, these solutions do not come with a limited set of cloud services; they are identical to a full public cloud region with all cloud services, only hosted and operated locally in a customer’s data center.

[Figure: OCI Dedicated Region overview]

Additionally, Oracle offers a compelling hybrid path. With OCVS (Oracle Cloud VMware Solution) on OCI Dedicated Region or Alloy, organizations can lift-and-shift VMware workloads unchanged to this local public cloud based on Oracle Cloud Infrastructure, then gradually modernize selected applications using Oracle’s PaaS services like databases, analytics, or AI.

[Figure: OCI Dedicated Region and OCVS]

This “lift-and-modernize” journey helps balance risk and innovation while ensuring continuous operational stability.

And this makes component 3.5 unique. It provides the same public-cloud experience (continuous updates and innovation pace: 300-500 changes per day, 10’000-15’000 per month) but under sovereign control. With Alloy, the FOITT could even act as a Cloud Service Provider for cantons, municipalities, and hospitals, creating a federated model without each canton building its own cloud.

Real-world Swiss example: Avaloq was the first customer in Switzerland to deploy an OCI Dedicated Region, enabling it to migrate clients and offer cloud services to other banks in Switzerland. This shows how a private public cloud can serve highly regulated industries while keeping data and operations in-country.

In this model:

  • Component 2b becomes redundant: Why stretch a public cloud into FOITT data centers if a full cloud can already run there?

  • Component 2 can also become partly obsolete: At least for workloads running on a local OCI Dedicated Region (or Alloy) deployment, there’s no need for parallel deployments in public regions, since everything can run sovereignly on a dedicated OCI region in Switzerland (while workloads from MeteoSwiss on AWS remain in component 2).

A private public cloud thus merges sovereignty with innovation while reducing the need for parallel cloud deployments.

Another Possibility – Combine Capabilities From A Private Public Cloud With Component 2b

Another possibility could be to combine elements of components 3.5 and 2b into a federated architecture: Using OCI Dedicated Region or Oracle Alloy as the central control plane in Switzerland, while extending the ecosystem with Oracle Compute Cloud@Customer (C3) for satellite or edge use cases.

[Figure: Oracle Compute Cloud@Customer – key features]

This creates a hub-and-spoke model:

  • The Dedicated Region or Alloy in Switzerland acts as the sovereign hub for governance and innovation.

  • C3 appliances extend cloud services into distributed or remote locations, tightly integrated but optimized for local autonomy.

A practical example would be Swiss embassies around the world. Each embassy could host a lightweight C3 edge environment, connected securely back to the central sovereign infrastructure in Switzerland. This ensures local applications run reliably, even with intermittent connectivity, while remaining part of the overall sovereign cloud ecosystem.

By combining these capabilities, the FOITT could:

  • Use component 2b’s distributed deployment model (cloud stretched into local facilities)

  • Leverage component 3.5’s VMware continuity path with OCVS for easy migrations

  • Rely on component 3.5’s sovereignty and innovation model (private public cloud with full cloud parity)

This blended approach would allow Switzerland to centralize sovereignty while extending it globally to wherever Swiss institutions operate.

What If A Private Public Cloud Isn’t Sovereign Enough?

It is possible that policymakers or regulators could conclude that solutions such as OCI Dedicated Region or Oracle Alloy are not “sovereign enough”, given that their operational model still involves vendor-managed updates and tier-2/3 support from Oracle.

In such a scenario, the FOITT would have fallback options:

Maintain a reduced local VMware footprint: The FOITT could continue to run a smaller, fully local VMware infrastructure that is managed entirely by Swiss staff. This would not deliver the breadth of services or pace of innovation of a hyperscale-based sovereign model, but it would provide the maximum possible operational autonomy and align closely with Switzerland’s definition of sovereignty.

Leverage component 4 from the FDJP: Using the existing “Secure Private Cloud” (SPC) from the Federal Department of Justice and Police (FDJP). But I am not sure if the FDJP wants to become a cloud service provider for other departments.

That said, these fallback strategies don’t necessarily exclude the use of OCI. Dedicated Region or Alloy could co-exist with a smaller VMware footprint, giving the FOITT both innovation and control. Alternatively, Oracle could adapt its operating model to meet Swiss sovereignty requirements – for example, by transferring more operational responsibility (tier-2 support) to certified Swiss staff.

This highlights the core dilemma: The closer Switzerland moves to pure operational sovereignty, the more it risks losing access to the innovation and agility that modern cloud architectures bring. Conversely, models like Dedicated Region or Alloy can deliver a full public cloud experience on Swiss soil, but they require acceptance that some operational layers remain tied to the vendor.

Ultimately, the Swiss Government Cloud must strike a balance between autonomy and innovation, and the decision about what “sovereign enough” means will shape the entire strategy.

Conclusion

The Swiss Government Cloud will not be a matter of choosing one single path. A hybrid approach is realistic: Public cloud for agility and non-critical workloads, stretched models for sensitive workloads, VMware or Nutanix for hybrid continuity, sovereign or “private public cloud” infrastructures for maximum control, and federated extensions for edge cases.

But it is important to be clear: Hybrid cloud or even hybrid multi-cloud does not automatically mean sovereign. What it really means is that a sovereign cloud must co-exist with other clouds – public or private.

To make this work, Switzerland (and each organization within it) must clearly define what sovereignty means in practice. Is it about data and information sovereignty? Is it really about operational autonomy? Or about jurisdiction and governance?

Only with such a definition can a sovereign cloud strategy deliver on its promise, especially when it comes to on-premises infrastructure, where control, operations, and responsibilities need to be crystal clear.

PS: Of course, the SGC is about much more than infrastructure. The Federal Council’s plan also touches networking, cybersecurity, automation and operations, commercial models, as well as training, consulting, and governance. In this article, I deliberately zoomed in on the infrastructure side, because it’s already big, complex, and critical enough to deserve its own discussion. And just imagine sitting on the other side, having to go through all the answers of the upcoming tender. Not an easy task.

From Castles to Credentials – Why Identities Are the New Perimeter

The security world has outgrown its castle. For decades, enterprise networks operated on the principle of implicit trust: if a device or user could connect from inside the perimeter, they were granted access. Firewalls and VPNs acted as moats and drawbridges, controlling what entered the fortress. But the rise of clouds, remote work, and APIs has broken down those walls by replacing physical boundaries with something far more fluid: identity.

This shift has led to the emergence of Zero Trust Architecture (ZTA), which flips the traditional model. Instead of trusting users based on their location or device, we now assume that no actor should be trusted by default, whether inside or outside the network. Every access request must be verified, every time, using contextual signals like identity, posture, behavior, and intent.

But “Zero Trust” isn’t just a philosophical change; it’s about practical design as well. Many organizations start their Zero Trust journey by microsegmenting networks or rolling out identity-aware proxies. That’s a step in the right direction, but a true transformation goes deeper. It redefines identity as the central pillar of security architecture. Not just a gatekeeper, but the control plane through which access decisions are made, enforced, and monitored.

The Inherent Weakness of Place-Based Trust

The traditional security model depends on a dangerous assumption: if you are inside the network, you are trustworthy. That might have worked when workforces were centralized and systems were isolated. With hybrid work, multi-cloud adoption, and third-party integrations, physical locations mean very little nowadays.

Attackers know this. Once a single user account is compromised via phishing, credential stuffing, or social engineering, it can be used to move laterally across the environment, exploiting flat networks and overprovisioned access. The rise of ransomware, supply chain attacks, and insider threats all originate from this misplaced trust in location.

This is where identity-based security becomes essential. Instead of relying on IP addresses or subnet ranges, access policies are tied to who or what is making the request and under what conditions. For example, a user might only get access if their device is healthy, they are connecting from a trusted region, and they pass MFA.
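
To make that concrete, here is a minimal sketch of such a contextual access check in Python. The signal names (device health, region, MFA) and the trusted-region list are illustrative assumptions, not the API of any specific IAM product.

```python
from dataclasses import dataclass

TRUSTED_REGIONS = {"CH", "DE", "AT"}  # illustrative allowlist, not a real policy

@dataclass
class AccessRequest:
    user: str
    device_healthy: bool   # posture signal, e.g. reported by an MDM/EDR agent
    region: str            # derived from the connection context
    mfa_passed: bool       # strong-authentication signal

def is_access_allowed(req: AccessRequest) -> bool:
    """Grant access only if ALL contextual signals check out (deny by default)."""
    return (
        req.device_healthy
        and req.region in TRUSTED_REGIONS
        and req.mfa_passed
    )

# A healthy, MFA-verified device connecting from a trusted region gets in;
# the same user from an untrusted region does not.
print(is_access_allowed(AccessRequest("alice", True, "CH", True)))  # True
print(is_access_allowed(AccessRequest("alice", True, "US", True)))  # False
```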

By decoupling access decisions from the network and basing them on identity context, organizations can stop granting more access than necessary and prevent compromised actors from gaining a foothold.

Identities Take Center Stage

Identities are multiplying rapidly: not just users, but also workloads, devices, APIs, and service accounts. This explosion of non-human identities creates a massive attack surface. Yet, in many organizations, these identities are poorly managed, barely monitored, and rarely governed.

Identity-Centric Zero Trust changes that. It places identity at the center of every access flow, ensuring that each identity, human or machine, is:

  • Properly authenticated

  • Authorized for just what it needs

  • Continuously monitored for unusual behavior

Example: A CI/CD pipeline deploys an app into production. With traditional models, that pipeline might have persistent credentials with broad permissions. In an identity-centric model, the deployment service authenticates via workload identity, receives just-in-time credentials, and is granted only the permissions needed for that task.
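
A rough sketch of that flow, assuming a hypothetical issue_jit_credentials broker standing in for a real workload-identity service (for example a SPIFFE-based issuer or a cloud provider’s token exchange); the identity string, task names, and scopes are made up for illustration.

```python
import time
from dataclasses import dataclass

@dataclass
class Credential:
    subject: str
    scopes: tuple       # least-privilege permissions for this one task
    expires_at: float   # short-lived: credentials expire instead of being stored

def issue_jit_credentials(workload_identity: str, task: str, ttl_seconds: int = 300) -> Credential:
    """Hypothetical broker: exchange a verified workload identity for
    short-lived, task-scoped credentials (no persistent secrets)."""
    scopes_by_task = {
        "deploy-app": ("artifact:read", "cluster:deploy"),  # only what deployment needs
    }
    return Credential(
        subject=workload_identity,
        scopes=scopes_by_task[task],
        expires_at=time.time() + ttl_seconds,
    )

# The pipeline proves who it is (its workload identity), then receives
# five-minute credentials scoped to the deployment task only.
cred = issue_jit_credentials("spiffe://ci/pipeline/prod-deploy", "deploy-app")
print(cred.scopes, "valid for", round(cred.expires_at - time.time()), "seconds")
```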

This model reduces privilege sprawl, limits the blast radius of compromised credentials, and provides clear visibility and accountability. It’s about embedding least privilege, lifecycle management, and continuous validation into the DNA of how access is handled.

Routing With Intent

Zero Trust doesn’t mean the network no longer matters; it means the network must evolve. Today’s networks need to understand and enforce identity, just like the access layer.

A good example of this is Oracle Cloud Infrastructure’s Zero Trust Packet Routing (ZPR). With ZPR, packets are only routed if the source and destination identities are explicitly authorized to communicate. It’s not just about firewall rules or ACLs but also about intent-based networking, where identity and policy guide the flow of traffic. A backend service won’t even see packets from an unauthorized frontend. Routing decisions happen only after both parties are authenticated and authorized.
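
Conceptually, the routing layer performs a pairwise identity authorization before forwarding anything. The Python sketch below illustrates that idea in a generic way; it is not ZPR’s actual policy language or implementation.

```python
# Conceptual model only: identity-based packet admission, not OCI ZPR syntax.
ALLOWED_FLOWS = {
    ("app:frontend", "app:backend"),  # the frontend may talk to the backend
    ("app:backend", "db:orders"),     # the backend may talk to the orders DB
}

def route_packet(source_identity: str, destination_identity: str, packet: bytes) -> bool:
    """Forward the packet only if this (source, destination) identity pair
    is explicitly authorized; everything else is dropped by default."""
    if (source_identity, destination_identity) in ALLOWED_FLOWS:
        return True   # deliver
    return False      # drop: the destination never even sees the traffic

print(route_packet("app:frontend", "app:backend", b"GET /orders"))  # True
print(route_packet("app:frontend", "db:orders", b"SELECT *"))       # False
```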

This is part of a bigger trend. Across the industry, cloud providers and SDN platforms are starting to embed identity metadata into network-level decisions, and routing and access enforcement are being infused with contextual awareness and identity-driven policies.

For architects and security teams, this opens new possibilities for building secure-by-design cloud networks, where you can enforce who talks to what, when, and under what conditions, down to the packet level.

Identity as the Control Plane of Modern Security

If Zero Trust has taught us anything, it’s that identity is the new perimeter and that it’s the control plane for the entire security architecture.

When identity becomes the central decision point, everything changes:

  • Network segmentation is enforced via identity-aware rules

  • Application access is governed by contextual IAM policies

  • Monitoring and detection pivot around behavioral baselines tied to identity

  • Automation and response are triggered by anomalies in identity behavior

This model allows for granular, adaptive, and scalable control, without relying on fixed infrastructure or fragile perimeters. It also provides a better experience for users: access becomes more seamless when trust is built dynamically based on real-time signals, rather than static rules.

Importantly, this approach doesn’t require a big bang overhaul. Organizations can start small by maturing IAM hygiene, implementing least privilege, or onboarding apps into SSO and MFA, and build toward more advanced use cases like workload identity, CIEM (Cloud Infrastructure Entitlement Management), and ITDR (Identity Threat Detection and Response).

Concluding Thoughts

The world has changed, and we need a security model that reflects that reality. Perimeters no longer define trust. Location is no longer a proxy for legitimacy. And static controls are no match for dynamic threats – it’s like using static IPs when working with Kubernetes and containers.

Identity-Centric Zero Trust offers a modern foundation and a strategy. One that weaves together people, processes, and technologies to ensure that every access decision is intentional, contextual, and revocable.

Whether you are modernizing a legacy environment or building greenfield in the cloud, start by asking the right question.

Not “where is this request coming from?” but “who is making the request, and should they be allowed?”.

The Cloud Isn’t Eating Everything. And That’s a Good Thing

A growing number of experts warn that governments and enterprises are “digitally colonized” by U.S. cloud giants. A provocative claim and a partial truth. It’s an emotionally charged view, and while it raises valid concerns around sovereignty and strategic autonomy, it misses the full picture.

Because here’s the thing. Some (if not most) workloads in enterprise and public sector IT environments are still hosted on-premises. This isn’t due to resistance or stagnation. It’s the result of deliberate decisions made by informed IT leaders. Leaders who understand their business, compliance landscape, operational risks, and technical goals.

We are no longer living in a world where the public cloud is the default. We are living in a world where “cloud” is a choice and is used strategically. This is not failure. It’s maturity.

A decade ago, “cloud-first” was often a mandate. CIOs and IT strategists were encouraged, sometimes pressured, to move as much as possible to the public cloud. It was seen as the only way forward. The public cloud was marketed as cheaper, faster, and more innovative by default.

But that narrative didn’t survive contact with reality. As migrations progressed, enterprises quickly discovered that not every workload belongs in the cloud. The benefits were real, but so were the costs, complexities, and trade-offs.

Today, most organizations operate with a much more nuanced perspective. They take the time to evaluate each application or service based on its characteristics. Questions like: Is this workload latency-sensitive? What are the data sovereignty requirements? Can we justify the ongoing operational cost at scale? Is this application cloud-native or tightly coupled to legacy infrastructure? What are the application’s dependencies?

This is what maturity looks like. It’s not about saying “yes” or “no” to the cloud in general. It’s about using the right tool for the right job. Public cloud remains an incredibly powerful option. But it is no longer a one-size-fits-all solution. And that’s a good thing.

On-Premises Infrastructure Is Still Valid

There is this persistent myth that running your own datacenter, or even part of your infrastructure, is a sign that you are lagging behind. That if you are not in the cloud, you are missing out on agility, speed, and innovation. That view simply doesn’t hold up.

In reality, on-premises infrastructure is still a valid, modern, and strategic choice for many enterprises, especially in regulated industries like healthcare, finance, manufacturing, and public services. These sectors often have clear, non-negotiable requirements around data locality, compliance, and performance. In many of these cases, operating infrastructure locally is not just acceptable. It’s the best option available.

Modern on-prem environments are nothing like the datacenters of the past. Thanks to advancements in software-defined infrastructure, automation, and platform engineering, on-prem can offer many of the same cloud-like capabilities: self-service provisioning, infrastructure-as-code, and full-stack observability. When properly built and maintained, on-prem can be just as agile as the public cloud.
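
As a small illustration of what cloud-like self-service can look like on-premises, here is a minimal sketch that provisions an isolated, quota-limited team namespace on a Kubernetes cluster using the official kubernetes Python client; the namespace naming scheme and quota values are assumptions.

```python
# Self-service provisioning against an on-prem Kubernetes cluster,
# using the official 'kubernetes' Python client (pip install kubernetes).
from kubernetes import client, config

def provision_team_environment(team: str, cpu: str = "8", memory: str = "16Gi") -> None:
    config.load_kube_config()  # local kubeconfig; in-cluster config also possible
    v1 = client.CoreV1Api()

    # An isolated namespace per team: the on-prem equivalent of a cloud account
    v1.create_namespace(client.V1Namespace(
        metadata=client.V1ObjectMeta(name=f"team-{team}")))

    # A resource quota keeps consumption predictable, like cloud cost guardrails
    v1.create_namespaced_resource_quota(
        namespace=f"team-{team}",
        body=client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name="default-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": cpu, "requests.memory": memory})))

provision_team_environment("payments")
```

Wrapped behind a portal or pipeline, a call like this gives teams the same “request and receive” experience they would expect from a public cloud, while the platform team keeps full control over the guardrails.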

That said, it’s important to acknowledge a key difference. While private infrastructure gives you full control, it can take longer to introduce new services and capabilities. You are not tapping into a global marketplace of pre-integrated services and APIs like you would with Oracle Cloud or Microsoft Azure. You are depending on your internal teams to evaluate, integrate, and manage each new component.

And that’s totally fine, if your CIO’s focus is stability, compliance, and predictable innovation cycles. For many organizations, that’s (still) exactly what’s needed. But if your business thrives on emerging technologies, needs instant access to the latest AI or analytics platforms, or depends on rapid go-to-market execution, then public cloud innovation cycles might offer an advantage that’s hard to replicate internally.

Every Enterprise Can Still Build Their Own Data Center Stack

It’s easy to assume that the era of enterprises building and running their own cloud-like platforms is over. After all, hyperscalers move faster, operate at massive scale (think about the thousands of engineers and product managers), and offer integrated services that are hard to match. For many organizations, especially those lacking deep infrastructure expertise or working with limited budgets, the public cloud is the most practical and cost-effective option.

But that doesn’t mean enterprises can’t or shouldn’t build their own platforms, especially when they have strong reasons to do so. Many still do, and do it effectively. With the right people, architecture, and operational discipline, it’s entirely possible to build private or hybrid environments that are tailored, secure, and strategically aligned.

The point isn’t to compete with hyperscalers on scale, it’s to focus on fit. Enterprises that understand their workloads, compliance requirements, and business goals can create infrastructure that’s more focused and more integrated with their internal systems.

Yes, private platforms may evolve more slowly. They may require more upfront investment and long-term commitment. But in return, they offer control, transparency, and alignment. Advantages that can outweigh speed in the right contexts!

And critically, the tooling has matured. Today’s internal platforms aren’t legacy silos but are built with the same modern engineering principles: Kubernetes, GitOps, telemetry, CI/CD, and self-service automation.

Note: If a customer wants the best of both worlds, there are options like OCI Dedicated Region.

The Right to Choose the Right Cloud

One of the most important shifts we are seeing in enterprise IT is the move away from single-platform thinking. No one-size-fits-all platform exists. And that’s precisely why the right to choose the right cloud matters.

Public cloud makes sense in many scenarios. Organizations might choose Azure because of its tight integration with Microsoft tools. They might select Oracle Cloud for better pricing or AI capabilities. At the same time, they continue to operate significant workloads on-premises, either by design or necessity.

This is the real world of enterprise IT: mixed environments, tailored solutions, and pragmatic trade-offs. These aren’t poor decisions or “technical debt”. Often, they are deliberate architectural choices made with a full understanding of the business and operational landscape. 

What matters most is flexibility. Organizations need the freedom to match workloads to the environments that best support them, without being boxed in by ideology, procurement bias, or compliance roadblocks. And that flexibility is what enables long-term resilience.

What the Cloud Landscape Actually Looks Like

Step into any enterprise IT environment today, and you will find a blend of technologies, platforms, and operational models. And the mix varies based on geography, industry, compliance rules, and historical investments.

The actual landscape is not black or white. It’s a continuum of choices. Some services live in hyperscale clouds. Others are hosted in sovereign, regional datacenters. Many still run in private infrastructure owned and operated by the organization itself.

This hybrid approach isn’t messy. It’s intentional and reflects the complexity of enterprise IT and the need to balance agility with governance, innovation with stability, and cost with performance.

What defines modern IT today is the operating model. The cloud is not a place. It’s a way of working. Whether your infrastructure is on-prem, in the public cloud, or somewhere in between, the key is how it’s automated, how it’s managed, how it integrates with developers and operations, and how it evolves with the business.

Conclusion: Strategy Over Hype – And Over Emotion

There’s no universal right or wrong when it comes to cloud strategy. Only what works for your organization based on risk, requirements, talent, and timelines. But we also can’t ignore the reality of the current market landscape.

Today, U.S. hyperscalers control over 70% of the European cloud market. Across infrastructure layers like compute, storage, networking, and software stacks, Europe’s digital economy relies on U.S. technologies for 85 to 90% of its foundational capabilities. 

But these numbers didn’t appear out of nowhere.

Let’s be honest: it’s not the fault of hyperscalers that enterprises and public sector organizations chose to adopt their platforms. Those were decisions made by people – CIOs, procurement teams, IT strategists – driven by valid business goals: faster time-to-market, access to innovation, cost modeling, availability of talent, or vendor consolidation.

These choices might deserve reevaluation, yes. But they don’t deserve emotional blame.

We need to stop framing the conversation as if U.S. cloud providers “stole” the European market. That kind of narrative doesn’t help anyone. The reality is more complex and far more human. Companies chose platforms that delivered, and hyperscalers were ready with the talent, services, and vision to meet that demand.

If we want alternatives, if we want European options to succeed, we need to stop shouting at the players and start changing the rules of the game. That means building competitive offerings, investing in skills, aligning regulation with innovation, and making sovereignty a business advantage, not just a political talking point.