Why Emulating the Cloud Isn’t the Same as Being One

It’s easy to mistake progress for innovation. VMware Cloud Foundation 9.0 (VCF) introduces long-awaited features like VPC-style networking, developer-centric automation, and bundled services. But let’s be honest: this is not the future of cloud. This is infrastructure catching up to where the public cloud world already was ten years ago.

Example: Moving some concepts and features from VMware Cloud Director (vCD) to Aria Automation and then calling it VCF Automation is also not innovative. It was the right thing to do, as vCD and Aria Automation (formerly known as vRealize Automation) shared many overlapping features and concepts. In other words, we can expect VCF Automation to be the future, with vCD retired within a few years.

Anyway, there’s a pattern here. Platform vendors continue to position themselves as “private cloud providers”, yet the experience they offer remains rooted in managing hardware, scaling clusters, and applying patches. Whether it’s VCF or Nutanix, the story is always the same: it’s better infrastructure. But that’s the problem. It’s still infrastructure.

In contrast, the real shift toward cloud doesn’t start with software-defined storage or NSX overlay networks. It starts with the service model. That’s what makes cloud work. That’s what makes it scalable, elastic, and developer-first. That’s what customers actually need.

Let’s unpack where VCF 9.0 lands and why it still misses the mark.

What’s New in VCF 9.0. And What’s Not.

Broadcom deserves credit for moving VCF closer to what customers have been asking for since at least 2020. The platform now includes a proper developer consumption layer, integrated VPC-style networking, a simplified control plane, and aligned software versions for different products. Yes, it feels more like a cloud. It automates more, hides more complexity, and makes day 2 operations less painful. All good steps!

The new virtual private cloud constructs let teams carve out self-contained network domains – complete with subnets, NAT, firewall rules, and load balancers – all provisioned from a central interface. That’s a meaningful upgrade from the old NSX workflows. Now, transit gateways can be deployed automatically, reducing the friction of multi-domain connectivity. The whole setup is better, simpler, and more cloud-like. Well done.

On the consumption side, there’s a proper push toward unified APIs. Terraform support, policy-as-code blueprints in YAML, and native Kubernetes provisioning give developers a way to consume infrastructure more like they would in a hyperscaler environment. VCF customers can onboard teams faster, and the lifecycle engine behind the scenes handles upgrades, certificates, and best-practice configurations with far less manual effort.
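To make that concrete, here is a minimal sketch of what this consumption model looks like in practice. Note that the resource types below are invented for illustration – they are not the actual VCF provider schema. The point is the declarative pattern: a team describes a network domain in code, and the platform reconciles it.

  # Illustrative only: these resource types are hypothetical, not the real
  # VCF provider schema. What matters is the pattern: the network domain,
  # its NAT, and its firewall rules are declared as code and reconciled
  # by the platform instead of being requested through tickets.
  resource "privatecloud_vpc" "team_a" {
    name       = "team-a-vpc"
    cidr_block = "10.20.0.0/16"
  }

  resource "privatecloud_subnet" "web" {
    vpc_id     = privatecloud_vpc.team_a.id
    cidr_block = "10.20.1.0/24"
    nat        = true # outbound NAT is provisioned automatically
  }

  resource "privatecloud_firewall_rule" "allow_https" {
    vpc_id      = privatecloud_vpc.team_a.id
    direction   = "ingress"
    protocol    = "tcp"
    port        = 443
    source_cidr = "0.0.0.0/0"
  }

That is the experience developers expect from a hyperscaler: declare, apply, consume.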

So yes, VCF 9.0 is a big step forward for Broadcom and for existing VMware customers. But let’s put that progress into perspective.

Cloud Features Delivered Years Too Late

The features we’re seeing now – developer APIs, VPCs, self-service provisioning, built-in security, elastic-like networking – these aren’t breakthroughs. They are basic expectations. Public cloud providers like AWS and Azure introduced the VPC concept more than 10 years ago. Public clouds have offered full-stack policy automation, service mesh observability, and integrated load balancing for most of the last decade.

What VCF 9.0 delivers in 2025 is essentially what existing on-premises customers were asking for back in 2020.

The bigger concern is that VMware has always been the benchmark for enterprise-grade virtualization and private infrastructure. When customers bought into VCF years ago, they expected these capabilities then, not now. Broadcom has simply shipped the version of VCF that many customers assumed was already on the roadmap five years ago.

And even now, many of the services (add-ons) in VCF 9.0, like Avi load balancing, vDefend IDS/IPS, integrated databases, and AI services, are optional components: mostly manually deployed, and not fully elastic or usage-based. These are integrations, not native services. You still need to operate them.

The Core Problem: It’s Still Infrastructure-Led

That’s the real difference. VCF and Nutanix remain infrastructure-led platforms. They require hardware planning, capacity management, lifecycle orchestration, and dependency tracking. Yes, they have APIs. Yes, they support Kubernetes. But at their core, they are platforms you need to own, operate, and scale yourself.

Cloud, on the other hand, is not about owning anything. It’s about consuming outcomes. VCF 9.0 and others are just not there yet.

The Illusion of a Private Cloud

This is why it’s time to call out the difference. Just because something looks like cloud – has some APIs, supports Kubernetes, uses words like “consumption” and “developer self-service” – doesn’t mean it actually behaves like cloud.

The illusion of a “private cloud” is seductive. You get to keep control. You get to use familiar tools. But control also means responsibility. Familiar tools mean legacy thinking. And a so-called private cloud, in most cases, just means more complex infrastructure with higher expectations.

That’s not transformation. That’s rebranding.

What VCF 9.0 delivers is an important evolution of VMware’s private infrastructure platform. But let’s not confuse that with cloud. Broadcom has moved in the right direction. They have shipped what customers needed years ago. But they are still delivering (virtual) infrastructure. Just better packaged.

Final Thought

You don’t transform your IT strategy by modernizing clusters. You transform it by changing how you consume and operate technology.

So the question isn’t whether your stack looks like “the cloud”. The question is whether you can stop operating infrastructure and start consuming services.

That’s the real line between emulating the cloud and actually being one. And as of today, VCF (and Nutanix) are still on the other side of that line. It’s not good. It’s not bad. It is what it is.

5 Strategic Paths from VMware to Oracle Cloud Infrastructure (OCI)

We all know that the future of existing VMware customers has become more complicated and less certain. Many enterprises are reevaluating their reliance on VMware as their core infrastructure stack. So, where to go next?

For enterprises already invested in Oracle technology, or simply those looking for a credible, flexible, and enterprise-grade alternative, Oracle Cloud Infrastructure (OCI) offers a comprehensive set of paths forward. Whether you want to modernize, rehost, or run hybrid workloads, OCI doesn’t force you to pick a single direction. Instead, it gives you a range of options: from going cloud-native, to running your existing VMware stack unchanged, to building your own sovereign cloud footprint.

Here are five realistic strategies for VMware customers considering a migration from VMware to Oracle Cloud Infrastructure. It doesn’t need to be an either-or decision; it can also be an “and” approach.

1. Cloud-Native with OCI – Start Fresh, Leave VMware Behind

For organizations ready to move beyond traditional infrastructure altogether, the cloud-native route is the cleanest break you can make. This is where you don’t just move workloads; you rearchitect them. You replace VMs with containers where possible, and perhaps lift and shift some of the existing workloads. You replace legacy service dependencies with managed cloud services. And most importantly, you replace static, manually operated environments with API-driven infrastructure.

OCI supports this approach with a robust portfolio: compute instances that scale on demand, Oracle Kubernetes Engine (OKE) for container orchestration, OCI Functions for serverless workloads, and Autonomous Database for data platforms that patch and tune themselves. The tooling is modern, open, and mature – Terraform, Ansible, and native SDKs are all available and well-documented.
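As a small, concrete example, this is roughly what provisioning a flexible compute instance looks like with the oracle/oci Terraform provider. It is a minimal sketch: the region is an example, and the compartment, subnet, and image OCIDs are placeholders you would supply from your own tenancy.

  terraform {
    required_providers {
      oci = {
        source = "oracle/oci"
      }
    }
  }

  # Authentication is read from ~/.oci/config by default.
  provider "oci" {
    region = "eu-zurich-1"
  }

  variable "compartment_ocid"    { type = string }
  variable "availability_domain" { type = string }
  variable "subnet_ocid"         { type = string }
  variable "image_ocid"          { type = string } # e.g. an Oracle Linux image

  # A flexible shape: OCPUs and memory are sized on demand, not per SKU.
  resource "oci_core_instance" "app" {
    availability_domain = var.availability_domain
    compartment_id      = var.compartment_ocid
    display_name        = "demo-app"
    shape               = "VM.Standard.E4.Flex"

    shape_config {
      ocpus         = 2
      memory_in_gbs = 16
    }

    source_details {
      source_type = "image"
      source_id   = var.image_ocid
    }

    create_vnic_details {
      subnet_id = var.subnet_ocid
    }
  }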

This isn’t a quick VMware replacement. It requires a DevOps mindset, application refactoring, and an investment in automation and CI/CD. It is not something you do in a weekend. But it’s the only path that truly lets you leave the baggage behind and design infrastructure the way it should work in 2025.

2. OCVS – Run VMware As-Is, Without the Hardware

If cloud-native is the clean break, then Oracle Cloud VMware Solution (OCVS) is the strategic pause. This is the lift-and-shift strategy for enterprises that need continuity now, but don’t want to double down on on-prem investment.

With OCVS, you’re not running a fully managed service (unlike the VMware offerings on AWS, Azure, and GCP). You get the full vSphere, vSAN, NSX, and vCenter stack deployed on Oracle bare-metal infrastructure in your own OCI tenancy. You’re the admin. You manage the lifecycle. You patch and control access. But you don’t have to worry about hardware procurement, power and cooling, or supply chain delays. And you can integrate natively with OCI services: backup to OCI Object Storage, peer with Exadata, and extend IAM policies across the board.

Oracle Cloud VMware Solution

The migration is straightforward. You can replicate your existing environment (with HCX), run staging workloads side-by-side, and move VMs with minimal friction. You keep your operational model, your monitoring stack, and your tools. The difference is, you get out of your data center contract and stop burning time and money on hardware lifecycle management.

This isn’t about modernizing right now. It’s about escaping VMware hardware and licensing lock-in without losing operational control.

3. Hybrid with OCVS, Compute Cloud@Customer, and Exadata Cloud@Customer

Now we’re getting into enterprise-grade architecture. This is the model where OCI becomes a platform, not just a destination. If you’re in a regulated industry and you can’t run everything in the public cloud, but you still want the same elasticity, automation, and control, this hybrid model makes a lot of sense.

A diagram showing your tenancy in an OCI region, and how it connects to Compute Cloud@Customer in your data center.

Here’s how it works: you run OCVS in the OCI public region for DR, or workloads that have to stay on vSphere. But instead of moving everything to the cloud, you deploy Compute Cloud@Customer (C3) and Exadata Cloud@Customer (ExaCC) on-prem. That gives you a private cloud footprint with the same APIs and a subset of OCI IaaS/PaaS services but physically located in your own facility, behind your firewall, under your compliance regime.

You manage workloads on C3 using the exact same SDKs, CLI tools, and Terraform modules as the public cloud. You can replicate between on-prem and cloud, burst when needed, or migrate in stages. And with ExaCC running in the same data center, your Oracle databases benefit from the same SLA and performance guarantees, with none of the data residency headaches.
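Because C3 exposes the same APIs, the same Terraform module can target either the public region or the on-prem rack. Here is a minimal sketch of that reuse, assuming a module of your own at ./modules/app; how the C3 endpoint is addressed in the provider configuration depends on your specific deployment.

  provider "oci" {
    alias  = "public"
    region = "eu-zurich-1"
  }

  # Assumption: the Compute Cloud@Customer rack is reachable through its
  # own provider configuration; the exact endpoint settings depend on
  # your C3 deployment.
  provider "oci" {
    alias  = "onprem"
    region = "eu-zurich-1" # C3 infrastructure is anchored to a home region
  }

  # One module, two destinations: the code stays identical,
  # only the target changes.
  module "app_public" {
    source    = "./modules/app"
    providers = { oci = oci.public }
  }

  module "app_onprem" {
    source    = "./modules/app"
    providers = { oci = oci.onprem }
  }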

This model is ideal if you’re trying to modernize without breaking compliance. It keeps you in control, avoids migration pain, and still gives you access to the full OCI ecosystem when and where you need it.

4. OCI Dedicated Region – A Public Cloud That Lives On-Prem

When public cloud is not an option, OCI Dedicated Region becomes the answer.

This isn’t a rack. It is an entire cloud region. You get all OCI services like compute, storage, OCVS, OKE, Autonomous DB, identity, even SaaS, deployed inside your own facility. You retain data sovereignty and you control physical access. You also enforce local compliance rules and operate everything with the same OCI tooling and automation used in Oracle’s own hyperscale regions.

Image: “Oracle Dedicated Region 25” announced – your own Oracle Cloud region in just three server racks (Publickey)

What makes Dedicated Region different from C3 is the scale and service parity. While C3 delivers core IaaS and some PaaS capabilities, Dedicated Region is literally the full stack. You can run OCVS in there, connect it to your enterprise apps, and have a fully isolated VMware environment that never leaves your perimeter.

For VMware customers, it means you don’t have to choose between control and modernization. You get both.

5. Oracle Alloy – Cloud Infrastructure for Telcos and VMware Service Providers

If you’re a VMware Cloud Director customer or a telco/provider building cloud services for others, then Oracle just handed you an entirely new business model. Oracle Alloy allows you to offer your own cloud under your brand, with your pricing and your operational control, based on the same OCI technology stack Oracle runs themselves.

This is not only reselling, it is operating your own OCI cloud.

Becoming an Oracle Alloy partner (diagram)

As a VMware-based cloud provider, Alloy gives you a path to modernize your platform and expand your services without abandoning your customer base. You can run your own VMware environment (OCVS), offer cloud-native services (OKE, DBaaS, Identity, Monitoring), and transition your customers at your own pace. All of it on a single platform, under your governance.

What makes Alloy compelling is that it doesn’t force you to pick between VMware and OCI, it lets you host both side by side. You keep your high-value B2B workloads and add modern, cloud-native services that attract new tenants or internal business units.

For providers caught in the middle of the VMware licensing storm, Alloy might be the most strategic long-term play available right now.

 

From Monolithic Data Centers to Modern Private Clouds

Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions, it is trust in the executive vision, and the willingness to break from the past.

Monolithic data centers stall innovation

But here is the problem: monoliths do not scale in a modern world (or cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers with the monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us make this clear: hypervisors are not platforms! They are just silos and one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it: they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools; it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by replacing one virtualization solution with another vendor’s. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. A modern infra is about building a cloud that does not need micro-management.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust as well:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. With the current geopolitical situation in the world, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation doesn’t work that way. You can’t buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and it is not about perfection. A system is only “perfect” until it needs to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency is the first ingredient of trust. You can’t fix what you won’t name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months

  • Reduce friction for both IT and internal users

  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example:

  • Implement policy-as-code: rules embedded into the platform, not buried in documents

  • Automate security scans, RBAC, and drift detection

  • Give teams safe, constrained freedom instead of needing to ask for access

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).
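As a tiny illustration of guardrails over approvals, expressed in Terraform: a validation rule rejects a non-compliant request at plan time, so the standard path needs no ticket and no manual review. The variable and the approved sizes are, of course, made up for this sketch.

  # A guardrail as code: the platform rejects a non-compliant request at
  # plan time, so no human approval is needed for the standard path.
  variable "vm_size" {
    type        = string
    description = "Requested VM size"

    validation {
      condition     = contains(["small", "medium", "large"], var.vm_size)
      error_message = "The vm_size must be one of: small, medium, large."
    }
  }

Anything within the guardrail goes through untouched; anything outside it fails fast with a clear message instead of waiting in an approval queue.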

And lastly, make trust measurable. Not just with uptime numbers or dashboards but with real signals:

  • Are teams delivering faster?

  • Are incidents down?

  • etc.

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more. 

From Cloud-First to Cloud-Smart to Repatriation

VMware Explore 2024 happened this week in Las Vegas. I think many people were curious about what Hock Tan, CEO of Broadcom, had to say during the general session. He delivered interesting statements and let everyone in the audience know that “the future of enterprise is private – private cloud, private AI, fueled by your own private data“. On social media, the following slide about “repatriation” made quite some noise:

VMware Explore 2024 Keynote Repatriation

The information on this slide came from the Barclays CIO Survey in April 2024, and it says that 8 out of 10 CIOs today are planning to move workloads from the public cloud back to their on-premises data centers. It is interesting, and in some cases even funny, that other vendors in the hardware and virtualization business are chasing this ambulance now. Cloud migrations are dead, let us do reverse cloud migrations now. Hybrid cloud is dead, let us do hybrid multi-clouds now and provide workload mobility. My social media walls are full of such postings now. It seems Hock Tan presented the Holy Grail to the world.

Where does this change of mind come from? Why did only 43% plan a reverse cloud migration during COVID-19, and now “suddenly” more than 80%?

I could tell you the story now about cloud-first not being cool anymore, that organizations started to follow a smarter cloud approach, and then concluded that cloud migrations are still not happening based on their expectations (e.g., costs and complexity). And that it is time now to bring workloads back on-premises. It is not that simple.

I looked at the Barclays CIO survey and the chart (figure 20 in the survey) that served as the source for Hock Tan’s slide:

Barclays CIO Survey, April 2024: Cloud Repatriation

We must be very careful with our interpretation of the results. Just because someone is “planning” a reverse cloud migration, does that mean they are executing? And if they execute such an exercise, is this going to be correctly reflected in a future survey?

And which workloads and services are being brought back to an enterprise’s data center? Are we talking about complete applications? Or is it more about load balancers, security appliances, databases and storage, and specific virtual machines? And if we understand the workloads, what are the real reasons to bring them back? Figure 22 of the survey shows “Workloads that Respondents Intend to Move Back to Private Cloud / On-Premise from Public Cloud”:

Barclays CIO Survey, April 2024: Workloads to Migrate Back

Okay, we have a little bit more context now. Just because some workloads are potentially migrated back to private clouds, what does it mean for public cloud vs. private cloud spend? Question #11 of the survey “What percentage of your workloads and what percentage of your total IT spend are going towards the public cloud, and how have those evolved over time?” focuses on this matter.

Barclays CIO Survey, April 2024: Percentage of Workloads and Spend

My interpretation? Just because one slide or illustration talks about repatriation does not mean that the entire world is just doing reverse migrations now. Cloud migrations and reverse cloud migrations can happen at the same time. You could bring one application or some databases back on-premises but decide to move all your virtual desktops to the public cloud in parallel. We could still bring workloads back to our data center and increase public cloud spend.

Sounds like cloud-smart again, doesn’t it? Maybe I am an organization that realized that the applications A, B, C, and D shouldn’t run in Azure, AWS, Google, and Oracle anymore, but the applications W, X, Y, and Z are better suited for these hyperscalers.

What else?

I am writing about my views and my opinions here, and there is more to share. During the pandemic, everything had to happen very quickly, and everyone suddenly had money to speed up migrations and application modernization projects. I think it is natural that everything slowed down a bit after this difficult and exhausting phase.

Some IT teams are probably still documenting all their changes and new deployments on an internal wiki, and their bosses have started to hire FinOps specialists to analyze their cloud spend. It is no shocking surprise to me that some of the financial goals haven’t been met, resulting in reverse cloud migrations a few years later.

But that is not all. Try to think about the past years. What else happened?

Yes, we almost forgot about Artificial Intelligence (AI) and Sovereign Clouds.

Before 2020, not many of us were thinking about sovereign clouds, data privacy, and AI.

Most enterprises are still hosting their data on-premises behind their own firewall. And some of this data is used to train or fine-tune models. We see (internal) chatbots popping up using Retrieval-Augmented Generation (RAG), which delivers answers based on actual data and proprietary information.

Okay. What else? 

Yep, there is more. There are new technologies and offerings available that were not here before. We just covered AI and ML (machine learning) workloads, which became a potential cost or compliance concern.

The concept of sovereign clouds has gained traction due to increasing concerns about data sovereignty and compliance with local regulations.

The adoption of hybrid and hybrid multi-cloud strategies has been a significant trend from 2020 to 2024. Think about VMware’s Cloud Foundation approach with Azure, Google, Oracle, etc., AWS Outposts, Azure Stack, Oracle’s DRCC, or Nutanix’s equivalent offerings.

Enterprises started to upskill and train their people to deliver their own Kubernetes platforms.

Edge computing has emerged as a crucial technology, particularly for industries like manufacturing, telecommunications, and healthcare, where real-time data processing is critical.

Conclusion

Reverse cloud migrations are happening for many different reasons like cost management, performance optimization, data security and compliance, automation and operations, or because of lock-in concerns.

Yes, (cloud) repatriation became prominent, but I think this is just a reflection of the maturing cloud market – and not an ambulance.

And no, it is not a better moment to position your hybrid multi-cloud solutions, unless you understand the services and workloads that need to be migrated from one cloud to another. Just because some CIOs plan to bring some workloads back on-premises, does that mean they will actually do it? What about the sunk cost fallacy?

Perhaps IT leaders are going to be more careful in the future and will look for other ways to achieve cost savings, strategic benefits, and their business outcomes – keeping their workloads in the cloud instead of repatriating them.

Businesses are adopting a more nuanced workload-centric strategy.

What’s your opinion?

Distributed Hybrid Infrastructure Offerings Are The New Multi-Cloud

Since VMware became part of Broadcom, there has been less focus and messaging on multi-cloud or supercloud architectures. Broadcom has drastically changed the available offerings, and VMware Cloud Foundation is becoming the new vSphere. Additionally, we have seen big changes regarding the partnerships with hyperscalers (the Azures and AWSes of this world) and the VMware Cloud partners and providers. So, what happened to multi-cloud, and how come nobody (at Broadcom) talks about it anymore?

What is going on?

I do not know if it’s only me, but I do not see the term “multi-cloud” that often anymore. Do you? My LinkedIn feed is full of news about artificial intelligence (AI) and how Nvidia employees got rich. So, I have to admit that I lost track of hybrid clouds, multi-clouds, or hybrid multi-cloud architectures. 

Cloud-Inspired and Cloud-Native Private Clouds

It seems to me that the initial idea of multi-cloud has changed in the meantime and that private clouds are becoming platforms with features. Let me explain.

Organizations have built monolithic private clouds in their data centers for a long time. In software engineering, the word “monolithic” describes an application whose components are tightly coupled and deployed as one large unit. To build data centers, we followed the same approach, combining components like compute, storage, and networking. And over time, IT teams started to think about automation and security, and the integration of different solutions from different vendors.

The VMware messaging was always pointing in the right direction: They want to provide a cloud operating system for any hardware and any cloud (by using VMware Cloud Foundation). On top of that, build abstraction layers and leverage a unified control plane (aka consistent automation and operations).

And since 2020, I have told all my customers that they need to think like a cloud service provider, get rid of silos, implement new processes, and define a new operating model. That is VMware by Broadcom’s messaging today, and this is where they and other vendors are headed: a platform with features that provide cloud services.

In other words, and this is my opinion, VMware Cloud Foundation is today a platform with different components like vSphere, vSAN, NSX, Aria, and so on. Tomorrow, it will still be called VMware Cloud Foundation, but as a platform that includes compute, storage, networking, automation, operations, and other features. No more separate product names – just capabilities and services like IaaS, CaaS, DRaaS, or DBaaS. You just choose the specs of the underlying hardware and networking, deploy your private clouds, and then start to build and consume your services.

Replace the name “VMware Cloud Foundation” in the last paragraph with AWS Outposts or Azure Stack. Do you see it now? Distributed unmanaged and managed hybrid cloud offerings with a (service) consumption interface on top.

That is the shift from monolithic data centers to cloud-native private clouds.

From Intercloud to Multi-Cloud

It is not the first time that I have written about interclouds, a concept that not many of us know. In 2012, there was this idea that different clouds and vendors need to be interoperable and agree on certain standards and protocols. Think about interconnected private and public clouds, which allow you to provide VM mobility or application portability. Can you see the picture in front of you? What is the difference today, in 2024?

In 2023, I truly believed that VMware had figured it out when they announced VMware Cloud on Equinix Metal (VMC-E). To me, VMC-E was different and special because of Equinix, who is capable of interconnecting different clouds and, at the same time, could provide a bare-metal-as-a-service (BMaaS) offering.

Workload Mobility and Application Portability

Almost two years ago, I started to write a book about this topic, because I wanted to figure out if workload mobility and application portability are things that enterprises are really looking for. I interviewed many CIOs, CTOs, chief architects, and engineers around the globe, and it became VERY clear: nobody was changing anything to make app portability a design requirement.

Almost all of the people I spoke to told me that a lot of things would have to happen to trigger a cloud exit, and therefore they see this as a nice-to-have capability that helps them move virtual machines or applications faster from one cloud to another.

VMware Workload Mobility

And I have also been told that, for almost all of them, a lift & shift approach does not provide any value.

But when I talked to developers and operations teams, the answers changed. Most of them did not know that a vendor could provide mobility or portability. Anyway, what has changed now?

Interconnected Multi-Clouds and Distributed Hybrid Clouds

As I mentioned before, some vendors have realized that they need to deliver a unified and integrated programmable platform with a control plane. Ideally, this control plane can be used on-premises, as a SaaS solution, or both. And according to Gartner, these are the leaders in this area (Magic Quadrant for Distributed Hybrid Infrastructure):

Gartner Magic Quadrant for Distributed Hybrid Infrastructure

In my opinion, VMware and Nutanix are providing a hybrid multi-cloud approach.

AWS and Microsoft are providing hybrid cloud solutions. In Microsoft’s case, we see Azure Stack HCI, Azure Kubernetes Service (AKS incl. Hybrid AKS) and Azure Arc extending Microsoft’s Azure services to on-premises data centers and edge locations.

The only vendor that currently offers true multi-cloud capabilities is Oracle. Oracle has Dedicated Region Cloud@Customer (DRCC) and Roving Edge, but also partnerships with Microsoft and Google that allow customers to host Oracle databases in Azure and Google Cloud data centers. Both partnerships come with a cross-cloud interconnection.

That is one of the big differences and changes for me at the moment. Multi-cloud has become less about mobility or portability, a single global control plane, or the same Kubernetes distribution in all clouds, and more about bringing different services from different cloud providers closer together.

This is the image I created for the VMC-E blog. Replace the words “AWS” and “Equinix” with “Oracle”, and suddenly you have something that was not there before: an interconnected multi-cloud.

What’s Next?

Based on conversations with my customers, it does not feel like public cloud migrations are happening faster than in 2020 or 2022, and we still see between 70 and 80% of workloads hosted on-premises. While we see customers who are interested in a cloud-first approach, we see many following a hybrid multi-cloud and/or multi-cloud approach. It is still about putting the right applications in the right cloud based on the right decisions. This has not changed.

But the narrative of such conversations has changed. We will see more conversations about data residency, privacy, security, gravity, proximity, and regulatory requirements. Then there are sovereign clouds.

Lastly, enterprises are going to deploy new platforms for AI-based workloads. But that could still take a while.

Final Thoughts

As enterprises continue to navigate the above-mentioned complexities, the need for flexible, scalable, and secure infrastructure solutions will only grow. There are a few compelling solutions that bridge the gap between traditional on-premises systems and modern cloud environments.

And since most enterprises are still hosting their workloads on-premises, they have to decide if they want to stretch the private cloud to the public cloud, or the other way around. Both options can co-exist, but combining them could make the environment too big and too complex. What’s your conclusion?

VMware Cloud Foundation Spotlight – June 2024

This VMware Cloud Foundation spotlight article summarizes the latest information we have seen from VMware by Broadcom in June 2024. Big milestones and VERY exciting enhancements!

VMware Cloud Foundation 5.2

VMware by Broadcom introduces new features that a lot of customers have been waiting for:

  • VCF Import
  • VCF Edge
  • Independent TKG Service
  • vSAN enhancements
  • Dual-DPU support (active/standby and “max performance mode” (two independent DPUs))
  • vSAN data protection in ESA

Here is the VCF 5.2 bill of materials:

  • SDDC Manager 5.2 (Cloud Builder 5.2)
  • vSphere 8.0 U3 (ESXi 8.0 U3, vCenter 8.0 U3, TKG Standard Runtime 8.0 U3)
  • vSAN 8.0 U3
  • NSX 4.2.0
  • VMware Aria Suite Lifecycle 8.18 (Aria suite component versions are the same)
  • HCX 4.10
  • Aria Operations for Networks 6.12.1
  • Data Services Manager 2.0

Import vSphere Clusters to VMware Cloud Foundation

Customers can now easily import existing vSphere-based infrastructures to VCF!

VCF 5.2 vSphere Import

Please note that there are limitations when doing a VCF import in 5.2:

  • Storage must be vSAN, NFS, or VMFS-FC
  • When importing vSphere with vSAN, then “compression only” is not supported
  • VMkernel IPs must be static (no DHCP supported)
  • Importing VxRail clusters is not supported
  • Imported workload domains have no NSX requirement (configure the WLD using vSphere networking only)

Please note that cluster-level operations, like adding or removing a host, are also subject to limitations on imported clusters.

VCF Edge

VMware Cloud Foundation Edge brings new possibilities and new supported architectures with it.

VCF 5.2 Edge

Important: a minimum of 25 sites is required, with a maximum of 256 cores per edge site.

Edge customers receive the flexibility to start small with 1-node deployments!

Independent TKG Service

Finally! VMware by Broadcom decoupled the TKG Service from vCenter releases! In other words, vSphere/VCF admins can now independently upgrade the TKG Service without having to upgrade vCenter. 🙂

VCF 5.2 Independent TKG Service

This allows customers to upgrade the TKG service independently and to ship new Kubernetes versions faster.

More information about VMware Cloud Foundation 5.2 can be found here: https://blogs.vmware.com/cloud-foundation/2024/06/25/vmware-cloud-foundation-launch/ 

vSphere 8.0 Update 3

VMware Cloud Foundation 5.2 and vSphere Foundation 5.2 both ship with vSphere 8.0 U3. Here are some of the highlights that come with this fantastic release:

The vCenter Server 8.0 Update 3 release notes can be found here: https://docs.vmware.com/en/VMware-vSphere/8.0/rn/vsphere-vcenter-server-803-release-notes/index.html

vSphere Live Patch

With the new Live Patch capability in ESXi, customers can address critical bugs in the virtual machine execution environment and apply patches to all components without a reboot or VM evacuation. Virtual machines are Fast-Suspend-Resumed (FSR) as part of the host remediation process: the host enters partial maintenance mode, a new mount revision is loaded and patched, and the VM is then fast-suspend-resumed to consume the patched mount revision. This action is non-disruptive to most virtual machines!

vSphere 8.0U3 Partial Maintenance Mode / vSphere 8.0U3 Live Patch Eligibility

vSphere IaaS Control Plane

VMware by Broadcom introduces “vSphere IaaS Control Plane” as the new name for what was formerly known as “vSphere with Tanzu” or “TKGS”: a declarative API that is embedded in the vSphere platform.

vSphere IaaS Control Plane

The vSphere IaaS Control Plane 8.0 Update 3 release notes can be found here: https://docs.vmware.com/en/VMware-vSphere/8.0/rn/vmware-vsphere-with-tanzu-80-release-notes/index.html

Autoscaling for Kubernetes Clusters

As part of the IaaS control plane, VMware by Broadcom introduces autoscaling for Kubernetes clusters using the “Cluster Autoscaler“.

vSphere 8.0U3 K8s Autoscaling

The cluster autoscaler can be installed as a standard package using kubectl or the tanzu CLI. The package version must match the minor Kubernetes version; for example, to install the package on a Kubernetes cluster running v1.26.5, you have to install cluster autoscaler package version v1.26.2.

The minimum required version for the cluster autoscaler is v1.25.

vSAN Stretched Cluster Support

Customers can now deploy the Supervisor on a vSAN stretched cluster that spans two physical locations or sites.

Active/Active Deployment

vSAN 8.0 Update 3

vSAN 8.0 Update 3 introduces the following new features and enhancements:

  • Capacity-based licensing (1TiB entitlement for vSAN capacity per VCF core) for VCF 5.2
  • Stretched cluster support on vSAN ESA for VCF 5.2
  • vSAN Max as principal storage for VCF 5.2

From now on, VCF customers can use vSAN Max as their primary, centralized shared storage solution for all of their VMware Cloud Foundation workloads!

VCF 5.2 vSAN Max Primary Storage

Did you know that you can use your vSAN entitlement (as part of VCF) for an aggregated HCI deployment (a typical vSAN deployment) or a disaggregated deployment using vSAN Max? More details: https://core.vmware.com/blog/starting-small-vsan-max

More details about the vSAN 8.0U3 release can be found here: https://docs.vmware.com/en/VMware-vSphere/8.0-Update-3/rn/vmware-vsan-803-release-notes/index.html and https://blogs.vmware.com/cloud-foundation/2024/06/25/vsan-8-update-3-initial-availability/

NSX ALB Integration with SDDC Manager

Starting with VCF 5.2, the NSX Advanced Load Balancer (aka Avi) integrates with SDDC Manager. VCF admins now have the option to deploy Avi Controllers and Service Engines from SDDC Manager and to perform other lifecycle management tasks, like password and certificate rotation, related to the Avi Controller.

VCF 5.2 Deploy Avi from SDDC Manager

VCF 5.2 Technical Frequently Asked Questions

The technical FAQs can be found here: https://core.vmware.com/api/checkuseraccess?referer=/sites/default/files/associated-content/VCF_5_2_FAQ.pdf 

VMware Cloud Foundation 5.2 will be GA (generally available) on July 22, 2024.

Let’s see what happens until then and what VMware by Broadcom announces at VMware Explore at the end of August. 🙂

VMware vSphere Foundation 5.2

The “what’s new” announcements for VVF 5.2 can be found here: https://blogs.vmware.com/cloud-foundation/2024/06/25/vmware-vsphere-foundation-launch-announcement/