What Is Unique About Oracle Cloud VMware Solution?

Everyone talks about multi-cloud, and in most cases they mean the so-called big three: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Looking at the 2021 Gartner Magic Quadrant for Cloud Infrastructure & Platform Services, one can also spot Alibaba Cloud, Oracle, IBM, and Tencent Cloud.

VMware has strategic partnerships with six of these hyperscalers, and all six public clouds offer VMware's software-defined data center (SDDC) stack on top of their global infrastructure:

While I mostly talk about VMC on AWS, AVS and GCVE, I am finally getting the chance to attend an OCVS customer workshop led by Oracle. That is why I wanted to prepare accordingly and share my learnings with you.

Amazon Web Services, Microsoft Azure and Google Cloud dominate the cloud market, but Oracle has unique capabilities and characteristics that no one else can deliver. Additionally, Oracle Cloud Infrastructure (OCI) has shown an impressive pace of innovation over the past two years, which led to a 16-percentage-point increase on Gartner's solution scorecard for OCI (November 2021, from 62% to 78%) and put Oracle in fourth place behind Alibaba Cloud!

What is Oracle Cloud VMware Solution?

Oracle Cloud VMware Solution (OCVS) is the result of the strategic partnership announced by VMware and Oracle in September 2019. Like the other VMware Cloud offerings such as VMC on AWS, AVS and GCVE, Oracle Cloud VMware Solution enables customers to run VMware Cloud Foundation on Oracle's Generation 2 Cloud Infrastructure.

In other words, combining an on-premises VMware-based infrastructure with OCVS should make cloud migrations easier and faster, because both run on the same foundation of vSphere, vSAN and NSX.

Oracle Cloud VMware Solution Key Differentiator #1 – Different SDDC Bundles

Customers can choose between a multi-host SDDC (minimum of 3 production hosts) and a single-host SDDC, which is made for test and dev environments. Oracle guarantees a monthly uptime percentage of at least 99.9% for the OCVS service.
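To put that SLA into perspective, here is a quick back-of-the-envelope calculation of the downtime such an uptime percentage allows per month (assuming a 30-day month for simplicity; this is purely illustrative):

```python
# Downtime budget allowed by a monthly uptime SLA (illustrative calculation only).
def allowed_downtime_minutes(sla_percent: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(f"99.9%  SLA: {allowed_downtime_minutes(99.9):.1f} minutes of downtime per month")   # ~43.2
print(f"99.99% SLA: {allowed_downtime_minutes(99.99):.1f} minutes of downtime per month")  # ~4.3
```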

OCVS offers three different ESXi software versions and supports the following versions of other components:

  • ESXi 7.0, 6.7 or 6.5
  • vCenter 7.0, 6.7 or 6.5
  • vSAN 7.0, 6.7 or 6.5
  • NSX-T 3.0
  • HCX Advanced 4.0, 3.5 (default option)
  • HCX Enterprise (billed upgrade)

Note: vSphere 6.5 and vSphere 6.7 reach the End of General Support from VMware on October 15, 2022.

Key Differentiator #2 – Customer-Managed & Baremetal Hosts

The VMware Cloud offerings from AWS, Azure and Google are all vendor-controlled, and customers get limited access to the VMware hosts and infrastructure components. With Oracle Cloud VMware Solution, customers get bare-metal servers and the same operational experience as on-premises. This means full control over the VMware infrastructure and its components:

  • SSH access to ESXi (see the short sketch after this list)
  • Edit vSAN cluster settings
  • Browse datastores; upload and delete files
  • Customer controls the upgrade policy (version, time, defer)
  • Oracle has NO ACCESS after the SDDC provisioning!
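To illustrate the first point in the list above (direct SSH access to the ESXi hosts), here is a minimal Python sketch using paramiko to run two read-only esxcli commands. The hostname and credentials are placeholders, and it assumes the SSH service is enabled on the host:

```python
import paramiko

# Placeholder values - replace with your OCVS ESXi host and credentials.
ESXI_HOST = "esxi-1.sddc.example.com"
ESXI_USER = "root"
ESXI_PASS = "********"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ESXI_HOST, username=ESXI_USER, password=ESXI_PASS)

# Read-only checks: ESXi build/version and vSAN cluster membership.
for cmd in ("esxcli system version get", "esxcli vsan cluster get"):
    stdin, stdout, stderr = ssh.exec_command(cmd)
    print(f"$ {cmd}\n{stdout.read().decode()}")

ssh.close()
```

This kind of direct host access is exactly what the vendor-controlled offerings mentioned above do not give you.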

Note: According to Oracle it takes about 2 hours to deploy a new SDDC that consists of 3 production hosts.

Customers can choose between Intel- and AMD-based hosts:

  • Two-socket BM.DenseIO2.52 with two CPUs each running 26 cores (Intel)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 16 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 32 cores (AMD)
  • Two-socket BM.DenseIO.E4.128 with two CPUs each running 64 cores (AMD)

Details about the compute shapes can be found here.

Key Differentiator #3 – Availability Domains

To provide high throughput and low latency, an OCVS SDDC is deployed by default across a minimum of three fault domains within a single availability domain in a region. Upon request, it is also possible to deploy your SDDC across multiple availability domains (AD), which comes with a few limitations:

  • While OCVS can scale from 3 up to 64 hosts in a single SDDC, Oracle recommends a maximum of 16 ESXi hosts in a multi-AD architecture
  • This architecture can have impacts on vSAN storage synchronization, and rebuild and resync times

Most hyperscalers only let you use two availability zones and fault domains in the same region. With Oracle it is possible to distribute the minimum of 3 hosts across 3 different availability domains. An availability domain consists of one or more data centers within the same region.

Note: Traffic between ADs within a region is free of charge.

Key Differentiator #4 – Networking

Because OCVS is customer-managed and can be operated like your on-premises environment, you also get “full” control over the network. OCVS is installed within a customer's tenancy, which gives customers the advantage of running their VMware SDDC workloads in the same subnet as OCI native services. This provides lower latency to the OCI native services, which is especially valuable for customers using Exadata, for example.

Another important advantage of this architecture is the capability to create VLAN-backed port groups on your vSphere Distributed Switch (VDS).

Key Differentiator #5 – External Storage

Since March 2022, the OCI File Storage service (NFS) has been certified as secondary storage for an OCVS cluster. This allows customers to scale the storage layer of the SDDC without adding new compute resources at the same time.
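As an illustration of what mounting such an NFS export as a datastore could look like with the customer-managed vCenter, here is a minimal pyVmomi sketch. The vCenter address, credentials, the File Storage mount target IP and the export path are all placeholders, and in practice you would follow Oracle's documented procedure for attaching File Storage to OCVS:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the OCVS vCenter.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.sddc.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Mount the OCI File Storage export on every ESXi host in the SDDC.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="10.0.20.10",          # File Storage mount target IP (placeholder)
    remotePath="/ocvs-secondary",     # export path (placeholder)
    localPath="fss-datastore-01",     # datastore name shown in vCenter
    accessMode="readWrite",
    type="NFS",
)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(nfs_spec)

Disconnect(si)
```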

And just announced on 22 August 2022 with Oracle's summer '22 release, OCVS customers can now connect certified OCI Block Storage through iSCSI as a second external storage option.

Block Storage provides high IOPS, and data is stored redundantly across storage servers with built-in repair mechanisms and a 99.99% uptime SLA.

Key Differentiator #6 – Billing Options

OCVS is currently only sold and supported by Oracle. Like with other cloud providers and VMware Cloud offerings, customers have different pricing options depending upon their commitment levels:

  • On-demand (hourly)
  • 1 month
  • 1 year
  • 3 years

The rule of thumb for any hyperscaler is that a 1-year commitment gets around a 30% discount and a 3-year commitment around a 50% discount.

The unique characteristic here is the monthly commitment option, which is calculated with a discount of 16-17% depending on the compute shape.
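As a simple illustration of how these commitment levels play out, here is a small calculation using a purely hypothetical on-demand list price; the actual OCVS rates depend on the compute shape and region:

```python
# Hypothetical on-demand list price per host-hour; real prices vary by shape and region.
ON_DEMAND_RATE = 10.00

discounts = {
    "on-demand": 0.00,
    "1 month":   0.165,   # ~16-17% for the monthly commitment option
    "1 year":    0.30,    # rule-of-thumb ~30%
    "3 years":   0.50,    # rule-of-thumb ~50%
}

for term, discount in discounts.items():
    effective = ON_DEMAND_RATE * (1 - discount)
    print(f"{term:>9}: {effective:6.2f} per host-hour ({discount:.0%} discount)")
```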

Note: OCVS is not part (yet) of the VMware Cloud Universal subscription (VMCU).

Key Differentiator #7 – Global Reach

Currently, OCI is available in 39 cloud regions (21 countries), and Oracle has announced five more by the end of 2022. OCVS is available on day one in each new region, with consistent and predictable pricing that does not vary from region to region.

To compare: AWS has launched 27 regions, of which 19 can host the VMware Cloud on AWS service. In Switzerland, AWS just opened its new data center without the VMware Cloud on AWS service being available, while OCVS is already available in Zurich.

Use Cases

While OCVS is a great solution for joint VMware and Oracle customers, it is not necessary for customers to use Oracle Cloud Infrastructure native solutions.

Data Center Expansion

As you just learned before, OCVS is a great fit if you want to maintain the same VMware software versions on-premises and in OCI. The classic use case here is the pure data center expansion scenario, which allows you to stretch your on-premises infrastructure to OCI, without the need to use their native services.

VMware Horizon on OCVS

As I mentioned at the beginning, Oracle Cloud VMware Solution is based on VMware Cloud Foundation and so it is no surprise that Horizon on OCVS is fully supported.

The Horizon deployment on OCVS works a little differently compared to the on-premises installation, and there is no feature parity yet:

  • Horizon on OCVS does not support vGPUs yet.
  • Horizon on OCVS does not support IPv6 yet.
  • Horizon on OCVS does not support vTPM yet. In this situation it is recommended to use shielded OCVS instances.

Note: The support of NSX Advanced Load Balancer (Avi) is still a roadmap item.

VMware Tanzu for OCVS

Since April 2022 it is possible for joint VMware and Oracle customers to use Tanzu Standard and its components with Oracle Cloud VMware Solution. Tanzu Standard comes with VMware’s Kubernetes distribution Tanzu Kubernetes Grid (TKG) and Tanzu Mission Control, which is the right solution for multi-cloud, multi-cluster K8s management.

With TMC you can deploy and manage TKG clusters on vSphere on-premises or on Oracle Cloud VMware Solution. You can even attach existing Kubernetes clusters from other vendors like RedHat OpenShift, Amazon EKS or Azure Kubernetes Service (AKS).

OCVS Tanzu Standard 

Oracle Cloud VMware Solution FAQ

VMware’s OCVS FAQ can be found here.

Oracle’s OCVS FAQ can be found here.

Additional Resources

Here is a list of additional resources:

The Backbone To Upgrade Your Multi-Cloud DevOps Experience

Multi-cloud is a mess. You cannot solve multi-cloud complexity with a single vendor or one single supercloud (or intercloud); it is just not possible. But different vendors can help you on your multi-cloud journey to make your life and the platform team's life easier. The whole world talks about DevOps or DevSecOps, and then there is the shift-left approach, which puts more responsibility on developers. It seems to me that too often we forget the "ops" part of DevOps. That is why I would like to highlight the need for Tanzu Mission Control (which is part of Tanzu for Kubernetes Operations) and Tanzu Application Platform.

Challenges for Operations

What started as a VMware-based cloud in your data centers has evolved into a very heterogeneous architecture with two or more public clouds like Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform. IT analysts tell us that 75% of businesses are already using two or more public clouds. Businesses choose their public cloud providers based on workload or application characteristics and a public cloud's known strengths. Companies want to modernize their current legacy applications in the public clouds, because in most cases a simple rehost or migration (lift & shift) does not bring the value or innovation they are aiming for.

A modern application is a collection of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed in a private or public cloud. Many operations and platform teams see cloud-native as going to Kubernetes. But cloud-native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructures.

Expectation of Kubernetes

Kubernetes 1.0 was contributed as an open source seed technology by Google to the Linux Foundation in 2015, which formed the sub-foundation “Cloud Native Computing Foundation” (CNCF). Founding CNCF members include companies like Google, Red Hat, Intel, Cisco, IBM and VMware.

Currently, the CNCF has over 167k project contributors, around 800 members and more than 130 certified Kubernetes distributions and platforms. Open source projects and the adoption of cloud native technologies are constantly growing.

If you access the CNCF Cloud Native Interactive Landscape, you get an understanding of how many open source projects are supported by the CNCF and maintained by this open source community. Since its donation to the CNCF, almost every company on this planet has started using Kubernetes, or a distribution of it:

These were just a few of the 63 certified Kubernetes distributions in total. What about the certified hosted Kubernetes service offerings? Let me list some of the popular ones here:

  • Alibaba Cloud Container Service for Kubernetes
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE)
  • Nutanix Karbon
  • Oracle Container Engine
  • OVH Managed Kubernetes Service
  • Red Hat OpenShift Dedicated

All these clouds and vendors expose Kubernetes implementations, but writing software that performs equally well across all clouds seems to be still a myth. At least we have a common denominator, a consistency across all clouds, right? That’s Kubernetes.

Consistent Operations and Experience

It is very interesting to see that the big three hyperscalers AWS, Microsoft Azure and Google Cloud are moving towards multi-cloud enabled services and products to provide a consistent experience from an operations standpoint, especially for Kubernetes clusters.

Microsoft now has Azure Arc, Google provides Anthos (GKE clusters) for any cloud, and AWS has also realized that the future consists of multiple clouds and offers EKS Anywhere.

They all have realized that customers need a centralized management and control plane. Customers are looking for simplified operations and consistent experience when managing multi-cloud K8s clusters.

Tanzu Mission Control (TMC)

Imagine that you have a centralized dashboard with management capabilities, which provides a unified policy engine and allows you to lifecycle-manage all the different K8s clusters you have.

TMC offers built-in security policies and cluster inspection capabilities (CIS benchmarks), so you can apply additional controls to your Kubernetes deployments. Leveraging the open source project Velero, Tanzu Mission Control gives ops teams the capability to easily back up and restore clusters and namespaces. Just four weeks ago, VMware announced cross-cluster backup and restore capabilities for Tanzu Mission Control, which let Kubernetes-based applications "become" infrastructure and distribution agnostic.
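Tanzu Mission Control drives these data protection operations for you through its console and API. Purely to illustrate the underlying Velero mechanism, here is a minimal sketch that creates a Velero Backup custom resource with the Kubernetes Python client; the cluster context, namespace names and retention period are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
api = client.CustomObjectsApi()

# A Velero Backup custom resource covering two application namespaces (placeholders).
backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "shop-backup-001", "namespace": "velero"},
    "spec": {
        "includedNamespaces": ["shop-frontend", "shop-backend"],
        "ttl": "720h0m0s",  # keep the backup for 30 days
    },
}

api.create_namespaced_custom_object(
    group="velero.io", version="v1",
    namespace="velero", plural="backups",
    body=backup,
)
```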

Tanzu Mission Control lets you attach any CNCF-conformant K8s cluster. When attached to TMC, you can manage policies for all Kubernetes distributions such as Tanzu Kubernetes Grid (TKG), Azure Kubernetes Service, Google Kubernetes Engine or OpenShift.

Tanzu Mission Control Dashboard

In VMware's ongoing commitment to support customers in their multi-cloud application modernization efforts, the Tanzu Mission Control team introduced the preview of lifecycle management of Amazon EKS clusters at VMware Explore US 2022:

Preview for lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters can enable direct provisioning and management of Amazon EKS clusters so that developers and operators have less friction and more choices for cluster types. Teams will be able to simplify multi-cloud, multi-cluster Kubernetes management with centralized lifecycle management of Tanzu Kubernetes Grid and Amazon EKS cluster types.

Note: With this announcement I would expect that the support for Azure Kubernetes Service (AKS) is also coming soon.

Read the Tanzu Mission Control solution brief to get more information about its benefits and capabilities.

Challenges for Developers

Tanzu Mission Control provides cross-cloud services for your Kubernetes clusters deployed in multiple clouds. But there is still another problem.

Developers are being asked to write code and provide business logic that could run on-premises, on AWS, on Azure or on any other public cloud. Every cloud provider has an interest in providing you with their technologies and services. This includes the hosted Kubernetes offerings (with different Kubernetes distributions), load balancers, storage, databases, APIs, observability, security tools and many other components. To me, it sounds very painful and difficult to learn and understand the details of every cloud provider.

Cross-cloud services alone don't solve that problem. Obviously, Kubernetes alone doesn't solve it either.

What if Kubernetes and centralized management and visibility are not “the” solution but rather something that sits on top of Kubernetes?

And Then Came PaaS

Kubernetes is a platform for building platforms and is not really meant to be used by developers.

The CNCF landscape is huge and complex to understand and integrate, so it is just a logical move that companies were looking more for pre-assembled solutions like platform as a service (PaaS). I think that Tanzu Application Service (formerly known as Pivotal Cloud Foundry), Heroku, RedHat OpenShift and AWS Elastic Beanstalk are the most famous examples for PaaS.

The challenge with building applications that run on a PaaS is sometimes the need to leverage all the PaaS-specific components to fully make use of it. What if someone wants to run her own database? What if the PaaS offering restricts programming languages, frameworks, or libraries? Or is it the vendor lock-in which bothers you?

PaaS solutions alone don't seem to close the developer experience gap for everyone either.

Do you want to build the platform by yourself or get something off the shelf? There is a big difference between using a platform and running one. 🙂

Twitter Kelsey Hightower K8s PaaS

Bring Your Own Kubernetes To A Portable PaaS

What’s next after IaaS has evolved to CaaS (because of Kubernetes) and PaaS? It is adPaaS (Application Developer PaaS).

Have you ever heard of the “Golden Path“? Spotify uses this term and Netflix calls it “Paved Road“.

The idea behind the golden path or paved road is that the (internal) platform offers some form of pre-assembled components and supported approach (best practices) that make software development faster and more scalable. Developers don’t have to reinvent the wheel by browsing through a very fragmented ecosystem of developer tooling where the best way to find out how to do things was to ask the community or your colleagues.

VMware announced Tanzu Application Platform (TAP) in September 2021 with the statement, that TAP will provide a better developer experience on any Kubernetes.

VMware Tanzu Application Platform delivers a prepaved path to production and a streamlined, end-to-end developer experience on any Kubernetes.

It is the platform team’s duty to install and configure the opinionated Tanzu Application Platform as an overlay on top of any Kubernetes cluster. They also integrate existing components of Kubernetes such as storage and networking. An opinionated platform provides the structure and abstraction you are looking for: The platform “does” it for you. In other words, TAP is a prescribed architecture and path with the necessary modularity and flexibility to boost developer productivity.

Diagram depicting the layered structure of TAP

The developers can focus on writing code and do not have to fully understand the details like container image registries, image building and scanning, ingress, RBAC, deploying and running the application etc.

Illustration of TAP conceptual value, starting with components that serve the developer and finishing with the components that serve the operations staff and security staff.

 

TAP comes with many popular best-of-breed open source projects that are improving the DevSecOps experience:

  • Backstage. Backstage is an open platform for building developer portals, created at Spotify, donated to the CNCF, and maintained by a worldwide community of contributors.
  • Carvel. Carvel provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes.
  • Cartographer. Cartographer is a VMware-backed project and is a Supply Chain Choreographer for Kubernetes. It allows App Operators to create secure and pre-approved paths to production by integrating Kubernetes resources with the elements of their existing toolchains (e.g. Jenkins).
  • Tekton. Tekton is a cloud-native, open source framework for creating CI/CD systems. It allows developers to build, test, and deploy across cloud providers and on-premise systems.
  • Grype. Grype is a vulnerability scanner for container images and file systems (see the small usage sketch after this list).
  • Cloud Native Runtimes for VMware Tanzu. Cloud Native Runtimes for Tanzu is a serverless application runtime for Kubernetes that is based on Knative and runs on a single Kubernetes cluster.
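To make the scanning building block above a bit more concrete, here is a minimal sketch that shells out to a locally installed Grype binary and summarizes the findings by severity. The image reference is a placeholder, and within TAP this step runs inside the supply chain rather than on a workstation:

```python
import json
import subprocess
from collections import Counter

IMAGE = "ghcr.io/example/my-app:1.0.0"  # placeholder image reference

# Requires the grype CLI to be installed locally; '-o json' emits machine-readable results.
result = subprocess.run(["grype", IMAGE, "-o", "json"],
                        capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

# Count findings per severity (Critical, High, Medium, ...).
severities = Counter(match["vulnerability"]["severity"] for match in report.get("matches", []))
for severity, count in severities.most_common():
    print(f"{severity:>10}: {count}")
```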

At VMware Explore US 2022, VMware announced new capabilities that will be released in Tanzu Application Platform 1.3. The most important added functionalities for me are:

  • Support for RedHat OpenShift. Tanzu Application Platform 1.3 will be available on RedHat OpenShift, running on vSphere and on bare metal.
  • Support for air-gapped installations. Support for regulated and disconnected environments, helping to ensure that the components, upgrades, and patches are made available to the system and that they operate consistently and correctly in the controlled environment and keep data secure.
  • Carbon Black Integration. Tanzu Application Platform expands the ecosystem of supported vulnerability scanners with a beta integration with VMware Carbon Black scanner to enable customer choice and leverage their existing investments in securing their supply chain.

The Power Combo for Multi-Cloud

A mix of different workloads like virtual machines and containers hosted in multiple clouds introduces complexity. With the powerful combination of Tanzu Mission Control and Tanzu Application Platform, companies can unlock the full potential of their platform teams and developers by reducing complexity while creating and using abstraction layers on top of their multi-cloud infrastructure.

VMware Explore US 2022 – Summary of Day 1 Announcements

VMworld is now VMware Explore and is currently happening in San Francisco! This is a consolidated summary of the announcements from day 1 (August 30, 2022).

VMware Introduces vSphere 8, vSAN 8 and VMware Cloud Foundation+

VMware today introduced VMware vSphere 8 and VMware vSAN 8—major new releases of VMware’s compute and storage solutions.

vSphere 8 – vSphere 8 introduces vSphere on DPUs, previously known as Project Monterey. In close collaboration with technology partners AMD, Intel and NVIDIA as well as OEM system partners Dell Technologies, Hewlett Packard Enterprise and Lenovo, vSphere on DPUs will unlock hardware innovation helping customers meet the throughput and latency needs of modern distributed workloads. vSphere will enable this by offloading and accelerating network and security infrastructure functions onto DPUs from CPUs.

ESXi on DPU

vSphere 8 will dramatically accelerate AI and machine learning applications by doubling the virtual GPU devices per VM, delivering a 4x increase of passthrough devices, and supporting vendor device groups which enable binding of high-speed networking devices and the GPU.

vSAN 8: vSAN 8 introduces breakthrough performance and hyper-efficiency. Built from the ground up, the new vSAN Express Storage Architecture (ESA) will enhance the performance, storage efficiency, data protection and management of vSAN running on the latest generation storage devices. vSAN 8 will provide customers with a future ready infrastructure that supports modern TLC storage devices and delivers up to a 4x performance boost.

VMware Cloud Foundation+ – Built on vSphere+ and vSAN+, VMware Cloud Foundation+ will add a new cloud-connected architecture for managing and operating full-stack HCI in your data center or co-location facility.

VMware Cloud Foundation+ will deliver new admin, developer and hybrid cloud services through a simplified subscription model and keyless entitlement. VMware Cloud Foundation 4.5 will enable VMware Cloud Foundation+ by adding vSphere+ and vSAN+, plus a cloud gateway that provides access to the VMware Cloud Console as part of the full stack architecture.

VMware Cloud for Hyperscalers

VMC on AWS – Amazon Elastic Compute Cloud (Amazon EC2) I4i instances for I/O-intensive Workloads: Powered by 3rd generation Intel® Xeon® Scalable processors (Ice Lake), Amazon EC2 instances help deliver better workload support and delivery, lower TCO, and increased scalability and application performance. Compared to I3, the I4i instances provide nearly twice the number of physical cores, twice the memory, three times the storage capacity, and three times the network bandwidth.

Amazon FSx for NetApp ONTAP Integration Availability – as a native AWS cloud storage service that is certified as a supplemental datastore for VMware Cloud on AWS, FSx for ONTAP offers fully managed shared storage built on the familiar NetApp ONTAP file system trusted by VMware customers running on premises today. Customers can now use FSx for ONTAP as a simple and elastic datastore for VMware Cloud on AWS, enabling them to scale storage up or down independently from compute while paying only for the resources they need.

VMware Cloud Flex Storage Availability – A new VMware-managed and natively integrated cloud storage and data management solution that offers supplemental datastore-level access for VMware Cloud on AWS. With just a few clicks in the VMware Cloud Console, customers can scale their storage environment without adding hosts, and elastically adjust storage capacity up or down as needed for every application. Customers also benefit from a simple, pay-as-you-consume pricing model. Together with VMware vSAN, VMware Cloud Flex Storage offers flexibility and customer value in terms of resilience, performance, scale, and cost in the cloud.

VMware Cloud Flex Compute – “Preview” of a new cloud compute model that will help customers get started faster with VMware Cloud on AWS. With this new model, VMware introduces a “resource-defined” cloud compute model in place of “hardware-defined” compute instance model which will provide customers higher flexibility, elasticity, and speed to better meet cost and performance requirements of enterprise applications. It will help customers get started faster with VMware Cloud on AWS by using smaller consumable units.

Azure VMware Solution – Customers will be able to purchase Azure VMware Solution as part of VMware Cloud Universal, a flexible purchasing and consumption program for executing multi-cloud and digital transformation strategies. VMware Cloud Director Service for Azure VMware Solution is also now available in Public Preview.

Google Cloud VMware Engine – VMware announced VMware Tanzu Standard edition on Google Cloud VMware Engine to help simplify Kubernetes adoption and management.

Oracle Cloud VMware Solution – VMware announced new features and capabilities with VMware Tanzu Standard edition and introduced support for single-host SDDCs for non-production workloads.

VMware Cloud Management – VMware Aria

VMware unveiled a multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.

VMware Aria is a new brand for the vRealize components, Tanzu Observability by Wavefront and CloudHealth unified under one umbrella, one name.

The VMware products and services within the VMware Aria portfolio are:

  • VMware Aria Automation (formerly, vRealize Automation)
  • VMware Aria Operations (formerly, vRealize Operations)
  • VMware Aria Operations for Networks (formerly, vRealize Network Insight)
  • VMware Aria Operations for Logs (formerly, vRealize Log Insight)
  • VMware Aria Operations for Secure Clouds (formerly, CloudHealth Secure State)
  • VMware Aria Cost powered by CloudHealth (formerly, CloudHealth)
  • VMware Aria Operations for Applications (formerly VMware Tanzu Observability)
  • VMware Skyline

VMware Aria Products

VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.

VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.

VMware Aria provides features and functions that span management disciplines and clouds to deliver unique value for multi-cloud governance, cross-cloud migration, and actionable business insights. In addition, there are three new end-to-end management services built on top of VMware Aria Hub and VMware Aria Graph:

  • VMware Aria Guardrails – Automate enforcement of cloud guardrails for networking, security, cost, performance, and configuration at scale for multi-cloud environments with an everything-as-code approach
  • VMware Aria Migration – Accelerate and simplify the multi-cloud migration journey by automating assessment, planning, and execution in conjunction with VMware HCX
  • VMware Aria Business Insights – Discern relevant business insights from full-stack event correlation leveraging AI/ML analytics

Networking and Security

Project Northstar – Project Northstar is a SaaS-based network and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.


DPU-based Acceleration for NSX – Formerly known as Project Monterey, VMware announced that starting with NSX 4.0 and vSphere 8.0, customers can leverage DPU-based acceleration using SmartNICs. Offloading NSX services to the DPU can accelerate networking and security functions without impacting the host CPUs, addressing the needs of modern applications and other network-intensive and latency-sensitive applications.

Image of a SmartNIC

Project Trinidad – Available as tech preview, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.

Project Watch – VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams to continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.

Additionally, VMware NSX Advanced Load Balancer adds new bot management capabilities to help enterprises address threats quickly and efficiently, providing enhanced multi-layer application protection with existing Web Application Firewall, DDoS protection, and API security.

Edge

VMware Edge Compute Stack 2.0 – VMware announced the VMware Edge Compute Stack v1.0 last year and is now adding more features and functionalities optimized for different use cases at the enterprise edge – shipped with vSphere 8 and Tanzu Kubernetes Grid 2.0. VMware, for the first time, will introduce initial support for non-x86 processor-based specialized small form factor edge platforms to simultaneously run IT/OT workloads and workflows on a single stack.

 

VMware Private Mobile Network (Beta) – Delivered by service providers, this new managed service offering provides enterprises with private 4G/5G mobile connectivity in support of edge-native applications. VMware will empower partners with a single PMN orchestrator to operate multi-tenant private 4G/5G networks with an enterprise-grade solution. 

Modern Applications (VMware Tanzu)

Tanzu Application Platform – VMware pre-announced new Tanzu Application Platform (TAP) 1.3 capabilities like the availability on RedHat OpenShift or the support for air-gapped installations for regulated and disconnected environments.

Tanzu Mission Control – Finally, VMware announced the preview for lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters, which enables direct provisioning and management of EKS clusters, which is awesome! I suppose we can expect the support for Azure Kubernetes Service (AKS) also coming very soon.

Tanzu Kubernetes Grid – With the release of TKG 2.0, VMware now includes a unified experience for applications running on any cloud. In the near future, Tanzu Kubernetes Grid 2.0 should support both Supervisor-based and VM-based management cluster models. On vSphere 8, both Supervisor-based and VM-based models will be supported, and VM-based management clusters will continue to be available on previous versions of vSphere and public clouds. This means in other words, that VMware continues with their “TKGS” and “TKGm” flavors.

Tanzu Service Mesh – Also pre-announced, VMware is adding several enterprise and application resiliency capabilities into Tanzu Service Mesh:

  • Support for customer-owned enterprise certificate authority through integration with Venafi
  • Improved security with enterprise-approved container image registries, data services support, external services support
  • and a global SLO dashboard that allows developers and site-reliability engineers to view all managed service SLOs, helping with capacity planning, troubleshooting, and understanding the health of their applications.

Read more about all the Tanzu announcements here.

Anywhere Workspace

VMware unveiled how it is advancing self-configuring, self-healing and self-securing outcomes across four key technology areas that are delivered by the Anywhere Workspace platform:

  • VDI and DaaS
  • Digital Employee Experience
  • Unified Endpoint Management
  • Security

VMware is introducing a next generation of VMware Horizon Cloud that will enable multi-cloud agility and flexibility. This new release represents a major update to Horizon Cloud on Microsoft Azure that can dramatically simplify the infrastructure that needs to be deployed inside customer environments, reducing infrastructure costs in some cases by over 70% while increasing scalability and reliability of VMware’s DaaS platform.

20K user infrastructure cost comparison

Workspace ONE UEM’s Freestyle Orchestrator will be expanding to include support for mobile devices.

Workspace ONE support for Windows OS multi-user mode is now available in Tech Preview for Azure Active Directory-based deployments; and it will soon be extended to Active Directory-based deployments.

VMware also announced the coming tech preview of Workspace ONE Cloud Marketplace, which will feature dashboards, widgets, reports, Freestyle Orchestrator workflows, and other resources that can be imported to help customers adopt additional solutions.

Horizon Managed Desktop –  I am very excited about this announcement, because it will provide a managed service offering that takes care of lifecycle services, support, and more, on top of a customer-provided infrastructure. This will help customers that don’t have in-house experts get to value with VDI faster.

Availability

VMware Cloud Foundation+, VMware vSphere 8, VMware vSAN 8 and VMware Edge Compute Stack 2.0 are all expected to be available by October 28, 2022 (the close of VMware’s Q3 FY23). VMware Private Mobile Network is expected to be available in beta in VMware’s Q3 FY23.

Closing Comment

Not bad for the first day, right? Stay tuned for more exciting VMware Explore announcements!

How I Passed The AWS Certified Solutions Architect Associate Exam

I had no professional experience with Amazon Web Services (AWS) products, currently work for VMware, and passed the AWS Certified Solutions Architect Associate exam (AWS SAA-C02) on my first attempt.

AWS SAA Badge

This article gives you an overview of the content and training material I used to prepare for the AWS SAA-C02 exam.

AWS Certified Solutions Architect Associate Exam

First of all, you need to make yourself familiar with the SAA-C02 exam guide. Do not let the low exam fee of $150 fool you. It is a tough exam! Here are the exam conditions:

  • Testing center can be Pearson VUE or PSI (I would recommend Pearson VUE)
  • You have to answer 65 questions in 130 minutes (the exam includes 15 unscored questions that do not affect your score)
  • Two types of questions – multiple choice and multiple response
  • No hands-on or lab questions
  • Minimum passing score is 720 of 1000
  • I had to wait almost 24 hours to get my result

Even though I was not that well prepared and had to guess many times, the 130 minutes were enough to complete the exam and review some of the flagged questions. If you are a non-native English speaker and English is difficult for you, AWS gives you a 30-minute exam extension for all future exam registrations.

To request this accommodation, follow these steps before you schedule your exam:

  • Sign in to aws.training/certification
  • Select the Go to your Account button
  • Select the Request Exam Accommodations button, followed by Request Accommodation
  • Using the Accommodation Type dropdown, select ESL +30 MINUTES
  • Select the Create button.

How did I prepare for the exam?

I always study with books, if possible. So, the first thing I would recommend is reading the "AWS Certified Solutions Architect Study Guide: Associate SAA-C02 Exam" book, which I bought on Amazon. It helped me quickly get a basic understanding of almost all the core services covered on the exam.

AWS Certified Solutions Architect Study Guide

Now you are prepared to start with one or more online courses.

If you already have a subscription for “A Cloud Guru“, then you are good to go. A lot of people are saying that their platform and content are very good. I worked with the following Udemy resources since I had no access to “A Cloud Guru” when I started studying:

Note: I paid around $15 (instead $100) for Stephane’s training. Jon’s practice exams were around the same (instead of $30 I believe). Wait for these kinds of special offers and do not pay the full price. 😉

I did not have the time to watch the complete 27 hours of on-demand video content, but Stephane Maarek also gives you his slide deck with over 800 pages that covers everything you need to know for the exam. This allowed me to learn faster and be more efficient!

The practice exams from Jon Bonso were very helpful. As a beginner, like me, you get a better idea of the exam and how you can architect solutions and combine different AWS services. In the Udemy comment section, people said that the practice exam questions were more difficult than the ones on the exam. While I recognized a few of the use cases during the exam, I felt the opposite: the questions on the exam were more difficult.

Note: If you think you can just memorize all the answers from the practice exams available on the internet (and Udemy), you are most probably going to fail. You need to know the different core services and understand details like high availability options, encryption, and the characteristics and differences of all the database offerings like Aurora, DynamoDB, ElastiCache, RDS or Redshift.

I recommend going through the slide deck at least two or three times before you start with the practice exams.

Additional Resources

I did not do this for every service, but if you have time, browse through the AWS product-related FAQs. I did it for some of the products that were a little more difficult for me to understand.

Example: Let's say I had a hard time understanding VMware Cloud on AWS (not part of the exam blueprint); the FAQs helped me to better understand the service availability, service definition, configuration options, integrations and use cases.

How much time to prepare?

I started in March 2022 and took the exam at the beginning of June 2022. The plan was to take the exam after 6 weeks of preparation, but with three kids at home and a very busy work schedule, I had to postpone the exam twice. I even had a break of three weeks where I could not study.

If you can focus for at least 4 to 6 weeks, depending also on the video content you want to consume, that is enough time to pass the exam. My plan was to invest 2-3 hours a day.

Note: I did not make use of the AWS Free Tier and hands-on labs from Stephane Maarek.

Are you ready?

Let me know in the comment section below if this study guide was helpful to you. Good luck! 🙂

Interclouds And The Future of Cloud Computing

I am finally taking the time to write this piece about interclouds, workload mobility and application portability. Some of my engagements during the past four weeks led me several times to discussions about interclouds and workload mobility.

Cloud to Cloud Interoperability and Federation

Who would have thought back in 2012 that we would have so many (public) cloud providers like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud etc. in 2022?

Ten years ago, many people and companies were convinced that the future would consist of public cloud infrastructure only and that local self-managed data centers would disappear.

This vision and perception of cloud computing has changed dramatically over the past few years. We see public cloud providers stretching their cloud services and infrastructure to large data centers or edge locations. It seems they realized that the future is going to look different than a lot of people anticipated back then.

I was not aware that the word "intercloud", and the need for it, has apparently existed for quite a while already. Let's take David Bernstein's presentation as an example, which I found by googling "intercloud":

This presentation is about avoiding the mistake of using proprietary protocols and cloud infrastructures that lead to silos and a non-interoperable architecture. He was part of the IEEE Intercloud Working Group (P2302), which was working on a draft standard for "Intercloud Interoperability and Federation (SIIF)" that mentioned the following:

Currently there are no implicit and transparent interoperability standards in place in order for disparate cloud computing environments to be able to seamlessly federate and interoperate amongst themselves. Proposed P2302 standards are a layered set of such protocols, called "Intercloud Protocols", to solve the interoperability related challenges. The P2302 standards propose the overall design of decentralized, scalable, self-organizing federated "Intercloud" topology.

David Bernstein Intercloud

I do not know David Bernstein and the IEEE working group personally, but it would be great to hear from some of them, what they think about the current cloud computing architectures and how they envision the future of cloud computing for the next 5 or 10 years.

As you can see, the wish for an intercloud protocol, or an intercloud, has existed for a while. Let us quickly look at how others define intercloud:

Cisco in 2008 (it seems that David Bernstein worked at Cisco at that time): Intercloud is a network of clouds that are linked with each other. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data.

Teradata: Intercloud is a cloud deployment model that links multiple public cloud services together as one holistic and actively orchestrated architecture. Its activities are coordinated across these clouds to move workloads automatically and intelligently (e.g., for data analytics), based on criteria like their cost and performance characteristics.

The Future of Cloud Computing

I found this post on Twitter on May 19th, 2022:

Alvin Cheung Berkeley Intercloud

Alvin Cheung is an associate professor at Berkeley EECS and wrote the following in his Twitter comments:

we argue that cloud computing will evolve to a new form of inter-cloud operation: instead of storing data and running code on a single cloud provider, apps will run on an inter-operating set of cloud providers to leverage their specialized services / hw / geo etc, much like ISPs.

Alvin and his colleagues wrote a publication titled "A Berkeley View on the Future of Cloud Computing", which mentions the following very early in the PDF:

We predict that this market, with the appropriate intermediation, could evolve into one with a far greater emphasis on compatibility, allowing customers to easily shift workloads between clouds.

[…] Instead, we argue that to achieve this goal of flexible workload placement, cloud computing will require intermediation, provided by systems we call intercloud brokers, so that individual customers do not have to make choices about which clouds to use for which workloads, but can instead rely on brokers to optimize their desired criteria (e.g., price, performance, and/or execution location).

We believe that the competitive forces unleashed by the existence of effective intercloud brokers will create a thriving market of cloud services with many of those services being offered by more than one cloud, and this will be sufficient to significantly increase workload portability.

Intercloud Broker

Organizations place their workloads in that cloud which makes the most sense for them. Depending on different regulations, data classification, different cloud services, locations, or pricing, they then decide which data or workload goes to which cloud.

The people from Berkeley do not necessarily promote a multi-cloud architecture, but have the idea of an intercloud broker that places your workload on the right cloud based on different factors. They see the intercloud as an abstraction layer with brokering services:

In my understanding their idea goes towards the direction of an intelligent and automated cloud management platform that takes the decision where a specific workload and its data should be hosted. And that it, for example, migrates the workload to another cloud which is cheaper than the current one.
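To make the broker idea a little more tangible, here is a deliberately naive sketch of such placement logic; all providers, prices and weights are invented for illustration and bear no relation to real offerings:

```python
# Toy placement broker: pick the "best" cloud for a workload based on weighted criteria.
# All providers, prices and scores below are invented for illustration only.
CLOUDS = {
    "cloud-a": {"price_per_hour": 0.90, "latency_ms": 25, "eu_region": True},
    "cloud-b": {"price_per_hour": 0.70, "latency_ms": 60, "eu_region": False},
    "cloud-c": {"price_per_hour": 1.10, "latency_ms": 15, "eu_region": True},
}

def place(workload: dict) -> str:
    # Hard constraint first: only keep clouds that satisfy the residency requirement.
    candidates = {
        name: c for name, c in CLOUDS.items()
        if not workload["requires_eu_region"] or c["eu_region"]
    }
    # Lower is better: weighted sum of price and (scaled) latency.
    def score(c):
        return (workload["cost_weight"] * c["price_per_hour"]
                + workload["latency_weight"] * (c["latency_ms"] / 100))
    return min(candidates, key=lambda name: score(candidates[name]))

workload = {"requires_eu_region": True, "cost_weight": 0.7, "latency_weight": 0.3}
print(place(workload))  # -> "cloud-a" with these made-up numbers
```

A real intercloud broker, as envisioned in the paper, would of course weigh far more criteria (data gravity, egress cost, compliance) and would also handle the actual migration, not just the placement decision.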

Cloud Native Technologies for Multi-Cloud

Companies are modernizing or rebuilding their legacy applications, or creating new modern applications, using cloud native technologies. Modern applications are collections of microservices, which are light, fault tolerant and small. These microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

The challenge today is that we have modern architectures, new technologies/services and multiple clouds running with different technology stacks. And we have Kubernetes as a framework, which is available in different flavors (DIY or offerings like Tanzu TKG, AKS, EKS, GKE etc.).

Then there is the Cloud Native Computing Foundation (CNCF) and the open source community, which embrace the principle of "open" software that is created and maintained by a community.

It is about building applications and services that can run on any infrastructure, which also means avoiding vendor or cloud lock-in.

Challenges of Interoperability and Multiple Clouds

If you discuss multi-cloud and infrastructure independent applications, you mostly end up with an endless list of questions like:

  • How can we achieve true workload mobility or application portability?
  • How do we deal with the different technology formats and the “language” (API) of each cloud?
  • How can we standardize and automate our deployments?
  • Is latency between clouds a problem?
  • What about my stateful data?
  • How can we provide consistent networking and security?
  • What about identity federation and RBAC?
  • Is the performance of each cloud really the same?
  • How should we encrypt traffic between services in multiple clouds?
  • What about monitoring and observability?

Workload Mobility and Application Portability without an Intercloud

VMware has a different view of, and approach to, how workload mobility and application portability can be achieved.

Their value add and goal is the same, but with a different strategy of abstracting clouds.

VMware is not building an intercloud, but they provide customers a technology stack (compute, storage, networking), or a cloud operating system if you will, that can run on top of every major public cloud provider like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud and Alibaba Cloud.

VMware Workload Mobility

This consistent infrastructure makes it extremely easy to migrate workloads, especially virtual machines and legacy applications, to any location.

What about modern applications and Kubernetes? What about developers who do not care about (cloud) infrastructures?

Project Cascade

At VMworld 2021, VMware announced the technology preview of “Project Cascade” which will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.

The idea is to provide customers a converged IaaS and CaaS consumption service across any cloud, exposed through different Kubernetes APIs.

VMware Project Cascade

I heard the statement “Kubernetes is complex and hard” many times at KubeCon Europe 2022 and Project Cascade is clearly providing another abstraction layer for VM and container orchestration that should make the lives of developers and operators less complex.

Project Ensemble

Another project in tech preview since VMworld last year is "Project Ensemble". It is a multi-cloud management platform that provides an app-centric self-service portal with predictive support.

Project Ensemble will deliver a unified consumption surface that meets the unique needs of the cloud administrator and SRE alike. From an architectural perspective, this means creating a platform designed for programmatic consumption and a firm “API First” approach.

I can imagine that it will be a service that leverages artificial intelligence and machine learning to simplify troubleshooting, and that in the future it will be capable of intelligently placing or migrating your workloads to the most appropriate cloud (for example, based on cost), including all attached networking and security policies.

Conclusion

I believe that VMware is on the right path by giving customers the option to build a cloud-agnostic infrastructure with the necessary abstraction layers for IaaS and CaaS, including the cloud management platform. By providing a common way, or standard, to run virtual machines and containers in any cloud, I am convinced VMware is becoming the de facto standard for infrastructure for many enterprises.

VMware Vision and Strategy 2022

By providing a consistent cloud infrastructure and a consistent developer model and experience, VMware bridges the gap between the developers and operators, without the need for an intercloud or intercloud protocol. That is the future of cloud computing.

 


Multi-Cloud and Sovereign Cloud – Deploy the Right Data to the Right Cloud

According to Gartner, regulated industry customers (such as finance and healthcare) and governments are looking for digital borders. Companies in these sectors are looking to reduce vendor lock-in and single points of failure with their cloud providers, whose data centers sometimes are also outside their country (e.g., Switzerland based customer with an AWS data center in Frankfurt).

The market for cloud technology and services is currently dominated by US and Asian cloud providers and many (European) companies store their data in these regions. There are European regions and data centers, but the geopolitical and legal challenges, concerns about data control, industry compliance and sovereignty are driving the creation of new national clouds.

That is why Gartner sees sovereign clouds as one of the emerging technologies, currently sitting at the start of the hype cycle published in August 2021:

These are the emerging technologies in the 2021 Hype Cycle | IT-Markt

Image Source: https://www.it-markt.ch/news/2021-08-27/das-sind-die-aufstrebenden-technologien-im-hype-cycle-2021

Use Case 1 – Swiss Federal Administration

As an example and first use case I would mention the Swiss federal administration, which doesn’t see the need for an independent technical infrastructure under public law.

In June 2021, they published a statement that the following cloud providers had been selected to become part of the federal administration's initial multi-cloud architecture:

  • Amazon Web Services (AWS)
  • IBM
  • Microsoft
  • Oracle
  • Alibaba

There are several reasons (pricing, market share, local data center availability) that led to this decision to build a multi-cloud architecture with these cloud providers. But it was interesting to read that the government did an assessment and concluded that no technical independent infrastructure is needed – no need for a local sovereign cloud.

This means that they want to keep their existing data centers to provide infrastructure and data sovereignty.

Interestingly, the Swiss confederation is exploring initiatives for secure and trustworthy data infrastructure for Europe and is examining participation in GAIA-X.

Use Case 2 – Current Sovereign Cloud Providers

There are other examples where organizations and governments saw the need for a sovereign cloud. Having a public cloud provider's data center in the same country does not necessarily mean that it is a sovereign cloud per se. Hyperscale clouds often rely on non-domestic resources to maintain their data centers or provide customer support.

Governments and regulated industries say that you need domestic resources to provide a true sovereign cloud.

A good example here is the UK government, which has chosen UKCloud, a provider that delivers a consistent experience spanning the edge, private cloud and sovereign cloud.

Another VMware sovereign cloud provider is AUCloud, which provides IaaS to the Australian government, defense, defense industries and Critical National Industry (CNI) communities.

The third example I would like to highlight is Saudi Telecom Company (STC), that brings sovereign cloud services to Saudi Arabia.

What do UKCloud, AUCloud and STC have in common? They all joined the pretty new VMware Sovereign Cloud initiative and built their sovereign clouds based on VMware technology.

Use Case 3 – Cloud Act

Another motivation for a sovereign cloud could be the Cloud Act, which is a U.S. law that gives American authorities unrestricted access to the data of American IT cloud providers. It does not matter where the data is effectively stored. In the event of a criminal prosecution, the authorities have a free hand and do not even have to notify the data owners.

What does this mean for cloud users? Because of the Cloud Act, they cannot be sure whether, when, and to what extent their data or the data of their customers will be read by foreign authorities.

Use Case 4 – GAIA-X

Let me quote the official explanation of GAIA-X:

The architecture of Gaia-X is based on the principle of decentralization. Gaia-X is the result of many individual data owners (users) and technology players (providers) – all adopting a common standard of rules and control mechanisms – the Gaia-X standard.

Together, we are developing a new concept of data infrastructure ecosystem, based on the values of openness, transparency, sovereignty, and interoperability, to enable trust. What emerges is not a new cloud physical infrastructure, but a software federation system that can connect several cloud service providers and data owners together to ensure data exchange in a trusted environment and boost the creation of new common data spaces to create digital economy.

Gaia-X aims to mitigate Europe's dependency on non-European providers, and there seems to be no pre-defined architecture or preferred vendor when it comes to the underlying cloud platform GAIA-X sits on top of.

While one would believe that a sovereign cloud is mandatory for GAIA-X, it looks more like a cloud-agnostic data exchange platform hosted by European providers and customers.

I am curious how providers build, operate and maintain a sovereign cloud stack based on open-source software.

How real is the need for Sovereign Cloud?

If a company or government wants to keep, extend, and maintain their own local data centers, this is still a valid option of course. But the above examples showed that the need for sovereign clouds exists and that the global interest seems to be growing.

What is the VMware Sovereign Cloud Initiative?

In October 2021, VMware announced their VMware Sovereign Cloud initiative, in which they partner with cloud service providers to deliver a sovereign cloud infrastructure, with cloud services on top, to customers in regulated industries.

To become a so-called VMware Sovereign Cloud Provider, partners must go through an assessment and meet specific requirements (framework) to show their capability to provide a sovereign cloud infrastructure.

VMware defines a sovereign cloud as one that:

  • Protects and unlocks the value of critical data (e.g., national data, corporate data, and personal data) for both private and public sector organizations
  • Delivers a national capability for the digital economy
  • Secures data with audited security controls
  • Ensures compliance with data privacy laws
  • Improves control of data by providing both data residency and data sovereignty with full jurisdictional control

VMware aims to help regulated industry and government customers to execute their cloud strategies by connecting them to VMware Sovereign Cloud Providers (like UKCloud, AUcloud, STC, Tietoevry, ThinkOn or OVHcloud).

Sovereign Cloud Providers in Switzerland

Currently, there is no official VMware sovereign cloud provider in Switzerland. We do have a few strong VMware cloud provider partners as part of the VMware Cloud Provider Program (VCPP):

Let us come back to use case 1 with the Swiss federal administration. They are building a multi-cloud and would have at least 10 potential cloud service providers in Switzerland that could become official VMware Sovereign Cloud Providers.

VMware Sovereign Cloud Borders 

Image Source: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/vmw-sovereign-cloud-solution-brief-customer.pdf

There are other Swiss providers who are building a sovereign cloud based on open-source technologies like OpenStack.

Hyperscalers like Microsoft or Google need to partner with local providers if they want to build a sovereign cloud and deliver services.

VMware already has 4,300+ partners with strategic partnerships and the same technology stack in 120+ countries, and some of them are already sovereign cloud providers, as mentioned before.

VMware Sovereign Cloud initiative

Image Source: https://blogs.vmware.com/cloud/2021/10/06/vmware-sovereign-cloud/

What are the biggest challenges with a multi-cloud and a sovereign cloud infrastructure?

What do you think are the biggest challenges of an organization that builds a multi-cloud with different public cloud providers and sovereign clouds?

Let me list a few questions here:

  • How can I easily migrate my workloads to the public or sovereign cloud?
  • How long does it take to migrate my applications?
  • Which cloud is the right one for a specific workload?
  • Do I need to refactor some of my applications?
  • How can I consistently manage and operate 5 different public/sovereign cloud providers?
  • What if one of my cloud providers is no longer strategic? How can I build a cloud exit strategy?
  • How do I implement and maintain security?
  • What if I want to migrate workloads back from a public cloud to an on-premises (sovereign) cloud?
  • Which Kubernetes am I going to use in all these different clouds?
  • How do I manage and monitor all these different Kubernetes clusters, networking and security policies, create secure application communication between clouds and so on?
  • How do I control costs?

These are just a small number of questions, but I think it would take your organization or your cloud platform team a while to come up with a solution.

What is the VMware approach? Let me list some other articles of mine that help you to better understand the VMware multi-cloud approach:

Conclusion

Public cloud providers build local data centers and provide data residency. Sovereign clouds provide data sovereignty. Resident data may be accessed by a foreign authority while data sovereignty refers to data being subject to privacy laws and governance structures within the nation where that data is collected.

Controlling the location of and access to data in the cloud has become an important task for CIOs and CISOs. I personally believe that sovereign clouds are not something that will become important in two or three years; they are already very important and relevant, and we can expect growth in this area in the coming months.

My conclusion here is that sovereign clouds and public clouds are not competitors; they complement each other.