VMware announced the availability of vRealize Cloud Universal (vRCU) back in September 2020. vRCU is a SaaS management suite that bundles products such as vRealize Operations, vRealize Log Insight, and vRealize Automation, which can be consumed as managed cloud services. VMware still gives you the option to use those subscription licenses for the on-premises products of the vRealize Suite.
These flexible licensing and delivery models enable customers to move at their own pace and give them the flexibility and choice to decide what makes the most sense for them.
Use Cases
I see several use cases where vRealize Cloud Universal makes the most sense for customers:
“I don’t want to deploy and maintain vRealize products”
Companies with many edge locations and no remaining global/regional data centers
As always, VMware offers multiple editions for different use cases:
Standard – Focused on operations
Advanced – Adding automation capabilities
Enterprise – Adding cloud cost optimization, security and compliance
Enterprise Plus – This edition is only available as part of VMware Cloud Universal and as an add-on to VMware Cloud on AWS
Note: You can also consume vRealize Network Insight as a standalone SaaS service since March 2022 with vRealize Network Insight Universal.
VMware Cloud SaaS Services Availability
If you would like to know where the VMware Cloud services are hosted/available, click here.
How can I connect my environment to vRealize Cloud?
To collect and monitor data from your on-premises data center or cloud (VMC on AWS, Azure VMware Solution, Google Cloud VMware Engine), you need to deploy cloud proxies. They are one-way collectors (the outbound connection is initiated from the cloud proxy over TCP/443) that upload your data to, for example, vRealize Operations Cloud.
Note: It seems that vRealize Operations Cloud currently requires its own cloud proxy, while an existing proxy can be shared between vRealize Log Insight Cloud, vRealize AI Cloud, and vRealize Automation Cloud.
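If you want to verify up front that the subnet where a cloud proxy will live actually allows outbound HTTPS, a quick check like the following Python sketch can help. The endpoint shown is just a placeholder; use the upload endpoint documented for your vRealize Cloud service and region.

```python
import socket
import ssl

def can_reach_https(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TLS connection to host:port succeeds."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{port} reachable, negotiated {tls.version()}")
                return True
    except (OSError, ssl.SSLError) as err:
        print(f"{host}:{port} not reachable: {err}")
        return False

# Placeholder endpoint -- replace with the cloud proxy upload endpoint
# documented for your vRealize Cloud region.
can_reach_https("console.cloud.vmware.com")
```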
vRealize Cloud Subscription Manager – Metering and Usage
vRealize Cloud Subscription Manager is a cloud service that integrates with vRealize Suite Lifecycle Manager to collect data for your on-premises deployed products. It also monitors the subscription licenses usage for your SaaS products and visualizes the consumption of all vRealize Cloud components.
vCloud Suite Subscription
What about existing vCloud Suite customers that also bought vSphere Enterprise Plus? For those customers, VMware offers vCloud Suite Subscription, which combines vRealize Cloud Universal with a term license of vSphere Enterprise Plus. vCloud Suite Subscription comes in the same three editions (Standard, Advanced, and Enterprise).
If you are interested in standalone vSphere subscription licensing, have a look at vSphere Advantage.
Upgrades and Add-ons
Standalone vRealize products and vRealize Suite customers can upgrade to vRealize Cloud Universal or vCloud Suite Subscription through the Subscription Upgrade Program (SUP). You can also upgrade between editions within the product.
Summary
To summarize your options:
You can get the standalone vRealize Cloud Universal offering
If you add a vSphere Enterprise Plus license to a vRCU edition (Std, Adv, Ent), it is called vCloud Suite Subscription
I am finally taking the time to write this piece about interclouds, workload mobility and application portability. Some of my engagements during the past four weeks led me several times to discussions about interclouds and workload mobility.
Cloud to Cloud Interoperability and Federation
Who would have thought back in 2012 that we would have so many (public) cloud providers like AWS, Azure, Google Cloud, IBM Cloud, and Oracle Cloud in 2022?
10 years ago, many people and companies were convinced that the future consists of public cloud infrastructure only and that local self-managed data centers are going to disappear.
This vision and perception of cloud computing has dramatically changed over the past few years. We see public cloud providers stretching their cloud services and infrastructure to large data centers or edge locations. It seems they realized that the future is going to look different than a lot of people anticipated back then.
I was not aware that the word “intercloud”, and the need for it, has apparently existed for a long time already. Let’s take David Bernstein’s presentation as an example, which I found by googling “intercloud”:
This presentation is about avoiding the mistake of using proprietary protocols and cloud infrastructures that lead to silos and a non-interoperable architecture. He was part of the IEEE Intercloud Working Group (P2302) which was working on a standard for “Intercloud Interoperability and Federation (SIIF)” (draft), which mentioned the following:
Currently there are no implicit and transparent interoperability standards in place in order for disparate cloud computing environments to be able to seamlessly federate and interoperate amongst themselves. Proposed P2302 standards are a layered set of such protocols, called “Intercloud Protocols”, to solve the interoperability related challenges. The P2302 standards propose the overall design of decentralized, scalable, self-organizing federated “Intercloud” topology.
I do not know David Bernstein and the IEEE working group personally, but it would be great to hear from some of them, what they think about the current cloud computing architectures and how they envision the future of cloud computing for the next 5 or 10 years.
As you can see, the wish for an intercloud protocol or an intercloud has existed for a while. Let us quickly look at how others define intercloud:
Cisco in 2008 (it seems that David Bernstein worked at Cisco at that time): Intercloud is a network of clouds that are linked with each other. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data.
Teradata: Intercloud is a cloud deployment model that links multiple public cloud services together as one holistic and actively orchestrated architecture. Its activities are coordinated across these clouds to move workloads automatically and intelligently (e.g., for data analytics), based on criteria like their cost and performance characteristics.
Alvin Cheung is an associate professor at Berkeley EECS and wrote the following in his Twitter comments:
we argue that cloud computing will evolve to a new form of inter-cloud operation: instead of storing data and running code on a single cloud provider, apps will run on an inter-operating set of cloud providers to leverage their specialized services / hw / geo etc, much like ISPs.
Alvin and his colleagues wrote a publication, “A Berkeley View on the Future of Cloud Computing”, which mentions the following very early in the PDF:
We predict that this market, with the appropriate intermediation, could evolve into one with a far greater emphasis on compatibility, allowing customers to easily shift workloads between clouds.
[…] Instead, we argue that to achieve this goal of flexible workload placement, cloud computing will require intermediation, provided by systems we call intercloud brokers, so that individual customers do not have to make choices about which clouds to use for which workloads, but can instead rely on brokers to optimize their desired criteria (e.g., price, performance, and/or execution location).
We believe that the competitive forces unleashed by the existence of effective intercloud brokers will create a thriving market of cloud services with many of those services being offered by more than one cloud, and this will be sufficient to significantly increase workload portability.
Intercloud Broker
Organizations place their workloads in the cloud that makes the most sense for them. Depending on regulations, data classification, available cloud services, locations, or pricing, they decide which data or workload goes to which cloud.
The people from Berkeley do not necessarily promote a multi-cloud architecture, but have the idea of an intercloud broker that places your workload on the right cloud based on different factors. They see the intercloud as an abstraction layer with brokering services:
In my understanding, their idea goes in the direction of an intelligent and automated cloud management platform that decides where a specific workload and its data should be hosted, and that, for example, migrates the workload to another cloud that is cheaper than the current one.
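To make the broker idea a bit more tangible, here is a toy Python sketch of my own (not anything from the Berkeley paper or an actual product) that scores a set of clouds against weighted criteria such as price and latency, filters out clouds that violate a residency constraint, and picks the best match:

```python
from dataclasses import dataclass

@dataclass
class CloudOffer:
    name: str
    price_per_hour: float   # USD, lower is better
    latency_ms: float       # to the workload's users, lower is better
    region_ok: bool         # satisfies the data-residency constraint

def pick_cloud(offers, price_weight=0.5, latency_weight=0.5):
    """Return the admissible offer with the best weighted score (lower is better)."""
    admissible = [o for o in offers if o.region_ok]
    if not admissible:
        raise ValueError("no cloud satisfies the residency constraint")
    max_price = max(o.price_per_hour for o in admissible)
    max_latency = max(o.latency_ms for o in admissible)
    def score(o):
        return (price_weight * o.price_per_hour / max_price
                + latency_weight * o.latency_ms / max_latency)
    return min(admissible, key=score)

offers = [
    CloudOffer("cloud-a", price_per_hour=0.45, latency_ms=20, region_ok=True),
    CloudOffer("cloud-b", price_per_hour=0.30, latency_ms=55, region_ok=True),
    CloudOffer("cloud-c", price_per_hour=0.25, latency_ms=15, region_ok=False),
]
print(pick_cloud(offers).name)  # cloud-a: best weighted balance among admissible offers
```

A real broker would of course evaluate many more criteria (available services, egress cost, compliance scope), but the principle of optimizing a customer-defined objective across clouds is the same.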
Cloud Native Technologies for Multi-Cloud
Companies are modernizing/rebuilding their legacy applications or creating new modern applications using cloud native technologies. Modern applications are collections of microservices, which are lightweight, fault-tolerant, and small. These microservices can run in containers deployed on a private or public cloud.
Which means that a modern application is something that can adapt to any environment and perform equally well.
The challenge today is that we have modern architectures, new technologies/services, and multiple clouds running with different technology stacks. And we have Kubernetes as a framework, which is available in different formats (DIY or offerings like Tanzu TKG, AKS, EKS, GKE, etc.).
Then there is the Cloud Native Computing Foundation (CNCF) and the open source community, which embrace the principle of “open” software that is created and maintained by a community.
It is about building applications and services that can run on any infrastructure, which also means avoiding vendor or cloud lock-in.
Challenges of Interoperability and Multiple Clouds
If you discuss multi-cloud and infrastructure independent applications, you mostly end up with an endless list of questions like:
How can we achieve true workload mobility or application portability?
How do we deal with the different technology formats and the “language” (API) of each cloud?
How can we standardize and automate our deployments?
Is latency between clouds a problem?
What about my stateful data?
How can we provide consistent networking and security?
What about identity federation and RBAC?
Is the performance of each cloud really the same?
How should we encrypt traffic between services in multiple clouds?
What about monitoring and observability?
Workload Mobility and Application Portability without an Intercloud
VMware has a different view of, and approach to, how workload mobility and application portability can be achieved.
Their value add and goal are the same, but with a different strategy for abstracting clouds.
VMware is not building an intercloud, but they provide customers a technology stack (compute, storage, networking), or a cloud operating system if you will, that can run on top of every major public cloud provider like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Alibaba Cloud.
This consistent infrastructure makes it extremely easy to migrate virtual machines and legacy applications in particular to any location.
What about modern applications and Kubernetes? What about developers who do not care about (cloud) infrastructures?
Project Cascade
At VMworld 2021, VMware announced the technology preview of “Project Cascade” which will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.
The idea is to provide customers a converged IaaS and CaaS consumption service across any cloud, exposed through different Kubernetes APIs.
I heard the statement “Kubernetes is complex and hard” many times at KubeCon Europe 2022 and Project Cascade is clearly providing another abstraction layer for VM and container orchestration that should make the lives of developers and operators less complex.
Project Ensemble
Another project in tech preview since VMworld last year is “Project Ensemble“. It is a multi-cloud management platform that provides an app-centric self-service portal with predictive support.
Project Ensemble will deliver a unified consumption surface that meets the unique needs of the cloud administrator and SRE alike. From an architectural perspective, this means creating a platform designed for programmatic consumption and a firm “API First” approach.
I can imagine that it will be a service that leverages artificial intelligence and machine learning to simplify troubleshooting, and that in the future will be capable of intelligently placing or migrating your workloads to the most appropriate cloud (for example, based on cost), including all attached networking and security policies.
Conclusion
I believe that VMware is on the right path by giving customers the option to build a cloud-agnostic infrastructure with the necessary abstraction layers for IaaS and CaaS, including the cloud management platform. By providing a common way or standard to run virtual machines and containers in any cloud, I am convinced VMware is becoming the de facto standard infrastructure for many enterprises.
By providing a consistent cloud infrastructure and a consistent developer model and experience, VMware bridges the gap between the developers and operators, without the need for an intercloud or intercloud protocol. That is the future of cloud computing.
VMware is increasingly giving their customers the option to move towards a subscription-based licensing model. In general, companies are moving away from large pay-up-front deals and replacing them with recurring subscriptions. Vendors like VMware are making a lot of investments to provide the structures, processes, and capabilities to offer subscription licenses (and SaaS services). Organizations see the benefits of subscription licenses, and this blog describes the current options if you want to move your vSphere perpetual licenses towards a vSphere subscription.
vSphere Advantage – vSphere Subscription Service
Since December 2021, VMware offers vSphere Advantage in limited regions (aka Initial Availability).
vSphere Advantage gives you the flexibility to manage and operate your on-premises vSphere infrastructure while leveraging several VMware Cloud capabilities:
Transition from vSphere perpetual to vSphere subscription-based consumption for your vSphere deployments
Complete view of the globally distributed on-premises vSphere inventory
VMware-managed vCenter Servers (aka Project Arctic, not GA yet)
From a centralized VMware Cloud Console you can monitor events, alerts, capacity utilization, and the security posture of your vSphere infrastructure.
You can now also plan the replacement of your existing vSphere license keys with vSphere Advantage, which enables you to make use of keyless entitlements. This keyless entitlement makes it very easy for customers to stay compliant at all times and to understand their current subscription usage.
To start using vSphere Advantage, you must enable communication between your on-premises vCenter Server and VMware Cloud by using a vCenter Cloud Gateway. This requires an outbound connection (TCP/443, HTTPS) only; no VPN is needed.
Current vCenter Server Requirements:
The vCenter Server version must be 7.0 Update 3a or later (a quick version-check sketch follows after this list)
Configure the vCenter Server with a backup and restore mechanism
Dedicate at least three ESXi hosts for the vCenter Server. (Recommended)
The vCenter Server must be self-managed. It must manage its own ESXi hosts and virtual machines
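As a quick sanity check for the version requirement, the following sketch reads the vCenter Server version through the vSphere API using pyVmomi. The hostname and credentials are placeholders, and the check only compares the version string (7.0 Update 3a reports 7.0.3); distinguishing Update 3a from later patches would require looking at the build number.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details -- replace with your own vCenter and credentials.
VCENTER = "vcenter.lab.local"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"

ctx = ssl.create_default_context()
ctx.check_hostname = False          # lab only; keep certificate checks in production
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    about = si.content.about
    version = tuple(int(p) for p in about.version.split("."))
    print(f"{about.fullName} (version {about.version}, build {about.build})")
    # 7.0 Update 3a reports version 7.0.3; the build number distinguishes 3a from later patches.
    print("meets the 7.0 U3 baseline" if version >= (7, 0, 3) else "upgrade required")
finally:
    Disconnect(si)
```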
Unsupported vCenter Configurations:
Ensure that the vCenter Server is not configured in High Availability mode
If the vCenter Server is configured in Enhanced Linked Mode (ELM), unlink it from ELM. See Repoint a vCenter Server Node to a New Domain. ELM is no longer required because with vSphere Advantage you can monitor your entire vSphere inventory in a single pane of glass.
Ensure that the vCenter Server is not configured with NSX for vSphere, vRealize Operations Manager, Site Recovery Manager, vCloud Suite, or vSAN.
Project Arctic – VMware-Managed vCenter (Roadmap)
VMware introduced Project Arctic at VMworld 2021. Now it’s called vSphere Advantage. While a hybrid cloud operating model for vSphere is becoming the default now, it’s not yet possible to let VMware manage your vCenter Servers. We can expect that this capability will be shipped and made generally available sometime in 2022.
VMware Edge Compute Stack
Edge Compute Stack (ECS) is a purpose-built stack that is available in three different editions (information based on initial availability from VMworld 2021):
As you can see, each VMware Edge Compute Stack edition includes vSphere Enterprise Plus (the hypervisor). Software-defined storage with vSAN is optional, but Tanzu for running containers is always part of each edition.
Note: The Edge Compute Stack includes vSphere subscription licenses.
Other Options
If you are running the VMware Cloud Foundation (VCF) stack and look for a managed service offering, which includes subscription-based licensing, have a look at the following alternatives:
VMware revealed their edge computing vision at VMworld 2021. In VMware’s view the multi-cloud extends from the public clouds to private clouds to edge. Edge is about bringing apps and services closer to where they are needed, especially in sectors like retail, transportation, energy and manufacturing.
In verticals like manufacturing, the edge has always been important. It’s about producing things that you can sell. If you cannot produce, you lose time and money. Reliability, stability, and factory uptime are not new requirements. But why is edge becoming so important now?
Without looking at any analyst report and only drawing on experience from the field, it is clear why. Almost all large enterprises are migrating workloads from their global (central) data centers to the public cloud. At the same time, customers are looking at new innovations and technologies to connect their machines, processes, people, and data in a much more efficient way.
Which requirement did all my customers have in common? They didn’t want to move their dozens or hundreds of edge infrastructures to the public cloud, because the factories should work independently and autonomously in case of a WAN outage for example. Additionally, some VMware technologies were already deployed at the edge.
VMware Edge Compute Stack
This is why VMware introduced the so-called “Edge Compute Stack” (ECS) in October 2021, which provides a unified platform to run VMs alongside containerized applications at the far edge (aka enterprise edge). ECS is a purpose-built stack that is available in three different editions (information based on initial availability from VMworld 2021):
As you can see, each VMware Edge Compute Stack edition includes vSphere Enterprise Plus (the hypervisor); software-defined storage with vSAN is optional, but Tanzu for running containers is always included.
While ECS is great, the purpose of this article is to highlight different solutions and technologies that help you build the foundation for a digital manufacturing platform.
IT/OT Convergence
You most probably have a mix of home-grown and COTS (commercial off-the-shelf) software that needs to be deployed in your edge locations (e.g., factories, markets, shops). In manufacturing, OT (operational technology) vendors have only just started adopting container technologies, due to unique technology requirements and a business model that relies on proprietary systems.
The OT world is typically very hardware-centric and uses proprietary architectures. These systems and architectures, which were put into production 15-20 years ago, are still functional. It just worked.
While these methods and architectures have served well, the manufacturing industry realized that this static and inflexible approach resulted in technical debt that didn’t allow any innovation for a long period of time.
Manufacturing companies are moving to a cloud-native architecture that should provide more flexibility and vendor interoperability with the same focus in mind: To provide a reliable, scalable and flexible infrastructure.
This is where VMware becomes relevant again with their (edge) compute stack. VMware vSphere allows you to run VMs and containers on the same platform. This is true for IT and OT workloads; that’s partial IT/OT convergence.
You may ask yourself how you would then design the network. I’ll come back to this topic in a minute.
Kubernetes Operations
IT platform teams, who design and manage the edge, have to expand their (VMware) platform capabilities to allow them to deploy and host containers. Like I said before, this is why Tanzu is included in all the VMware Edge Compute Stack editions. Kubernetes is the new Infrastructure-as-a-Service (IaaS), so it only makes sense that the container deployment and management capability is included.
How do you provide centralized or regional Kubernetes management and operations if you don’t have a global (regional) data center anymore?
With a hybrid approach: by using Tanzu for Kubernetes Operations (TKO), a set of SaaS services that allows you to run, manage, connect, and secure your container infrastructure across clouds and edge locations.
IT/OT Security
Now you have the right platform to run your IT and OT workloads on the same hypervisor or compute platform. You also have a SaaS-based control plane to deploy and manage your Kubernetes clusters.
As soon as you are dealing with a very dynamic environment where containers exist, you end up having discussions about software-defined networking or virtualized networks. Apart from that, every organization and manufacturer is transforming their network and security at the edge and talking about network segmentation (and cybersecurity!).
Traditionally, you’ll find the Purdue Model implemented, a conceptual model for industrial control systems (ICS) that breaks the network into two zones:
In these IT and OT zones you’ll find subzones that describe different layers and the ICS components. As you can also see, each level is secured by a dedicated physical firewall appliance. From this drawing, one could say that the IT and OT worlds converge in the DMZ layer, because of the bidirectional traffic flow.
VMware is one of the pioneers when it comes to network segmentation that helps you drive IT/OT convergence. This is made possible by using network virtualization. As soon as you are using the VMware hypervisor and its integrated virtual switch, you are already using a virtualized network.
To bring IT and OT closer together and to provide a virtualized network design based on the Purdue Model, including a zero-trust network architecture, you would start looking at VMware NSX to implement that.
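Purely as a conceptual illustration (this is not NSX policy syntax), the following sketch models the Purdue levels as zones with a default-deny rule, where only flows between adjacent levels, or through the DMZ, are allowed:

```python
# Conceptual sketch only -- not NSX DFW syntax, just a way to reason about
# default-deny segmentation between Purdue levels before writing real rules.
ADJACENT_OK = {
    ("L0", "L1"), ("L1", "L2"), ("L2", "L3"),    # OT zone
    ("L3", "DMZ"), ("DMZ", "L4"), ("L4", "L5"),  # IT/OT convergence happens via the DMZ
}

def allowed(src: str, dst: str) -> bool:
    """Default deny: only flows between adjacent levels (either direction) pass."""
    return (src, dst) in ADJACENT_OK or (dst, src) in ADJACENT_OK

print(allowed("L2", "L3"))   # True  -- adjacent levels (supervision to site operations)
print(allowed("L5", "L2"))   # False -- enterprise IT may not reach the control network directly
```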
In level 2 of the Purdue Model, which hosts the systems for supervising, monitoring and controlling the physical process, you will find components like human-machine interfaces (HMI) and supervisory control and data acquisition (SCADA) software.
In level 3, manufacturing execution systems (MES) can be found.
Nowadays, most companies already run their HMIs, SCADAs and MES software in virtual machines on the VMware vSphere hypervisor.
The next big thing is the virtualization of PLCs (programmable logic controllers). A PLC is an industrial computer that controls manufacturing processes, such as machines, assembly lines, and robotic devices. Traditional PLC implementations in hardware are costly and lack scalability.
That is why the company SDA was looking for a less hardware-centric, more software-centric approach and developed the SDA vPLC, which is able to meet sub-10ms performance.
This vPLC solution is based on a hybrid architecture between a cloud system and the industrial workload at the edge, which has been tested on VMware’s Edge Compute Stack.
Monitoring & Troubleshooting
One area, which we haven’t highlighted yet, is the monitoring and troubleshooting of virtual machines (VMs). The majority of your workloads are still VM-based. How do you monitor these workloads and applications, deal with resource and capacity planning/management, and troubleshoot, if you don’t have a central data center anymore?
With the same approach as before – just with a cloud-based service. Most organizations rely on vRealize Operations (vROps) and vRealize Log Insight (vRLI), with which their IT operations and platform teams gain visibility into all the main and edge data centers.
You can still use vROps and vRLI (on-premises) in your factories, but VMware recommends using the vRealize Cloud Universal (vRCU) SaaS management suite, which gives you the flexibility to deploy your vRealize products on-premises or consume them as SaaS. In an edge use case, the SaaS-based control plane just makes sense.
In addition to vRealize Operations Cloud, you can make use of the vRealize True Visibility Suite (TVS), which extends your vRealize Operations platform with management packs and connectors to monitor different compute, storage, network, application, and database vendors and solutions.
Factory VDI
Some of your factories may need virtual apps or desktops, and for edge use cases there are different possible architectures available. Where a factory has a few hundred concurrent users, a dedicated standalone VDI/RDSH deployment might make sense. What if you have hundreds of smaller factories and don’t want to maintain a complete VDI/RDSH infrastructure?
VMware is currently working on a new architecture for VMware Horizon (aka VMware Horizon Next-Generation) and their goal is to provide a single, unified platform across on-premises and cloud environments. They also plan to do that by introducing a pod-less architecture that moves key components to the VMware-hosted Horizon (Cloud) Control Plane.
This architecture is perfectly suited for edge use cases, and with this approach customers can reduce costs, expect increased scalability, improve troubleshooting, and provide a seamless experience for any edge or cloud location.
Management for Enterprise Wearables
If your innovation and tech team are exploring new possibilities with wearable technologies like augmented reality (AR), mixed reality (MR) and virtual reality (VR) head-mounted displays (HMDs), then VMware Workspace ONE Unified Endpoint Management (UEM) can help you to securely manage these devices!
Workspace ONE UEM is very strong when it comes to the modern management of Windows Desktop and macOS operating systems, and device management (Android/iOS).
Conclusion
As you can see, VMware has a lot to offer for the enterprise edge. Organizations that are multi-cloud and keep their edge locations on-premises have a lot of new technologies and possibilities available nowadays.
VMware’s strengths unfold as soon as you combine different solutions. And these solutions help you work on your priorities and requirements to build the right foundation for a digital manufacturing platform.
According to Gartner, customers in regulated industries (such as finance and healthcare) and governments are looking for digital borders. Companies in these sectors are looking to reduce vendor lock-in and single points of failure with their cloud providers, whose data centers are sometimes also located outside their country (e.g., a Switzerland-based customer with an AWS data center in Frankfurt).
The market for cloud technology and services is currently dominated by US and Asian cloud providers and many (European) companies store their data in these regions. There are European regions and data centers, but the geopolitical and legal challenges, concerns about data control, industry compliance and sovereignty are driving the creation of new national clouds.
That is why Gartner sees sovereign clouds as one of the emerging technologies, currently at the start of the hype cycle published in August 2021:
As an example and first use case, I would mention the Swiss federal administration, which doesn’t see the need for an independent technical infrastructure under public law.
In June 2021, they published a statement that the following cloud providers had been selected to become part of the federal administration’s initial multi-cloud architecture:
Amazon Web Services (AWS)
IBM
Microsoft
Oracle
Alibaba
There were several reasons (pricing, market share, local data center availability) that led to the decision to build a multi-cloud architecture with these cloud providers. But it was interesting to read that the government did an assessment and concluded that no technically independent infrastructure is needed – no need for a local sovereign cloud.
This means that they want to keep their existing data centers to provide infrastructure and data sovereignty.
Interestingly, the Swiss confederation is exploring initiatives for secure and trustworthy data infrastructure for Europe and is examining participation in GAIA-X.
Use Case 2 – Current Sovereign Cloud Providers
There are other examples where organizations and governments saw the need for a sovereign cloud. Having a public cloud provider’s data center in the same country does not necessarily mean that it’s a sovereign cloud per se. Hyperscale clouds often rely on non-domestic resources to maintain their data centers or provide customer support.
Governments and regulated industries say that you need domestic resources to provide a true sovereign cloud.
A good example here is the UK government, which has chosen the provider UKCloud, which delivers a consistent experience that spans the edge, private cloud, and sovereign cloud.
Another VMware sovereign cloud provider is AUCloud, which provides IaaS to the Australian government, defense, defense industries, and Critical National Industry (CNI) communities.
The third example I would like to highlight is Saudi Telecom Company (STC), which brings sovereign cloud services to Saudi Arabia.
What do UKCloud, AUCloud, and STC have in common? They all joined the relatively new VMware Sovereign Cloud initiative and built their sovereign clouds on VMware technology.
Use Case 3 – Cloud Act
Another motivation for a sovereign cloud could be the Cloud Act, which is a U.S. law that gives American authorities unrestricted access to the data of American IT cloud providers. It does not matter where the data is effectively stored. In the event of a criminal prosecution, the authorities have a free hand and do not even have to notify the data owners.
What does this mean for cloud users? Because of the Cloud Act, they cannot be sure whether, when, and to what extent their data or the data of their customers will be read by foreign authorities.
Use Case 4 – GAIA-X
Let me quote the official explanation of GAIA-X:
The architecture of Gaia-X is based on the principle of decentralization. Gaia-X is the result of many individual data owners (users) and technology players (providers) – all adopting a common standard of rules and control mechanisms – the Gaia-X standard.
Together, we are developing a new concept of data infrastructure ecosystem, based on the values of openness, transparency, sovereignty, and interoperability, to enable trust. What emerges is not a new cloud physical infrastructure, but a software federation system that can connect several cloud service providers and data owners together to ensure data exchange in a trusted environment and boost the creation of new common data spaces to create digital economy.
Gaia-X aims to mitigate Europe’s dependency on non-European providers, and there seems to be no pre-defined architecture or preferred vendor when it comes to the underlying cloud platform GAIA-X sits on top of.
While one would believe that a sovereign cloud is mandatory for GAIA-X, it looks more like a cloud-agnostic data exchange platform hosted by European providers and customers.
I am curious how providers build, operate and maintain a sovereign cloud stack based on open-source software.
How real is the need for Sovereign Cloud?
If a company or government wants to keep, extend, and maintain their own local data centers, this is still a valid option of course. But the above examples showed that the need for sovereign clouds exists and that the global interest seems to be growing.
What is the VMware Sovereign Cloud Initiative?
In October 2021, VMware announced their VMware Sovereign Cloud initiative, in which they partner with cloud service providers to deliver sovereign cloud infrastructure, with cloud services on top, to customers in regulated industries.
To become a so-called VMware Sovereign Cloud Provider, partners must go through an assessment and meet specific requirements (framework) to show their capability to provide a sovereign cloud infrastructure.
VMware defines a sovereign cloud as one that:
Protects and unlocks the value of critical data (e.g., national data, corporate data, and personal data) for both private and public sector organizations
Delivers a national capability for the digital economy
Secures data with audited security controls
Ensures compliance with data privacy laws
Improves control of data by providing both data residency and data sovereignty with full jurisdictional control
VMware aims to help regulated industry and government customers to execute their cloud strategies by connecting them to VMware Sovereign Cloud Providers (like UKCloud, AUcloud, STC, Tietoevry, ThinkOn or OVHcloud).
Sovereign Cloud Providers in Switzerland
Currently, there is no official VMware sovereign cloud provider in Switzerland. We have a few strong VMware cloud provider partners as part of the VMware Cloud Provider Program (VCPP):
Let us come back to use case 1 with the Swiss federal administration. They are building a multi-cloud and, in Switzerland, would have at least 10 potential cloud service providers that could become official VMware Sovereign Cloud Providers.
There are other Swiss providers who are building a sovereign cloud based on open-source technologies like OpenStack.
Hyperscalers like Microsoft or Google need to partner with local providers if they want to build a sovereign cloud and deliver services.
VMware already has 4,300+ partners, with strategic partnerships and the same technology stack, in 120+ countries, and some of them are already sovereign cloud providers, as mentioned before.
What are the biggest challenges with a multi-cloud and a sovereign cloud infrastructure?
What do you think are the biggest challenges of an organization that builds a multi-cloud with different public cloud providers and sovereign clouds?
Let me list a few questions here:
How can I easily migrate my workloads to the public or sovereign cloud?
How long does it take to migrate my applications?
Which cloud is the right one for a specific workload?
Do I need to refactor some of my applications?
How can I consistently manage and operate 5 different public/sovereign cloud providers?
What if one of my cloud providers is no longer strategic? How can I build a cloud exit strategy?
How do I implement and maintain security?
What if I want to migrate workloads back from a public cloud to an on-premises (sovereign) cloud?
Which Kubernetes am I going to use in all these different clouds?
How do I manage and monitor all these different Kubernetes clusters, networking and security policies, create secure application communication between clouds and so on?
How do I control costs?
These are just a few of the questions, but I think it would take your organization or your cloud platform team a while to come up with answers.
What is the VMware approach? Let me list some other articles of mine that help you to better understand the VMware multi-cloud approach:
Public cloud providers build local data centers and provide data residency. Sovereign clouds provide data sovereignty. Resident data may be accessed by a foreign authority while data sovereignty refers to data being subject to privacy laws and governance structures within the nation where that data is collected.
Controlling the location of, and access to, data in the cloud has become an important task for CIOs and CISOs. I personally believe that sovereign clouds are not something that becomes important in 2 or 3 years; they are already very important and relevant, and we can expect growth in this area in the coming months.
My conclusion here is that sovereign clouds and public clouds are not competitors; they complement each other.
VMware Cloud on AWS (VMC on AWS) brings VMware’s software-defined data center (SDDC) stack to the AWS cloud. By using the same vSphere-based virtualization/cloud technology on-premises and in the public cloud, you can create a true hybrid cloud architecture that enables consistent operations through consistent infrastructure.
This solution comes with optimized access to the AWS services and is delivered, sold and supported by VMware, AWS and their partner networks.
As you can see above, VMC on AWS comes with the same VMware tools and integrates the VMware Cloud Foundation stack (vSphere for compute, vSAN for storage, NSX for networking) along with vCenter for management.
VMware Cloud on AWS comes with two different host configurations, which both require a minimum of two hosts per cluster.
For identifying the right host types for specific use cases, check out the VMware Cloud on AWS sizer.
Note: 99.9% SLA for non-stretched clusters, 99.99% for stretched clusters
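To put these SLA figures into perspective, here is a small Python calculation of the downtime budget they translate to over a 30-day month:

```python
def downtime_budget(sla_percent: float, days: int = 30) -> float:
    """Allowed downtime in minutes for a given SLA over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(f"99.9%  SLA -> {downtime_budget(99.9):.1f} minutes of downtime per month")   # ~43.2
print(f"99.99% SLA -> {downtime_budget(99.99):.2f} minutes of downtime per month")  # ~4.32
```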
Single Host Starter Configuration
VMC on AWS allows you to deploy a starter configuration with a single host only (not available with i3en.metal hosts).
This small SDDC configuration allows customers to get their first experience with this hybrid cloud offering during a 60-day time period. Such a setup is only appropriate for test and development or proof-of-concept use cases. You can run production workloads on this small VMC on AWS environment only if you scale up to the minimum of two hosts before the 60-day period ends; otherwise your evaluation ends and you lose your data.
Note: Not all features of the standard VMC service offering are available in this limited setting. The VMC on AWS service level agreement also does not apply to this one-node offering.
Included VMware Software
The following software is included in single host and production configurations:
Single Host (non-production environments)

Includes:
VMware SDDC software: vSphere, vSAN, NSX-T, vCenter Server
VMware HCX
Dedicated Amazon EC2 bare-metal instances
VMware Global Support

Purchase separately:
VMware Site Recovery
VMware Cloud Disaster Recovery
VMware vRealize Automation Cloud
VMware vRealize Operations Cloud
VMware vRealize Log Insight Cloud
VMware vRealize Network Insight Cloud
VMware Tanzu Standard

Not supported:
Lifecycle management by VMware (updates, patches and upgrades)
High Availability (HA) and Stretched Clusters
Service Level Agreement (SLA)

Production (minimum 2 hosts)

Includes:
VMware SDDC software: vSphere, vSAN, NSX-T, vCenter Server
VMware HCX
VMware Tanzu Services: TKG Service + TMC Essentials
Dedicated Amazon EC2 bare-metal instances
VMware Global Support
Lifecycle management by VMware (updates, patches and upgrades)
Support for High Availability (HA) and Stretched Clusters
Service Level Agreement (SLA)

Purchase separately:
VMware Site Recovery
VMware Cloud Disaster Recovery
VMware NSX Advanced Firewall
VMware vRealize Automation Cloud
VMware vRealize Operations Cloud
VMware vRealize Log Insight Cloud
VMware vRealize Network Insight Cloud
VMware Tanzu Standard
VMware Cloud on AWS Outposts
If you want to get the agility and innovation of (VMware) Cloud in your own data center, delivered as a service, then VMC on AWS Outposts is for you.
VMC on AWS Outposts is a fully managed, on-premises, as-a-service offering that stretches VMC on AWS to your data center or edge location. You’ll get dedicated Amazon Nitro-based EC2 bare-metal instances delivered on-premises with VMware Cloud Foundation running on top.
What’s included in the offering?
AWS Outposts 42u rack (we can also expect a half-rack offering in the future)
Configurations of 3-8 hosts based on i3en.metal
Dark host capacity included (for remediation, EDRS, scale-out and lifecycle management purposes)
Installed by AWS
AWS managed dedicated Nitro-based i3en.metal EC2 instance with local SSD storage
VMware managed SDDC software – vSphere, vSAN, NSX-T, vCenter Server
VMware HCX
VMware Cloud Console
Support by VMware SREs
Supply chain, shipment logistics and onsite installation by AWS
Ongoing hardware monitoring with break/fix support.
Use Cases
VMware Cloud on AWS Outposts is made for multiple use cases:
Data/App Locality
Low latency
Local data processing
Data sovereignty/compliance
Infrastructure modernization
Branch office or large edge modernization
But this offering, and VMC on AWS in general, comes with multiple other use cases which help organizations fulfill their cloud strategy.
App Modernization
VMware Cloud on AWS provides an infrastructure platform option on which customers can modernize their existing enterprise applications, and enables them to run their enterprise workloads of today and tomorrow. With VMware Cloud on AWS, customers can run, monitor, and manage their Kubernetes clusters and virtual machines – all on the same infrastructure. VMware Tanzu Kubernetes Grid provides a consistent, upstream-compatible distribution of Kubernetes that is tested, signed, and supported by VMware. Tanzu Kubernetes Grid is central to many of the offerings in the VMware Tanzu portfolio.
VMC on AWS can help customers to expand to new locations. Maybe it’s an unplanned project or there are temporary or seasonal capacity needs. Some customers are also using such an offering to build a flexible test, lab or training environment in the public cloud.
Adopt a robust, feature-rich cloud platform for virtual desktops and applications that can be used to deliver a complete VDI infrastructure from the cloud. Or you can extend an existing on-premises VDI environment for desktop bursting, protection, or proximity to applications running in AWS. Optimize infrastructure costs with flexible, consumption-based billing while paying only for what you use.
Another typical use case is disaster recovery. Customers are looking for an offsite approach with which they can prepare themselves for different kinds of scenarios with “warm standby” or “active/active” configurations. There are different architectural options and also different solutions from VMware available, e.g.:
How can you bridge the gap between on-premises data centers and VMC on AWS to enable application migrations or workload mobility? HCX creates an encrypted, high-throughput, WAN-optimized, load-balanced, traffic-engineered hybrid interconnect and automates the creation of network extensions.
In short: VMware HCX can interconnect different vSphere-based clouds and with that you achieve a fabric for workload mobility by using vMotion over different clouds. It even preserves existing network connections!
Imagine how much easier and faster application migrations can be done now.
Let’s see if there is a future where customers need full workload mobility, with regular migrations from and to different clouds. Maybe there is a customer who migrates workloads today from on-prem to VMC on AWS, tomorrow to Azure VMware Solution, the next week to Google Cloud VMware Engine, and in the end back to an on-premises data center where another fully managed service like VMC on Dell EMC is deployed. 😀
VMware Cloud on AWS with Tanzu Services
As mentioned above, VMware Cloud on AWS includes the “Tanzu Kubernetes Grid Service” and “Tanzu Mission Control Essentials”.
VMware Cloud with Tanzu Services has been introduced at VMworld 2021 as the “Easy path to enterprise-grade Kubernetes on a fully managed, multi-cloud ready IaaS and CaaS platform”:
This was also when Tanzu Services became available for VMC on AWS with the following capabilities:
Managed Tanzu Kubernetes Grid Service: Provision Tanzu Kubernetes clusters within a few minutes using a simple, fast, and self-service experience in the VMware Cloud console. The underlying SDDC infrastructure and capacity required for Kubernetes workloads is fully managed by VMware. Use vCenter Server for managing Kubernetes workloads by deploying Kubernetes clusters, provisioning role-based access and allocating capacity for Developer teams. Manage multiple TKG clusters as namespaces with observability, troubleshooting and resiliency in vCenter Server.
Built in support for Tanzu Mission Control Essentials: Attach upstream compliant Kubernetes clusters including Amazon EKS and Tanzu Kubernetes Grid clusters. Manage lifecycle for Tanzu Kubernetes Grid clusters and centralize platform operations for Kubernetes clusters using the Kubernetes management plane offered by Tanzu Mission Control. Tanzu Mission Control provides a global visibility across clusters and clouds and increases security and governance by automating operational tasks such as access and security management at scale.
Did you know that the Tanzu Mission Control Standard Package is included with TMC Essentials?
As of November 2021, new clusters registered with TMC have the Carvel package manager (the kapp-controller) deployed within the cluster. The “Catalog” page in the Tanzu Mission Control console allows you to view packages available from the Tanzu Standard repository (and your own custom Carvel package repositories) and install them in your Kubernetes clusters.
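Because the Tanzu Standard repository and its packages are just Carvel custom resources, the same information shown on the Catalog page can also be read directly from an attached cluster. Here is a minimal sketch using the Kubernetes Python client, assuming kapp-controller’s usual packaging.carvel.dev and data.packaging.carvel.dev API groups and a kubeconfig pointing at the cluster:

```python
from kubernetes import client, config

# Assumes your kubeconfig points at a cluster where kapp-controller is installed
# (e.g., a cluster registered with Tanzu Mission Control after November 2021).
config.load_kube_config()
custom = client.CustomObjectsApi()

# List the configured Carvel package repositories across all namespaces.
repos = custom.list_cluster_custom_object(
    group="packaging.carvel.dev", version="v1alpha1", plural="packagerepositories")
for repo in repos.get("items", []):
    print("repository:", repo["metadata"]["name"])

# List the packages (name + version) those repositories make available.
packages = custom.list_cluster_custom_object(
    group="data.packaging.carvel.dev", version="v1alpha1", plural="packages")
for pkg in packages.get("items", []):
    spec = pkg.get("spec", {})
    print(spec.get("refName"), spec.get("version"))
```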
Application Transformer for VMware Tanzu for VMC on AWS
Application Transformer for VMware Tanzu is a tool that aids organizations in discovering application types, visualizing application topology, choosing a modernization approach based on scores, and containerizing and migrating suitable legacy applications to enhance business outcomes. As an agentless tool, Application Transformer for Tanzu utilizes the VMware vCenter API to introspect VMs across an entire vSphere or VMware Cloud on AWS-based data center.
Application Transformer can help you to convert virtual machines and application components to OCI-compliant container images, that then can be deployed into the Tanzu Kubernetes stack.
There are several ways for customers to get access to Application Transformer for VMware Tanzu:
The good news for everyone is that Application Transformer for VMware Tanzu became generally available in February 2022. With this, VMware Cloud on AWS customers now also have limited access to this offering, through an integration with the VMware Cloud console. If customers want full access to Application Transformer, they need to buy Tanzu Standard, Tanzu Advanced, Tanzu for Kubernetes Operations, or App Navigator.
Features & Roadmap
VMware provides a lot of information about the features and roadmap of VMware Cloud on AWS.
VMC on AWS FAQ
There is a large collection of FAQs available that can be found here.
My name is Michael Rebmann. I am a cloud strategist at Oracle, helping public sector organizations and enterprise customers design sovereign and compliant cloud architectures using OCI. I focus on sovereign cloud, hybrid cloud infrastructure, and data privacy in regulated industries.
The views and opinions expressed here are entirely my own, reflecting my journey and insights.