VMworld 2021 – My Content Catalog and Session Recommendation


VMworld 2021 is happening on October 6-7, 2021 (EMEA). This year you can expect many sessions and presentations about the options you have when combining different products – options that help you reduce complexity, provide more automation and therefore create less overhead.

Let me share my 5 personal favorite picks and also 5 recommended sessions based on the conversations I had with multiple customers this year.

My 5 Personal Picks

10 Things You Need to Know About Project Monterey [MCL1833]

Project Monterey was announced in the VMworld 2020 keynote. There has been tremendous work done since then. Hear Niels Hagoort and Sudhansu Jain talking about SmartNICs and how they will redefine the data center with decoupled control and data planes – for ESXi hosts and bare-metal systems. They are going to cover and demo the overall architecture and use cases!

Upskill Your Workforce with Augmented and Virtual Reality and VMware [VI1596]

Learn from Matt Coppinger how augmented reality (AR) and virtual reality (VR) are transforming employee productivity, and how these solutions can be deployed and managed using VMware technologies. Matt is going to cover the top enterprise use cases for AR/VR as well as the challenges you might face deploying these emerging technologies. Are you interested in how to architect and configure VMware technologies to deploy and manage the latest AR/VR technology, applications and content? If yes, then this session is also for you.

Addressing Malware and Advanced Threats in the Network [SEC2027] (Tech+ Pass Only)

I am very interested in learning more about cybersecurity. With Chad Skipper, VMware has an expert who can give insights into how the Network Detection and Response (NDR) capabilities of NSX Advanced Threat Prevention provide visibility, detection and prevention of advanced threats.

60 Minutes of Non-Uniform Memory Access (NUMA) 3rd Edition [MCL1853]

Learn more about NUMA from Frank Denneman. You are going to learn about the underlying configuration of a virtual machine and discover the connection between the General-Purpose Graphics Processing Unit (GPGPU) and the NUMA node. You will also understand how your knowledge of NUMA concepts in your cluster can help developers by aligning the Kubernetes nodes to the physical infrastructure with the help of the VM Service.

Mount a Robust Defense in Depth Strategy Against Ransomware [SEC1287]

Are you interested in learning more about how to protect against, detect, respond to and recover from cybersecurity attacks across all technology stacks, regardless of their purpose or location? Learn more from Amanda Blevins about the VMware solutions for end users, private clouds, public clouds and modern applications.

5 Recommended Sessions based on Customer Conversations

Cryptographic Agility: Preparing for Quantum Safety and Future Transition [VI1505]

A lot of work is needed to better understand cryptographic agility and how we can address and manage the expected challenges that come with quantum computing. Hear VMware’s engineers from the Advanced Technology Group talking about the requirements of crypto agility and VMware’s recent research work on post-quantum cryptography in the VMware Unified Access Gateway (UAG) project.

Edge Computing in the VMware Office of the CTO: Innovations on the Horizon [VI2484]

Let Chris Wolf give you some insight into VMware's strategic direction in support of edge computing. He is going to talk about solutions that will drive down costs while accelerating the velocity and agility with which new apps and services can be delivered to the edge.

Delivering a Continuous Stream of More Secure Containers on Kubernetes [APP2574]

In this session you can see how to use two capabilities in VMware Tanzu Advanced, Tanzu Build Service and Tanzu Application Catalog, to feed a continuous stream of patched and compliant containers into your continuous delivery (CD) system. A must-attend session delivered by David Zendzian, the VMware Tanzu Global Field CISO.

A Modern Firewall For any Cloud and any Workload [SEC2688]

VMware NSX firewall reimagines East-West security by using a distributed, software-based approach to attach security policies to every workload in any cloud. Chris Kruegel gives you insights on how to stop lateral movement with advanced threat prevention (ATP) capabilities via IDS/IPS, sandboxing, NTA and NDR.

A Practical Approach for End-to-End Zero Trust [SEC2733]

Hear the VMware CTOs Shawn Bass, Pere Monclus and Scott Lundgren talk about a zero trust approach. They will discuss specific capabilities that enable customers to achieve a zero trust architecture that is aligned to the NIST guidance and covers secure access for users as well as secure access to workloads.

Enjoy VMworld 2021! 🙂

 

Application Modernization and Multi-Cloud Portability with VMware Tanzu


It was 2019 when VMware announced Tanzu and Project Pacific. A lot has happened since then and almost everyone is talking about application modernization nowadays. With my strong IT infrastructure background, I had to learn a lot of new things to survive initial conversations with application owners, developers and software architects. And at the same time VMware's Kubernetes offering grew and became very complex – not only for customers, but for everyone I believe. 🙂

I already wrote about VMware’s vision with Tanzu: To put a consistent “Kubernetes grid” over any cloud

This is the simple message and value hidden behind the much larger topics when discussing application modernization and application/data portability across clouds.

The goal of this article is to give you a better understanding about the real value of VMware Tanzu and to explain that it’s less about Kubernetes and the Kubernetes integration with vSphere.

Application Modernization

Before we can talk about the modernization of applications or the different migration approaches like:

  • Retain – Optimize and retain existing apps, as-is
  • Rehost/Migration (lift & shift) – Move an application to the public cloud without making any changes
  • Replatform (lift and reshape) – Put apps in containers and run in Kubernetes. Move apps to the public cloud
  • Rebuild and Refactor – Rewrite apps using cloud native technologies
  • Retire – Retire traditional apps and convert to new SaaS apps

…we need to have a look at the palette of our applications:

  • Web Apps – Apache Tomcat, Nginx, Java
  • SQL Databases – MySQL, Oracle DB, PostgreSQL
  • NoSQL Databases – MongoDB, Cassandra, Prometheus, Couchbase, Redis
  • Big Data – Splunk, Elasticsearch, ELK stack, Greenplum, Kafka, Hadoop

In an app modernization discussion, we very quickly start to classify applications as microservices or monoliths. From an infrastructure point of view you look at apps differently and call them “stateless” (web apps) or “stateful” (SQL, NoSQL, Big Data) apps.
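
To make the stateless vs. stateful distinction concrete on the Kubernetes side: stateless web apps usually end up as Deployments (any replica can serve any request), while stateful services such as databases end up as StatefulSets with persistent volumes. The following minimal Python sketch only builds the two kinds of manifests as data structures and prints them – the names, images and sizes are illustrative assumptions, and you would still apply the manifests with kubectl or a Kubernetes client.

```python
import json

# Stateless web tier: a Deployment - replicas are interchangeable, no persistent identity.
web_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "nginx", "image": "nginx:1.21"}]},
        },
    },
}

# Stateful database tier: a StatefulSet - stable identity and a persistent volume per replica.
db_statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "postgres"},
    "spec": {
        "serviceName": "postgres",
        "replicas": 1,
        "selector": {"matchLabels": {"app": "postgres"}},
        "template": {
            "metadata": {"labels": {"app": "postgres"}},
            "spec": {"containers": [{
                "name": "postgres",
                "image": "postgres:13",
                "volumeMounts": [{"name": "data", "mountPath": "/var/lib/postgresql/data"}],
            }]},
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {"accessModes": ["ReadWriteOnce"],
                     "resources": {"requests": {"storage": "20Gi"}}},
        }],
    },
}

if __name__ == "__main__":
    # Print the manifests; in practice you would hand them to kubectl or a Kubernetes client.
    print(json.dumps(web_deployment, indent=2))
    print(json.dumps(db_statefulset, indent=2))
```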

And with Kubernetes we are trying to overcome the challenges that come with stateful applications. Related to app modernization, the common questions are:

  • What does modernization really mean?
  • How do I define “modernization”?
  • What is the benefit of modernizing applications?
  • What are the tools? What are my options?

What has changed? Why is everyone talking about modernization? Why are we talking so much about Kubernetes and cloud native? Why now?

To understand the benefits (and challenges) of app modernization, we can start by looking at IBM's definition:

"Application modernization is the process of taking existing legacy applications and modernizing their platform infrastructure, internal architecture, and/or features. Much of the discussion around application modernization today is focused on monolithic, on-premises applications – typically updated and maintained using waterfall development processes – and how those applications can be brought into cloud architecture and release patterns, namely microservices."

Modern applications are collections of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

Note: App modernization can also mean that you must move your application from .NET Framework to .NET Core.

I have a customer that is just getting started with the app modernization topic and has hundreds of Windows applications based on the .NET Framework. Porting an existing .NET Framework app to .NET Core requires some work, but it is the general recommendation for the future. It also gives you the option to run your .NET Core apps on Windows, Linux and macOS (instead of Windows only).

A modern application is something that can run on bare metal, VMs, public cloud and containers, and that easily integrates with any component of your infrastructure. It must be elastic – something that can grow and shrink depending on load and usage. Since it needs to be able to adapt, it must be agile and therefore portable.

Cloud Native Architectures and Modern Designs

If I ask my VMware colleagues from our so-called MAPBU (Modern Application Platform Business Unit) how customers can achieve application portability, the answer is always: “Cloud Native!”

Many organizations and people see cloud native as going to Kubernetes. But cloud native is so much more than the provisioning and orchestration of containers with Kubernetes. It's about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructure.

There are so many definitions of "cloud native" that Kamal Arora from Amazon Web Services and others wrote the book "Cloud Native Architecture", which describes a maturity model. This model helps you understand that cloud native is more of a journey than a restrictive definition.

Cloud Native Maturity Model

The adoption of cloud services and an application-centric design are very important, but the book also mentions that security and scalability rely on automation. This, for example, can bring the requirement for Infrastructure as Code (IaC), as sketched below.
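
The Infrastructure as Code idea behind that automation requirement boils down to declaring the desired state as data and letting tooling work out what has to change, instead of clicking changes together by hand. Here is a deliberately tool-agnostic Python sketch of that reconcile step – it is not tied to any real IaC product, and the resource names and the drifted "current state" are invented for illustration.

```python
# Minimal illustration of the Infrastructure-as-Code idea: desired state is data,
# and a reconcile step computes what has to change. Not tied to any real tool.

desired_state = {
    "web-segment": {"type": "network", "cidr": "10.10.0.0/24"},
    "web-cluster": {"type": "k8s-cluster", "workers": 5},
}

current_state = {
    "web-segment": {"type": "network", "cidr": "10.10.0.0/24"},
    "web-cluster": {"type": "k8s-cluster", "workers": 3},   # drifted from the desired state
}

def reconcile(desired, current):
    """Return the actions an IaC tool would plan: create missing, update drifted, delete extra."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name, current[name]))
    return actions

if __name__ == "__main__":
    for action, name, spec in reconcile(desired_state, current_state):
        print(f"{action:7s} {name}: {spec}")
```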

In the past, virtualization – moving from bare-metal to vSphere – didn’t force organizations to modernize their applications. The application didn’t need to change and VMware abstracted and emulated the bare-metal server. So, the transition (P2V) of an application was very smooth and not complicated.

And this is what has changed today. We have new architectures, new technologies and new clouds running with different technology stacks. We have Kubernetes as a framework, which requires applications to be redesigned for these platforms.

That is the reason why enterprises have to modernize their applications.

One of the "five R's" mentioned above is the lift-and-shift approach. If you don't want or need to modernize some of your applications, but still want to move them to the public cloud in an easy, fast and cost-efficient way, have a look at VMware's Hybrid Cloud Extension (HCX).

In this article I focus more on the replatform and refactor approaches in a multi-cloud world.

Kubernetize and productize your applications

Assuming that you also define Kubernetes as the standard for orchestrating the containers your microservices run in, the next decision is usually about the Kubernetes "product" (on-premises, OpenShift, public cloud).

Looking at the current CNCF Cloud Native Landscape, we can count over 50 storage vendors and over 20 network vendors providing cloud native storage and networking solutions for containers and Kubernetes.

Talking to my customers, most of them mention the storage and network integration as one of their big challenges with Kubernetes. Their concern is about performance, resiliency, different storage and network patterns, automation, data protection/replication, scalability and cloud portability.

Why do organizations need portability?

There are many use cases and requirements where portability (infrastructure independence) becomes relevant. Maybe it's about a hardware refresh or a data center evacuation, about avoiding vendor/cloud lock-in, about insufficient performance of the current infrastructure, or about dev/test environments where resources are deployed and consumed on demand.

Multi-Cloud Application Portability with VMware Tanzu

To explore the value of Tanzu, I would like to start by setting the scene with the following customer use case:

In this case the customer is following a cloud-appropriate approach to define which cloud is the right landing zone for their applications. They decided to develop new applications in the public cloud and use the native services from Azure and AWS. The customer still has hundreds of legacy applications (monoliths) on-premises and hasn't decided yet if they want to follow a "lift and shift and then modernize" approach to migrate a number of applications to the public cloud.

Multi-Cloud App Portability

But some of their application owners already gave the feedback that their applications are not allowed to be hosted in the public cloud, have to stay on-premises and need to be modernized locally.

At the same time, the IT architecture team receives feedback from other application owners that the journey to the public cloud is great on paper but brings huge operational challenges with it. So IT operations asks the architecture team if they can do something about that problem.

The cloud operations teams for Azure and AWS deliver a different quality of service, changes and deployments take longer with one of the public clouds, and there are problems with overlapping networks, different storage performance characteristics and different APIs.

Another challenge is the role-based access to the different clouds, Kubernetes clusters and APIs. There is no central log aggregation and no observability (intelligent monitoring & alerting). Traffic distribution and load balancing are further items on this list.

Because of the feedback from operations to architecture, IT engineering received the task of defining a multi-cloud strategy that solves this operational complexity.

Note: These are the typical multi-cloud challenges, where clouds become the new silos and enterprises have different teams with different expertise using different management and security tools.

This is when VMware's multi-cloud approach with Tanzu becomes very interesting for such customers.

Consistent Infrastructure and Management

The first discussion point here would be the infrastructure. It's important that the different private and public clouds are not handled and seen as silos. VMware's approach is to connect all the clouds with the same underlying technology stack based on VMware Cloud Foundation.

Besides the fact that lift-and-shift migrations become very easy, this approach brings two very important advantages for containerized workloads and the cloud infrastructure in general. It solves the challenge of the huge storage and networking ecosystem available for Kubernetes workloads by using vSAN and NSX Data Center in any of the existing clouds. Storage, networking and security are now integrated and consistent.

For existing workloads running natively in public clouds, customers can use NSX Cloud, which uses the same management plane and control plane as NSX Data Center. That’s another major step forward.

Using consistent infrastructure enables consistent operations and automation for customers.

Consistent Application Platform and Developer Experience

Looking at an organization's application and container platforms, consistent infrastructure is not required, but it is obviously very helpful in terms of operational and cost efficiency.

To provide a consistent developer experience and to abstract the underlying application or Kubernetes platform, you would follow the same VMware approach as always: to put a layer on top.

Here the solution is called Tanzu Kubernetes Grid (TKG), which provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed and supported by VMware.

A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes open-source software that is built and supported by VMware. In all the offerings, you provision and use Tanzu Kubernetes clusters in a declarative manner that is familiar to Kubernetes operators and developers. The different Tanzu Kubernetes Grid offerings provision and manage Tanzu Kubernetes clusters on different platforms, in ways that are designed to be as similar as possible, but that are subtly different.
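
"Declarative" in practice means you describe the cluster you want as a Kubernetes resource and hand it to the Supervisor or management cluster, which then creates and maintains it. The sketch below uses the Python kubernetes client to submit a TanzuKubernetesCluster object; the API group/version, field names and sizing values differ between TKG releases and are assumptions here, so treat this as an illustration of the pattern rather than a reference.

```python
from kubernetes import client, config

# Desired cluster described as data. Group/version and field names differ between
# TKG releases; the values below are illustrative assumptions, not a reference.
tkc_manifest = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "demo-cluster", "namespace": "demo-namespace"},
    "spec": {
        "distribution": {"version": "v1.20"},
        "topology": {
            "controlPlane": {"count": 1, "class": "best-effort-small", "storageClass": "vsan-default"},
            "workers":      {"count": 3, "class": "best-effort-medium", "storageClass": "vsan-default"},
        },
    },
}

def create_cluster():
    # Assumes your kubeconfig context points at the Supervisor / management cluster.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="run.tanzu.vmware.com",
        version="v1alpha1",
        namespace="demo-namespace",
        plural="tanzukubernetesclusters",
        body=tkc_manifest,
    )

if __name__ == "__main__":
    print(create_cluster()["metadata"]["name"], "requested")
```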

VMware Tanzu Kubernetes Grid (TKG aka TKGm)

Tanzu Kubernetes Grid can be deployed across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. I would assume that Google Cloud is a roadmap item.

TKG allows you to run Kubernetes with consistency and makes it available to your developers as a utility, just like the electricity grid. TKG provides services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

This TKG version is also known as TKGm for “TKG multi-cloud”.

VMware Tanzu Kubernetes Grid Service (TKGS aka vSphere with Tanzu)

TKGS is the option vSphere admins want to hear about first, because it allows you to turn a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. TKGS is what was known as "Project Pacific" in the past.

Once enabled on a vSphere cluster, vSphere with Tanzu creates a Kubernetes control plane directly in the hypervisor layer. You can then run Kubernetes containers by deploying vSphere Pods, or you can create upstream Kubernetes clusters through the VMware Tanzu Kubernetes Grid Service and run your applications inside these clusters.

VMware Tanzu Mission Control (TMC)

In our use case before, we have AKS and EKS for running Kubernetes clusters in the public cloud.

The VMware solution for multi-cluster Kubernetes management across clouds is called Tanzu Mission Control, which is a centralized management platform for the consistency and security the IT engineering team was looking for.

Available through VMware Cloud Services as a SaaS offering, TMC provides IT operators with a single control point to provide their developers self-service access to Kubernetes clusters.

TMC also provides cluster lifecycle management for TKG clusters across environments such as vSphere, AWS and Azure.

It allows you to bring the clusters you already have in the public clouds or other environments (with Rancher or OpenShift for example) under one roof via the attachment of conformant Kubernetes clusters.

Not only do you gain global visibility across clusters, teams and clouds, but you also get centralized authentication and authorization, consistent policy management and data protection functionalities.
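
Attaching an existing conformant cluster to Tanzu Mission Control is typically a two-step flow: ask TMC for an attach manifest and then apply that manifest to the cluster so the cluster agent can register itself. The sketch below simply wraps the tmc CLI and kubectl with subprocess; the exact commands, flags and manifest file name depend on your TMC CLI version, so treat them as assumptions and verify against the current documentation.

```python
import subprocess

def attach_cluster(cluster_name: str, kube_context: str) -> None:
    """Attach an existing conformant cluster to Tanzu Mission Control.

    The tmc CLI invocation and the manifest file name below are assumptions based on
    the typical 'generate manifest, then apply it' flow and may differ per CLI version.
    """
    # 1. Ask TMC to register the cluster and emit an attach manifest (assumed command/flags).
    subprocess.run(["tmc", "cluster", "attach", "--name", cluster_name], check=True)

    # 2. Apply the generated manifest against the target cluster so the agent can register.
    subprocess.run(
        ["kubectl", "--context", kube_context, "apply", "-f", "k8s-attach-manifest.yaml"],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical cluster and kubeconfig context names.
    attach_cluster("eks-prod-01", "eks-prod-01")
```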

VMware Tanzu Observability by Wavefront (TO)

Tanzu Observability extends the basic observability provided by TMC with enterprise-grade observability and analytics.

Wavefront by VMware helps Tanzu operators, DevOps teams, and developers get metrics-driven insights into the real-time performance of their custom code, Tanzu platform and its underlying components. Wavefront proactively detects and alerts on production issues and improves agility in code releases.

TO is also a SaaS-based platform that can handle the high-scale requirements of cloud native applications.
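
For custom metrics, Wavefront/Tanzu Observability typically ingests data either through its SDKs or through a Wavefront proxy, which accepts a simple line format ("metric value [timestamp] source=... tags") on TCP port 2878 by default. Here is a minimal sketch that pushes one metric through an assumed local proxy; the metric name, source and tags are made up for the example.

```python
import socket
import time

# Assumed proxy address; 2878 is the default Wavefront proxy port for metrics.
WAVEFRONT_PROXY = ("localhost", 2878)

def send_metric(name: str, value: float, source: str, **tags: str) -> None:
    """Send one point in Wavefront line format: '<metric> <value> <timestamp> source=<src> k="v" ...'."""
    tag_str = " ".join(f'{k}="{v}"' for k, v in tags.items())
    line = f"{name} {value} {int(time.time())} source={source} {tag_str}\n"
    with socket.create_connection(WAVEFRONT_PROXY) as conn:
        conn.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    # Hypothetical application metric for illustration.
    send_metric("demo.checkout.latency.ms", 42.0, source="app01", env="test")
```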

VMware Tanzu Service Mesh (TSM)

Tanzu Service Mesh, formerly known as NSX Service Mesh, provides consistent connectivity and security for microservices across all clouds and Kubernetes clusters. TSM can be installed in TKG clusters and third-party Kubernetes-conformant clusters.

Organizations that are using or looking at the popular Calico cloud native networking option for their Kubernetes ecosystem often consider an integration with Istio (Service Mesh) to connect services and to secure the communication between these services.

The combination of Calico and Istio can be replaced by TSM, which is built on VMware NSX for networking and uses an Istio data plane abstraction. This version of Istio is signed and supported by VMware and is the same as the upstream version. TSM brings enterprise-grade support for Istio and a simplified installation process.

One of the primary constructs of Tanzu Service Mesh is the concept of a Global Namespace (GNS). GNS allows developers using Tanzu Service Mesh, regardless of where they are, to connect application services without having to specify (or even know) any underlying infrastructure details, as all of that is done automatically. With the power of this abstraction, your application microservices can "live" anywhere, in any cloud, allowing you to make placement decisions based on application and organizational requirements, not infrastructure constraints.

Note: On the 18th of March 2021 VMware announced the acquisition of Mesh7 and the integration of Mesh7’s contextual API behavior security solution with Tanzu Service Mesh to simplify DevSecOps.

Tanzu Editions

The VMware Tanzu portfolio comes with three different editions: Basic, Standard, Advanced

Tanzu Basic enables the straightforward implementation of Kubernetes in vSphere so that vSphere admins can leverage familiar tools used for managing VMs when managing clusters = TKGS

Tanzu Standard provides multi-cloud support, enabling Kubernetes deployment across on-premises, public cloud, and edge environments. In addition, Tanzu Standard includes a centralized multi-cluster SaaS control plane for a more consistent and efficient operation of clusters across environments = TKGS + TKGm + TMC

Tanzu Advanced builds on Tanzu Standard to simplify and secure the container lifecycle, enabling teams to accelerate the delivery of modern apps at scale across clouds. It adds a comprehensive global control plane with observability and service mesh, consolidated Kubernetes ingress services, data services, container catalog, and automated container builds = TKG (TKGS & TKGm) + TMC + TO + TSM + MUCH MORE

Tanzu Data Services

Another topic to reduce dependencies and avoid vendor lock-in would be Tanzu Data Services – a separate part of the Tanzu portfolio with on-demand caching (Tanzu Gemfire), messaging (Tanzu RabbitMQ) and database software (Tanzu SQL & Tanzu Greenplum) products.

Bringing it all together

As always, I’m trying to summarize and simplify things where needed and I hope it helped you to better understand the value and capabilities of VMware Tanzu.

There are many more products available in the Tanzu portfolio that help you build, run, manage, connect and protect your applications.

If you would like to know more about application and cloud transformation, make sure to attend the 45-minute VMware event on March 31 (Americas) or April 1 (EMEA/APJ)!

Data Center as a Service based on VMware Cloud Foundation


IT organizations are looking for consistent operations, which is enabled by consistent infrastructure. Public cloud providers like AWS and Microsoft offer an extension of their cloud infrastructure and native services to the private cloud and edge, which is also known as Data Center as a Service.

Amazon Web Services (AWS) provides a fully managed service with AWS Outposts, which offers AWS infrastructure, AWS services, APIs and tools in any data center or on-premises facility.

Microsoft has Azure Stack and is even working on a new Azure Stack hybrid cloud solution, codenamed "Fiji", to provide the ability to run Azure as a managed local cloud.

What do these offerings have in common or why would customers choose one (or even both) of these hybrid cloud options?

They bring the public cloud operating model to the private cloud or edge in the form of one or more racks and servers provided as a fully managed service.

AWS Outposts (generally available since December 2019) and Azure Stack Fiji (in development) provide the following:

  • Extension of the public cloud services to the private cloud and edge
  • Consistent infrastructure with consistent operations
  • Local processing of data (e.g., analytics at the data source)
  • Local data residency (governance and security)
  • Low latency access to on-premises systems
  • Local migrations and modernization of applications with local system interdependencies
  • Build, run and manage on-premises applications using existing and familiar services and tools
  • Modernize applications on-premises or at the edge
  • Prescriptive infrastructure and vendor managed lifecycle and maintenance (racks and servers)
  • Creation of different physical pools and clusters depending on your compute and storage needs (different form factors)
  • Same licensing and pricing options on-premises (like in the public cloud)

The relatively new AWS Outposts and the future Azure Stack Fiji solution are also called "Local Cloud as a Service" (LCaaS) or "Data Center as a Service" and are meant to be consumed and delivered in the on-premises data center or at the edge. It's about bringing the public cloud to your data center or edge location.

The next phase of cloud transformation is about the "edge" of an enterprise cloud, and we know today that private and hybrid cloud strategies are critical for the implementation and operation of IT infrastructure.

From VMware's standpoint, it's not about extending the public cloud to the local data center. It's about extending your VMware-based private cloud to the edge or the public cloud.

This article focuses on the local (private) cloud as a service options from VMware, not the public cloud offerings.

In case you would like to know more about VMware’s multi-cloud strategy, which is about running the VMware Cloud Foundation stack on top of a public cloud like AWS, Azure or Google, please check some of my recent posts.

Features and Technologies

Before I describe the different VMware LCaaS offerings based on VMware Cloud Foundation, let me show and explain the different features and technologies my customers ask about when they plan to build a private cloud with public cloud characteristics in mind.

I work with customers from different verticals like

  • finance
  • fast-moving consumer goods
  • manufacturing
  • transportation (travel)

which are hosting IT infrastructure in multiple data centers all over the world, including hundreds of smaller locations. Despite their different vertical markets, they are looking for the same features and technologies when it comes to edge computing and delivering a managed cloud on-premises.

Compute and Storage. They are looking for pre-validated and standardized configuration offerings to meet their (application) needs. Most of them describe hardware blueprints with t-shirt sizes (small, medium, large). These different servers or instances provide different options and attributes, which should provide enough CPU, RAM, storage and networking capacity based on their needs. Usually you'll find terms like "general purpose", "compute optimized" or "memory optimized" node types or instances.
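
In practice these blueprints often end up as a small catalog that provisioning automation consumes when new capacity is requested. The snippet below is a trivial sketch of such a catalog; all node types and numbers are invented for illustration and do not describe any actual vendor instance types.

```python
# Hypothetical hardware blueprint catalog ("t-shirt sizes") that provisioning
# automation could consume. All values are illustrative, not vendor specs.
NODE_BLUEPRINTS = {
    "small":  {"profile": "general purpose",   "cpu_cores": 24, "ram_gb": 256,  "storage_tb": 10},
    "medium": {"profile": "compute optimized", "cpu_cores": 48, "ram_gb": 512,  "storage_tb": 20},
    "large":  {"profile": "memory optimized",  "cpu_cores": 48, "ram_gb": 1024, "storage_tb": 40},
}

def pick_blueprint(required_ram_gb: int) -> str:
    """Return the smallest t-shirt size that satisfies a RAM requirement."""
    for size in ("small", "medium", "large"):
        if NODE_BLUEPRINTS[size]["ram_gb"] >= required_ram_gb:
            return size
    raise ValueError("no blueprint large enough")

if __name__ == "__main__":
    print(pick_blueprint(400))   # -> 'medium'
```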

Networking. Most of my customers look for the possibility to extend their current network (aka elastic or cloud-scale networking) to any other cloud. They prefer a way to use the existing network and security policies and to provide software-defined networking (SDN) services like routing, firewalling, IDS/IPS and load balancing – also known as virtualized network functions (VNF). Service providers are also looking at network function virtualization (NFV), which includes emerging technologies like 5G and IoT. As cloud native or containerized applications become more important, service providers also discuss containerized network functions (CNF).

Services. Applications consist of one or many (micro-)services. All my conversations are application-centric and focus on the different application components. Most of my discussions are about containers, databases and video/data analytics at the edge.

Security. Customers that are running workloads in the public cloud are familiar with the shared responsibility model. The difference between a public cloud and a local cloud as a service offering is the physical security (racks, servers, network transits, data center access, etc.).

Scalability and Elasticity. IT providers want to provide the same simplicity and agility on-premises that their customers (the business) would expect from a public cloud provider. Scalability is about a planned level of capacity that can grow or shrink as needed.

Resource Pooling and Sharing. Larger enterprises and service providers are interested in creating dedicated workload domains and resource clusters, but also look for a way to provide infrastructure multi-tenancy.

The challenge for today's IT teams is that edge locations are often not well defined. These teams need an efficient way to manage different infrastructure sizes (ranging from 2 nodes up to 16 or 24 nodes), sometimes for up to 400 edge locations.

Rethinking Private Clouds

Organizations have two choices when it comes to the deployment of a private cloud extension to the edge. They could continue using the current approach, which includes the design, deployment and operation of their own private cloud. Another, relatively new option is subscribing to a predefined "Data Center as a Service" offering.

Enterprises need to develop and implement a cloud strategy that supports the existing workloads, which are still mostly running on VMware vSphere, and build something that is vendor- and cloud-agnostic – something that provides a (public) cloud exit strategy at the same time.

If you decide to go for AWS Outposts or the upcoming Azure Stack Fiji solution, which are certainly great options, how would you migrate or evacuate workloads to another cloud and technology stack?

VMware Cloud on Dell EMC

At VMworld 2019 VMware announced the general availability of VMware Cloud on Dell EMC (VMC on Dell EMC). Introduced in 2018 as "Project Dimension", the idea behind this concept was to deliver a (public) cloud experience to customers on-premises – to give customers the best of both worlds:

The simplicity, flexibility and cost model of the public cloud with the security and control of your private cloud infrastructure.

VMware Cloud on Dell EMC

Initially, Project Dimension was focused primarily on edge use cases and was not optimized for larger data centers.

Note: This has changed with the introduction of the 2nd generation of VMC on Dell EMC in May 2020 to support different density and performance use cases.

VMC on Dell EMC is a VMware-managed service offering with these components:

  • A software-defined data center based on VMware Cloud Foundation (VCF) running on Dell EMC VxRail
    • ESXi, vSAN, NSX, vCenter Server
    • HCX Advanced
  • Dell servers, management & ToR switches, racks, UPS
    • Standby VxRail node for expansion (unlicensed)
    • Option for half or full-height rack
  • Multiple cluster support in a single rack
    • Clusters start with a minimum of 3 nodes (not 4 as you would expect from a regular VCF deployment)
  • VMware SD-WAN (formerly known as VeloCloud) appliances for remote management purposes only at the moment
  • Customer self-service provisioning through cloud.vmware.com
  • Maintenance, patching and upgrades of the SDDC performed by VMware
  • Maintenance, patching and upgrades of the Dell hardware performed by VMware (Dell provides firmware, drivers and BIOS updates)
  • 1- or 3-year term subscription commitment (like with VMC on AWS)

There is no "one size fits all" when it comes to hosting workloads at the edge and in your data centers. VMC on Dell EMC also provides different hardware node types, which should match your defined t-shirt sizes (blueprints).

VMC on Dell EMC HW Node Types

For a small edge location with a maximum of 5 server nodes, you would go for a half-height rack. The full-height rack can host up to 24 nodes (8 clusters). Currently, the largest instance type would be a good match for high-density, storage-hungry workloads such as VDI deployments, databases or video analytics.

As HCX is part of the offering, you have the right tool and license included to migrate workloads between vSphere-based private and public clouds.

The following is a list of some VMworld 2020 breakout sessions presented by subject matter experts and focused on VMware Cloud on Dell EMC:

HCP1831: Building a successful VDI solution with VMware Cloud on Dell EMC – Andrew Nielsen, Sr. Director, Workload and Technical Marketing, VMware

HCP1802: Extend Hybrid Cloud to the Edge and Data Center with VMware Cloud on Dell EMC – Varun Chhabra, VP Product Marketing, Dell

HCP1834: Second-Generation VMware Cloud on Dell EMC, Explained by Product Experts – Neeraj Patalay, Product Manager, VMware

VMware Cloud Foundation and HPE Synergy with HPE GreenLake

At VMworld 2019 VMware announced that VMware Cloud Foundation will be offered in HPE’s GreenLake program running on HPE Synergy composable infrastructure (Hybrid Cloud as a Service). This gives VMware customers the opportunity to build a fully managed private cloud with the public cloud benefits in an on-premises environment.

HPE’s vision is built on a single platform that can span across multiple clouds and GreenLake brings the cloud consumption model to joint HPE and VMware customers.

Today, this solution is fully supported and sold by HPE. In case you want to know more, have a look at the VMworld 2020 session Simplify IT with HPE GreenLake Cloud Services and VMware from Erik Vogel, Global VP, Customer Experience, HPE GreenLake, Hewlett Packard Enterprise.

VMC on AWS Outposts

If you are an AWS customer and look for a consistent hybrid cloud experience, then you would consider AWS Outposts.

There is also a VMware variant of AWS Outposts available for customers who already run their on-premises workloads on VMware vSphere or in a vSphere-based cloud environment running on top of the AWS global infrastructure (VMC on AWS).

VMware Cloud on AWS Outposts is an on-premises as-a-service offering based on VMware Cloud Foundation. It integrates VMware's software-defined data center software, including vSphere, vSAN and NSX. This Cloud Foundation stack runs on dedicated elastic Amazon EC2 bare-metal infrastructure, delivered on-premises with optimized access to local and remote AWS services.

VMC on AWS Outposts

Key capabilities and use cases:

  • Use familiar VMware tools and skillsets
  • No need to rewrite applications while migrating workloads
  • Direct access to local and native AWS services
  • Service is sold, operated and supported by VMware
  • VMware as the single point of primary contact for support needs, supplemented by AWS for hardware shipping, installation and configuration
  • Host-level HA with automated failover to VMware Cloud on AWS
  • Resilient applications required to work in the event of WAN link downtime
  • Application modernization with access to local and native AWS services
  • 1- or 3-year term subscription commitment
  • 42U AWS Outposts rack, fully assembled and installed by AWS (including ToR switches)
  • Minimum cluster size of 3 nodes (plus 1 dark node)
  • Current cluster maximum of 16 nodes

Currently, VMware is running a VMware Cloud on AWS Outposts beta program that lets you try the pre-release software on AWS Outposts infrastructure. An early access program should start in the first half of 2021, which can be considered a customer-paid proof of concept intended for new workloads only (no migrations).

VMware on Azure Stack

To date there are no plans communicated by Microsoft or VMware to make Azure VMware Solution, the vSphere-based cloud offering running on top of Azure, available on-premises on the current or future Azure Stack family.

VMware on Google Anthos

To date there are no plans communicated by Google or VMware to make Google Cloud VMware Engine, the vSphere-based cloud offering running on top of the Google Cloud Platform (GCP), available on-premises.

The only known supported combination of a Google Cloud offering running VMware on-premises is Google Anthos (Google Kubernetes Engine on-prem).

Multi-Cloud Application Portability

Multi-cloud is now the dominant cloud strategy and many of my customers are maintaining a vSphere-based cloud on-premises and use at least two of the big three public clouds (AWS, Azure, Google).

Following a cloud-appropriate approach, customers inspect each application and decide which cloud (private or public) would be the best one to run it on. VMware gives customers the option to run the Cloud Foundation technology stack in any cloud, which doesn't mean that these customers are not also going cloud-native and adding AWS and Azure to the mix.

How can I achieve application portability in a multi-cloud environment when the underlying platform and technology formats differ from each other?

This is a question I hear a lot. Kubernetes is seen as THE container orchestration tool, which at the same time can abstract multiple public clouds and the complexity that comes with them.

A lot of people also believe that Kubernetes alone is enough to provide application portability, and they figure out later that they have to use different Kubernetes APIs and management consoles for every cloud and Kubernetes flavor (e.g., Rancher, Azure, AWS, Google, Red Hat OpenShift) they work with.
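
The day-to-day symptom of this is juggling one kubeconfig context (and often a separate console and auth plugin) per cluster and per cloud. As a small illustration, the sketch below uses the Python kubernetes client to loop over whatever contexts your kubeconfig happens to contain and list the nodes in each; the cluster and context names are whatever your environment provides.

```python
from kubernetes import client, config

def list_nodes_per_context() -> None:
    """Iterate over all kubeconfig contexts (EKS, AKS, on-prem, ...) and print their nodes."""
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api_client = config.new_client_from_config(context=name)   # one client per cluster
        v1 = client.CoreV1Api(api_client)
        nodes = [n.metadata.name for n in v1.list_node().items]
        print(f"{name}: {len(nodes)} node(s) -> {nodes}")

if __name__ == "__main__":
    list_nodes_per_context()
```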

That’s the moment we have to talk about VMware Tanzu and how it can simplify things for you.

The Tanzu portfolio provides the building blocks and steps for modernizing your existing workloads while providing the capabilities of Kubernetes. Additionally, Tanzu has broad support for containerization across the entire application lifecycle.

Tanzu gives you the possibility to build, run, manage, connect and protect applications and to achieve multi-cloud application portability with a consistent platform over any cloud – the so-called “Kubernetes grid”.

Note: I’m not talking about the product “Tanzu Kubernetes Grid” here!

I’m talking about the philosophy to put a virtual application service layer over your multi-cloud architecture, which provides a consistent application platform.

Tanzu Mission Control is a product under the Tanzu umbrella that provides central management and governance of containers and clusters across data centers, public clouds, and edge.

Conclusion

Enterprises must be able to extend the value of their cloud investments to the edge of the organization.

The edge is just one piece of a bigger picture and customers are looking for a hybrid cloud approach in a multi-cloud world.

Solutions like VMware Cloud on Dell EMC or running VCF on HPE Synergy with HPE GreenLake are only the first steps towards innovation in the private cloud and towards bringing the public cloud cost and operating model to enterprises on-premises.

Going forward, IT organizations are rather looking for ways to consume services and care less about building the infrastructure or services themselves.

The two most important differentiators for selecting an as-a-service infrastructure solution provider will be the provider’s ability to enable easy/consistent connectivity and the provider’s established software partner portfolio.

In cases where IT organizations want to host a self-managed data center or local cloud, you can expect that VMware is going to provide a new and appropriate licensing model for it.

Multi-Tenancy on VMware Cloud Foundation with vRealize Automation and Cloud Director


In my article VMware Cloud Foundation And The Cloud Management Platform Simply Explained I wrote about why customers need a VMware Cloud Foundation technology stack and what a VMware cloud management platform is.

One of the reasons and one of the essential characteristics of a cloud computing model I mentioned is resource pooling.

The National Institute of Standards and Technology (NIST) defines resource pooling with the following words:

The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).

This time I would like to focus on multi-tenancy and how you can achieve that on top of VMware Cloud Foundation (VCF) with Cloud Director (formerly known as vCloud Director) and vRealize Automation, which both could be part of a VMware cloud management platform (CMP).

Multi-Tenancy

There are many understandings around about multi-tenancy and different people have different definitions for it.

If we start from the top of an IT infrastructure, we have application or software multi-tenancy: a single instance of an application serving multiple tenants – in the past sometimes even running on the same virtual or physical server. In this case the multi-tenancy feature is built into the software, which is commonly accessed by a group of users with specific permissions. Each tenant gets a dedicated or isolated share of this application instance.
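
A minimal way to picture application-level multi-tenancy is a single application instance whose data access is always scoped by a tenant identifier. The sketch below uses an in-memory SQLite database purely as an illustration; real multi-tenant applications add authentication, per-tenant configuration and much stricter isolation on top of this basic idea.

```python
import sqlite3

# One shared application instance / database, with rows tagged per tenant.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant-a", "A-001", 100.0), ("tenant-a", "A-002", 250.0), ("tenant-b", "B-001", 99.0)],
)

def invoices_for(tenant_id: str):
    """Every query is scoped to the calling tenant - the core of application-level multi-tenancy."""
    cur = db.execute("SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,))
    return cur.fetchall()

if __name__ == "__main__":
    print(invoices_for("tenant-a"))   # tenant A only sees its own rows
    print(invoices_for("tenant-b"))
```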

Coming from the bottom of the data center, multi-tenancy describes the isolation of resources (compute, storage) and networks to deliver applications. The best examples here are (cloud) service providers.

Their goal is to create and provide virtual data centers (VDC) or a virtual private cloud (VPC) on top of the same physical data center infrastructure – for different tenants, aka customers. Normally, the right VMware solution for this requirement and for service providers would be Cloud Director, but this is maybe not completely true anymore with the release of vRealize Automation 8.x.

To make it easier for all of us, I’ll call Cloud Director and vCloud Director “vCD” from now on.

VMware Cloud Director (formerly vCloud Director)

Cloud Director is a product exclusively for cloud service providers via the VMware Cloud Provider Program (VCPP). Originally released in 2010, it enables service providers (SPs) to provision SDDC (Software-Defined Data Center) services as complete virtual data centers. vCD also keeps resources from different tenants isolated from each other.

Within vCD a unit of tenancy is called Organization VDC (OrgVDC). It is defined as a set of dedicated compute (CPU, RAM), storage and network resources. A tenant can be bound to a single OrgVDC or can be composed of multiple Organization VDCs. This is typically known as Infrastructure as a Service (IaaS).

A provider virtual data center (PVDC) is a grouping of compute, storage, and network resources from a single vCenter Server instance. Multiple organizations/tenants can share provider virtual data center resources.

Cloud Director Resource Abstraction

A lot of customers and VCPP partners have now started to offer their cloud services (IaaS, PaaS, SaaS etc.) based on VMware Cloud Foundation. For private and hybrid cloud scenarios, but also in the public cloud as a managed cloud service (VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine, Alibaba Cloud VMware Solution and more).

Important: I assume that you are familiar with VCF, its core components (ESXi, vSAN, NSX, SDDC Manager) and architecture models (standard as the preferred).

Cloud Director components are currently not part of the VCF lifecycle automation, but it is a roadmap item!

Cloud Director Resource Hosting Models

vCD offers multiple hosting models:

  • In the shared hosting model, multiple tenant workloads run all together on the same resource groups without any performance assurance.
  • In the reserved hosting model, the performance of workloads is assured by resource reservation.
  • In the physical hosting model, hardware is dedicated to a single tenant and performance is assured by the allocated hardware.

Tenant Using Shared Hosting on VCF Workload Domain

In this use case a tenant is using shared hosting backed by a VMware Cloud Foundation workload domain, which is mapped to a provider VDC.

vCD VCF Shared

Tenant Using Shared Hosting and Reserved Hosting on Multiple VCF Workload Domains

This use case describes the example of a customer using shared and reserved hosting backed by multiple VCF workload domains. Here each cluster has a single resource pool mapped to a single PVDC.

vCD VCF Shared Reserved

Tenant Using Physical Hosting and Central Point of Management (CPOM)

The last example shows a single customer using physical hosting. You will notice that there is also a vSphere with Kubernetes workload domain. VMware Cloud Foundation automates the installation of vSphere with Kubernetes (Tanzu), which makes it incredibly easy to deploy and manage.

You can see that there is an "SDDC" box on top of the Kubernetes Cluster vCenter, which is attached to the "SDDC Proxy" entity. vCD can act as an HTTP/S proxy server between tenants and the underlying vSphere environment in VMware Cloud Foundation. An SDDC proxy is an access point to a component from an SDDC, for example, a vCenter Server instance, an ESXi host, or an NSX Manager instance.

In this case, vCD becomes the central point of management (CPOM) and the customer gets a complete, dedicated SDDC with vCenter access.

vCD VCF Physical CPOM

Note: Since vCD 9.7 it is possible to securely present, for example, a vCenter Server instance to a tenant's organization using the Cloud Director user interface. This is how you could build your own VMC-on-AWS-like cloud offering!

Cloud Director CPOM

All 3 Tenants Together

Finally, we put it all together. In the first use case we can see that different customers are sharing resources from a single PVDC. We can also see that resources from a single vCenter can be split across different provider virtual data centers, and that we can mix and match multi-tenant workload domains and workload domains offering a dedicated private cloud.

vCD VCF All Together

Cloud Director Service and VMware Cloud on AWS

If you don't want to extend or operate your own data center or cloud infrastructure anymore but still want to provide a managed service to multiple customers, there are options available for you that are backed by VMware Cloud Foundation as well.

Since October 2020, Cloud Director Service (CDS) has been globally available, delivering multi-tenancy to VMware Cloud on AWS for managed service providers (MSPs).

VMware sees not only new but also existing VCPP partners moving towards a mixed-asset portfolio, where their cloud management platform consists of a VCPP and an MSP (VMware SaaS offerings) contract. This allows them, for example, to run vCD on-premises for their current customers, while the onboarding of new tenants happens in the public cloud with CDS and VMC on AWS.

vCD CDS Mixed Mode

Enterprise Multi-Tenancy with vRealize Automation

With the release of vRealize Automation 8.1 (vRA) VMware offered support for dedicated infrastructure multi-tenancy, created and managed through vRealize Suite Lifecycle Manager. This means vRealize Automation enables customers or IT providers to set up multiple tenants or organizations within each deployment.

Providers can set up multiple tenant organizations and allocate infrastructure. Each tenant manages its own projects (team structures), resources and deployments.

Enabling tenancy creates a new Provider (default) organization. The Provider Admin will create new tenants, add tenant admins, set up directory synchronization, and add users. Tenant admins can also control directory synchronization for their tenant and will grant users access to services within their tenant. Additionally, tenant admins will configure Policies, Governance, Cloud Zones, Profiles, and access to content and provisioned resources within their tenant. A single shared SDDC or separate SDDCs can be used among tenants depending on available resources.

vRealize Automation 8.1 Multi-Tenancy

With vRealize Automation 8.2, provider administrators got the ability to share infrastructure by creating and assigning Virtual Private Zones (VPZ) to tenant organizations.

Think of VPZs as a kind of container of infrastructure capacity and services which can be defined and allocated to a Tenant. You can add unique or shared cloud accounts, with associated compute, flavors, images, storage, networking, and tags to each VPZ. Each component offers the same configuration options you would see for a standalone configuration.

vRealize Automation 8.2 Multi-Tenancy

vRealize Automation and VMware Cloud Foundation

With the fairly new multi-tenancy and VPZ capabilities, a new consumption model can be built on top of VCF. You (the provider) would map the Cloud Zones (compute resources on vSphere, or AWS for example) to a VCF workload domain.

The provider sets these cloud zones up for their customers and provides dedicated or shared infrastructure backed by Cloud Foundation workload domains.

This combination would allow you to build an enterprise VPC construct (like on AWS, for example) – a logically isolated section of your provider cloud.

vRealize Automation and VMware Cloud Foundation

SDDC Manager Integration and VMware Cloud Foundation (VCF) Cloud Account

Since the vRA 8.2 release, customers are also able to configure an SDDC Manager integration and onboard workload domains as VMware Cloud Foundation cloud accounts in the VMware Cloud Assembly service.

VMware Cloud Director or vRealize Automation?

Are you wondering if vRealize Automation could replace existing vCD installations, or if both cloud management platforms can do the same?

I can assure you that you can provide a self-service provisioning experience with both solutions and that you can deliver any technology or cloud service "as a service". Both have in common that they can be backed by Cloud Foundation, have some form of integration (vRA) and can be built following a VMware Validated Design (VVD).

vCD is known to be a service provider solution, whereas vRA is more common in enterprise environments. VMware has VCPP partners that use Cloud Director for their external customers and vRealize Automation for their internal IT and customers.

If you are looking for a "cloud broker" and Infrastructure as Code (IaC), because you want to provision workloads on AWS, Azure or GCP as well, then vRealize Automation is the better solution, since vCD doesn't offer this deep integration and these deployment options yet.

Depending on your multi-tenancy needs – for example, if you only chose vCD in the past because of the OrgVDC and resource pooling features – vRealize Automation might be enough and could replace vCD in this case.

It is also very important to understand what your current customer onboarding process and operational model look like:

  • How do you want to create a new tenant?
  • How do you want to onboard/migrate existing customer workloads to your provider infrastructure?
  • Do you need versioning of deployments or templates?
  • Do customers require access to the virtual infrastructure (e.g. vCenter or OrgVDC) or do you just provide SaaS or PaaS?
  • Do customers need a VPN or hybrid cloud extension into your provider cloud?
  • How would you onboard non-vSphere customers (Hyper-V, KVM) to your vSphere-based cloud?
  • Does your customer rely on other clouds like AWS or Azure?
  • How do you do billing for your vSphere-based cloud or multi-cloud environment?
  • What is your Kubernetes/container strategy?
  • And 100 other things 😉

There are so many factors and criteria to talk about that would influence such a decision. There is no right or wrong answer to the question of whether it should be VMware Cloud Director or vRealize Automation. Use what makes sense.

Which could also be a combination of both.

VMware Carbon Black Cloud Workload – Agentless Protection for vSphere Workloads


At VMworld 2020 VMware announced Carbon Black Cloud Workload (CBC Workload) as part of their intrinsic security approach.

For me, this was the biggest and most important announcement from this year's VMworld. It is a new offering that is relevant for every vSphere customer out there – even the small and medium enterprises that may still rely on just ESXi and vCenter for their environment.

CBC Workload introduces protection for workloads in private and public clouds. For vSphere, there is no additional agent installation needed, because the Carbon Black sensor (agent) is built into vSphere. That’s why you may hear that this solution is “agentless”.

Carbon Black Cloud Workload Bundles

This cloud-native (SaaS) solution provides foundational workload hardening and vulnerability management combined with prevention, detection and response capabilities to protect workloads running in virtualized private cloud and hybrid cloud environments.

Carbon Black Cloud Workload Protection Bundles

Note: Customers that are using vSphere and VMware Horizon should take a look at Workspace Security VDI, which was also announced at VMworld 2020 – a single-vendor solution combining VMware Horizon and Carbon Black.

If you would like to know more about the interoperability of Carbon Black and Horizon, have a look at KB79180.

Carbon Black Cloud Workload Overview

Customers and partners now have the possibility to provide a workload security solution for Windows and Linux virtual machines. The complete system requirements can be found here.

“You can enable Carbon Black in your data center with an easy one-click deployment. To minimize your deployment efforts, a lightweight Carbon Black launcher is made available with VMware Tools. Carbon Black launcher must be available on the Windows and Linux VMs.”

Carbon Black enable via vCenter

Carbon Black Cloud Workload consists of a few key components that interact with each other:

CBC Workload Components

You must first deploy an on-premises OVF/OVA template for the Carbon Black Cloud Workload appliance (4 vCPU, 4GB RAM, 41GB storage) that connects the Carbon Black Cloud to the vCenter Server through a registration process. After the registration is complete, the Carbon Black Cloud Workload appliance deploys the Carbon Black Cloud Workload plug-in and collects the inventory from the vCenter Server.

The plug-in provides visibility into processes and network connections running on a virtual machine.

As a vCenter Server administrator, you want to have visibility of known vulnerabilities in your environment to understand your security posture and schedule maintenance windows for patching and remediation. With the help of vulnerability assessment, you can proactively minimize the risk in your environment. You can now monitor known vulnerabilities from the Carbon Black Cloud Workload plug-in:

vSphere Client Carbon Black

The infosec team in your company would do the vulnerability assessment from the CBC console:

CBC Vulnerabilities

Carbon Black Cloud Workload protection provides vSphere administrators with a full inventory, appliance health and vulnerability reporting from one console – the already well-known vSphere Client.

Carbon Black vSphere Client Summary

Cybersecurity Requirements

According to the NIST Cybersecurity Framework, the security lifecycle consists of five functions:

  1. Identify – Cloud & Service Context, Dynamic Asset Visibility, Compliance & Standards, Cloud Risk Management
  2. Protect – Services / API Defined, Cloud Access Control, Network Integrity, Data Security, Change Control & Guardrails
  3. Detect – Cloud-Speed, Inter-connected Services, Events & Anomalies, Continuous Monitoring
  4. Respond – DevOps Collaboration, Real-time Notifications, Automated Actions, Response as Code
  5. Recover – Templates / Code Review, Shift Left / Pipeline, Exceptions and Verification

Workload Security Lifecycle

CBC Workload focuses on identifying the risks with workload visibility and vulnerability management, which are part of the “Workload Essentials” edition.

If you would like to prevent malicious activity, protect your workloads and replace your existing legacy anti-virus (AV) solution, then “Workload Advanced” would be the right edition for you, as it includes Next-Gen AV (NGAV).

Behavioral EDR (Endpoint Detection & Response), also part of the “Advanced” bundle, covers the “detect” and “respond” functions of the security lifecycle.

Workload Security for Kubernetes

Carbon Black Guardrails and Runtime Security

You just learned that Carbon Black Cloud gives workload protection for virtualized Windows or Linux virtual machines running on vSphere. What about container security for Kubernetes?

In May 2020 VMware officially closed its acquisition of Octarine, a SaaS security platform for protecting containers and Kubernetes. VMware bought Octarine to enable Carbon Black to secure applications running in Kubernetes.

Traditional security approaches no longer work for Kubernetes: the platform is powerful and therefore risky, and its networking is complex and a totally different game because static IPs and ports are no longer relevant. You also need a new security approach that is compatible with IT’s organizational shift from traditional operations to DevSecOps.

VMware’s solution covers the whole lifecycle of the application, from building the container image to running the app in production. It is a two-part solution, the first part being “Guardrails”, which scans container images for vulnerabilities and Kubernetes manifests for misconfigurations.
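To make the manifest-hardening part more tangible, here is a small, purely illustrative Python check (not Carbon Black code) that flags two misconfigurations a guardrail policy would typically catch: privileged containers and missing resource limits.

```python
# Illustrative only: flag two common Kubernetes misconfigurations in a manifest.
# This is not Carbon Black code, just an example of the kind of checks a guardrail
# policy enforces. Requires PyYAML (pip install pyyaml).
import sys
import yaml

def check_manifest(path):
    findings = []
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") not in ("Pod", "Deployment", "DaemonSet", "StatefulSet"):
                continue
            spec = doc.get("spec", {})
            # Pods carry containers directly; workload objects nest them under template.spec.
            pod_spec = spec.get("template", {}).get("spec", spec)
            for c in pod_spec.get("containers", []):
                name = f'{doc.get("kind")}/{doc.get("metadata", {}).get("name")}:{c.get("name")}'
                if c.get("securityContext", {}).get("privileged"):
                    findings.append(f"{name} runs privileged")
                if "limits" not in c.get("resources", {}):
                    findings.append(f"{name} has no resource limits")
    return findings

if __name__ == "__main__":
    for finding in check_manifest(sys.argv[1]):
        print("WARNING:", finding)
```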

Carbon Black Cloud Guardrails Module

The second part is runtime protection. When the workloads are deployed in production, the Carbon Black security agent is able to detect malicious activities.

Carbon Black Cloud Runtime Module 

Let’s have a look at the different features that Kubernetes “Guardrails” provides for each phase of the application lifecycle:

  • Build: Image vulnerability scanning, Kubernetes configuration hardening
  • Deploy: Policy governance, compliance reporting, visibility and hardening
  • Operate: Threat detection and response, anomaly detection and least privilege runtime, event monitoring

These are the key capabilities and benefits that were mentioned at VMworld 2020 for “Guardrails”:

Carbon Black Kubernetes Guardrails Features

For “runtime” security the following key capabilities and benefits were mentioned:

  • Visibility of network traffic
  • Coverage of workloads and hosts activity
  • Network policy management
  • Threat detection
  • Anomaly detection
  • Egress security
  • SIEM integration

Customers will be able to see all the workloads running in their local or cloud-native production clusters and how they interact with each other. They will also see which services are exposed to ingress traffic, which services send traffic out of the cluster and where this egress traffic is going, as well as which communication is encrypted and what type of encryption is used.

Note: The Carbon Black Cloud module for hardening and securing Kubernetes workloads is expected to be generally available by the end of 2020.

The launch of Carbon Black Cloud Workload was the first important step towards making the intrinsic security vision a reality after VMware acquired Carbon Black. Extending this to Kubernetes with new container security capabilities is the next big move towards VMware becoming a major security provider.

Stay tuned for more security announcements!

Additional Resources

If you would like to know more about Carbon Black Cloud Workload and security for Kubernetes, have a look at:

Introduction to Alibaba Cloud VMware Solution (ACVS)


VMware’s hybrid and multi-cloud strategy is to run their Cloud Foundation technology stack with vSphere, vSAN and NSX in any private or public cloud including edge locations. I already introduced VMC on AWS, Azure VMware Solution (AVS), Google Cloud VMware Engine (GCVE) and now I would like to briefly summarize Alibaba Cloud VMware Solution (ACVS).

VMware Multi-Cloud Offerings

A lot of European companies, including one of my large Swiss enterprise accounts, have defined Alibaba Cloud as strategic for their multi-cloud vision because they do business in China. Alibaba Cloud is the largest cloud computing provider in China and is known for its cloud security, reliable and trusted offerings, and hybrid cloud capabilities.

In September 2018, Alibaba Cloud (also known as Aliyun), a Chinese cloud computing company that belongs to the Alibaba Group, announced a partnership with VMware to deliver hybrid cloud solutions that help organizations with their digital transformation.

Alibaba Cloud was the first VMware Cloud Verified Partner in China and brings a lot of capabilities and services to a large number of customers in China and Asia. Its current global infrastructure operates in 22 regions and 67 availability zones worldwide, with more regions to follow. Outside Mainland China you find Alibaba Cloud data centers in Sydney, Singapore, the US, Frankfurt and London.

As this is a first-party offering from Alibaba Cloud, the service is owned and delivered by Alibaba (not VMware). Alibaba is responsible for updates, patches, billing and first-level support.

Alibaba Cloud is among the world’s top 3 IaaS providers according to Gartner and is China’s largest provider of public cloud services. Alibaba Cloud provides industry-leading flexible, cost-effective, and secure solutions. Services are available on a pay-as-you-go basis and include data storage, relational databases, big-data processing, and content delivery networks.

Currently, Alibaba Cloud is positioned as a Niche Player in the current Gartner Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS), alongside Oracle, IBM and Tencent Cloud.

Alibaba Gartner CIPS MQ

Note: If you would like to know more about running the VMware Cloud Foundation stack on top of the Oracle Cloud as well, I can recommend the articles from Simon Long, who just started writing about Oracle Cloud VMware Solution (OCVS).

This partnership between VMware and Alibaba Cloud has the same goals as other VMware hybrid cloud solutions like VMC on AWS, OCVS or GCVE: to give enterprises the possibility to meet their cloud computing needs, the flexibility to easily move existing workloads from on-premises to the public cloud, and high-speed access to the public cloud provider’s native services.

ACVS vSphere Architecture

In April 2020, Alibaba Cloud and VMware finally announced the general availability of Alibaba Cloud VMware Solution, initially for the Mainland China and Hong Kong regions. This enables customers to seamlessly move existing vSphere-based workloads to the Alibaba Cloud, where VMware Cloud Foundation runs on top of Aliyun’s infrastructure.

As is common with such VMware-based hybrid cloud offerings, this lets you move from a CapEx to an OpEx cost model based on subscription licensing.

Joint Development

X-Dragon – Shenlong in Chinese – is a proprietary bare metal server architecture developed by Alibaba Cloud for its cloud computing requirements. Built around a custom X-Dragon MOC card, it offers the direct access to CPU and RAM resources, without virtualization overhead, that bare metal servers provide. The X-Dragon virtualization technology behind Alibaba Cloud Elastic Compute Service (ECS) is now in its third generation; the first two generations were based on Xen and KVM.

X-Dragon NIC

VMware works closely together with the Alibaba Cloud engineers to develop a VMware SDDC (software-defined data center based on vSphere and NSX) which runs on this X-Dragon bare metal architecture.

The core of the MOC NIC is the X-Dragon chip. The X-Dragon software system runs on the X-Dragon chip to provide virtual private cloud (VPC) and EBS disk capabilities. It offers these capabilities to ECS instances and ECS bare metal instances through VirtIO-net and VirtIO-blk standard interfaces.
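If you are curious whether a Linux guest really sits on top of these VirtIO interfaces, a quick look at the standard sysfs path /sys/bus/virtio/devices will tell you. The little sketch below is generic for any VirtIO-capable guest and is not specific to X-Dragon:

```python
# Quick, Linux-only sketch: list VirtIO devices exposed to a guest
# (e.g. virtio-net and virtio-blk interfaces presented by the hypervisor or MOC card).
# Uses standard sysfs locations; works on any VirtIO-capable guest.
from pathlib import Path

def list_virtio_devices():
    base = Path("/sys/bus/virtio/devices")
    if not base.exists():
        print("No VirtIO bus found on this guest.")
        return
    for dev in sorted(base.iterdir()):
        driver_link = dev / "driver"
        driver = driver_link.resolve().name if driver_link.exists() else "unbound"
        print(f"{dev.name}: driver={driver}")

if __name__ == "__main__":
    list_virtio_devices()
```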

Note: vSAN support is still on the roadmap and will come later (no date committed yet). Because X-Dragon is a proprietary architecture, running vSAN on top of it requires official certification.

Project Monterey

Have you seen VMware’s announcement at VMworld 2020 about Project Monterey, which allows you to run VMware Cloud Foundation on a SmartNIC? To me, this looks similar to the X-Dragon architecture. 😉

Project Monterey VMware Cloud Foundation Use Cases

Data center extension or retirement. You can scale your data center capacity in the cloud on demand if, for example, you don’t want to invest in your on-premises environment anymore. If you just refreshed your current hardware, another use case would be extending your on-premises vSphere cloud to Alibaba Cloud.

ACVS Disaster Recovery

Disaster Recovery and data protection. Here we’ll find different scenarios like recovery (replication) or backup/archive (data protection) use cases. You can use your ACVS private clouds as a disaster recovery (DR) site for your on-premises workloads. This DR solution would be based on VMware Site Recovery Manager (SRM) which can be also used together with HCX. At the moment Alibaba Cloud offers 9 regions for DR sites.

Cloud migrations or consolidation. If you want to start with a lift & shift approach to migrate specific applications to the cloud, then ACVS is the right choice for you. Maybe you want to refresh your current infrastructure and need to relocate or migrate your workloads in an easy and secure way? Another perfect scenario would be the consolidation of different vSphere-based clouds.

ACVS Migration to Alibaba Cloud

Multicast Support with NSX-T

As with Microsoft Azure and Google Cloud, an Alibaba Cloud ECS instance, or a VPC in general, doesn’t support multicast and broadcast. That is one specific reason why customers need to run NSX-T on top of their public cloud provider’s global cloud infrastructure.
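You can easily verify this behaviour yourself by running a tiny multicast sender on one VM and a receiver on another: on a plain VPC the datagram usually never arrives, while over an NSX-T overlay segment it should. A minimal sketch, where the group address and port are arbitrary placeholders:

```python
# Minimal multicast smoke test: start "receiver" on one VM, then run the sender on another.
# On a plain VPC the datagram usually never arrives; over an NSX-T overlay it should.
import socket
import struct
import sys

GROUP, PORT = "239.1.1.1", 5007  # arbitrary placeholder multicast group and port

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(b"multicast test", (GROUP, PORT))

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("Waiting for multicast datagram...")
    print(sock.recv(1024))

if __name__ == "__main__":
    receiver() if sys.argv[-1] == "receiver" else sender()
```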

Connectivity Options

For (multi-)national companies Alibaba Cloud has different enterprise-class networking offerings to connect different sites or regions in a secure and reliable way.

Cloud Enterprise Network (CEN) is a highly available network built on the high-performance, low-latency global private network provided by Alibaba Cloud. By using CEN, you can establish private network connections between Virtual Private Cloud (VPC) networks in different regions, or between VPC networks and on-premises data centers. CEN is also available in Europe, in Germany (Frankfurt) and the UK (London).

Alibaba Cloud Cloud Enterprise Network

Alibaba Cloud Express Connect helps you build internal network communication channels that feature enhanced cross-network communication speed, quality, and security. If your on-premises data center needs to communicate with an Alibaba Cloud VPC through a private network, you can apply for a dedicated physical connection interface from Alibaba Cloud to establish a physical connection between the on-premises data center and the VPC. Through physical connections, you can implement high-quality, highly reliable, and highly secure internal communication between your on-premises data center and the VPC. 

Alibaba Cloud Express Connect

ACVS Architecture and Supported VMware Cloud Services

Let’s have a look at the ACVS architecture below. On the left side you see the Alibaba Cloud with the VMware SDDC stack loaded onto the Alibaba bare metal servers with NSX-T connected to the Alibaba VPC network.

This VPC network allows customers to connect their on-premises network and to have direct access to Alibaba Cloud’s native services.

Customers have the advantage of using vSphere 7 with Tanzu Kubernetes Grid and can leverage their existing toolset from the VMware Cloud Management Platform, such as vRealize Automation (native integration of vRA with Alibaba Cloud is still a roadmap item) and vRealize Operations.

Alibaba Cloud VMware Solution Architecture

The right side of the architecture shows the customer data centers, which run as a vSphere-based cloud on-premises, managed either by the customer or as a managed service offering from a service provider. In between, the red lines show the different connectivity options, such as Alibaba Direct Connect, SD-WAN or VPN connections, together with technologies like NSX-T layer 3 VPN, HCX and Site Recovery Manager (SRM).

To load balance the different application services across the various vSphere-based or native clouds, you can use NSX Advanced Load Balancer (aka Avi) and configure GSLB (Global Server Load Balancing) for high availability.
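As a rough idea of what this looks like in practice, the sketch below lists the configured GSLB services on an Avi Controller via its REST API. The login flow and the /api/gslbservice path reflect the Avi object model, but treat the headers, version string and field names as assumptions and verify them against your controller's API documentation:

```python
# Sketch only: list GSLB services on an NSX Advanced Load Balancer (Avi) controller.
# Endpoint path, headers and version string are assumptions; confirm against your
# controller's API documentation before use.
import requests

CONTROLLER = "https://avi-controller.example.com"  # placeholder controller address
USER, PASSWORD = "admin", "***"                    # placeholder credentials

def list_gslb_services():
    s = requests.Session()
    s.verify = False                               # lab use only; verify certs in production
    login = s.post(f"{CONTROLLER}/login", data={"username": USER, "password": PASSWORD})
    login.raise_for_status()
    s.headers.update({
        "X-Avi-Version": "20.1.1",                 # assumed version string; match your controller
        "X-CSRFToken": s.cookies.get("csrftoken", ""),
        "Referer": CONTROLLER,
    })
    resp = s.get(f"{CONTROLLER}/api/gslbservice")
    resp.raise_for_status()
    for svc in resp.json().get("results", []):
        print(svc.get("name"), svc.get("domain_names", []))

if __name__ == "__main__":
    list_gslb_services()
```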

Because the entire stack on top of Alibaba Cloud’s infrastructure is based on VMware Cloud Foundation, you can expect to run everything in VMware’s product portfolio like Horizon, Carbon Black, Workspace ONE etc. as well.

You can also deploy AliCloud Virtual Edges with VMware SD-WAN by VeloCloud.

Node Specifications

The Alibaba Cloud VMware Solution offering is a little bit special and I hope that I was able to translate the Chinese presentations correctly.

First, you have to choose the number of hosts, which determines the options available to you:

1 Host (for testing purposes): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter

2+ Hosts (basic type): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter

3+ Hosts (flexibility and elasticity): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter, (vSAN Enterprise)

Site Recovery Manager, vRealize Log Insight and vRealize Operations need to be licensed separately as they are not included in the ACVS bundle.

The current ACVS offering has the following node options and specifications (maximum 32 hosts per VPC):

ACVS Node Specifications

All sixth-generation ECS instances come equipped with Intel® Xeon® Platinum 8269CY processors. These processors are customized and based on the Cascade Lake microarchitecture, which is designed for the second-generation Intel® Xeon® Scalable processors. They have a turbo boost with an increased burst frequency of 3.2 GHz and can provide up to a 30% increase in floating-point performance over the fifth-generation ECS instances.

Component                     Version   License
vCenter                       7.0       vCenter Standard
ESXi                          7.0       Enterprise Plus
vSAN (support coming later)   n/a       Enterprise
NSX Data Center (NSX-T)       3.0       Advanced
HCX                           n/a       Enterprise

Note: Customers have the option to install any VIBs themselves with full console access. This allows them to assess the risk and performance impact themselves and install any required third-party software (e.g. Veeam, Zerto).

If you want to know more about how to accelerate your multi-cloud digital transformation initiatives in Asia, you can watch this year’s VMworld presentation. I couldn’t find any other presentation (except the exact same recording on YouTube) and believe that this article is the first publicly available summary of Alibaba Cloud VMware Solution. 🙂