Application Modernization and Multi-Cloud Portability with VMware Tanzu

It was 2019 when VMware announced Tanzu and Project Pacific. A lot has happened since then, and almost everyone is talking about application modernization nowadays. With my strong IT infrastructure background, I had to learn a lot of new things to survive initial conversations with application owners, developers and software architects. At the same time, VMware's Kubernetes offering grew and became very complex, not only for customers but, I believe, for everyone. 🙂

I already wrote about VMware's vision with Tanzu: to put a consistent "Kubernetes grid" over any cloud.

This is the simple message and the value hidden behind the much larger topics of application modernization and application/data portability across clouds.

The goal of this article is to give you a better understanding of the real value of VMware Tanzu and to explain that it's less about Kubernetes and its integration with vSphere.

Application Modernization

Before we can talk about the modernization of applications or the different migration approaches like:

  • Retain – Optimize and retain existing apps, as-is
  • Rehost/Migration (lift & shift) – Move an application to the public cloud without making any changes
  • Replatform (lift and reshape) – Put apps in containers and run in Kubernetes. Move apps to the public cloud
  • Rebuild and Refactor – Rewrite apps using cloud native technologies
  • Retire – Retire traditional apps and convert to new SaaS apps

…we need to have a look at the palette of our applications:

  • Web Apps – Apache Tomcat, Nginx, Java
  • SQL Databases – MySQL, Oracle DB, PostgreSQL
  • NoSQL Databases – MongoDB, Cassandra, Prometheus, Couchbase, Redis
  • Big Data – Splunk, Elasticsearch, ELK stack, Greenplum, Kafka, Hadoop

In an app modernization discussion, we very quickly start to classify applications as microservices or monoliths. From an infrastructure point of view you look at apps differently and call them “stateless” (web apps) or “stateful” (SQL, NoSQL, Big Data) apps.

And with Kubernetes we are trying to overcome the challenges that come with stateful applications. Related to app modernization, the typical questions are:

  • What does modernization really mean?
  • How do I define “modernization”?
  • What is the benefit of modernizing applications?
  • What are the tools? What are my options?

What has changed? Why is everyone talking about modernization? Why are we talking so much about Kubernetes and cloud native? Why now?

To understand the benefits (and challenges) of app modernization, we can start with IBM's definition of a "modern app":

"Application modernization is the process of taking existing legacy applications and modernizing their platform infrastructure, internal architecture, and/or features. Much of the discussion around application modernization today is focused on monolithic, on-premises applications, typically updated and maintained using waterfall development processes, and how those applications can be brought into cloud architecture and release patterns, namely microservices."

Modern applications are collections of microservices, which are small, lightweight and fault-tolerant. Microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

Note: App modernization can also mean that you have to move your application from the .NET Framework to .NET Core.

I have a customer that is just getting started with the app modernization topic and has hundreds of Windows applications based on the .NET Framework. Porting an existing .NET app to .NET Core requires some work, but it is the general recommendation for the future. It also gives you the option to run your .NET Core apps on Windows, Linux and macOS (and not only on Windows).
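To make the porting effort a bit more tangible, here is a hedged sketch of what such a port often boils down to at the project-file level: the old, verbose .NET Framework project file is replaced by an SDK-style project targeting .NET Core. The target framework moniker below is just an example; pick the version appropriate for your application.

```xml
<!-- SDK-style project file, replacing the much longer .NET Framework .csproj. -->
<!-- "netcoreapp3.1" is an example target framework; NuGet dependencies move -->
<!-- from packages.config to PackageReference items inside this file. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
</Project>
```

The project file is usually the easy part; most of the real effort goes into replacing APIs that only exist in the full framework (parts of System.Web, for example).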

A modern application is something that can run on bare metal, in VMs, in the public cloud and in containers, and that easily integrates with any component of your infrastructure. It must be elastic: something that can grow and shrink depending on load and usage. And since it needs to be able to adapt, it must be agile and therefore portable.

Cloud Native Architectures and Modern Designs

If I ask my VMware colleagues from our so-called MAPBU (Modern Application Platform Business Unit) how customers can achieve application portability, the answer is always: “Cloud Native!”

Many organizations and people see cloud native as going to Kubernetes. But cloud native is so much more than the provisioning and orchestration of containers with Kubernetes. It's about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructure.

There are so many definitions of "cloud native" that Kamal Arora from Amazon Web Services and others wrote the book "Cloud Native Architectures", which describes a maturity model. This model helps you understand that cloud native is more of a journey than a restrictive definition.

Cloud Native Maturity Model

The adoption of cloud services and the application of an application-centric design are very important, but the book also mentions that security and scalability rely on automation. This, for example, can bring a requirement for Infrastructure as Code (IaC).

In the past, virtualization, meaning the move from bare metal to vSphere, didn't force organizations to modernize their applications. The application didn't need to change, because VMware abstracted and emulated the bare-metal server. So, the transition (P2V) of an application was very smooth and uncomplicated.

And this is what has changed today. We have new architectures, new technologies and new clouds running different technology stacks. We have Kubernetes as a framework, which requires applications to be redesigned for these platforms.

That is the reason why enterprises have to modernize their applications.

One of the "five R's" mentioned above is the lift and shift approach. If you don't want or need to modernize some of your applications, but still want to move them to the public cloud in an easy, fast and cost-efficient way, have a look at VMware's Hybrid Cloud Extension (HCX).

In this article I focus more on the replatform and refactor approaches in a multi-cloud world.

Kubernetize and productize your applications

Assuming that you also define Kubernetes as the standard for orchestrating the containers your microservices run in, the next decision is usually about the Kubernetes "product" (on-premises, OpenShift, public cloud).

Looking at the current CNCF Cloud Native Landscape, we can count over 50 storage vendors and over 20 network vendors providing cloud native storage and networking solutions for containers and Kubernetes.

Talking to my customers, most of them mention storage and network integration as one of their big challenges with Kubernetes. Their concerns are performance, resiliency, different storage and network patterns, automation, data protection/replication, scalability and cloud portability.

Why do organizations need portability?

There are many use cases and requirements where portability (infrastructure independence) becomes relevant. Maybe it's a hardware refresh or a data center evacuation, the wish to avoid vendor/cloud lock-in, insufficient performance of the current infrastructure, or dev/test environments where resources are deployed and consumed on demand.

Multi-Cloud Application Portability with VMware Tanzu

To explore the value of Tanzu, I would like to start by setting the scene with the following customer use case:

In this case the customer is following a cloud-appropriate approach to define which cloud is the right landing zone for their applications. They decided to develop new applications in the public cloud and use the native services from Azure and AWS. The customer still has hundreds of legacy applications (monoliths) on-premises and has not yet decided whether to follow a "lift and shift and then modernize" approach to migrate a number of applications to the public cloud.

Multi-Cloud App Portability

But some of their application owners have already given the feedback that their applications are not allowed to be hosted in the public cloud, have to stay on-premises and need to be modernized locally.

At the same time, the IT architecture team receives feedback from other application owners that the journey to the public cloud is great on paper but brings huge operational challenges with it. So, IT operations asks the architecture team if they can do something about that problem.

The cloud operations teams for Azure and AWS deliver a different quality of service, changes and deployments take longer in one of the public clouds, and they have problems with overlapping networks, different storage performance characteristics and different APIs.

Another challenge is role-based access to the different clouds, Kubernetes clusters and APIs. There is no central log aggregation and no observability (intelligent monitoring & alerting). Traffic distribution and load balancing are further items on this list.

Because of the feedback from operations to architecture, IT engineering received the task of defining a multi-cloud strategy that solves this operational complexity.

Note: These are the typical multi-cloud challenges, where clouds are the new silos and enterprises have different teams with different expertise using different management and security tools.

This is the moment when VMware's multi-cloud approach with Tanzu becomes very interesting for such customers.

Consistent Infrastructure and Management

The first discussion point here would be the infrastructure. It's important that the different private and public clouds are not handled and seen as silos. VMware's approach is to connect all the clouds with the same underlying technology stack based on VMware Cloud Foundation.

Besides the fact that lift and shift migrations become very easy, this approach brings two very important advantages for containerized workloads and the cloud infrastructure in general. It solves the challenge of the huge storage and networking ecosystem available for Kubernetes workloads by using vSAN and NSX Data Center in any of the existing clouds. Storage, networking and security are now integrated and consistent.

For existing workloads running natively in public clouds, customers can use NSX Cloud, which uses the same management plane and control plane as NSX Data Center. That’s another major step forward.

Using consistent infrastructure enables customers to achieve consistent operations and automation.

Consistent Application Platform and Developer Experience

Looking at an organization's application and container platforms, consistent infrastructure is not required, but it is obviously very helpful in terms of operational and cost efficiency.

To provide a consistent developer experience and to abstract the underlying application or Kubernetes platform, you would follow the same VMware approach as always: to put a layer on top.

Here the solution is called Tanzu Kubernetes Grid (TKG), which provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed and supported by VMware.

A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes open-source software that is built and supported by VMware. In all the offerings, you provision and use Tanzu Kubernetes clusters in a declarative manner that is familiar to Kubernetes operators and developers. The different Tanzu Kubernetes Grid offerings provision and manage Tanzu Kubernetes clusters on different platforms, in ways that are designed to be as similar as possible, but that are subtly different.

VMware Tanzu Kubernetes Grid (TKG aka TKGm)

Tanzu Kubernetes Grid can be deployed across software-defined data centers (SDDC) and public cloud environments, including vSphere, Microsoft Azure and Amazon EC2. I would assume that Google Cloud is a roadmap item.

TKG allows you to run Kubernetes with consistency and makes it available to your developers as a utility, just like the electricity grid. TKG provides the services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

This TKG version is also known as TKGm for “TKG multi-cloud”.

VMware Tanzu Kubernetes Grid Service (TKGS aka vSphere with Tanzu)

TKGS is the option vSphere admins want to hear about first, because it allows you to turn a vSphere cluster into a platform running Kubernetes workloads in dedicated resource pools. TKGS is what was known as "Project Pacific" in the past.

Once enabled on a vSphere cluster, vSphere with Tanzu creates a Kubernetes control plane directly in the hypervisor layer. You can then run Kubernetes containers by deploying vSphere Pods, or you can create upstream Kubernetes clusters through the VMware Tanzu Kubernetes Grid Service and run your applications inside these clusters.
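As a rough illustration of this declarative model, a minimal Tanzu Kubernetes cluster specification applied against the Supervisor Cluster could look like the following sketch. The virtual machine class, storage class and namespace names are placeholders, and the exact API version and available distribution versions depend on your environment:

```yaml
# Hedged sketch of a TanzuKubernetesCluster manifest for vSphere with Tanzu.
# "demo-namespace", "best-effort-small" and "vsan-default" are assumed names;
# check your Supervisor Cluster for the namespaces and classes actually available.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: demo-cluster
  namespace: demo-namespace        # a vSphere namespace on the Supervisor Cluster
spec:
  distribution:
    version: v1.18                 # resolved to a full Tanzu Kubernetes release
  topology:
    controlPlane:
      count: 3                     # number of control plane nodes
      class: best-effort-small     # VM class defining CPU/RAM sizing
      storageClass: vsan-default
    workers:
      count: 3                     # number of worker nodes
      class: best-effort-small
      storageClass: vsan-default
```

Applying such a manifest with kubectl against the Supervisor Cluster is what triggers the Tanzu Kubernetes Grid Service to provision the cluster; scaling is then just an edit to the counts.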

VMware Tanzu Mission Control (TMC)

In our use case above, we have AKS and EKS for running Kubernetes clusters in the public cloud.

The VMware solution for multi-cluster Kubernetes management across clouds is called Tanzu Mission Control, a centralized management platform that provides the consistency and security the IT engineering team was looking for.

Available through VMware Cloud Services as a SaaS offering, TMC provides IT operators with a single control point to give their developers self-service access to Kubernetes clusters.

TMC also provides cluster lifecycle management for TKG clusters across environments such as vSphere, AWS and Azure.

It allows you to bring the clusters you already have in the public clouds or other environments (with Rancher or OpenShift, for example) under one roof via the attachment of conformant Kubernetes clusters.

Not only do you gain global visibility across clusters, teams and clouds, but you also get centralized authentication and authorization, consistent policy management and data protection functionalities.

VMware Tanzu Observability by Wavefront (TO)

Tanzu Observability extends the basic observability provided by TMC with enterprise-grade observability and analytics.

Wavefront by VMware helps Tanzu operators, DevOps teams and developers get metrics-driven insights into the real-time performance of their custom code, the Tanzu platform and its underlying components. Wavefront proactively detects and alerts on production issues and improves agility in code releases.

TO is also a SaaS-based platform that can handle the high-scale requirements of cloud native applications.

VMware Tanzu Service Mesh (TSM)

Tanzu Service Mesh, formerly known as NSX Service Mesh, provides consistent connectivity and security for microservices across all clouds and Kubernetes clusters. TSM can be installed in TKG clusters and third-party Kubernetes-conformant clusters.

Organizations that are using or looking at the popular Calico cloud native networking option for their Kubernetes ecosystem often consider an integration with Istio (Service Mesh) to connect services and to secure the communication between these services.

The combination of Calico and Istio can be replaced by TSM, which is built on VMware NSX for networking and uses an Istio data plane abstraction. This version of Istio is signed and supported by VMware and is identical to the upstream version. TSM brings enterprise-grade support for Istio and a simplified installation process.

One of the primary constructs of Tanzu Service Mesh is the concept of a Global Namespace (GNS). GNS allows developers using Tanzu Service Mesh, regardless of where they are, to connect application services without having to specify (or even know) any underlying infrastructure details, as all of that is done automatically. With the power of this abstraction, your application microservices can "live" anywhere, in any cloud, allowing you to make placement decisions based on application and organizational requirements, not infrastructure constraints.

Note: On the 18th of March 2021, VMware announced the acquisition of Mesh7 and the integration of Mesh7's contextual API behavior security solution with Tanzu Service Mesh to simplify DevSecOps.

Tanzu Editions

The VMware Tanzu portfolio comes in three different editions: Basic, Standard and Advanced.

Tanzu Basic enables the straightforward implementation of Kubernetes in vSphere so that vSphere admins can leverage familiar tools used for managing VMs when managing clusters = TKGS

Tanzu Standard provides multi-cloud support, enabling Kubernetes deployment across on-premises, public cloud, and edge environments. In addition, Tanzu Standard includes a centralized multi-cluster SaaS control plane for a more consistent and efficient operation of clusters across environments = TKGS + TKGm + TMC

Tanzu Advanced builds on Tanzu Standard to simplify and secure the container lifecycle, enabling teams to accelerate the delivery of modern apps at scale across clouds. It adds a comprehensive global control plane with observability and service mesh, consolidated Kubernetes ingress services, data services, container catalog, and automated container builds = TKG (TKGS & TKGm) + TMC + TO + TSM + MUCH MORE

Tanzu Data Services

Another way to reduce dependencies and avoid vendor lock-in would be Tanzu Data Services, a separate part of the Tanzu portfolio with on-demand caching (Tanzu GemFire), messaging (Tanzu RabbitMQ) and database software (Tanzu SQL and Tanzu Greenplum) products.

Bringing it all together

As always, I have tried to summarize and simplify things where needed, and I hope this helped you to better understand the value and capabilities of VMware Tanzu.

There are many more products available in the Tanzu portfolio that help you build, run, manage, connect and protect your applications.

If you would like to know more about application and cloud transformation, make sure to attend the 45-minute VMware event on March 31 (Americas) or April 1 (EMEA/APJ)!

Data Center as a Service based on VMware Cloud Foundation

IT organizations are looking for consistent operations, which is enabled by consistent infrastructure. Public cloud providers like AWS and Microsoft offer an extension of their cloud infrastructure and native services to the private cloud and edge, which is also known as Data Center as a Service.

Amazon Web Services (AWS) provides a fully managed service with AWS Outposts, which brings AWS infrastructure, AWS services, APIs and tools to any data center or on-premises facility.

Microsoft has Azure Stack and is even working on a new Azure Stack hybrid cloud solution, codenamed "Fiji", to provide the ability to run Azure as a managed local cloud.

What do these offerings have in common or why would customers choose one (or even both) of these hybrid cloud options?

They bring the public cloud operating model to the private cloud or edge, in the form of one or more racks of servers provided as a fully managed service.

AWS Outposts (generally available since December 2019) and Azure Stack Fiji (in development) provide the following:

  • Extension of the public cloud services to the private cloud and edge
  • Consistent infrastructure with consistent operations
  • Local processing of data (e.g., analytics at the data source)
  • Local data residency (governance and security)
  • Low latency access to on-premises systems
  • Local migrations and modernization of applications with local system interdependencies
  • Build, run and manage on-premises applications using existing and familiar services and tools
  • Modernize applications on-premises or at the edge
  • Prescriptive infrastructure and vendor managed lifecycle and maintenance (racks and servers)
  • Creation of different physical pools and clusters depending on your compute and storage needs (different form factors)
  • Same licensing and pricing options on-premises (like in the public cloud)

The pretty new AWS Outposts offering and the future Azure Stack Fiji solution are also called "Local Cloud as a Service" (LCaaS) or "Data Center as a Service" and are meant to be consumed and delivered in the on-prem data center or at the edge. It's about bringing the public cloud to your data center or edge location.

The next phase of cloud transformation is about the "edge" of an enterprise cloud, and we know today that private and hybrid cloud strategies are critical for the implementation and operation of IT infrastructure.

Coming from VMware's standpoint, it's not about extending the public cloud to the local data centers. It's about extending your VMware-based private cloud to the edge or the public cloud.

This article focuses on the local (private) cloud as a service options from VMware, not the public cloud offerings.

In case you would like to know more about VMware’s multi-cloud strategy, which is about running the VMware Cloud Foundation stack on top of a public cloud like AWS, Azure or Google, please check some of my recent posts.

Features and Technologies

Before I describe the different VMware LCaaS offerings based on VMware Cloud Foundation, let me show and explain the different features and technologies my customers ask about when they plan to build a private cloud with public cloud characteristics in mind.

I work with customers from different verticals like

  • finance
  • fast-moving consumer goods
  • manufacturing
  • transportation (travel)

which host IT infrastructure in multiple data centers all over the world, including hundreds of smaller locations. My customers belong to different vertical markets but are looking for the same features and technologies when it comes to edge computing and delivering a managed cloud on-premises.

Compute and Storage. They are looking for pre-validated and standardized configuration offerings to meet their (application) needs. Most of them describe hardware blueprints with t-shirt sizes (small, medium, large). These different servers or instances provide different options and attributes, which should provide enough CPU, RAM, storage and networking capacity based on their needs. Usually you'll find terms like "general purpose", "compute optimized" or "memory optimized" node types or instances.

Networking. Most of my customers look for the possibility to extend their current network (aka elastic or cloud-scale networking) to any other cloud. They prefer a way to use existing network and security policies and to provide software-defined networking (SDN) services like routing, firewalling, IDS/IPS and load balancing, also known as virtualized network functions (VNF). Service providers are also looking at network functions virtualization (NFV), which includes emerging technologies like 5G and IoT. As cloud native or containerized applications become more important, service providers also discuss containerized network functions (CNF).

Services. Applications consist of one or many (micro-)services. All my conversations are application-centric and focus on the different application components. Most of my discussions are about containers, databases and video/data analytics at the edge.

Security. Customers that are running workloads in the public cloud are familiar with the shared responsibility model. The difference between the public cloud and a local cloud as a service offering is the physical security (racks, servers, network transits, data center access etc.).

Scalability and Elasticity. IT providers want to deliver the same simplicity and agility on-prem that their customers (the business) would expect from a public cloud provider. Scalability is about a planned level of capacity that can grow or shrink as needed.

Resource Pooling and Sharing. Larger enterprises and service providers are interested in creating dedicated workload domains and resource clusters, but also look for a way to provide infrastructure multi-tenancy.

The challenge for today's IT teams is that edge locations are often not well defined. And these IT teams need an efficient way to manage different infrastructure sizes (ranging from 2 nodes up to 16 or 24 nodes), sometimes for up to 400 edge locations.

Rethinking Private Clouds

Organizations have two choices when it comes to the deployment of a private cloud extension to the edge. They can continue with the current approach, which includes the design, deployment and operation of their own private cloud. Another, pretty new option would be subscribing to a predefined "Data Center as a Service" offering.

Enterprises need to develop and implement a cloud strategy that supports the existing workloads, which are still mostly running on VMware vSphere, and build something that is vendor- and cloud-agnostic. Something that provides a (public) cloud exit strategy at the same time.

If you decide to go for AWS Outposts or the coming Azure Stack Fiji solution, which are for sure great options, how would you migrate or evacuate workloads to another cloud and technology stack?

VMware Cloud on Dell EMC

At VMworld 2019, VMware announced the general availability of VMware Cloud on Dell EMC (VMC on Dell EMC). Introduced in 2018 as "Project Dimension", the idea behind this concept was to deliver a (public) cloud experience to customers on-premises and give them the best of both worlds:

The simplicity, flexibility and cost model of the public cloud with the security and control of your private cloud infrastructure.

VMware Cloud on Dell EMC

Initially, Project Dimension was focused primarily on edge use cases and was not optimized for larger data centers.

Note: This changed with the introduction of the 2nd generation of VMC on Dell EMC in May 2020, which supports different density and performance use cases.

VMC on Dell EMC is a VMware-managed service offering with these components:

  • A software-defined data center based on VMware Cloud Foundation (VCF) running on Dell EMC VxRail
    • ESXi, vSAN, NSX, vCenter Server
    • HCX Advanced
  • Dell servers, management & ToR switches, racks, UPS
    • Standby VxRail node for expansion (unlicensed)
    • Option for half or full-height rack
  • Multiple cluster support in a single rack
    • Clusters start with a minimum of 3 nodes (not 4 as you would expect from a regular VCF deployment)
  • VMware SD-WAN (formerly known as VeloCloud) appliances, currently for remote management purposes only
  • Customer self-service provisioning through cloud.vmware.com
  • Maintenance, patching and upgrades of the SDDC performed by VMware
  • Maintenance, patching and upgrades of the Dell hardware performed by VMware (Dell provides firmware, drivers and BIOS updates)
  • 1- or 3-year term subscription commitment (like with VMC on AWS)

There is no "one size fits all" when it comes to hosting workloads at the edge and in your data centers. VMC on Dell EMC also provides different hardware node types, which should match your defined t-shirt sizes (blueprints).

VMC on Dell EMC HW Node Types

If we talk about a small edge location with a maximum of 5 server nodes, you would go for a half-height rack. The full-height rack can host up to 24 nodes (8 clusters). Currently, the largest instance type would be a good match for high-density, storage-hungry workloads such as VDI deployments, databases or video analytics.

As HCX is part of the offering, you have the right tool and license included to migrate workloads between vSphere-based private and public clouds.

The following is a list of some VMworld 2020 breakout sessions presented by subject matter experts and focused on VMware Cloud on Dell EMC:

HCP1831: Building a successful VDI solution with VMware Cloud on Dell EMC - Andrew Nielsen, Sr. Director, Workload and Technical Marketing, VMware

HCP1802: Extend Hybrid Cloud to the Edge and Data Center with VMware Cloud on Dell EMC - Varun Chhabra, VP Product Marketing, Dell

HCP1834: Second-Generation VMware Cloud on Dell EMC, Explained by Product Experts - Neeraj Patalay, Product Manager, VMware

VMware Cloud Foundation and HPE Synergy with HPE GreenLake

At VMworld 2019 VMware announced that VMware Cloud Foundation will be offered in HPE’s GreenLake program running on HPE Synergy composable infrastructure (Hybrid Cloud as a Service). This gives VMware customers the opportunity to build a fully managed private cloud with the public cloud benefits in an on-premises environment.

HPE’s vision is built on a single platform that can span across multiple clouds and GreenLake brings the cloud consumption model to joint HPE and VMware customers.

Today, this solution is fully supported and sold by HPE. In case you want to know more, have a look at the VMworld 2020 session Simplify IT with HPE GreenLake Cloud Services and VMware from Erik Vogel, Global VP, Customer Experience, HPE GreenLake, Hewlett Packard Enterprise.

VMC on AWS Outposts

If you are an AWS customer and look for a consistent hybrid cloud experience, then you would consider AWS Outposts.

There is also a VMware variant of AWS Outposts available for customers who already run their on-premises workloads on VMware vSphere or in a vSphere-based cloud environment running on top of the AWS global infrastructure (called VMC on AWS).

VMware Cloud on AWS Outposts is an on-premises as-a-service offering based on VMware Cloud Foundation. It integrates VMware's software-defined data center software, including vSphere, vSAN and NSX. This Cloud Foundation stack runs on dedicated elastic Amazon EC2 bare-metal infrastructure, delivered on-premises with optimized access to local and remote AWS services.

VMC on AWS Outposts

Key capabilities and use cases:

  • Use familiar VMware tools and skillsets
  • No need to rewrite applications while migrating workloads
  • Direct access to local and native AWS services
  • Service is sold, operated and supported by VMware
  • VMware as the single point of primary contact for support needs, supplemented by AWS for hardware shipping, installation and configuration
  • Host-level HA with automated failover to VMware Cloud on AWS
  • Applications need to be resilient and keep working in the event of WAN link downtime
  • Application modernization with access to local and native AWS services
  • 1- or 3-year term subscription commitment
  • 42U AWS Outposts rack, fully assembled and installed by AWS (including ToR switches)
  • Minimum cluster size of 3 nodes (plus 1 dark node)
  • Current cluster maximum of 16 nodes

Currently, VMware is running a VMware Cloud on AWS Outposts beta program that lets you try the pre-release software on AWS Outposts infrastructure. An early access program should start in the first half of 2021, which can be considered a customer-paid proof of concept intended for new workloads only (no migrations).

VMware on Azure Stack

To date there are no plans communicated by Microsoft or VMware to make Azure VMware Solution, the vSphere-based cloud offering running on top of Azure, available on-premises on the current or future Azure Stack family.

VMware on Google Anthos

To date there are no plans communicated by Google or VMware to make Google Cloud VMware Engine, the vSphere-based cloud offering running on top of the Google Cloud Platform (GCP), available on-premises.

The only known supported combination of a Google Cloud offering running VMware on-premises is Google Anthos (Google Kubernetes Engine on-prem).

Multi-Cloud Application Portability

Multi-cloud is now the dominant cloud strategy and many of my customers are maintaining a vSphere-based cloud on-premises and use at least two of the big three public clouds (AWS, Azure, Google).

Following a cloud-appropriate approach, customers inspect each application and decide which cloud (private or public) would be the best one to run it on. VMware gives customers the option to run the Cloud Foundation technology stack in any cloud, which doesn't prevent those same customers from also going cloud-native and adding AWS and Azure to the mix.

How can I achieve application portability in a multi-cloud environment when the underlying platform and technology formats differ from each other?

This is a question I hear a lot. Kubernetes is seen as THE container orchestration tool, which at the same time can abstract multiple public clouds and the complexity that comes with them.

A lot of people also believe that Kubernetes alone is enough to provide application portability, only to figure out later that they have to use different Kubernetes APIs and management consoles for every cloud and Kubernetes flavor (e.g., Rancher, Azure, AWS, Google, Red Hat OpenShift etc.) they work with.
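This API and console sprawl is easy to see in practice: every managed Kubernetes flavor contributes its own endpoint and authentication mechanism to your kubeconfig, and you hop between them per cloud. The sketch below models that situation in plain Python; all cluster names, endpoints and auth methods are invented for illustration:

```python
# Sketch of the per-cloud context sprawl described above: each
# Kubernetes flavor (EKS, AKS, GKE, ...) brings its own API endpoint
# and credential mechanism. All names/endpoints below are invented.

kubeconfig = {
    "contexts": {
        "aws-eks-prod":   {"server": "https://abc123.eks.amazonaws.com",
                           "auth": "aws-iam-authenticator"},
        "azure-aks-prod": {"server": "https://myaks.hcp.azmk8s.io",
                           "auth": "azure-ad"},
        "gcp-gke-prod":   {"server": "https://34.90.1.2",
                           "auth": "gcloud-token"},
    },
    "current-context": "aws-eks-prod",
}

def switch_context(cfg: dict, name: str) -> dict:
    """Mimic `kubectl config use-context`: select the active cluster."""
    if name not in cfg["contexts"]:
        raise KeyError(f"unknown context: {name}")
    cfg["current-context"] = name
    return cfg["contexts"][name]

target = switch_context(kubeconfig, "gcp-gke-prod")
print(target["auth"])  # each cloud expects a different auth mechanism
```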

That’s the moment we have to talk about VMware Tanzu and how it can simplify things for you.

The Tanzu portfolio provides the building blocks and steps for modernizing your existing workloads while providing the capabilities of Kubernetes. Additionally, Tanzu has broad support for containerization across the entire application lifecycle.

Tanzu gives you the possibility to build, run, manage, connect and protect applications and to achieve multi-cloud application portability with a consistent platform over any cloud – the so-called “Kubernetes grid”.

Note: I’m not talking about the product “Tanzu Kubernetes Grid” here!

I’m talking about the philosophy to put a virtual application service layer over your multi-cloud architecture, which provides a consistent application platform.

Tanzu Mission Control is a product under the Tanzu umbrella that provides central management and governance of containers and clusters across data centers, public clouds, and edge.

Conclusion

Enterprises must be able to extend the value of their cloud investments to the edge of the organization.

The edge is just one piece of a bigger picture and customers are looking for a hybrid cloud approach in a multi-cloud world.

Solutions like VMware Cloud on Dell EMC or running VCF on HPE Synergy with HPE Greenlake are only the first steps towards innovation in the private cloud and to bring the cost and operation model from the public cloud to the enterprises on-premises.

IT organizations are rather looking for ways to consume services in the future and care less about building the infrastructure or services by themselves.

The two most important differentiators for selecting an as-a-service infrastructure solution provider will be the provider’s ability to enable easy/consistent connectivity and the provider’s established software partner portfolio.

In cases where IT organizations want to host a self-managed data center or local cloud, you can expect that VMware is going to provide a new and appropriate licensing model for it.

Introduction to Alibaba Cloud VMware Solution (ACVS)

VMware’s hybrid and multi-cloud strategy is to run their Cloud Foundation technology stack with vSphere, vSAN and NSX in any private or public cloud including edge locations. I already introduced VMC on AWS, Azure VMware Solution (AVS), Google Cloud VMware Engine (GCVE) and now I would like to briefly summarize Alibaba Cloud VMware Solution (ACVS).

VMware Multi-Cloud Offerings

A lot of European companies, including one of my large Swiss enterprise accounts, have defined Alibaba Cloud as strategic for their multi-cloud vision because they do business in China. Alibaba Cloud is the largest cloud computing provider in China and is known for its cloud security, reliable and trusted offerings, and hybrid cloud capabilities.

In September 2018, Alibaba Cloud (also known as Aliyun), a Chinese cloud computing company that belongs to the Alibaba Group, announced a partnership with VMware to deliver hybrid cloud solutions that help organizations with their digital transformation.

Alibaba Cloud was the first VMware Cloud Verified Partner in China and brings a lot of capabilities and services to a large number of customers in China and Asia. Its current global infrastructure operates in 22 regions and 67 availability zones worldwide, with more regions to follow. Outside Mainland China you find Alibaba Cloud data centers in Sydney, Singapore, the US, Frankfurt and London.

As this is a first-party offering from Alibaba Cloud, this service is owned and delivered by them (not VMware). Alibaba is responsible for the updates, patches, billing and first-level support.

Alibaba Cloud is among the world’s top 3 IaaS providers according to Gartner and is China’s largest provider of public cloud services. Alibaba Cloud provides industry-leading flexible, cost-effective, and secure solutions. Services are available on a pay-as-you-go basis and include data storage, relational databases, big-data processing, and content delivery networks.

Currently, Alibaba Cloud is positioned as a Niche Player in the current Gartner Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS), alongside Oracle, IBM and Tencent Cloud.

Alibaba Gartner CIPS MQ

Note: If you would like to know more about running the VMware Cloud Foundation stack on top of the Oracle Cloud as well, I can recommend Simon Long's article; he just started to write about Oracle Cloud VMware Solution (OCVS).

This partnership between VMware and Alibaba Cloud has the same goals as other VMware hybrid cloud solutions like VMC on AWS, OCVS or GCVE: to give enterprises the possibility to meet their cloud computing needs, the flexibility to move existing workloads easily from on-premises to the public cloud, and high-speed access to the public cloud provider's native services.

ACVS vSphere Architecture

In April 2020, Alibaba Cloud and VMware finally announced the general availability of Alibaba Cloud VMware Solution, initially for the Mainland China and Hong Kong regions. This enables customers to seamlessly move existing vSphere-based workloads to the Alibaba Cloud, where VMware Cloud Foundation runs on top of Aliyun's infrastructure.

As is already common with such VMware-based hybrid cloud offerings, this lets you move from a Capex to an Opex-based cost model built on subscription licensing.

Joint Development

X-Dragon (Shenlong in Chinese) is a proprietary bare metal server architecture developed by Alibaba Cloud for its cloud computing requirements. Built around a custom X-Dragon MOC card, it offers the direct access to CPU and RAM resources, without virtualization overhead, that bare metal servers provide. X-Dragon, the virtualization technology behind Alibaba Cloud Elastic Compute Service (ECS), is now in its third generation; the first two generations were based on Xen and KVM.

X-Dragon NIC

VMware works closely together with the Alibaba Cloud engineers to develop a VMware SDDC (software-defined data center based on vSphere and NSX) which runs on this X-Dragon bare metal architecture.

The core of the MOC NIC is the X-Dragon chip. The X-Dragon software system runs on the X-Dragon chip to provide virtual private cloud (VPC) and EBS disk capabilities. It offers these capabilities to ECS instances and ECS bare metal instances through VirtIO-net and VirtIO-blk standard interfaces.

Note: vSAN support is still on the roadmap and will come later (no date committed yet). Because X-Dragon is a proprietary architecture, running vSAN on it requires official certification.

Project Monterey

Have you seen VMware's announcement at VMworld 2020 about Project Monterey, which allows you to run VMware Cloud Foundation on a SmartNIC? For me, this looks similar to the X-Dragon architecture 😉

Project Monterey VMware Cloud Foundation Use Cases

Data Center extension or retirement. You can scale data center capacity in the cloud on demand if, for example, you don't want to invest in your on-premises environment anymore. In case you just refreshed your current hardware, another use case would be the extension of your on-premises vSphere cloud to Alibaba Cloud.

ACVS Disaster Recovery

Disaster Recovery and data protection. Here we'll find different scenarios like recovery (replication) or backup/archive (data protection) use cases. You can use your ACVS private clouds as a disaster recovery (DR) site for your on-premises workloads. This DR solution would be based on VMware Site Recovery Manager (SRM), which can also be used together with HCX. At the moment Alibaba Cloud offers 9 regions for DR sites.

Cloud migrations or consolidation. If you want to start with a lift & shift approach to migrate specific applications to the cloud, then ACVS is the right choice for you. Maybe you want to refresh your current infrastructure and need to relocate or migrate your workloads in an easy and secure way? Another perfect scenario would be the consolidation of different vSphere-based clouds.

ACVS Migration to Alibaba Cloud

Multicast Support with NSX-T

Like with Microsoft Azure and Google Cloud, an Alibaba Cloud ECS instance, or a VPC in general, doesn't support multicast and broadcast. That is one specific reason why customers need to run NSX-T on top of their public cloud provider's global cloud infrastructure.

Connectivity Options

For (multi-)national companies Alibaba Cloud has different enterprise-class networking offerings to connect different sites or regions in a secure and reliable way.

Cloud Enterprise Network (CEN) is a highly available network built on the high-performance, low-latency global private network provided by Alibaba Cloud. By using CEN, you can establish private network connections between Virtual Private Cloud (VPC) networks in different regions, or between VPC networks and on-premises data centers. In Europe, CEN is also available in Germany (Frankfurt) and the UK (London).

Alibaba Cloud Cloud Enterprise Network

Alibaba Cloud Express Connect helps you build internal network communication channels that feature enhanced cross-network communication speed, quality, and security. If your on-premises data center needs to communicate with an Alibaba Cloud VPC through a private network, you can apply for a dedicated physical connection interface from Alibaba Cloud to establish a physical connection between the on-premises data center and the VPC. Through physical connections, you can implement high-quality, highly reliable, and highly secure internal communication between your on-premises data center and the VPC. 

Alibaba Cloud Express Connect

ACVS Architecture and Supported VMware Cloud Services

Let’s have a look at the ACVS architecture below. On the left side you see the Alibaba Cloud with the VMware SDDC stack loaded onto the Alibaba bare metal servers with NSX-T connected to the Alibaba VPC network.

This VPC network allows customers to connect their on-premises network and to have direct access to Alibaba Cloud's native services.

Customers have the advantage of using vSphere 7 with Tanzu Kubernetes Grid and can leverage their existing tool set from the VMware Cloud Management Platform, like vRealize Automation (native vRA integration with Alibaba Cloud is still a roadmap item) and vRealize Operations.

Alibaba Cloud VMware Solution Architecture

The right side of the architecture shows the customer data centers, which run as a vSphere-based cloud on-premises, managed by the customers themselves or as a managed service offering from any service provider. In between, the red lines show the different connectivity options, like Alibaba Direct Connect, SD-WAN or VPN connections, together with technologies such as NSX-T layer 3 VPN, HCX and Site Recovery Manager (SRM).

To load balance the different application services across the different vSphere-based or native clouds, you can use NSX Advanced Load Balancer (aka Avi) to configure GSLB (Global Server Load Balancing) for high availability reasons.

Because the entire stack on top of Alibaba Cloud’s infrastructure is based on VMware Cloud Foundation, you can expect to run everything in VMware’s product portfolio like Horizon, Carbon Black, Workspace ONE etc. as well.

You can also deploy AliCloud Virtual Edges with VMware SD-WAN by VeloCloud.

Node Specifications

The Alibaba Cloud VMware Solution offering is a little bit special, and I hope I was able to translate the Chinese presentations correctly.

First, you have to choose the number of hosts, which gives you specific options.

1 Host (for testing purposes): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter

2+ Hosts (basic type): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter

3+ Hosts (flexibility and elasticity): vSphere Enterprise Plus, NSX Data Center Advanced, vCenter, (vSAN Enterprise)

Site Recovery Manager, vRealize Log Insight and vRealize Operations need to be licensed separately as they are not included in the ACVS bundle.

The current ACVS offering has the following node options and specifications (maximum 32 hosts per VPC):

ACVS Node Specifications

All sixth-generation ECS instances come equipped with Intel® Xeon® Platinum 8269CY processors. These processors were customized based on the Cascade Lake microarchitecture, which is designed for the second-generation Intel® Xeon® Scalable processors. They have a turbo boost with an increased burst frequency of 3.2 GHz and can provide up to a 30% increase in floating-point performance over fifth-generation ECS instances.

Component                     Version   License
vCenter                       7.0       vCenter Standard
ESXi                          7.0       Enterprise Plus
vSAN (support coming later)   n/a       Enterprise
NSX Data Center (NSX-T)       3.0       Advanced
HCX                           n/a       Enterprise

Note: Customers have the possibility to install any VIBs themselves with full console access. This allows the customer to assess the risk and performance impacts themselves and install any needed 3rd-party software (e.g. Veeam, Zerto etc.).

If you want to know more about how to accelerate your multi-cloud digital transformation initiatives in Asia, you can watch the VMworld presentation from this year. I couldn't find any other presentation (except the exact same recording on YouTube) and believe that this article is the first publicly available summary of Alibaba Cloud VMware Solution. 🙂

VMware Cloud Foundation And The Cloud Management Platform Simply Explained

I think that it is pretty clear what VMware Cloud Foundation (VCF) is and what it does. And it is also clear to a lot of people how or where you could use VCF. But very few organizations and customers know why they should or could use Cloud Foundation and what its purpose is. This article will give you a better understanding of the "hidden" value that VMware Cloud Foundation has to offer.

My last contributions focused on VMware’s multi-cloud strategy and how they provide consistency in any layer of their vision:

VMware Strategy

The VMware messaging is clear. By deploying consistent infrastructure across clouds, customers gain consistent operations and intrinsic security in hybrid or multi-cloud operating models. The net result is that the intricacies of infrastructure fade, allowing IT to focus more on deploying applications and providing secure access to those applications and data from any device.

The question is now, what are the building blocks and how can you fulfill this strategy? And why is VMware Cloud Foundation really so important?

Cloud Computing

To answer these questions we have to start with the basics and look at the NIST definition of cloud computing first:

Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction. This cloud model promotes availability and is composed of five
essential characteristics, three service models, and four deployment models.

Data Center Cloud Computing

Let’s start with the three service models and the capabilities each is aiming to provide:

  • Software as a Service (SaaS). Centrally hosted software, licensed on a subscription basis and also known as web-based or hosted software. The consumer of this service does not manage or control the underlying cloud infrastructure (servers, network, storage, operating system).
  • Platform as a Service (PaaS). This application platform allows the consumer to build, run and manage applications without the complexity of building the application infrastructure needed to launch them. As with SaaS, the consumer doesn't manage or control the underlying cloud infrastructure, but has control over the deployed applications.
  • Infrastructure as a Service (IaaS). IaaS provides the customer fundamental resources like compute, storage and network on which they can deploy and run software in virtual machines or containers. The consumer doesn't manage the underlying infrastructure, but manages the virtual machines including the operating systems and applications.
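As a quick reference, the management split that these three service models describe can be expressed as a small responsibility matrix (a sketch based on the definitions above; the layer granularity is my own):

```python
# Responsibility matrix for the three service models described above:
# True = managed by the cloud provider, False = managed by the consumer.

LAYERS = ["network", "storage", "servers", "operating_system",
          "middleware", "application"]

RESPONSIBILITY = {
    # IaaS: provider runs the infrastructure, consumer runs OS + apps
    "IaaS": {"network": True, "storage": True, "servers": True,
             "operating_system": False, "middleware": False,
             "application": False},
    # PaaS: consumer only controls the deployed application
    "PaaS": {"network": True, "storage": True, "servers": True,
             "operating_system": True, "middleware": True,
             "application": False},
    # SaaS: everything is hosted and managed by the provider
    "SaaS": {layer: True for layer in LAYERS},
}

def consumer_manages(model: str) -> list:
    """Layers the consumer is responsible for under a given model."""
    return [l for l in LAYERS if not RESPONSIBILITY[model][l]]

print(consumer_manages("IaaS"))  # ['operating_system', 'middleware', 'application']
```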

Deployment Models

There are four cloud computing deployment models defined today, and mostly we talk about only three of them (I excluded the community cloud). Let's consult the VMware glossary for each definition.

  • Private Cloud. Private cloud is an on-demand cloud deployment model where cloud computing services and infrastructure are hosted privately, often within a company's own data center using proprietary resources, and are not shared with other organizations. The company usually oversees the management, maintenance, and operation of the private cloud. A private cloud offers an enterprise more control and better security than a public cloud, but managing it requires a higher level of IT expertise.
  • Public Cloud. Public cloud is an IT model where on-demand computing services and infrastructure are managed by a third-party provider and shared with multiple organizations using the public Internet. Public cloud service providers may offer cloud-based services such as infrastructure as a service, platform as a service, or software as a service to users for either a monthly or pay-per-use fee, eliminating the need for users to host these services on site in their own data center.
  • Hybrid Cloud. Hybrid cloud describes the use of both private cloud and public cloud platforms, which can work together on-premises and off-site to provide a flexible mix of cloud computing services. Integrating both platforms can be challenging, but ideally, an effective hybrid cloud extends consistent infrastructure and consistent operations to utilize a single operating model that can manage multiple application types deployed in multiple environments.

Hybrid Cloud Model

Multi-Cloud is a term for the use of more than one public cloud service provider for virtual data storage or computing power resources, with or without any existing private cloud and on-premises infrastructure. A multi-cloud strategy not only provides more flexibility for which cloud services an enterprise chooses to use, it also reduces dependence on just one cloud vendor. Multi-cloud service providers may host the three main types of services: IaaS, PaaS and SaaS.

With IaaS, the cloud provider hosts servers, storage and networking hardware with accompanying services, including backup, security and load balancing. PaaS adds operating systems and middleware to their IaaS offering, and SaaS includes applications so that nothing is hosted on a customer’s site. Cloud providers may also offer these services independently.

Note: It is very important to understand which cloud computing deployment is the right one for your organization and which services your IT needs to offer to your internal or external customers.

Essential Characteristics

If you look at the five essential cloud computing characteristics from the NIST (National Institute of Standards and Technology), you’ll find attributes which you would also consider as natural requirements for any public cloud (e.g. Azure, Google Cloud Platform, Amazon Web Services):

  • On-demand self-service. A consumer can unilaterally provision computing capabilities,
    such as server time and network storage, as needed automatically without
    requiring human interaction with each service’s provider.
  • Broad Network Access. Capabilities are available over the network and accessed through
    standard mechanisms that promote use by heterogeneous thin or thick client
    platforms (e.g. PCs, laptops, smartphones, tablets).
  • Resource Pooling. The provider's computing resources are pooled to serve multiple
    consumers using a multi-tenant model, with different physical and virtual
    resources dynamically assigned and reassigned according to consumer demand.
    There is a sense of location independence in that the customer generally has no
    control or knowledge over the exact location of the provided resources but may be
    able to specify location at a higher level of abstraction (e.g., country, state, or
    data center).
  • Rapid Elasticity. Capabilities can be rapidly and elastically provisioned, in some cases
    automatically, to quickly scale out and rapidly released to quickly scale in. To the
    consumer, the capabilities available for provisioning often appear to be unlimited
    and can be purchased in any quantity at any time.
  • Measured Service. Cloud systems automatically control and optimize resource use by
    leveraging a metering capability at some level of abstraction appropriate to the
    type of service (e.g., storage, processing, bandwidth, and active user accounts).
    Resource usage can be monitored, controlled, and reported providing
    transparency for both the provider and consumer of the utilized service.
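The "measured service" characteristic in particular is easy to illustrate: meter resource use per consumer at some level of abstraction and report it transparently. A minimal sketch (class and method names are my own):

```python
# Minimal sketch of the "measured service" characteristic above:
# meter resource usage per tenant at some abstraction level and
# report it transparently to provider and consumer alike.

from collections import defaultdict

class UsageMeter:
    def __init__(self):
        # tenant -> resource type -> accumulated usage
        self._usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant: str, resource: str, amount: float) -> None:
        self._usage[tenant][resource] += amount

    def report(self, tenant: str) -> dict:
        """Transparent usage report for one consumer."""
        return dict(self._usage[tenant])

meter = UsageMeter()
meter.record("team-a", "storage_gb_hours", 120.0)
meter.record("team-a", "vcpu_hours", 8.0)
meter.record("team-a", "vcpu_hours", 4.0)

print(meter.report("team-a"))  # {'storage_gb_hours': 120.0, 'vcpu_hours': 12.0}
```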

And besides the five essentials, you look for security, flexibility and reliability. With all these properties in mind, you would follow the same approach today if you were building a new data center or modernizing your current cloud infrastructure: a digital foundation, or platform, which can adapt to any change and serve as expected.

5 Characteristics of Cloud Computing

This is why VMware has built VMware Cloud Foundation! This is why we need VCF, which is the core of VMware’s multi-cloud strategy.

To be able to meet the above characteristics/criteria, you need a set of software-defined components for compute, storage, networking, security and cloud management in private and public environments – also called the software-defined data center (SDDC). VCF makes operating the data center fundamentally simpler by bringing the ease and automation of the public cloud in-house, deploying a standardized and validated architecture with built-in lifecycle management and automation capabilities for the entire cloud stack.

As automation is integrated from the beginning, and not something you would bolt on later, you are able to adapt to changes and already have one of the elements in place to achieve the needed security requirements. Automation is key to providing security through the whole stack.

In short, Cloud Foundation gives you the possibility and the right tools to build your private cloud based on public cloud characteristics and also an easy path towards a hybrid cloud architecture. Consider VCF as VMware's cloud operating system, which enables a hybrid cloud based on a common and compatible platform that stretches from on-premises to any public cloud, or from one public cloud to another.

Note: VMware Cloud Foundation can also be consumed as a service (aka SDDC as a service) through their partners like Google, Amazon Web Services, Microsoft and many more.

Why Hybrid or Multi-Cloud?

A hybrid cloud with a consistent infrastructure approach enables organizations to use the same tools, policies and teams to manage the cloud infrastructure, which hosts the virtual machines and containers.

Companies want to have the flexibility to deploy and manage new and old applications in the right cloud. They are looking for an architecture, which allows them to migrate on-premises workloads to the public cloud and modernize these applications (partially or completely) with the cloud provider’s native services.

Customers have changed their perception from cloud-first to a cloud-appropriate strategy where they choose the right cloud for each specific application. And to avoid a vendor lock-in, you suddenly see two or three additional public clouds joining the cloud architecture, which by definition now is a multi-cloud environment.

Now you have a mix of a VMware-based cloud with AWS, Azure and GCP, for example. It is possible to build new applications in one of the VMware "SDDC as a service" offerings (e.g. VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine), but customers also want to deploy and use cloud-native service offerings.

Multi-Cloud Reality

How do you deal with this challenge, with the different architectures, operational inconsistencies, varying skill sets of your people, different management and security controls, and incompatible technology formats?

Well, the first answer could be that your IT needs to be able to treat all clouds and applications consistently and run the VCF stack ideally in any (private or public) cloud.

But this is not where I want to go. There is something else we need to transform in this multi-cloud environment.

So far, we only have consistent infrastructure with consistent operations because of VMware Cloud Foundation.

  • What does your deployment and automation model for your virtual machines and containers look like now?
  • How would you automate the provisioning of these workloads and the needed application components?

With your current tool set you have to speak four "languages" via the graphical management consoles or APIs (application programming interfaces).

In an international organization, where people come from different countries and speak different languages, we usually agree on English as the corporate language. VMware follows the same approach in this case and puts an abstraction layer above the clouds that exposes the APIs.
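To illustrate the "corporate language" analogy, the sketch below models the abstraction layer as a classic adapter pattern: callers speak one neutral provisioning interface, and per-cloud adapters translate into each provider's own "language". All class and method names are my own; the real implementation of this idea is VMware's cloud management platform, not this code:

```python
# Illustrative sketch of the abstraction-layer idea described above:
# one neutral provisioning interface, with per-cloud adapters that
# translate into each provider's own "language". The adapters are
# stand-ins; a real CMP talks to the actual cloud APIs.

from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def provision_vm(self, name: str, cpus: int, memory_gb: int) -> str: ...

class VSphereAdapter(CloudAdapter):
    def provision_vm(self, name, cpus, memory_gb):
        return f"vsphere: cloned {name} ({cpus} vCPU / {memory_gb} GB)"

class AWSAdapter(CloudAdapter):
    def provision_vm(self, name, cpus, memory_gb):
        return f"aws: ran instance {name} ({cpus} vCPU / {memory_gb} GB)"

class AzureAdapter(CloudAdapter):
    def provision_vm(self, name, cpus, memory_gb):
        return f"azure: created vm {name} ({cpus} vCPU / {memory_gb} GB)"

class CloudManagementPlatform:
    """The single 'corporate language' callers use for every cloud."""
    def __init__(self):
        self._clouds = {"vsphere": VSphereAdapter(),
                        "aws": AWSAdapter(),
                        "azure": AzureAdapter()}

    def deploy(self, cloud: str, name: str, cpus: int, memory_gb: int) -> str:
        return self._clouds[cloud].provision_vm(name, cpus, memory_gb)

cmp_layer = CloudManagementPlatform()
print(cmp_layer.deploy("aws", "web-01", 2, 8))
```

The point of the pattern: consumers never touch a provider-specific API directly, which is exactly what makes a consistent team and permission structure possible on top.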

VMware Cloud-Agnostic CMP

This helps you manage the different objects and workloads you have deployed in any cloud. You no longer have to use your individual cloud accounts, and you can define a consistent, centralized team and permission structure as well.

On top of this cloud-agnostic API you can provide all the means for a self-service catalog, use programmable provisioning, and provide the operations (e.g. cost or log management) and visibility (powered by artificial intelligence where needed) tool set for applications and networks to build, run, manage, connect and protect your applications.

Your applications, which are part of the different main services (IaaS, PaaS, SaaS) and most probably many other services (like DaaS, DBaaS, FaaS, DRaaS, CaaS, Backup as a Service, MongoDB as a Service etc.) you are going to offer to your internal consumers or customers, are deployed via this cloud abstraction layer.

VMware CMP and Services

This abstraction layer forms the VMware cloud management platform (CMP), which consists of the vRealize Suite and VMware Cloud Services. This CMP also provides you with the necessary interfaces and integration options to other existing backend services or tools like a ticketing system, change management database (CMDB), IP address management (IPAM) and so on.

In short, this means that the VMware cloud operating model treats each private or public cloud as a landing zone.

VMware Cloud Foundation Is More About Business Value

Yes, Cloud Foundation is a very technical topic and most people see it only like that. But the hidden and real values are the ones nobody sees or talks about: the business value, and the fact that you can operate your private cloud with the same ease as a public cloud provider and follow the same principles for any cloud delivery model.

On-demand self-service is offered through the lifecycle management capabilities included in VCF, in combination with the cloud-agnostic API of VMware's cloud management platform.

Broad network access starts with VMware’s digital workspace offerings and ends in the data center, at the edge or any cloud with their cloud-scale networking portfolio, which includes software-defined networking (SDN), software-defined WAN (SD-WAN) and software-defined application delivery controller (SD-ADC).

Multi-tenancy and resource pooling can only be achieved with automation and security, two elements that are naturally integrated into Cloud Foundation. The SDDC management component of VCF also gives you the technical capability to create your own regions and availability zones, something public cloud providers let you choose as well.

Rapid elasticity is provided by the hardware-agnostic approach (for the physical servers in your data centers) VMware offers its customers. Besides that, all cloud computing components are software-defined and can run on-premises, at the edge or in any public cloud, which allows you to quickly scale out and scale in according to your needs.

Service usage and resource usage (compute, storage, network) are automatically controlled and optimized by leveraging some level of abstraction of all different clouds. Resource usage can be monitored and reported in a transparent way for the service provider and the consumer.

VMware Multi-Cloud Services

In addition to that, VMware gives customers the choice to consume the VMware operations tools on-premises or as a SaaS offering hosted in the cloud. With perpetual and subscription licenses you can define your own pay-per-use or pay-as-you-go pricing options and decide whether you want to move from a CAPEX to an OPEX cost model. The same will at some point be true for VCF and VCF in the public cloud as well: a single universal license which allows you to run the different components and tools everywhere.

Customers need the flexibility to build the applications in any environment, matching the needs of the application and the best infrastructure. They need to manage and operate different environments as one, as efficiently as possible, with common models of security and governance.

Customers need to shift workloads seamlessly between cloud providers (also known as cross-cloud workload mobility) without the cost, complexity or risk of rewriting applications, rebuilding processes or retraining IT resources.

And that's my simple explanation of VMware Cloud Foundation and why it is so important and the core of the VMware (multi-cloud) strategy.

Let me know what you think! 🙂

A big thank you to my colleagues Christian Dudler, Gavin Egli and Danny Stettler who reviewed my content and illustrations.

Google Cloud VMware Engine (GCVE)

In June 2020, VMware and Google announced that Google Cloud VMware Engine (GCVE) is generally available. Almost exactly one year earlier, the market received the information that VMware's Cloud Foundation (vSphere, vSAN and NSX) stack would come to Google Cloud.

With this milestone VMware is now present on top of all the so-called “big three” hyperscalers.

GCVE has the same goals as the other similar offerings like VMware Cloud on AWS or Azure VMware Solution and belongs to VMware's multi-cloud strategy: to seamlessly migrate and run applications in the public cloud. In this case, in Google Cloud! You run your applications in the public cloud exactly the same way you already do now with your on-premises VMware environment. With the very important addition that you have high-speed access to Google Cloud services like Cloud SQL, Cloud Storage, big data or AI/ML services.

To be able to run VMware workloads on top of the Google Cloud global infrastructure, Google acquired CloudSimple (a company they had already partnered with) in November 2019.

At the time of writing, the VMware hybrid cloud experience on Google Cloud is sold, operated and supported by Google and their partners.

Many customers are already looking at this very interesting offer, which is going to be available in more regions by the end of 2020. A few customers are already using the joint offering: Google just published a customer reference story about Deutsche Börse Group, a large international financial organization, which extended their on-premises environment to Google Cloud with Google Cloud VMware Engine. One of the reasons why Deutsche Börse went for this vSphere-based cloud approach was to keep migrations to the cloud easy. I expect we will hear more about this success story at VMworld 2020.

Cloud Migration and Workload Mobility

A lot of customers underestimate the amount of work, time and costs involved in refactoring or re-platforming applications and the overall challenges when it comes to migrations from on-prem to the cloud. To build this secure hybrid cloud extension with GCVE, you’ll need VMware HCX, which is included in the GCVE offering.

There are different options available to connect both worlds:

GCVE Connectivity Options

  • VPN Gateway for point-to-point connections, used for the secure admin access to vCenter. Useful for the initial setup of the GCVE environment.
  • Cloud VPN for site-to-site connections, a secure layer 3 connection over the internet. This is one of the lower-cost options for use cases that don’t require high bandwidth.
  • Dedicated Cloud Interconnect with a direct traffic flow to Google over 10 Gbps or 100 Gbps circuits and connection capacities from 50 Mbps to 50 Gbps. This direct connection is required for HCX and the preferred connectivity option for customers requiring high speed and low latency.
  • Partner (Cloud) Interconnect is another option of a Cloud Interconnect, where your traffic flows through one of the supported service providers (e.g. Colt, Equinix, BT, e-shelter, Verizon, InterCloud, Interxion, Megaport).

Note: One unique feature of GCVE is the ability to route between different GCVE environments in the same region, without the need for additional configuration. 
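To get a feeling for what these bandwidth figures mean for a bulk migration, here is a rough back-of-the-envelope sketch with illustrative numbers. It deliberately ignores the WAN optimization, deduplication and compression that HCX applies in practice, so real migrations will usually be faster than this estimate.

```python
# Rough estimate of bulk-migration transfer time over a given link.
# Illustrative only; HCX WAN optimization is not modeled here.

def transfer_time_hours(data_tb: float, link_gbps: float,
                        efficiency: float = 0.7) -> float:
    """Hours to move `data_tb` terabytes over a `link_gbps` link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    data_bits = data_tb * 1e12 * 8              # TB -> bits (decimal units)
    effective_bps = link_gbps * 1e9 * efficiency
    return data_bits / effective_bps / 3600     # seconds -> hours

# 50 TB over a 10 Gbps Dedicated Interconnect at 70% efficiency:
print(round(transfer_time_hours(50, 10), 1))    # ~15.9 hours
```

Even on a dedicated high-speed circuit, moving tens of terabytes takes hours, which is why the low-bandwidth VPN options are only suitable for management access and small workloads.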

Use Cases

If you have already made yourself familiar with the hybrid cloud approach, these use cases shouldn’t be new to you.

Data Center extension or retirement. You can scale data center capacity in the cloud on-demand, for example if you don’t want to invest in your on-premises environment anymore. If you just refreshed your current hardware, another use case would be the extension of your on-premises vSphere cloud to Google Cloud.

Disaster Recovery and data protection. Here we’ll find different scenarios like recovery (replication) or backup/archive (data protection) use cases. You can still use your existing 3rd-party tools from Zerto or Veeam to replace or complement existing DR locations and leverage the Cloud Storage service. You can also use your GCVE private clouds as a disaster recovery (DR) site for your on-premises workloads. This DR solution would be based on VMware Site Recovery Manager (SRM), which can also be used together with HCX.

Cloud migrations or consolidation. If you want to start with a lift & shift approach to migrate specific applications to the cloud, then GCVE is definitely right for you. Maybe you want to refresh your current infrastructure and need to relocate or migrate your workloads in an easy and secure way? Another perfect scenario would be the consolidation of different vSphere-based clouds.

Application modernization. Re-architecting or refactoring applications is not that easy. Most customers start with a partial approach to modernize their applications and leverage cloud-native services (e.g. databases, AI/ML engines).

Interesting: Did you know that Google’s on-prem GKE (Google Anthos) is running on vSphere?

VMware Horizon on VMware Engine

The advantages of a public cloud like Google Cloud are the “endless” capacity, agility and high-bandwidth connections. These characteristics are very important for a virtual desktop infrastructure (VDI), especially during disaster scenarios, when onboardings have to happen fast or when you look for on-demand growth.

Another common example is a merger & acquisition use case, where the main infrastructure doesn’t have the physical resources needed to onboard the new company and its employees.

Something like this always has to happen as easily and quickly as possible. Running virtual desktops in Google Cloud VMware Engine can help in such situations. Together with VMware Horizon, organizations can install a VDI environment in GCVE and connect it to their Horizon on-premises infrastructure using the Cloud Pod Architecture (CPA).

Note: When migrating applications to the cloud (GCVE), it is a best practice to keep the virtual desktop close to the application, which is a general use case we see when talking about application locality.

Horizon Global Pod GCVE

With the release of Horizon 2006 (aka Horizon 8) it is also possible to choose “Google Cloud” as deployment option during the connection server installation.


In case you need a load balancer (for your Horizon components and in general) for your on-premises environment and the public cloud, have a look at NSX Advanced Load Balancer.

GCVE Node Specs

When planning your GCVE resource needs, be aware of the following specifications and limits:

CPU: 2× Intel Xeon Gold 6240 (Cascade Lake) @ 2.6 GHz, 36 cores, 72 hyper-threads

Memory: 768 GB

Storage (vSAN): 2 × 1.6 TB (3.2 TB) NVMe (Cache), 6 × 3.2 TB (19.2 TB) NVMe (Data)

Number of nodes required to create a private cloud: 3 (up to 64 hosts per private cloud)

Number of nodes allowed in a cluster on a private cloud: 16

3rd party tools compatibility: Yes, you can use existing tools (elevated privileges allow you to install 3rd party software)

Interesting facts: It only takes about half an hour to spin up your private cloud with three nodes! The addition of a new node takes approximately 15 minutes.
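Based on the per-node specs above, you can quickly estimate the raw aggregate resources of a private cloud. The sketch below is a simple illustration; usable vSAN capacity depends on the chosen storage policy (FTT/RAID level) and is not modeled.

```python
# Back-of-the-envelope aggregate capacity of a GCVE private cloud,
# based on the published per-node specs (36 cores, 768 GB RAM,
# 19.2 TB raw NVMe data tier per node).

NODE_SPECS = {"cores": 36, "ram_gb": 768, "raw_data_tb": 19.2}

def private_cloud_capacity(nodes: int) -> dict:
    """Raw aggregate resources for a private cloud of `nodes` hosts."""
    if not 3 <= nodes <= 64:
        raise ValueError("a private cloud requires 3 to 64 nodes")
    return {k: round(v * nodes, 1) for k, v in NODE_SPECS.items()}

# Minimum three-node private cloud:
print(private_cloud_capacity(3))
# {'cores': 108, 'ram_gb': 2304, 'raw_data_tb': 57.6}
```

Remember that a cluster within a private cloud is limited to 16 nodes, so larger private clouds consist of multiple clusters.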

GCVE Elevated Privileges

Software License and Versions

Please find below the current software versions and licenses used for the GCVE offering (purchased with a 1- or 3-year commitment). The listed software versions are fixed and all updates are managed by Google. Google is responsible for the lifecycle management of the VMware software, which includes ESXi, vCenter and NSX.

  • vCenter 6.7 U3 – vCenter Standard license
  • ESXi 6.7 U3 – Enterprise Plus license
  • vSAN 6.7 U3 – Enterprise license
  • NSX Data Center (NSX-T) 2.5.1 – Advanced license
  • HCX 3.5.3 – Advanced license

Shared Responsibilities

Google Cloud VMware Engine comes with all the components you need to securely run VMware natively in a dedicated private cloud. Google takes care of the infrastructure (service) and their native service integrations. As a customer you only need to take care of the virtual machines or containers with your applications and data. Besides that, you also need to make sure that your configurations, policies, network port groups, authentication and capacity management are set up properly.

GCVE Shared Responsibilities

If you want to know and learn more about Google Cloud VMware Engine, have a look at the following resources: 

VMware Multi-Cloud and Hyperscale Computing


In my previous article Cross-Cloud Mobility with VMware HCX, I already briefly touched on VMware’s hybrid and multi-cloud vision and strategy. I mentioned that VMware is coming from the on-premises world if you compare them with AWS, Azure or Google, but has the same “consistent infrastructure with consistent operations” messaging – and that the difference is that VMware is not only hardware-agnostic, but even cloud-agnostic. To abstract the technology format and infrastructure in the public cloud, their idea is to run VMware Cloud Foundation (VCF) everywhere (e.g. Azure VMware Solution): on-premises on top of any hardware, and in the cloud on the global infrastructure of any hyperscaler like AWS, Azure, Google, Oracle, IBM or Alibaba. Or you can run your workloads in a VMware cloud provider’s cloud based on VCF. That’s the VMware multi-cloud.

The goal of this article is not to compare features from different vendors and products, but to give you a better idea of why multi-cloud is becoming a strategic priority for most enterprises and why VMware could be the right partner for your journey to the cloud.

To get started, let’s get an understanding of what the three big hyperscalers are doing when it comes to hybrid or multi-cloud.

Microsoft

To bring Azure services to your data center and to benefit from a hybrid cloud approach, you would probably go for Azure Stack to run virtualized applications on-premises. Their goal is to build consistent experiences in the cloud and at the edge, even for scenarios where you have no internet connection. This would be by VMware’s definition a typical hybrid cloud architecture.

Multi-cloud refers to the use of multiple public cloud service providers in a multi-cloud architecture, whereas hybrid cloud describes the use of public cloud in conjunction with private cloud. In a hybrid cloud environment, specific applications leverage both the private and public clouds to operate. In a multi-cloud environment, two or more public cloud vendors provide a variety of cloud-based services to a business.

With the announcement of Azure Arc at MS Ignite 2019, Microsoft introduced a new product which “simplifies complex and distributed environments across on-premises, edge and multi-cloud”. Besides the fact that you can run Azure data services anywhere, it gives you the possibility to govern and secure your Windows servers, Linux servers and Kubernetes (K8s) clusters across different clouds. Arc can also deploy and manage K8s applications consistently (from source control).

Azure Arc Infographic

You could summarize it like this: Microsoft is bringing Azure infrastructure and services to any infrastructure. It’s not necessary to understand the technical details of Azure Stack and Azure Arc. More important are the messaging and the strategy. It’s about managing and securing Windows/Linux servers, virtual machines and K8s clusters everywhere, all with their Azure Resource Manager (ARM). Arc ensures that the right configurations and policies are in place to fulfill governance requirements across clouds. Run your workloads where you need them and where it makes sense, even if it isn’t Azure.

Google Anthos

Google open-sourced their own implementation of containers to the Linux kernel around 2006/2007. It was called cgroups, which stands for control groups. Docker appeared in 2013 and provided some nice tooling for containers. Over the following years, microservices were used more and more to divide monoliths into different pieces and services. Because of the growing number of containers, Google saw the need to make this technology easy to manage and orchestrate for everyone. That was six years ago, when they released Kubernetes.

By the way, two of the three Kubernetes founders, namely Joe Beda and Craig McLuckie, have been working for VMware since their company Heptio was acquired by VMware in November 2018.

Today, Kubernetes is the standard way to run containers at scale.

We know by now that the future is hybrid or even multi-cloud, and not public cloud only. Google realized that years ago, too. Besides that, a lot of enterprises have learned that moving to the cloud and re-engineering the whole application at the same time mostly fails. In other words, moving applications out of your on-premises data center, refactoring them at the same time and running them in the public cloud is not that easy.

Why isn’t it easy? Because you are re-engineering the whole application, have to take care of other application and network dependencies, think about security and governance, and have to train your staff to cope with all the new management consoles and processes.

Google’s answer and approach here is to modernize applications on-premises and then move them to the cloud after the modernization has happened. They say that you need a platform that runs in the cloud and in your data center. A platform that runs consistently across different environments – same technology, same tools and policies everywhere.

This platform is called Google Anthos. Anthos is 100% software-defined and (hardware) vendor-agnostic. To deliver their desired developer experience on-prem as well, they rely on VMware. This is GKE running on-prem on top of vSphere:

Anthos vSphere on-prem

Amazon Web Services

The last solution I would like to mention is AWS Outposts, which is a fully managed service that extends their AWS infrastructure, services and tools to any data center for a “truly consistent hybrid experience”. What are the AWS services running on Outposts?

  • Containers (EKS)
  • Compute (EC2)
  • Storage (EBS)
  • Databases (Amazon RDS)
  • Data Analytics (Amazon EMR)
  • Different tools and APIs

AWS Outposts are delivered as an industry-standard 42U rack. The Outpost rack is 80 inches (203.2cm) tall, 24 inches (60.96cm) wide, and 48 inches (121.92cm) deep. Inside we have hosts, switches, a network patch panel, a power shelf, and blank panels. It has redundant active components including network switches and hot spare hosts.

If you visit the Outposts website, you’ll find the following information:

Coming soon in 2020, a VMware variant of AWS Outposts will be available. VMware Cloud on AWS Outposts delivers a fully managed VMware Software-Defined Data Center (SDDC) running on AWS Outposts infrastructure on premises.

VMC on AWS Outposts is for customers who want to use the same VMware software conventions and control plane as they have been using for years. It can be seen as an extension of the regular VMC on AWS offering which is now made available on-premises (on top of the AWS Outposts infrastructure) for a hybrid approach.

VMC on AWS Outposts

What do all these options have in common? It is always about consistent infrastructure with consistent operations: one platform in the cloud and on-premises in your data center or at the edge. Most of today’s hybrid cloud strategies rely on the fact that migrations to the cloud are not easy and fail a lot, which is why we still have 90% of all workloads running on-premises. We are going to have many millions more containers in the future, which need to be orchestrated with Kubernetes, but virtual machines are not just disappearing or being replaced tomorrow.

My conclusion here is that every hyperscaler sees cloud-native in our (near) future and wants to provide their services in the cloud and on-prem, so that customers can build their new applications with a service-oriented architecture or partially modernize existing monoliths (big legacy applications) on the same technology stack.

Consistent Infrastructure & Consistent Operations

All hyperscalers also mention that you have to take care of different management and security consoles, skill sets and tools in general. Except for Microsoft with Azure Arc, nobody else has a “real” multi-cloud solution or platform. And I want to highlight that even Azure Arc only covers some servers and Kubernetes clusters and takes care of governance.

Let’s assume you have a hybrid cloud setup in place. Your current project requirements tell you to develop new applications in the Google Cloud using GKE. That’s fine. Your current on-premises data centers run with VMware vSphere for virtualization. Tomorrow, you have to think about edge computing for specific use cases where AI and ML-based workloads are involved. Then you decide to go for Azure and create a hybrid architecture with Azure Stack and Arc. Now you are using two different public cloud providers, one with their specific hybrid cloud offering and also VMware vSphere on-premises.

What are you going to do now? How do you manage and secure all these different clouds and technologies? Or do you think about migrating all the application workloads from on-prem to GCP and Azure? Or do you start with Anthos now for other use cases and applications? Maybe you decide later to move away from VMware and evacuate the VMware-based private cloud to any hyperscaler? Is it even possible to do that? If yes, how long would this technology change and migration take?

Let’s assume for this exercise that this would be a feasible option with an acceptable timeframe. How are you going to manage the different servers, applications and dependencies, and secure everything at the same time? How can you manage and provision infrastructure in an easy and efficient way? What about cost control? What happens if you don’t see Azure as strategic anymore and want to move to AWS tomorrow? Then you figure out that cloud is more expensive than you thought and experience for yourself why only 10% of all workloads are running in the public cloud today.

Multi-Cloud Reality

I think people can pretty easily handle an infrastructure which runs VMware on-premises plus at most one public cloud – a hybrid cloud architecture. If we are talking about a greenfield scenario where you could start from scratch and choose AWS including AWS Outposts, because you think it’s best for you and matches all the requirements, go for it. You know what is right for you.

But I believe, and this is also what I see with larger customers, the current reality is hybrid and the future is multi-cloud.

VMware Multi-Cloud Strategy

And a multi-cloud environment is a totally different game to manage. What is the VMware multi-cloud strategy exactly and why is it different?

Consistent VMware Multi-Cloud

VMware’s approach is always to abstract complexity. This doesn’t mean that everything is getting less complex, but you will get the right platform and tooling to deal with this complexity.

A decade ago, abstracting meant providing a hypervisor (vSphere) for any hardware (being vendor-agnostic). After that we had software-defined storage (vSAN), followed by software-defined networking (NSX). Besides these three major software pieces, we also have the vRealize suite, which is mainly known for products like vRealize Automation and vRealize Operations. The technology stack consisting of vSphere, vSAN, NSX, vRealize and some management components forms the software-defined data center and is called VMware Cloud Foundation – a stack that allows you to experience the ease of public cloud in your own data center. Again, if wanted and required, you can run this stack on top of any hyperscaler like AWS, Azure, Google Cloud, Alibaba Cloud, Oracle Cloud or IBM.

VMware Cloud Foundation

It’s a platform which can deliver services as you would expect them in the public cloud. The vRealize suite can help you to automatically provision virtual machines and containers including the right network and storage (on any vSphere-based cloud or cloud-native on AWS, GCP, Azure or Alibaba). Build your own templates or blueprints (Infrastructure as Code) to deliver services like IaaS, DBaaS, CaaS, DaaS, FaaS, PaaS, SaaS and DRaaS, which can be ordered and consumed by your users or your IT. Put a price tag behind any service or workload you deploy, and include your public cloud spending in this calculation as well (e.g. with CloudHealth).
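To make the "blueprints as Infrastructure as Code" idea more concrete, here is a minimal sketch in the YAML cloud template format used by vRealize Automation’s Cloud Assembly. All names, the image and flavor mappings, and the placement tag are hypothetical placeholders that would be defined in your own environment:

```yaml
formatVersion: 1
inputs:
  size:
    type: string
    enum: [small, medium]
    default: small
resources:
  app-vm:
    type: Cloud.Machine
    properties:
      image: ubuntu-18.04        # illustrative image mapping name
      flavor: '${input.size}'    # flavor mapping resolves to CPU/RAM
      constraints:
        - tag: 'env:on-prem'     # placement tag: vSphere vs. public cloud
```

The same template can be deployed to different cloud accounts simply by changing the placement constraints, which is exactly the multi-cloud portability argument made above.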

You want to deliver vGPU-enabled virtual machines or containers? Also possible with vSphere. Modern AI/ML-based applications need compute acceleration to handle large and complex computations. vSphere Bitfusion allows you to access GPUs in a virtualized environment over the network (Ethernet). Bitfusion works across any cloud and environment and can be accessed from any workload on any network. This topic gets very interesting when we talk about edge computing, for example.

vSphere Bitfusion

Modern applications obviously demand a modern infrastructure – an infrastructure with a hybrid or multi-cloud architecture. With that you are facing the challenge of maintaining control and visibility over a growing number of environments. In such a modern environment, how do you automate configuration and management? What about networking and security policies applied at a cluster level? How do you handle identity and access management (IAM)? Any clue about backup and restore? And what would be your approach to cost management in a multi-cloud world?

Modern Applications Challenges

To improve the IT ops and developer experience, VMware announced the Tanzu portfolio, including something they call the Tanzu Kubernetes Grid (TKG). The promise of TKG is to provide developers consistent and on-demand access to infrastructure across clouds, and it is considered to be the enterprise-ready Kubernetes runtime.

Since vSphere 7, TKG has been embedded into the vSphere control plane, delivering Kubernetes as a service (“vSphere 7 with Kubernetes”). Finally, with Kubernetes natively integrated into the hypervisor, we have a converged platform for VMs and containers. IT ops can now see and manage Kubernetes objects (e.g. pods) from the vSphere client, and developers use the Kubernetes APIs to access the SDDC infrastructure.

There are different ways to consume TKG besides “vSphere 7 with Kubernetes”. TKG is a consistent and upstream-compatible Kubernetes runtime with pre-integrated and validated components that also runs in any public cloud or edge environment.

Tanzu Kubernetes Grid
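To illustrate what "developers use the Kubernetes APIs" means in practice, a TKG workload cluster on “vSphere 7 with Kubernetes” is requested declaratively via a custom resource. The sketch below shows the general shape of such a TanzuKubernetesCluster manifest; the names, virtual machine classes, storage class and version are illustrative and depend on your environment:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster            # illustrative cluster name
  namespace: dev-namespace     # vSphere (supervisor) namespace
spec:
  topology:
    controlPlane:
      count: 3                           # HA control plane
      class: best-effort-small           # VM class (CPU/RAM sizing)
      storageClass: vsan-default-policy  # maps to a vSphere storage policy
    workers:
      count: 5
      class: best-effort-small
      storageClass: vsan-default-policy
  distribution:
    version: v1.17             # Tanzu Kubernetes release to deploy
```

A developer applies this with `kubectl` against the supervisor cluster, and the platform provisions the cluster’s VMs, networking and storage on vSphere – the same declarative model TKG uses on other clouds.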

If you have to run Kubernetes clusters natively on Azure, AWS, Google and on vSphere on-premises, how would you manage IAM, lifecycle, policies, visibility, compliance and security? How would you manage any new or existing clusters?

Tanzu Mission Control

Here, VMware’s solution is Tanzu Mission Control (TMC), a centralized management platform (operated by VMware as SaaS) for all your clusters in any cloud. TMC allows you to provision TKG workload clusters to your environment of choice and manage the lifecycle of each cluster via TMC. To date, the supported deployment targets are vSphere and AWS EC2 accounts; deployment on Azure is coming very soon.

Existing Kubernetes clusters from any vendor such as EKS, AKS, GKE or OpenShift can be attached to TMC. As long as you are maintaining CNCF conformant clusters, you can attach them to TMC so that you can manage all of them centrally.

The Tanzu portfolio is much bigger and includes more than TKG and TMC, which only address “where and how to run Kubernetes” and “how to deploy and manage Kubernetes”. Tanzu has other solutions like an application catalog, a build service, an application service (previously Pivotal Cloud Foundry) and observability (monitoring and metrics), for example.

VMware Tanzu Products

And these Tanzu products can be complemented with cloud-scale networking solutions like an application delivery controller (ADC) or software-defined WAN (SD-WAN). To deliver the “public cloud experience” to developers on any infrastructure, we need to provide agility. From an infrastructure perspective we’ll find VMware Cloud Foundation, and from an application or developer perspective we learned that Tanzu covers that.

For a distributed application architecture, you also need a software-defined ADC architecture that is fully distributed, auto scalable and provides real-time analytics and security for VMs or containers. VMware’s NSX Advanced Load Balancer (formerly known as Avi Networks) runs on AWS, GCP, Azure, OpenStack and VMware and has a rich feature set:

AVI Networks Features

Hypervisor versus Public Cloud

What I am trying to say here is that cloud-native at scale requires much more than just containers. Hypervisors are obviously not disappearing or getting replaced by containers from the public cloud any time soon; they will co-exist, and therefore it is very important to implement solutions which can be used everywhere. If you can ignore the cost factor for a moment, probably the best solution would be using the exact same technology stack and tools for all the clouds your workloads are running on.

You need to rely on a partner and solution portfolio that can address or solve anything (or almost anything) you are building in your IT landscape. As I already said, VCF and Tanzu are just a few pieces of the big puzzle. What’s important is an end-to-end approach at every layer and from every perspective.

Therefore, I believe, VMware is very relevant and very well-positioned to support your journey to the multi-cloud.

The applications you migrate or modernize need to be accessed by your users in a simple and secure way. This leads us, for example, to the next topic, where we could start a discussion about the digital workspace or end-user computing (EUC).

Talking about EUC and the future-ready workplace would involve other IT initiatives like hybrid or multi-cloud, application modernization, data center and cloud networking, workspace security, network security and so on. A discussion which would touch all strategic pillars VMware defined and presented since VMworld 2019.

VMware 5 Strategic Pillars

If your goal is also to remove silos and provide a better user and admin experience, in a secure way across any cloud, then I would say that VMware’s unique platform approach is the best option you’ll find on the market.

And since VMware can and will co-exist with the hyperscalers, and even runs on top of all of them, I would consider talking about the “big four” instead of the “big three” hyperscalers from now on.