Multi-Tenancy on VMware Cloud Foundation with vRealize Automation and Cloud Director

In my article VMware Cloud Foundation And The Cloud Management Platform Simply Explained I wrote about why customers need a VMware Cloud Foundation technology stack and what a VMware cloud management platform is.

One of the reasons and one of the essential characteristics of a cloud computing model I mentioned is resource pooling.

The National Institute of Standards and Technology (NIST) defines resource pooling as follows:

The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual
resources dynamically assigned and reassigned according to consumer demand.
There is a sense of location independence in that the customer generally has no
control or knowledge over the exact location of the provided resources but may be
able to specify location at a higher level of abstraction (e.g., country, state, or
data center).

This time I would like to focus on multi-tenancy and how you can achieve that on top of VMware Cloud Foundation (VCF) with Cloud Director (formerly known as vCloud Director) and vRealize Automation, which both could be part of a VMware cloud management platform (CMP).

Multi-Tenancy

There are many understandings of multi-tenancy out there, and different people define it differently.

If we start from the top of an IT infrastructure, we have application or software multi-tenancy: a single instance of an application serving multiple tenants, in the past often even running on the same virtual or physical server. In this case the multi-tenancy feature is built into the software, which is commonly accessed by a group of users with specific permissions. Each tenant gets a dedicated or isolated share of this application instance.

Coming from the bottom of the data center, multi-tenancy describes the isolation of resources (compute, storage) and networks to deliver applications. The best example here are (cloud) service providers.

Their goal is to create and provide virtual data centers (VDC) or a virtual private cloud (VPC) on top of the same physical data center infrastructure – for different tenants aka customers. Normally, the right VMware solution for this requirement and for service providers would be Cloud Director, but this is maybe not completely true anymore since the release of vRealize Automation 8.x.

To make it easier for all of us, I’ll call Cloud Director and vCloud Director “vCD” from now on.

VMware Cloud Director (formerly vCloud Director)

Cloud Director is a product exclusively for cloud service providers via the VMware Cloud Provider Program (VCPP). Originally released in 2010, it enables service providers (SPs) to provision SDDC (Software-Defined Data Center) services as complete virtual data centers. vCD also keeps resources from different tenants isolated from each other.

Within vCD a unit of tenancy is called Organization VDC (OrgVDC). It is defined as a set of dedicated compute (CPU, RAM), storage and network resources. A tenant can be bound to a single OrgVDC or can be composed of multiple Organization VDCs. This is typically known as Infrastructure as a Service (IaaS).

A provider virtual data center (PVDC) is a grouping of compute, storage, and network resources from a single vCenter Server instance. Multiple organizations/tenants can share provider virtual data center resources.
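To make the PVDC/OrgVDC relationship more concrete, here is a minimal Python sketch of the resource-pooling idea: a provider VDC pools capacity from one vCenter instance, and organization VDCs for different tenants are carved out of it. The class and attribute names are my own illustration, not Cloud Director's object model or API.

```python
from dataclasses import dataclass, field

@dataclass
class OrgVDC:
    """A tenant's dedicated share of pooled resources (illustrative, not vCD's API)."""
    name: str
    tenant: str
    cpu_ghz: int
    ram_gb: int

@dataclass
class ProviderVDC:
    """Pools compute from a single vCenter instance, shared by multiple tenants."""
    name: str
    cpu_ghz: int
    ram_gb: int
    org_vdcs: list = field(default_factory=list)

    def allocated(self):
        # Sum of what has already been carved out for tenants
        return (sum(v.cpu_ghz for v in self.org_vdcs),
                sum(v.ram_gb for v in self.org_vdcs))

    def carve_org_vdc(self, name, tenant, cpu_ghz, ram_gb):
        used_cpu, used_ram = self.allocated()
        if used_cpu + cpu_ghz > self.cpu_ghz or used_ram + ram_gb > self.ram_gb:
            raise ValueError(f"{self.name} cannot satisfy this allocation")
        vdc = OrgVDC(name, tenant, cpu_ghz, ram_gb)
        self.org_vdcs.append(vdc)
        return vdc

# Two tenants share one provider VDC, each with an isolated slice
pvdc = ProviderVDC("pvdc-01", cpu_ghz=100, ram_gb=512)
pvdc.carve_org_vdc("tenant-a-vdc", "tenant-a", cpu_ghz=40, ram_gb=256)
pvdc.carve_org_vdc("tenant-b-vdc", "tenant-b", cpu_ghz=30, ram_gb=128)
```

The key point of the model is that tenants never see the PVDC itself, only their own slice, while the provider tracks overall allocation.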

Cloud Director Resource Abstraction

A lot of customers and VCPP partners have now started to offer their cloud services (IaaS, PaaS, SaaS etc.) based on VMware Cloud Foundation. For private and hybrid cloud scenarios, but also in the public cloud as a managed cloud service (VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine, Alibaba Cloud VMware Solution and more).

Important: I assume that you are familiar with VCF, its core components (ESXi, vSAN, NSX, SDDC Manager) and architecture models (standard as the preferred).

Cloud Director components are currently not part of the VCF lifecycle automation, but it is a roadmap item!

Cloud Director Resource Hosting Models

vCD offers multiple hosting models:

  • In the shared hosting model, multiple tenant workloads run together on the same
    resource groups without any performance assurance.
  • In the reserved hosting model, performance of workloads is assured by resource
    reservations.
  • In the physical hosting model, hardware is dedicated to a single tenant and performance
    is assured by the allocated hardware.
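The difference between the three models can be sketched as a simple admission check. The host/VM representation below is a toy assumption of mine, not vCD's data model: shared workloads are placed best effort, reserved workloads only if their reservation fits, and physical hosting dedicates the hardware to one tenant.

```python
# Toy admission check for the three hosting models; field names are illustrative.

def can_place(host, vm):
    if host["dedicated_to"] not in (None, vm["tenant"]):
        return False  # physical hosting: hardware is dedicated to another tenant
    reserved = sum(v["cpu_reserved"] for v in host["vms"])
    if vm["model"] == "reserved":
        # reserved hosting: the new reservation must fit into unreserved capacity
        return reserved + vm["cpu_reserved"] <= host["cpu_capacity"]
    # shared hosting: best effort, admitted while any unreserved capacity remains
    return reserved < host["cpu_capacity"]

host = {"cpu_capacity": 32, "dedicated_to": None, "vms": []}
assert can_place(host, {"tenant": "a", "model": "reserved", "cpu_reserved": 24})
assert can_place(host, {"tenant": "b", "model": "shared", "cpu_reserved": 0})
```

In other words, only the reserved model gives a hard guarantee; shared tenants compete for whatever capacity is left.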

Tenant Using Shared Hosting on VCF Workload Domain

In this use case a tenant is using shared hosting backed by a VMware Cloud Foundation workload domain, which is mapped to a provider VDC.

vCD VCF Shared

Tenant Using Shared Hosting and Reserved Hosting on Multiple VCF Workload Domains

This use case describes the example of a customer using shared and reserved hosting backed by multiple VCF workload domains. Here each cluster has a single resource pool mapped to a single PVDC.

vCD VCF Shared Reserved

Tenant Using Physical Hosting and Central Point of Management (CPOM)

The last example shows a single customer using physical hosting. You will notice that there is also a vSphere with Kubernetes workload domain. VMware Cloud Foundation automates the installation of vSphere with Kubernetes (Tanzu), which makes it incredibly easy to deploy and manage.

You can see that there is an “SDDC” box on top of the Kubernetes Cluster vCenter, which is attached to
the “SDDC Proxy” entity. vCD can act as an HTTP/S proxy server between tenants and the
underlying vSphere environment in VMware Cloud Foundation. An SDDC proxy is an
access point to a component from an SDDC, for example, a vCenter Server instance, an ESXi host, or
an NSX Manager instance.

The vCD becomes the central point of management (CPOM) in this case and the customer gets a complete dedicated SDDC with vCenter access.

vCD VCF Physical CPOM

Note: Since vCD 9.7 it is possible to present, for example, a vCenter Server instance securely to a tenant’s organization using the Cloud Director user interface. This is how you could build your own VMC-on-AWS-like cloud offering!

Cloud Director CPOM

All 3 Tenants Together

Finally, we put it all together. In the first use case we can see that different customers are sharing resources from a single PVDC. We can also see that resources from a single vCenter can be split across different provider virtual data centers, and that we can mix and match multi-tenant workload domains with workload domains offering a dedicated private cloud.

vCD VCF All Together

Cloud Director Service and VMware Cloud on AWS

If you don’t want to extend or operate your own data center or cloud infrastructure anymore and provide a managed service to multiple customers, there are still options available for you, backed by VMware Cloud Foundation as well.

Since October 2020, the Cloud Director service (CDS) has been globally available, delivering multi-tenancy to VMware Cloud on AWS for managed service providers (MSPs).

VMware sees not only new, but also existing VCPP partners moving towards a mixed-asset portfolio, where their cloud management platform consists of a VCPP and an MSP (VMware SaaS offerings) contract. This allows them, for example, to run vCD on-premises for their current customers, while the onboarding of new tenants happens in the public cloud with CDS and VMC on AWS.

vCD CDS Mixed Mode

Enterprise Multi-Tenancy with vRealize Automation

With the release of vRealize Automation 8.1 (vRA), VMware added support for dedicated infrastructure multi-tenancy, created and managed through vRealize Suite Lifecycle Manager. This means vRealize Automation enables customers or IT providers to set up multiple tenants or organizations within each deployment.

Providers can set up multiple tenant organizations and allocate infrastructure. Each tenant manages its own projects (team structures), resources and deployments.

Enabling tenancy creates a new provider (default) organization. The provider admin creates new tenants, adds tenant admins, sets up directory synchronization, and adds users. Tenant admins can also control directory synchronization for their tenant and grant users access to services within it. Additionally, tenant admins configure policies, governance, cloud zones, profiles, and access to content and provisioned resources within their tenant. A single shared SDDC or separate SDDCs can be used among tenants, depending on available resources.

vRealize Automation 8.1 Multi-Tenancy

With vRealize Automation 8.2, provider administrators got the ability to share infrastructure by creating and assigning Virtual Private Zones (VPZ) to tenant organizations.

Think of a VPZ as a kind of container of infrastructure capacity and services which can be defined and allocated to a tenant. You can add unique or shared cloud accounts, with associated compute, flavors, images, storage, networking, and tags, to each VPZ. Each component offers the same configuration options you would see for a standalone configuration.
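As a rough illustration, a VPZ can be thought of as a bundle of capacity and content handed to one tenant. The structure below is a simplified assumption of mine, not the actual vRA object model, and all names (cloud account, flavors, images) are made up.

```python
# Simplified sketch of a Virtual Private Zone as a capacity-and-content container
# assigned to one tenant; structure and names are illustrative assumptions.

vpz_tenant_a = {
    "tenant": "tenant-a",
    "cloud_account": "vcf-wld-01",  # unique or shared cloud account (name made up)
    "compute": {"cpu_limit": 64, "memory_gb": 512},
    "flavors": {"small": {"cpu": 2, "mem_gb": 8}, "large": {"cpu": 8, "mem_gb": 32}},
    "images": ["ubuntu-20.04", "windows-2019"],
    "tags": ["zone:tenant-a", "tier:gold"],
}

def request_deployment(vpz, flavor_name, count):
    """Admit a deployment only if it fits the VPZ's CPU capacity (simplified)."""
    flavor = vpz["flavors"][flavor_name]
    if flavor["cpu"] * count > vpz["compute"]["cpu_limit"]:
        raise ValueError("request exceeds the capacity allocated to this VPZ")
    return {"tenant": vpz["tenant"], "flavor": flavor_name, "count": count}

deployment = request_deployment(vpz_tenant_a, "small", 4)
```

The point is that each tenant consumes only what was allocated to its VPZ, even when the underlying cloud account is shared.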

vRealize Automation 8.2 Multi-Tenancy

vRealize Automation and VMware Cloud Foundation

With the new multi-tenancy and VPZ capabilities, a new consumption model can be built on top of VCF. You (the provider) would map the cloud zones (compute resources on vSphere, or on AWS for example) to a VCF workload domain.

The provider sets these cloud zones up for their customers and provides dedicated or shared infrastructure backed by Cloud Foundation workload domains.

This combination would allow you to build an enterprise VPC construct (like AWS for example), a logically isolated section of your provider cloud.

vRealize Automation and VMware Cloud Foundation

SDDC Manager Integration and VMware Cloud Foundation (VCF) Cloud Account

Since the vRA 8.2 release, customers are also able to configure an SDDC Manager integration and onboard workload domains as VMware Cloud Foundation cloud accounts into the VMware Cloud Assembly service.

VMware Cloud Director or vRealize Automation?

Are you wondering if vRealize Automation could replace existing vCD installations? Or if both cloud management platforms can do the same?

I can assure you that you can provide a self-service provisioning experience with both solutions and that you can deliver any technology or cloud service “as a service”. Both can be backed by Cloud Foundation, have some form of integration (vRA) and can be built following a VMware Validated Design (VVD).

vCD is known as a service provider solution, whereas vRA is more common in enterprise environments. VMware has VCPP partners that use Cloud Director for their external customers and vRealize Automation for their internal IT and customers.

If you are looking for a “cloud broker” and Infrastructure as Code (IaC), because you also want to provision workloads on AWS, Azure or GCP, then vRealize Automation is the better solution, since vCD doesn’t offer this deep integration and these deployment options yet.

Depending on your multi-tenant needs, vRealize Automation could be enough and could replace vCD, for example if you only chose vCD in the past because of its OrgVDC and resource pooling features.

It is also very important to understand what your current customer onboarding process and operational model look like:

  • How do you want to create a new tenant? 
  • How do you want to onboard/migrate existing customer workloads to your provider infrastructure?
  • Do you need versioning of deployments or templates?
  • Do customers require access to the virtual infrastructure (e.g. vCenter or OrgVDC) or do you just provide SaaS or PaaS?
  • Do customers need a VPN or hybrid cloud extension into your provider cloud?
  • How would you onboard non-vSphere customers (Hyper-V, KVM) to your vSphere-based cloud?
  • Does your customer rely on other clouds like AWS or Azure?
  • How do you do billing for your vSphere-based cloud or multi-cloud environment?
  • What is your Kubernetes/container strategy?
  • And 100 other things 😉

There are so many factors and criteria to talk about that would influence such a decision. There is no right or wrong answer to the question whether it should be VMware Cloud Director or vRealize Automation. Use what makes sense.

Which could also be a combination of both.

VMware Cloud Foundation And The Cloud Management Platform Simply Explained

I think that it is pretty clear what VMware Cloud Foundation (VCF) is and what it does. And it is also clear to a lot of people how and where you could use VCF. But very few organizations and customers know why they should or could use Cloud Foundation and what its purpose is. This article will give you a better understanding of the “hidden” value that VMware Cloud Foundation has to offer.

My last contributions focused on VMware’s multi-cloud strategy and how they provide consistency in any layer of their vision:

VMware Strategy

The VMware messaging is clear. By deploying consistent infrastructure across clouds, customers gain consistent operations and intrinsic security in hybrid or multi-cloud operating models. The net result is that the intricacies of infrastructure fade, allowing IT to focus more on deploying applications and providing secure access to those applications and data from any device.

The question is now, what are the building blocks and how can you fulfill this strategy? And why is VMware Cloud Foundation really so important?

Cloud Computing

To answer these questions we have to start with the basics and look at the NIST definition of cloud computing first:

Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction. This cloud model promotes availability and is composed of five
essential characteristics, three service models, and four deployment models.

Data Center Cloud Computing

Let’s start with the three service models and the capabilities each is aiming to provide:

  • Software as a Service (SaaS). Centrally hosted software, which is licensed on a subscription basis and also known as web-based or hosted software. The consumer of this service does not manage or control the underlying cloud infrastructure (servers, network, storage, operating system).
  • Platform as a Service (PaaS). This application platform allows the consumer to build, run and manage applications without the complex building of the application infrastructure to launch the applications. Like with SaaS, the consumer doesn’t manage or control the underlying cloud infrastructure, but has the control over the deployed applications.
  • Infrastructure as a Service (IaaS). IaaS provides the customer fundamental resources like compute, storage and network where they are able to deploy and run software in virtual machines or containers. The consumer doesn’t manage the underlying infrastructure, but manages the virtual machines including the operating systems and applications.

Deployment Models

There are four cloud computing deployment models defined today, and mostly we talk only about three of them (I excluded the community cloud). Let’s consult the VMware glossary for each definition.

  • Private Cloud. Private cloud is an on-demand cloud deployment model where cloud computing services and infrastructure are hosted privately, often within a company’s own data center using proprietary resources and are not shared with other organizations. The company usually oversees the management, maintenance, and operation of the private cloud. A private cloud offers an enterprise more control and better security than a public cloud, but managing it requires a higher level of IT expertise.
  • Public Cloud. Public cloud is an IT model where on-demand computing services and infrastructure are managed by a third-party provider and shared with multiple organizations using the public Internet. Public cloud service providers may offer cloud-based services such as infrastructure as a service, platform as a service, or software as a service to users for either a monthly or pay-per-use fee, eliminating the need for users to host these services on site in their own data center.
  • Hybrid Cloud. Hybrid cloud describes the use of both private cloud and public cloud platforms, which can work together on-premises and off-site to provide a flexible mix of cloud computing services. Integrating both platforms can be challenging, but ideally, an effective hybrid cloud extends consistent infrastructure and consistent operations to utilize a single operating model that can manage multiple application types deployed in multiple environments.

Hybrid Cloud Model

Multi-Cloud is a term for the use of more than one public cloud service provider for virtual data storage or computing power resources, with or without any existing private cloud and on-premises infrastructure. A multi-cloud strategy not only provides more flexibility for which cloud services an enterprise chooses to use, it also reduces dependence on just one cloud vendor. Multi-cloud service providers may host three main types of services: IaaS, PaaS and SaaS.

With IaaS, the cloud provider hosts servers, storage and networking hardware with accompanying services, including backup, security and load balancing. PaaS adds operating systems and middleware to their IaaS offering, and SaaS includes applications so that nothing is hosted on a customer’s site. Cloud providers may also offer these services independently.

Note: It is very important to understand which cloud computing deployment is the right one for your organization and which services your IT needs to offer to your internal or external customers.

Essential Characteristics

If you look at the five essential cloud computing characteristics from the NIST (National Institute of Standards and Technology), you’ll find attributes which you would also consider as natural requirements for any public cloud (e.g. Azure, Google Cloud Platform, Amazon Web Services):

  • On-demand self-service. A consumer can unilaterally provision computing capabilities,
    such as server time and network storage, as needed automatically without
    requiring human interaction with each service’s provider.
  • Broad Network Access. Capabilities are available over the network and accessed through
    standard mechanisms that promote use by heterogeneous thin or thick client
    platforms (e.g. PCs, laptops, smartphones, tablets).
  • Resource Pooling. The provider’s computing resources are pooled to serve multiple
    consumers using a multi-tenant model, with different physical and virtual
    resources dynamically assigned and reassigned according to consumer demand.
    There is a sense of location independence in that the customer generally has no
    control or knowledge over the exact location of the provided resources but may be
    able to specify location at a higher level of abstraction (e.g., country, state, or
    data center).
  • Rapid Elasticity. Capabilities can be rapidly and elastically provisioned, in some cases
    automatically, to quickly scale out and rapidly released to quickly scale in. To the
    consumer, the capabilities available for provisioning often appear to be unlimited
    and can be purchased in any quantity at any time.
  • Measured Service. Cloud systems automatically control and optimize resource use by
    leveraging a metering capability at some level of abstraction appropriate to the
    type of service (e.g., storage, processing, bandwidth, and active user accounts).
    Resource usage can be monitored, controlled, and reported providing
    transparency for both the provider and consumer of the utilized service.

And besides the five essentials, you look for security, flexibility and reliability. With all these properties in mind, you would follow the same approach today if you were building a new data center or had to modernize your current cloud infrastructure: a digital foundation, or platform, which can adapt to any changes and serve as expected.

5 Characteristics of Cloud Computing

This is why VMware has built VMware Cloud Foundation! This is why we need VCF, which is the core of VMware’s multi-cloud strategy.

To be able to meet the above characteristics/criteria, you need a set of software-defined components for compute, storage, networking, security and cloud management in private and public environments – also called the software-defined data center (SDDC). VCF makes operating the data center fundamentally simpler by bringing the ease and automation of the public cloud in-house, deploying a standardized and validated architecture with built-in lifecycle management and automation capabilities for the entire cloud stack.

As automation is integrated and part of the stack from the beginning, and not something you would bolt on later, you are able to adapt to changes and already have one of the elements in place to achieve the needed security requirements. Automation is key to providing security through the whole stack.

In short, Cloud Foundation gives you the possibility and the right tools to build your private cloud based on public cloud characteristics and also an easy path towards a hybrid cloud architecture. Consider VCF as VMware’s cloud operating system, which enables a hybrid cloud based on a common and compatible platform that stretches from on-premises to any public cloud. Or from public cloud to another public cloud.

Note: VMware Cloud Foundation can also be consumed as a service (aka SDDC as a service) through their partners like Google, Amazon Web Services, Microsoft and many more.

Why Hybrid or Multi-Cloud?

A hybrid cloud with a consistent infrastructure approach enables organizations to use the same tools, policies and teams to manage the cloud infrastructure, which hosts the virtual machines and containers.

Companies want to have the flexibility to deploy and manage new and old applications in the right cloud. They are looking for an architecture, which allows them to migrate on-premises workloads to the public cloud and modernize these applications (partially or completely) with the cloud provider’s native services.

Customers have changed their perception from cloud-first to a cloud-appropriate strategy where they choose the right cloud for each specific application. And to avoid a vendor lock-in, you suddenly see two or three additional public clouds joining the cloud architecture, which by definition now is a multi-cloud environment.

Now you have a mix of a VMware-based cloud with AWS, Azure and GCP for example. It is possible to build new applications in one of the VMware “SDDC as a service” (e.g. VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine) offerings, but customers also want to deploy and use cloud-native service offerings.

Multi-Cloud Reality

How do you deal with this challenge, with the different architectures, operational inconsistencies, varying skill sets of your people, different management and security controls and incompatible technology formats?

Well, the first answer could be that your IT needs to be able to treat all clouds and applications consistently and run the VCF stack ideally in any (private or public) cloud.

But this is not where I want to head. There is something else which we need to transform in this multi-cloud environment.

So far, we only have consistent infrastructure with consistent operations because of VMware Cloud Foundation.

  • How does your deployment and automation model for your virtual machines and containers look now?
  • How would you automate the provisioning of these workloads and the needed application components?

With your current tool set you have to talk four “languages” via the graphical management console or API (application programming interface).

In an international organization, where people come from different countries and speak different languages, we usually agree on English as the corporate language. VMware follows the same approach in this case and puts an abstraction layer above the clouds, exposing its APIs.

VMware Cloud-Agnostic CMP

This helps to manage the different objects and workloads you have deployed in any cloud. You don’t have to work in each individual cloud account anymore and can define a consistent and centralized team and permission structure as well.

On top of this cloud-agnostic API you can provide all means for a self-service catalog, use programmable provisioning and provide the operations (e.g. cost or log management) and visibility (powered by artificial intelligence where needed) tool set (e.g. application and networks) to build, run, manage, connect and protect your applications.

Your applications, which are part of the different main services (IaaS, PaaS, SaaS) and most probably many other services (like DaaS, DBaaS, FaaS, DRaaS, CaaS, Backup as a Service, MongoDB as a Service etc.) you are going to offer to your internal consumers or customers, are deployed via this cloud abstraction layer.

VMware CMP and Services

This abstraction layer forms the VMware cloud management platform (CMP), which consists of the vRealize Suite and VMware Cloud Services. This CMP also provides you with the necessary interfaces and integration options to other existing backend services or tools like a ticketing system, change management database (CMDB), IP address management (IPAM) and so on.

In short, this means that the VMware cloud operation model treats each private or public cloud as a landing zone.

VMware Cloud Foundation Is More About Business Value

Yes, Cloud Foundation is a very technical topic and most people see it only like that. But the hidden and real value is the one nobody sees or talks about: the business value and the fact that you can operate your private cloud with the same ease as a public cloud provider, and that you can follow the same principles for any cloud delivery model.

On-demand self-service is offered through the lifecycle management capabilities included in VCF, in combination with the cloud-agnostic API of VMware’s cloud management platform.

Broad network access starts with VMware’s digital workspace offerings and ends in the data center, at the edge or any cloud with their cloud-scale networking portfolio, which includes software-defined networking (SDN), software-defined WAN (SD-WAN) and software-defined application delivery controller (SD-ADC).

Multi-tenancy and resource pooling can only be achieved with automation and security, two items which are naturally integrated into Cloud Foundation. The SDDC management component of VCF also gives you the technical capability to create your regions and availability zones – something public cloud providers let you choose as well.

Rapid elasticity is provided with the hardware-agnostic (for the physical servers in your data centers) approach VMware offers to their customers. Besides that, all cloud computing components are software-defined, which can run on-premises, at the edge or in any public cloud, which allows you to quickly scale out and scale in according to your needs.

Service usage and resource usage (compute, storage, network) are automatically controlled and optimized by leveraging some level of abstraction of all different clouds. Resource usage can be monitored and reported in a transparent way for the service provider and the consumer.

VMware Multi-Cloud Services

In addition to that, VMware provides their customers the choice to consume the VMware operation tools on-premises or as a SaaS offering, which is then hosted in the cloud. With perpetual and subscription licenses you can define your own pay-per-use or pay-as-you-go pricing options and decide if you want to move from a CAPEX to an OPEX cost model. The same will at some point be true for VCF and VCF in the public cloud as well: a single universal license which allows you to run the different components and tools everywhere.

Customers need the flexibility to build the applications in any environment, matching the needs of the application and the best infrastructure. They need to manage and operate different environments as one, as efficiently as possible, with common models of security and governance.

Customers need to shift workloads seamlessly between cloud providers (also known as cross-cloud workload mobility) without the cost, complexity or risk of rewriting applications, rebuilding processes or retraining IT staff.

And that’s my simple explanation of VMware Cloud Foundation and why it is so important and the core of the VMware (multi-cloud) strategy.

Let me know what you think! 🙂

A big thank you to my colleagues Christian Dudler, Gavin Egli and Danny Stettler who reviewed my content and illustrations.

Update January 2022: If you would like to get a basic technical understanding of VMware Cloud Foundation, have a look at VMware Cloud Foundation – A Technical Overview.

Multi-Cloud Load Balancing and Autoscaling with NSX Advanced Load Balancer (formerly Avi Networks)

Do you want to build your private cloud like a hyperscaler is doing it? You know that VMware Cloud Foundation is becoming the new vSphere, but still wonder how you can implement software-defined load balancing (LB) or application services and features like autoscaling or predictive scaling? Then this article about multi-cloud load balancing and autoscaling with NSX Advanced Load Balancer aka Avi Networks is for you!

My Experience with a Legacy ADC

A few years ago, I was working on the customer side for an insurance company in Switzerland as a Citrix System Engineer. My daily tasks included the maintenance and operation of the Citrix environment, which included physical and virtual Citrix NetScaler Application Delivery Controller (ADC) appliances. The networking team owned a few hardware-based appliances (NetScaler SDX) with integrated virtualization capability (XenServer as hypervisor) to host multiple virtual NetScaler (VPX) instances.

The networking team had their dedicated NetScaler VPX instances (for LDAP and HTTP load balancing mostly) and deployed my appliances after I filed a change request. Today, you would call this multi-tenancy. For a Citrix architecture it was best practice to have one high availability (HA) pair for the internal and one HA pair for the external (DMZ) network access. An HA pair was running in an active/passive mode, and I had to maintain the same setup for the test environment as well.

Since my virtual VPX appliances were hosted on the physical SDX appliance, I always relied on the network engineers when I needed more resources (CPU, RAM, SSL, throughput) allocated to my virtual instances. Before I could upgrade to a specific firmware version, I also had to wait until they upgraded the physical NetScaler appliances and approved my change request. This meant we had to plan changes and maintenance windows together and cross our fingers that their upgrade went well, so that we could upgrade all our appliances afterwards.

NetScaler SDX

It was also possible to download a VPX appliance, which could run on top of VMware vSphere. To be more independent, I decided to install four new VPX appliances (for the production and test environment) on vSphere and migrated the configuration from the appliances running on the physical SDX appliance.

Another experience I had with load balancers was when I started to work for Citrix as a consultant in Central Europe and had to perform a migration of physical NetScaler MPX appliances, which had no integrated virtualization capability. I believe I had two sites, each with two of these powerful MPX appliances, for tens of thousands of users. Besides the regular load balancing configuration for some of the Citrix components, I also had to configure Global Server Load Balancing (GSLB) in active/passive mode for the two sites.

NetScaler GSLB Active Passive

There were so many more features available (e.g., Web Application Firewall, Content Switching, Caching, Intrusion Detection), but I never used anything other than the NetScaler Universal Gateway for remote access to the virtual desktop infrastructure (VDI), load balancing, HTTP to HTTPS redirections and GSLB. In all scenarios I had an HA pair where one instance was idling and doing nothing, and the active unit was on average not utilized more than 15-20%. It was common to buy oversized instances and licenses, because you wanted to be on the safe side and have enough capacity to terminate all your SSL sessions and so on.

Load balancing was about distributing network traffic across multiple servers by spreading the requests and workload evenly, and adding some intelligence (health monitoring) in case an application server or a service failed or became unavailable for any reason. If one more application server was needed, I ordered a new Windows Server, installed and configured the Citrix components and added the necessary load balancing configuration on the NetScaler. These were all manual tasks.
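The mechanics described above, spreading requests evenly and skipping servers that fail a health check, can be sketched in a few lines. The server names and health states below are made up for illustration:

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch of an ADC's core job: rotate through servers
    and hand out only the ones that currently pass a health check."""

    def __init__(self, servers, health_check):
        self.servers = list(servers)
        self.health_check = health_check  # callable: server name -> bool
        self._cycle = itertools.cycle(self.servers)

    def next_server(self):
        # Try each server at most once per request before giving up.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if self.health_check(server):
                return server
        raise RuntimeError("no healthy servers available")

# Hypothetical backend state; a real ADC probes TCP/HTTP endpoints instead.
health = {"web-01": True, "web-02": False, "web-03": True}
lb = RoundRobinBalancer(health, lambda name: health[name])
picks = [lb.next_server() for _ in range(4)]  # web-02 is never returned
```

A real appliance adds persistence, SSL termination and weighted algorithms on top, but this health-aware distribution loop is the essence of it.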

This was my personal experience from 2017. Since then, applications have become more complex and distributed. Analysts and the market are focusing on containerized and portable apps running in multiple clouds. The prediction is also that the future is multi-cloud.

Multiple Clouds vs. Multi-Cloud

There are different definitions and understandings out there of what multi-cloud means. In my understanding, using a private cloud, the AWS cloud, Microsoft Office 365 and Azure is a typical setup with multiple clouds. There are simple scenarios where you migrate workloads from a private to a public cloud, or where applications have services hosted on both the private and the public cloud. The latter would be an example of a hybrid cloud architecture.

There are various reasons why services and resources (workloads) are distributed across multiple clouds (on-prem, Azure, AWS, GCP etc.):

  • Avoid dependence on one cloud provider only
  • Consume different specific services that are not provided elsewhere
  • Optimize costs for different workloads and services
  • React to price changes by the providers

That is why we are also seeing the trend to break up big legacy applications (monoliths) into smaller pieces (segments), which is a best practice and design principle today. The goal is to move to a loosely coupled and more service-oriented architecture. This provides greater agility, more flexibility and easier scalability with fewer interdependencies.

A segmented application is much easier to run in different clouds (think about portability). Running one application over multiple clouds is in my understanding the right definition of multi-cloud.

Multiple Clouds versus Multi-Cloud

Let’s assume that most probably all four reasons above apply to larger enterprises. If we take another angle, we can define some business and technical requirements for multi-cloud:

  • Application or services need to be cloud vendor-agnostic
  • Provide or abstract control and management interfaces of multiple clouds
  • Support application portability (relocation between clouds)
  • Combine IaaS and other services from different clouds
  • Possibility to deploy components of applications in multiple clouds
  • (Cloud) Broker service needed
  • Policy and governance over multiple clouds
  • Network connectivity for migration scenarios with partially modernized applications
  • Automated procedures for deployments
  • Application monitoring over different clouds
  • Cost management
  • Lifecycle management of deployed applications in multiple clouds
  • Self-adaption and auto scaling features
  • Large team with various expertise needed

How can you deliver and manage different applications services like load balancing, web application firewalling, analytics, automation and security over multiple clouds?

Another important question would be, how you want to manage the workload deployment on the various clouds. But cloud management or a cloud management platform is something for another article. 🙂

The requirements for the developers, IT operations and the business are very complex and it’s a long list (see above).

It is important to understand, based on the requirements for multi-cloud, that it is mandatory to implement a modern solution for your modern application architectures. Enterprises have become more application-centric, and everyone is talking about continuous integration, continuous delivery and DevOps practices to automate operations and deployment tasks. A modern solution implies a software-defined approach. Otherwise you won’t be able to stay agile, adapt to changes and meet future requirements.

My past experience with Citrix’ NetScaler (Citrix ADC) is a typical example that “virtualized” and “software-defined” are not the same thing. And this is very important if we want to have a future-ready solution. If we look at VMware’s software-defined data center (SDDC), it includes virtual compute (hypervisor), but software-defined storage and networking as well. Part of the software-defined networking portfolio is “NSX Advanced Load Balancer“, the software-defined application services platform, which was also known as “Avi Networks” before VMware bought them in June 2019.

Unlike a virtualized load-balancing appliance, a software-defined application services platform separates the data and control planes to deliver application services beyond load balancing: real-time application analytics, security and monitoring, predictive autoscaling, and end-to-end automation for Transport (Layer 4) to Application (Layer 7) layer services. The platform supports multi-cloud environments and provides software-defined application services with infrastructure-agnostic deployments on bare metal servers, virtual machines (VMs), and containers, in on-premises data centers and private/public clouds.

Auto scaling became famous with AWS as it monitors your applications and automatically adjusts capacity to maintain availability and performance at the lowest possible costs. It automatically adds or removes application servers (e.g. EC2 instances), load balancers, applies the right network configuration and so on.
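The decision logic behind such scaling boils down to simple arithmetic: size the fleet so that average utilization returns to a target value. A minimal sketch, where the target and the numbers are illustrative and not AWS defaults:

```python
import math

def desired_capacity(current_capacity, avg_utilization, target=0.5):
    """Target-tracking sketch: given the current fleet size and its average
    utilization (0.0-1.0), return how many instances would bring the
    average back to the target utilization."""
    if current_capacity == 0:
        return 1  # always keep at least one instance running
    # current load = capacity * utilization; divide by target to size the fleet
    needed = math.ceil(current_capacity * avg_utilization / target)
    return max(1, needed)

# Ten servers at 80% with a 50% target grow to 16; at 20% they shrink to 4.
```

A real auto scaler adds cooldown periods and min/max bounds so the fleet does not flap between sizes on every measurement.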

Can you achieve the same for your on-premises infrastructure with VMware? Yes.

Is there even a solution which can serve both worlds – on-prem and cloud? Yes.

And what about predictive scaling with real-time insights? Yes.

NSX Advanced Load Balancer (NSX ALB)

Why did VMware buy Avi? Because it follows the same architectural principles as NSX: a distributed platform with separate control and data planes, built on software-defined principles for any cloud.

Avi High Level Architecture

Traditional ADCs or load balancers are mostly configured in active/standby pairs, whether physical or virtual. Typically you would see around 15% utilization on the active node, while the secondary standby node is just idling and doing nothing. Each pair is its own island of static capacity which couples the management, control and data planes together.

You have to decide where to place the virtual IP (VIP) and how much you want to overprovision the physical or virtual appliances, because there is no capacity pooling available. This leads to operational complexity, especially when you have hundreds of such HA pairs running in different clouds. Therefore, legacy and virtualized ADCs are not the ideal choice for a multi-cloud architecture. Let’s check NSX ALB’s architecture:

Control Plane – This is the brain (single point of management) of the whole platform, typically deployed as a three-node cluster, which can be spun up in your on-prem environment or in the cloud (also available as a managed SaaS offering). Within this cluster, all configuration is done. This is also where the policies reside and the decisions are made. It is the controller’s duty to place virtual services on Service Engines to load balance new applications.

The control plane comprises the three pillars that deliver the key capabilities of the Avi platform:

  • Multi-Cloud – Consistent experience for any cloud, no lock-in
  • Intelligence – The machine learning based analytics engine enables application performance monitoring, troubleshooting, and operational insights (gathered by the SEs)
  • Automation – Elastic and predictive auto scaling & self-service without over-provisioning through a complete set of REST APIs

Data Plane – The Service Engines (SEs) handle all data plane operations by receiving and executing instructions from the controller. The SEs perform load balancing and all client- and server-facing network interactions, and collect real-time application telemetry from application traffic flows.

As already mentioned, NSX ALB can be deployed in multiple cloud environments like VMware vCenter, Amazon Web Services, Microsoft Azure, Google Cloud Platform, Oracle Cloud, IBM Cloud, VMC on AWS, Nutanix, OpenStack or bare-metal.

Use Cases

Most customers deploy Avi because of:

  • Load Balancer refresh
  • Multi-Cloud initiatives
  • Security including WAF, DDoS attack mitigation, achieving compliance (GDPR, PCI, HIPAA)
  • Container ingress (integrates via REST APIs with K8s ecosystems like Tanzu Kubernetes Grid, AKS, EKS, GKE, OpenShift)

Advanced Kubernetes Ingress Controller Avi Networks

  • Virtual Desktop Infrastructure (Citrix, VMware Horizon)

Consistent Application Services Platform (Features)

Avi/NSX ALB is an enterprise-grade solution. Everything you would expect from a traditional ADC (e.g. F5), such as layer 4 to layer 7 services, SSL, DDoS protection and WAF, is built in without the need for a special license or edition. There is no NSX license requirement, even though the product name would suggest it. It can be deployed as a standalone load balancer or as an integrated solution with other VMware products (e.g., VCF, vRA/vRO, Horizon, Tanzu etc.).

Avi Networks Features

Below is a list with the core features:

  • Enterprise-class load balancing – SSL termination, default gateway, GSLB, DNS, and other L4-L7 services
  • Multi-cloud load balancing – Intelligent traffic routing across multiple sites and across private or public clouds
  • Application performance monitoring – Monitor performance and record and replay network events like a Network DVR
  • Predictive auto scaling – Application and load balancer scaling based on real-time traffic patterns
  • Self-service – For app developers with REST APIs to build services into applications
  • Cloud connectors – VMware Cloud on AWS, SDN/NFV controllers, OpenStack, AWS, GCP, Azure, Linux Server Cloud, OpenShift/Kubernetes
  • Distributed application security fabric – Granular app insights from distributed service proxies to secure web apps in real time
  • SSO / Client Authentication – SAML 2.0 authentication for back-end HTTP applications
  • Automation and programmability – REST API based solution for accelerated application delivery; extending automation from networking to developers
  • Application Analytics – Real-time telemetry from a distributed load balancing fabric that delivers millions of data points in real time

Load Balancing for VMware Horizon

NSX ALB can be configured for load balancing in VMware Horizon deployments, where you place Service Engines in front of Unified Access Gateways (UAG) or Connection Servers (CS) as required.

Avi Horizon High Level Architecture

For a multi-site architecture you can also configure GSLB if needed. With GSLB, access to resources is controlled with DNS queries and health checking.

Note: If you are using the Horizon Universal Broker, the cloud-based brokering service, there is no need for GSLB, because the Universal Broker can orchestrate connections from a higher level based on different policies.
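The active/passive site selection behind GSLB can be illustrated with a small sketch: the DNS answer is simply the virtual IP of the highest-priority site that still passes its health check. Site names and the TEST-NET addresses below are made up:

```python
def resolve_gslb(sites, is_healthy):
    """Return the VIP a GSLB DNS response would hand out:
    the highest-priority (lowest number) site that is healthy."""
    for site in sorted(sites, key=lambda s: s["priority"]):
        if is_healthy(site["name"]):
            return site["vip"]
    return None  # all sites down; a real GSLB might serve a fallback record

sites = [
    {"name": "site-primary", "priority": 1, "vip": "203.0.113.10"},
    {"name": "site-dr",      "priority": 2, "vip": "198.51.100.10"},
]

# Primary healthy: clients get 203.0.113.10; primary down: the failover VIP.
```

Production GSLB adds DNS TTL tuning and proximity- or load-based algorithms, but the failover decision itself is this simple priority walk.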

Automation

With NSX Advanced Load Balancer there are two parts when we talk about automation. One part is infrastructure automation, where the controller talks to the ecosystem (e.g. vCenter, AWS or Azure) to orchestrate the Service Engines. So, when you configure a new VIP, the controller talks to vCenter to spin up a VM, put it in the right port group, connect the front end and the back end, push the policy to the Service Engine, and start receiving traffic.

The second part focuses more on operational automation, which is done through the REST API. On top of that, you can also run Ansible playbooks and Terraform templates, use the Go and Python SDKs, and leverage integrations with Splunk or other tools like vRealize Automation. These are the built-in automation capabilities of Avi.
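As a sketch of what such API-driven operation looks like, the snippet below builds a request body for creating a virtual service. The field names are simplified for illustration; the actual Avi object model is richer, so consult the controller's API documentation for the real schema:

```python
import json

def virtualservice_payload(name, vip, port, pool_ref):
    """Assemble a simplified request body for a new virtual service.
    Field names are illustrative, not the exact Avi schema."""
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip, "type": "V4"}}],
        "services": [{"port": port}],
        "pool_ref": pool_ref,
    }

payload = virtualservice_payload("vs-web", "10.0.0.50", 443, "/api/pool?name=web-pool")
body = json.dumps(payload)
# With an authenticated session you would POST this body to the controller,
# or let an Ansible module / Terraform provider make the same call for you.
```

The point is that everything the UI does is expressible as such API objects, which is what makes the playbook and template integrations possible.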

Avi Networks Automation

VMworld 2020 Sessions

This year, VMworld is going to be free and virtual. Take this chance, register yourself and learn more about Avi aka NSX ALB:

  1. Making Your Private Cloud Network Run Like a Public Cloud – Part 2 [VCNC2918]
  2. Modern Apps and Containers: Networking and Security [VCNC2920]
  3. Prepare for the New Normal of Work from Anywhere [VCNC2919]
VMware Multi-Cloud and Hyperscale Computing


In my previous article Cross-Cloud Mobility with VMware HCX I already touched very briefly on VMware’s hybrid and multi-cloud vision and strategy. I mentioned that VMware is coming from the on-premises world if you compare them with AWS, Azure or Google, but has the same “consistent infrastructure with consistent operations” messaging. And the difference is that VMware is not only hardware-agnostic, but even cloud-agnostic. To abstract the technology format and infrastructure in the public cloud, their idea is to run VMware Cloud Foundation (VCF) everywhere (e.g. Azure VMware Solution): on-premises on top of any hardware, and in the cloud on the global infrastructure of any hyperscaler like AWS, Azure, Google, Oracle, IBM or Alibaba. Or you can run your workloads in a VMware cloud provider’s cloud based on VCF. That’s the VMware multi-cloud.

The goal of this article is not to compare features from different vendors and products, but to give you a better idea of why multi-cloud is becoming a strategic priority for most enterprises and why VMware could be the right partner for your journey to the cloud.

To get started, let’s get an understanding of what the three big hyperscalers are doing when it comes to hybrid or multi-cloud.

Microsoft

To bring Azure services to your data center and to benefit from a hybrid cloud approach, you would probably go for Azure Stack to run virtualized applications on-premises. Their goal is to build consistent experiences in the cloud and at the edge, even for scenarios where you have no internet connection. This would be by VMware’s definition a typical hybrid cloud architecture.

Multi-cloud refers to the use of multiple public cloud service providers in a multi-cloud architecture, whereas hybrid cloud describes the use of public cloud in conjunction with private cloud. In a hybrid cloud environment, specific applications leverage both the private and public clouds to operate. In a multi-cloud environment, two or more public cloud vendors provide a variety of cloud-based services to a business.

With the announcement of Azure Arc at MS Ignite 2019, Microsoft introduced a new product which “simplifies complex and distributed environments across on-premises, edge and multi-cloud“. Besides the fact that you can run Azure data services anywhere, it gives you the possibility to govern and secure your Windows servers, Linux servers and Kubernetes (K8s) clusters across different clouds. Arc can also deploy and manage K8s applications consistently (from source control).

Azure Arc Infographic

You could summarize it like this: Microsoft is bringing Azure infrastructure and services to any infrastructure. It’s not necessary to understand the technical details of Azure Stack and Azure Arc. More important are the messaging and the strategy. It’s about managing and securing Windows/Linux servers, virtual machines and K8s clusters everywhere, and this with the Azure Resource Manager (ARM). Arc ensures that the right configurations and policies are in place to fulfill governance requirements across clouds. Run your workloads where you need them and where it makes sense, even if it isn’t Azure.

Google Anthos

Google open-sourced their own implementation of containers to the Linux kernel in about 2006 or 2007. It was called cgroups, which stands for control groups. Docker appeared in 2013 and provided some nice tooling for containers. Over the following years, microservices were used more and more often to divide monoliths into smaller pieces and services. Because of the growing number of containers, Google saw the need to make this technology easy to manage and orchestrate for everyone. That was six years ago, when they released Kubernetes.

By the way, two of the three Kubernetes founders, namely Joe Beda and Craig McLuckie, have been working for VMware since their company Heptio was acquired by VMware in November 2018.

Today, Kubernetes is the standard way to run containers at scale.

We know by now that the future is hybrid or even multi-cloud, and not public cloud only. Google, too, realized that years ago. Besides that, a lot of enterprises have learned that moving to the cloud and re-engineering the whole application at the same time mostly fails. Moving applications out of your on-premises data center, refactoring them at the same time, and running them in the public cloud is not that easy.

Why isn’t it easy? Because you are re-engineering the whole application, have to take care of other application and network dependencies, need to think about security and governance, and have to train your staff to cope with all the new management consoles and processes.

Google’s answer and approach here is to modernize applications on-premises and then move them to the cloud after the modernization has happened. They say that you need a platform that runs in the cloud and in your data center. A platform that runs consistently across different environments – same technology, same tools and policies everywhere.

This platform is called Google Anthos. Anthos is 100% software-defined and (hardware) vendor-agnostic. To deliver their desired developer experience on-prem as well, they rely on VMware. This is GKE running on-prem on top of vSphere:

Anthos vSphere on-prem

Amazon Web Services

The last solution I would like to mention is AWS Outposts, which is a fully managed service that extends their AWS infrastructure, services and tools to any data center for a “truly consistent hybrid experience”. What are the AWS services running on Outposts?

  • Containers (EKS)
  • Compute (EC2)
  • Storage (EBS)
  • Databases (Amazon RDS)
  • Data Analytics (Amazon EMR)
  • Different tools and APIs

AWS Outposts are delivered as an industry-standard 42U rack. The Outpost rack is 80 inches (203.2cm) tall, 24 inches (60.96cm) wide, and 48 inches (121.92cm) deep. Inside we have hosts, switches, a network patch panel, a power shelf, and blank panels. It has redundant active components including network switches and hot spare hosts.

If you visit the Outposts website, you’ll find the following information:

Coming soon in 2020, a VMware variant of AWS Outposts will be available. VMware Cloud on AWS Outposts delivers a fully managed VMware Software-Defined Data Center (SDDC) running on AWS Outposts infrastructure on premises.

VMC on AWS Outposts is for customers who want to use the same VMware software conventions and control plane they have been using for years. It can be seen as an extension of the regular VMC on AWS offering, which is now made available on-premises (on top of the AWS Outposts infrastructure) for a hybrid approach.

VMC on AWS Outposts

What do all these options have in common? It is always about consistent infrastructure with consistent operations: one platform in the cloud and on-premises, in your data center or at the edge. Most of today’s hybrid cloud strategies rely on the fact that migrations to the cloud are not easy and fail a lot, which is why we still have 90% of all workloads running on-premises. We are going to have many millions more containers in the future, which need to be orchestrated with Kubernetes, but virtual machines are not just disappearing or being replaced tomorrow.

My conclusion here is that every hyperscaler sees cloud-native in our (near) future and wants to provide their services in the cloud and on-prem, so that customers can build their new applications with a service-oriented architecture or partially modernize existing monoliths (big legacy applications) on the same technology stack.

Consistent Infrastructure & Consistent Operations

All hyperscalers mention as well that you have to take care of different management and security consoles, skill sets and tools in general. Except for Microsoft with Azure Arc, no one else has a “real” multi-cloud solution or platform. And I want to highlight that even Azure Arc only covers servers and Kubernetes clusters and takes care of governance.

Let’s assume you have a hybrid cloud setup in place. Your current project requirements tell you to develop new applications in the Google Cloud using GKE. That’s fine. Your current on-premises data centers run with VMware vSphere for virtualization. Tomorrow, you have to think about edge computing for specific use cases where AI and ML-based workloads are involved. Then you decide to go for Azure and create a hybrid architecture with Azure Stack and Arc. Now you are using two different public cloud providers, one with their specific hybrid cloud offering and also VMware vSphere on-premises.

What are you going to do now? How do you manage and secure all these different clouds and technologies? Or do you think about migrating all the application workloads from on-prem to GCP and Azure? Or do you start with Anthos now for other use cases and applications? Maybe you decide later to move away from VMware and evacuate the VMware-based private cloud to any hyperscaler? Is it even possible to do that? If yes, how long would this technology change and migration take?

Let’s assume for this exercise that this would be a feasible option with an acceptable timeframe. How are you going to manage the different servers, applications and dependencies, and secure everything at the same time? How can you manage and provision infrastructure in an easy and efficient way? What about cost control? What happens if you no longer see Azure as strategic and want to move to AWS tomorrow? Then you figure out that cloud is more expensive than you thought, and experience yourself why only 10% of all workloads are running in the public cloud today.

Multi-Cloud Reality

I think people can handle an infrastructure that runs VMware on-premises with at most one public cloud pretty easily: a hybrid cloud architecture. If we are talking about a greenfield scenario where you could start from scratch and choose AWS including AWS Outposts, because you think it’s best for you and matches all the requirements, go for it. You know what is right for you.

But I believe, and this is also what I see with larger customers, the current reality is hybrid and the future is multi-cloud.

VMware Multi-Cloud Strategy

And a multi-cloud environment is a totally different game to manage. What is the VMware multi-cloud strategy exactly and why is it different?

Consistent VMware Multi-Cloud

VMware’s approach has always been to abstract complexity. This doesn’t mean that everything gets less complex, but you get the right platform and tooling to deal with this complexity.

A decade ago, abstracting meant providing a hypervisor (vSphere) for any hardware (being vendor-agnostic). After that came software-defined storage (vSAN), followed by software-defined networking (NSX). Beside these three major software pieces, we also have the vRealize suite, which is mainly known for products like vRealize Automation and vRealize Operations. The technology stack consisting of vSphere, vSAN, NSX, vRealize and some management components forms the software-defined data center and is called VMware Cloud Foundation: a technology stack that allows you to experience the ease of public cloud in your own data center. Again, if wanted and required, you can run this stack on top of any hyperscaler like AWS, Azure, Google Cloud, Alibaba Cloud, Oracle Cloud or IBM.

VMware Cloud Foundation

It’s a platform which can deliver services as you would expect from the public cloud. The vRealize suite can help you automatically provision virtual machines and containers, including the right network and storage (on any vSphere-based cloud, or cloud-natively on AWS, GCP, Azure or Alibaba). Build your own templates or blueprints (Infrastructure as Code) to deliver services such as IaaS, DBaaS, CaaS, DaaS, FaaS, PaaS, SaaS and DRaaS, which can be ordered and consumed by your users or your IT. Put a price tag on any service or workload you deploy, and include your public cloud spending in the calculation as well (e.g. with CloudHealth).
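As a rough sketch of what such a blueprint looks like, here is a minimal vRealize Automation 8 style cloud template. The resource names, image and flavor mappings below are placeholders that depend on your own cloud account setup:

```yaml
formatVersion: 1
inputs:
  size:
    type: string
    enum: [small, medium, large]
    default: small
resources:
  AppNet:
    type: Cloud.Network
    properties:
      networkType: existing
  WebServer:
    type: Cloud.Machine
    properties:
      image: ubuntu-20.04        # placeholder image mapping from the cloud account
      flavor: '${input.size}'    # placeholder flavor mapping, e.g. small = 2 vCPU
      networks:
        - network: '${resource.AppNet.id}'
```

The same template can be deployed to any cloud account the platform knows about, which is what makes the blueprint approach cloud-agnostic.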

You want to deliver vGPU enabled virtual machines or containers? Also possible with vSphere. Modern AI/ML based applications need compute acceleration to handle large and complex computation. vSphere Bitfusion allows you to access GPUs in a virtualized environment over the network (ethernet). Bitfusion works across any cloud and environment and can be accessed from any workload from any network. This topic gets very interesting if we talk about edge computing for example.

vSphere Bitfusion

Modern applications obviously demand a modern infrastructure: an infrastructure with a hybrid or multi-cloud architecture. With that you are facing the challenge of maintaining control and visibility over a growing number of environments. In such a modern environment, how do you automate configuration and management? What about networking and security policies applied at the cluster level? How do you handle identity and access management (IAM)? Any clue about backup and restore? And what would be your approach to cost management in a multi-cloud world?

Modern Applications Challenges

To improve the IT ops and developer experience, VMware announced the Tanzu portfolio, including something they call Tanzu Kubernetes Grid (TKG). The promise of TKG is to give developers consistent, on-demand access to infrastructure across clouds; it is considered the enterprise-ready Kubernetes runtime.

With vSphere 7, TKG has been embedded into the control plane to deliver Kubernetes as a service. Finally, as Kubernetes is natively integrated into the hypervisor, we have a converged platform for VMs and containers. IT ops can now see and manage Kubernetes objects (e.g. pods) from the vSphere Client, while developers use the Kubernetes APIs to access the SDDC infrastructure.

There are different ways to consume TKG besides “vSphere 7 with Kubernetes“. TKG is a consistent and upstream-compatible Kubernetes runtime with pre-integrated and validated components that also runs in any public cloud or edge environment.

Tanzu Kubernetes Grid

If you have to run Kubernetes clusters natively on Azure, AWS, Google and on vSphere on-premises, how would you manage IAM, lifecycle, policies, visibility, compliance and security? How would you manage any new or existing clusters?

Tanzu Mission Control

Here, VMware’s solution is Tanzu Mission Control (TMC): a centralized management platform (operated by VMware as SaaS) for all your clusters in any cloud. TMC allows you to provision TKG workload clusters to your environment of choice and manage the lifecycle of each cluster via TMC. To date, the supported deployments are in vSphere and AWS EC2 accounts; deployment on Azure is coming very soon.

Existing Kubernetes clusters from any vendor such as EKS, AKS, GKE or OpenShift can be attached to TMC. As long as you are maintaining CNCF conformant clusters, you can attach them to TMC so that you can manage all of them centrally.

The Tanzu portfolio is much bigger and includes more than TKG and TMC, which only address the “where and how to run Kubernetes” and “how to deploy and manage Kubernetes”. Tanzu has other solutions like an application catalog, build service, application service (previously Pivotal Cloud Foundry) and observability (monitoring and metrics) for example.

VMware Tanzu Products

And these Tanzu products can be complemented with cloud-scale networking solutions like an application delivery controller (ADC) or software-defined WAN (SD-WAN). To deliver the “public cloud experience” to developers on any infrastructure, we need to provide agility. From an infrastructure perspective that is VMware Cloud Foundation, and from an application or developer perspective we have learned that Tanzu covers it.

For a distributed application architecture, you also need a software-defined ADC architecture that is fully distributed, auto scalable and provides real-time analytics and security for VMs or containers. VMware’s NSX Advanced Load Balancer (formerly known as Avi Networks) runs on AWS, GCP, Azure, OpenStack and VMware and has a rich feature set:

AVI Networks Features

Hypervisor versus Public Cloud

What I am trying to say is that cloud-native at scale requires much more than containers alone. Hypervisors are obviously not disappearing or being replaced by containers from the public cloud any time soon; they will co-exist, and therefore it is very important to implement solutions which can be used everywhere. If you can ignore the cost factor for a moment, probably the best solution would be using the exact same technology stack and tools for all the clouds your workloads are running on.

You need to rely on a partner and solution portfolio that can address or solve anything (or almost anything) you are building in your IT landscape. As I already said, VCF and Tanzu are just a few pieces of the big puzzle. What is important is an end-to-end approach from every layer and perspective.

Therefore, I believe, VMware is very relevant and very well-positioned to support your journey to the multi-cloud.

The applications you migrate or modernize need to be accessed by your users in a simple and secure way. This leads us, for example, to the next topic, where we could start a discussion about the digital workspace or end-user computing (EUC).

Talking about EUC and the future-ready workplace would involve other IT initiatives like hybrid or multi-cloud, application modernization, data center and cloud networking, workspace security, network security and so on. A discussion which would touch all strategic pillars VMware defined and presented since VMworld 2019.

VMware 5 Strategic Pillars

If your goal is also to remove silos, provide a better user and admin experience, and this in a secure way over any cloud, then I would say that VMware’s unique platform approach is the best option you’ll find on the market.

And since VMware can and will co-exist with the hyperscalers, and even runs on top of all of them, I would consider talking about the “big four” and not the “big three” hyperscalers from now on.

Cross-Cloud Mobility with VMware HCX


Update 10th September 2020: vSphere 7.0 (VDS 7.0) and NSX-T 3.0.1+ are supported since the HCX R143 release which has been made available on September 8, 2020

https://docs.vmware.com/en/VMware-HCX/services/rn/VMware-HCX-Release-Notes.html 

Most people think that VMware HCX is only a migration tool that helps you move workloads to a vSphere-based cloud like VMware Cloud on AWS, Azure VMware Solution or Google Cloud VMware Engine. But it can do so much more for you than application or workload migrations. HCX is also designed for workload rebalancing and business continuity across data centers or VMware clouds. Why do I say “across VMware clouds” and not just “clouds”?

A few years ago everyone thought or said that customers would move all their workloads to the public cloud and that the majority of them wouldn’t need local data centers anymore. But we all know that this perception has changed and that the future cloud operating model is hybrid.

A hybrid cloud environment leverages both the private and public clouds to operate. A multi-cloud environment includes two or more public cloud vendors which provide cloud-based services to a business that may or may not have a private cloud. A hybrid cloud environment might also be a multi-cloud environment.

We all know that the past perception was an illusion and that we didn’t have a clue where hyperscalers like AWS, Azure or GCP would be in the next 5 or 7 years. And I believe that even AWS and Microsoft didn’t expect what was going to happen, given the interesting shifts we have observed in the last few years.

Amazon Web Services (AWS) was launched 14 years ago (2006) to provide web services from the cloud. At AWS re:Invent 2018 CEO Andy Jassy announced AWS Outposts because customers had been asking for an AWS option on-premises. In the end, Outposts is just an extension of an AWS region into your own data center, where you can launch EC2 instances or Amazon EBS volumes locally. AWS already had some hybrid services available (like Storage Gateway), but here we are talking about infrastructure and making your own data center part of the AWS Global Infrastructure.

Microsoft Azure was released in 2010 and the first technical preview of Azure Stack was announced in 2016. So, Microsoft also realized that the future cloud model is a hybrid approach “that provides consistency across private, hosted and public clouds”.

Google Cloud Platform (GCP) has offered cloud computing services since 2008. Eleven years later (in 2019) Google introduced Anthos so that you can “run an app anywhere – simply, flexibly and securely”.

As we now understand, all the hyperscalers changed their cloud model to provide customers a consistent infrastructure with consistent operations and management.

VMware is coming from the other end of this hybrid world and has the same overall goal or vision to make hybrid and multi-cloud a reality. But with one very important difference: VMware helps you abstract the complexity of a hybrid environment and gives you the choice to run your workloads on any cloud infrastructure with a cloud-agnostic approach.

As organizations try to migrate their workloads to the public cloud, they face multiple challenges and barriers:

  • How can I migrate my workload to the public cloud?
  • How long does it take to migrate?
  • What about application downtime?
  • Which migration options do I have?
  • Which cloud is the best destination for which workloads?
  • Do I need to refactor or develop some applications?
  • Can I do a lift and shift migration and modernize the application later?
  • How can I consistently deploy workloads and services for my multi-cloud?
  • How can I operate and monitor (visibility and observability) all the different clouds?
  • What if tomorrow one of the public cloud providers is no longer strategic? How can I move my workloads away?
  • How can I control costs over all clouds?
  • How can I maintain security?
  • What about the current tools and 3rd party software I am using now?
  • What if I want to migrate VMs back from the public cloud?
  • What if I want to move away from (or back to) a specific cloud provider at some point?

In summary, the challenges with a hybrid cloud are about costs, complexity, tooling and skills. Each public cloud added to your current on-premises infrastructure is in fact a new silo. If you have the extra money and time and don’t need consistent infrastructures and consistent operations and management, you’ll accept the fact that you have a heterogeneous environment with different technology formats, skill/tool sets, operational inconsistencies and security controls.

If you are interested in a more consistent platform then you should build a more unified hybrid cloud. Unified means that you provide consistent operations with the existing skills and tools (e.g. vCenter, vRealize Automation, vRealize Operations) and the same policies and security configuration over all clouds – your data center, public cloud or at the edge.

To provide such a cloud agnostic platform you need to abstract the technology format and infrastructure in the public cloud. This is why VMware built the VMware Cloud Foundation (VCF) platform that delivers a set of software-defined services for compute, storage, networking, security and cloud management for any cloud.

VMC on AWS, Azure VMware Solution, Google Cloud VMware Engine and all the other hybrid cloud offerings (IBM, Oracle, Alibaba Cloud, VCPP) are based on VMware Cloud Foundation. This is the underlying technology stack you would need if your goal is to be independent and to achieve workload mobility between clouds. With this important basic understanding we can take a closer look at VMware HCX.

VMware HCX Use Cases

HCX provides an any-to-any vSphere workload mobility service without requiring retrofitting, as the same technology stack is used in any cloud.

VMware HCX Use Cases

HCX enables you to schedule application migrations of hundreds or thousands of vSphere VMs within your data centers and across public clouds without requiring a reboot or downtime.

If you would like to change the current platform or have to modernize your current data center, HCX allows you to migrate workloads from vSphere 5.x and non-vSphere (Hyper-V and KVM) environments.

VMware HCX Migration

Workload rebalancing means providing a mobility platform across cloud regions and cloud providers to move applications and workloads at any time for any reason.

Workload mobility is cool and may be the future, but it is not really feasible today because the public clouds’ egress costs are still too high. Let’s say you pay $0.05 per GB when you move data away from the public cloud to any external destination; this would generate costs of $2.50 for a 50GB virtual machine.

Not that much, right? If you move away 500 VMs, the bill would list $1’250 in egress costs. Evacuating VMs from one public cloud to another is not so expensive if it happens three or four times a year. But if the rebalancing happens at a higher cadence, the egress costs get too high. We can assume that this will change as public cloud computing prices come down.
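The back-of-the-envelope arithmetic above can be sketched as a small calculation (the $0.05 per GB rate and the 50GB VM size are the article’s example figures, not any provider’s actual price list):

```python
# Rough egress cost estimate for moving VMs out of a public cloud.
# The rate and VM size below are illustrative figures from the text,
# not actual provider prices.
EGRESS_RATE_PER_GB = 0.05   # USD per GB transferred out
VM_SIZE_GB = 50             # average size of one virtual machine

def egress_cost(num_vms: int, vm_size_gb: float = VM_SIZE_GB,
                rate: float = EGRESS_RATE_PER_GB) -> float:
    """Return the one-time egress cost in USD for evacuating num_vms."""
    return num_vms * vm_size_gb * rate

print(egress_cost(1))    # 2.5    -> $2.50 for a single 50 GB VM
print(egress_cost(500))  # 1250.0 -> $1'250 for 500 VMs
```

Rebalancing the same 500 VMs monthly would already cost $15’000 per year in egress fees alone, which is why the cadence matters so much.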

HCX Components and Services

HCX is available with an Advanced and Enterprise license. The Advanced license services are standard with HCX and are also included in the HCX Enterprise license. The Enterprise license is needed when you migrate non-vSphere workloads into a vSphere environment. This capability is called OS Assisted Migration (OSAM).

HCX Services

The HCX Advanced features are included in an NSX Data Center Enterprise Plus license. With a managed service like VMware Cloud on AWS or Azure VMware Solution, HCX Advanced is already included.

HCX Connector Advanced License

If you want to move workloads from a vSphere environment to a vSphere enabled public cloud, you don’t need the complete VMware Cloud Foundation stack at the source site:

  • On-premises vSphere version 5.5 and above
  • Minimum of 100 Mbps bandwidth between source and destination
  • Virtual switch based on vDS (vSphere Distributed Switch), Cisco Nexus 1000v or vSphere Standard Switch
  • Minimum of virtual machine hardware version 9
  • VMs with hard disks not larger than 2TB
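As a rough illustration, the source-site checks above could be expressed as a simple preflight script. The thresholds mirror the bullet list; the class and function names are hypothetical helpers invented for this sketch, not part of any HCX tooling (HCX performs its own validation during deployment):

```python
# Hypothetical preflight check mirroring the HCX source-site
# requirements listed above. Field and function names are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class SourceSite:
    vsphere_version: tuple   # e.g. (6, 7) for vSphere 6.7
    bandwidth_mbps: int      # bandwidth between source and destination
    switch_type: str         # "vds", "nexus1000v" or "vss"
    vm_hw_version: int       # lowest VM hardware version in scope
    largest_disk_tb: float   # largest VM hard disk to migrate

def check_requirements(site: SourceSite) -> list:
    """Return a list of human-readable problems; empty means OK."""
    problems = []
    if site.vsphere_version < (5, 5):
        problems.append("vSphere must be version 5.5 or above")
    if site.bandwidth_mbps < 100:
        problems.append("need at least 100 Mbps to the destination")
    if site.switch_type not in ("vds", "nexus1000v", "vss"):
        problems.append("unsupported virtual switch type")
    if site.vm_hw_version < 9:
        problems.append("VM hardware version must be 9 or higher")
    if site.largest_disk_tb > 2:
        problems.append("VMs with disks larger than 2 TB cannot migrate")
    return problems

site = SourceSite((6, 7), 200, "vds", 14, 1.5)
print(check_requirements(site))  # [] -> all requirements met
```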

Depending on the HCX license and services you need, you have to deploy some or all of the HCX components. HCX comprises components and appliances at the source and destination sites.

HCX Manager Destination

The HCX Cloud manager and its services are deployed at the destination site first, before you deploy the virtual appliances at the source site (HCX Interconnect appliance).

HCX Interconnect Appliance Download Link

After you have deployed the appliances at the source site, you can create the site pairing.

HCX Site Pairing

As soon as you have installed HCX in both sites, you can manage and configure the services within the vSphere Client.

HCX in vSphere Client

After a successful site pairing, you can start to create the HCX Service Mesh.

The Multi-Site Service mesh is used to create a secure optimized transport fabric between any two sites managed by HCX. When HCX Migration, Disaster recovery, Network Extension, and WAN Optimization services are enabled, HCX deploys Virtual Appliances in the source site and corresponding “peer” virtual appliances on the destination site. The Multi-Site Service Mesh enables the configuration, deployment, and serviceability of these Interconnect virtual appliance pairs.

HCX Service Mesh

In the HCX site-to-site architecture, there is a notion of an HCX source and an HCX destination environment. Depending on the environment, there is a specific HCX installer:

HCX Connector (previously HCX Enterprise) or HCX Cloud. HCX Connector is always deployed as the source. HCX Cloud is typically deployed as the destination, but it can be used as the source in cloud-to-cloud deployments. In HCX-enabled public clouds, the cloud provider deploys HCX Cloud. The public cloud tenant deploys HCX Connector on-premises.
The source and destination sites are paired together for HCX operations. 

In both the source and destination environments, HCX is deployed to the management zone, next to each site’s vCenter Server, which provides a single pane (HCX Manager) for administering VMware HCX. This HCX Manager provides a framework for deploying HCX service virtual machines across both the source and destination sites. VMware HCX administrators are authenticated, and each task authorized, through the existing vSphere SSO identity sources. VMware HCX mobility, extension, and protection actions can be initiated from the HCX user interface or from within the vCenter Server Navigator screen’s context menus.

In the NSX Data Center Enterprise Plus scenario (HCX for private-to-private deployments), the tenant deploys both the source and destination HCX Managers.

The HCX-IX service appliance provides replication and vMotion-based migration capabilities over the Internet and private lines to the destination site, while providing strong encryption, traffic engineering, and virtual machine mobility.

The VMware HCX WAN Optimization service appliance improves the performance characteristics of private lines or Internet paths by applying WAN optimization techniques like data de-duplication and line conditioning. It brings performance closer to a LAN environment and accelerates onboarding to the destination site over the Internet/VPN without waiting for Direct Connect/MPLS circuits.

The VMware HCX Network Extension service appliance provides a high-performance (4–6 Gbps) Layer 2 extension capability. The extension service permits keeping the same IP and MAC addresses during a virtual machine migration. Network Extension with Proximity Routing provides optimal ingress and egress connectivity for virtual machines at the destination site.

 

Using VMware HCX OS Assisted Migration (OSAM), you can migrate guest (non-vSphere) virtual machines from on-premises data centers to the cloud. The OSAM service has several components: the HCX Sentinel software that is installed on each virtual machine to be migrated, a Sentinel Gateway (SGW) appliance for connecting and forwarding guest workloads in the source environment, and a Sentinel Data Receiver (SDR) in the destination environment.

The HCX Sentinel Data Receiver (SDR) appliance works with the HCX Sentinel Gateway appliance to receive, manage, and monitor data replication operations at the destination environment.

HCX Migration Types

VMs can be moved from one HCX-enabled data center to another using different migration technologies or types.

HCX Migration Types

HCX cold migration uses the VMware NFC (Network File Copy) protocol and is automatically selected when the source VM is powered off.

HCX vMotion

  • This option is designed for moving a single virtual machine at a time
  • There is no service interruption during the HCX vMotion migration
  • Encrypted vMotion between legacy source and SDDC target
  • Bi-directional (Cross-CPU family compatibility without cluster EVC)
  • In-flight Optimization (deduplication/compression)
  • Compatible from vSphere 5.5+ environments (VM HW v9)

HCX Bulk Migration

  • Bulk migration uses the host-based replication (HBR) to move a virtual machine between HCX data centers
  • This option is designed for moving virtual machines in parallel (migration in waves)
  • This migration type can be set to complete on a predefined schedule
  • The virtual machine runs at the source site until the failover begins. The service interruption with the bulk migration is equivalent to a reboot
  • Encrypted Replication migration between legacy source and SDDC target
  • Bi-directional (Cross-CPU family compatibility)
  • WAN Optimized (deduplication/compression)
  • VMware Tools and VM Hardware can be upgraded to the latest at the target.

HCX Replication Assisted vMotion

VMware HCX Replication Assisted vMotion (RAV) combines advantages from VMware HCX Bulk Migration (parallel operations, resiliency, and scheduling) with VMware HCX vMotion (zero downtime virtual machine state migration).

HCX OS Assisted Migration

This migration method provides for the bulk migration of guest (non-vSphere) virtual machines using OS Assisted Migration to vSphere-based on-premises or cloud data centers. Enabling this service requires additional HCX licensing.

 

  • Utilizes OS assisted replication to migrate (conceptually similar to vSphere replication)
  • Source VM remains online during replication
  • Quiesces the source VM for a final sync before migration
  • The source VM is powered off and the migrated VM is powered on at the target site, for a low-downtime switchover
  • VMware Tools is installed on the migrated VM

Cross-Cloud Mobility

Most customers will probably start with one public cloud first, e.g. VMC on AWS, to evaluate the hybridity and mobility HCX delivers. Cross-cloud mobility may not be a requirement today or tomorrow, but it becomes more important once your company’s multi-cloud strategy becomes reality, which for many will happen very soon.

If you want to be able to move workloads seamlessly between clouds, extend networks and protect workloads the same way across any cloud, then you should consider a VMware platform and use HCX.

HCX Cross-Cloud Mobility

Let’s take network and security as an example. How would you configure and manage all the different network, security, firewall policies etc. in your different clouds with the different infrastructure and security management consoles?

If you abstract the hyperscaler’s global infrastructure and put VMware on top, you could in this case use NSX (software-defined networking) everywhere. And because all the different policies are tied to a virtual machine, it doesn’t matter anymore if you migrate a VM from host to host or from cloud to cloud.

This is what you would call consistent operations and management which is enabled by a consistent infrastructure (across any cloud).

And how would you migrate workloads in a very cost- and time-efficient way without a Layer 2 stretch? You would have to take care of re-IPing workloads, and this involves a lot of changes and dependencies. If you have hundreds of applications, then the cloud migration would be a never-ending project with costs you could not justify.

In case you need to move workloads back to your own on-premises data center, HCX also gives you this advantage.

You have the choice in which cloud you want to run your applications, at any time.

 

HCX and vSphere 7

At the time of writing HCX has no official support for vSphere 7.0 yet. I tested it in my home lab and ran into an error while creating the Service Mesh. At least one other colleague had the same issue with vSphere 7 using NSX-T 3.0 and VDS 7.0.

HCX vSphere 7 Error

I would like to thank Danny Stettler for reviewing and contributing. 🙂 Big kudos to you, Danny! 🙂

I hope the article has helped you to get an overview what HCX and a hybrid cloud model really mean. Drop a comment and share your view and experience when it comes to cloud strategies and migrations.

 

Mobile-First with Samsung DeX and VMware Horizon

Mobile-First with Samsung DeX and VMware Horizon

Currently, most people must work from home because of COVID-19. When the global lockdown started, companies were challenged to provide continuity of business with:

  • Remote PC access (physical access to computers with VMware Horizon or Citrix)
  • Remote access via VPN
  • Published desktops and apps hosted on-premises
  • Published desktops and apps hosted in the Cloud (e.g. Azure or AWS)
  • Shipping of new laptops
  • Shipping of new thin clients (don’t forget the Raspberry Pi 4)
  • Shipping of new mobile devices like smartphones and tablets

The options look clear, but the execution of these actions wasn’t that easy. Nobody was prepared for a pandemic and its consequences: shipping problems and delays for servers, PCs and mobile devices. On the other side, some companies already had the necessary infrastructure but not enough licenses for their virtual desktop infrastructure (VDI) or unified endpoint management (UEM) platform.

In Switzerland we are lucky to have modern and stable internet connections, where 1 Gigabit over fiber is not the exception anymore. I don’t know how VMware’s internal IT was challenged at the beginning of this crisis, but I could always access my applications and data from my laptop with Workspace ONE (I don’t need Horizon for work).

After a few weeks I was asking myself if employers know all their options to enable their employees for remote working. The biggest growth numbers are surely related to digital workspace products. That’s why people on Twitter and LinkedIn are shouting out that it’s “the year of VDI”.

As long as not every virtual desktop or remote desktop session host is equipped with a vGPU, and until GPUs are affordable for everyone and become commodity, we cannot, in my opinion, talk about the year of VDI. But that’s another topic.

When I heard that PC and smartphone shipments were delayed for weeks, I looked for alternatives which don’t rely on the shipping of new devices:

  • I assume the employee has a private PC or laptop at home
  • I assume the employee possesses at least one smartphone or tablet
  • I assume the employee has a stable internet connection at home
  • I assume the employee has 4G/5G reception

To access a virtual desktop brokered via VMware Horizon, you only need a PC or laptop with an HTML5-capable browser to access the virtual environment. The installation of the Horizon Client is not mandatory but would give you a much richer user experience.

Working directly on a smartphone or tablet only makes sense and is only fun when:

  • You don’t have to access a virtual desktop or virtual app on a tiny display
  • You can use mobile apps
  • You can use SaaS apps
  • You can connect your phone or tablet to an external display – ideally with a keyboard and mouse

I have an iPad Pro at home but decided to test a mobile-first approach with a Samsung S20+, because it is more common that employers provide a smartphone instead of a tablet. I am not aware of any mobile-only company that works solely with smartphones or tablets. But I think it’s important to understand how a mobile-first or mobile-only approach affects the user experience and whether it’s possible to replace PCs, laptops or thin clients.

Why is this thought interesting and important? The employee experience (EX) is the number one priority for a digital workspace and with today’s UEM platforms you can manage almost every formfactor and operating system (iOS, Android, macOS, Windows 10, Linux).

What if we can provide the same user experience and reduce costs with a mobile-first strategy coupled with Horizon for VDI use cases? Don’t ignore that there’s an ongoing shift towards 5G and it’s becoming more and more accessible. 

The best-known telco provider in Switzerland is “Swisscom”, which already offers pretty wide 5G (up to 1 Gbit/s) and 5G+ (up to 2 Gbit/s) coverage:

Swisscom 5G Coverage

My vision here is that every employee is equipped only with a smartphone, which they can use in the office and at home to securely connect to the corporate network and access internal apps, data and SaaS applications.

Here is what I would like to test:

  • Can the Samsung S20+ replace my Dell laptop?
  • How can I connect peripherals like an external display, keyboard, mouse, headset, webcam, printer etc.?
  • Which internal and external (mobile/SaaS) applications can be used with a good user experience?
  • Which applications should better be accessed via a virtual desktop or published app delivered with Horizon?
  • How is the user experience with Samsung DeX?
  • Which 3rd party applications are supported with Samsung DeX?
  • Can Samsung DeX transform a Samsung smartphone into a Windows thin client?
  • How is my daily work affected?
  • How does Samsung DeX and VMware Horizon work together?

Preparation of my Mobile-First Workplace

The first thing I did, after I installed all necessary Android updates, was to enroll my S20+ in VMware’s Workspace ONE. 

I enrolled my phone as a dedicated corporate device and could access my company’s applications within the next five minutes. The following applications are the most important ones for my daily work at VMware:

  • Microsoft Outlook / VMware Boxer
  • Microsoft PowerPoint
  • Microsoft Word
  • Microsoft Excel
  • OneDrive
  • Microsoft Teams
  • Slack
  • Zoom
  • Salesforce (SaaS version)
  • Different web links (Confluence, Jira, intranet, technical marketing website etc.)

Workspace ONE Application Catalog

My phone is enrolled, remote access to the corporate network can be established, and all the necessary mobile applications are installed. Internal web links and SaaS applications can be accessed through the secure per-app VPN tunnel (micro VPN tunnel) and the Workspace ONE application catalog (image above) with SSO (Single Sign-On).

So, how do I transform this smartphone into PC-mode? 

Samsung DeX

I believe Samsung first included the “Desktop eXperience” (DeX) feature on Galaxy S8 smartphones, and the original version even required the use of a DeX docking station. For a few months now, DeX can be launched via a direct cable connection to an external display, Windows or Mac client. This means that no multiport adapter and no HDMI cable are needed if your display has a USB-C port.

Samsung DeX is not hardware — it’s a software platform that extends your smartphone or tablet into a desktop computing experience.

To use the DeX desktop on Windows or macOS you’ll need the downloadable app, but this is not something I’m going to explore further.

Lucky me, my Dell display at home has a lot of regular USB ports and one USB-C port. This allowed me to connect the S20+ smartphone with the USB-C cable and to connect peripherals like my headset (or speakers), keyboard and mouse. Another option, if you have a Bluetooth keyboard and mouse, would be to connect them directly to the S20. The only thing I didn’t do yet, because it has no priority, is to connect my network printer to the phone.

Samsung DeX Home Office Setup

In the image above you can see the DeX desktop with some applications shortcuts I created.

Samsung DeX Keyboard Settings Samsung DeX Audio Settings 

The configuration of my keyboard and headset as the primary audio device was also very simple. So far, I am very impressed! 

Adapters

All you need to get started using DeX are a display, an HDMI adapter and peripherals. The HDMI adapter is only needed if your monitor doesn’t have an integrated USB-C port. And not every monitor has a lot of USB ports. That’s why Samsung offers three different adapters:


Samsung DeX Adapters

The DeX cable is a simple 1.4m-long HDMI-to-USB-C cable which you plug into your monitor.

The compact HDMI adapter allows you to connect your phone to an HDMI cable on your monitor. As no additional ports are available with the DeX cable or the HDMI adapter, you’ll need to use Bluetooth peripherals.

The third option, a multiport adapter, gives you a USB 3.0 port, a GigE port for a wired internet connection and a USB-C port to connect the phone’s charging cable (besides the HDMI port).

If you have no mouse, you could use your phone as a touchpad. A notification on your phone will show you this option.

Samsung Core Applications & 3rd party apps

In Samsung’s “Beginner’s Guide to Samsung DeX” you’ll find the following information about supported mobile apps:

All of Samsung’s core applications are optimized for DeX, meaning you can resize and maximize the apps. You can also use right-click functionality and keyboard shortcuts. There are dozens of third-party apps that are fully optimized for DeX, including the Microsoft Office suite, Adobe Acrobat Reader, Photoshop Lightroom, Photoshop Sketch, Gmail, Chrome, BlueJeans, GoToMeeting and all the leading VDI clients, to name just a few. For those that aren’t optimized for DeX, read on for the next tip.

Samsung DeX Support Apps

Here’s the next tip about the DeX Labs activation:

DeX Labs offers access to “experimental” features that aren’t officially supported. Two current features include allowing DeX to force apps to resize and auto-open the last used app. To activate, click the DeX logo on the bottom right of your screen, open DeX Labs and toggle the features on. Now, when you open an app that is not DeX optimized, you’ll be given the opportunity to force resizing. This will allow you to view it in a larger window or even in fully maximized view.

Samsung and VMware have a partnership for a while now and because of that the VMware Horizon Client and some other VMware apps are on the list of supported 3rd party apps:

Samsung DeX 3rd party apps

I already tested Zoom and it worked perfectly. First tests of Boxer and Slack also looked promising. The only apps which are not on the list of “apps in DeX mode” are:

  • MS Teams
  • Salesforce
  • Slack (seems to work)

Samsung DeX Team Crash Samsung DeX Salesforce crash

When I try to open MS Teams in DeX mode, nothing happens, and I see on the smartphone that the app crashes immediately. DeX Labs, which attempts to resize apps that aren’t officially supported by Samsung DeX, didn’t make any difference.

Mobile Apps vs. Desktop Apps

Since MS Teams is not working in DeX mode, even with DeX Labs, I have to mirror my phone’s screen to use MS Teams or start it on a virtual desktop provided with Horizon.

Launching VMware Horizon desktops when working within DeX gives you both. You’re now working on a virtual desktop with full desktop apps. You’re viewing content on a full-sized monitor, and using the keyboard and mouse to get work done. And it’s all powered by your Galaxy smartphone. That is the digital workplace, powered by mobile.

Accessing a virtual desktop is very easy. You just need to download the Horizon Client from the Workspace ONE catalog (Intelligent Hub) or install the available one from the Google Play Store.

Samsung DeX VMware Horizon Client

For my tests I’m going to use the VMware TestDrive environment again like I did it for my testing with the Raspberry Pi 4. In the Horizon Client for Android User Guide you will find more information about using the Horizon Client with Samsung DeX:

If the Android device supports Samsung DeX, you can use Horizon Client in DeX desktop mode.

When the device is in DeX desktop mode, Horizon Client treats the device as a thin client and Thin Client mode is enabled. For more information, see Using Horizon Client on a Thin Client.

The following features are supported when you use Horizon Client in DeX desktop mode.

  • You can configure Horizon Client to start automatically when you switch to DeX desktop mode. See Enable the DeX Mode Auto Launch Feature.
  • Remote desktop and published application sessions continue to run after you enter or exit DeX desktop mode.
  • If Horizon Client is maximized, remote desktops enter full-screen mode after you switch to DeX desktop mode.
  • To switch the language input method in a remote desktop, you can use the language switch key on a Samsung physical keyboard.
  • You can connect to multiple remote desktops and published applications at the same time. Smart card authentication is not supported for multiple sessions.

Use Cases for DeX and VDI

Let’s give you a few examples of classic use cases. 

Healthcare

When we think about hospitals and healthcare in general, data security and mobility are very important topics. Mobility can help improve productivity, and almost every healthcare customer uses VDI for security and mobility purposes: for example, shift workers and doctors need (VDI desktop) session roaming.

My experience shows that doctors often have a phone, tablet and a desktop/laptop.

Instead of having:

  • a computer in the office
  • a computer or thin client in the examination room
  • and a tablet for patient data (electronic health record) or medical images

you could do it all with one device and have the same user experience everywhere and in any case. I will cover the support of various authentication methods (smart card, biometric) later.

Please find here the VMware Horizon 7 Deployment Guide for Healthcare. 

Finance

The finance vertical also has different pillars and specific use cases. With banking or wealth management customers you probably have to talk more about thin clients and VDI. And with insurance companies, which have a lot of road warriors, you need to consider scenarios where the agents/consultants are working in the car or directly at their customers’ sites.

For road warriors you could also propose a Samsung tablet and/or a mounted display and a dedicated keyboard which acts like a standalone computer.

Public Sector

Public sector customers have requirements that combine those of the healthcare and finance industries. Obviously, security is one of the most important topics. Data leakage prevention, encryption and data locality (an argument for VDI) are just a few of the requirements. Multi-factor authentication with smart cards is also very common.

Security with Samsung DeX and Horizon

In this section I want to summarize what Samsung and VMware are offering for a secure mobile-first or mobile-only workplace. Samsung provides security with their phones and additional security features (Knox) for personal and enterprise use.

VMware refers to its intrinsic security strategy with a Zero Trust security approach.

Samsung and VMware

Samsung DeX and Samsung Knox

Using DeX also brings security benefits. Samsung smartphones and tablets are protected by advanced biometric security and Samsung Knox, a defense-grade security platform that’s designed from the chip up to protect devices from the minute they’re powered on — so you can be sure your information is safe.

Workspace ONE Unified Endpoint Management (UEM)

For the device management and compliance (OS updates, security patches etc.) of Android-based phones, Workspace ONE UEM is in place before any access is allowed.

With Workspace ONE Access you can grant access to your applications based on a combination of conditions (also known as the conditional access engine). The policy framework for conditional access consists of:

  • User (employee, contractor, customer)
  • Device (iOS, Android, Win10, macOS, BYOD, corp device, unmanaged)
  • Application (web, mobile, virtual, low or high security, internal, external)
  • Location (network range, 3G/4G/5G, geo)

For secure remote access to the corporate network, Workspace ONE offers an application tunnel and proxy. The app tunnel is established with Workspace ONE Tunnel, and the Unified Access Gateway (UAG) offers edge services so that you can securely access your on-prem Horizon virtual apps or desktops.

With Workspace ONE Intelligence you’ll get automated remediation and orchestration. Based on different conditions or triggers you can define actions or workflows like ticketing or notifications. You could also automate the blocking of a VPN access if a phone or tablet doesn’t meet the required patch level.
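The trigger-and-action pattern described above can be sketched in a few lines. This is an illustrative sketch of the logic only, not the real Workspace ONE Intelligence API; the patch-level date and action names are assumptions:

```python
# Assumed example: devices below this security patch date are non-compliant.
REQUIRED_PATCH_LEVEL = "2020-04-01"

def remediate(device: dict) -> list:
    """Return the automated actions for a device record (trigger -> actions)."""
    actions = []
    # ISO-formatted dates compare correctly as strings.
    if device["patch_level"] < REQUIRED_PATCH_LEVEL:
        actions.append("block_vpn_access")  # automated remediation
        actions.append("create_ticket")     # orchestration: ticketing
        actions.append("notify_user")       # orchestration: notification
    return actions
```

A device reporting a January 2020 patch level would trigger all three actions, while an up-to-date device triggers none.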

If you want to go one step further, you can leverage the Workspace ONE Trust Network, which combines the insights from Workspace ONE with those of verified security partners (currently Carbon Black, Zscaler, Lookout, Netskope, Wandera and Zimperium) via APIs to deliver predictive and automated security for your mobile clients, or the digital workspace in general.

VMware Zero Trust Security Workspace ONE

You can also add VMware NSX to enhance security with micro-segmentation and secure east-west traffic for applications and desktops in the data center and the cloud.

Workspace ONE Intelligent Hub is the portal for users to access their different applications and provides the same user experience on any device. The look and feel in your browser is the same as on your Samsung phone or tablet. In DeX mode I opened the Intelligent Hub app (left) and accessed the portal in Chrome (right):

Workspace ONE Intelligent Hub

Smart Cards

Customers know smart cards as plastic cards with an embedded digital certificate that allows them to authenticate to their desktops and applications. Some larger enterprises use the same plastic card or badge to access buildings or to digitally sign documents.

To get access to the digital certificate on the card, you would traditionally insert the card into an internal or externally connected smart card reader and then enter your PIN. For mobile workers and a mobile-centric platform, this way of working doesn't offer the best user experience.

And besides that, physical smart cards are also considered old-fashioned, right?

For these reasons, VMware introduced support for derived credentials a couple of years ago in the Horizon Clients for Android, iOS and Windows. This eliminates the need for physical smart cards and smart card readers. All you need is the PIV-D Manager mobile app, which comes with Workspace ONE:

VMware PIV-D Manager is a mobile application that integrates with various Derived Credential solution providers enabling the use of Derived Credentials with Workspace ONE UEM. The available vendors currently supported with the PIV-D Manager app are DISA Purebred, Entrust IdentityGuard, Intercede MyID, XTec, and Workspace ONE UEM.

Remote Support

What if your mobile workers have problems with their phones or tablets? Workspace ONE Assist is the last piece to complete the puzzle and it enables you to access and troubleshoot devices remotely in real time from the Workspace ONE console.

Here are the current supported features separated by platform:

WS1 Assist Capabilities Platform

In April 2020 VMware announced the expansion of its remote support offerings with VMware RemoteHelp.

The difference between RemoteHelp and Workspace ONE Assist is that you don't need Workspace ONE UEM with RemoteHelp. You can think of RemoteHelp like Bomgar or TeamViewer, but with the addition that support engineers can launch remote support sessions on Android (and iOS) devices directly from their CRM platform. RemoteHelp is sold as a standalone product and has its own console and end-user mobile application.

You would use RemoteHelp where customers are using DeX with Horizon but are not managing the devices with Workspace ONE.

Samsung Galaxy S20+ as Thin Client

I have mentioned once or twice already that I wrote articles about the Raspberry Pi 4 (RPi) and how it performs as a thin client for VMware Horizon. From a price perspective you cannot compare an S20+ and an RPi, because a smartphone has many more features and is used for much more than connecting to a virtual desktop or web browsing. But let us have a look at the specs of both devices:

| Specification | Samsung S20+ | Raspberry Pi 4 B/4GB |
|---|---|---|
| Form factor | Smartphone | Small single-board computer |
| Dimensions & weight | 161.9 x 73.7 x 7.8 mm, 188 g | 88 x 58 x 19.5 mm, 46 g (board only) |
| Operating system(s) | Android 10 | Raspbian, Stratodesk NoTouch OS, ThinLinX, Ubuntu (MATE, Core, Server), RISC OS, Win 10 IoT Core |
| Processor (CPU) | 64-bit 8-core 2.70 GHz | 4-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz |
| Memory (RAM) | 8 GB | 4 GB |
| Network & connectivity | 5G, LTE, Wi-Fi, 1 GigE (with adapter) | Wi-Fi, 1 GigE |
| Display connectivity | USB-C, HDMI (with adapter) | 2x micro-HDMI |
| Power connectivity | USB-C, Wireless Charging, Wireless PowerShare | Power supply with USB-C connector |

The specifications of the Galaxy S20+ suggest that we can expect at least the same user experience as with a Raspberry Pi 4 Model B with 4GB RAM.

Horizon Test Environment

I'm going to use the same vGPU-enabled Windows 10 desktop from VMware TestDrive in the EMEA region. The Win10 desktop is equipped with four vCPUs from a Xeon Gold 6140 CPU, 8GB RAM and an Nvidia Tesla V100 GPU (V100-2Q profile).

VMware TestDrive

As you can see in the Remote Desktop Analyzer screenshot above, I'm connected with the Blast protocol and the active encoder is NVIDIA NvEnc H264. This tells us that GPU-based H.264 encoding on the virtual desktop and H.264 decoding on the Samsung smartphone are supported and working.

Performance Testing

As usual, I tested a YouTube HD trailer and some graphics-intensive applications. All my uploaded videos have been compressed to a more web-friendly format and size.

1) YouTube

Here is the link for the Avengers 4 Endgame Trailer.

2) Nvidia Faceworks

3) eDrawings Racecar Animation

4) Nvidia “A New Dawn”

5) FishGL

Can a Samsung Galaxy S20+ replace my laptop?

A Samsung smartphone (or tablet) can definitely replace a fat client like a PC or laptop. The videos above clearly show that accessing and working with a virtual desktop is no problem at all and that the user experience is very good.

Working in DeX mode was a little strange at the beginning, but I think I and people in general could get used to it over time.

Third-party apps which don't work in DeX mode need to be accessed from a virtual desktop delivered with VMware Horizon, or directly on the phone. You can quickly switch to screen mirroring and go back to DeX mode afterwards. That's just how it is.

When I joined VMware two years ago, I chose the Galaxy S8 as my corporate device and a Dell Precision laptop. For my role as a pre-sales solution architect who has to work a lot offline while travelling, a laptop is probably a better fit. Otherwise, at home or in the office, I could easily work with my S8 or S20+ only.

And as companies give you both a phone and a laptop, the price of an S20+ is very acceptable if it can replace at least one device like a PC or thin client.

Mobile computing is already transforming productivity across many industries. I believe that Samsung’s key features and VMware’s digital workspace offering make it possible to provide a secure mobile-first workplace.