Multi-Tenancy on VMware Cloud Foundation with vRealize Automation and Cloud Director


In my article VMware Cloud Foundation And The Cloud Management Platform Simply Explained I wrote about why customers need a VMware Cloud Foundation technology stack and what a VMware cloud management platform is.

One of the reasons I mentioned, and one of the essential characteristics of the cloud computing model, is resource pooling.

The National Institute of Standards and Technology (NIST) defines resource pooling as follows:

The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).

This time I would like to focus on multi-tenancy and how you can achieve it on top of VMware Cloud Foundation (VCF) with Cloud Director (formerly known as vCloud Director) and vRealize Automation, both of which can be part of a VMware cloud management platform (CMP).

Multi-Tenancy

There are many interpretations of multi-tenancy, and different people define it differently.

If we start from the top of an IT infrastructure, we have application or software multi-tenancy: a single instance of an application serving multiple tenants, in the past often even running on the same virtual or physical server. In this case the multi-tenancy feature is built into the software, which is commonly accessed by a group of users with specific permissions. Each tenant gets a dedicated or isolated share of this application instance.

Coming from the bottom of the data center, multi-tenancy describes the isolation of resources (compute, storage) and networks to deliver applications. The best examples here are (cloud) service providers.

Their goal is to create and provide virtual data centers (VDCs) or a virtual private cloud (VPC) on top of the same physical data center infrastructure – for different tenants, aka customers. Traditionally, the right VMware solution for this requirement and for service providers would be Cloud Director, but that is perhaps no longer entirely true since the release of vRealize Automation 8.x.

To make it easier for all of us, I’ll call Cloud Director and vCloud Director “vCD” from now on.

VMware Cloud Director (formerly vCloud Director)

Cloud Director is a product exclusively for cloud service providers via the VMware Cloud Provider Program (VCPP). Originally released in 2010, it enables service providers (SPs) to provision SDDC (Software-Defined Data Center) services as complete virtual data centers. vCD also keeps resources from different tenants isolated from each other.

Within vCD, the unit of tenancy is called an Organization VDC (OrgVDC). It is defined as a set of dedicated compute (CPU, RAM), storage and network resources. A tenant can be bound to a single OrgVDC or can be composed of multiple Organization VDCs. This is typically known as Infrastructure as a Service (IaaS).

A provider virtual data center (PVDC) is a grouping of compute, storage, and network resources from a single vCenter Server instance. Multiple organizations/tenants can share provider virtual data center resources.
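To make these constructs more tangible, here is a minimal, purely illustrative Python sketch of the tenancy hierarchy described above. The class and attribute names are my own and are not vCD API objects: a provider VDC pools resources coming from one vCenter Server instance, and Organization VDCs carve dedicated shares out of it for individual tenants.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class OrgVDC:
        """Unit of tenancy: a dedicated share of a provider VDC for one tenant."""
        name: str
        tenant: str
        cpu_ghz: float
        ram_gb: float
        storage_tb: float

    @dataclass
    class ProviderVDC:
        """Grouping of compute, storage and network resources from a single vCenter."""
        name: str
        vcenter: str
        cpu_ghz: float
        ram_gb: float
        storage_tb: float
        org_vdcs: List[OrgVDC] = field(default_factory=list)

        def allocate(self, org_vdc: OrgVDC) -> None:
            """Carve an OrgVDC out of this PVDC, respecting the remaining capacity."""
            used_cpu = sum(o.cpu_ghz for o in self.org_vdcs)
            used_ram = sum(o.ram_gb for o in self.org_vdcs)
            if used_cpu + org_vdc.cpu_ghz > self.cpu_ghz or used_ram + org_vdc.ram_gb > self.ram_gb:
                raise ValueError("provider VDC has insufficient capacity for this OrgVDC")
            self.org_vdcs.append(org_vdc)

    # Multiple tenants sharing the same provider VDC (backed by one vCenter)
    pvdc = ProviderVDC("pvdc-wld01", vcenter="vc-wld01", cpu_ghz=400, ram_gb=4096, storage_tb=100)
    pvdc.allocate(OrgVDC("orgvdc-tenant-a", tenant="Tenant A", cpu_ghz=100, ram_gb=1024, storage_tb=20))
    pvdc.allocate(OrgVDC("orgvdc-tenant-b", tenant="Tenant B", cpu_ghz=150, ram_gb=1536, storage_tb=30))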

Cloud Director Resource Abstraction

A lot of customers and VCPP partners have now started to offer their cloud services (IaaS, PaaS, SaaS etc.) based on VMware Cloud Foundation – for private and hybrid cloud scenarios, but also in the public cloud as a managed cloud service (VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine, Alibaba Cloud VMware Solution and more).

Important: I assume that you are familiar with VCF, its core components (ESXi, vSAN, NSX, SDDC Manager) and its architecture models (with standard being the preferred one).

Cloud Director components are currently not part of the VCF lifecycle automation, but it is a roadmap item!

Cloud Director Resource Hosting Models

vCD offers multiple hosting models:

  • In the shared hosting model, multiple tenant workloads all run together on the same
    resource groups without any performance assurance.
  • In the reserved hosting model, performance of workloads is assured by resource
    reservation.
  • In the physical hosting model, hardware is dedicated to a single tenant and performance
    is assured by the allocated hardware (see the sketch after this list).
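As a small, hypothetical illustration (this is not vCD logic, just a Python sketch of the idea), the three models essentially differ in how much of the requested capacity is guaranteed to the tenant and whether the underlying hardware is dedicated:

    from dataclasses import dataclass

    @dataclass
    class HostingPolicy:
        """Illustrative mapping of a vCD hosting model to resource guarantees."""
        model: str             # "shared", "reserved" or "physical"
        reservation_pct: int   # share of the requested resources guaranteed to the tenant
        dedicated_hosts: bool  # whether the hardware is dedicated to a single tenant

    POLICIES = {
        "shared":   HostingPolicy("shared",   reservation_pct=0,   dedicated_hosts=False),
        "reserved": HostingPolicy("reserved", reservation_pct=100, dedicated_hosts=False),
        "physical": HostingPolicy("physical", reservation_pct=100, dedicated_hosts=True),
    }

    def guaranteed_ram_gb(model: str, requested_ram_gb: float) -> float:
        """RAM guaranteed to a tenant workload under the chosen hosting model."""
        return requested_ram_gb * POLICIES[model].reservation_pct / 100

    print(guaranteed_ram_gb("shared", 64))    # 0.0  -> best effort, no performance assurance
    print(guaranteed_ram_gb("reserved", 64))  # 64.0 -> performance assured by reservation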

Tenant Using Shared Hosting on VCF Workload Domain

In this use case a tenant is using shared hosting backed by a VMware Cloud Foundation workload domain, which is mapped to a provider VDC.

vCD VCF Shared

Tenant Using Shared Hosting and Reserved Hosting on Multiple VCF Workload Domains

This use case describes a customer using shared and reserved hosting backed by multiple VCF workload domains. Here, each cluster has a single resource pool mapped to a single PVDC.

vCD VCF Shared Reserved

Tenant Using Physical Hosting and Central Point of Management (CPOM)

The last example shows a single customer using physical hosting. You will notice that there is also a vSphere with Kubernetes workload domain. VMware Cloud Foundation automates the installation of vSphere with Kubernetes (Tanzu), which makes it incredibly easy to deploy and manage.

You can see that there is an “SDDC” box on top of the Kubernetes Cluster vCenter, which is attached to the “SDDC Proxy” entity. vCD can act as an HTTP/S proxy server between tenants and the underlying vSphere environment in VMware Cloud Foundation. An SDDC proxy is an access point to a component from an SDDC, for example a vCenter Server instance, an ESXi host, or an NSX Manager instance.

vCD becomes the central point of management (CPOM) in this case, and the customer gets a completely dedicated SDDC with vCenter access.

vCD VCF Physical CPOM

Note: Since vCD 9.7 it is possible to securely present, for example, a vCenter Server instance to a tenant’s organization through the Cloud Director user interface. This is how you could build your own VMC-on-AWS-like cloud offering!

Cloud Director CPOM

All 3 Tenants Together

Finally, we put it all together. In the first use case we can see that different customers share resources from a single PVDC. We can also see that resources from a single vCenter can be split across different provider virtual data centers, and that we can mix multi-tenant workload domains with workload domains offering a dedicated private cloud.

vCD VCF All Together

Cloud Director Service and VMware Cloud on AWS

If you don’t want to extend or operate your own data center or cloud infrastructure anymore, but still want to provide a managed service to multiple customers, there are also options available to you that are backed by VMware Cloud Foundation.

Since October 2020, Cloud Director service (CDS) has been globally available, which delivers multi-tenancy to VMware Cloud on AWS for managed service providers (MSPs).

VMware sees not only new but also existing VCPP partners moving towards a mixed-asset portfolio, where their cloud management platform consists of a VCPP and an MSP (VMware SaaS offerings) contract. This allows them, for example, to run vCD on-premises for their current customers, while the onboarding of new tenants happens in the public cloud with CDS and VMC on AWS.

vCD CDS Mixed Mode

Enterprise Multi-Tenancy with vRealize Automation

With the release of vRealize Automation 8.1 (vRA), VMware introduced support for dedicated infrastructure multi-tenancy, created and managed through vRealize Suite Lifecycle Manager. This means vRealize Automation enables customers or IT providers to set up multiple tenants or organizations within each deployment.

Providers can set up multiple tenant organizations and allocate infrastructure. Each tenant manages its own projects (team structures), resources and deployments.

Enabling tenancy creates a new provider (default) organization. The provider admin creates new tenants, adds tenant admins, sets up directory synchronization, and adds users. Tenant admins can also control directory synchronization for their tenant and grant users access to services within it. Additionally, tenant admins configure policies, governance, cloud zones, profiles, and access to content and provisioned resources within their tenant. A single shared SDDC or separate SDDCs can be used across tenants, depending on the available resources.

vRealize Automation 8.1 Multi-Tenancy

With vRealize Automation 8.2, provider administrators gained the ability to share infrastructure by creating Virtual Private Zones (VPZs) and assigning them to tenant organizations.

Think of a VPZ as a container of infrastructure capacity and services that can be defined and allocated to a tenant. You can add unique or shared cloud accounts, with associated compute, flavors, images, storage, networking, and tags, to each VPZ. Each component offers the same configuration options you would see in a standalone configuration.
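As a purely illustrative sketch (the structure and names below are mine, not the actual vRA object model), you can picture a VPZ roughly like this: a bundle of a cloud account with its compute, flavors, images, networks, storage policies and tags, which the provider assigns to exactly one tenant organization.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class VirtualPrivateZone:
        """Illustrative model of a vRA 8.2 Virtual Private Zone (VPZ)."""
        name: str
        cloud_account: str                     # unique or shared cloud account, e.g. a vCenter
        compute: List[str]                     # clusters / resource pools backing the zone
        flavors: Dict[str, dict]               # t-shirt sizes mapped to CPU/RAM
        images: List[str]                      # machine images offered to the tenant
        networks: List[str]
        storage_policies: List[str]
        tags: List[str] = field(default_factory=list)
        assigned_tenant: Optional[str] = None  # a VPZ is allocated to one tenant organization

    # The provider defines a VPZ backed by a VCF workload domain and assigns it to a tenant
    vpz = VirtualPrivateZone(
        name="vpz-tenant-a",
        cloud_account="vc-wld01",
        compute=["wld01-cluster01"],
        flavors={"small": {"cpu": 2, "ram_gb": 8}, "medium": {"cpu": 4, "ram_gb": 16}},
        images=["ubuntu-20.04", "windows-2019"],
        networks=["tenant-a-segment"],
        storage_policies=["vsan-default"],
        tags=["tenant:a"],
    )
    vpz.assigned_tenant = "tenant-a"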

vRealize Automation 8.2 Multi-Tenancy

vRealize Automation and VMware Cloud Foundation

With the fairly new multi-tenancy and VPZ capabilities, a new consumption model can be built on top of VCF. You (the provider) would map the cloud zones (compute resources on vSphere or, for example, AWS) to a VCF workload domain.

The provider sets these cloud zones up for their customers and provides dedicated or shared infrastructure backed by Cloud Foundation workload domains.

This combination would allow you to build an enterprise VPC construct (like the AWS VPC, for example), a logically isolated section of your provider cloud.

vRealize Automation and VMware Cloud Foundation

SDDC Manager Integration and VMware Cloud Foundation (VCF) Cloud Account

Since the vRA 8.2 release, customers are also able to configure an SDDC Manager integration and onboard workload domains as VMware Cloud Foundation cloud accounts in the VMware Cloud Assembly service.

VMware Cloud Director or vRealize Automation?

You may wonder whether vRealize Automation could replace existing vCD installations, or whether both cloud management platforms can do the same things.

I can assure you that you can provide a self-service provisioning experience with both solutions and that you can provide any technology or cloud service “as a service”. Both have in common that they can be backed by Cloud Foundation, have some form of integration with it (vRA), and can be built following a VMware Validated Design (VVD).

vCD is known to be a service provider solution, whereas vRA is more common in enterprise environments. VMware has VCPP partners that use Cloud Director for their external customers and vRealize Automation for their internal IT and customers.

If you are looking for a “cloud broker” and Infrastructure as Code (IaC), because you also want to provision workloads on AWS, Azure or GCP, then vRealize Automation is the better solution, since vCD doesn’t offer this deep integration and these deployment options yet.

Depending on your multi-tenancy needs – if, for example, you only chose vCD in the past because of the OrgVDC and resource pooling features – vRealize Automation might be enough and could replace vCD in this case.

It is also very important to understand what your current customer onboarding process and operational model look like:

  • How do you want to create a new tenant? 
  • How do you want to onboard/migrate existing customer workloads to your provider infrastructure?
  • Do you need versioning of deployments or templates?
  • Do customers require access to the virtual infrastructure (e.g. vCenter or OrgVDC) or do you just provide SaaS or PaaS?
  • Do customers need a VPN or hybrid cloud extension into your provider cloud?
  • How would you onboard non-vSphere customers (Hyper-V, KVM) to your vSphere-based cloud?
  • Does your customer rely on other clouds like AWS or Azure?
  • How do you do billing for your vSphere-based cloud or multi-cloud environment?
  • What is your Kubernetes/container strategy?
  • And 100 other things 😉

There are so many factors and criteria to talk about that would influence such a decision. There is no right or wrong answer to the question of whether it should be VMware Cloud Director or vRealize Automation. Use what makes sense.

Which could also be a combination of both.

VMware Carbon Black Cloud Workload – Agentless Protection for vSphere Workloads


At VMworld 2020 VMware announced Carbon Black Cloud Workload (CBC Workload) as part of their intrinsic security approach.

For me, this was the biggest and most important announcement from this year’s VMworld. It is a new offering that is relevant for every vSphere customer out there – even the small and medium enterprises that may still rely only on ESXi and vCenter for their environment.

CBC Workload introduces protection for workloads in private and public clouds. For vSphere, there is no additional agent installation needed, because the Carbon Black sensor (agent) is built into vSphere. That’s why you may hear that this solution is “agentless”.

Carbon Black Cloud Workload Bundles

This cloud-native (SaaS) solution provides foundational workload hardening and vulnerability management combined with prevention, detection and response capabilities to protect workloads running in virtualized private cloud and hybrid cloud environments.

Carbon Black Cloud Workload Protection Bundles

Note: Customers that are using vSphere and VMware Horizon should take a look at Workspace Security VDI, which was also announced at VMworld 2020 – a single-vendor solution combining VMware Horizon and Carbon Black.

If you would like to know more about the interoperability of Carbon Black and Horizon, have a look at KB79180.

Carbon Black Cloud Workload Overview

Customers and partners now have the ability to provide a workload security solution for Windows and Linux virtual machines. The complete system requirements can be found here.

“You can enable Carbon Black in your data center with an easy one-click deployment. To minimize your deployment efforts, a lightweight Carbon Black launcher is made available with VMware Tools. Carbon Black launcher must be available on the Windows and Linux VMs.”

Carbon Black enable via vCenter

Carbon Black Cloud Workload consists of a few key components that interact with each other:

CBC Workload Components

You must first deploy an on-premises OVF/OVA template for the Carbon Black Cloud Workload appliance (4 vCPU, 4GB RAM, 41GB storage) that connects the Carbon Black Cloud to the vCenter Server through a registration process. After the registration is complete, the Carbon Black Cloud Workload appliance deploys the Carbon Black Cloud Workload plug-in and collects the inventory from the vCenter Server.

The plug-in provides visibility into processes and network connections running on a virtual machine.

As a vCenter Server administrator, you want to have visibility of known vulnerabilities in your environment to understand your security posture and schedule maintenance windows for patching and remediation. With the help of vulnerability assessment, you can proactively minimize the risk in your environment. You can now monitor known vulnerabilities from the Carbon Black Cloud Workload plug-in:

vSphere Client Carbon Black

The infosec team in your company would do the vulnerability assessment from the CBC console:

CBC Vulnerabilities

Carbon Black Cloud Workload protection provides vSphere administrators with a full inventory, appliance health, and vulnerability reporting from one console: the already well-known vSphere Client.

Carbon Black vSphere Client Summary

Cybersecurity Requirements

According to the NIST Cybersecurity Framework, the security lifecycle consists of five functions:

  1. Identify – Cloud & Service Context, Dynamic Asset Visibility, Compliance & Standards, Cloud Risk Management
  2. Protect – Services / API Defined, Cloud Access Control, Network Integrity, Data Security, Change Control & Guardrails
  3. Detect – Cloud-Speed, Inter-connected Services, Events & Anomalies, Continuous Monitoring
  4. Respond – DevOps Collaboration, Real-time Notifications, Automated Actions, Response as Code
  5. Recover – Templates / Code Review, Shift Left / Pipeline, Exceptions and Verification

Workload Security Lifecycle

CBC Workload focuses on identifying the risks with workload visibility and vulnerability management, which are part of the “Workload Essentials” edition.

If you would like to prevent malicious activities to protect your workloads and replace your existing legacy anti-virus (AV) solution, then “Workload Advanced” would be the right edition for you as it includes Next-Gen AV (NGAV).

Behavioral EDR (Endpoint Detection & Response), also part of the “Advanced” bundle, belongs to “detect & respond” of the security lifecycle.

Workload Security for Kubernetes

Carbon Black Guardrails and Runtime Security

You just learned that Carbon Black Cloud gives workload protection for virtualized Windows or Linux virtual machines running on vSphere. What about container security for Kubernetes?

In May 2020 VMware officially closed its acquisition of Octarine, a SaaS security platform for protecting containers and Kubernetes. VMware bought Octarine to enable Carbon Black to secure applications running in Kubernetes.

Traditional security approaches are no longer a good fit for Kubernetes: Kubernetes is so powerful, and hence risky, and networking is very complex and a totally different game because static IPs and ports are no longer relevant. You also need a new security approach that is compatible with IT’s organizational shift from a traditional approach to DevSecOps.

VMware’s solution covers the whole lifecycle of the application, from building the container to running the app in production. It is a two-part solution, the first part being “Guardrails”, which is able to scan container images for vulnerabilities and Kubernetes manifests for misconfigurations.

Carbon Black Cloud Guardrails Module

The second part is runtime protection. When the workloads are deployed in production, the Carbon Black security agent is able to detect malicious activities.

Carbon Black Cloud Runtime Module 

Let’s have a look at the different features the Kubernetes “Guardrails” provide for each phase of the application:

  • Build: Image vulnerability scanning, Kubernetes configuration hardening
  • Deploy: Policy governance, compliance reporting, visibility and hardening
  • Operate: Threat detection and response, anomaly detection and least privilege runtime, event monitoring

These are the key capabilities and benefits that were mentioned at VMworld 2020 for “Guardrails”:

Carbon Black Kubernetes Guardrails Features

For “runtime” security the following key capabilities and benefits were mentioned:

  • Visibility of network traffic
  • Coverage of workloads and hosts activity
  • Network policy management
  • Threat detection
  • Anomaly detection
  • Egress security
  • SIEM integration

Customers will be able to have visibility of all the workloads running in their local or cloud-native production clusters and of how they interact with each other. They will also see which services are exposed to ingress traffic, which services send traffic out of the cluster, and where this egress traffic is going. It will also be visible which communication is encrypted and what type of encryption is used.

Note: The Carbon Black Cloud module for hardening and securing Kubernetes workloads is expected to be generally available by the end of 2020.

The launch of Carbon Black Cloud Workload was the first important step in making the intrinsic security vision more of a reality (after VMware acquired Carbon Black). Moving on with Kubernetes and bringing new container security capabilities is the next big step toward VMware becoming a major security provider.

Stay tuned for more security announcements!


VMware’s Tanzu Kubernetes Grid


Since the announcement of Tanzu and Project Pacific at VMworld US 2019, a lot has happened, and people want to know more about what VMware is doing with Kubernetes. This article is a summary of the past announcements in the cloud-native space. As you may already know at this point, VMware has made several very important acquisitions related to this open-source project.

VMware Kubernetes Acquisitions

It all started with the acquisition of Heptio, a leader in the open Kubernetes ecosystem. With two of the creators of Kubernetes (K8s), Joe Beda and Craig McLuckie, on board, Heptio was meant to help drive cloud-native technologies within VMware forward and to help customers and the open-source community accelerate the enterprise adoption of K8s on-premises and in multi-cloud environments.

The second important milestone came in May 2019, when the intent to acquire Bitnami, a leader in application packaging solutions for Kubernetes environments, was made public. At VMworld US 2019, VMware announced Project Galleon to bring Bitnami capabilities to the enterprise and offer customized application stacks to their developers.

One week before VMworld US 2019, the third milestone was communicated: the agreement to acquire Pivotal. Pivotal’s solutions have helped customers learn how to adopt modern techniques to build and run software, and Pivotal is the provider of the most popular developer framework for Java: Spring and Spring Boot.

On August 26, 2019, VMware gave those strategic acquisitions the name VMware Tanzu. Tanzu should help customers BUILD modern applications, RUN Kubernetes consistently in any cloud, and MANAGE all Kubernetes environments from a single point of control (a single console).

VMware Tanzu

Tanzu Mission Control (Tanzu MC) is the cornerstone of the Tanzu portfolio and should help to relieve the problems we have, or are going to have, with a lot of Kubernetes clusters (fragmentation) within organizations. Multiple teams in the same company are creating and deploying applications on their own K8s clusters – on-premises or in any cloud (e.g. AWS, Azure or GCP). There are many valid reasons why different teams choose different clouds for different applications, but this causes fragmentation and management overhead, because you are faced with different management consoles and siloed infrastructures. And what about visibility into app/cluster health, cost, security requirements, IAM, networking policies and so on? Tanzu MC lets customers manage all their K8s clusters across vSphere, VMware PKS, public clouds, managed services or even DIY – from a single console.

Tanzu Mission Control

It lets you provision K8s clusters in any environment and configure policies that establish guardrails. Those guardrails are configured by IT operations and apply policies for access, security, backup or quotas.

Tanzu Mission Control

As you can see, Mission Control has a lot of capabilities. If you look at the last two images, you can see that you can not only create clusters directly from Tanzu MC, but also attach existing K8s clusters. This is done by installing an agent in the remote K8s cluster, which then provides a secure connection back to Tanzu MC.

So far we have focused on the BUILD and MANAGE layers. Let’s take a look at the RUN layer, which should help us run Kubernetes consistently across clouds. Without consistency across cloud environments (this includes on-prem), enterprises will struggle to manage their hundreds or even thousands of modern apps. It is just getting too complex.

VMware’s goal in general is to abstract complexity and make your life easier, and for this purpose VMware has announced the so-called Tanzu Kubernetes Grid (TKG) to provide a common Kubernetes distribution across all the different environments.

Tanzu Kubernetes Grid

In my understanding, TKG is VMware’s Kubernetes distribution; it will include Project Pacific as soon as it is GA, and it is based on three principles:

  • Open Source Kubernetes – tested and secured
  • Cluster Lifecycle management – fully integrated
  • 24×7 support

This means that TKG is based on open-source technologies, packaged for enterprises and supported by VMware’s Global Support Services (GSS). Based on these facts, you could say that today your Kubernetes journey with VMware starts with VMware PKS. PKS is the way VMware delivers the principles of Tanzu today – across vSphere, VCF, VMC on AWS, public clouds and the edge.

Project Pacific

Project Pacific, which was also announced at VMworld US 2019, is a complement to VMware PKS and will be available in a future release. If you are not familiar with Pacific yet, read the introduction to Project Pacific. Otherwise, it is sufficient to say that Project Pacific is the re-architecture of vSphere to natively integrate Kubernetes. There is no nesting of any kind, and it is not Kubernetes in vSphere; it is more like vSphere on top of Kubernetes, since the idea of this project is to use Kubernetes to steer vSphere.

Project Pacific

Pacific will embed Kubernetes into the control plane of vSphere and converge VMs and containers on the same underlying platform. This will give IT operators the ability to see and manage Kubernetes from the vSphere Client, and provide developers with the interfaces and tools they are already familiar with.
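For developers, the practical upshot is that the Supervisor Cluster (and any Guest Cluster) exposes a standard Kubernetes API endpoint, so the usual tooling simply works against it. As a minimal sketch – assuming you already have a kubeconfig with a context for such a cluster (the context and namespace names below are made up) – the official Kubernetes Python client could be used like this:

    # Minimal sketch: talking to a Pacific Supervisor or Guest Cluster like any other
    # Kubernetes cluster, using the official Kubernetes Python client.
    from kubernetes import client, config

    # Load a kubeconfig context pointing at the cluster (context name is hypothetical)
    config.load_kube_config(context="supervisor-cluster")

    v1 = client.CoreV1Api()

    # List the namespaces visible to this user (in Pacific these map to vSphere Namespaces)
    for ns in v1.list_namespace().items:
        print("namespace:", ns.metadata.name)

    # List the pods in one namespace (vSphere Pods or regular pods in a Guest Cluster)
    for pod in v1.list_namespaced_pod(namespace="dev-team-01").items:
        print(pod.metadata.name, pod.status.phase)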

Project Pacific Console

If you are interested in the Project Pacific Beta Program, you’ll find all information here.

I have access to download the vSphere build that includes Project Pacific, but I haven’t had time yet and my home lab is also not ready. We hear customers asking about the requirements for Pacific. If you watch the different recordings from the VMworld sessions about Project Pacific and the Supervisor Cluster, you can predict that only NSX-T is a prerequisite to deploy and enable Project Pacific. This slide shows why NSX-T is part of Pacific:

Project Pacific Supervisor Cluster

From this slide (from session HBI1452BE) we learn that a load balancer built on NSX Edge sits in front of the three K8s Control Plane VMs, and that a Distributed Load Balancer spans all hosts to enable pod-to-pod or east-west communication.

None of the speakers ever mentioned vSAN as a requirement, and I also doubt that vSAN is going to be a prerequisite for Pacific.

You may now ask yourself which Kubernetes version will be shipped with ESXi and how you upgrade your K8s distribution. And what if this setup with Pacific is too “static” for you? Well, for the Supervisor Clusters VMware releases patches with vSphere, and you apply them with known tools like VUM. For your self-built K8s clusters, or if you need to deploy Guest Clusters, the upgrades are easy as well: you just download the new distribution and specify the new version/distribution in the (Guest Cluster Manager) YAML file.
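As a small illustration of that last point (the manifest below only approximates what a Guest Cluster specification might look like; the exact resource kind and field names were not public at the time of writing and may differ in the shipping product), an upgrade is essentially a declarative change of the requested distribution version, which the Guest Cluster Manager then reconciles:

    # Illustration only: bump the Kubernetes distribution version in a (hypothetical)
    # Guest Cluster manifest and print the updated YAML for re-applying.
    import yaml  # PyYAML

    manifest_text = """
    apiVersion: run.tanzu.vmware.com/v1alpha1   # assumed API group/version
    kind: TanzuKubernetesCluster                # assumed resource kind
    metadata:
      name: dev-cluster
    spec:
      distribution:
        version: v1.16.8
      topology:
        controlPlane:
          count: 3
        workers:
          count: 5
    """

    manifest = yaml.safe_load(manifest_text)

    # The "upgrade" is just a new requested version in the spec
    manifest["spec"]["distribution"]["version"] = "v1.17.4"

    print(yaml.safe_dump(manifest, sort_keys=False))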

Conclusion

Rumors say that Pacific will be shipped with the upcoming vSphere 7.0 release, which should even include NSX-T 3.0. For now we don’t know when Pacific will ship with vSphere and whether it really will be included with the next major version. I would be impressed if that were the case, because you need a stable hypervisor version, then a new NSX-T version also comes into play, and in the end Pacific relies on these components being stable. Our experience has shown that the first release is normally never perfect and stable and that we need to wait for the next cycle or quarter. With that in mind, I would say that Pacific could be GA in Q3 or Q4 2020. And besides that, the beta program for Project Pacific has just started!

Nevertheless I think that Pacific and the whole Kubernetes Grid from VMware will help customers to run their (modern) apps on any Kubernetes infrastructure. We just need to be aware that there are some limitations when K8s is embedded in the hypervisor, but for these use cases Guest Clusters could be deployed anyway.

In my opinion, Tanzu and Pacific alone don’t make “the” big difference. It gets more interesting when you talk about multi-cloud management with vRA 8.0 (or vRA Cloud), use Tanzu MC to manage all your K8s clusters, do networking with NSX-T (and NSX Cloud), create a container host from a container image (via vRA’s Service Broker) for AI- and ML-based workloads, and provide the GPU over the network with Bitfusion.

Bitfusion Architecture

Looking forward to such conversations! 😀