VMware Cloud Foundation – A Technical Overview

While I was studying for the VMware Cloud Foundation Specialist certification, I realized that there is no one-pager available that gives you a short technical explanation of VMware Cloud Foundation.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a hybrid cloud platform that provides a full-stack hyperconverged infrastructure (HCI) made for modernizing data centers and deploying modern container-based applications. VCF integrates components like vSphere (compute), vSAN (storage), NSX (networking) and parts of the vRealize Suite into an HCI solution with infrastructure automation and software lifecycle management. VCF follows a standardized, automated and validated approach that simplifies the management of all the required software-defined infrastructure resources.

This standardized and automated software stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge or in the public cloud.

Cloud Foundation integrates Tanzu Standard to provide a unified platform on which virtual machines (VMs), containers and Kubernetes co-exist.

Note: The Tanzu Standard edition is included in the VCF Standard, Advanced and Enterprise editions.

What software is being delivered in Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. Let me take the VCF 4.3 release as an example to list the components and software versions:

  • VMware SDDC Manager 4.3
  • vSphere 7.0 Update 2a with Tanzu
  • vCenter Server 7.0 P03
  • vSAN 7.0 Update 2
  • NSX-T 3.1.3
  • VMware Workspace ONE Access 3.3.5
  • vRealize Log Insight 8.4
  • vRealize Operations 8.4
  • vRealize Automation 8.4.1
  • (vRealize Network Insight)

Note: VCF 4.3 deploys vRealize Lifecycle Manager (VRSLCM) 8.4.1, which then deploys and provides ongoing lifecycle management for other vRealize components. Currently, vRealize Network Insight needs to be imported manually into VRSLCM and then deployed.

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Editions

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Architecture

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC and vVols for the principal storage.

VMware Cloud Foundation Storage

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means that to start with a standard architecture, you need the budget and hardware for at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

Prerequisites to deploy remote clusters can be found here.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain for creating workload domains, provisioning additional virtual infrastructure and lifecycle management of all the software-defined data center (SDDC) management components.

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning and decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration
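Most of these operations are also exposed through the SDDC Manager public REST API, which is handy for automation. Below is a minimal, hedged sketch in Python (the FQDN and credentials are placeholders, and the endpoint paths and response fields reflect my recollection of the VCF 4.x API reference, so verify them against the documentation for your release) that authenticates and lists the existing workload domains:

```python
import requests

SDDC_MANAGER = "https://sddc-manager.rainpole.local"   # hypothetical FQDN
USERNAME = "administrator@vsphere.local"                # placeholder credentials
PASSWORD = "VMware1!"                                   # use a secrets store in real environments

# Request an API access token (token-based login endpoint in recent VCF releases)
token_resp = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": USERNAME, "password": PASSWORD},
    verify=False,  # lab only - validate the certificate properly in production
)
token_resp.raise_for_status()
access_token = token_resp.json()["accessToken"]
headers = {"Authorization": f"Bearer {access_token}"}

# List all workload domains known to this SDDC Manager instance
domains = requests.get(f"{SDDC_MANAGER}/v1/domains", headers=headers, verify=False).json()
for domain in domains.get("elements", []):
    print(domain.get("name"), domain.get("type"), domain.get("status"))
```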

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T and possibly some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the vRealize Suite will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX Migration

Where can I get more information about VMware Tanzu and the Tanzu Standard edition?

Please have a look at these articles:

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware has different workload and network security offerings to complement VCF:

Is there also a VCF subscription license?

Yes, you can purchase VCF-S (VCF Subscription) licenses as part of the VMware Cloud Universal program.

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Where can I get more information?

Please consult the VMware Cloud Foundation 4.3 FAQ for more information about VMware Cloud Foundation.

A Universal License and Technology to Build a Flexible Multi-Cloud

In November 2020 I wrote an article called “VMware Cloud Foundation And The Cloud Management Platform Simply Explained“. That piece focused on the “why” and “when” VMware Cloud Foundation (VCF) makes sense for your organization. It also covers business values and hints that VCF is about more than just technology. Cloud Foundation is one of the most important drivers and THE enabler of VMware’s multi-cloud strategy.

If you are not familiar enough with VMware’s multi-cloud strategy, then please have a look at my article “VMware Multi-Cloud and Hyperscale Computing” first.

To summarize the two articles mentioned above: VMware Cloud Foundation is a software-defined data center (SDDC) stack that can run in any cloud. “Any cloud” means that VCF can also be consumed as a service through other cloud provider partners like:

Additionally, Cloud Foundation and the whole SDDC can be consumed as a managed offering called DCaaS or LCaaS (Data Center / Local Cloud as a service).

Let’s say a customer is convinced that a “VCF everywhere” approach is right for them and starts building up private and public clouds based on VMware’s technologies. This means that VMware Cloud Foundation now runs in their private and public cloud.

Note: This doesn’t mean that the customer cannot use native public cloud workloads and services anymore. They can simply co-exist.

The customer is at a point now where they have achieved a consistent infrastructure. What’s up next? The next logical step is to use the same automation, management and security consoles to achieve consistent operations.

A traditional VMware customer goes for the vRealize Suite now, because they would need vRealize Automation (vRA) for automation and vRealize Operations (vROps) to monitor the infrastructure.

The next topic on this customer’s journey would be application modernization, which includes topics like containerization and Kubernetes. VMware’s answer to this is the Tanzu portfolio. For the sake of this example, let’s go with “Tanzu Standard”, which is one of four editions available in the Tanzu portfolio (aka VMware Tanzu).

VMware Cloud Foundation

Let’s have a look at the customer’s bill of materials so far:

  • VMware Cloud Foundation on-premises (vSphere, vSAN, NSX)
  • VMware Cloud on AWS
  • VMware Cloud on Dell EMC (locally managed VCF service for special edge use cases)
  • vRealize Automation
  • vRealize Operations
  • Tanzu Standard (includes Tanzu Kubernetes Grid and Tanzu Mission Control)

Looking at the list above, we see that their infrastructure is equipped with three different VMware Cloud Foundation flavours (on-premises, hyperscaler-managed, locally managed), complemented by products from the vRealize Suite and the Tanzu portfolio.

This infrastructure with its different technologies, components and licenses has been built up over the past few years. But organizations are nowadays asking for more flexibility than ever. By flexibility I mean license portability and a subscription model.

VMware Cloud Universal

On 31st March 2021 VMware introduced VMware Cloud Universal (VMCU). VMCU is the answer to make the customer’s life easier, because it gives you the choice and flexibility to decide in which clouds you want to run your infrastructure and to consume VMware Cloud offerings as needed. It even allows you to convert existing on-premises VCF licenses to a VCF subscription license.

The VMCU program includes the following technologies and licenses:

  • VMware Cloud Foundation Subscription
  • VMware Cloud on AWS
  • Google Cloud VMware Engine
  • Azure VMware Solution
  • VMware Cloud on Dell EMC
  • vRealize Cloud Universal Enterprise Plus
  • Tanzu Standard Edition
  • VMware Success 360 (S360 is required with VMCU)

VMware Cloud Console

As Kit Kolbert, CTO VMware, said, “the idea is that VMware Cloud is everywhere that you want your applications to be”.

The VMware Cloud Console gives you a view into all those different locations. You can quickly see what’s going on with a specific site or cloud landing zone, what its overall utilization looks like, and whether issues occur.

The Cloud Console has a seamless integration with vROps, which also helps you regarding capacity forecasting and (future) requirements (e.g., do I have enough capacity to meet my future demand?).

VMware Cloud Console

In short, it’s the central multi-cloud console to manage your global VMware Cloud environment.

vRealize Cloud Universal

What is part of vRealize Cloud Universal (vRCU) Enterprise Plus? vRCU is a SaaS management suite that combines on-premises and SaaS capabilities for automation, operations, log analytics and network visibility into a single offering. In other words, you get to decide where you want to deploy your management and operations tools. vRealize Cloud Universal comes in four editions and in VMCU you have the vRCU Enterprise Plus edition included with the following components:

vRealize Cloud Universal Editions

Note: While vRCU Standard, Advanced and Enterprise are sold as standalone editions today, the Enterprise Plus edition is only sold with VMCU (and as an add-on to VMC on AWS).

    vRealize AI Cloud

Have you ever heard of Project Magna? Announced at VMworld 2019, it provides adaptive optimization and a self-tuning engine for your data center. It was Pat Gelsinger who envisioned a so-called “self-driving data center”. Intelligence-driven data center might have been a better term, since Project Magna leverages artificial intelligence by using reinforcement learning: it combs through your data and runs thousands of scenarios on the Magna SaaS analytics engine, searching by trial and error for the configuration that yields the best reward.

The first instantiation began with vSAN (today also known as vRAI Cloud vSAN Optimizer), where Magna collects data, learns from it, and makes decisions that automatically self-tune your infrastructure to drive greater performance and efficiency.
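To make the reinforcement learning idea a little more tangible, here is a deliberately simplified, purely conceptual Python sketch (this is not VMware code and not how vRealize AI Cloud is implemented): an epsilon-greedy loop that tries different hypothetical tuning configurations, observes a noisy reward such as measured throughput, and gradually prefers the best-performing one by trial and error.

```python
import random

# Hypothetical tuning knobs the engine could try
configs = ["cache_small", "cache_medium", "cache_large"]
reward_sum = {c: 0.0 for c in configs}
trials = {c: 0 for c in configs}

def observe_reward(config: str) -> float:
    """Stand-in for a real measurement, e.g. observed throughput after applying the config."""
    baseline = {"cache_small": 0.6, "cache_medium": 0.8, "cache_large": 0.7}[config]
    return baseline + random.uniform(-0.05, 0.05)  # noisy feedback

epsilon = 0.1  # fraction of the time we explore instead of exploiting the best-known config
for _ in range(1000):
    if random.random() < epsilon or not any(trials.values()):
        choice = random.choice(configs)  # explore
    else:
        choice = max(configs, key=lambda c: reward_sum[c] / max(trials[c], 1))  # exploit
    reward = observe_reward(choice)
    reward_sum[choice] += reward
    trials[choice] += 1

best = max(configs, key=lambda c: reward_sum[c] / max(trials[c], 1))
print("Best configuration so far:", best)
```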

    Today, this SaaS service is called vRealize AI Cloud.

vRealize AI Cloud vSAN

vRealize AI (vRAI) learns about your operating environments and application demands and adapts to changing dynamics, ensuring optimization per stated KPI. vRAI Cloud is only available with vRealize Operations Cloud via the vRealize Cloud Universal subscription.

    VMware Skyline

VMware Skyline is a support service that automatically collects, aggregates and analyzes product usage data, proactively identifies potential problems and helps VMware support engineers improve resolution times. Skyline is included in vRealize Cloud Universal because it just makes sense: a lot of customers have asked for a unified self-service experience between Skyline and vRealize Operations Cloud, and many customers use Skyline and vROps side by side today.

Users can now be proactive and perform troubleshooting in a single SaaS workflow. This means customers save time by automating Skyline proactive remediations in vROps Cloud. Skyline also supports vSphere, vSAN, NSX, vRA, VCF and VMware Horizon.

    VMware Cloud Universal Use Cases

As already mentioned, VMCU makes a lot of sense if you are building a hybrid or multi-cloud architecture with a consistent (VMware) infrastructure. VMCU, vRCU and the Tanzu portfolio help you create a unified control plane for your cloud infrastructure.

Other use cases could be cloud migration or cloud bursting scenarios. If we switch back to the fictitious customer from before, we could use VMCU to convert existing VCF licenses to VCF-S (subscription) licenses, which in the end allows them to build a VMware-based cloud on top of AWS, for example (other public cloud providers are coming very soon!).

Another good example is achieving the same service and operating model on-premises as in the public cloud: a fully managed, consumable infrastructure. That means moving from a self-built and self-managed VCF infrastructure to something like VMC on Dell EMC.

    How can I get VMCU?

    There is no monthly subscription model and VMware only supports one-year or three-year terms. Customers will need to sign an Enterprise License Agreement (ELA) and purchase VMCU SPP credits.

Note: SPP credits purchased outside of the program cannot be used within the VMCU program!

After purchasing the VMCU SPP credits and completing the VMware Cloud onboarding and organization setup, you can select the infrastructure offerings on which to consume your SPP credits. This can be done via the VMware Cloud Console.

    Summary

I hope this article was useful to get a better understanding of VMware Cloud Universal. It might seem a little complex at first, but it isn’t: VMCU makes your life easier and helps you build and license a globally distributed cloud infrastructure based on VMware technology.

VCF Subscription

    VMworld 2021 – My Content Catalog and Session Recommendation

VMworld 2021 is going to happen from October 6-7, 2021 (EMEA). This year you can expect many sessions and presentations about the options you have when combining different products, which help you reduce complexity, provide more automation and therefore create less overhead.

    Let me share my 5 personal favorite picks and also 5 recommended sessions based on the conversations I had with multiple customers this year.

    My 5 Personal Picks

    10 Things You Need to Know About Project Monterey [MCL1833]

    Project Monterey was announced in the VMworld 2020 keynote. There has been tremendous work done since then. Hear Niels Hagoort and Sudhansu Jain talking about SmartNICs and how they will redefine the data center with decoupled control and data planes – for ESXi hosts and bare-metal systems. They are going to cover and demo the overall architecture and use cases!

    Upskill Your Workforce with Augmented and Virtual Reality and VMware [VI1596]

Learn from Matt Coppinger how augmented reality (AR) and virtual reality (VR) are transforming employee productivity, and how these solutions can be deployed and managed using VMware technologies. Matt is going to cover the top enterprise use cases for AR/VR as well as the challenges you might face deploying these emerging technologies. Are you interested in how to architect and configure VMware technologies to deploy and manage the latest AR/VR technology, applications and content? If yes, then this session is also for you.

    Addressing Malware and Advanced Threats in the Network [SEC2027] (Tech+ Pass Only)

I am very interested in learning more about cybersecurity. With Chad Skipper, VMware has an expert who can give insights into how the Network Detection and Response (NDR) capabilities of NSX Advanced Threat Prevention provide visibility, detection and prevention of advanced threats.

    60 Minutes of Non-Uniform Memory Access (NUMA) 3rd Edition [MCL1853]

Learn more about NUMA from Frank Denneman. You are going to learn more about the underlying configuration of a virtual machine and discover the connection between the General-Purpose Graphics Processing Unit (GPGPU) and the NUMA node. You will also understand how your knowledge of NUMA concepts in your cluster can help developers by aligning the Kubernetes nodes to the physical infrastructure with the help of the VM Service.

    Mount a Robust Defense in Depth Strategy Against Ransomware [SEC1287]

    Are you interested to learn more about how to protect, detect, respond to and recover from cybersecurity attacks across all technology stacks, regardless of their purpose or location? Learn more from Amanda Blevins about the VMware solutions for end users, private clouds, public clouds and modern applications.

    5 Recommended Sessions based on Customer Conversations

    Cryptographic Agility: Preparing for Quantum Safety and Future Transition [VI1505]

    A lot of work is needed to better understand cryptographic agility and how we can address and manage the expected challenges that come with quantum computing. Hear VMware’s engineers from the Advanced Technology Group talking about the requirements of crypto agility and VMware’s recent research work on post-quantum cryptography in the VMware Unified Access Gateway (UAG) project.

    Edge Computing in the VMware Office of the CTO: Innovations on the Horizon [VI2484]

    Let Chris Wolf give you some insight into VMware’s strategic direction in support of edge computing. He is going to talk about solutions that will drive down costs while accelerating the velocity and agility in which new apps and services can be delivered to the edge.

    Delivering a Continuous Stream of More Secure Containers on Kubernetes [APP2574]

In this session you can see how to use two capabilities in VMware Tanzu Advanced, Tanzu Build Service and Tanzu Application Catalog, to feed a continuous stream of patched and compliant containers into your continuous delivery (CD) system. A must-attend session delivered by David Zendzian, the VMware Tanzu Global Field CISO.

    A Modern Firewall For any Cloud and any Workload [SEC2688]

The VMware NSX firewall reimagines East-West security by using a distributed, software-based approach to attach security policies to every workload in any cloud. Chris Kruegel gives you insights into how to stop lateral movement with advanced threat prevention (ATP) capabilities via IDS/IPS, sandboxing, NTA and NDR.

    A Practical Approach for End-to-End Zero Trust [SEC2733]

Hear the VMware CTOs Shawn Bass, Pere Monclus and Scott Lundgren talk about a zero trust approach. Shawn and the others will discuss specific capabilities that will enable customers to achieve a zero trust architecture that is aligned to the NIST guidance and covers secure access for users as well as secure access to workloads.

Enjoy VMworld 2021! 🙂

    VMware is Becoming a Leading Cybersecurity Vendor

For most organizations it is still new that they can talk about cybersecurity with VMware. VMware’s intrinsic security vision is something we saw for the first time at VMworld 2019, and since then it has become more of a strategy than a vision.

VMware is not new to enterprise security, and it didn’t start with Workspace ONE or with NSX. Security has been part of their DNA since it first became possible for two virtual machines to share a physical host with isolated compute resources.

    Another example of (intrinsic) security came with vSAN and the encryption of data at rest, then followed by unified endpoint management and identity/access management with Workspace ONE. But wait!

It was August 2013 when Pat Gelsinger introduced NSX as the platform for network virtualization, which already included the distributed firewall capability. The internal firewall has been built into the VMware hypervisor for almost 8 years now, wow!

    NSX Service-Defined Firewall

So far I have had no customer who wasn’t talking about achieving zero trust security with micro-segmentation to prevent lateral (east-west) movement. Zero trust is one approach to improve data center defenses by inspecting every traffic flow within the data center. The idea is to divide the data center infrastructure into smaller security zones and to inspect the traffic between the zones based on the organization’s defined policies.

    Perimeter Defense vs Micro-Segmentation

Micro-segmentation puts a firewall in front of each virtual machine or workload, allowing us to protect all east-west communication.

    So, deploy micro-segmentation and the problem is solved, right? Not quite. While the concept of micro-segmentation has been around for a while, organizations still face barriers when trying to apply it in practice.

    Let’s have a look at some of the barriers to micro-segmentation and why this solution alone is not enough (anymore) to achieve zero trust:

    • Policy discovery challenges – Identifying the right micro-segments and configuring the proper security policies is an extremely daunting task, especially in a dynamic data center environment.
    • Limited-access controls – Basing micro-segmentation solely on L4 attributes (e.g., IP addresses and ports) is not enough. The ephemeral nature of applications and flows requires more than that.
    • Reliance on agents – Some micro-segmentation implementations require the installation of extra software agents on each virtual machine (VM), causing complexity and introducing vulnerability.
    • Lack of threat detection and prevention – Threats often masquerade as normal-looking traffic. Settling for basic traffic blocking rules isn’t enough.

    What does that tell us? Understanding the current applications’ topology and communication flows between their sub-services and -components is not easy. And with applications, which become less monolithic but very dynamic and distributed across multiple clouds, it becomes almost impossible, right?

    NSX Intelligence is a home-grown solution that automates policy discovery, understands the communication between services and can construct apps and flows maps (topologies).

    NSX Intelligence Recommendations

    Can we assume that traffic from A to B over HTTPS is safe per se with micro-segmentation? Nope.

    If we want to enhance traffic analysis capabilities and have a deeper look into traffic, the L7 (application layer) capabilities for micro-segmentation can be used.

Traditional firewall rules cannot consume application IDs. A context-aware firewall identifies applications and enforces micro-segmentation for east-west traffic, independent of the port that the application uses.

Another use case: for virtual desktop infrastructures (VDI), you could use VMware NSX’s ability to provide Active Directory identity-based firewall (IDFW) rules.
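To make the context-aware idea a bit more concrete, here is a minimal, hedged sketch of pushing such an application-aware distributed firewall rule through the NSX-T Policy API in Python. The manager FQDN, credentials, group paths and the context profile path are placeholders, and the endpoint and field names follow my recollection of the NSX-T 3.x Policy API, so check them against the NSX API guide for your version.

```python
import requests

NSX_MANAGER = "https://nsx-manager.rainpole.local"   # hypothetical FQDN
AUTH = ("admin", "VMware1!VMware1!")                 # lab credentials only

# A distributed firewall rule that matches on an application context profile instead of a port.
policy = {
    "resource_type": "SecurityPolicy",
    "display_name": "app-aware-web-policy",
    "category": "Application",
    "rules": [
        {
            "resource_type": "Rule",
            "display_name": "allow-tls-only",
            "source_groups": ["/infra/domains/default/groups/web-tier"],       # assumed existing group
            "destination_groups": ["/infra/domains/default/groups/app-tier"],  # assumed existing group
            "services": ["ANY"],                               # no port pinning
            "profiles": ["/infra/context-profiles/SSL"],       # assumed layer-7 context profile path
            "action": "ALLOW",
            "scope": ["ANY"],
        }
    ],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/security-policies/app-aware-web-policy",
    json=policy,
    auth=AUTH,
    verify=False,  # lab only - validate the certificate properly in production
)
resp.raise_for_status()
```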

    Okay. We have a topology now and can create context-aware service-defined firewall rules. How can we differentiate between good or bad traffic? How can we detect network anomalies?

    Today’s attacks are becoming more sophisticated and hackers use masquerading techniques to embed threats within normal-looking traffic flows. Micro-segmentation alone will not intercept hidden threats, it only identifies traffic flows that should be allowed or blocked.

    It’s time to talk about advanced inspection capabilities.

    NSX Distributed IDS/IPS

In general, for a firewall to inspect traffic, the traffic has to pass through it. In a virtual world this means we would redirect traffic from the VMs to the firewalls and back, a practice called hair-pinning:

    Firewall Hair-Pinning

That results in additional traffic and unnecessary latency. NSX has a distributed architecture: there is no centralized appliance that limits security capacity, and network traffic doesn’t need to be hair-pinned to a network security stack for traffic inspection. Everything that was done with physical appliances can now be done in software (see the coloring in the diagram).

    Software-Defined Networking without Hair-Pinning

    The term intrinsic security always means that security is built into the infrastructure. The micro-segmentation capabilities including NSX Intelligence come without an agent – no reliance on agents!

    The VMware NSX Distributed IDS/IPS functionality adds additional traffic inspection capabilities to the service-defined firewall and follows the same intrinsic security principles.

    Note: These regular-expression IDS/IPS engines detect traffic patterns and are programmed to look for malicious traffic patterns.

    NSX Distributed IDPS

    NSX Advanced Threat Prevention (ATP)

At VMworld 2020 VMware announced NSX Advanced Threat Protection, which brings technology from the Lastline acquisition to the NSX service-defined firewall.

    In my understanding, Lastline’s core product was a malware sandbox that can go deeper (than other sandboxes from other vendors) by using a full-system emulation to look at every instruction the malware executes.

    The Lastline system uses machine learning that recognizes essential elements of an attack, unlike the narrow signature-based systems that miss the many variants an attacker may use. The Lastline approach is not just anomaly detection – anomaly detection treats every outlier as bad and results in many false positives. Lastline leverages the deep understanding of malicious behavior to flag clearly bad activities such as East-West movement, command and control activity, and data exfiltration.

This brings us to the powerful combination of the existing VMware capabilities with the recently integrated Lastline feature set:

    NSX FW with ATP Features

    NSX Network Detection and Response

    Network Detection and Response (NDR) is a category of security solutions that complement EDR (we talk about Endpoint Detection and Response later) tools.

    Powered by artificial intelligence (AI), NSX NDR maps and defends against MITRE ATT&CK techniques with the current capabilities:

    NSX NDR MITRE ATTACK Framework Capabilities Q2 2021

    NSX NDR protects the network, cloud and hybrid cloud traffic, and provides a cloud-based and on-prem architecture that enables sensors to gain comprehensive visibility into traffic that crosses the network perimeter (north/south), as well as traffic that moves laterally inside the perimeter (east/west).

    NSX NDR uses a combination of four complementary technologies to detect and analyze advanced threats:

    NSX NDR Technologies

    Behavior-based Network Traffic Analysis (NTA)

    Network Traffic Analysis tools are all about detecting anomalies within the network (on-prem and public cloud) and use AI to create models of normal network activity and then alert on anomalies.

    VMware NTA Anomalies

The challenge today is that not all anomalies are malicious. With Lastline’s NTA, VMware can now pick up threat behaviors and correlate them to network anomalies and vice versa. Because of this, according to VMware, they have the industry’s most accurate threat detection with minimal false positives.

    NSX NDR NTA Anomaly 2

    Intrusion Detection and Prevention System (IDPS)

    The NSX Advanced Threat Protection bundle includes IDS/IPS, which is integrated into NSX. The NSX Distributed IDS/IPS benefits from the unique application context from the hypervisor and network virtualization layers to make threat detection more accurate, efficient and dynamic.

    The key capabilities of NSX Distributed IDS/IPS include:

    • Distributed analysis
    • Curated, context-based signature distribution
    • Application context-driven threat detection
    • Policy and state mobility
    • Automated policy lifecycle management

    Use cases for NSX Distributed IDS/IPS include:

    • Easily achieving regulatory compliance
    • Virtualizing security zones
    • Replacing discrete appliances
    • Virtual patching vulnerabilities

    NSX Advanced Threat Analyzer (Sandbox)

    Included with NSX Advanced Threat Prevention, Advanced Threat Analyzer provides complete malware analysis and enables accurate detection and prevention of advanced threats. It deconstructs every behavior engineered into a file or URL, and sees all instructions that a program executes, all memory content, and all operating system activity.

    NSX NDR Sandbox Ransomware

    Other malware detection technologies, such as traditional sandboxes, only have visibility down to the operating system level. They can inspect content and identify potentially malicious code, but they can’t interact with malware like NSX Advanced Threat Analyzer can. As a result, they have significantly lower detection rates and higher false positives, in addition to being easily identified and evaded by advanced malware. (Advanced threats evade other sandboxing technologies by recognizing the sandbox environment or using kernel-level exploits.)

    VMware Threat Analysis Unit (TAU)

    With the Lastline acquisition VMware could further increase the capabilities provided by the VMware Carbon Black Threat Analysis Unit (TAU) with network-centric research and behavioral analysis.

    The VMware Threat Analysis Unit automatically shares the malware characteristics, behaviors and associated IoCs (Indicator of Compromises) of every malicious object curated and analyzed by VMware with all VMware customers and partners.

    NSX Advanced Threat Analyzer continuously updates the VMware TAU in real time with intelligence from partner and customer environments around the world.

    NSX Security Packages – How to get NSX ATP

    According to the knowledge base article Product Offerings for VMware NSX Security 3.1.x (81231), the new NSX Security editions became available in October 2020:

    • NSX Firewall for Baremetal Hosts. For organizations needing an agent-based network segmentation solution.
    • NSX Firewall. For organizations with one or more sites (optionally including public cloud endpoints) that primarily need advanced security services, select advanced networking capabilities, and traffic flow visibility and security operations with NSX Intelligence.
    • NSX Firewall with Advanced Threat Protection. For organizations that need NSX Firewall capabilities as well as advanced threat prevention capabilities, such as IDS/IPS, threat analysis, and network detection and response.

    Use Case with Network Virtualization

If you are a customer with an NSX Data Center Advanced or Enterprise+ license who only uses NSX for network virtualization today, you just need the “NSX ATP add-on” for NSX Data Center Advanced or Enterprise+.

    Note: The ATP add-on requires NSX-T 3.1 and above.

    Use Case without Network Virtualization (no NSX Data Center)

    If you have no need for network virtualization for now, you have the following options:

    1. If you look for base firewall features, you can get started with the NSX Firewall license.
    2. Should you look for base firewall features plus advanced threat protection, then start with NSX Firewall with Advanced Threat Protection.
3. From here you can still go down the network virtualization path and get the NSX Data Center Enterprise+ add-on for ATP.

    Use Case for VCF Customers

VCF customers have the option to start with the NSX ATP add-on for NSX Data Center Advanced/Enterprise+ as well.

If you are looking for even more security and also want NSX Advanced Load Balancer (GSLB, WAF) and/or Carbon Black Cloud Workload Protection (NGAV, EDR, Audit & Remediation), then you have to get the “network and app security” or “advanced security” add-on.

    Carbon Black Endpoint Detection and Response (EDR)

Before the Carbon Black acquisition, VMware already had strong technology, but was not seen or known as a cybersecurity vendor. It was really this acquisition that made the whole industry understand that VMware now has to be taken seriously as a security vendor.

    So, what is EDR according to Wikipedia?

    “Endpoint detection and response technology is used to protect endpoints, which are computer hardware devices, from threat. Creators of the EDR technology-based platforms deploy tools to gather data from endpoint devices, and then analyze the data to reveal potential cyber threats and issues. It is a protection against hacking attempts and theft of user data. The software is installed on the end-user device and it is continually monitored. The data is stored in a centralized database. In an incident when a threat is found, the end-user is immediately prompted with preventive list of actions.”

    EDR is essential since local activities on machines that may be malicious are not visible on the network. VMware Carbon Black EDR is an incident response and threat hunting solution designed for security operations centers (SOCs) and incident response (IR) teams. Enterprise EDR is delivered through the VMware Carbon Black Cloud, an endpoint protection platform that consolidates security in the cloud using a single agent, console and dataset.

    The Lastline acquisition, which came after Carbon Black, was just another brilliant move from VMware!

    XDR – VMware Security brings together EDR and NDR

    Again, while EDR protects endpoints, NDR protects the network, so that an organization’s entire IT infrastructure is secured. EDR gives security professionals visibility into endpoints that might be compromised, but this isn’t enough when an attack has moved across the network and into other systems by the time the security team is aware of it.

    This is where XDR comes in. VMware rolled out its Extended Detection and Response (XDR) strategy at VMworld 2020. By the way, it was in 2020 when Gartner named XDR as one of the top nine cybersecurity trends.

    By providing a holistic view of activity across the system that avoids visibility gaps, XDR allows security teams to understand where a threat comes from and how it’s spreading across the environment – in order to eliminate it. In other words, XDR offers greater analysis and correlation capabilities and a holistic point of view.

    EDR NDR Context Correlation

    VMware’s XDR platform is the Carbon Black Cloud. Carbon Black Cloud’s evolution into an XDR platform includes product integrations with existing VMware products like Workspace ONE, vSphere and the NSX service-defined firewall, as well as third-party partner platforms.

At the Carbon Black Connect 2020 event, VMware announced their Next-Gen SOC Alliance, which features integrations with the VMware Carbon Black Cloud to deliver key XDR capabilities and context into Security Information and Event Management (SIEM) technologies.

    We’re in an epic war against cybercrime. We know the asymmetric nature of this war – you will not win by trying to staff your SOC with more analysts. Nor can the battle be won by deploying an individual technology focused on only one part of your IT infrastructure. EDR and NDR along with your SIEM form the winning combination you need to win the war.

    Conclusion

The Carbon Black acquisition gave VMware a strong cybersecurity foundation to build on. With the more recent acquisition of Lastline, VMware added sandboxing and network traffic analysis capabilities to its internal firewall, which is provided by NSX.

    I don’t think it’s about “can VMware become a leading cybersecurity vendor” anymore. VMware has the most advanced internal firewall and is already becoming a leading cybersecurity vendor. The recent Global InfoSec award just confirms this statement:

    • “Most Innovative in Endpoint Security” for VMware Carbon Black Cloud
    • “Market Leader in Firewall” for VMware NSX Service-defined Firewall

If you want to learn and see more, this YouTube video with Stijn Vanveerdeghem, Sr. Technical Product Manager, and Chad Skipper, Global Security Technologist, is a good start.

    Thanks for reading! 🙂

    The Rise of VMware Tanzu Service Mesh

    My last article focused on application modernization and data portability in a multi-cloud world. I explained the value of the VMware Tanzu portfolio by mentioning a consistent infrastructure and consistent application platform approach, which ultimately delivers a consistent developer experience. I also dedicated a short section about Tanzu Service Mesh, which is only one part of the unified Tanzu control plane (besides Tanzu Mission Control and Tanzu Observability) for multiple Kubernetes clusters and clouds.

When you hear or read about TSM, you very quickly get to the point where the so-called “Global Namespaces” (GNS) are mentioned, which have the magic power to stitch together hybrid applications that run in multiple clouds.

    Believe me when I say that Tanzu Service Mesh (TSM) is rising and becoming the next superstar of the VMware portfolio. I think Joe Baguley would agree here. 😀

    Namespaces

    Before we start talking about Tanzu Service Mesh and the magical power of Global Namespaces, let us have a look at the term “Namespaces” first.

    Kubernetes Namespace

    Namespaces give you a way to organize clusters into virtual carved out sub-clusters, which can be helpful when different teams, tenants or projects share the same Kubernetes cluster. This form of a namespace provides a method to better share resources, because it ensures fair allocation of these resources with the right permissions.

So, using namespaces gives you a form of isolation so that developers never affect other project teams. Policies allow you to configure compute resources by defining resource quotas for CPU or memory utilization. This also ensures the performance of a specific namespace, its resources (pods, services etc.) and the Kubernetes cluster in general.

    Although namespaces are separate from each other, they can communicate with each other. Network policies can be configured to create isolated and non-isolated pods. For example, a network policy can allow or deny all traffic coming from other namespaces.
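As a small illustration, here is a minimal sketch using the official Kubernetes Python client (the namespace name, quota values and policy are made-up examples) that carves out a namespace for one team, gives it a fair share of resources via a ResourceQuota, and only allows ingress traffic from pods in the same namespace:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context
core = client.CoreV1Api()
net = client.NetworkingV1Api()

ns = "team-a"  # hypothetical namespace for one project team

# 1. Create the namespace (the fenced-off field for one team)
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# 2. Fair resource allocation via a ResourceQuota
core.create_namespaced_resource_quota(
    ns,
    client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)

# 3. Network policy: only allow ingress from pods in the same namespace
net.create_namespaced_network_policy(
    ns,
    client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="deny-from-other-namespaces"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # applies to all pods in this namespace
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
                )
            ],
        ),
    ),
)
```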

Ellei Mei explained this in a very easy way in her article after Project Pacific had been made public in September 2019:

    Think of a farmer who divides their field (cluster + cluster resources) into fenced-off smaller fields (namespaces) for different herds of animals. The cows in one fenced field, horses in another, sheep in another, etc. The farmer would be like operations defining these namespaces, and the animals would be like developer teams, allowed to do whatever they do within the boundaries they are allocated.

    vSphere Namespace

The first time I heard of Kubernetes or vSphere Namespaces was in fact at VMworld 2019 in Barcelona. VMware then presented a new app-focused management concept. This concept described a way to model modern applications and all their parts, and we call this a vSphere Namespace today.

With Project Pacific (today known as vSphere with Tanzu or Tanzu Kubernetes Grid), VMware went one step further and extended the Kubernetes Namespace by adding more options for compute resource allocation, vMotion, encryption, high availability, backup & restore, and snapshots.

Rather than having to deal with each namespace and its containers, vSphere Namespaces (also sometimes called “guardrails”) can draw a line around the whole application and its services, including virtual machines.

    Namespaces as the unit of management

    With the re-architecture of vSphere and the integration of Kubernetes as its control plane, namespaces can be seen as the new unit of management.

Imagine that you might have thousands of VMs in your vCenter inventory that you need to deal with. After you group those VMs into their logical applications, you may only have to deal with dozens of namespaces.

    If you need to turn on encryption for an application, you can just click a button on the namespace in vCenter and it does it for you. You don’t need to deal with individual VMs anymore.

    vSphere Virtual Machine Service

    With the vSphere 7 Update 2a release, VMware provided the “VM Service” that enables Kubernetes-native provisioning and management of VMs.

For many organizations, legacy applications are not becoming modern overnight; they become hybrid first before they are completely modernized. This means we have a combination of containers and virtual machines forming the application, not containers only. I also call this a hybrid application architecture in front of my customers. For example, you may have a containerized application that uses a database hosted in a separate VM.

So, developers can use the existing Kubernetes API and a declarative approach to create VMs. No need to open a ticket anymore to request a virtual machine. We are talking about self-service here.
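As a hedged sketch of what such a declarative VM request can look like, the example below creates a VirtualMachine custom resource (the vmoperator.vmware.com API used by the VM Service) with the Kubernetes Python client. The namespace, VM class, image and storage class names are placeholders, and the exact schema should be checked against the documentation of your vSphere with Tanzu release.

```python
from kubernetes import client, config

config.load_kube_config()  # developer context pointing at the Supervisor namespace
custom = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",
    "kind": "VirtualMachine",
    "metadata": {"name": "db-vm-01", "namespace": "team-a"},   # hypothetical names
    "spec": {
        "className": "best-effort-small",       # a VirtualMachineClass published to the namespace
        "imageName": "ubuntu-20.04",            # an image from the associated content library
        "storageClass": "vsan-default-policy",  # a storage policy exposed as a StorageClass
        "powerState": "poweredOn",
    },
}

custom.create_namespaced_custom_object(
    group="vmoperator.vmware.com",
    version="v1alpha1",
    namespace="team-a",
    plural="virtualmachines",
    body=vm_manifest,
)
```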

    Tanzu Mission Control – Namespace Management

Tanzu Mission Control (TMC) is a VMware Cloud (SaaS) service that provides a single control point for multiple teams to remove the complexities of managing Kubernetes clusters across multiple clouds.

    One of the ways to organize and view your Kubernetes resources with TMC is by the creation of “Workspaces”.

Workspaces allow you to organize your namespaces into logical groups across clusters, which helps simplify management by applying policies at the group level. For example, you could apply an access policy to an entire group of clusters (from multiple clouds) rather than creating separate policies for each individual cluster.

    Think about backup and restore for a moment. TMC and the concept of workspaces allow you to back up and restore data resources in your Kubernetes clusters on a namespace level.

    Management and operations with a new application view!

    FYI, VMware announced the integration of Tanzu Mission Control and Tanzu Service Mesh in December 2020.

    Service Mesh

A lot of vendors, including VMware, realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we find the reality of hybrid applications that sometimes run in multiple clouds.

    This is the moment when you have to think about the connectivity and communication between your app’s microservices.

    One of the main ideas and features behind a service mesh was to provide service-to-service communication for distributed applications running in multiple Kubernetes clusters hosted in different private or public clouds.

    The number of Kubernetes service meshes has rapidly increased over the last few years and has gotten a lot of hype. No wonder why there are different service mesh offerings around:

    • Istio
    • Linkerd
    • Consul
    • AWS Apps Mesh
    • OpenShift Service Mesh by Red Hat
    • Open Service Mesh AKS add-on (currently preview on Azure)

    Istio is probably the most famous one on this list. For me, it is definitely the one my customers look and talk about the most.

    Service mesh brings a new level of connectivity between services. With service mesh, we inject a proxy in front of each service; in Istio, for example, this is done using a “sidecar” within the pod.

Istio’s architecture is divided into a data plane based on Envoy (the sidecar) and a control plane that manages the proxies. With Istio, you inject the proxies into all the Kubernetes pods in the mesh.

As you can see in the image, the proxy sits in front of each microservice and all communications are passed through it. When a proxy talks to another proxy, we talk about a service mesh. Proxies also handle traffic management, errors and failures (retries), and collect metrics for observability purposes.
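In practice, sidecar injection is often enabled per namespace rather than per pod. A minimal sketch (using the standard istio-injection label that Istio’s automatic sidecar injection webhook looks for; the namespace name is a placeholder):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Label the namespace so Istio's mutating webhook injects the Envoy sidecar
# into every pod created in it from now on.
core.patch_namespace(
    "team-a",  # hypothetical namespace
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)
```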

    Challenges with Service Mesh

The thing with service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own.

The installation and configuration of Istio is not that easy and takes time. Besides that, Istio is typically tied to a single Kubernetes cluster and therefore to a single Istio data plane – and organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a Kubernetes cluster or cloud vendor, which leaves us with silos.

    Istio supports a so-called multi-cluster deployment with one service mesh stretched across Kubernetes clusters, but you’ll end up with a stretched Istio control plane, which eliminates the independence of each cluster.

    So, a lot of customers also talk about better and easier manageability without dependencies between clouds and different Kubernetes clusters from different vendors.

    That’s the moment when Tanzu Service Mesh becomes very interesting. 🙂

    Tanzu Service Mesh (formerly known as NSX Service Mesh)

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh, built on top of a VMware-administered Istio version.

When onboarding a new cluster to Tanzu Service Mesh, the service deploys a curated version of Istio signed and supported by VMware. This Istio deployment is the same as the upstream Istio in every way, but it also includes an agent that communicates with the Tanzu Service Mesh global control plane. Istio installation is not the most intuitive, but the onboarding process of Tanzu Service Mesh simplifies it significantly.

    Overview of Tanzu Service Mesh

    The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

    Global Namespaces (GNS)

    Yep, another kind of a namespace, but the most exciting one! 🙂

A Global Namespace is a unique concept in Tanzu Service Mesh and connects the resources and workloads that form the application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions that are part of it, no matter where they are located:

    • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace.
    • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
    • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
    • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS).
    • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.

    Use Cases

    The following diagram represents the global namespace concept and other pieces in a high-level architectural view. The components of one application are distributed in two different Kubernetes clusters: one of them is on-premises and the other in a public cloud. The Global Namespace creates a logical view of these application components and provides a set of basic services for the components.

    Global Namespaces

If we take application continuity as another example of a use case, we would deploy an app in more than one cluster and possibly in a remote region for disaster recovery (DR), with a load balancer between the locations to direct traffic to both clusters. This would be an active-active scenario. With Tanzu Service Mesh, you can group the clusters into a Global Namespace and program it to automatically redirect traffic in case of a failure.

    In addition to the use case and support for multi-zone and multi-region high availability and disaster recovery, you can also provide resiliency with automated scaling based on defined Service-Level Objectives (SLO) for multi-cloud apps.

    VMware Modern Apps Connectivity Solution  

In May 2021 VMware introduced a new solution that brings together the capabilities of Tanzu Service Mesh and NSX Advanced Load Balancer (NSX ALB, formerly Avi Networks) – not only for containers but also for VMs. While Istio’s Envoy only operates on layer 7, VMware provides layer 4 to layer 7 services with NSX (part of TSM) and NSX ALB, which include L4 load balancing, ingress controllers, GSLB, WAF and end-to-end service visibility.

    This solution speeds the path to app modernization with connectivity and better security across hybrid environments and hybrid app architectures.

Multiple disjointed products, no end-to-end observability

    Summary

    One thing I can say for sure: The future for Tanzu Service Mesh is bright!

    Many customers are looking for ways for offloading security (encryption, authentication, authorization) from an application to a service mesh.

    One great example and use case from the financial services industry is crypto agility, where a “crypto service mesh” (a specialized service mesh) could be part of a new architecture, which provides quantum-safe certificates.

And when we offload encryption, calculation, authentication etc., then we may have other use cases for SmartNICs and Project Monterey.

To learn more about service mesh and the capabilities of Tanzu Service Mesh, I can recommend Service Mesh for Dummies, written by Niran Even-Chen, Oren Penso and Susan Wu.

Thank you for reading!

    Application Modernization and Multi-Cloud Portability with VMware Tanzu

It was 2019 when VMware announced Tanzu and Project Pacific. A lot has happened since then and almost everyone is talking about application modernization nowadays. With my strong IT infrastructure background, I had to learn a lot of new things to survive initial conversations with application owners, developers and software architects. And at the same time, VMware’s Kubernetes offering grew and became very complex – not only for customers, but for everyone, I believe. 🙂

I already wrote about VMware’s vision with Tanzu: to put a consistent “Kubernetes grid” over any cloud.

    This is the simple message and value hidden behind the much larger topics when discussing application modernization and application/data portability across clouds.

The goal of this article is to give you a better understanding of the real value of VMware Tanzu and to explain that it’s less about Kubernetes and the Kubernetes integration with vSphere.

    Application Modernization

    Before we can talk about the modernization of applications or the different migration approaches like:

    • Retain – Optimize and retain existing apps, as-is
    • Rehost/Migration (lift & shift) – Move an application to the public cloud without making any changes
    • Replatform (lift and reshape) – Put apps in containers and run in Kubernetes. Move apps to the public cloud
    • Rebuild and Refactor – Rewrite apps using cloud native technologies
    • Retire – Retire traditional apps and convert to new SaaS apps

    …we need to have a look at the palette of our applications:

    • Web Apps – Apache Tomcat, Nginx, Java
    • SQL Databases – MySQL, Oracle DB, PostgreSQL
    • NoSQL Databases – MongoDB, Cassandra, Prometheus, Couchbase, Redis
    • Big Data – Splunk, Elasticsearch, ELK stack, Greenplum, Kafka, Hadoop

    In an app modernization discussion, we very quickly start to classify applications as microservices or monoliths. From an infrastructure point of view you look at apps differently and call them “stateless” (web apps) or “stateful” (SQL, NoSQL, Big Data) apps.

    And with Kubernetes we are trying to overcome the challenges, which come with the stateful applications related to app modernization:

    • What does modernization really mean?
    • How do I define “modernization”?
    • What is the benefit by modernizing applications?
    • What are the tools? What are my options?

    What has changed? Why is everyone talking about modernization? Why are we talking so much about Kubernetes and cloud native? Why now?

    To understand the benefits (and challenges) of app modernization, we can start looking at the definition from IBM for a “modern app”:

“Application modernization is the process of taking existing legacy applications and modernizing their platform infrastructure, internal architecture, and/or features. Much of the discussion around application modernization today is focused on monolithic, on-premises applications—typically updated and maintained using waterfall development processes—and how those applications can be brought into cloud architecture and release patterns, namely microservices.”

    Modern applications are collections of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

Note: App modernization can also mean that you must move your application from the .NET Framework to .NET Core.

    I have a customer that is just getting started with app modernization and has hundreds of Windows applications based on the .NET Framework. Porting an existing .NET Framework app to .NET Core requires some work, but it is the general recommendation for the future. It also gives you the option to run your .NET Core apps on Windows, Linux and macOS (instead of Windows only).

    A modern application is something that can run on bare metal, VMs, public cloud and containers, and that easily integrates with any component of your infrastructure. It must be elastic: something that can grow and shrink depending on load and usage. And since it needs to be able to adapt, it must be agile and therefore portable.

    Cloud Native Architectures and Modern Designs

    If I ask my VMware colleagues from our so-called MAPBU (Modern Application Platform Business Unit) how customers can achieve application portability, the answer is always: “Cloud Native!”

    Many organizations and people see cloud native as going to Kubernetes. But cloud native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructure.

    There are so many definitions of “cloud native” that Kamal Arora from Amazon Web Services and others wrote the book “Cloud Native Architectures”, which describes a maturity model. This model helps you understand that cloud native is more of a journey than a restrictive definition.

    Cloud Native Maturity Model

    Adopting cloud services and applying an application-centric design are very important, but the book also mentions that security and scalability rely on automation. This, for example, can bring the requirement for Infrastructure as Code (IaC), as sketched below.
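
    As a minimal sketch of the IaC idea, assuming a workflow where the desired state lives as a manifest in a version-controlled repository and is applied by a tool instead of being configured manually: the example below uses the official Kubernetes Python client, and the manifest path is a made-up placeholder.

        # Minimal Infrastructure-as-Code-style sketch: apply a version-controlled
        # manifest programmatically instead of clicking through a UI.
        # Requires the official client: pip install kubernetes
        from typing import Optional

        from kubernetes import client, config, utils


        def apply_manifest(path: str, context: Optional[str] = None) -> None:
            # Load credentials from the local kubeconfig; the context name is optional.
            config.load_kube_config(context=context)
            api_client = client.ApiClient()
            # create_from_yaml creates every object defined in the manifest file.
            utils.create_from_yaml(api_client, path)


        if __name__ == "__main__":
            # Hypothetical path inside a Git repository holding the desired state.
            apply_manifest("infrastructure/web-app.yaml")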

    In the past, virtualization – moving from bare-metal to vSphere – didn’t force organizations to modernize their applications. The application didn’t need to change and VMware abstracted and emulated the bare-metal server. So, the transition (P2V) of an application was very smooth and not complicated.

    And this is what has changed today. We have new architectures, new technologies and new clouds running with different technology stacks. We have Kubernetes as a framework, which requires applications to be redesigned for these platforms.

    That is the reason why enterprises have to modernize their applications.

    One of the “five R’s” mentioned above is the lift and shift approach. If you don’t want or need to modernize some of your applications, but want to move them to the public cloud in an easy, fast and cost-efficient way, have a look at VMware’s Hybrid Cloud Extension (HCX).

    In this article I focus more on the replatform and refactor approaches in a multi-cloud world.

    Kubernetize and productize your applications

    Assuming that you also define Kubernetes as the standard for orchestrating the containers your microservices run in, the next decision would usually be about the Kubernetes “product” (on-premises, OpenShift, public cloud).

    Looking at the current CNCF Cloud Native Landscape, we can count over 50 storage vendors and over 20 network vendors providing cloud native storage and networking solutions for containers and Kubernetes.

    Talking to my customers, most of them mention the storage and network integration as one of their big challenges with Kubernetes. Their concern is about performance, resiliency, different storage and network patterns, automation, data protection/replication, scalability and cloud portability.

    Why do organizations need portability?

    There are many use cases and requirements where portability (infrastructure independence) becomes relevant. Maybe it’s a hardware refresh or a data center evacuation, the need to avoid vendor/cloud lock-in, insufficient performance on the current infrastructure, or dev/test environments where resources are deployed and consumed on demand.

    Multi-Cloud Application Portability with VMware Tanzu

    To explore the value of Tanzu, I would like to start by setting the scene with the following customer use case:

    In this case the customer is following a cloud-appropriate approach to define which cloud is the right landing zone for their applications. They decided to develop new applications in the public cloud and to use native services from Azure and AWS. The customer still has hundreds of legacy applications (monoliths) on-premises and has not yet decided whether to follow a “lift and shift and then modernize” approach to migrate a number of applications to the public cloud.

    Multi-Cloud App Portability

    But some of their application owners have already given feedback that their applications are not allowed to be hosted in the public cloud; they have to stay on-premises and need to be modernized locally.

    At the same time, the IT architecture team receives feedback from other application owners that the journey to the public cloud looks great on paper but brings huge operational challenges with it. So IT operations asks the architecture team whether they can do something about this problem.

    The Azure and AWS cloud operations teams deliver a different quality of service, changes and deployments take longer with one of the public clouds, and they struggle with overlapping networks, different storage performance characteristics and different APIs.

    Another challenge is role-based access to the different clouds, Kubernetes clusters and APIs. There is no central log aggregation and no observability (intelligent monitoring & alerting). Traffic distribution and load balancing are further items on this list.

    Because of the feedback from operations to architecture, IT engineering received the task of defining a multi-cloud strategy that solves this operational complexity.

    Note: These are the typical multi-cloud challenges, where clouds become the new silos and enterprises have different teams with different expertise using different management and security tools.

    This is where VMware’s multi-cloud approach with Tanzu becomes very interesting for such customers.

    Consistent Infrastructure and Management

    The first discussion point here would be the infrastructure. It’s important that the different private and public clouds are not handled and seen as silos. VMware’s approach is to connect all clouds with the same underlying technology stack based on VMware Cloud Foundation.

    Besides the fact that lift-and-shift migrations become very easy, this approach brings two very important advantages for containerized workloads and the cloud infrastructure in general. It addresses the challenge of choosing from the huge storage and networking ecosystem available for Kubernetes workloads by simply using vSAN and NSX Data Center in any of the existing clouds. Storage, networking and security are now integrated and consistent.

    For existing workloads running natively in public clouds, customers can use NSX Cloud, which uses the same management plane and control plane as NSX Data Center. That’s another major step forward.

    Using consistent infrastructure enables consistent operations and automation for customers.

    Consistent Application Platform and Developer Experience

    Looking at an organization’s application and container platforms, consistent infrastructure is not required, but it is obviously very helpful in terms of operational and cost efficiency.

    To provide a consistent developer experience and to abstract the underlying application or Kubernetes platform, you would follow the same VMware approach as always: to put a layer on top.

    Here the solution is called Tanzu Kubernetes Grid (TKG), which provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed and supported by VMware.

    A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes open-source software that is built and supported by VMware. In all the offerings, you provision and use Tanzu Kubernetes clusters in a declarative manner that is familiar to Kubernetes operators and developers. The different Tanzu Kubernetes Grid offerings provision and manage Tanzu Kubernetes clusters on different platforms, in ways that are designed to be as similar as possible, but that are subtly different.
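
    To give a feel for what “declarative” means here, the sketch below submits a TanzuKubernetesCluster custom resource to a Supervisor Cluster namespace using the Kubernetes Python client. The API group/version and spec fields follow the TKGS v1alpha1 CRD as I understand it, and the cluster name, namespace, VM class and storage class are assumptions, so verify the exact field names against the documentation for your release.

        # Hedged sketch: declaratively request a Tanzu Kubernetes cluster (TKGS).
        # Field names follow the v1alpha1 CRD as I understand it - verify for your release.
        from kubernetes import client, config

        config.load_kube_config()  # kubeconfig must point at the Supervisor Cluster

        tkc = {
            "apiVersion": "run.tanzu.vmware.com/v1alpha1",
            "kind": "TanzuKubernetesCluster",
            "metadata": {"name": "demo-cluster", "namespace": "dev-namespace"},
            "spec": {
                "distribution": {"version": "v1.20"},  # Kubernetes release to deploy
                "topology": {
                    "controlPlane": {
                        "count": 1,
                        "class": "best-effort-small",                   # assumed VM class
                        "storageClass": "vsan-default-storage-policy",  # assumed storage class
                    },
                    "workers": {
                        "count": 3,
                        "class": "best-effort-small",
                        "storageClass": "vsan-default-storage-policy",
                    },
                },
            },
        }

        # Submit the desired state; the platform reconciles it into a running cluster.
        client.CustomObjectsApi().create_namespaced_custom_object(
            group="run.tanzu.vmware.com",
            version="v1alpha1",
            namespace="dev-namespace",
            plural="tanzukubernetesclusters",
            body=tkc,
        )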

    VMware Tanzu Kubernetes Grid (TKG aka TKGm)

    Tanzu Kubernetes Grid can be deployed across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. I would assume that Google Cloud is a roadmap item.

    TKG allows you to run Kubernetes with consistency and makes it available to your developers as a utility, just like the electricity grid. TKG provides the services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

    This TKG version is also known as TKGm for “TKG multi-cloud”.

    VMware Tanzu Kubernetes Grid Service (TKGS aka vSphere with Tanzu)

    TKGS is the option vSphere admins want to hear about first, because it allows you to turn a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. TKGS is what was known as “Project Pacific” in the past.

    Once enabled on a vSphere cluster, vSphere with Tanzu creates a Kubernetes control plane directly in the hypervisor layer. You can then run Kubernetes containers by deploying vSphere Pods, or you can create upstream Kubernetes clusters through the VMware Tanzu Kubernetes Grid Service and run your applications inside these clusters.
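
    Because the Supervisor Cluster exposes a standard Kubernetes API, vSphere Pods and Tanzu Kubernetes clusters show up like ordinary Kubernetes objects. A small sketch, assuming your kubeconfig context already points at a Supervisor Cluster namespace (the namespace name below is made up):

        # List pods in a Supervisor namespace; vSphere Pods appear as regular pods here.
        from kubernetes import client, config

        config.load_kube_config()  # context must point at the Supervisor Cluster
        v1 = client.CoreV1Api()

        for pod in v1.list_namespaced_pod("dev-namespace").items:  # namespace is an assumption
            print(pod.metadata.name, pod.status.phase)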

    VMware Tanzu Mission Control (TMC)

    In the use case above, we have AKS and EKS for running Kubernetes clusters in the public cloud.

    The VMware solution for multi-cluster Kubernetes management across clouds is called Tanzu Mission Control, which is a centralized management platform for the consistency and security the IT engineering team was looking for.

    Available through VMware Cloud Services as a SaaS offering, TMC provides IT operators with a single control point to give their developers self-service access to Kubernetes clusters.

    TMC also provides cluster lifecycle management for TKG clusters across environments such as vSphere, AWS and Azure.

    It allows you to bring the clusters you already have in the public clouds or other environments (with Rancher or OpenShift for example) under one roof via the attachment of conformant Kubernetes clusters.

    Not only do you gain global visibility across clusters, teams and clouds, but you also get centralized authentication and authorization, consistent policy management and data protection functionalities.

    VMware Tanzu Observability by Wavefront (TO)

    Tanzu Observability extends the basic observability provided by TMC with enterprise-grade observability and analytics.

    Wavefront by VMware helps Tanzu operators, DevOps teams, and developers get metrics-driven insights into the real-time performance of their custom code, Tanzu platform and its underlying components. Wavefront proactively detects and alerts on production issues and improves agility in code releases.

    TO is also a SaaS-based platform that can handle the high-scale requirements of cloud native applications.
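
    To make the “metrics-driven insights” part more tangible: Wavefront typically ingests metrics in a simple line format via a Wavefront proxy. The sketch below pushes one custom metric over a plain TCP socket; the proxy address is a placeholder, the default metrics port (2878) is an assumption based on a standard proxy setup, and in production you would more likely use one of the Wavefront SDKs or built-in integrations.

        # Hedged sketch: send one metric in the Wavefront line format to a Wavefront proxy.
        # Line format: <metricName> <value> [<timestamp>] source=<source> [pointTags]
        import socket
        import time

        PROXY_HOST = "wavefront-proxy.example.com"  # placeholder: your proxy address
        PROXY_PORT = 2878                           # assumption: default proxy metrics port

        line = f"demo.app.orders.processed 42 {int(time.time())} source=app-01 env=prod\n"

        with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=5) as sock:
            sock.sendall(line.encode("utf-8"))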

    VMware Tanzu Service Mesh (TSM)

    Tanzu Service Mesh, formerly known as NSX Service Mesh, provides consistent connectivity and security for microservices across all clouds and Kubernetes clusters. TSM can be installed in TKG clusters and third-party Kubernetes-conformant clusters.

    Organizations that are using or looking at the popular Calico cloud native networking option for their Kubernetes ecosystem often consider an integration with Istio (Service Mesh) to connect services and to secure the communication between these services.

    The combination of Calico and Istio can be replaced by TSM, which is built on VMware NSX for networking and uses an Istio data plane abstraction. This version of Istio is signed and supported by VMware and is the same as the upstream version. TSM brings enterprise-grade support for Istio and a simplified installation process.

    One of the primary constructs of Tanzu Service Mesh is the concept of a Global Namespace (GNS). GNS allows developers using Tanzu Service Mesh, regardless of where they are, to connect application services without having to specify (or even know) any underlying infrastructure details, as all of that is done automatically. With the power of this abstraction, your application microservices can “live” anywhere, in any cloud, allowing you to make placement decisions based on application and organizational requirements—not infrastructure constraints.

    Note: On the 18th of March 2021 VMware announced the acquisition of Mesh7 and the integration of Mesh7’s contextual API behavior security solution with Tanzu Service Mesh to simplify DevSecOps.

    Tanzu Editions

    The VMware Tanzu portfolio comes with three different editions: Basic, Standard, Advanced

    Tanzu Basic enables the straightforward implementation of Kubernetes in vSphere so that vSphere admins can leverage familiar tools used for managing VMs when managing clusters = TKGS

    Tanzu Standard provides multi-cloud support, enabling Kubernetes deployment across on-premises, public cloud, and edge environments. In addition, Tanzu Standard includes a centralized multi-cluster SaaS control plane for a more consistent and efficient operation of clusters across environments = TKGS + TKGm + TMC

    Tanzu Advanced builds on Tanzu Standard to simplify and secure the container lifecycle, enabling teams to accelerate the delivery of modern apps at scale across clouds. It adds a comprehensive global control plane with observability and service mesh, consolidated Kubernetes ingress services, data services, container catalog, and automated container builds = TKG (TKGS & TKGm) + TMC + TO + TSM + MUCH MORE

    Tanzu Data Services

    Another way to reduce dependencies and avoid vendor lock-in would be Tanzu Data Services – a separate part of the Tanzu portfolio with on-demand caching (Tanzu GemFire), messaging (Tanzu RabbitMQ) and database software (Tanzu SQL & Tanzu Greenplum) products.

    Bringing all together

    As always, I have tried to summarize and simplify things where needed, and I hope this helped you better understand the value and capabilities of VMware Tanzu.

    There are so many more products available in the Tanzu portfolio that help you build, run, manage, connect and protect your applications. In case you are interested in reading more about VMware Tanzu, have a look at my article 10 Things You Didn’t Know About VMware Tanzu.

    If you would like to know more about application and cloud transformation, make sure to attend the 45-minute VMware event on March 31 (Americas) or April 1 (EMEA/APJ)!