VMware Explore US 2022 – Summary of Day 1 Announcements

VMworld is now VMware Explore and is currently happening in San Francisco! This is a consolidated summary of the announcements from day 1 (August 30th, 2022).

VMware Introduces vSphere 8, vSAN 8 and VMware Cloud Foundation+

VMware today introduced VMware vSphere 8 and VMware vSAN 8—major new releases of VMware’s compute and storage solutions.

vSphere 8 – vSphere 8 introduces vSphere on DPUs, previously known as Project Monterey. In close collaboration with technology partners AMD, Intel and NVIDIA as well as OEM system partners Dell Technologies, Hewlett Packard Enterprise and Lenovo, vSphere on DPUs will unlock hardware innovation helping customers meet the throughput and latency needs of modern distributed workloads. vSphere will enable this by offloading and accelerating network and security infrastructure functions onto DPUs from CPUs.

ESXi on DPU

vSphere 8 will dramatically accelerate AI and machine learning applications by doubling the virtual GPU devices per VM, delivering a 4x increase of passthrough devices, and supporting vendor device groups which enable binding of high-speed networking devices and the GPU.

vSAN 8 – vSAN 8 introduces breakthrough performance and hyper-efficiency. Built from the ground up, the new vSAN Express Storage Architecture (ESA) will enhance the performance, storage efficiency, data protection and management of vSAN running on the latest generation storage devices. vSAN 8 will provide customers with a future ready infrastructure that supports modern TLC storage devices and delivers up to a 4x performance boost.

VMware Cloud Foundation+ – Built on vSphere+ and vSAN+, VMware Cloud Foundation+ introduces a new cloud-connected architecture for managing and operating full-stack HCI in your data center or co-location facility.

VMware Cloud Foundation+ will deliver new admin, developer and hybrid cloud services through a simplified subscription model and keyless entitlement. VMware Cloud Foundation 4.5 will enable VMware Cloud Foundation+ by adding vSphere+ and vSAN+, plus a cloud gateway that provides access to the VMware Cloud Console as part of the full stack architecture.

VMware Cloud for Hyperscalers

VMC on AWS – Amazon Elastic Compute Cloud (Amazon EC2) I4i instances for I/O-intensive workloads: Powered by 3rd generation Intel® Xeon® Scalable processors (Ice Lake), the I4i instances deliver better workload support, lower TCO, and increased scalability and application performance. Compared to I3, the I4i instances provide nearly twice the number of physical cores, twice the memory, three times the storage capacity, and three times the network bandwidth.

Amazon FSx for NetApp ONTAP Integration Availability – as a native AWS cloud storage service that is certified as a supplemental datastore for VMware Cloud on AWS, FSx for ONTAP offers fully managed shared storage built on the familiar NetApp ONTAP file system trusted by VMware customers running on premises today. Customers can now use FSx for ONTAP as a simple and elastic datastore for VMware Cloud on AWS, enabling them to scale storage up or down independently from compute while paying only for the resources they need.

VMware Cloud Flex Storage Availability – A new VMware-managed and natively integrated cloud storage and data management solution that offers supplemental datastore-level access for VMware Cloud on AWS. With just a few clicks in the VMware Cloud Console, customers can scale their storage environment without adding hosts, and elastically adjust storage capacity up or down as needed for every application. Customers also benefit from a simple, pay-as-you-consume pricing model. Together with VMware vSAN, VMware Cloud Flex Storage offers flexibility and customer value in terms of resilience, performance, scale, and cost in the cloud.

VMware Cloud Flex Compute – “Preview” of a new cloud compute model. VMware introduces a “resource-defined” cloud compute model in place of the “hardware-defined” compute instance model, which will provide customers greater flexibility, elasticity, and speed to better meet the cost and performance requirements of enterprise applications. It will help customers get started faster with VMware Cloud on AWS by using smaller consumable units.

Azure VMware Solution – Customers will be able to purchase Azure VMware Solution as part of VMware Cloud Universal, a flexible purchasing and consumption program for executing multi-cloud and digital transformation strategies. VMware Cloud Director Service for Azure VMware Solution is also now available in Public Preview.

Google Cloud VMware Engine – VMware announced VMware Tanzu Standard edition on Google Cloud VMware Engine to help simplify Kubernetes adoption and management.

Oracle Cloud VMware Solution – New features and capabilities with VMware Tanzu Standard Edition and introduced support for single host SDDCs for non-production workloads.

VMware Cloud Management – VMware Aria

VMware unveiled a multi-cloud management portfolio called VMware Aria, which provides a set of end-to-end solutions for managing the cost, performance, configuration, and delivery of infrastructure and cloud native applications.

VMware Aria is the new brand that unifies the vRealize components, Tanzu Observability by Wavefront and CloudHealth under one umbrella and one name.

The VMware products and services within the VMware Aria portfolio are:

  • VMware Aria Automation (formerly vRealize Automation)
  • VMware Aria Operations (formerly vRealize Operations)
  • VMware Aria Operations for Networks (formerly vRealize Network Insight)
  • VMware Aria Operations for Logs (formerly vRealize Log Insight)
  • VMware Aria Operations for Secure Clouds (formerly CloudHealth Secure State)
  • VMware Aria Cost powered by CloudHealth (formerly CloudHealth)
  • VMware Aria Operations for Applications (formerly VMware Tanzu Observability)
  • VMware Skyline

VMware Aria Products

VMware Aria is anchored by VMware Aria Hub (formerly known as Project Ensemble), which provides centralized views and controls to manage the entire multi-cloud environment, and leverages VMware Aria Graph to provide a common definition of applications, resources, roles, and accounts.

VMware Aria Graph provides a single source of truth that is updated in near-real time. Other solutions on the market were designed in a slower moving era, primarily for change management processes and asset tracking. By contrast, VMware Aria Graph is designed expressly for cloud-native operations.

VMware Aria provides features and functions that span management disciplines and clouds to deliver unique value for multi-cloud governance, cross-cloud migration, and actionable business insights. In addition, there are three new end-to-end management services built on top of VMware Aria Hub and VMware Aria Graph:

  • VMware Aria Guardrails – Automate enforcement of cloud guardrails for networking, security, cost, performance, and configuration at scale for multi-cloud environments with an everything-as-code approach
  • VMware Aria Migration – Accelerate and simplify the multi-cloud migration journey by automating assessment, planning, and execution in conjunction with VMware HCX
  • VMware Aria Business Insights – Discern relevant business insights from full-stack event correlation leveraging AI/ML analytics

Networking and Security

Project Northstar – Project Northstar is a SaaS-based network and security offering that will empower NSX customers with a set of on-demand multi-cloud networking and security services, end-to-end visibility, and controls. Customers will be able to use a centralized cloud console to gain instant access to networking and security services, such as network and security policy controls, Network Detection and Response (NDR), NSX Intelligence, Advanced Load Balancing (ALB), Web Application Firewall (WAF), and HCX. It will support both private cloud and VMware Cloud deployments running on public clouds and enable enterprises to build flexible network infrastructure that they can spin up and down in minutes.


DPU-based Acceleration for NSX – Formerly known as Project Monterey, VMware announced that starting with NSX 4.0 and vSphere 8.0, customers can leverage DPU-based acceleration using SmartNICs. Offloading NSX services to the DPU can accelerate networking and security functions without impacting the host CPUs, addressing the needs of modern applications and other network-intensive and latency-sensitive applications.

Image of a SmartNIC

Project Trinidad – Available as a tech preview, Project Trinidad extends VMware’s API security and analytics by deploying sensors on Kubernetes clusters and uses machine learning with business logic inference to detect anomalous behavior in east-west traffic between microservices.

Project Watch – VMware unveiled Project Watch, a new approach to multi-cloud networking and security that will provide advanced app-to-app policy controls to help with continuous risk and compliance assessment. In technology preview, Project Watch will help network security and compliance teams to continuously observe, assess, and dynamically mitigate risk and compliance problems in composite multi-cloud applications.

Additionally, VMware NSX Advanced Load Balancer adds new bot management capabilities to help enterprises address threats quickly and efficiently, providing enhanced multi-layer application protection with existing Web Application Firewall, DDoS protection, and API security.

Edge

VMware Edge Compute Stack 2.0 – VMware announced VMware Edge Compute Stack 1.0 last year and is now adding more features and functionality optimized for different use cases at the enterprise edge – shipped with vSphere 8 and Tanzu Kubernetes Grid 2.0. For the first time, VMware will introduce initial support for non-x86 processor-based specialized small form factor edge platforms to simultaneously run IT/OT workloads and workflows on a single stack.

 

VMware Private Mobile Network (Beta) – Delivered by service providers, this new managed service offering provides enterprises with private 4G/5G mobile connectivity in support of edge-native applications. VMware will empower partners with a single PMN orchestrator to operate multi-tenant private 4G/5G networks with an enterprise-grade solution. 

Modern Applications (VMware Tanzu)

Tanzu Application Platform – VMware pre-announced new Tanzu Application Platform (TAP) 1.3 capabilities, such as availability on Red Hat OpenShift and support for air-gapped installations for regulated and disconnected environments.

Tanzu Mission Control – Finally, VMware announced a preview of lifecycle management for Amazon Elastic Kubernetes Service (EKS) clusters, which enables direct provisioning and management of EKS clusters – awesome! I suppose we can expect support for Azure Kubernetes Service (AKS) to follow very soon.

Tanzu Kubernetes Grid – With the release of TKG 2.0, VMware now includes a unified experience for applications running on any cloud. In the near future, Tanzu Kubernetes Grid 2.0 should support both Supervisor-based and VM-based management cluster models. On vSphere 8, both models will be supported, and VM-based management clusters will continue to be available on previous versions of vSphere and public clouds. In other words, VMware continues with its “TKGS” and “TKGm” flavors.

Tanzu Service Mesh – Also pre-announced, VMware is adding several enterprise and application resiliency capabilities into Tanzu Service Mesh:

  • Support for customer-owned enterprise certificate authorities through integration with Venafi
  • Improved security with enterprise-approved container image registries, data services support, and external services support
  • A global SLO dashboard that allows developers and site-reliability engineers to view all managed service SLOs, helping with capacity planning, troubleshooting, and understanding the health of their applications

Read more about all the Tanzu announcements here.

Anywhere Workspace

VMware unveiled how it is advancing self-configuring, self-healing and self-securing outcomes across four key technology areas that are delivered by the Anywhere Workspace platform:

  • VDI and DaaS
  • Digital Employee Experience
  • Unified Endpoint Management
  • Security

VMware is introducing a next generation of VMware Horizon Cloud that will enable multi-cloud agility and flexibility. This new release represents a major update to Horizon Cloud on Microsoft Azure that can dramatically simplify the infrastructure that needs to be deployed inside customer environments, reducing infrastructure costs in some cases by over 70% while increasing scalability and reliability of VMware’s DaaS platform.

20K user infrastructure cost comparison

Workspace ONE UEM’s Freestyle Orchestrator will be expanding to include support for mobile devices.

Workspace ONE support for Windows OS multi-user mode is now available in Tech Preview for Azure Active Directory-based deployments; and it will soon be extended to Active Directory-based deployments.

VMware also announced the coming tech preview of Workspace ONE Cloud Marketplace, which will feature dashboards, widgets, reports, Freestyle Orchestrator workflows, and other resources that can be imported to help customers adopt additional solutions.

Horizon Managed Desktop –  I am very excited about this announcement, because it will provide a managed service offering that takes care of lifecycle services, support, and more, on top of a customer-provided infrastructure. This will help customers that don’t have in-house experts get to value with VDI faster.

Availability

VMware Cloud Foundation+, VMware vSphere 8, VMware vSAN 8 and VMware Edge Compute Stack 2.0 are all expected to be available by October 28, 2022 (the close of VMware’s Q3 FY23). VMware Private Mobile Network is expected to be available in beta in VMware’s Q3 FY23.

Closing Comment

Not bad for the first day, right? Stay tuned for more exciting VMware Explore announcements!

Interclouds And The Future of Cloud Computing

I am finally taking the time to write this piece about interclouds, workload mobility and application portability. Some of my engagements during the past four weeks led me several times to discussions about interclouds and workload mobility.

Cloud to Cloud Interoperability and Federation

Who would have thought back in 2012 that we would have so many (public) cloud providers like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud etc. in 2022?

Ten years ago, many people and companies were convinced that the future would consist of public cloud infrastructure only and that local, self-managed data centers would disappear.

This vision and perception of cloud computing has changed dramatically over the past few years. We see public cloud providers stretching their cloud services and infrastructure to large data centers or edge locations. It seems they realized that the future is going to look different from what a lot of people anticipated back then.

I was not aware that the word “intercloud” – and the need for it – has apparently been around for a long time already. Let’s take David Bernstein’s presentation as an example, which I found by googling “intercloud”:

This presentation is about avoiding the mistake of using proprietary protocols and cloud infrastructures that lead to silos and a non-interoperable architecture. He was part of the IEEE Intercloud Working Group (P2302) which was working on a standard for “Intercloud Interoperability and Federation (SIIF)” (draft), which mentioned the following:

Currently there are no implicit and transparent interoperability standards in place in order for disparate cloud computing environments to be able to seamlessly federate and interoperate amongst themselves. Proposed P2302 standards are a layered set of such protocols, called “Intercloud Protocols”, to solve the interoperability related challenges. The P2302 standards propose the overall design of decentralized, scalable, self-organizing federated “Intercloud” topology.

David Bernstein Intercloud

I do not know David Bernstein and the IEEE working group personally, but it would be great to hear from some of them, what they think about the current cloud computing architectures and how they envision the future of cloud computing for the next 5 or 10 years.

As you can see, the wish for an intercloud protocol or an intercloud has existed for a while. Let us quickly have a look at how others define intercloud:

Cisco in 2008 (it seems that David Bernstein worked at Cisco at that time): Intercloud is a network of clouds that are linked with each other. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data.

Teradata: Intercloud is a cloud deployment model that links multiple public cloud services together as one holistic and actively orchestrated architecture. Its activities are coordinated across these clouds to move workloads automatically and intelligently (e.g., for data analytics), based on criteria like their cost and performance characteristics.

The Future of Cloud Computing

I found this post on Twitter on May 19th, 2022:

Alvin Cheung Berkeley Intercloud

Alvin Cheung is an associate professor at Berkeley EECS and wrote the following in his Twitter comments:

we argue that cloud computing will evolve to a new form of inter-cloud operation: instead of storing data and running code on a single cloud provider, apps will run on an inter-operating set of cloud providers to leverage their specialized services / hw / geo etc, much like ISPs.

Alvin and his colleagues wrote a publication, “A Berkeley View on the Future of Cloud Computing”, which mentions the following very early in the PDF:

We predict that this market, with the appropriate intermediation, could evolve into one with a far greater emphasis on compatibility, allowing customers to easily shift workloads between clouds.

[…] Instead, we argue that to achieve this goal of flexible workload placement, cloud computing will require intermediation, provided by systems we call intercloud brokers, so that individual customers do not have to make choices about which clouds to use for which workloads, but can instead rely on brokers to optimize their desired criteria (e.g., price, performance, and/or execution location).

We believe that the competitive forces unleashed by the existence of effective intercloud brokers will create a thriving market of cloud services with many of those services being offered by more than one cloud, and this will be sufficient to significantly increase workload portability.

Intercloud Broker

Organizations place their workloads in the cloud that makes the most sense for them. Depending on different regulations, data classification, available cloud services, locations, or pricing, they decide which data or workload goes to which cloud.

The people from Berkeley do not necessarily promote a multi-cloud architecture, but have the idea of an intercloud broker that places your workload on the right cloud based on different factors. They see the intercloud as an abstraction layer with brokering services:

In my understanding, their idea goes in the direction of an intelligent and automated cloud management platform that decides where a specific workload and its data should be hosted – and that, for example, migrates the workload to another cloud if it is cheaper than the current one.
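
To make the broker idea more concrete, here is a small, purely illustrative Python sketch of how such a placement decision could be scored. All provider names, prices, latencies and weights are invented; a real intercloud broker would of course evaluate far more criteria.

```python
# Toy intercloud broker: filter clouds by hard constraints (region, latency),
# then score the remaining candidates by weighted cost and latency.
# All offers, numbers and weights below are invented examples.

from dataclasses import dataclass

@dataclass
class CloudOffer:
    name: str
    price_per_hour: float   # USD per hour for the required instance shape
    avg_latency_ms: float   # latency from the workload's main user base
    regions: tuple          # regions where the provider can place the workload

@dataclass
class Requirements:
    allowed_regions: tuple  # e.g. a data-residency constraint
    max_latency_ms: float
    cost_weight: float = 0.7
    latency_weight: float = 0.3

def score(offer: CloudOffer, req: Requirements) -> float:
    # Lower is better: weighted sum of cost and (roughly normalized) latency.
    return req.cost_weight * offer.price_per_hour + req.latency_weight * (offer.avg_latency_ms / 10)

def place(offers, req):
    # Drop providers that violate hard constraints, then pick the best score.
    candidates = [o for o in offers
                  if o.avg_latency_ms <= req.max_latency_ms
                  and any(r in req.allowed_regions for r in o.regions)]
    return min(candidates, key=lambda o: score(o, req)) if candidates else None

offers = [
    CloudOffer("cloud-a", 0.48, 25, ("eu-west", "us-east")),
    CloudOffer("cloud-b", 0.41, 60, ("us-east",)),
    CloudOffer("cloud-c", 0.55, 15, ("eu-west",)),
]

print(place(offers, Requirements(allowed_regions=("eu-west",), max_latency_ms=50)))
# -> cloud-c wins with these example weights; change the weights and the broker decides differently
```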

Cloud Native Technologies for Multi-Cloud

Companies are modernizing or rebuilding their legacy applications, or creating new modern applications using cloud native technologies. Modern applications are collections of microservices, which are lightweight, fault-tolerant and small. These microservices can run in containers deployed on a private or public cloud.

This means that a modern application can adapt to any environment and perform equally well.

The challenge today is that we have modern architectures, new technologies/services and multiple clouds running with different technology stacks. And we have Kubernetes as a framework, which is available in different flavors (DIY or offerings like Tanzu TKG, AKS, EKS, GKE etc.).

Then there is the Cloud Native Computing Foundation (CNCF) and the open source community, which embrace the principle of “open” software that is created and maintained by a community.

It is about building applications and services that can run on any infrastructure, which also means avoiding vendor or cloud lock-in.

Challenges of Interoperability and Multiple Clouds

If you discuss multi-cloud and infrastructure independent applications, you mostly end up with an endless list of questions like:

  • How can we achieve true workload mobility or application portability?
  • How do we deal with the different technology formats and the “language” (API) of each cloud?
  • How can we standardize and automate our deployments?
  • Is latency between clouds a problem?
  • What about my stateful data?
  • How can we provide consistent networking and security?
  • What about identity federation and RBAC?
  • Is the performance of each cloud really the same?
  • How should we encrypt traffic between services in multiple clouds?
  • What about monitoring and observability?

Workload Mobility and Application Portability without an Intercloud

VMware has a different view of, and approach to, how workload mobility and application portability can be achieved.

The value add and goal are the same, but the strategy for abstracting clouds is different.

VMware is not building an intercloud; instead, it provides customers a technology stack (compute, storage, networking) – a cloud operating system, if you will – that can run on top of every major public cloud provider like AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud and Alibaba Cloud.

VMware Workload Mobility

This consistent infrastructure makes it extremely easy to migrate workloads – especially virtual machines and legacy applications – to any location.

What about modern applications and Kubernetes? What about developers who do not care about (cloud) infrastructures?

Project Cascade

At VMworld 2021, VMware announced the technology preview of “Project Cascade” which will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.

The idea is to provide customers a converged IaaS and CaaS consumption service across any cloud, exposed through different Kubernetes APIs.

VMware Project Cascade

I heard the statement “Kubernetes is complex and hard” many times at KubeCon Europe 2022 and Project Cascade is clearly providing another abstraction layer for VM and container orchestration that should make the lives of developers and operators less complex.

Project Ensemble

Another project in tech preview since VMworld last year is “Project Ensemble“. It is a multi-cloud management platform that provides an app-centric self-service portal with predictive support.

Project Ensemble will deliver a unified consumption surface that meets the unique needs of the cloud administrator and SRE alike. From an architectural perspective, this means creating a platform designed for programmatic consumption and a firm “API First” approach.

I can imagine that it will be a service that leverages artificial intelligence and machine learning to simplify troubleshooting, and that in the future it will be capable of intelligently placing or migrating your workloads to the most appropriate cloud (for example, based on cost), including all attached networking and security policies.

Conclusion

I believe that VMware is on the right path by giving customers the option to build a cloud-agnostic infrastructure with the necessary abstraction layers for IaaS and CaaS, including the cloud management platform. By providing a common way or standard to run virtual machines and containers in any cloud, I am convinced VMware is becoming the de facto standard for infrastructure for many enterprises.

VMware Vision and Strategy 2022

By providing a consistent cloud infrastructure and a consistent developer model and experience, VMware bridges the gap between the developers and operators, without the need for an intercloud or intercloud protocol. That is the future of cloud computing.

 


Build a Digital Manufacturing Platform with the VMware Edge Compute Stack

VMware revealed their edge computing vision at VMworld 2021. In VMware’s view the multi-cloud extends from the public clouds to private clouds to edge. Edge is about bringing apps and services closer to where they are needed, especially in sectors like retail, transportation, energy and manufacturing.

In verticals like manufacturing, the edge has always been important. It’s about producing things that you can sell. If you cannot produce, you lose time and money. Reliability, stability and factory uptime are not new requirements. But why is edge becoming so important now?

Without looking at any analyst report – just based on experience from the field – it is clear why. Almost all large enterprises are migrating workloads from their global (central) data centers to the public cloud. At the same time, customers are looking at new innovations and technologies to connect their machines, processes, people and data in a much more efficient way.

Which requirement did all my customers have in common? They didn’t want to move their dozens or hundreds of edge infrastructures to the public cloud, because the factories should work independently and autonomously in case of a WAN outage for example. Additionally, some VMware technologies were already deployed at the edge.

VMware Edge Compute Stack

This is why VMware introduced the so-called “Edge Compute Stack” (ECS) in October 2021, which provides a unified platform to run VMs alongside containerized applications at the far edge (aka enterprise edge). ECS is a purpose-built stack that is available in three different editions (information based on initial availability from VMworld 2021):

VMware Edge Compute Stack Editions

As you can see, each VMware Edge Compute Stack edition has the vSphere Enterprise+ (hypervisor) included, software-defined storage with vSAN is optional, but Tanzu for running containers is always included.

While ECS is great, the purpose of this article is to highlight different solutions and technologies that help you build the foundation for a digital manufacturing platform.

IT/OT Convergence

You most probably have a mix of home-grown and COTS (commercial off-the-shelf) software that needs to be deployed in your edge locations (e.g., factories, markets, shops etc.). In manufacturing, OT (operational technology) vendors have only just started adopting container technologies, due to unique technology requirements and a business model that relies on proprietary systems.

The OT world is typically very hardware-centric and uses proprietary architectures. These systems and architectures, which were put into production 15-20 years ago, are still functional. It just worked.

While these methods and architectures have served the industry well, manufacturers realized that this static and inflexible approach resulted in technology debt that didn’t allow any innovation for a long period of time.

Manufacturing companies are moving to a cloud-native architecture that should provide more flexibility and vendor interoperability with the same focus in mind: To provide a reliable, scalable and flexible infrastructure.

This is when VMware becomes relevant again with their (edge) compute stack. VMware vSphere allows you to run VMs and containers on the same platform. This is true for IT and OT workloads – that’s partial IT/OT convergence.

You may ask yourself how you would then design the network. I’ll get to that topic in a minute.

Kubernetes Operations

IT platform teams, who design and manage the edge, have to expand their (VMware) platform capabilities to deploy and host containers. As mentioned before, this is why Tanzu is included in all the VMware Edge Compute Stack editions. Kubernetes is the new Infrastructure-as-a-Service (IaaS), so it only makes sense that container deployment and management capabilities are included.

How do you provide centralized or regional Kubernetes management and operations if you don’t have a global (regional) data center anymore?

With a hybrid approach, by using Tanzu for Kubernetes Operations (TKO), a set of SaaS services that allow you to run, manage, connect and secure your container infrastructure across clouds and edge locations.

IT/OT Security

Now you have the right platform to run your IT and OT workloads on the same hypervisor or compute platform. You also have a SaaS-based control plane to deploy and manage your Kubernetes clusters. 

As soon as you are dealing with a very dynamic environment where containers exist, you end up having discussions about software-defined networking or virtualized networks. Apart from that, every organization and manufacturer is transforming network and security at the edge and talking about network segmentation (and cybersecurity!).

Traditionally, you’ll find the Purdue Model implemented, a concept model for industrial control systems (ICS) that breaks the network into two zones:

  • Information Technology (IT)
  • Operational Technology (OT)

The Purdue Model of Computer Integrated Manufacturing

Source: https://www.automationworld.com/factory/iiot/article/21132891/is-the-purdue-model-still-relevant 

In these IT and OT zones you’ll find subzones that describe different layers and the ICS components. As you can see, each level is secured by a dedicated physical firewall appliance. From this drawing one could say that the IT and OT worlds converge in the DMZ layer, because of the bidirectional traffic flow.

VMware is one of the pioneers when it comes to network segmentation that helps you drive IT/OT convergence. This is made possible by using network virtualization. As soon as you are using the VMware hypervisor and its integrated virtual switch, you are already using a virtualized network.

To bring IT and OT closer together and to provide a virtualized network design based on the Purdue Model including a zero-trust network architecture, you would start looking at VMware NSX to implement that.
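
As a thought experiment, segmentation along the Purdue levels essentially boils down to a default-deny allow-list of which zones may talk to which. The toy Python sketch below uses made-up zone names and is not NSX policy syntax; it only illustrates the zero-trust idea.

```python
# Toy zone-to-zone allow-list inspired by the Purdue Model.
# Zone names, rules and flows are illustrative only, not NSX syntax.

ALLOWED_FLOWS = {
    ("L4_enterprise_it", "L35_dmz"),      # enterprise IT talks to the IT/OT DMZ
    ("L35_dmz", "L3_operations"),         # DMZ talks to site operations (e.g. MES)
    ("L3_operations", "L2_supervisory"),  # MES talks to SCADA/HMI
    ("L2_supervisory", "L1_control"),     # SCADA/HMI talks to the controllers
}

def is_allowed(src: str, dst: str) -> bool:
    # Zero-trust default deny: a flow is only allowed if it is explicitly listed
    # (checked in both directions here to keep the toy example simple).
    return (src, dst) in ALLOWED_FLOWS or (dst, src) in ALLOWED_FLOWS

print(is_allowed("L4_enterprise_it", "L35_dmz"))     # True
print(is_allowed("L4_enterprise_it", "L1_control"))  # False – IT may not reach controllers directly
```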

In case you are looking for a software-defined load balancer or application delivery controller, have a look at NSX Advanced Load Balancer (formerly known as Avi).

PLC Virtualization

In level 2 of the Purdue Model, which hosts the systems for supervising, monitoring and controlling the physical process, you will find components like human-machine interfaces (HMI) and supervisory control and data acquisition (SCADA) software.

In level 3, manufacturing execution systems (MES) can be found.

Nowadays, most companies already run their HMIs, SCADAs and MES software in virtual machines on the VMware vSphere hypervisor.

The next big thing is the virtualization of PLCs (programmable logic controllers) – industrial computers that control manufacturing processes such as machines, assembly lines and robotic devices. Traditional PLC implementations in hardware are costly and lack scalability.

That is why the company SDA was looking for a less hardware-centric, more software-centric approach and developed the SDA vPLC, which is able to meet sub-10 ms performance.

This vPLC solution is based on a hybrid architecture between a cloud system and the industrial workload at the edge, which has been tested on VMware’s Edge Compute Stack.
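
To give a feeling for what “sub-10 ms” means here, the following toy Python loop runs a PLC-style scan cycle and counts how often the cycle exceeds a 10 ms budget. It is purely illustrative and has nothing to do with the actual SDA vPLC implementation.

```python
# Toy "scan cycle": read inputs, execute logic, write outputs, and check whether
# each cycle stays within a 10 ms budget. All functions are placeholders.

import time

CYCLE_BUDGET_S = 0.010  # 10 ms

def read_inputs():              # placeholder for reading sensor values
    return [0.0] * 64

def execute_logic(inputs):      # placeholder for the control program
    return [value + 1.0 for value in inputs]

def write_outputs(outputs):     # placeholder for writing actuator values
    pass

overruns = 0
for _ in range(1000):
    start = time.perf_counter()
    write_outputs(execute_logic(read_inputs()))
    if time.perf_counter() - start > CYCLE_BUDGET_S:
        overruns += 1

print(f"cycles over the 10 ms budget: {overruns} / 1000")
```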

Monitoring & Troubleshooting

One area, which we haven’t highlighted yet, is the monitoring and troubleshooting of virtual machines (VMs). The majority of your workloads are still VM-based. How do you monitor these workloads and applications, deal with resource and capacity planning/management, and troubleshoot, if you don’t have a central data center anymore?

With the same approach as before – just with a cloud-based service. Most organizations rely on vRealize Operations (vROps) and vRealize Log Insight (vRLI) so that their IT operations and platform teams gain visibility into all the main and edge data centers.

You can still use vROps and vRLI (on-premises) in your factories, but VMware recommends using the vRealize Cloud Universal (vRCU) SaaS management suite, that gives you the flexibility to deploy your vRealize products on-premises or as SaaS. In an edge use case the SaaS-based control plane just makes sense.

In addition to vRealize Operations Cloud you can make use of the vRealize True Visibility Suite (TVS), that extends your vRealize Operations platform with management packs and connectors to monitor different compute, storage, network, application and database vendors and solutions.

Factory VDI

Some of your factories may need virtual apps or desktops, and for edge use cases there are different possible architectures available. Where a factory has a few hundred concurrent users, a dedicated standalone VDI/RDSH deployment might make sense. What if you have hundreds of smaller factories and don’t want to maintain a complete VDI/RDSH infrastructure?

VMware is currently working on a new architecture for VMware Horizon (aka VMware Horizon Next-Generation) and their goal is to provide a single, unified platform across on-premises and cloud environments.  They also plan to do that by introducing a pod-less architecture that moves key components to the VMware-hosted Horizon (Cloud) Control Plane.

This architecture is perfectly made for edge use cases and with this approach customers can reduce costs, expect increased scalability, improve troubleshooting and provide a seamless experience for any edge or cloud location.

VMware Horizon Next-Generation 

Management for Enterprise Wearables

If your innovation and tech teams are exploring new possibilities with wearable technologies like augmented reality (AR), mixed reality (MR) and virtual reality (VR) head-mounted displays (HMDs), then VMware Workspace ONE Unified Endpoint Management (UEM) can help you securely manage these devices!

Workspace ONE UEM is very strong when it comes to the modern management of Windows Desktop and macOS operating systems, and device management (Android/iOS).

Conclusion

As you can see, VMware has a lot to offer for the enterprise edge. Organizations that are multi-cloud and keep their edge locations on-premises have a lot of new technologies and possibilities nowadays.

VMware’s strengths unfold as soon as you combine different solutions. And these solutions help you work on your priorities and requirements to build the right foundation for a digital manufacturing platform.

DevSecOps with VMware Tanzu – Intrinsic Security for a Modern Application Supply Chain

Intrinsic security is something we have heard a lot about from VMware in the past. It was mostly used to describe the strategy and capabilities behind the Carbon Black portfolio (EDR), complemented by the advanced threat prevention from NSX (NDR), which together form the VMware XDR vision.

I see similarities between intrinsic security and the workouts I do in the gym. My goal is to build more strength and power, and to become healthier in general. For additional muscle gain benefits and to be more time efficient, I have chosen compound exercises. I am not a fan of isolation exercises that target a single muscle group. Our body has a lot of joints for different movements, and I think it’s just natural to use multiple muscle groups and joints during a specific exercise.

Therefore, when you perform compound exercises, you involve different muscles to complete the movement. This improves the intermuscular coordination of your muscles. In addition, as everyone would tell you, these exercises improve your core strength and let your body become a single unit.

While doing weight training, it is very important to use the proper technique and equipment. Otherwise, the risk for injuries and vulnerabilities increases.

This is what intrinsic security means for me! And I think this is very much relevant to understand when talking about DevSecOps.

Understanding DevSecOps

For VMware, talking to developers and talking about DevOps started in 2019, when they presented VMware Tanzu for the first time at VMworld. The ideas and innovation behind the name “Tanzu” should bring developers and IT operators closer together for collaboration.

DevOps is the combination of different practices, tools and philosophies that help an organization deliver applications and services at a higher pace. In the example above, it means that application developers and operations teams no longer work isolated in silos; they become one team, a single unit. But technology plays a very important role in supporting the success of this new mindset and culture!

DevOps is about efficiency and the automation of manual tasks or processes. You want to become fast, flexible and efficient. When you put security at the center of this, we start talking about DevSecOps. You want to know if one of your muscles or body parts becomes weak (defective) or vulnerable.

Tanzu DevSecOps Flow

Depending on where you are right now on this application modernization journey, doing DevSecOps could mean a huge cultural and fundamental change to how you develop applications and do IT operations.

For me, DevSecOps is not about bringing security tools together from different teams and technologies. If DevOps and DevSecOps mean that you must change your mindset, then it is maybe also about time to consider the importance of new technology choices.

If DevSecOps means that you put security in the center of a DevOps- or container-centric environment, then security must become an intrinsic part of a modern application supply chain.

The VMware Tanzu portfolio has a lot of products and services to bring developers, operations and security teams together.

Where do we start? We need to “shift left”, which means we need to integrate security early in the application lifecycle.

Code – Spring Framework

Before you can deliver an application to your customer, you need to develop it, you need to code. Application frameworks are a very effective approach for developing more secure and optimized applications.

Frameworks help you write code faster and more efficiently. Not only can a framework save your developers a lot of coding effort, it also comes with pre-defined templates. Frameworks incorporate best practices and help you simplify the overall application architecture.

Why is this important? To achieve better security or a more secure cloud native application, it makes sense to standardize and automate. Automation is key for security. Standardization makes it easier to understand or reuse code. You can write all the code yourself, but the chances are high that someone else did parts of your work already. Less variability reduces complexity and therefore enhances security.

There is, for example, the open-source Spring Framework, which uses Java as the underlying language (or .NET in the case of Steeltoe). Both projects are managed by VMware and millions of developers use them.

Tanzu Spring Steeltoe

What happens next? You would now run your continuous integration (CI) process (integration tests, unit tests) and then you are ready to package or build your application.

Build – Tanzu Build Service (TBS)

So, your code is now good for release. If you want to deploy your application to a Kubernetes environment, then you need a secure, portable and reproducible build that can be checked for security vulnerabilities, and you need an easy way to patch those vulnerabilities.

How are you going to build the container image that your application is packaged into? A lot of customers and vendors take a Dockerfile-based approach.

VMware recommends Tanzu Build Service (TBS), which uses Tanzu Buildpacks that are based on the open-source Cloud Native Buildpacks CNCF project to turn application source code into container images. So, no dockerfiles.

TBS is constantly looking for changes in your source code and then automatically builds an image based on that. This means with TBS you don’t need any advanced knowledge of container packaging formats or know how to optimally construct a container creation script for a given programming language.

Tanzu Build Service knows all the images you have built and understands all the dependencies and components you have used. If something changes, your image is going to be rebuilt automatically and then stored in a registry of your choice. More about the registry in a second.

Tanzu Build Service

What happens if a vulnerability comes out and one of your libraries, operating systems or components is affected? TBS would patch this vulnerability and all the affected downstream container images would be updated automatically.

Imagine how happy your CISO would be about this way of building secure container images! 🙂
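
Conceptually, TBS keeps track of which images were built from which base layers and buildpacks, so a patched component maps directly to the set of images that need a rebuild. A rough Python sketch of that bookkeeping (image names and dependencies are invented; this is not the actual TBS implementation):

```python
# Toy model of the "patched base -> rebuild downstream images" idea.
# Image names and their dependencies are invented for illustration.

IMAGE_DEPENDENCIES = {
    "harbor.example.com/shop/frontend:1.4":   {"ubuntu-base", "node-buildpack"},
    "harbor.example.com/shop/checkout:2.1":   {"ubuntu-base", "java-buildpack"},
    "harbor.example.com/shop/batch-jobs:0.9": {"alpine-base", "python-buildpack"},
}

def images_to_rebuild(patched_component: str):
    """Return every image that contains the patched base layer or buildpack."""
    return [image for image, deps in IMAGE_DEPENDENCIES.items()
            if patched_component in deps]

# A CVE is fixed in the shared OS base layer:
print(images_to_rebuild("ubuntu-base"))
# -> frontend and checkout get rebuilt and pushed to the registry; batch-jobs is untouched
```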

Build – Harbor

We have now pushed our container image to a container repository, a so-called registry. VMware uses Harbor (open-source cloud native registry by VMware, donated to the CNCF in 2018) as an enterprise-grade storage for container images. Additionally, Harbor provides static analysis of vulnerabilities in images through open-source projects like Trivy and Clair.
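
A common pattern is to let the CI pipeline read the scan result from the registry and block the deployment if the findings are too severe. Here is a hedged Python sketch of such a gate; the report structure below is a simplified, invented example and not the exact output format of Harbor, Trivy or Clair.

```python
# Toy CI gate: block a deployment if the image scan report contains findings at
# or above a severity threshold. The report structure is a simplified example.

SEVERITY_ORDER = ["Low", "Medium", "High", "Critical"]

def gate(scan_report: dict, threshold: str = "High") -> bool:
    """Return True if the image may be deployed, False if it must be blocked."""
    limit = SEVERITY_ORDER.index(threshold)
    blocking = [v for v in scan_report.get("vulnerabilities", [])
                if SEVERITY_ORDER.index(v["severity"]) >= limit]
    for vuln in blocking:
        print(f"blocked by {vuln['id']} ({vuln['severity']}) in {vuln['package']}")
    return not blocking

report = {
    "vulnerabilities": [
        {"id": "CVE-2022-0001", "severity": "Medium",   "package": "openssl"},
        {"id": "CVE-2022-0002", "severity": "Critical", "package": "log4j"},
    ]
}

assert gate(report) is False  # the Critical finding blocks the deployment
```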

Tanzu Build Service Harbor

We have now developed our applications and stored our packaged images in our Harbor registry. What else do we need?

Build – VMware Application Catalog (VAC)

Developers are not going to build everything by themselves. Other services like databases or caching are needed to build the application as well, and there is a lot of well-known, pre-packaged open-source software freely available online. This brings additional security risks and gives malicious actors the opportunity to publish container images that contain vulnerabilities.

How can you mitigate this risk and reduce the chance for a critical application outage or breach?

In 2019, VMware acquired Bitnami, which delivers and maintains a catalog of 130+ pre-packaged and ready-to-use open-source application components, that are “continuously maintained and verifiably tested for use in production environments”.

Now known as VMware Application Catalog (VAC, formerly Tanzu Application Catalog), this SaaS offering provides your organization with a customizable private collection of open-source software and services that can automatically be placed in your private container image registry – in this case, your Harbor registry.

Example apps that are supported today:

  • Language runtimes: Node.js, Python, Ruby, Java
  • Databases: MySQL, PostgreSQL, MariaDB, MongoDB
  • App components: Kafka, RabbitMQ, TensorFlow, ElasticSearch
  • Developer tools: Artifactory, Jenkins, Redmine, Harbor
  • Business apps: WordPress, Drupal, Magento, Moodle

How does it work?

VMware Application Catalog - How it works

There are two product features that I would like to highlight:

  • Build-time CVE scan reports for container images using Trivy
  • Build-time Antivirus scans for container images using ClamAV

Your application, built by Tanzu Build Service and VMware Application Catalog, is now complete and stored in your Harbor registry. And since you use VAC, you also have your “marketplace” of applications that is curated by a (security) team in your organization.

If you want to see VAC in action, have a look at this Youtube video.

Note: Yes, VAC is a SaaS-hosted application and you may have concerns because you are a public/federal customer. That’s no problem. Consider VAC as a trusted source you can copy things from. There is no data stored in the public cloud, nor does it run anything up there. Download your packages from this trusted repository over to your air-gapped environment.

Run – Tanzu Kubernetes Grid (TKG)

Your application is ready to be deployed, and the next step in your pipeline is “continuous deployment“. We can finally deploy our applications to a Kubernetes cluster.

Tanzu Kubernetes Grid or TKG is VMware’s own consistent and conformant Kubernetes distribution that can run in any cloud. VMware’s strategy is about running the same Kubernetes dial tone across data centers and public cloud, which enables a consistent and secure experience for your developers.

TKG has a tight integration with vSphere called “vSphere with Tanzu”. Since TKG is an enterprise-ready Kubernetes for a multi-cloud infrastructure, it can also run in all major public clouds.

If consistent automation is important to you and you want to run Kubernetes in an air gapped environment, where there is no AWS, Azure or any other major public cloud provider, then a consistent Kubernetes version like TKG would add value to your infrastructure.

Manage/Operate – Tanzu Mission Control (TMC)

How do we manage these applications on any Kubernetes cluster (VMware TKG, Amazon EKS, Microsoft AKS, Google GKE), that can run in any cloud?

Some organizations started with TKG and others already started with managed Kubernetes offerings like EKS, AKS or GKE. That’s not a problem. The question here is how you deploy, manage, operate, and secure all these different clusters.

VMware’s solution for that is Tanzu Mission Control (TMC), a SaaS-based tool hosted by VMware and the first offering I’m covering that is part of a global Tanzu control plane. TMC is a solution that makes your multi-cloud and multi-cluster Kubernetes management much easier.

With TMC you’ll get:

  • Centralized Cluster Lifecycle Management. TMC enables automated provisioning and lifecycle management of TKG clusters across any cloud. It provides centralized provisioning, scaling, upgrading and deletion functions for your Kubernetes clusters. Tanzu Mission Control also allows you to attach any CNCF-conformant Kubernetes cluster (K8s on-prem, K8s in public cloud, TKG, EKS, AKS, GKE, OpenShift) to the platform for management, visibility, and analytics purposes. I would expect that we can use TMC in the future to lifecycle-manage offerings like EKS, AKS or GKE.
  • Centralized Policy Management. TMC has a very powerful policy engine to apply consistent policies across clusters and clouds. You can create security, access, network, quota, registry, and custom policies (Open Policy Agent framework) – see the sketch after this list.
  • Identity and Access Management. Another important feature you don’t want to miss with DevSecOps in mind is centralized authentication and authorization, and identity federation from multiple sources like AD, LDAP and SAML. Make sure you give the right people or project teams the right access to the right resources.
  • Cluster Inspection. There are two inspections that you can run against your Kubernetes clusters. TMC leverages the built-in open-source project Sonobuoy, which makes sure your clusters are configured in conformance with Cloud Native Computing Foundation (CNCF) standards. Tanzu Mission Control provides CIS benchmark inspection as another option.
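
To make the policy idea more tangible, here is a toy Python sketch of what “define a registry policy once, enforce it on every attached cluster” means. The cluster names, registries and the policy format are invented and do not reflect TMC or Open Policy Agent syntax.

```python
# Toy illustration of one "allowed image registries" policy applied consistently
# across several attached clusters. Names and formats are invented examples.

ALLOWED_REGISTRIES = ("harbor.example.com", "registry.vmware.com")

CLUSTER_IMAGES = {
    "tkg-onprem-prod": ["harbor.example.com/shop/frontend:1.4"],
    "eks-us-east-1":   ["harbor.example.com/shop/checkout:2.1",
                        "docker.io/random/unvetted:latest"],
    "aks-westeurope":  ["registry.vmware.com/tanzu/app:3.0"],
}

def violations(cluster_images: dict, allowed: tuple):
    # Return every (cluster, image) pair whose image is not from an allowed registry.
    return [(cluster, image)
            for cluster, images in cluster_images.items()
            for image in images
            if not image.startswith(allowed)]

for cluster, image in violations(CLUSTER_IMAGES, ALLOWED_REGISTRIES):
    print(f"{cluster}: {image} violates the registry policy")
# -> only the unvetted docker.io image on eks-us-east-1 is flagged
```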

Tanzu Mission Control

Tanzu Mission Control integrates with other Tanzu products like Tanzu Observability and Tanzu Service Mesh, which I’m covering later.

Connect – Antrea

VMware Tanzu uses Antrea as the default container network interface (CNI) and Kubernetes NetworkPolicy to provide network connectivity and security for your pods. Antrea is an open-source project with active contributors from Intel, Nvidia/Mellanox and VMware, and it supports multiple operating systems and managed Kubernetes offerings like EKS, AKS or GKE!

Antrea uses Open vSwitch (OvS) as the networking data plane in every Kubernetes node. OvS is a high-performance and programmable virtual switch that supports not only Linux but also Windows. VMware is working on reaching feature parity between them and is even working on support for ARM hosts in addition to x86 hosts.

Antrea creates overlay networks using VXLAN or Geneve for encapsulation and encrypts node-to-node communication if needed.
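
Since Antrea implements the standard Kubernetes NetworkPolicy API, a simple pod-to-pod rule can be created like this – a minimal sketch using the official Kubernetes Python client, where the namespace, labels, port and policy name are made-up examples:

```python
# Minimal sketch: allow only "shop-frontend" pods to reach "shop-db" pods on
# TCP/5432, using the standard Kubernetes NetworkPolicy API that Antrea enforces.
# Namespace, labels, port and policy name are invented for illustration.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-db"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "shop-db"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "shop-frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=5432)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="shop", body=policy)
```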

Connect & Secure – NSX Advanced Load Balancer

Ingress is a very important component of Kubernetes and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside of a Kubernetes cluster. Getting an application up and running is only half of the story – the application still needs a way for users to access it. If you would like to know more about ingress, I can recommend this short introduction video.

While Contour is a great open-source project, VMware recommends Avi (aka NSX Advanced Load Balancer), which provides many more enterprise-grade features like L4 load balancing, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity, and analytics are important to you, then Avi Enterprise, a true software-defined multi-cloud application delivery controller, is definitely the better fit.
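
Regardless of whether Contour or Avi terminates the traffic, the routing rules themselves are expressed as a standard Kubernetes Ingress object. Here is an illustrative example written as a Python dict (host, namespace, service and ingress class names are made up; the actual class name depends on the controller you run):

```python
# Illustrative Ingress manifest expressed as a Python dict. Host, namespace,
# service and ingressClassName are invented examples.

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "shop-frontend", "namespace": "shop"},
    "spec": {
        "ingressClassName": "avi-lb",  # assumption: whatever class your ingress controller registers
        "rules": [{
            "host": "shop.example.com",
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",
                "backend": {"service": {"name": "shop-frontend", "port": {"number": 80}}},
            }]},
        }],
    },
}

# It could be applied with the same Python client used above, e.g.:
# client.NetworkingV1Api().create_namespaced_ingress(namespace="shop", body=ingress)
```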

 

Secure – Tanzu Service Mesh (TSM)

Let’s take a step back and recap what we have achieved so far. We have a standardized and automated application supply chain, with signed container images, that can be deployed in any conformant Kubernetes cluster. We can also access the application from the outside, and pod-to-pod communication is in place so that applications can talk to each other. So far so good.

Is there maybe another way to stitch these services together or “offload” security from the containers? What if I have microservices or applications running in different clouds, that need to securely communicate with each other?

A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds.

This is the moment when you need to think about the connectivity and communication between your app’s microservices. Today, many Kubernetes users do that by implementing a service mesh and Istio is most probably the most used open-source project platform for that.

The thing with service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own. The installation and configuration of Istio is not that easy and takes time. Besides that, Istio – and therefore the Istio data plane – is typically tied to a single Kubernetes cluster, and organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a Kubernetes cluster or cloud vendor – in other words, with silos.

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh, built on top of a VMware-administered Istio version.

The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

Global Namespaces

A Global Namespace is a unique concept in Tanzu Service Mesh and connects the resources and workloads that form the application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions that are part of it, no matter where they are located:

  • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace
  • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
  • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
  • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS) – see the sketch after this list.
  • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.
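
To illustrate the security function above: the Python sketch below shows what mutual TLS between two services means at the connection level – both sides present a certificate issued by the global namespace’s CA and verify the peer. File names, service address and port are invented, and in practice Tanzu Service Mesh handles this transparently through sidecar proxies rather than application code.

```python
# Minimal mTLS client sketch: trust only the (hypothetical) GNS CA, present this
# service's own certificate, and reject peers without a valid certificate.
# All file names and the target address are invented examples.

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("gns-ca.pem")               # trust only the namespace CA
context.load_cert_chain("checkout.crt", "checkout.key")   # this service's identity
context.verify_mode = ssl.CERT_REQUIRED                   # refuse unauthenticated peers

with socket.create_connection(("payments.internal.example", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="payments.internal.example") as tls:
        tls.sendall(b"GET /healthz HTTP/1.1\r\nHost: payments.internal.example\r\n\r\n")
        print(tls.recv(1024))
```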

Monitor – Tanzu Observability (TO)

Another important part of DevSecOps with VMware Tanzu is observability. What happens if something goes wrong? What are you doing when an application is not working anymore as expected? How do you troubleshoot a distributed application, split in microservices, that potentially runs in multiple clouds?

Imagine an application split into different smaller services, each running in a pod, which in turn could be running in a virtual machine on a specific host in your on-premises data center, at the edge, or somewhere in the public cloud.

You need a tool that supports the architecture of a modern application. You need a solution that understands and visualizes cloud native applications.

That’s where VMware suggests Tanzu Observability, to provide observability and deep visibility across your DevSecOps environment.

Tanzu Observability

Tanzu Observability has an integration with Tanzu Mission Control, which has the capability then to install the Wavefront Kubernetes collector on your Kubernetes clusters. The name “Wavefront” comes from the company Wavefront, which VMware acquired in 2017.

Since Tanzu Observability is only offered as a SaaS version, I would like to highlight that it is “secure by design” according to VMware:

  • Isolation of customer data
  • User & Service Account Authentication (SSO, LDAP, SAML)
  • RBAC & Authorization
  • Data encryption at rest and in transit
  • Data at rest is managed by AWS S3 (protected by KMS)
  • Certifications like ISO 27001/27017/27018 or SOC 2 Type 1

Summary – Tanzu Portfolio Capabilities

The container build and deploy process consists of the Spring runtime, Tanzu Application Catalog and Tanzu Build Service.

The global control plane (SaaS) is formed by Tanzu Mission Control, Tanzu Service Mesh and Tanzu Observability.

The networking layer consists of NSX Advanced Load Balancer for ingress & load balancing and uses Antrea for container networking.

The foundation of this architecture is built on VMware’s Kubernetes runtime called Tanzu Kubernetes Grid.

Tanzu Advanced Capabilities

Note: There are other components like Application Transformer or Tanzu SQL (part of Tanzu Data Services), which I haven’t covered in this article.

Secure – Carbon Black Cloud Container

Another solution that might be of interest to you is Carbon Black Container. CB Container also provides the visibility and control that DevSecOps teams need to secure Kubernetes clusters and the applications they deploy on top of them.

This solution provides a container vulnerability & risk dashboard, image scanning, compliance policy scanning, CI/CD integration and integration with Harbor, and supports any upstream Kubernetes like TKG, EKS, AKS, GKE or OpenShift.

Conclusion

DevSecOps with VMware Tanzu helps you simplify and secure the whole container and application lifecycle. VMware has made some strategic acquisitions (Heptio, Pivotal, Bitnami, Wavefront, Octarine, Avi Networks, Carbon Black) in the past to become a major player in the world of containerization, Kubernetes and application modernization.

I personally believe that VMware’s approach and Tanzu portfolio have a very strong position in the market. Their modular approach and the inclusion of open-source projects is a big differentiator. Tanzu is not just about Kubernetes, it’s about building, securing and managing the applications.

If you have a strong security focus, VMware can cover all the layers up from the hypervisor to the applications that can be deployed in any cloud. That’s the strength and unique value of VMware: A complete and diverse portfolio with products, that provide even more value when combined together.

Don’t forget that VMware is number one when it comes to data center infrastructure, and most customer workloads are still running on-premises. That’s why I believe that VMware and their Tanzu portfolio are very well positioned.

In case you missed the announcements a few weeks ago, check out Tanzu Application Platform and Tanzu for Kubernetes Operations, which meet the needs of everyone concerned with DevSecOps!

And if you would like to know more about VMware Tanzu in general, have a look at my “10 Things You Didn’t Know About VMware Tanzu” article.

 


What is Tanzu for Kubernetes Operations?

Updated on March 16, 2022

The customers I worked with last year were large enterprises with a multi-cloud strategy that had just started their application modernization journey. Typically, VMware customers interested in Tanzu take a look at the Standard edition first, which gives you:

  • Tanzu Kubernetes Grid Runtime
  • Tanzu Mission Control Standard
  • Avi Essentials (NSX Advanced Load Balancer)
  • Antrea (open-source) for container networking
  • and some other open-source software like Prometheus, Grafana, Fluent Bit, Contour

Tanzu Std vs Adv

A lot of my customers were interested in Tanzu Advanced, but they were asking for something in between these editions. Tanzu Standard sounded very interesting, but almost all of them asked the following questions:

  • What if I don’t build or modernize my own applications yet and get my application as a container from my ISV?
  • Prometheus and Grafana are nice, but I would like to have something more enterprise-ready for observability. How can I get Tanzu Observability?
  • Avi Essentials sounds great, but I am thinking about replacing my current load balancers. Is it possible to replace my F5 or Citrix ADC (formerly known as Citrix NetScaler) appliances?
  • Contour seems to be a nice open-source project, but I am looking for something with built-in automation and analytics capabilities for ingress. Can’t I get Avi Enterprise for that as well?
  • I am looking for zero trust application security. How can you help me to encrypt traffic between containers or microservices, which could also be hosted on different clouds (e.g., on-prem and public cloud)?

The answer to these questions is Tanzu for Kubernetes Operations. Tanzu for Kubernetes Operations (TKO) is a bundle of VMware products and services that meets the requirements of cloud platform teams. It provides centralized, consistent and simplified container management and operations across clouds and currently includes the following products and services:

  • Tanzu Kubernetes Grid
  • Tanzu Mission Control
  • Tanzu Observability
  • Tanzu Service Mesh
  • NSX Advanced Load Balancer (Avi) Enterprise

Important Note: The VMware product guide says that “a Core is a single physical computational unit of the Processor which may be presented as one or more vCPUs”. So, if you plan a CPU overcommitment of 1:2 (cores:vCPUs) for your on-premises infrastructure, you only need to license half as many cores as the vCPUs you consume.

TKO Reference Architecture

VMware has released TKO reference architectures for vSphere, AWS and Azure.

Figure 1 - Tanzu for Kubernetes Operations

Use this link to get additional information on how to deploy and configure Tanzu Mission Control, Tanzu Observability and Tanzu Service Mesh.

What is Application Transformer for Tanzu?

Application Transformer for VMware Tanzu became generally available in February 2022.

Application Transformer can help you to convert virtual machines and application components into OCI-compliant container images, which can then be deployed onto the Tanzu Kubernetes stack.

Tanzu Application Transformer

 

Tanzu App Navigator

Application Transformer helps you to analyze and visualize application components and dependencies. It also provides customers with scores that help them decide which applications should be transformed.

App Navigator is a 4-to-6 week engagement that helps you to decide which applications you should tackle first and how much change is needed to drive business outcomes. It’s one thing to containerize an application, but App Navigator helps you to create a modernization strategy based on your goals.

Note: VMware’s App Navigator team uses Application Transformer during their service engagement.

Tanzu App Navigator

Tanzu Application Platform

Deploying an application on Kubernetes is not an easy thing if you don’t know anything about Kubernetes.

If you would like to focus more on your applications and your developer’s experience, then Tanzu Application Platform (TAP) could be very interesting for you.

With Tanzu Application Platform, application developers and operations teams can build and deliver a better multi-cloud developer experience on any Kubernetes distribution, including Azure Kubernetes Service, Amazon Elastic Kubernetes Service, Google Kubernetes Engine, as well as software offerings like Tanzu Kubernetes Grid.

VMware is known for reducing complexity and providing cloud-agnostic infrastructure. They started by abstracting the underlying server hardware, then came the virtualization of the whole data center (compute, storage, network), and the next step was the abstraction of public clouds like AWS, Azure and Google.

In the case of Tanzu Application Platform, we are talking about an opinionated grouping of separate components that run on any conformant Kubernetes cluster (TKG, AKS, EKS, GKE, OpenShift, etc.). From an application developer’s perspective, an application can automatically be built, tested and deployed on Kubernetes.

Tanzu Application Platform

This means that with TAP you get a modular application developer PaaS (adPaaS) offering and true application platform portability, with the ability to bring your own Kubernetes.

 


10 Things You Didn’t Know About VMware Tanzu

Updated on March 16, 2022

While I was working with one of the largest companies in the world during the past year, I learned a lot about VMware Tanzu and NSX Advanced Load Balancer (formerly known as Avi). Application modernization and the containerization of applications are very complex topics.

Customers are looking for ways to “free” their apps from infrastructure and want to go cloud-native by using and building microservices, containers and Kubernetes. VMware has a large portfolio to support you on your application modernization journey: the Tanzu portfolio. A lot of people still believe that Tanzu is a product – it’s not a product. Tanzu is more than just a Kubernetes runtime, and as soon as someone like me from VMware explains the capabilities and possibilities of Tanzu to you, it’s easy to become overwhelmed at first.

Why? VMware’s mission is always to abstract things and make them easier for you, but this doesn’t mean you can skip the many questions and topics that should be discussed:

  • Where should your containers and microservices run?
  • Do you have a multi-cloud strategy?
  • How do you want to manage your Kubernetes clusters?
  • How do you build your container images?
  • How do you secure the whole application supply chain?
  • Have you thought about vulnerability scanning for the components you use to build the containers?
  • What kind of policies would you like to set on application, network and storage level?
  • Do you need persistent storage for your containers?
  • Should it be a vSphere platform only or are you also looking at AKS, EKS, GKE etc.?
  • How are you planning to automate and configure “things”?
  • Which kind of databases or data services do you use?
  • Have you already got a tool for observability?

With these kinds of questions, you and I would figure out together which Tanzu edition makes the most sense for you. Looking at the VMware Tanzu website, you’ll find four different Tanzu editions:

VMware Tanzu Editions

If you click on one of the editions, you get the option to compare them:

Tanzu Editions Comparison

Based on the capabilities listed above, customers want to know the differences between Tanzu Standard and Advanced. Believe me, there is a lot of information I can share with you to make your life easier and to help you understand the Tanzu portfolio better. 🙂

1) VMware Tanzu Standard and Advanced Features and Components

Let’s start looking at the different capabilities and components that come with Tanzu Standard and Advanced:

Tanzu Std vs Adv

Tanzu Standard focuses very much on Kubernetes multi-cloud and multi-cluster management (Tanzu Kubernetes Grid with Tanzu Mission Control aka TMC), while Tanzu Advanced adds a lot of capabilities for building your applications (Tanzu Application Catalog, Tanzu Build Service).

2) Tanzu Mission Control Standard and Advanced

Maybe you missed it in the screenshot before: Tanzu Standard comes with Tanzu Mission Control Standard, while Tanzu Advanced is equipped with Tanzu Mission Control Advanced.

Note: Announced at VMworld 2021, there is now even a third edition called Tanzu Mission Control Essentials, which was specifically made for VMware Cloud offerings such as VMC on AWS.

I must mention here that you could leverage the “free tier” of Tanzu Mission Control called TMC Starter. It can be combined with the Tanzu Community Edition (also free), for example, or with existing clusters from other providers (AKS, GKE, EKS).

What’s the difference between TMC Standard and Advanced? Let’s check the TMC feature comparison chart:

  • TMC Adv provides “custom roles”
  • TMC Adv lets you configure more policies (custom security policies, image policies, networking policies, quota policies, custom policies, policy insights)
  • With Tanzu Mission Control Advanced you also get “CIS Benchmark inspections”

What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced? Let me answer that question a little bit later. 🙂

3) NSX Advanced Load Balancer Essentials vs. Enterprise (aka Avi Essentials vs. Enterprise)

Yes, there are also different NSX ALB editions included in Tanzu Standard and Advanced. The NSX ALB Essentials edition is not something that you can buy separately, and it’s only included in the Tanzu Standard edition.

The enterprise edition of NSX ALB is part of Tanzu Advanced but it can also be bought as a standalone product.

Here are the capabilities and differences between NSX ALB Essentials and Enterprise:

NSX ALB Essentials vs. Enterprise

So, the Avi Enterprise edition provides a fully-featured version of NSX Advanced Load Balancer while Avi Essentials only provides L4 LB services for Tanzu.

Note: Customers can create as many NSX ALB / Avi Service Engines (SEs) as required with the Essentials edition, and you still have the option to set up a 3-node NSX ALB controller cluster.

Important: It is not possible to mix NSX ALB controllers from the Essentials and Enterprise editions. This means that a customer who has NSX ALB Essentials included in Tanzu Standard, and another department using NSX ALB Enterprise for a different use case, needs to run separate controller clusters. While the controllers don’t cost you anything, this constraint obviously comes with some additional compute footprint.

FYI, there is also a cloud-managed option for the Avi Controllers with Avi SaaS.

What if I want the complete feature set of NSX ALB Enterprise? Let’s put this question also aside for a moment.

4) Container Ingress with Contour vs. NSX ALB Enterprise

Ingress is a very important component of Kubernetes and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside a Kubernetes cluster. So, getting an application up and running is only half of the story; the application still needs a way for users to access it. If you would like to know more about ingress, I can recommend this short introduction video.
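To make these routing rules a bit more tangible, here is a minimal sketch using the official Kubernetes Python client. The hostname, namespace and service name (demo-service) are placeholders I made up for illustration, and applying an equivalent YAML manifest with kubectl would achieve the same result:

```python
# Minimal sketch: create an Ingress that routes HTTP traffic for a hostname
# to a Service inside the cluster (hostname, namespace and service are placeholders).
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context

ingress = client.V1Ingress(
    api_version="networking.k8s.io/v1",
    kind="Ingress",
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="demo.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="demo-service",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

Whichever ingress controller runs in the cluster, Contour or Avi via the Avi Kubernetes Operator (AKO), is then responsible for fulfilling that rule.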

While Contour is a great open-source project, Avi provides many more enterprise-grade features such as L4 load balancing, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity and analytics are important to you, then Avi Enterprise is definitely the better fit.

To keep it simple: if you are already thinking about NSX ALB Enterprise, you could use it for K8s ingress/LB and so many other use cases and services! 🙂

5) Observability with Grafana/Prometheus vs. Tanzu Observability

I recently wrote a blog post about “modern application monitoring with VMware Tanzu and vRealize”. This article can give you a better understanding of whether you want to get started with open-source software or with something like Tanzu Observability, which provides many more enterprise-grade features. Tanzu Observability is considered a fast-moving leader according to the GigaOm Cloud Observability Report.

What if I still want Tanzu Standard only but would like to have Tanzu Observability as well? Let’s park this question as well for another minute.

6) Open-Source Projects Support by VMware Tanzu

The Tanzu Standard edition comes with a lot of leading open-source technologies from the Kubernetes ecosystem. There is Harbor for the container registry, Contour for ingress, Grafana and Prometheus for monitoring, Velero for backup and recovery, Fluent Bit for logging, Antrea and Calico for container networking, Sonobuoy for conformance testing and Cluster API for cluster lifecycle management.

VMware Open-Source Projects

VMware is actively contributing to these open-source projects and still wants to give customers the flexibility and choice to use and integrate them wherever and whenever they see fit. But how are these open-source projects supported by VMware? To answer this, we can have a look at the Tanzu Toolkit (included in Tanzu Standard and Advanced):

  • Tanzu Toolkit includes enterprise-level support for Harbor, Velero, Contour, and Sonobuoy
  • Tanzu Toolkit provides advisory—or best effort—guidance on Prometheus, Grafana, and Alertmanager for use with Tanzu Kubernetes Grid. Installation, upgrade, initial tooling configuration, and bug fixes are beyond the current scope of VMware’s advisory support.

7) Tanzu Editions Licensing

There are two options for licensing your Tanzu deployments:

  • Per CPU Licensing – Mostly used for on-prem deployments or where standalone installations are planned (dedicated workload domain with VCF). Tanzu Standard is included in all the regular VMware Cloud Foundation editions.
  • Per Core Licensing – For non-standalone on-prem and public cloud deployments, you should license Tanzu Standard and Advanced based on the number of cores used by the worker (including control plane VMs) and management nodes delivering K8s clusters. Constructs such as “vCPUs”, “virtual CPUs” and “virtual cores” are proxies (other names) for CPU cores.

Tanzu Advanced is sold as a “pack” of software and VMware Cloud service offerings. Each purchased pack of Tanzu Advanced equals 20 cores. Example of 1 pack:

  • Spring Runtime: 20 cores
  • Tanzu Application Catalog: 20 cores
  • Tanzu SQL: 1 core (part of Tanzu Data Services)
  • Tanzu Build Service: 20 cores
  • Tanzu Observability: 160 PPS (sufficient to collect metrics for the infrastructure)
  • Tanzu Mission Control Advanced: 20 cores
  • Tanzu Service Mesh Advanced: 20 cores
  • NSX ALB Enterprise: 1 CPU = 1/4 Avi Service Core
  • Tanzu Standard Runtime: 20 cores

If you need more details about these subscription licenses, please consult the VMware Product Guide (starting from page 37).
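Since each pack covers 20 cores, a quick back-of-the-envelope estimate is easy to sketch. Keep in mind that this is only my simplified reading of the 20-cores-per-pack rule; the actual entitlements (PPS for Tanzu Observability, Service Cores for NSX ALB, and so on) should always be validated with your Tanzu specialist:

```python
import math

# Rough sketch: estimate how many Tanzu Advanced packs (20 cores each)
# would cover a given number of licensable cores. Numbers are examples only.
CORES_PER_PACK = 20

def packs_needed(licensable_cores: int) -> int:
    return math.ceil(licensable_cores / CORES_PER_PACK)

# Example: a platform with 48 licensable cores across worker and management nodes
print(packs_needed(48))  # -> 3 packs (60 cores of entitlement)
```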

As you can see, a lot of components (I didn’t even list them all) form the Tanzu Advanced edition. The calculation, planning and sizing of the different components require multiple discussions with your Tanzu specialist from VMware.

8) Tanzu Standard Sizing

Disclaimer – This sizing is based on my current understanding, and it is always recommended to do a proper sizing with your Tanzu specialists / consultants.

So, we have learnt before that Tanzu Standard licensing is based on cores, which are “used by the worker and management nodes delivering K8s clusters”.

As you may already know, the so-called “Supervisor Cluster” is currently formed by three control plane VMs. Looking at the validated design for Tanzu for VMware Cloud Foundation workload domains, one can also get a better understanding of the Tanzu Standard runtime sizing for vSphere-only environments.

The three Supervisor Cluster VMs (management nodes) each have 4 vCPUs, which means 12 vCPUs in total.

The three Tanzu Kubernetes Cluster control plane VMs each have 2 vCPUs, which means 6 vCPUs in total.

The three Tanzu Kubernetes Cluster worker nodes (small size) each have 2 vCPUs, which means 6 vCPUs in total.

My conclusion here is that you need to license at least 24 vCPUs to get started with Tanzu Standard.

Important Note: The VMware product guide says that “a Core is a single physical computational unit of the Processor which may be presented as one or more vCPUs”. If you are planning a CPU overcommitment of 1:2 (cores:vCPUs) for your on-premises infrastructure, then you only have to license 12 cores.
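Putting those numbers together, here is a small sketch of the calculation; the VM counts and vCPU sizes come from the validated design mentioned above, and the 1:2 overcommitment is just the example ratio used in this article:

```python
import math

# Sketch of the minimal Tanzu Standard footprint described above:
# 3 Supervisor control plane VMs with 4 vCPUs each, plus 3 TKC control plane
# and 3 TKC worker VMs (small size) with 2 vCPUs each.
vcpus = 3 * 4 + 3 * 2 + 3 * 2
print(vcpus)  # -> 24 vCPUs to license

# With a 1:2 (cores:vCPUs) overcommitment, per the product guide definition of a core:
overcommit = 2
print(math.ceil(vcpus / overcommit))  # -> 12 cores
```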

Caution: William Lam wrote about the possibility of deploying single- or dual-node Supervisor Cluster control planes. It is technically possible to reduce the number of control plane VMs, but it is not officially supported by VMware. We need to wait until this feature officially becomes available.

This would be very beneficial for customers with a lot of edge locations or smaller locations in general. If you could reduce the Supervisor Cluster to only two control plane VMs, the initial deployment would only need 14 vCPUs (cores).

9) NSX Advanced Load Balancer Sizing and Licensing

General licensing instructions for Avi aka NSX ALB (Enterprise) can be found here.

NSX ALB is licensed based on the cores consumed by the Avi Service Engines. As mentioned before, you won’t be charged for the Avi Controllers, and it is possible to add new licenses to the ALB Controller at any time. Avi Enterprise licensing is based on so-called Service Cores: one vCPU or core equals one Service Core.

Avi as a standalone product has only one edition, the fully-featured Enterprise edition. Depending on your needs and the features you use (LB, GSLB, WAF, analytics, K8s ingress, throughput, SSL TPS, etc.), you’ll calculate the necessary amount of Service Cores.

It is possible to calculate and assign more or less than 1 Service Core per Avi Service Engine:

  • 25 Mbps throughput (bandwidth) = 0.4 Service Cores
  • 200 Mbps throughput = 0.7 Service Cores

Example: A customer wants to deploy 10 Service Engines with 25 Mbps and 4 Service Engines with 200 Mbps. These numbers map to 10 * 0.4 Service Cores + 4 * 0.7 Service Cores, which gives a total of 6.8 Service Cores. In this case, you would buy 7 Service Cores.
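The same example expressed as a short sketch, using the throughput-to-Service-Core factors listed above:

```python
import math

# Service Core factors per Service Engine, taken from the list above
FACTOR_25_MBPS = 0.4
FACTOR_200_MBPS = 0.7

# Example from the text: 10 SEs at 25 Mbps and 4 SEs at 200 Mbps
service_cores = 10 * FACTOR_25_MBPS + 4 * FACTOR_200_MBPS
print(round(service_cores, 1))   # -> 6.8 Service Cores
print(math.ceil(service_cores))  # -> buy 7 Service Cores
```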

10) Tanzu for Kubernetes Operations (TKO)

Now it’s time to answer the questions we parked before:

  • What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced?
  • What if I want the complete feature set of NSX ALB Enterprise?
  • What if I still want Tanzu Standard only but would like to have Tanzu Observability as well?

The answer to all of these questions is Tanzu for Kubernetes Operations (TKO)!

Conclusion

Wherever you are on your application modernization journey, VMware and their Tanzu portfolio have you covered. No matter whether you want to start small and take your first steps with open-source projects, or you want the complete package with the Tanzu Advanced edition, VMware offers the right options and flexibility.

I hope my learnings from this customer engagement help you to better understand the Tanzu portfolio and its capabilities.

Please leave your comments and thoughts below. 🙂