10 Things You Didn’t Know About VMware Tanzu


While I was working with one of the largest companies in the world during the past year, I learned a lot about VMware Tanzu and NSX Advanced Load Balancer (formerly known as Avi). Application modernization and the containerization of applications are very complex topics.

Customers are looking for ways to “free” their apps from infrastructure and want to go cloud-native by using or building microservices, containers and Kubernetes. VMware has a large portfolio, the Tanzu portfolio, to support you on your application modernization journey. A lot of people still believe that Tanzu is a product – it’s not. Tanzu is much more than just a Kubernetes runtime, and as soon as people like me from VMware explain its capabilities and possibilities to you, it’s easy to feel overwhelmed at first.

Why? VMware’s mission is always to abstract things and make them easier for you, but this doesn’t mean you can skip the questions and topics that should be discussed:

  • Where should your containers and microservices run?
  • Do you have a multi-cloud strategy?
  • How do you want to manage your Kubernetes clusters?
  • How do you build your container images?
  • How do you secure the whole application supply chain?
  • Have you thought about vulnerability scanning for the components you use to build the containers?
  • What kind of policies would you like to set on application, network and storage level?
  • Do you need persistent storage for your containers?
  • Should it be a vSphere platform only or are you also looking at AKS, EKS, GKE etc.?
  • How are you planning to automate and configure “things”?
  • Which kind of databases or data services do you use?
  • Have you already got a tool for observability?

With these kinds of questions, you and I would figure out together which Tanzu edition makes the most sense for you. Looking at the VMware Tanzu website, you’ll find four different Tanzu editions:

VMware Tanzu Editions

If you click on one of the editions, you can compare them:

Tanzu Editions Comparison

Based on the capabilities listed above, customers would like to know the differences between Tanzu Standard and Advanced. Believe me, there is a lot of information I can share with you to make your life easier and to help you understand the Tanzu portfolio better. 🙂

1) VMware Tanzu Standard and Advanced Features and Components

Let’s start looking at the different capabilities and components that come with Tanzu Standard and Advanced:

Tanzu Std vs Adv

Tanzu Standard focuses very much on Kubernetes multi-cloud and multi-cluster management (Tanzu Kubernetes Grid with Tanzu Mission Control aka TMC), while Tanzu Advanced adds a lot of capabilities to build your applications (Tanzu Application Catalog, Tanzu Build Service).

2) Tanzu Mission Control Standard and Advanced

Maybe you missed it in the screenshot before: Tanzu Standard comes with Tanzu Mission Control Standard, while Tanzu Advanced is equipped with Tanzu Mission Control Advanced.

Note: Announced at VMworld 2021, there is now even a third edition called Tanzu Mission Control Essentials, which was made specifically for VMware Cloud offerings such as VMC on AWS.

I must mention here that you could leverage the “free tier” of Tanzu Mission Control called TMC Starter. It can be combined with the Tanzu Community Edition (also free), for example, or with existing clusters from other providers (AKS, GKE, EKS).

What’s the difference between TMC Standard and Advanced? Let’s check the TMC feature comparison chart:

  • TMC Adv provides “custom roles”
  • TMC Adv lets you configure more policies (security policies – custom, image policies, networking policies, quota policies, custom policies, policy insights)
  • With Tanzu Mission Control Advanced you also get “CIS Benchmark inspections”

What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced? Let me answer that question a little bit later. 🙂

3) NSX Advanced Load Balancer Essentials vs. Enterprise (aka Avi Essentials vs. Enterprise)

Yes, there are also different NSX ALB editions included in Tanzu Standard and Advanced. The NSX ALB Essentials edition is not something that you can buy separately, and it’s only included in the Tanzu Standard edition.

The enterprise edition of NSX ALB is part of Tanzu Advanced but it can also be bought as a standalone product.

Here are the capabilities and differences between NSX ALB Essentials and Enterprise:

NSX ALB Essentials vs. Enterprise

So, the Avi Enterprise edition provides a fully-featured version of NSX Advanced Load Balancer while Avi Essentials only provides L4 LB services for Tanzu.

Note: Customers can create as many NSX ALB / Avi Service Engines (SEs) as required with the Essentials edition and you still have the possibility to set up a 3-node NSX ALB controller cluster.

Important: It is not possible to mix NSX ALB controllers from the Essentials and Enterprise editions. This means that a customer who has NSX ALB Essentials included in Tanzu Standard and another department using NSX ALB Enterprise for a different use case needs to run separate controller clusters. While the controllers don’t cost you anything, there is obviously some additional compute footprint coming with this constraint.

FYI, there is also a cloud-managed option for the Avi Controllers with Avi SaaS.

What if I want the complete feature set of NSX ALB Enterprise? Let’s put this question also aside for a moment.

4) Container Ingress with Contour vs. NSX ALB Enterprise

Ingress is a very important component of Kubernetes and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside of a Kubernetes cluster. So, getting an application up and running is only half of the story. The application still needs a way for users to access it. If you would like to know more about “ingress”, I can recommend this short introduction video.
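To make these routing rules a bit more concrete, here is a minimal sketch (not specific to Tanzu, Contour or Avi) that creates such an Ingress object with the official Kubernetes Python client. The hostname, namespace and backend service name (“web”) are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context

# Route all traffic for web.example.com to the "web" Service on port 80
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress", namespace="demo"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="web.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="web",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="demo", body=ingress)
```

Which ingress controller (Contour, NSX ALB or something else) actually implements these rules is decided by the platform, not by the application.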

While Contour is a great open-source project, Avi provides much more enterprise-grade features like L4 LB, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity and analytics are important to you, then Avi Enterprise is definitely the better fit.

To keep it simple: If you are already thinking about NSX ALB Enterprise, then you could use it for K8s Ingress/LB and so many other use cases and services! 🙂

5) Observability with Grafana/Prometheus vs. Tanzu Observability

I recently wrote a blog about “modern application monitoring with VMware Tanzu and vRealize“. This article could give you a better understanding if you want to get started with open-source software or something like Tanzu Observability, which provides much more enterprise-grade features. Tanzu Observability is considered to be a fast-moving leader according to the GigaOm Cloud Observability Report.

What if I still want Tanzu Standard only but would like to have Tanzu Observability as well? Let’s park this question as well for another minute.

6) Open-Source Projects Support by VMware Tanzu

The Tanzu Standard edition comes with a lot of leading open-source technologies from the Kubernetes ecosystem. There is Harbor for the container registry, Contour for ingress, Grafana and Prometheus for monitoring, Velero for backup and recovery, Fluent Bit for logging, Antrea and Calico for container networking, Sonobuoy for conformance testing and Cluster API for cluster lifecycle management.

VMware Open-Source Projects

VMware is actively contributing to these open-source projects and still wants to give customers the flexibility and choice to use and integrate them wherever and whenever they see fit. But how are these open-source projects supported by VMware? To answer this, we can have a look at the Tanzu Toolkit (included in Tanzu Standard and Advanced):

  • Tanzu Toolkit includes enterprise-level support for Harbor, Velero, Contour, and Sonobuoy
  • Tanzu Toolkit provides advisory—or best effort—guidance on Prometheus, Grafana, and Alertmanager for use with Tanzu Kubernetes Grid. Installation, upgrade, initial tooling configuration, and bug fixes are beyond the current scope of VMware’s advisory support.

7) Tanzu Editions Licensing

There are two options for licensing your Tanzu deployments:

  • Per CPU Licensing – Mostly used for on-prem deployments or where standalone installations are planned (dedicated workload domain with VCF). Tanzu Standard is included in all the regular VMware Cloud Foundation editions.
  • Per Core Licensing – For non-standalone on-prem and public cloud deployments, you should license Tanzu Standard and Advanced based on the number of cores used by the worker and management nodes delivering K8s clusters. Constructs such as “vCPUs”, “virtual CPUs” and “virtual cores” are proxies (other names) for CPU cores.

Tanzu Advanced is sold as a “pack” of software and VMware Cloud service offerings. Each purchased pack of Tanzu Advanced equals 20 cores. Example of 1 pack:

  • Spring Runtime: 20 cores
  • Tanzu Application Catalog: 20 cores
  • Tanzu SQL: 1 core (part of Tanzu Data Services)
  • Tanzu Build Service: 20 cores
  • Tanzu Observability: 160 PPS (sufficient to collect metrics for the infrastructure)
  • Tanzu Mission Control Advanced: 20 cores
  • Tanzu Service Mesh Advanced: 20 cores
  • NSX ALB Enterprise: 1 CPU = 1/4 Avi Service Core
  • Tanzu Standard Runtime: 20 cores

If you need more details about these subscription licenses, please consult the VMware Product Guide (starting from page 37).

As you can see, a lot of components (I didn’t even list all of them) form the Tanzu Advanced edition. The calculation, planning and sizing for the different components require multiple discussions with your Tanzu specialist from VMware.

8) Tanzu Standard Sizing

Disclaimer – This sizing is based on my current understanding, and it is always recommended to do a proper sizing with your Tanzu specialists / consultants.

So, we have learnt before that Tanzu Standard licensing is based on cores, which are “used by the worker and management nodes delivering K8s clusters”.

As you may already know, the so-called “Supervisor Cluster” is currently formed by three control plane VMs. Looking at the validated design for Tanzu for VMware Cloud Foundation workload domains, one can also get a better understanding of the Tanzu Standard runtime sizing for vSphere-only environments.

The three Supervisor Cluster control plane VMs have 4 vCPUs each, which means 12 vCPUs (cores) in total.

The three Tanzu Kubernetes Cluster worker nodes (small size) have 2 vCPUs each, which means 6 vCPUs (cores) in total.

My conclusion here is that you need to license at least 18 cores to get started with Tanzu Standard.

Caution: William Lam wrote about the possibility to deploy single or dual node Supervisor Cluster control plane VMs. It is technically possible to reduce the number of control plane VMs, but it is not officially supported by VMware. We need to wait until this feature becomes available in the future.

It would be very beneficial for customers with a lot of edge locations or smaller locations in general. If you can reduce the Supervisor Cluster down to two control plane VMs only, the initial deployment size would only need 14 vCPUs (cores).
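If you want to play with these numbers yourself, here is a tiny sketch of the calculation above. The VM sizes are the ones from the validated design, and the two-node Supervisor Cluster scenario is the unsupported one described by William Lam:

```python
def tanzu_standard_cores(cp_vms=3, cp_vcpus=4, workers=3, worker_vcpus=2):
    """Rough minimum core count: Supervisor Cluster control plane VMs
    plus Tanzu Kubernetes Cluster worker nodes (small size)."""
    return cp_vms * cp_vcpus + workers * worker_vcpus

print(tanzu_standard_cores())          # 3*4 + 3*2 = 18 cores (supported minimum)
print(tanzu_standard_cores(cp_vms=2))  # 2*4 + 3*2 = 14 cores (unsupported 2-node Supervisor Cluster)
```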

9) NSX Advanced Load Balancer Sizing and Licensing

General licensing instructions for Avi aka NSX ALB (Enterprise) can be found here.

NSX ALB is licensed based on the cores consumed by the Avi Service Engines. As already mentioned, you won’t be charged for the Avi Controllers and it is possible to add new licenses to the ALB Controller at any time. Avi Enterprise licensing is based on so-called Service Cores. This means one vCPU or core equals one Service Core.

Avi as a standalone product has only one edition, the fully-featured Enterprise edition. Depending on your needs and the features (LB, GSLB, WAF, analytics, K8s ingress, throughput, SSL TPS etc.) you use, you’ll calculate the necessary amount of Service Cores.

It is possible to calculate and assign more or less than 1 Service Core per Avi Service Engine:

  • 25 Mbps throughput (bandwidth) = 0.4 Service Cores
  • 200 Mbps throughput = 0.7 Service Cores

Example: A customer wants to deploy 10 Service Engines with 25 Mbps and 4 Service Engines with 200 Mbps. These numbers map to 10*0.4 Service Cores + 4*0.7 Service Cores, which gives us a total of 6.8 Service Cores. In this case you would buy 7 Service Cores.
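As a small sketch, here is the same calculation in code; the throughput-to-Service-Core ratios are the ones listed above:

```python
import math

# Service Core cost per Service Engine, keyed by throughput tier (see list above)
SERVICE_CORES_PER_SE = {"25 Mbps": 0.4, "200 Mbps": 0.7}

def required_service_cores(se_counts):
    """se_counts maps a throughput tier to the number of Service Engines."""
    exact = sum(SERVICE_CORES_PER_SE[tier] * count for tier, count in se_counts.items())
    return exact, math.ceil(exact)  # licenses are bought in whole Service Cores

exact, to_buy = required_service_cores({"25 Mbps": 10, "200 Mbps": 4})
print(exact, to_buy)  # 6.8 -> buy 7 Service Cores
```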

10) Tanzu for Kubernetes Operations (TKO)

Now it’s time to answer the questions we parked before:

  • What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced?
  • What if I want the complete feature set of NSX ALB Enterprise?
  • What if I still want Tanzu Standard only but would like to have Tanzu Observability as well?

Before we do that, let me quickly show you one slide from the VMworld 2021 session Make Your Move to Multi-Cloud Kubernetes with VMware Tanzu [APP3117]:

VMworld 2021 Tanzu for Kubernetes Operations

Megan Bruce presented this slide and said that you need a consistent Kubernetes runtime to start your multi-cloud Kubernetes journey with VMware Tanzu, so that you can lifecycle (deploy, manage and upgrade) clusters consistently. This capability starts with Tanzu Kubernetes Grid.

The next component you need is a centralized management plane that provides visibility and control over a platform that is used and consumed by distributed teams. That is provided by Tanzu Mission Control.

How do you effectively monitor and troubleshoot issues faster, and how do you stitch services together and protect your data both at rest and in transit across clouds? That would be Tanzu Observability and Tanzu Service Mesh.

Finally, VMware can also help you to implement global load balancing and provides advanced traffic routing with NSX Advanced Load Balancer.

The different Tanzu products I just highlighted are all SaaS-based offerings and form the global Tanzu control plane you would get with Tanzu Advanced. But how can you get these components if you want to build this standardized control plane and have a mix of Tanzu Standard and Advanced? What if I want something in between Tanzu Std and Adv before I move later to the complete Tanzu Adv edition?

Well, the answer to this and the questions above is “Tanzu for Kubernetes Ops” (TKO)!

I believe it hasn’t been officially announced at VMworld, but TKO is a new soft-bundle. It does NOT come as one standalone SKU for customers yet, but this is certainly where VMware is heading. Let me summarize the components of this bundle (it’s not a new edition) for you:

  • Tanzu Standard Runtime (includes Tanzu Kubernetes Grid + open-source software), licensed per core
  • Tanzu Mission Control Advanced, licensed per core
  • Tanzu Observability, licensed based on PPS (minimum of 1000 PPS required)
  • Tanzu Service Mesh Advanced, licensed per core
  • Antrea Advanced, licensed per core
  • NSX ALB (Avi) Enterprise, licensed based on service cores

Does this BOM answer all our questions? YES! 🙂

The cool thing about it? You don’t need to choose all the components. Just pick what makes sense for you. Example: You can start with the Tanzu Standard Runtime, TMC Advanced, Tanzu Observability and NSX ALB Enterprise, and go for Tanzu Service Mesh whenever the time is right.

Maybe you already started with the public cloud offerings like AKS, EKS and GKE and need a consistent control plane? Then Tanzu and TKO are still good choices for you.

Conclusion

Wherever you are on your application modernization journey, VMware and its Tanzu portfolio have your back. No matter if you want to start small and gain your first experience with open-source projects, or if you want the complete set with the Tanzu Advanced edition, VMware offers the right options and flexibility.

I hope my learnings from this customer engagement help you to better understand the Tanzu portfolio and its capabilities.

Please leave your comments and thoughts below. 🙂

Modern Application Monitoring with VMware Tanzu and vRealize


The complexity of applications has increased because of new cloud technologies and new application architectures. As organizations adopt and embrace the DevOps mindset, developers and IT operations are closer than ever. Developers are now part of the team operating the distributed systems.

Businesses must figure out how they learn about system failures and need to understand “what” is broken (the symptom) and “why” it is broken (the possible cause).

Let’s talk about application performance management (APM) and enterprise observability. 🙂

Monitoring

It was around the year 2012 or 2013 when I had to introduce a new monitoring solution for a former employer, a cloud service provider. I think Nagios was the state-of-the-art technology back then, and I replaced it with PRTG Network Monitor from Paessler.

When we onboarded a new customer infrastructure or application, the process was always the same. I had to define the metrics to collect and then put those metrics on a dashboard. It was very important to set alerts based on thresholds or conditions. Everyone knew back then that this approach wasn’t the best, but we didn’t have any other choice.

PRTG Sensor View

If an IP was not pingable or a specific port of a server or application was down for 60 seconds, an alert popped up and an e-mail was sent to the IT helpdesk. And in the dashboard you could see sensors switching from a green to a red state.

To simplify the troubleshooting process and to have a logical application view, I had to create some dependencies between sensors. This was probably the only way to create something like an application (dependency) mapping.

When users worked on a virtual desktop or on a Windows Terminal Server, we “measured” the user experience and application performance based on network latency and server resource usage (mostly CPU and RAM).

Observability

Observability enables you to drill down into the distributed services and systems (hardware components, containers, microservices) that make up an application.

Monitoring and observability are not the same thing. As described before, monitoring is the process of collecting metrics and setting alerts so that one can monitor the health and performance of components like network devices, databases, servers or VMs.

Observability helps you to understand complex architectures and interactions between elements in this architecture. It also allows you to troubleshoot performance issues, identify root causes for failures faster and helps you to optimize your cloud native infrastructure and applications.

In other words, observability can help you to speed up mean time to detection (MTTD) and mean time to resolution (MTTR) for infrastructure and application failures.

There are three golden telemetry signals to achieve observability (source):

  • Logs: Logs are the abiding records of discrete events that can identify unpredictable behavior in a system and provide insight into what changed in the system’s behavior when things went wrong. It’s highly recommended to ingest logs in a structured way, such as in JSON format so that log visualization systems can auto-index and make logs easily queryable.
  • Metrics: Metrics are considered as the foundations of monitoring. They are the measurements or simply the counts that are aggregated over a period of time. Metrics will tell you how much of the total amount of memory is used by a method, or how many requests a service handles per second.
  • Traces: A single trace displays the operation as it moves from one node to another in a distributed system for an individual transaction or request. Traces enable you to dig into the details of particular requests to understand which components cause system errors, monitor flow through the modules, and discover the bottlenecks in the performance of the system.
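To make the metrics signal a bit more tangible, here is a minimal sketch using the prometheus_client Python library; the metric names and the scrape port are arbitrary examples, not something specific to Tanzu Observability:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Example metrics: a request counter and a latency histogram (names are arbitrary)
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request(endpoint: str) -> None:
    with LATENCY.time():                      # record how long the simulated work takes
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(endpoint=endpoint).inc()  # count the request per endpoint

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus can now scrape http://<host>:8000/metrics
    while True:
        handle_request("/api/orders")
```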

Tanzu Observability Tracing

When using observability during app development, it can also improve the developer experience and productivity.

Tanzu Observability Services

The VMware Tanzu portfolio currently has four different editions:

Different Tanzu Observability services are available for different components and Tanzu editions.

Tanzu Standard Observability

Tanzu Standard includes the leading open-source projects Prometheus and Grafana for platform monitoring (and Fluent Bit for log forwarding).

Tanzu Kubernetes Grid provides monitoring with the open-source Prometheus and Grafana services. You deploy these services on your cluster and can then take advantage of Grafana visualizations and dashboards. As part of the integration, you can set up Alertmanager to send alerts to Slack or use custom Webhooks alert notifications.

Tanzu Kubernetes Grid architecture

Tanzu Standard Observability comprises:

  • Fluent Bit is an open-source log processor and forwarder which allows you to collect any data like metrics and logs from different sources, enrich them with filters and send them to multiple destinations. It’s the preferred choice for containerized environments like Kubernetes.
  • Grafana is a multi-platform open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources.
  • Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using a HTTP pull model, with flexible queries and real-time alerting.

Note: VMware only provides advisory (best effort) guidance on Prometheus and Grafana for use with Tanzu Kubernetes Grid. The installation, configuration and upgrades are beyond the current scope of VMware’s advisory support.

Tanzu Advanced Observability

In May 2017 VMware acquired Wavefront, which is now part of the Tanzu portfolio and is called “Tanzu Observability” (TO).

TO is a SaaS-based metrics monitoring and analytics platform that handles the enterprise-scale requirements of modern cloud native applications.

Compared to Grafana/Prometheus, one could say that Tanzu Observability is a true enterprise-grade observability platform. According to the GigaOm Cloud Observability Report, VMware Tanzu Observability is one of the strong leaders alongside Dynatrace and Splunk, just to name a few.

Tanzu Observability is best suited for large organizations and provides consumption-based pricing that is based on the rate at which you send metric data to Tanzu Observability during the course of each month. This gives you the flexibility to start with any size you want and scale up/down as needed. It’s not dependent on the number of hosts or the number of users.

Tanzu Observability CIO Dashboard

Tanzu Observability allows you to collect data from different sources and provides integrations with over 250 technologies, including different public clouds, web applications and services, big data frameworks, data stores, other monitoring tools, operating systems / hosts, and many more.

Tanzu Observability Integrations

While Prometheus is typically deployed with a short local retention period (15 days by default), VMware allows you to send Prometheus data to Tanzu Observability for long-term data retention (up to 18 months at full granularity).

Just announced at VMworld 2021, VMware has added artificial intelligence and machine learning (AI/ML) root cause capabilities…

Tanzu Observability AI Powered Root Cause Analysis

…and created an integration between Tanzu Observability and vRealize Operations Cloud.

Through this integration, developers and SREs can now view vRealize Operations Cloud metrics alongside all the metrics, histograms, and traces collected by Tanzu Observability from other sources for a more holistic view of business-critical applications and infrastructure.

If you are attending VMworld, check out the sessions below to learn more about Tanzu Observability.

  • APP1308: Observability for Modern Application and Kubernetes Environments
  • APP2648: Implement Observability for Kubernetes Clusters and Workloads in Minutes
  • VI2630: Best Practices and Reference Framework for Implementing Observability
  • UX2551: Move from Traditional Monitoring to Observability and SRE – Design Studio
  • VMTN2810: Lost in Containers? Enhance Observability with Actionable Visualization
  • 2965: Kubernetes Cluster Operations, Monitoring and Observability
  • 2957: Build a Data Analytics Platform in Minutes Using Deployment Blueprints
  • APP2677: Meet the Experts: VMware Tanzu Observability by Wavefront
  • VMTN3230: Observe Application internals Holistically
  • VI1448: Take a Modern Approach to Achieve Application Resiliency
  • APP1319: Transforming Customer Experiences with VMware’s App Modernization Platform

Integration with other Tanzu Products

Tanzu Observability is fully integrated within the Tanzu family, with OOTB integrations with the other Tanzu products.

Kubernetes Monitoring in vRealize Operations

Tanzu Observability provides “Kubernetes Observability” and OOTB integrations with RedHat OpenShift, Azure Kubernetes Service (AKS), Amazon EKS and Google GKE for example.

Tanzu Observability Kubernetes Monitoring

vRealize Operations (vROps) is also able to monitor multiple Kubernetes environments like VMware Tanzu Kubernetes Grid, RedHat OpenShift, Amazon EKS, Azure AKS or Google GKE. That is made possible with the vROps Management Pack for Kubernetes.

Using the vRealize Operations Management Pack for Kubernetes (requires vROps 8.1 or later), you can monitor, troubleshoot, and optimize capacity management for Kubernetes clusters. Below are some of the additional capabilities that this management pack delivers:

  • Auto-discovery of Tanzu Kubernetes Grid (TKG) or Tanzu Mission Control (TMC) Kubernetes clusters.
  • Complete visualization of Kubernetes cluster topology, including namespaces, clusters, replica sets, nodes, pods, and containers.
  • Performance monitoring for Kubernetes clusters.
  • Out-of-the-box dashboards for Kubernetes constructs, which include inventory and configuration.
  • Multiple alerts to monitor the Kubernetes clusters.
  • Mapping Kubernetes nodes with virtual machine objects.
  • Report generation for capacity, configuration, and inventory metrics for clusters or pods.

vRealize Operations K8s Monitoring

Note: Kubernetes monitoring is available in vRealize Operations Advanced.

There is also a Prometheus integration that enables vRealize Operations Manager to retrieve metrics directly from Prometheus.


Note: vRealize Operations can also integrate with your existing application performance management systems. vROps offers integrations with App Dynamics, DataDog, Dynatrace and New Relic.

Conclusion

There are different options available within the VMware Tanzu and vRealize portfolios when it comes to Kubernetes operations, monitoring and observability.

Depending on your current needs and toolset you’ll have different options and integration possibilities. 

VMware’s portfolio gives you the choice to use open-source software like Grafana/Prometheus, leverage an existing vRealize Operations deployment or to get an enterprise-grade observability and analytics platform like Tanzu Observability.

If you are looking for an end-to-end monitoring stack aka 360-degree visibility for your K8s environments and clouds, VMware Tanzu and the vRealize Suite give you the following products:

  1. Applications – Tanzu Observability
  2. Kubernetes Cluster – Tanzu Observability, vRealize Operations, vRealize Network Insight, vRealize Log Insight
  3. Network Layer – vRealize Operations, vRealize Network Insight, vRealize Log Insight
  4. Virtualization Layer – vRealize Operations, vRealize Network Insight, vRealize Log Insight

 

VMworld 2021 – Summary of VMware Projects


On day 1 of VMworld 2021 we have heard and seen a lot of super exciting announcements. I believe everyone is excited about all the news and innovations VMware has presented so far.

I’m not going to summarize all the news from day 1 or day 2 but thought it might be helpful to have an overview of all the VMware projects that have been mentioned during the general session and solution keynotes.

Project Cascade

VMware Project Cascade

Project Cascade will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.  Project Cascade will be built on an open foundation, with the open-sourced VM Operator as the first milestone delivery for Project Cascade that enables VM services on VMware Cloud.

VMworld 2021 session: Solution Keynote: The VMware Multi-Cloud Computing Infrastructure Strategy of 2021 [MCL3217]

Project Capitola

VMware Project Capitola

Project Capitola is a software-defined memory implementation that will aggregate tiers of different memory types such as DRAM, PMEM, NVMe and other future technologies in a cost-effective manner, to deliver a uniform consumption model that is transparent to applications.

VMworld 2021 session: Introducing VMware Project Capitola: Unbounding the ‘Memory Bound’ [MCL1453] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Ensemble

VMware Project Ensemble

Project Ensemble integrates and automates multi-cloud management with vRealize. This means that all the different VMware cloud management capabilities—self-service, elasticity, metering, and more—are in one place. You can access all the data, analytics, and workflows to easily manage your cloud deployments at scale.

VMworld 2021 session: Introducing Project Ensemble Tech Preview [MCL1301]

Project Arctic

VMware Project Arctic

Project Arctic is “the next evolution of vSphere” and is about bringing your own hardware while taking advantage of VMware Cloud offerings to enable a hybrid cloud experience. Arctic natively integrates cloud connectivity into vSphere and establishes hybrid cloud as the default operating model.

VMworld 2021 session: What’s New in vSphere [APP1205] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Monterey

VMware Project Monterey

Project Monterey was announced in the VMworld 2020 keynote. It is about SmartNICs that will redefine the data center with decoupled control and data planes for management, networking, storage and security for VMware ESXi hosts and bare-metal systems.

VMworld 2021 session: 10 Things You Need to Know About Project Monterey [MCL1833] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Iris

I don’t remember anymore which session mentioned Project Iris but it is about the following:

Project Iris discovers and analyzes an organization’s full app portfolio; recommends which apps to rehost, replatform, or refactor; and enables customers to adapt their own transformation journey for each app, line of business, or data center.

Project Pacific

Project Pacific was announced at VMworld 2019. It is about re-architecting vSphere to integrate and embed Kubernetes and is known as “vSphere with Tanzu” (or TKGS) today. In other words, Project Pacific transformed vSphere into a Kubernetes-native platform with a Kubernetes control plane integrated directly into ESXi and vCenter. Pacific is part of the Tanzu portfolio.

VMworld 2019 session: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future [HBI4937BE]

Project Santa Cruz

VMware Project Santa Cruz

Project Santa Cruz is a new integrated offering from VMware that adds edge compute and SD-WAN together to give you a secure, scalable, zero touch edge run time at all your edge locations. It connects your edge sites to centralized management planes for both your networking team and your cloud native infrastructure team. This solution is OCI compatible: if your app runs in a container, it can run on Santa Cruz.

VMworld 2021 session: Solution Keynote: What’s Next? A Look inside VMware’s Innovation Engine [VI3091]

Project Dawn Patrol

Project Dawn Patrol

So far, Project Dawn Patrol was only mentioned during the general session. “It will give you full visibility with a map of all your cloud assets and their dependencies”, Dormain Drewitz said.

VMworld 2021 session: General Session: Accelerating Innovation, Strategies for Winning Across Clouds and Apps [GEN3103]

Project Radium

VMware Project Radium

Last year VMware introduced vSphere Bitfusion, which allows shared access to a pool of GPUs over a network. Project Radium expands the feature set of Bitfusion to other architectures and will support AMD, Graphcore, Intel, Nvidia and other hardware vendors for AI/ML workloads.

VMworld 2021 session: Project Radium: Bringing Multi-Architecture compute to AI/ML workloads [VI1297]

Project IDEM

IDEM has been described as an “easy to use management automation technology”.

VMworld 2021 session: Solution Keynote: What’s Next? A Look inside VMware’s Innovation Engine [VI3091] and Next-Generation SaltStack: What Idem Brings to SaltStack [VI1865]

Please comment below or let me know via Twitter or LinkedIn if I missed a new or relevant VMware project. 😉

Must Watch VMworld Multi-Cloud Sessions

I recently wrote a short blog about some of the sessions I recommend to customers, partners and friends.

If you would like to know more about the VMware multi-cloud strategy and vision, have a look at some of the sessions below:

VMworld 2021 Must Watch Sessions

 

VMworld 2021 – My Content Catalog and Session Recommendation


VMworld 2021 is going to happen from October 6-7, 2021 (EMEA). This year you can expect many sessions and presentations about the options you have when combining different products, which help you reduce complexity, provide more automation and therefore create less overhead.

Let me share my 5 personal favorite picks and also 5 recommended sessions based on the conversations I had with multiple customers this year.

My 5 Personal Picks

10 Things You Need to Know About Project Monterey [MCL1833]

Project Monterey was announced in the VMworld 2020 keynote. There has been tremendous work done since then. Hear Niels Hagoort and Sudhansu Jain talking about SmartNICs and how they will redefine the data center with decoupled control and data planes – for ESXi hosts and bare-metal systems. They are going to cover and demo the overall architecture and use cases!

Upskill Your Workforce with Augmented and Virtual Reality and VMware [VI1596]

Learn from Matt Coppinger how augmented reality (AR) and virtual reality (VR) are transforming employee productivity, and how these solutions can be deployed and managed using VMware technologies. Matt is going to cover the top enterprise use cases for AR/VR as well as the challenges you might face deploying these emerging technologies. Are you interested in how to architect and configure VMware technologies to deploy and manage the latest AR/VR technology, applications and content? If yes, then this session is also for you.

Addressing Malware and Advanced Threats in the Network [SEC2027] (Tech+ Pass Only)

I am very interested to learn more about cybersecurity. With Chad Skipper, VMware has an expert who can give insights into how the Network Detection and Response (NDR) capabilities of NSX Advanced Threat Prevention provide visibility, detection and prevention of advanced threats.

60 Minutes of Non-Uniform Memory Access (NUMA) 3rd Edition [MCL1853]

Learn more about NUMA from Frank Denneman. You are going to learn more about the underlying configuration of a virtual machine and discover the connection between the General-Purpose Graphics Processing Unit (GPGPU) and the NUMA node. You will also understand how your knowledge of NUMA concepts in your cluster can help developers by aligning the Kubernetes nodes to the physical infrastructure with the help of the VM Service.

Mount a Robust Defense in Depth Strategy Against Ransomware [SEC1287]

Are you interested to learn more about how to protect, detect, respond to and recover from cybersecurity attacks across all technology stacks, regardless of their purpose or location? Learn more from Amanda Blevins about the VMware solutions for end users, private clouds, public clouds and modern applications.

5 Recommended Sessions based on Customer Conversations

Cryptographic Agility: Preparing for Quantum Safety and Future Transition [VI1505]

A lot of work is needed to better understand cryptographic agility and how we can address and manage the expected challenges that come with quantum computing. Hear VMware’s engineers from the Advanced Technology Group talking about the requirements of crypto agility and VMware’s recent research work on post-quantum cryptography in the VMware Unified Access Gateway (UAG) project.

Edge Computing in the VMware Office of the CTO: Innovations on the Horizon [VI2484]

Let Chris Wolf give you some insight into VMware’s strategic direction in support of edge computing. He is going to talk about solutions that will drive down costs while accelerating the velocity and agility in which new apps and services can be delivered to the edge.

Delivering a Continuous Stream of More Secure Containers on Kubernetes [APP2574]

In this session you can see how to use two capabilities of VMware Tanzu Advanced, Tanzu Build Service and Tanzu Application Catalog, to feed a continuous stream of patched and compliant containers into your continuous delivery (CD) system. A must-attend session delivered by David Zendzian, the VMware Tanzu Global Field CISO.

A Modern Firewall For any Cloud and any Workload [SEC2688]

VMware NSX firewall reimagines East-West security by using a distributed, software-based approach to attach security policies to every workload in any cloud. Chris Kruegel gives you insights on how to stop lateral movement with advanced threat prevention (ATP) capabilities via IDS/IPS, sandboxing, NTA and NDR.

A Practical Approach for End-to-End Zero Trust [SEC2733]

Hear the VMware CTOs Shawn Bass, Pere Monclus and Scott Lundgren talk about a zero trust approach. Shawn and the others will discuss specific capabilities that will enable customers to achieve a zero trust architecture that is aligned to the NIST guidance and covers secure access for users as well as secure access to workloads.

Enjoy VMworld 2021! 🙂

 

The Rise of VMware Tanzu Service Mesh


My last article focused on application modernization and data portability in a multi-cloud world. I explained the value of the VMware Tanzu portfolio by mentioning a consistent infrastructure and consistent application platform approach, which ultimately delivers a consistent developer experience. I also dedicated a short section to Tanzu Service Mesh, which is only one part of the unified Tanzu control plane (besides Tanzu Mission Control and Tanzu Observability) for multiple Kubernetes clusters and clouds.

When you hear or see someone writing about TSM, you very soon get to the point where the so-called “Global Namespaces” (GNS) are mentioned, which have the magic power to stitch together hybrid applications that run in multiple clouds.

Believe me when I say that Tanzu Service Mesh (TSM) is rising and becoming the next superstar of the VMware portfolio. I think Joe Baguley would agree here. 😀

Namespaces

Before we start talking about Tanzu Service Mesh and the magical power of Global Namespaces, let us have a look at the term “Namespaces” first.

Kubernetes Namespace

Namespaces give you a way to organize clusters into virtual carved out sub-clusters, which can be helpful when different teams, tenants or projects share the same Kubernetes cluster. This form of a namespace provides a method to better share resources, because it ensures fair allocation of these resources with the right permissions.

So, using namespaces gives you a form of isolation so that developers never affect other project teams. Policies allow you to configure compute resources by defining resource quotas for CPU or memory utilization. This also ensures the performance of a specific namespace, its resources (pods, services etc.) and the Kubernetes cluster in general.

Although namespaces are separate from each other, they can communicate with each other. Network policies can be configured to create isolated and non-isolated pods. For example, a network policy can allow or deny all traffic coming from other namespaces.
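To illustrate these building blocks (a namespace, a resource quota and a default-deny network policy), here is a minimal sketch using the Kubernetes Python client; the names and quota values are arbitrary examples:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
net = client.NetworkingV1Api()

# A namespace as a carved-out "sub-cluster" for one team
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="team-blue")))

# A resource quota so this team cannot starve other namespaces
core.create_namespaced_resource_quota(
    namespace="team-blue",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-blue-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)

# A default-deny policy: pods in this namespace accept no ingress traffic
net.create_namespaced_network_policy(
    namespace="team-blue",
    body=client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="deny-all-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],
        ),
    ),
)
```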

Ellei Mei explained this in a very simple way in her article after Project Pacific had been made public in September 2019:

Think of a farmer who divides their field (cluster + cluster resources) into fenced-off smaller fields (namespaces) for different herds of animals. The cows in one fenced field, horses in another, sheep in another, etc. The farmer would be like operations defining these namespaces, and the animals would be like developer teams, allowed to do whatever they do within the boundaries they are allocated.

vSphere Namespace

The first time I heard of Kubernetes or vSphere Namespaces was in fact at VMworld 2019 in Barcelona. VMware then presented a new app-focused management concept. This concept described a way to model modern applications and all their parts, and we call this a vSphere Namespace today.

With Project Pacific (today known as vSphere with Tanzu or Tanzu Kubernetes Grid), VMware went one step further and extended the Kubernetes Namespace by adding more options for compute resource allocation, vMotion, encryption, high availability, backup & restore, and snapshots.

Rather than having to deal with each namespace and its containers, vSphere Namespaces (also called “guardrails” sometimes) can draw a line around the whole application and services including virtual machines.

Namespaces as the unit of management

With the re-architecture of vSphere and the integration of Kubernetes as its control plane, namespaces can be seen as the new unit of management.

Imagine that you might have thousands of VMs in your vCenter inventory that you needed to deal with. After you group those VMs into their logical applications, you may only have to deal with dozens of namespaces now.

If you need to turn on encryption for an application, you can just click a button on the namespace in vCenter and it does it for you. You don’t need to deal with individual VMs anymore.

vSphere Virtual Machine Service

With the vSphere 7 Update 2a release, VMware provided the “VM Service” that enables Kubernetes-native provisioning and management of VMs.

For many organizations, legacy applications do not become modern overnight; they become hybrid first before they are completely modernized. This means we have a combination of containers and virtual machines forming the application, not containers only. I also call this a hybrid application architecture when talking to my customers. For example, you may have a containerized application that uses a database hosted in a separate VM.

So, developers can use the existing Kubernetes API and a declarative approach to create VMs. No need to open a ticket anymore to request a virtual machine. We talk self-service here.
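As a rough sketch of this declarative, self-service idea, a VM could be requested through the Kubernetes API as a custom resource. The API group/version and the spec fields below are assumptions based on the open-sourced VM Operator project, so please verify them against the CRDs available in your environment:

```python
from kubernetes import client, config

config.load_kube_config()

# Assumed VirtualMachine custom resource (vmoperator.vmware.com); the class,
# image and storage class names are placeholders for this example.
vm = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",
    "kind": "VirtualMachine",
    "metadata": {"name": "db-vm", "namespace": "team-blue"},
    "spec": {
        "className": "best-effort-small",  # VM class (CPU/memory shape)
        "imageName": "ubuntu-20.04",       # image from the content library
        "storageClass": "vsan-default",    # placeholder storage class
        "powerState": "poweredOn",
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="vmoperator.vmware.com",
    version="v1alpha1",
    namespace="team-blue",
    plural="virtualmachines",
    body=vm,
)
```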

Tanzu Mission Control – Namespace Management

Tanzu Mission Control (TMC) is a VMware Cloud (SaaS) service that provides a single control point for multiple teams to remove the complexities from managing Kubernetes cluster across multiple clouds.

One of the ways to organize and view your Kubernetes resources with TMC is by the creation of “Workspaces”.

Workspaces allow you to organize your namespaces into logical groups across clusters, which helps to simplify management by applying policies at a group level. For example, you could apply an access policy to an entire group of clusters (from multiple clouds) rather than creating separate policies for each individual cluster.

Think about backup and restore for a moment. TMC and the concept of workspaces allow you to back up and restore data resources in your Kubernetes clusters on a namespace level.

Management and operations with a new application view!

FYI, VMware announced the integration of Tanzu Mission Control and Tanzu Service Mesh in December 2020.

Service Mesh

A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds. 

This is the moment when you have to think about the connectivity and communication between your app’s microservices.

One of the main ideas and features behind a service mesh was to provide service-to-service communication for distributed applications running in multiple Kubernetes clusters hosted in different private or public clouds.

The number of Kubernetes service meshes has rapidly increased over the last few years and has gotten a lot of hype. No wonder why there are different service mesh offerings around:

  • Istio
  • Linkerd
  • Consul
  • AWS App Mesh
  • OpenShift Service Mesh by Red Hat
  • Open Service Mesh AKS add-on (currently preview on Azure)

Istio is probably the most famous one on this list. For me, it is definitely the one my customers look and talk about the most.

Service mesh brings a new level of connectivity between services. With service mesh, we inject a proxy in front of each service; in Istio, for example, this is done using a “sidecar” within the pod.

Istio’s architecture is divided into a data plane based on Envoy (the sidecar) and a control plane, that manages the proxies. With Istio, you inject the proxies into all the Kubernetes pods in the mesh.

As you can see in the image, the proxy sits in front of each microservice and all communications are passed through it. When a proxy talks to another proxy, then we talk about a service mesh. Proxies also handle traffic management, errors and failures (retries) and collect metrics for observability purposes.

Challenges with Service Mesh

The thing with service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own.

The installation and configuration of Istio is not that easy and it takes time. Besides that, Istio is typically tied to a single Kubernetes cluster and therefore a single Istio data plane – and organizations usually prefer to keep their Kubernetes clusters independent from each other. This ties security and policies to a Kubernetes cluster or cloud vendor, which creates silos.

Istio supports a so-called multi-cluster deployment with one service mesh stretched across Kubernetes clusters, but you’ll end up with a stretched Istio control plane, which eliminates the independence of each cluster.

So, a lot of customers also talk about better and easier manageability without dependencies between clouds and different Kubernetes clusters from different vendors.

That’s the moment when Tanzu Service Mesh becomes very interesting. 🙂

Tanzu Service Mesh (formerly known as NSX Service Mesh)

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh, built on top of a VMware-administrated Istio version.

When onboarding a new cluster on Tanzu Service Mesh, the service deploys a curated version of Istio signed and supported by VMware. This Istio deployment is the same as the upstream Istio in every way, but it also includes an agent that communicates with the Tanzu Service Mesh global control plane. Istio installation is not the most intuitive, but the onboarding process of Tanzu Service Mesh simplifies the process significantly.

Overview of Tanzu Service Mesh

The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

Global Namespaces (GNS)

Yep, another kind of a namespace, but the most exciting one! 🙂

A Global Namespace is a unique concept in Tanzu Service Mesh and connects the resources and workloads that form the application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions that are part of it, no matter where they are located:

  • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace
  • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
  • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
  • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS).
  • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.

Use Cases

The following diagram represents the global namespace concept and other pieces in a high-level architectural view. The components of one application are distributed in two different Kubernetes clusters: one of them is on-premises and the other in a public cloud. The Global Namespace creates a logical view of these application components and provides a set of basic services for the components.

Global Namespaces

If we take application continuity as another example for a use case, we would deploy an app in more than one cluster and possibly in a remote region for disaster recovery (DR), with a load balancer between the locations to direct traffic to both clusters. This would be an active-active scenario. With Tanzu Service Mesh, you could group the clusters into a Global Namespace and program it to automatically redirect traffic in case of a failure. 

In addition to the use case and support for multi-zone and multi-region high availability and disaster recovery, you can also provide resiliency with automated scaling based on defined Service-Level Objectives (SLO) for multi-cloud apps.

VMware Modern Apps Connectivity Solution  

In May 2021 VMware introduced a new solution that brings together the capabilities of Tanzu Service Mesh and NSX Advanced Load Balancer (NSX ALB, formerly Avi Networks) – not only for containers but also for VMs. While Istio’s Envoy only operates on layer 7, VMware provides layer 4 to layer 7 services with NSX (part of TSM) and NSX ALB, which includes L4 load balancing, ingress controllers, GSLB, WAF and end-to-end service visibility. 

This solution speeds the path to app modernization with connectivity and better security across hybrid environments and hybrid app architectures.

Multiple disjointed products, no end-to-end observability


Summary

One thing I can say for sure: The future for Tanzu Service Mesh is bright!

Many customers are looking for ways for offloading security (encryption, authentication, authorization) from an application to a service mesh.

One great example and use case from the financial services industry is crypto agility, where a “crypto service mesh” (a specialized service mesh) could be part of a new architecture, which provides quantum-safe certificates.

And when we offload encryption, calculation, authentication etc., then we may have other use cases for SmartNICs and Project Monterey.

To learn more about service mesh and the capabilities of Tanzu Service Mesh, I can recommend Service Mesh for Dummies, written by Niran Even-Chen, Oren Penso and Susan Wu.

Thank you for reading!

 

Application Modernization and Multi-Cloud Portability with VMware Tanzu


It was 2019 when VMware announced Tanzu and Project Pacific. A lot has happened since then and almost everyone is talking about application modernization nowadays. With my strong IT infrastructure background, I had to learn a lot of new things to survive initial conversations with application owners, developers and software architects. And at the same time VMware’s Kubernetes offering grew and became very complex – not only for customers, but for everyone, I believe. 🙂

I already wrote about VMware’s vision with Tanzu: To put a consistent “Kubernetes grid” over any cloud.

This is the simple message and value hidden behind the much larger topics when discussing application modernization and application/data portability across clouds.

The goal of this article is to give you a better understanding about the real value of VMware Tanzu and to explain that it’s less about Kubernetes and the Kubernetes integration with vSphere.

Application Modernization

Before we can talk about the modernization of applications or the different migration approaches like:

  • Retain – Optimize and retain existing apps, as-is
  • Rehost/Migration (lift & shift) – Move an application to the public cloud without making any changes
  • Replatform (lift and reshape) – Put apps in containers and run in Kubernetes. Move apps to the public cloud
  • Rebuild and Refactor – Rewrite apps using cloud native technologies
  • Retire – Retire traditional apps and convert to new SaaS apps

…we need to have a look at the palette of our applications:

  • Web Apps – Apache Tomcat, Nginx, Java
  • SQL Databases – MySQL, Oracle DB, PostgreSQL
  • NoSQL Databases – MongoDB, Cassandra, Prometheus, Couchbase, Redis
  • Big Data – Splunk, Elasticsearch, ELK stack, Greenplum, Kafka, Hadoop

In an app modernization discussion, we very quickly start to classify applications as microservices or monoliths. From an infrastructure point of view you look at apps differently and call them “stateless” (web apps) or “stateful” (SQL, NoSQL, Big Data) apps.

And with Kubernetes we are trying to overcome the challenges that come with stateful applications. Related to app modernization, the following questions come up:

  • What does modernization really mean?
  • How do I define “modernization”?
  • What is the benefit by modernizing applications?
  • What are the tools? What are my options?

What has changed? Why is everyone talking about modernization? Why are we talking so much about Kubernetes and cloud native? Why now?

To understand the benefits (and challenges) of app modernization, we can start looking at the definition from IBM for a “modern app”:

“Application modernization is the process of taking existing legacy applications and modernizing their platform infrastructure, internal architecture, and/or features. Much of the discussion around application modernization today is focused on monolithic, on-premises applications—typically updated and maintained using waterfall development processes—and how those applications can be brought into cloud architecture and release patterns, namely microservices.”

Modern applications are collections of microservices, which are light, fault tolerant and small. Microservices can run in containers deployed on a private or public cloud.

This means that a modern application is something that can adapt to any environment and perform equally well.

Note: App modernization can also mean that you must move your application from .NET Framework to .NET Core.

I have a customer, that is just getting started with the app modernization topic and has hundreds of Windows applications based on the .NET Framework. Porting an existing .NET app to .NET Core requires some work, but is the general recommendation for the future. This would also give you the option to run your .NET Core apps on Windows, Linux and macOS (and not only on Windows).

A modern application is something that can run on bare metal, VMs, the public cloud and containers, and that easily integrates with any component of your infrastructure. It must be something that is elastic. Something that can grow and shrink depending on the load and usage. Since it is something that needs to be able to adapt, it must be agile and therefore portable.

Cloud Native Architectures and Modern Designs

If I ask my VMware colleagues from our so-called MAPBU (Modern Application Platform Business Unit) how customers can achieve application portability, the answer is always: “Cloud Native!”

Many organizations and people see cloud native as going to Kubernetes. But cloud native is so much more than the provisioning and orchestration of containers with Kubernetes. It’s about collaboration, DevOps, internal processes and supply chains, observability/self-healing, continuous delivery/deployment and cloud infrastructure.

There are so many definitions around “cloud native” that Kamal Arora from Amazon Web Services and others wrote the book “Cloud Native Architectures”, which describes a maturity model. This model helps you understand that cloud native is more of a journey than a restrictive definition.

Cloud Native Maturity Model

The adoption of cloud services and an application-centric design are very important, but the book also mentions that security and scalability rely on automation. This, for example, can bring the requirement for Infrastructure as Code (IaC).
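
To make the IaC requirement a bit more concrete, here is a minimal sketch of what declaring infrastructure as code can look like. It uses Pulumi’s Python SDK purely as an illustration – the tool choice, the “app-artifacts” bucket and the assumption of an existing Pulumi project with AWS credentials are mine and not part of the Tanzu portfolio discussed here:

  # Illustrative IaC sketch using Pulumi's Python SDK (one of several possible IaC tools).
  # Assumes an existing Pulumi project/stack and configured AWS credentials.
  import pulumi
  import pulumi_aws as aws

  # Declare the desired state: a versioned S3 bucket.
  # Running `pulumi up` creates or updates the bucket to match this declaration.
  bucket = aws.s3.Bucket(
      "app-artifacts",
      versioning=aws.s3.BucketVersioningArgs(enabled=True),
  )

  # Export the bucket name so other automation can consume it.
  pulumi.export("bucket_name", bucket.id)

The point is the declarative model: you describe the desired state in versioned, reviewable code and let the tooling reconcile the actual infrastructure – exactly the kind of automation the maturity model asks for.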

In the past, virtualization – moving from bare-metal to vSphere – didn’t force organizations to modernize their applications. The application didn’t need to change and VMware abstracted and emulated the bare-metal server. So, the transition (P2V) of an application was very smooth and not complicated.

And this is what has changed today. We have new architectures, new technologies and new clouds running with different technology stacks. We have Kubernetes as a framework, which requires applications to be redesigned for these platforms.

That is the reason why enterprises have to modernize their applications.

One of the “five R’s” mentioned above is the lift and shift approach. If you don’t want or need to modernize some of your applications, but still want to move them to the public cloud in an easy, fast and cost-efficient way, have a look at VMware’s Hybrid Cloud Extension (HCX).

In this article I focus more on the replatform and refactor approaches in a multi-cloud world.

Kubernetize and productize your applications

Assuming that you also define Kubernetes as the standard for orchestrating the containers your microservices run in, the next decision is usually about the Kubernetes “product” (on-premises, OpenShift, public cloud).

Looking at the current CNCF Cloud Native Landscape, we can count over 50 storage vendors and over 20 network vendors providing cloud native storage and networking solutions for containers and Kubernetes.

Talking to my customers, most of them mention the storage and network integration as one of their big challenges with Kubernetes. Their concerns are about performance, resiliency, different storage and network patterns, automation, data protection/replication, scalability and cloud portability.

Why do organizations need portability?

There are many use cases and requirements for which portability (infrastructure independence) becomes relevant. Maybe it’s about a hardware refresh or a data center evacuation, about avoiding vendor/cloud lock-in, about insufficient performance of the current infrastructure, or about dev/test environments where resources are deployed and consumed on demand.

Multi-Cloud Application Portability with VMware Tanzu

To explore the value of Tanzu, I would like to start by setting the scene with the following customer use case:

In this case the customer is following a cloud-appropriate approach to define which cloud is the right landing zone for their applications. They decided to develop new applications in the public cloud and use the native services from Azure and AWS. The customer still has hundreds of legacy applications (monoliths) on-premises and hasn’t decided yet whether they want to follow a “lift and shift and then modernize” approach to migrate a number of applications to the public cloud.

Multi-Cloud App Portability

But some of their application owners already gave the feedback that their applications are not allowed to be hosted in the public cloud, have to stay on-premises and need to be modernized locally.

At the same time, the IT architecture team receives feedback from other application owners that the journey to the public cloud is great on paper but brings huge operational challenges with it. So IT operations asks the architecture team if they can do something about that problem.

The cloud operations teams for Azure and AWS deliver a different quality of service, changes and deployments take longer on one of the public clouds, and they have problems with overlapping networks, different storage performance characteristics and different APIs.

Another challenge is role-based access to the different clouds, Kubernetes clusters and APIs. There is no central log aggregation and no observability (intelligent monitoring & alerting). Traffic distribution and load balancing are further items on this list.

Because of the feedback from operations to architecture, IT engineering received the task of defining a multi-cloud strategy that solves this operational complexity.

Note: These are the typical multi-cloud challenges, where clouds become the new silos and enterprises have different teams with different expertise using different management and security tools.

This is when VMware’s multi-cloud approach with Tanzu becomes very interesting for such customers.

Consistent Infrastructure and Management

The first discussion point here would be the infrastructure. It’s important that the different private and public clouds are not handled and seen as silos. VMware’s approach is to connect all the clouds with the same underlying technology stack based on VMware Cloud Foundation.

Besides the fact that lift and shift migrations become very easy with this approach, it brings two very important advantages for containerized workloads and the cloud infrastructure in general. It solves the challenge of navigating the huge storage and networking ecosystem available for Kubernetes workloads by using vSAN and NSX Data Center in any of the existing clouds. Storage, networking and security are now integrated and consistent.

For existing workloads running natively in public clouds, customers can use NSX Cloud, which uses the same management plane and control plane as NSX Data Center. That’s another major step forward.

Consistent infrastructure enables consistent operations and automation.

Consistent Application Platform and Developer Experience

Looking at an organization’s application and container platforms, consistent infrastructure is not a hard requirement, but it is obviously very helpful in terms of operational and cost efficiency.

To provide a consistent developer experience and to abstract the underlying application or Kubernetes platform, you would follow the same VMware approach as always: to put a layer on top.

Here the solution is called Tanzu Kubernetes Grid (TKG), which provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed and supported by VMware.

A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes open-source software that is built and supported by VMware. In all the offerings, you provision and use Tanzu Kubernetes clusters in a declarative manner that is familiar to Kubernetes operators and developers. The different Tanzu Kubernetes Grid offerings provision and manage Tanzu Kubernetes clusters on different platforms, in ways that are designed to be as similar as possible, but that are subtly different.

VMware Tanzu Kubernetes Grid (TKG aka TKGm)

Tanzu Kubernetes Grid can be deployed across software-defined data centers (SDDC) and public cloud environments, including vSphere, Microsoft Azure and Amazon EC2. I would assume that Google Cloud is a roadmap item.

TKG allows you to run Kubernetes with consistency and makes it available to your developers as a utility, just like the electricity grid. TKG provides the services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

This TKG version is also known as TKGm for “TKG multi-cloud”.

VMware Tanzu Kubernetes Grid Service (TKGS aka vSphere with Tanzu)

TKGS is the option vSphere admins want to hear about first, because it allows you to turn a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. TKGS is what was known as “Project Pacific” in the past.

Once enabled on a vSphere cluster, vSphere with Tanzu creates a Kubernetes control plane directly in the hypervisor layer. You can then run Kubernetes containers by deploying vSphere Pods, or you can create upstream Kubernetes clusters through the VMware Tanzu Kubernetes Grid Service and run your applications inside these clusters.
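
To illustrate the declarative provisioning mentioned above, here is a minimal sketch that submits a TanzuKubernetesCluster resource to the Supervisor Cluster using the official Kubernetes Python client. The namespace, VM class, storage class and Kubernetes version are placeholders, and the spec layout assumes the v1alpha1 schema of the run.tanzu.vmware.com API group – check which API versions your environment offers before using anything like this:

  # Minimal sketch: declaratively requesting a Tanzu Kubernetes cluster from the
  # Tanzu Kubernetes Grid Service (assumes the v1alpha1 schema; all names are placeholders).
  from kubernetes import client, config

  config.load_kube_config()  # kubeconfig pointing at the Supervisor Cluster

  tkc = {
      "apiVersion": "run.tanzu.vmware.com/v1alpha1",
      "kind": "TanzuKubernetesCluster",
      "metadata": {"name": "demo-cluster", "namespace": "demo-namespace"},
      "spec": {
          "distribution": {"version": "v1.21"},
          "topology": {
              "controlPlane": {"count": 1, "class": "best-effort-small", "storageClass": "demo-storage-policy"},
              "workers": {"count": 3, "class": "best-effort-small", "storageClass": "demo-storage-policy"},
          },
      },
  }

  # Submit the desired state; the Tanzu Kubernetes Grid Service reconciles it into a running cluster.
  client.CustomObjectsApi().create_namespaced_custom_object(
      group="run.tanzu.vmware.com",
      version="v1alpha1",
      namespace="demo-namespace",
      plural="tanzukubernetesclusters",
      body=tkc,
  )

The same declarative pattern (applying a cluster manifest and letting the platform reconcile it) is what keeps the experience consistent for Kubernetes operators and developers across the different TKG offerings.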

VMware Tanzu Mission Control (TMC)

In the use case above, we have AKS and EKS for running Kubernetes clusters in the public cloud.

The VMware solution for multi-cluster Kubernetes management across clouds is called Tanzu Mission Control, which is a centralized management platform for the consistency and security the IT engineering team was looking for.

Available through VMware Cloud Services as a SaaS offering, TMC provides IT operators with a single control point to give their developers self-service access to Kubernetes clusters.

TMC also provides cluster lifecycle management for TKG clusters across environments such as vSphere, AWS and Azure.

It allows you to bring the clusters you already have in the public clouds or other environments (with Rancher or OpenShift, for example) under one roof by attaching any conformant Kubernetes cluster.

Not only do you gain global visibility across clusters, teams and clouds, but you also get centralized authentication and authorization, consistent policy management and data protection functionalities.

VMware Tanzu Observability by Wavefront (TO)

Tanzu Observability extends the basic observability provided by TMC with enterprise-grade observability and analytics.

Wavefront by VMware helps Tanzu operators, DevOps teams, and developers get metrics-driven insights into the real-time performance of their custom code, Tanzu platform and its underlying components. Wavefront proactively detects and alerts on production issues and improves agility in code releases.

TO is also a SaaS-based platform that can handle the high-scale requirements of cloud native applications.
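
As a small illustration of how application metrics typically reach Tanzu Observability: workloads or agents send metrics in the Wavefront data format, usually through a Wavefront proxy. The sketch below pushes a single metric over a plain socket; the proxy hostname is a placeholder and port 2878 is, to my knowledge, the proxy’s default metrics port, so adjust both for your environment:

  # Minimal sketch: sending one metric in Wavefront data format to a Wavefront proxy.
  # "wavefront-proxy.example.com" is a placeholder; 2878 is assumed to be the
  # proxy's default metrics port.
  import socket
  import time

  # Format: <metricName> <value> [<timestamp>] source=<source> [pointTags]
  metric_line = "demo.app.requests.count 42 {ts} source=demo-host env=test\n".format(ts=int(time.time()))

  with socket.create_connection(("wavefront-proxy.example.com", 2878), timeout=5) as sock:
      sock.sendall(metric_line.encode("utf-8"))

In practice you would use the Wavefront SDKs or one of the many integrations rather than raw sockets, but the simple line format shows what the proxy ingests.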

VMware Tanzu Service Mesh (TSM)

Tanzu Service Mesh, formerly known as NSX Service Mesh, provides consistent connectivity and security for microservices across all clouds and Kubernetes clusters. TSM can be installed in TKG clusters and third-party Kubernetes-conformant clusters.

Organizations that are using or looking at the popular Calico cloud native networking option for their Kubernetes ecosystem often consider an integration with Istio (Service Mesh) to connect services and to secure the communication between these services.

The combination of Calico and Istio can be replaced by TSM, which is built on VMware NSX for networking and uses an Istio data plane abstraction. This version of Istio is signed and supported by VMware and is the same as the upstream version. TSM brings enterprise-grade support for Istio and a simplified installation process.

One of the primary constructs of Tanzu Service Mesh is the concept of a Global Namespace (GNS). GNS allows developers using Tanzu Service Mesh, regardless of where they are, to connect application services without having to specify (or even know) any underlying infrastructure details, as all of that is done automatically. With the power of this abstraction, your application microservices can “live” anywhere, in any cloud, allowing you to make placement decisions based on application and organizational requirements—not infrastructure constraints.

Note: On the 18th of March 2021 VMware announced the acquisition of Mesh7 and the integration of Mesh7’s contextual API behavior security solution with Tanzu Service Mesh to simplify DevSecOps.

Tanzu Editions

The VMware Tanzu portfolio comes with three different editions: Basic, Standard and Advanced.

Tanzu Basic enables the straightforward implementation of Kubernetes in vSphere so that vSphere admins can leverage familiar tools used for managing VMs when managing clusters = TKGS

Tanzu Standard provides multi-cloud support, enabling Kubernetes deployment across on-premises, public cloud, and edge environments. In addition, Tanzu Standard includes a centralized multi-cluster SaaS control plane for a more consistent and efficient operation of clusters across environments = TKGS + TKGm + TMC

Tanzu Advanced builds on Tanzu Standard to simplify and secure the container lifecycle, enabling teams to accelerate the delivery of modern apps at scale across clouds. It adds a comprehensive global control plane with observability and service mesh, consolidated Kubernetes ingress services, data services, container catalog, and automated container builds = TKG (TKGS & TKGm) + TMC + TO + TSM + MUCH MORE

Tanzu Data Services

Another way to reduce dependencies and avoid vendor lock-in would be Tanzu Data Services – a separate part of the Tanzu portfolio with on-demand caching (Tanzu GemFire), messaging (Tanzu RabbitMQ) and database software (Tanzu SQL & Tanzu Greenplum) products.

Bringing it all together

As always, I have tried to summarize and simplify things where needed, and I hope this helped you better understand the value and capabilities of VMware Tanzu.

There are so many more products available in the Tanzu portfolio, that help you to build, run, manage, connect and protect your applications.

If you would like to know more about application and cloud transformation, make sure to attend the 45-minute VMware event on March 31 (Americas) or April 1 (EMEA/APJ)!