Current vSphere Subscription Licensing Options


Update June 27, 2022: VMware announced vSphere+ and vSAN+

VMware is giving its customers more and more options to move towards a subscription-based licensing model. In general, companies are moving away from large pay-up-front deals and replacing them with recurring subscriptions. Vendors like VMware are investing heavily in the structures, processes and capabilities needed to offer subscription licenses (and SaaS services). Organizations see the benefits of subscription licensing, and this blog describes the current options if you want to move your vSphere perpetual licenses towards a vSphere subscription.

vSphere+ Advantage – vSphere Subscription Service

Since December 2021, VMware has offered vSphere Advantage in limited regions (aka Initial Availability).

vSphere Advantage gives you the flexibility to manage and operate your on-premises vSphere infrastructure while leveraging several VMware Cloud capabilities:

  • Transition from vSphere perpetual to vSphere subscription-based consumption for your vSphere deployments
  • Complete view of the globally distributed on-premises vSphere inventory
  • VMware-managed vCenter Servers (aka Project Arctic, not GA yet)

From a centralized VMware Cloud Console you can monitor events, alerts, capacity utilization, and the security posture of your vSphere infrastructure.

It is now also possible to plan the upgrade of your existing vSphere license keys and replace them with vSphere Advantage, which enables you to make use of keyless entitlements. Keyless entitlement makes it very easy for customers to stay compliant at all times and to understand their current subscription usage.

vSphere+ Operations

To start using vSphere Advantage, you must enable communication between your on-premises vCenter Server and VMware Cloud by using a vCenter Cloud Gateway. This requires an outbound connection (443, HTTPS) only; no VPN is needed.

 

Current vCenter Server Requirements:

  • The vCenter Server version must be 7.0 Update 3a and later
  • Configure the vCenter Server with a backup and restore mechanism
  • Dedicate at least three ESXi hosts for the vCenter Server. (Recommended)
  • The vCenter Server must be self-managed. It must manage its own ESXi hosts and virtual machines

Unsupported vCenter Configurations:

  • Ensure that the vCenter Server is not configured in High Availability mode
  • If the vCenter Server is configured in Enhanced Linked Mode (ELM), unlink it from ELM. See Repoint a vCenter Server Node to a New Domain. ELM is no longer required because with vSphere Advantage you can monitor your entire vSphere inventory in a single pane of glass.
  • Ensure that the vCenter Server is not configured with NSX for vSphere, vRealize Operations Manager, Site Recovery Manager, vCloud Suite, or vSAN.

Project Arctic – VMware-Managed vCenter (Roadmap)

VMware introduced Project Arctic at VMworld 2021; it is now called vSphere Advantage. While a hybrid cloud operating model is becoming the default for vSphere, it is not yet possible to let VMware manage your vCenter Servers. We can expect this capability to ship and become generally available sometime in 2022.

VMware Edge Compute Stack

Edge Compute Stack (ECS) is a purpose-built stack that is available in three different editions (information based on initial availability from VMworld 2021):

VMware Edge Compute Stack Editions

As you can see, each VMware Edge Compute Stack edition includes vSphere Enterprise+ (the hypervisor). Software-defined storage with vSAN is optional, but Tanzu for running containers is always part of each edition.

Note: The Edge Compute Stack includes vSphere subscription licenses.

Other Options

If you are running the VMware Cloud Foundation (VCF) stack and are looking for a managed service offering that includes subscription-based licensing, have a look at the following alternatives:

As you can see, you can start small with vSphere Advantage and grow big with VMware Cloud Universal as the final destination.

Multi-Cloud and Sovereign Cloud – Deploy the Right Data to the Right Cloud


According to Gartner, regulated industry customers (such as finance and healthcare) and governments are looking for digital borders. Companies in these sectors are looking to reduce vendor lock-in and single points of failure with their cloud providers, whose data centers sometimes are also outside their country (e.g., Switzerland based customer with an AWS data center in Frankfurt).

The market for cloud technology and services is currently dominated by US and Asian cloud providers and many (European) companies store their data in these regions. There are European regions and data centers, but the geopolitical and legal challenges, concerns about data control, industry compliance and sovereignty are driving the creation of new national clouds.

That is why Gartner sees sovereign clouds as one of the emerging technologies, which is currently at the start of the August 2021 published hype cycle:

These are the emerging technologies in the 2021 Hype Cycle | IT-Markt

Image Source: https://www.it-markt.ch/news/2021-08-27/das-sind-die-aufstrebenden-technologien-im-hype-cycle-2021

Use Case 1 – Swiss Federal Administration

As an example and first use case I would mention the Swiss federal administration, which doesn’t see the need for an independent technical infrastructure under public law.

In June 2021, they published a statement announcing that the following cloud providers had been notified that they would become part of the federal administration's initial multi-cloud architecture:

  • Amazon Web Services (AWS)
  • IBM
  • Microsoft
  • Oracle
  • Alibaba

There are several reasons (pricing, market share, local data center availability) that led to the decision to build a multi-cloud architecture with these cloud providers. But it was interesting to read that the government did an assessment and concluded that no technically independent infrastructure is needed: there is no need for a local sovereign cloud.

This means that they want to keep their existing data centers to provide infrastructure and data sovereignty.

Interestingly, the Swiss confederation is exploring initiatives for secure and trustworthy data infrastructure for Europe and is examining participation in GAIA-X.

Use Case 2 – Current Sovereign Cloud Providers

There are other examples where organizations and governments saw the need for a sovereign cloud. Having a public cloud provider's data center in the same country does not necessarily make it a sovereign cloud per se. Hyperscale clouds often rely on non-domestic resources to maintain their data centers or provide customer support.

Governments and regulated industries say that you need domestic resources to provide a true sovereign cloud.

A good example here is the UK government, which has chosen the provider UKCloud; UKCloud delivers a consistent experience that spans the edge, private cloud and sovereign cloud.

Another VMware sovereign cloud provider is AUCloud, which provides IaaS to the Australian government, defense, defense industries and Critical National Industry (CNI) communities.

The third example I would like to highlight is Saudi Telecom Company (STC), which brings sovereign cloud services to Saudi Arabia.

What do UKCloud, AUCloud and STC have in common? They all joined the pretty new VMware Sovereign Cloud initiative and built their sovereign clouds based on VMware technology.

Use Case 3 – Cloud Act

Another motivation for a sovereign cloud could be the Cloud Act, which is a U.S. law that gives American authorities unrestricted access to the data of American IT cloud providers. It does not matter where the data is effectively stored. In the event of a criminal prosecution, the authorities have a free hand and do not even have to notify the data owners.

What does this mean for cloud users? Because of the Cloud Act, they cannot be sure whether, when, and to what extent their data or the data of their customers will be read by foreign authorities.

Use Case 4 – GAIA-X

Let me quote the official explanation of GAIA-X:

The architecture of Gaia-X is based on the principle of decentralization. Gaia-X is the result of many individual data owners (users) and technology players (providers) – all adopting a common standard of rules and control mechanisms – the Gaia-X standard.

Together, we are developing a new concept of data infrastructure ecosystem, based on the values of openness, transparency, sovereignty, and interoperability, to enable trust. What emerges is not a new cloud physical infrastructure, but a software federation system that can connect several cloud service providers and data owners together to ensure data exchange in a trusted environment and boost the creation of new common data spaces to create digital economy.

Gaia-X aims to mitigate Europe's dependency on non-European providers, and there seems to be no pre-defined architecture or preferred vendor when it comes to the underlying cloud platform that GAIA-X sits on top of.

While one would believe that a sovereign cloud is mandatory for GAIA-X, it looks more like a cloud-agnostic data exchange platform hosted by European providers and customers.

I am curious how providers build, operate and maintain a sovereign cloud stack based on open-source software.

How real is the need for Sovereign Cloud?

If a company or government wants to keep, extend, and maintain their own local data centers, this is still a valid option of course. But the above examples showed that the need for sovereign clouds exists and that the global interest seems to be growing.

What is the VMware Sovereign Cloud Initiative?

In October 2021, VMware announced the VMware Sovereign Cloud initiative, in which it partners with cloud service providers to deliver sovereign cloud infrastructure, with cloud services on top, to customers in regulated industries.

To become a so-called VMware Sovereign Cloud Provider, partners must go through an assessment and meet specific requirements (framework) to show their capability to provide a sovereign cloud infrastructure.

VMware defines a sovereign cloud as one that:

  • Protects and unlocks the value of critical data (e.g., national data, corporate data, and personal data) for both private and public sector organizations
  • Delivers a national capability for the digital economy
  • Secures data with audited security controls
  • Ensures compliance with data privacy laws
  • Improves control of data by providing both data residency and data sovereignty with full jurisdictional control

VMware aims to help regulated industry and government customers to execute their cloud strategies by connecting them to VMware Sovereign Cloud Providers (like UKCloud, AUcloud, STC, Tietoevry, ThinkOn or OVHcloud).

Sovereign Cloud Providers in Switzerland

Currently, there is no official VMware sovereign cloud provider in Switzerland. We have a few strong VMware cloud provider partners as part of the VMware Cloud Provider Program (VCPP):

Let us come back to use case 1 with the Swiss federal administration. They are building a multi-cloud, and in Switzerland there would be at least 10 potential cloud service providers that could become an official VMware Sovereign Cloud Provider.

VMware Sovereign Cloud Borders 

Image Source: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/vmw-sovereign-cloud-solution-brief-customer.pdf

There are other Swiss providers who are building a sovereign cloud based on open-source technologies like OpenStack.

Hyperscalers like Microsoft or Google need to partner with local providers if they want to build a sovereign cloud and deliver services.

VMware already has 4,300+ partners in 120+ countries with strategic partnerships and the same technology stack, and some of them, as mentioned before, are already sovereign cloud providers.

VMware Sovereign Cloud initiative

Image Source: https://blogs.vmware.com/cloud/2021/10/06/vmware-sovereign-cloud/

What are the biggest challenges with a multi-cloud and a sovereign cloud infrastructure?

What do you think are the biggest challenges of an organization that builds a multi-cloud with different public cloud providers and sovereign clouds?

Let me list a few questions here:

  • How can I easily migrate my workloads to the public or sovereign cloud?
  • How long does it take to migrate my applications?
  • Which cloud is the right one for a specific workload?
  • Do I need to refactor some of my applications?
  • How can I consistently manage and operate 5 different public/sovereign cloud providers?
  • What if one of my cloud providers is no longer strategic? How can I build a cloud exit strategy?
  • How do I implement and maintain security?
  • What if I want to migrate workloads back from a public cloud to an on-premises (sovereign) cloud?
  • Which Kubernetes am I going to use in all these different clouds?
  • How do I manage and monitor all these different Kubernetes clusters, networking and security policies, create secure application communication between clouds and so on?
  • How do I control costs?

These are just a small number of questions, but I think it would take your organization or your cloud platform team a while to come up with a solution.

What is the VMware approach? Let me list some other articles of mine that help you to better understand the VMware multi-cloud approach:

Conclusion

Public cloud providers build local data centers and provide data residency. Sovereign clouds provide data sovereignty. Resident data may be accessed by a foreign authority while data sovereignty refers to data being subject to privacy laws and governance structures within the nation where that data is collected.

Controlling the location and access of data in the cloud has become an important task for CIOs and CISOs. I personally believe that sovereign clouds will not become important only in 2 or 3 years; they are already very important and relevant today, and we can expect growth in this area in the coming months.

My conclusion here is that sovereign clouds and public clouds are not competitors; they complement each other.

 

 

 

DevSecOps with VMware Tanzu – Intrinsic Security for a Modern Application Supply Chain


Intrinsic security is something we have heard a lot about from VMware in the past. It was mostly used to describe the strategy and capabilities behind the Carbon Black portfolio (EDR), complemented by the advanced threat prevention from NSX (NDR), which together form the VMware XDR vision.

I see similarities between intrinsic security and the workouts I am doing in the gym. My goal is to build more strength and power, and to become healthier in general. For additional muscle gain benefits and to be more time efficient, I have chosen compound exercises. I am not a fan of single muscle group training, which involves isolation exercises. Our body has a lot of joints for different movements, and I think it's just natural to use multiple muscle groups and joints during a specific exercise.

Therefore, when you perform compound exercises, you involve different muscles to complete the movement. This improves the intermuscular coordination of your muscles. In addition, as everyone would tell you, these exercises improve your core strength and let your body become a single unit.

While doing weight training, it is very important to use the proper technique and equipment. Otherwise, the risk for injuries and vulnerabilities increases.

This is what intrinsic security means for me! And I think this is very much relevant to understand when talking about DevSecOps.

Understanding DevSecOps

For VMware, talking to developers and talking about DevOps started in 2019, when they presented VMware Tanzu for the first time at VMworld. The ideas and innovation behind the name "Tanzu" should bring developers and IT operators closer together for collaboration.

DevOps is the combination of different practices, tools and philosophies that should help an organization deliver applications and services at a higher pace. In the example above it would mean that application developers and operations teams no longer work isolated in silos; they become one team, a single unit. But technology plays a very important role in supporting the success of this new mindset and culture!

DevOps is about efficiency and the automation of manual tasks or processes. You want to become fast, flexible and efficient. When you put security at the center of this, we start talking about DevSecOps. You want to know if one of your muscles or parts of your body becomes weak (defective) or vulnerable.

Tanzu DevSecOps Flow

Depending on where you are right now on this application modernization journey, doing DevSecOps could mean a huge cultural and fundamental change to how you develop applications and do IT operations.

For me, DevSecOps is not about bringing security tools together from different teams and technologies. If DevOps and DevSecOps mean that you must change your mindset, then it is maybe also about time to consider the importance of new technology choices.

If DevSecOps means that you put security in the center of a DevOps- or container-centric environment, then security must become an intrinsic part of a modern application supply chain.

The VMware Tanzu portfolio has a lot of products and services to bring developers, operations and security teams together.

Where do we start? We need to “shift left” and this means we need to integrate security already early in the application lifecycle.

Code – Spring Framework

Before you can deliver an application to your customer, you need to develop it, you need to code. Application frameworks are a very effective approach for developing more secure and optimized applications.

Frameworks help you write code faster and more efficiently. Not only can a framework save your developers a lot of coding effort, it also comes with pre-defined templates. These incorporate best practices and help you simplify the overall application architecture.

Why is this important? To achieve better security or a more secure cloud native application, it makes sense to standardize and automate. Automation is key for security. Standardization makes it easier to understand or reuse code. You can write all the code yourself, but the chances are high that someone else did parts of your work already. Less variability reduces complexity and therefore enhances security.

There is the open-source Spring Framework for example, which uses Java as the underlying language (or .NET for Steeltoe). Both projects are managed by VMware and millions of developers use them.

Tanzu Spring Steeltoe

What happens next? You would now run your continuous integration (CI) process (integration tests, unit tests) and then you are ready to package or build your application.

Build – Tanzu Build Service (TBS)

So, your code is now good for release. If you want to deploy your application to a Kubernetes environment, then you need a secure, portable and reproducible build that can be checked for security vulnerabilities, and you need an easy way to patch those vulnerabilities.

How are you going to build the container image that your application is packaged into? A lot of customers and vendors take a Dockerfile-based approach.

VMware recommends Tanzu Build Service (TBS), which uses Tanzu Buildpacks that are based on the open-source Cloud Native Buildpacks CNCF project to turn application source code into container images. So, no dockerfiles.

TBS is constantly looking for changes in your source code and then automatically builds an image based on that. This means with TBS you don’t need any advanced knowledge of container packaging formats or know how to optimally construct a container creation script for a given programming language.

Tanzu Build Service knows all the images you have built and understands all the dependencies and components you have used. If something changes, your image is going to be rebuilt automatically and then stored in a registry of your choice. More about the registry in a second.

Tanzu Build Service

What happens if a vulnerability comes out and one of your libraries, operating systems or components is affected? TBS would patch this vulnerability and all the affected downstream container images would be updated automatically.

Imagine how happy your CISO would be about this way of building secure container images! 🙂
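Tanzu Build Service is built on the open-source kpack project, so the build definition itself is a declarative Kubernetes resource. A minimal sketch of what such an image definition can look like is shown below; the registry path, Git URL, builder and service account names are placeholders, and the exact API version and fields may differ depending on your TBS release.

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app                                 # hypothetical application name
  namespace: build-service
spec:
  tag: harbor.example.com/apps/my-app          # target repository in your registry
  serviceAccountName: build-sa                 # service account holding registry credentials
  builder:
    kind: ClusterBuilder
    name: default                              # cluster builder provided by TBS
  source:
    git:
      url: https://github.com/example/my-app.git
      revision: main
```

Once a resource like this is applied, every new commit on the referenced branch (or a buildpack/stack update) should trigger a fresh image build automatically.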

Build – Harbor

We have now pushed our container image to a container repository, a so-called registry. VMware uses Harbor (open-source cloud native registry by VMware, donated to the CNCF in 2018) as an enterprise-grade storage for container images. Additionally, Harbor provides static analysis of vulnerabilities in images through open-source projects like Trivy and Clair.

Tanzu Build Service Harbor

We have now developed our applications and stored our packaged images in our Harbor registry. What else do we need?

Build – VMware Application Catalog (VAC)

Developers are not going to build everything themselves. Other services like databases or caching are needed to build the application as well, and there is plenty of well-known, pre-packaged open-source software freely available online. This brings additional security risks and gives malicious actors the opportunity to publish container images that contain vulnerabilities.

How can you mitigate this risk and reduce the chance for a critical application outage or breach?

In 2019, VMware acquired Bitnami, which delivers and maintains a catalog of 130+ pre-packaged and ready-to-use open-source application components, that are “continuously maintained and verifiably tested for use in production environments”.

Known as VMware Application Catalog (VAC, formerly Tanzu Application Catalog), this SaaS offering provides your organization with a customizable private collection of open-source software and services that can automatically be placed in your private container image registry, in this case your Harbor registry.

Example apps that are supported today:

| Language Runtimes | Databases  | App Components | Developer Tools | Business Apps |
|-------------------|------------|----------------|-----------------|---------------|
| Node.js           | MySQL      | Kafka          | Artifactory     | WordPress     |
| Python            | PostgreSQL | RabbitMQ       | Jenkins         | Drupal        |
| Ruby              | MariaDB    | TensorFlow     | Redmine         | Magento       |
| Java              | MongoDB    | Elasticsearch  | Harbor          | Moodle        |

How does it work?

VMware Application Catalog - How it works

There are two product features that I would like to highlight:

  • Build-time CVE scan reports for container images using Trivy
  • Build-time Antivirus scans for container images using ClamAV

Your application, built by Tanzu Build Service and VMware Application Catalog, is complete now, and stored in your Harbor registry. And since you use VAC, you also have your “marketplace” of applications, that is curated by a (security) team in your organization. 

If you want to see VAC in action, have a look at this Youtube video.

Note: Yes, VAC is a SaaS-hosted application and you may have concerns because you are a public/federal customer. That's no problem. Consider VAC as a trusted source you can copy things from. There is no data stored in the public cloud, nor does anything run up there. Download your packages from this trusted repository over to your air-gapped environment.

Run – Tanzu Kubernetes Grid (TKG)

Your application is ready to be deployed, and the next step in your pipeline is "continuous deployment". We can finally deploy our applications to a Kubernetes cluster.

Tanzu Kubernetes Grid or TKG is VMware’s own consistent and conformant Kubernetes distribution that can run in any cloud. VMware’s strategy is about running the same Kubernetes dial tone across data centers and public cloud, which enables a consistent and secure experience for your developers.

TKG has a tight integration with vSphere called “vSphere with Tanzu”. Since TKG is an enterprise-ready Kubernetes for a multi-cloud infrastructure, it can run also in all major public clouds.

If consistent automation is important to you and you want to run Kubernetes in an air gapped environment, where there is no AWS, Azure or any other major public cloud provider, then a consistent Kubernetes version like TKG would add value to your infrastructure.

Manage/Operate – Tanzu Mission Control (TMC)

How do we manage these applications on any Kubernetes cluster (VMware TKG, Amazon EKS, Microsoft AKS, Google GKE), that can run in any cloud?

Some organizations started with TKG and others already started with managed Kubernetes offerings like EKS, AKS or GKE. That’s not a problem. The question here is how you deploy, manage, operate, and secure all these different clusters.

VMware's solution for that is Tanzu Mission Control, a SaaS-based tool hosted by VMware and the first offering I am going to cover that is part of the global Tanzu control plane. TMC is a solution that makes your multi-cloud and multi-cluster Kubernetes management much easier.

With TMC you’ll get:

  • Centralized Cluster Lifecycle Management. TMC enables automated provisioning and lifecycle management of TKG clusters across any cloud. It provides centralized provisioning, scaling, upgrading and deletion functions for your Kubernetes clusters. Tanzu Mission Control also allows you to attach any CNCF-conformant Kubernetes cluster (K8s on-prem, K8s in public cloud, TKG, EKS, AKS, GKE, OpenShift) to the platform for management, visibility, and analytics purposes. I would expect that in the future we will be able to use TMC to lifecycle-manage offerings like EKS, AKS or GKE as well.
  • Centralized Policy Management. TMC has a very powerful policy engine to apply consistent policies across clusters and clouds. You can create security, access, network, quota, registry, and custom policies (Open Policy Agent framework).
  • Identity and Access Management. Another important feature you don’t want to miss with DevSecOps in mind is centralized authentication and authorization, and identity federation from multiple sources like AD, LDAP and SAML. Make sure you give the right people or project teams the right access to the right resources.
  • Cluster Inspection. There are two types of inspections that you can run against your Kubernetes clusters. TMC leverages the built-in open-source project Sonobuoy, which makes sure your clusters are configured in conformance with the Cloud Native Computing Foundation (CNCF) standards. Tanzu Mission Control provides CIS Benchmark inspection as another option.

Tanzu Mission Control

Tanzu Mission Control integrates with other Tanzu products like Tanzu Observability and Tanzu Service Mesh, which I’m covering later.

Connect – Antrea

VMware Tanzu uses Antrea as the default container network interface (CNI) and Kubernetes NetworkPolicy to provide network connectivity and security for your pods. Antrea is an open-source project with active contributors from Intel, Nvidia/Mellanox and VMware, and it supports multiple operating systems and managed Kubernetes offerings like EKS, AKS or GKE!

Antrea uses Open vSwitch (OvS) as the networking data plane in every Kubernetes node. OvS is a high-performance, programmable virtual switch that supports not only Linux but also Windows. VMware is working to reach feature parity between the two, and they are even working on support for ARM hosts in addition to x86 hosts.

Antrea creates overlay networks using VXLAN or Geneve for encapsulation and encrypts node-to-node communication if needed.
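Because Antrea implements the standard Kubernetes NetworkPolicy API, the policies themselves are plain Kubernetes objects. As a simple illustration (the namespace name is hypothetical), the following policy only allows ingress traffic from pods in the same namespace and blocks everything coming from other namespaces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: my-app                 # hypothetical application namespace
spec:
  podSelector: {}                   # applies to all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}           # only pods from the same namespace may connect
```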

Connect & Secure – NSX Advanced Load Balancer

Ingress is a very important component of Kubernetes and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside of a Kubernetes cluster. So, getting an application up and running is only half of the story. The application still needs a way for users to access it. If you would like to know more about ingress, I can recommend this short introduction video.

While Contour is a great open-source project, VMware recommends Avi (aka NSX Advanced Load Balancer), which provides many more enterprise-grade features like L4 load balancing, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity, and analytics are important to you, then Avi Enterprise, a true software-defined multi-cloud application delivery controller, is definitely the better fit.
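Whichever ingress controller you choose, the routing rules are expressed through the standard Kubernetes Ingress resource. A minimal sketch could look like the following; the host, service name and ingress class are placeholders that depend on your environment and on the controller you installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: avi             # depends on the installed ingress controller
  rules:
    - host: shop.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # existing ClusterIP service for the application
                port:
                  number: 80
```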

 

Secure – Tanzu Service Mesh (TSM)

Let's take a step back and recap what we have achieved so far. We have a standardized and automated application supply chain, with signed container images, that can be deployed on any conformant Kubernetes cluster. We can access the application from outside, and pod-to-pod communication is in place so that applications can talk to each other. So far so good.

Is there maybe another way to stitch these services together or “offload” security from the containers? What if I have microservices or applications running in different clouds, that need to securely communicate with each other?

A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds.

This is the moment when you need to think about the connectivity and communication between your app's microservices. Today, many Kubernetes users do that by implementing a service mesh, and Istio is probably the most widely used open-source project for that.

The thing with service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own. The installation and configuration of Istio is not that easy and takes time. Besides that, Istio is typically tied to a single Kubernetes cluster and therefore a single Istio data plane, and organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a Kubernetes cluster or cloud vendor, which leaves us with silos.

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh, built on top of a VMware-administrated Istio version.

The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

Global Namespaces

A Global Namespace is a unique concept in Tanzu Service Mesh and connects the resources and workloads that form the application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions that are part of it, no matter where they are located:

  • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace
  • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
  • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
  • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS); a minimal upstream-Istio illustration of this follows after the list.
  • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.
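Tanzu Service Mesh takes care of the mTLS enforcement mentioned above across the whole global namespace. For reference only, in plain upstream Istio (which TSM builds on) the per-namespace equivalent would look roughly like this sketch; the namespace name is hypothetical:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app        # hypothetical namespace
spec:
  mtls:
    mode: STRICT           # reject any plain-text traffic between workloads
```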

Monitor – Tanzu Observability (TO)

Another important part of DevSecOps with VMware Tanzu is observability. What happens if something goes wrong? What are you doing when an application is not working anymore as expected? How do you troubleshoot a distributed application, split in microservices, that potentially runs in multiple clouds?

Imagine an application split into different smaller services, each running in a pod, which could be running in a virtual machine on a specific host in your on-premises data center, at the edge, or somewhere in the public cloud.

You need a tool that supports the architecture of a modern application. You need a solution that understands and visualizes cloud native applications.

That’s when VMware suggests Tanzu Observability to provide you observability and deep visibility across your DevSecOps environment.

Tanzu Observability

Tanzu Observability has an integration with Tanzu Mission Control, which has the capability then to install the Wavefront Kubernetes collector on your Kubernetes clusters. The name “Wavefront” comes from the company Wavefront, which VMware acquired in 2017.

Since Tanzu Observability is only offered as a SaaS version, I would like to highlight that it is “secure by design” according to VMware:

  • Isolation of customer data
  • User & Service Account Authentication (SSO, LDAP, SAML)
  • RBAC & Authorization
  • Data encryption at rest and in transit
  • Data at rest is managed by AWS S3 (protected by KMS)
  • Certifications like ISO 27001/27017/27018 or SOC 2 Type 1

Summary – Tanzu Portfolio Capabilities

The container build and deploy process consists of the Spring runtime, Tanzu Application Catalog and Tanzu Build Service.

The global control plane (SaaS) is formed by Tanzu Mission Control, Tanzu Service Mesh and Tanzu Observability.

The networking layer consists of NSX Advanced Load Balancer for ingress & load balancing and uses Antrea for container networking.

The foundation of this architecture is built on VMware’s Kubernetes runtime called Tanzu Kubernetes Grid.

Tanzu Advanced Capabilities

Note: There are other components like Application Transformer or Tanzu SQL (part of Tanzu Data Services), which I haven’t covered in this article.

Secure – Carbon Black Cloud Container

Another solution that might be of interest to you is Carbon Black Container. CB Container also provides the visibility and control that DevSecOps teams need to secure Kubernetes clusters and the applications they deploy on top of them.

This solution provides a container vulnerability & risk dashboard, image scanning, compliance policy scanning, CI/CD integration and integration with Harbor, and it supports any upstream Kubernetes like TKG, EKS, AKS, GKE or OpenShift.

Conclusion

DevSecOps with VMware Tanzu helps you simplify and secure the whole container and application lifecycle. VMware has made some strategic acquisitions (Heptio, Pivotal, Bitnami, Wavefront, Octarine, Avi Networks, Carbon Black) in the past to become a major player in the world of containerization, Kubernetes and application modernization.

I personally believe that VMware’s approach and Tanzu portfolio have a very strong position in the market. Their modular approach and the inclusion of open-source projects is a big differentiator. Tanzu is not just about Kubernetes, it’s about building, securing and managing the applications.

If you have a strong security focus, VMware can cover all the layers up from the hypervisor to the applications that can be deployed in any cloud. That’s the strength and unique value of VMware: A complete and diverse portfolio with products, that provide even more value when combined together.

Don’t forget, that VMware is number 1 when it comes to data center infrastructures and most of the customer workloads are still running on-premises. That’s why I believe that VMware and their Tanzu portfolio are very well positioned.

In case you missed the announcements a few weeks ago, check out Tanzu Application Platform and Tanzu for Kubernetes Operations, which meet the needs of all those who are concerned with DevSecOps!

And if you would like to know more about VMware Tanzu in general, have a look at my “10 Things You Didn’t Know About VMware Tanzu” article.

 

What is Tanzu for Kubernetes Operations?


Updated on March 16, 2022

The customers I worked with last year were large enterprises with a multi-cloud strategy and they have just started their application modernization journey. Typically, VMware customers interested in Tanzu would take a look at the Standard edition first, which gives you:

  • Tanzu Kubernetes Grid Runtime
  • Tanzu Mission Control Standard
  • Avi Essentials (NSX Advanced Load Balancer)
  • Antrea (open-source) for container networking
  • and some other open-source software like Prometheus, Grafana, Fluent Bit, Contour

Tanzu Std vs Adv

A lot of my customers were interested in Tanzu Advanced, but they were asking for something in between these editions. Tanzu Standard sounded very interesting, but almost all of them asked the following questions:

  • What if I don’t build or modernize my own applications yet and get my application as a container from my ISV?
  • Prometheus and Grafana are nice, but I would like to have something more enterprise-ready for observability. How can I get Tanzu Observability?
  • Avi Essentials sounds great, but I am thinking to replace my current load balancer. Is it possible to replace my F5 or Citrix ADC (formerly known as Citrix NetScaler) appliances?
  • Contour seems to be a nice open-source project, but I am looking for something with built-in automation and analytics capabilities for ingress. Can’t I get Avi Enterprise for that as well?
  • I am looking for zero trust application security. How can you help me to encrypt traffic between containers or microservices, which could also be hosted on different clouds (e.g., on-prem and public cloud)?

The answer to these questions is Tanzu for Kubernetes Operations (TKO), a bundle of VMware products and services that meets the requirements of cloud platform teams. It provides centralized, consistent and simplified container management and operations across clouds and currently includes the following products and services:

Important Note: The VMware product guide says that "a Core is a single physical computational unit of the Processor which may be presented as one or more vCPUs". So, if you plan a CPU overcommit of 1:2 (cores:vCPUs) for your on-premises infrastructure, you only have to license half as many cores as you present vCPUs (for example, 12 cores for 24 vCPUs).

TKO Reference Architecture

VMware has released TKO reference architectures for vSphere, AWS and Azure.

Figure 1 - Tanzu for Kubernetes Operations

Use this link to get additional information how to deploy and configure Tanzu Mission Control, Tanzu Observability and Tanzu Service Mesh.

What is Application Transformer for Tanzu?

Application Transformer for VMware Tanzu became generally available in February 2022.

Application Transformer can help you convert virtual machines and application components into OCI-compliant container images, which can then be deployed onto the Tanzu Kubernetes stack.

Tanzu Application Transformer

 

Tanzu App Navigator

Application Transformer helps you analyze and visualize application components and dependencies. It also provides customers with scores that allow them to decide which applications should be transformed.

App Navigator is a 4-to-6 week engagement that helps you to decide which applications you should tackle first and how much change is needed to drive business outcomes. It’s one thing to containerize an application, but App Navigator helps you to create a modernization strategy based on your goals.

Note: VMware’s App Navigator team uses Application Transformer during their service engagement.

Tanzu App Navigator

Tanzu Application Platform

Deploying an application on Kubernetes is not an easy thing if you don’t know anything about Kubernetes.

If you would like to focus more on your applications and your developer’s experience, then Tanzu Application Platform (TAP) could be very interesting for you.

With Tanzu Application Platform, application developers and operations teams can build and deliver a better multi-cloud developer experience on any Kubernetes distribution, including Azure Kubernetes Service, Amazon Elastic Kubernetes Service, Google Kubernetes Engine, as well as software offerings like Tanzu Kubernetes Grid.

VMware is known for reducing complexity and providing cloud-agnostic infrastructures. They started by abstracting the underlying server hardware, then came the virtualization of the whole data center (compute, storage, network), and the next step was the abstraction of public clouds like AWS, Azure and Google.

In the case of Tanzu Application Platform we are talking about an opinionated grouping of separate components that run on any conformant Kubernetes cluster (TKG, AKS, EKS, GKE, OpenShift etc.). From an application developer perspective an application can automatically be built, tested and deployed on Kubernetes.

Tanzu Application Platform

Meaning, with TAP you get a modular application developer PaaS (adPaaS) offering and true application platform portability with the capability of “bring-your-own-Kubernetes”.
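To give you an idea of what that developer experience looks like, a TAP workload is typically described declaratively and handed over to the supply chain (driven by the Cartographer project). The following is a rough sketch under that assumption; the repository URL, names and labels are illustrative and details may differ between TAP versions:

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: my-app
  labels:
    apps.tanzu.vmware.com/workload-type: web       # selects the "web" supply chain
spec:
  source:
    git:
      url: https://github.com/example/my-app.git   # hypothetical source repository
      ref:
        branch: main
```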

 

A Universal License and Technology to Build a Flexible Multi-Cloud


In November 2020 I wrote an article called "VMware Cloud Foundation And The Cloud Management Platform Simply Explained". That piece focused on the "why" and "when" VMware Cloud Foundation (VCF) makes sense for your organization. It also includes business values and hints that VCF is about more than just technology. Cloud Foundation is one of the most important drivers and THE enabler of VMware's multi-cloud strategy.

If you are not familiar enough with VMware’s multi-cloud strategy, then please have a look at my article “VMware Multi-Cloud and Hyperscale Computing” first.

To summarize the two articles mentioned above, one can say that VMware Cloud Foundation is a software-defined data center (SDDC) that can run in any cloud. "Any cloud" means that VCF can also be consumed as a service through other cloud provider partners like:

Additionally, Cloud Foundation and the whole SDDC can be consumed as a managed offering called DCaaS or LCaaS (Data Center / Local Cloud as a service).

Let’s say a customer is convinced that a “VCF everywhere” approach is right for them and starts building up private and public clouds based on VMware’s technologies. This means that VMware Cloud Foundation now runs in their private and public cloud.

Note: This doesn’t mean that the customer cannot use native public cloud workloads and services anymore. They can simply co-exist.

The customer is at a point now where they have achieved a consistent infrastructure. What’s up next? The next logical step is to use the same automation, management and security consoles to achieve consistent operations.

A traditional VMware customer goes for the vRealize Suite now, because they would need vRealize Automation (vRA) for automation and vRealize Operations (vROps) to monitor the infrastructure.

The next topic in this customer's journey would be application modernization, which includes topics like containerization and Kubernetes. VMware's answer for this is the Tanzu portfolio. For the sake of this example, let's go with "Tanzu Standard", which is one of four editions available in the Tanzu portfolio (aka VMware Tanzu).

VMware Cloud Foundation

Let’s have a look at the customer’s bill of materials so far:

  • VMware Cloud Foundation on-premises (vSphere, vSAN, NSX)
  • VMware Cloud on AWS
  • VMware Cloud on Dell EMC (locally managed VCF service for special edge use cases)
  • vRealize Automation
  • vRealize Operations
  • Tanzu Standard (includes Tanzu Kubernetes Grid and Tanzu Mission Control)

Looking at this list above, we see that their infrastructure is equipped with three different VMware Cloud Foundation flavours (on-prem, hyperscaler managed, locally managed) complemented by products of the vRealize Suite and the Tanzu portfolio.

This infrastructure with its different technologies, components and licenses has been built up over the past few years. But organizations are nowadays asking for more flexibility than ever. By flexibility I mean license portability and a subscription model.

VMware Cloud Universal

On 31 March 2021, VMware introduced VMware Cloud Universal (VMCU). VMCU is the answer to making the customer's life easier, because it gives you the choice and flexibility of which clouds you want to run your infrastructure in and lets you consume VMware Cloud offerings as needed. It even allows you to convert existing on-premises VCF licenses to a VCF subscription license.

The VMCU program includes the following technologies and licenses:

  • VMware Cloud Foundation Subscription
  • VMware Cloud on AWS
  • Google Cloud VMware Engine
  • Azure VMware Solution
  • VMware Cloud on Dell EMC
  • vRealize Cloud Universal Enterprise Plus
  • Tanzu Standard Edition
  • VMware Success 360 (S360 is required with VMCU)

VMware Cloud Console

As Kit Kolbert, CTO VMware, said, “the idea is that VMware Cloud is everywhere that you want your applications to be”.

The VMware Cloud Console gives you a view into all those different locations. You can quickly see what's going on with a specific site or cloud landing zone, what its overall utilization looks like, or whether issues occur.

The Cloud Console has a seamless integration with vROps, which also helps you regarding capacity forecasting and (future) requirements (e.g., do I have enough capacity to meet my future demand?).

VMware Cloud Console

In short, it’s the central multi-cloud console to manage your global VMware Cloud environment.

vRealize Cloud Universal

What is part of vRealize Cloud Universal (vRCU) Enterprise Plus? vRCU is a SaaS management suite that combines on-premises and SaaS capabilities for automation, operations, log analytics and network visibility into a single offering. In other words, you get to decide where you want to deploy your management and operations tools. vRealize Cloud Universal comes in four editions and in VMCU you have the vRCU Enterprise Plus edition included with the following components:

vRealize Cloud Universal Editions

    Note: While vRCU standard, advanced and enterprise are sold as standalone editions today, the enterprise plus edition is only sold with VMCU (and as add-on to VMC on AWS).

    vRealize AI Cloud

Have you ever heard of Project Magna? It was announced at VMworld 2019 and provides adaptive optimization and a self-tuning engine for your data center. It was Pat Gelsinger who envisioned a so-called "self-driving data center". Intelligence-driven data center might have been a better term, since Project Magna leverages artificial intelligence by using reinforcement learning, which combs through your data and runs thousands of scenarios searching for the best reward output based on trial and error on the Magna SaaS analytics engine.

    The first instantiation began with vSAN (today also known as vRAI Cloud vSAN Optimizer), where Magna will collect data, learn from it, and make decisions that will automatically self-tune your infrastructure to drive greater performance and efficiencies.

    Today, this SaaS service is called vRealize AI Cloud.

vRealize AI Cloud vSAN

vRealize AI (vRAI) learns about your operating environments and application demands and adapts to changing dynamics, ensuring optimization per stated KPI. vRAI Cloud is only available on vRealize Operations Cloud via the vRealize Cloud Universal subscription.

    VMware Skyline

VMware Skyline is a support service that automatically collects, aggregates, and analyzes product usage data, proactively identifies potential problems, and helps VMware support engineers improve resolution times. Skyline is included in vRealize Cloud Universal because it just makes sense. A lot of customers have asked for a unified self-service experience between Skyline and vRealize Operations Cloud, and many customers are using Skyline and vROps side by side today.

    Users can now be proactive and perform troubleshooting in a single SaaS workflow. This means customers save more time by automating Skyline proactive remediations in vROps Cloud. But Skyline supports vSphere, vSAN, NSX, vRA, VCF and VMware Horizon as well.

    VMware Cloud Universal Use Cases

    As already mentioned, VMCU makes very much sense if you are building a hybrid or multi-cloud architecture with a consistent (VMware) infrastructure. VMCU, vRCU and the Tanzu portfolio help you to create a unified control plane for your cloud infrastructure.

Other use cases could be cloud migration or cloud bursting scenarios. If we switch back to the fictional customer from before, we could use VMCU to convert existing VCF licenses to VCF-S (subscription) licenses, which in the end allows you to build a VMware-based cloud on top of AWS (other public cloud providers are coming very soon!), for example.

    Another good example is to achieve the same service and operating model on-prem as in the public cloud: a fully managed consumable infrastructure. Meaning, to move from a self-built and self-managed VCF infrastructure to something like VMC on Dell EMC.

    How can I get VMCU?

    There is no monthly subscription model and VMware only supports one-year or three-year terms. Customers will need to sign an Enterprise License Agreement (ELA) and purchase VMCU SPP credits.

    Note: SPP credits purchased out of the program are not allowed to be used within the VMCU program!

    After purchasing the VMCU SPP credits and VMware Cloud onboarding and organization setup, you can select the infrastructure offerings to consume your SPP credits. This can be done via the VMware Cloud Console.

    Summary

    I hope this article was useful to get a better understanding about VMware Cloud Universal. It might seem a little bit complex, but that’s not true. VMCU makes your life easier and helps you to build and license a globally distributed cloud infrastructure based on VMware technology.

    VCF Subscription

     

     

     

    The Rise of VMware Tanzu Service Mesh


    My last article focused on application modernization and data portability in a multi-cloud world. I explained the value of the VMware Tanzu portfolio by mentioning a consistent infrastructure and consistent application platform approach, which ultimately delivers a consistent developer experience. I also dedicated a short section about Tanzu Service Mesh, which is only one part of the unified Tanzu control plane (besides Tanzu Mission Control and Tanzu Observability) for multiple Kubernetes clusters and clouds.

When you hear or see someone writing about TSM, you very soon get to the point where the so-called "Global Namespaces" (GNS) are mentioned, which have the magic power to stitch together hybrid applications that run in multiple clouds.

    Believe me when I say that Tanzu Service Mesh (TSM) is rising and becoming the next superstar of the VMware portfolio. I think Joe Baguley would agree here. 😀

    Namespaces

    Before we start talking about Tanzu Service Mesh and the magical power of Global Namespaces, let us have a look at the term “Namespaces” first.

    Kubernetes Namespace

Namespaces give you a way to organize clusters into virtually carved-out sub-clusters, which can be helpful when different teams, tenants or projects share the same Kubernetes cluster. This form of namespace provides a method to better share resources, because it ensures fair allocation of these resources with the right permissions.

So, using namespaces gives you a form of isolation, so that developers never affect other project teams. Policies allow you to configure compute resources by defining resource quotas for CPU or memory utilization. This also ensures the performance of a specific namespace, its resources (pods, services etc.) and the Kubernetes cluster in general.
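For illustration, such a quota is just a small Kubernetes object scoped to the namespace; the values and the namespace name below are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # hypothetical team namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # maximum number of pods in the namespace
```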

    Although namespaces are separate from each other, they can communicate with each other. Network policies can be configured to create isolated and non-isolated pods. For example, a network policy can allow or deny all traffic coming from other namespaces.

Ellei Mei explained this in a very easy way in her article after Project Pacific had been made public in September 2019:

    Think of a farmer who divides their field (cluster + cluster resources) into fenced-off smaller fields (namespaces) for different herds of animals. The cows in one fenced field, horses in another, sheep in another, etc. The farmer would be like operations defining these namespaces, and the animals would be like developer teams, allowed to do whatever they do within the boundaries they are allocated.

    vSphere Namespace

The first time I heard of Kubernetes or vSphere Namespaces was in fact at VMworld 2019 in Barcelona. VMware presented a new app-focused management concept there. This concept described a way to model a modern application and all its parts, and we call this a vSphere Namespace today.

With Project Pacific (today known as vSphere with Tanzu or Tanzu Kubernetes Grid), VMware went one step further and extended the Kubernetes namespace by adding more options for compute resource allocation, vMotion, encryption, high availability, backup & restore, and snapshots.

    Rather than having to deal with each namespace and its containers, vSphere Namespaces (also called “guardrails” sometimes) can draw a line around the whole application and services including virtual machines.

    Namespaces as the unit of management

    With the re-architecture of vSphere and the integration of Kubernetes as its control plane, namespaces can be seen as the new unit of management.

Imagine that you might have thousands of VMs in your vCenter inventory that you need to deal with. After you group those VMs into their logical applications, you may only have to deal with dozens of namespaces.

    If you need to turn on encryption for an application, you can just click a button on the namespace in vCenter and it does it for you. You don’t need to deal with individual VMs anymore.

    vSphere Virtual Machine Service

    With the vSphere 7 Update 2a release, VMware provided the “VM Service” that enables Kubernetes-native provisioning and management of VMs.

For many organizations, legacy applications do not become modern overnight; they become hybrid first before they are completely modernized. This means we have a combination of containers and virtual machines forming the application, not containers only. I also call this a hybrid application architecture in front of my customers. For example, you may have a containerized application that uses a database hosted in a separate VM.

    So, developers can use the existing Kubernetes API and a declarative approach to create VMs. No need to open a ticket anymore to request a virtual machine. We talk self-service here.
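A rough sketch of what such a declarative VM request can look like with the VM Service is shown below. The class, image and storage class names are placeholders defined by the vSphere admin, and field names can vary slightly between vSphere releases:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: db-vm
  namespace: my-app                  # hypothetical vSphere Namespace
spec:
  className: best-effort-small       # VM class published by the vSphere admin
  imageName: ubuntu-20.04            # image from the associated content library
  storageClass: vsan-default-policy  # storage policy exposed as a storage class
  powerState: poweredOn
```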

    Tanzu Mission Control – Namespace Management

    Tanzu Mission Control (TMC) is a VMware Cloud (SaaS) service that provides a single control point for multiple teams to remove the complexities from managing Kubernetes cluster across multiple clouds.

    One of the ways to organize and view your Kubernetes resources with TMC is by the creation of “Workspaces”.

Workspaces allow you to organize your namespaces into logical groups across clusters, which helps simplify management by applying policies at a group level. For example, you could apply an access policy to an entire group of clusters (from multiple clouds) rather than creating separate policies for each individual cluster.

    Think about backup and restore for a moment. TMC and the concept of workspaces allow you to back up and restore data resources in your Kubernetes clusters on a namespace level.

    Management and operations with a new application view!

    FYI, VMware announced the integration of Tanzu Mission Control and Tanzu Service Mesh in December 2020.

    Service Mesh

    A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds. 

    This is the moment when you have to think about the connectivity and communication between your app’s microservices.

    One of the main ideas and features behind a service mesh was to provide service-to-service communication for distributed applications running in multiple Kubernetes clusters hosted in different private or public clouds.

    The number of Kubernetes service meshes has rapidly increased over the last few years, and the topic has gotten a lot of hype. No wonder there are several service mesh offerings around:

    • Istio
    • Linkerd
    • Consul
    • AWS App Mesh
    • OpenShift Service Mesh by Red Hat
    • Open Service Mesh AKS add-on (currently preview on Azure)

    Istio is probably the most famous one on this list. For me, it is definitely the one my customers look at and talk about the most.

    Service mesh brings a new level of connectivity between services. With service mesh, we inject a proxy in front of each service; in Istio, for example, this is done using a “sidecar” within the pod.

    Istio’s architecture is divided into a data plane based on Envoy (the sidecar) and a control plane that manages the proxies. With Istio, you inject the proxies into all the Kubernetes pods in the mesh.

    As you can see in the image, the proxy sits in front of each microservice and all communication passes through it. When one proxy talks to another proxy, we have a service mesh. Proxies also handle traffic management, errors and failures (retries), and collect metrics for observability purposes.
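    To illustrate how this injection is typically switched on in plain Istio, the sketch below labels a Kubernetes namespace with istio-injection=enabled so that Istio's admission webhook automatically adds the Envoy sidecar to every new pod in that namespace. The namespace name is just an example.

        # Sketch: enable automatic Envoy sidecar injection for one namespace.
        # Istio watches for the "istio-injection=enabled" label and injects the
        # sidecar into every pod created in that namespace afterwards.
        from kubernetes import client, config

        config.load_kube_config()

        client.CoreV1Api().patch_namespace(
            name="demo-namespace",  # illustrative namespace name
            body={"metadata": {"labels": {"istio-injection": "enabled"}}},
        )

    Note that pods which already exist only get the sidecar after they are restarted, which hints at the operational details that make running Istio yourself less trivial than it looks.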

    Challenges with Service Mesh

    The thing with service mesh is that, while everyone agrees it sounds great, it brings new challenges of its own.

    The installation and configuration of Istio is not that easy, and it takes time. Besides that, Istio is typically tied to a single Kubernetes cluster and therefore to a single Istio data plane, while organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a Kubernetes cluster or cloud vendor, which results in silos.

    Istio supports a so-called multi-cluster deployment with one service mesh stretched across Kubernetes clusters, but you’ll end up with a stretched Istio control plane, which eliminates the independence of each cluster.

    So, a lot of customers are also asking for better and easier manageability without dependencies between clouds and between Kubernetes clusters from different vendors.

    That’s the moment when Tanzu Service Mesh becomes very interesting. 🙂

    Tanzu Service Mesh (formerly known as NSX Service Mesh)

    Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh on top of a VMware-administered version of Istio.

    When onboarding a new cluster onto Tanzu Service Mesh, the service deploys a curated version of Istio that is signed and supported by VMware. This Istio deployment is the same as upstream Istio in every way, but it also includes an agent that communicates with the Tanzu Service Mesh global control plane. Installing Istio yourself is not the most intuitive task, but the onboarding process of Tanzu Service Mesh simplifies it significantly.

    Overview of Tanzu Service Mesh

    The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

    Global Namespaces (GNS)

    Yep, another kind of namespace, but the most exciting one! 🙂

    A Global Namespace is a unique concept in Tanzu Service Mesh that connects the resources and workloads that form an application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions for the resources that are part of it, no matter where they are located:

    • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace.
    • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
    • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
    • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security (mTLS) authentication; a sketch of the equivalent upstream Istio policy follows after this list.
    • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.
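    For comparison, enforcing strict mTLS in upstream Istio is done per cluster or namespace with a PeerAuthentication resource, roughly as sketched below; in Tanzu Service Mesh, the Global Namespace applies the equivalent policy across all member clusters for you. The namespace name is illustrative.

        # Sketch: strict mTLS in upstream Istio via a PeerAuthentication resource.
        # With Tanzu Service Mesh, the Global Namespace applies the equivalent
        # policy across clusters instead of per-cluster manifests like this one.
        from kubernetes import client, config

        config.load_kube_config()

        peer_auth = {
            "apiVersion": "security.istio.io/v1beta1",
            "kind": "PeerAuthentication",
            "metadata": {"name": "default", "namespace": "demo-namespace"},
            "spec": {"mtls": {"mode": "STRICT"}},  # reject any plain-text traffic
        }

        client.CustomObjectsApi().create_namespaced_custom_object(
            group="security.istio.io",
            version="v1beta1",
            namespace="demo-namespace",
            plural="peerauthentications",
            body=peer_auth,
        )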

    Use Cases

    The following diagram represents the global namespace concept and other pieces in a high-level architectural view. The components of one application are distributed in two different Kubernetes clusters: one of them is on-premises and the other in a public cloud. The Global Namespace creates a logical view of these application components and provides a set of basic services for the components.

    Global Namespaces

    If we take application continuity as another use case, we would deploy an app in more than one cluster, possibly in a remote region for disaster recovery (DR), with a load balancer between the locations to direct traffic to both clusters. This would be an active-active scenario. With Tanzu Service Mesh, you could group the clusters into a Global Namespace and program it to automatically redirect traffic in case of a failure.

    In addition to multi-zone and multi-region high availability and disaster recovery, you can also provide resiliency with automated scaling based on defined service-level objectives (SLOs) for multi-cloud apps.

    VMware Modern Apps Connectivity Solution  

    In May 2021, VMware introduced a new solution that brings together the capabilities of Tanzu Service Mesh and NSX Advanced Load Balancer (NSX ALB, formerly Avi Networks), not only for containers but also for VMs. While Istio’s Envoy proxies focus primarily on layer 7, VMware provides layer 4 to layer 7 services with NSX (part of TSM) and NSX ALB, which include L4 load balancing, ingress controllers, GSLB, WAF, and end-to-end service visibility.

    This solution speeds the path to app modernization with connectivity and better security across hybrid environments and hybrid app architectures.

    Multiple disjointed products, no end-to-end observability

    Summary

    One thing I can say for sure: The future for Tanzu Service Mesh is bright!

    Many customers are looking for ways to offload security (encryption, authentication, authorization) from the application to a service mesh.

    One great example and use case from the financial services industry is crypto agility, where a “crypto service mesh” (a specialized service mesh) could be part of a new architecture that provides quantum-safe certificates.

    And when we offload encryption, calculation, authentication, and so on, we may find further use cases for SmartNICs and Project Monterey.

    To learn more about service mesh and the capabilities of Tanzu Service Mesh, I can recommend Service Mesh for Dummies, written by Niran Even-Chen, Oren Penso, and Susan Wu.

    Thank you for reading!