VCP Application Modernization 2022 Exam Review


It’s a PASS! I can call myself a VMware Certified Professional in Application Modernization (VCP-AM) now. 

Some of you have probably already seen it on Twitter or LinkedIn (on January 20, 2022): my goal for 2022 is to pass the following exams, even though I haven’t got hands-on experience with most of the related products:

  1. VCP Application Modernization (VCP AM, 2V0-71.21)
  2. VCP Network Virtualization (VCP-NV)
  3. VCAP Data Center Virtualization Design (VCAP-DCV Design)

Why? Because I would like to achieve my 4th and 5th VCP and my 2nd VCAP certification. Last year I was very busy with my family, work and blogging. In 2022 I would like to achieve new certifications which are outside of my comfort zone and practical knowledge domain.

So, how did I pass the VCP-AM only three days later?

First, I rescheduled my exam date from March 3rd to February 3rd. This would have left me two weeks of preparation, and I really thought I wouldn’t pass it on the first attempt. But I said to myself that it is okay to fail the exam and to try to memorize the questions I struggled with the most.

Then I said: “I don’t need to wait for two weeks to figure that out! Let’s reschedule the exam once more and give it a try on Sunday (January 23rd).” This was the same approach I took back in 2018 when I was going for the VCAP-DTM Design exam (but there I missed a few points and failed).

Well…This time it worked out! 😀

About the VCP-AM Exam

The certification validates your expertise with the VMware Tanzu Standard edition including vSphere with Tanzu, Tanzu Kubernetes Grid (TKG) and Tanzu Mission Control (TMC). If you look at the exam guide then you can see that this exam also checks your fundamental cloud native skills including containerization, application modernization and Kubernetes.

The VCP Application Modernization exam consists of 55 questions and must be completed within 130 minutes.

Certification Path

I hold three different VCP certifications (VCP-DCV, VCP-DTM, VCP-DW) so my steps look like this:

  1. (Recommended) Gain experience with VMware Tanzu
  2. (Recommended) Attend ONE of the training courses
  3. (Required) Pass the Professional VMware Application Modernization exam

Preparing for the Exam

The first step is about gaining experience with VMware Tanzu. Let me quickly list what I’ve done over the past two years:

  • I was one of the first public speakers in Switzerland to present Project Pacific (today known as vSphere with Tanzu or TKGS) and also blogged about it.
  • I think it was Q1 2020 when Project Pacific became available as an early private beta and was then released as an official beta program. I was lucky enough to get access to the installer files to enable “Workload Management” in my home lab. 🙂
  • The year 2020 was about understanding Tanzu, its components, the different flavours (TKG aka TKGm, TKGS, TKGi) and editions.
  • In 2021 I ran two vSphere with Tanzu Standard PoCs with the help of some VMware Tanzu specialists. This was the time when I learned the most, since the customers were very demanding and requested a lot of technical information. I also did a lot of blogging.

I think with this experience I had already completed the first step without realizing at first how much experience I had gained as a VMware employee!

If you have a home lab and can work (almost) daily with the Tanzu Standard products including Tanzu Mission Control, then I believe you have a very good chance of passing the VCP Application Modernization exam as well. The exam guide says the following:

The minimally qualified candidate (MQC) is recommended to have at least 6 to 12 months of experience. The MQC can understand and describe standard features and capabilities of VMware Tanzu Standard Edition components, including VMware vSphere with Tanzu, VMware Tanzu Kubernetes Grid, and VMware Tanzu Mission Control. The MQC can explain the VMware Tanzu vision and has hands-on experience with containerization, Kubernetes, and application modernization concepts. The MQC can perform common operational and administrative tasks in an environment where Tanzu Standard Edition components are present, including basic troubleshooting and repairing of TKG clusters. The MQC can identify the primary components and features of Tanzu Mission Control, including but not limited to cluster lifecycle management, role-based access controls, security policies, cluster inspections, and data protection. In addition, the MQC can understand and describe the steps for installation and setup of Tanzu Standard Edition components.

The MQC works in environments where VMware Tanzu Kubernetes Grid, vSphere with Tanzu, and Tanzu Mission Control are used in production. The MQC can perform troubleshooting and repairing of TKG and TKC clusters. The MQC can identify the primary components and features of Tanzu Mission Control, including but not limited to cluster lifecycle management, role-based access controls, security policies, cluster inspections, and data protection. The MQC can perform day-2 operations related to TKG and TKC clusters. A successful candidate has a good grasp of the topics included in the exam blueprint.

Step 2 – Attend ONE of the Training Courses

While it’s not mandatory, it is still recommended to attend at least one of the official training courses:

I recommend booking a training course (if you need one) that includes TMC, because people may have difficulties getting access to the Tanzu Mission Control cloud service. vSphere with Tanzu and Tanzu Kubernetes Grid are products that you can easily get access to and evaluate. Don’t forget to browse through the application modernization hands-on labs (HOLs)!

The most important resources for me were:

Additional Resources

Besides the official training courses and HOLs, you can also get helpful and valuable information from these awesome bloggers:

Please let me know if you found or used any other helpful resources which should be mentioned. Thanks.

Good Luck! 🙂

 

10 Things You Didn’t Know About VMware Tanzu


Updated on March 16, 2022

While I was working with one of the largest companies in the world during the past year, I learned a lot about VMware Tanzu and NSX Advanced Load Balancer (formerly known as Avi). Application modernization and the containerization of applications are very complex topics.

Customers are looking for ways to “free” their apps from infrastructure and want to go cloud-native by using/building microservices, containers and Kubernetes. VMware has a large portfolio to support you on your application modernization journey: the Tanzu portfolio. A lot of people still believe that Tanzu is a product – it’s not. Tanzu is more than just a Kubernetes runtime, and as soon as people like me from VMware explain the capabilities and possibilities of Tanzu to you, one tends to become overwhelmed at first.

Why? VMware’s mission has always been to abstract things and make them easier for you, but this doesn’t mean you can skip the many questions and topics that should be discussed:

  • Where should your containers and microservices run?
  • Do you have a multi-cloud strategy?
  • How do you want to manage your Kubernetes clusters?
  • How do you build your container images?
  • How do you secure the whole application supply chain?
  • Have you thought about vulnerability scanning for the components you use to build the containers?
  • What kind of policies would you like to set on application, network and storage level?
  • Do you need persistent storage for your containers?
  • Should it be a vSphere platform only or are you also looking at AKS, EKS, GKE etc.?
  • How are you planning to automate and configure “things”?
  • Which kind of databases or data services do you use?
  • Have you already got a tool for observability?

With these kinds of questions, you and I would figure out together which Tanzu edition makes the most sense for you. Looking at the VMware Tanzu website, you’ll find four different Tanzu editions:

VMware Tanzu Editions

If you click on one of the editions, you get the possibility to compare them:

Tanzu Editions Comparison

Based on the capabilities listed above, customers would like to know the differences between Tanzu Standard and Advanced. Believe me, there is a lot of information I can share with you to make your life easier and to understand the Tanzu portfolio better. 🙂

1) VMware Tanzu Standard and Advanced Features and Components

Let’s start looking at the different capabilities and components that come with Tanzu Standard and Advanced:

Tanzu Std vs Adv

Tanzu Standard focuses very much on Kubernetes multi-cloud and multi-cluster management (Tanzu Kubernetes Grid with Tanzu Mission Control aka TMC), while Tanzu Advanced adds a lot of capabilities for building your applications (Tanzu Application Catalog, Tanzu Build Service).

2) Tanzu Mission Control Standard and Advanced

Maybe you missed it in the screenshot before. Tanzu Standard comes with Tanzu Mission Control Standard, Tanzu Advanced is equipped with Tanzu Mission Control Advanced.

Note: Announced at VMworld 2021, there is now even a third edition called Tanzu Mission Control Essentials, that was specifically made for VMware Cloud offerings such as VMC on AWS.

I must mention here, that you could leverage the “free tier” of Tanzu Mission Control called TMC Starter. It can be combined with the Tanzu Community Edition (also free) for example or with existing clusters from other providers (AKS, GKE, EKS).

What’s the difference between TMC Standard and Advanced? Let’s check the TMC feature comparison chart:

  • TMC Adv provides “custom roles”
  • TMC Adv lets you configure more policies (security policies – custom, image policies, networking policies, quota policies, custom policies, policy insights)
  • With Tanzu Mission Control Advanced you also get “CIS Benchmark inspections”

What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced? Let me answer that question a little bit later. 🙂

3) NSX Advanced Load Balancer Essentials vs. Enterprise (aka Avi Essentials vs. Enterprise)

Yes, there are also different NSX ALB editions included in Tanzu Standard and Advanced. The NSX ALB Essentials edition is not something that you can buy separately, and it’s only included in the Tanzu Standard edition.

The enterprise edition of NSX ALB is part of Tanzu Advanced but it can also be bought as a standalone product.

Here are the capabilities and differences between NSX ALB Essentials and Enterprise:

NSX ALB Essentials vs. Enterprise

So, the Avi Enterprise edition provides a fully-featured version of NSX Advanced Load Balancer while Avi Essentials only provides L4 LB services for Tanzu.

Note: Customers can create as many NSX ALB / Avi Service Engines (SEs) as required with the Essentials edition and you still have the possibility to set up a 3-node NSX ALB controller cluster.

Important: It is not possible to mix NSX ALB controllers from the Essentials and Enterprise editions. This means that a customer that has NSX ALB Essentials included in Tanzu Standard, and another department using NSX ALB Enterprise for another use case, needs to run separate controller clusters. While the controllers don’t cost you anything, there is obviously some additional compute footprint coming with this constraint.

FYI, there is also a cloud-managed option for the Avi Controllers with Avi SaaS.

What if I want the complete feature set of NSX ALB Enterprise? Let’s put this question also aside for a moment.

4) Container Ingress with Contour vs. NSX ALB Enterprise

Ingress is a very important component of Kubernetes and lets you configure how an application can or should be accessed. It is a set of routing rules that describe how traffic is routed to an application inside of a Kubernetes cluster. So, getting an application up and running is only half of the story. The application still needs a way for users to access it. If you would like to know more about “ingress”, I can recommend this short introduction video.

While Contour is a great open-source project, Avi provides much more enterprise-grade features like L4 LB, L7 ingress, security/WAF, GSLB and analytics. If stability, enterprise support, resiliency, automation, elasticity and analytics are important to you, then Avi Enterprise is definitely the better fit.

To keep it simple: If you are already thinking about NSX ALB Enterprise, then you could use it for K8s Ingress/LB and so many other use cases and services! 🙂

5) Observability with Grafana/Prometheus vs. Tanzu Observability

I recently wrote a blog about “modern application monitoring with VMware Tanzu and vRealize“. This article could give you a better understanding if you want to get started with open-source software or something like Tanzu Observability, which provides much more enterprise-grade features. Tanzu Observability is considered to be a fast-moving leader according to the GigaOm Cloud Observability Report.

What if I still want Tanzu Standard only but would like to have Tanzu Observability as well? Let’s park this question as well for another minute.

6) Open-Source Project Support by VMware Tanzu

The Tanzu Standard edition comes with a lot of leading open-source technologies from the Kubernetes ecosystem. There is Harbor for container registry, Contour for ingress, Grafana and Prometheus for monitoring, Velero for backup and recovery, Fluentbit for logging, Antrea and Calico for container networking, Sonobuoy for conformance testing and Cluster API for cluster lifecycle management.

VMware Open-Source Projects

VMware is actively contributing to these open-source projects and still wants to give customers the flexibility and choice to use and integrate them wherever and whenever they see fit. But how are these open-source projects supported by VMware? To answer this, we can have a look at the Tanzu Toolkit (included in Tanzu Standard and Advanced):

  • Tanzu Toolkit includes enterprise-level support for Harbor, Velero, Contour, and Sonobuoy
  • Tanzu Toolkit provides advisory—or best effort—guidance on Prometheus, Grafana, and Alertmanager for use with Tanzu Kubernetes Grid. Installation, upgrade, initial tooling configuration, and bug fixes are beyond the current scope of VMware’s advisory support.

7) Tanzu Editions Licensing

There are two options for licensing your Tanzu deployments:

  • Per CPU Licensing – Mostly used for on-prem deployments or where standalone installations are planned (dedicated workload domain with VCF). Tanzu Standard is included in all the regular VMware Cloud Foundation editions.
  • Per Core Licensing – For non-standalone on-prem and public cloud deployments, you should license Tanzu Standard and Advanced based on the number of cores used by the worker nodes (including control plane VMs) and management nodes delivering K8s clusters. Constructs such as “vCPUs”, “virtual CPUs” and “virtual cores” are proxies (other names) for CPU cores.

Tanzu Advanced is sold as a “pack” of software and VMware Cloud service offerings. Each purchased pack of Tanzu Advanced equals 20 cores. Example of 1 pack:

  • Spring Runtime: 20 cores
  • Tanzu Application Catalog: 20 cores
  • Tanzu SQL: 1 core (part of Tanzu Data Services)
  • Tanzu Build Service: 20 cores
  • Tanzu Observability: 160 PPS (sufficient to collect metrics for the infrastructure)
  • Tanzu Mission Control Advanced: 20 cores
  • Tanzu Service Mesh Advanced: 20 cores
  • NSX ALB Enterprise: 1 CPU = 1/4 Avi Service Core
  • Tanzu Standard Runtime: 20 cores
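As a rough sketch, the per-pack entitlements above scale linearly with the number of packs purchased. The helper below is hypothetical and simply multiplies the quoted figures (the NSX ALB Enterprise ratio is left out because it is expressed per CPU rather than as a per-pack amount):

```python
# Hypothetical helper: scale the quoted Tanzu Advanced per-pack entitlements.
# Units differ per component (cores vs. PPS), so they are carried along.
PACK_ENTITLEMENTS = {
    "Spring Runtime": (20, "cores"),
    "Tanzu Application Catalog": (20, "cores"),
    "Tanzu SQL": (1, "core"),
    "Tanzu Build Service": (20, "cores"),
    "Tanzu Observability": (160, "PPS"),
    "Tanzu Mission Control Advanced": (20, "cores"),
    "Tanzu Service Mesh Advanced": (20, "cores"),
    "Tanzu Standard Runtime": (20, "cores"),
}

def entitlements(packs):
    """Return the total entitlement per component for a number of packs."""
    return {name: (amount * packs, unit)
            for name, (amount, unit) in PACK_ENTITLEMENTS.items()}

for name, (amount, unit) in entitlements(3).items():
    print(f"{name}: {amount} {unit}")
```

For three packs, for example, this yields 480 PPS of Tanzu Observability and 60 cores of Tanzu Standard Runtime.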

If you need more details about these subscription licenses, please consult the VMware Product Guide (starting from page 37).

As you can see, a lot of components (I didn’t even list all of them) form the Tanzu Advanced edition. The calculation, planning and sizing for the different components require multiple discussions with your Tanzu specialist from VMware.

8) Tanzu Standard Sizing

Disclaimer – This sizing is based on my current understanding, and it is always recommended to do a proper sizing with your Tanzu specialists / consultants.

So, we have learnt before that Tanzu Standard licensing is based on cores, which are “used by the worker and management nodes delivering K8s clusters”.

As you may already know, the so-called “Supervisor Cluster” is currently formed by three control plane VMs. Looking at the validated design for Tanzu for VMware Cloud Foundation workload domains, one can also get a better understanding of the Tanzu Standard runtime sizing for vSphere-only environments.

The three Supervisor Cluster (management nodes) VMs have each 4 vCPUs – this means in total 12 vCPUs.

The three Tanzu Kubernetes Cluster control plane VMs have each 2 vCPUs – this means in total 6 vCPUs.

The three Tanzu Kubernetes Cluster worker nodes (small size) have each 2 vCPUs – this means in total 6 vCPUs.

My conclusion here is that you need to license at least 24 vCPUs to get started with Tanzu Standard.

Important Note: The VMware product guide says that “a Core is a single physical computational unit of the Processor which may be presented as one or more vCPUs“. If you are planning a CPU overcommit of 1:2 (cores:vCPUs) for your on-premises infrastructure, then you only have to license 12 cores.
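The sizing arithmetic above can be sketched as a small calculation. The node counts and vCPU sizes are the ones quoted from the validated design; treat this as an illustration, not official sizing guidance:

```python
# Tanzu Standard minimum-footprint sketch, using the node sizes quoted above.
SUPERVISOR_VMS, SUPERVISOR_VCPUS = 3, 4    # Supervisor Cluster control plane
TKC_CONTROL_VMS, TKC_CONTROL_VCPUS = 3, 2  # TKC control plane
TKC_WORKER_VMS, TKC_WORKER_VCPUS = 3, 2    # TKC workers (small size)

def total_vcpus():
    return (SUPERVISOR_VMS * SUPERVISOR_VCPUS
            + TKC_CONTROL_VMS * TKC_CONTROL_VCPUS
            + TKC_WORKER_VMS * TKC_WORKER_VCPUS)

def licensed_cores(vcpus, vcpus_per_core=1):
    """Cores to license, given a cores:vCPU overcommit ratio (rounded up)."""
    return -(-vcpus // vcpus_per_core)  # ceiling division

print(total_vcpus())                                    # 24 vCPUs
print(licensed_cores(total_vcpus(), vcpus_per_core=2))  # 12 cores at 1:2
```

With no overcommit (1:1), the 24 vCPUs map directly to 24 licensed cores; at 1:2, they map to 12.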

Caution: William Lam wrote about the possibility to deploy single or dual node Supervisor Cluster control plane VMs. It is technically possible to reduce the number of control plane VMs, but it is not officially supported by VMware. We need to wait until this feature becomes available in the future.

It would be very beneficial for customers with a lot of edge locations or smaller locations in general. If you can reduce the Supervisor Cluster down to two control plane VMs only, the initial deployment size would only need 14 vCPUs (cores).

9) NSX Advanced Load Balancer Sizing and Licensing

General licensing instructions for Avi aka NSX ALB (Enterprise) can be found here.

NSX ALB is licensed based on the cores consumed by the Avi Service Engines. As already mentioned, you won’t be charged for the Avi Controllers, and it is possible to add new licenses to the ALB Controller at any time. Avi Enterprise licensing is based on so-called Service Cores: one vCPU or core equals one Service Core.

Avi as a standalone product has only one edition, the fully-featured Enterprise edition. Depending on your needs and the features (LB, GSLB, WAF, analytics, K8s ingress, throughput, SSL TPS etc.) you use, you’ll calculate the necessary amount of Service Cores.

It is possible to calculate and assign more or less than 1 Service Core per Avi Service Engine:

  • 25 Mbps throughput (bandwidth) = 0.4 Service Cores
  • 200 Mbps throughput = 0.7 Service Cores

Example: A customer wants to deploy 10 Service Engines with 25 Mbps and 4 Service Engines with 200 Mbps. These numbers map to 10 × 0.4 Service Cores + 4 × 0.7 Service Cores, which gives us a total of 6.8 Service Cores. In this case you would buy 7 Service Cores.
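The example boils down to a weighted sum rounded up to whole Service Cores. A minimal sketch, assuming the bandwidth-to-Service-Core ratios quoted in the text:

```python
import math

# Mbps throughput tier -> Service Cores per Service Engine (quoted ratios).
CORES_PER_SE = {25: 0.4, 200: 0.7}

def service_cores(deployment):
    """deployment maps an SE throughput tier (Mbps) to the number of SEs."""
    raw = sum(count * CORES_PER_SE[mbps] for mbps, count in deployment.items())
    return math.ceil(raw)  # licenses are bought in whole Service Cores

# 10 SEs at 25 Mbps and 4 SEs at 200 Mbps -> 4.0 + 2.8 = 6.8 -> 7
print(service_cores({25: 10, 200: 4}))
```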

10) Tanzu for Kubernetes Operations (TKO)

Now it’s time to answer the questions we parked before:

  • What if I want Tanzu Standard (Kubernetes runtime with Tanzu Mission Control and some open-source software) but not the complete feature set of Tanzu Mission Control Advanced?
  • What if I want the complete feature set of NSX ALB Enterprise?
  • What if I still want Tanzu Standard only but would like to have Tanzu Observability as well?

The answer to all of these questions is Tanzu for Kubernetes Operations (TKO)!

Conclusion

Wherever you are on your application modernization journey, VMware and its Tanzu portfolio have your back. No matter if you want to start small and take your first steps with open-source projects, or if you want the complete set with the Tanzu Advanced edition, VMware offers the right options and flexibility.

I hope my learnings from this customer engagement help you to better understand the Tanzu portfolio and its capabilities.

Please leave your comments and thoughts below. 🙂

Modern Application Monitoring with VMware Tanzu and vRealize


The complexity of applications has increased because of new cloud technologies and new application architectures. Since organizations adopt and embrace the DevOps mindset, developers and IT operations are closer than ever. Developers are now part of the team operating the distributed systems.

Businesses must figure out how they learn about system failures and need to understand “what” is broken (the symptom) and “why” it is broken (the possible cause).

Let’s talk about application performance management (APM) and enterprise observability. 🙂

Monitoring

It was around the year 2012 or 2013 when I had to introduce a new monitoring solution for a former employer, a cloud service provider. I think Nagios was the state-of-the-art technology back then, and I replaced it with PRTG Network Monitor from Paessler.

When we onboarded a new customer infrastructure or application, the process was always the same. I had to define the metrics to collect and then put those metrics on a dashboard. It was very important to set alerts based on thresholds or conditions. Everyone knew back then that this approach wasn’t the best, but we didn’t have any other choice.

PRTG Sensor View

If an IP was not pingable or a specific port of a server or application was down for 60 seconds, an alert popped up and an e-mail was sent to the IT helpdesk. And in the dashboard you could see sensors switching from a green to a red state.

To simplify the troubleshooting process and to have a logical application view, I had to create some dependencies between sensors. This was probably the only way to create something like an application (dependency) mapping.

When users worked on a virtual desktop or on a Windows Terminal Server, we “measured” the user experience and application performance based on network latency and server resource usage (mostly CPU and RAM).

Observability

Observability enables you to drill down into the distributed services and systems (hardware components, containers, microservices) that make up an application.

Monitoring and observability are not the same thing. As described before, monitoring is the process of collecting metrics and setting alerts so that one can monitor the health and performance of components like network devices, databases, servers or VMs.

Observability helps you to understand complex architectures and interactions between elements in this architecture. It also allows you to troubleshoot performance issues, identify root causes for failures faster and helps you to optimize your cloud native infrastructure and applications.

In other words, observability can help you to speed up mean time to detection (MTTD) and mean time to resolution (MTTR) for infrastructure and application failures.

There are three golden telemetry signals to achieve observability (source):

  • Logs: Logs are the immutable records of discrete events that can identify unpredictable behavior in a system and provide insight into what changed in the system’s behavior when things went wrong. It’s highly recommended to ingest logs in a structured way, such as in JSON format, so that log visualization systems can auto-index and make logs easily queryable.
  • Metrics: Metrics are considered as the foundations of monitoring. They are the measurements or simply the counts that are aggregated over a period of time. Metrics will tell you how much of the total amount of memory is used by a method, or how many requests a service handles per second.
  • Traces: A single trace displays the operation as it moves from one node to another in a distributed system for an individual transaction or request. Traces enable you to dig into the details of particular requests to understand which components cause system errors, monitor flow through the modules, and discover the bottlenecks in the performance of the system.

Tanzu Observability Tracing
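To make the point about structured logs concrete, here is a minimal Python sketch that emits each log record as one JSON line, so a log pipeline (e.g. Fluent Bit) can auto-index the fields. The field names and logger name are illustrative, not a fixed schema:

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout-service")  # illustrative service name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.warning("payment retry exhausted")  # emits one queryable JSON line
```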

Using observability during app development can also improve the developer experience and productivity.

Tanzu Observability Services

The VMware Tanzu portfolio currently has four different editions:

Different Tanzu Observability services are available for different components and Tanzu editions.

Tanzu Standard Observability

Tanzu Standard includes the leading open-source projects Prometheus and Grafana for platform monitoring (and Fluent Bit for log forwarding).

Tanzu Kubernetes Grid provides monitoring with the open-source Prometheus and Grafana services. You deploy these services on your cluster and can then take advantage of Grafana visualizations and dashboards. As part of the integration, you can set up Alertmanager to send alerts to Slack or use custom Webhooks alert notifications.

Tanzu Kubernetes Grid architecture

Tanzu Standard Observability is comprised of:

  • Fluent Bit is an open-source log processor and forwarder which allows you to collect any data like metrics and logs from different sources, enrich them with filters and send them to multiple destinations. It’s the preferred choice for containerized environments like Kubernetes.
  • Grafana is a multi-platform open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources.
  • Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using a HTTP pull model, with flexible queries and real-time alerting.

Note: VMware only provides advisory (best effort) guidance on Prometheus and Grafana for use with Tanzu Kubernetes Grid. The installation, configuration and upgrades are beyond the current scope of VMware’s advisory support.
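To illustrate the HTTP pull model mentioned above: an application simply exposes its current metric values in the Prometheus text exposition format, and the Prometheus server scrapes that endpoint on an interval. This is a hypothetical stdlib-only sketch; the metric name and endpoint are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_TOTAL = 0  # illustrative counter; a real app would increment this

def render_metrics():
    # Prometheus text exposition format: HELP/TYPE comments plus samples.
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUESTS_TOTAL}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0 = any free port
# server.serve_forever()  # blocking; Prometheus would scrape /metrics
```

A Prometheus scrape job would then point at this endpoint, and Grafana visualizes the resulting time series.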

Tanzu Advanced Observability

In May 2017 VMware acquired Wavefront which is now part of the Tanzu portfolio and called “Tanzu Observability” (TO).

TO is a SaaS-based metrics monitoring and analytics platform that handles the enterprise-scale requirements of modern cloud native applications.

Compared to Grafana/Prometheus, one would say that Tanzu Observability is a true enterprise-grade observability platform. According to the GigaOm Cloud Observability Report, VMware Tanzu Observability is one of the strong leaders, alongside Dynatrace and Splunk, just to name a few.

Tanzu Observability is best suited for large organizations and provides consumption-based pricing based on the rate at which you send metric data to Tanzu Observability during the course of each month. This gives you the flexibility to start at any size you want and scale up/down as needed. It’s not dependent on the number of hosts or the number of users.

Tanzu Observability CIO Dashboard

Tanzu Observability allows you to collect data from different sources and provides integrations to over 250 technologies including different public clouds, web application and services, big data frameworks, data stores, other monitoring tools, operating systems / hosts, and many more.

Tanzu Observability Integrations

While data retention with Prometheus is limited to a maximum of 14 days, VMware allows you to send Prometheus data to Tanzu Observability for long-term data retention (up to 18 months at full granularity).

Just announced at VMworld 2021, VMware has added artificial intelligence and machine learning (AI/ML) root cause capabilities…

Tanzu Observability AI Powered Root Cause Analysis

…and created an integration between Tanzu Observability and vRealize Operations Cloud.

Through this integration, developers and SREs can now view vRealize Operations Cloud metrics alongside all the metrics, histograms, and traces collected by Tanzu Observability from other sources for a more holistic view of business-critical applications and infrastructure.

If you are attending VMworld, check out the sessions below to learn more about Tanzu Observability.

  • APP1308: Observability for Modern Application and Kubernetes Environments
  • APP2648: Implement Observability for Kubernetes Clusters and Workloads in Minutes
  • VI2630: Best Practices and Reference Framework for Implementing Observability
  • UX2551: Move from Traditional Monitoring to Observability and SRE – Design Studio
  • VMTN2810: Lost in Containers? Enhance Observability with Actionable Visualization
  • 2965: Kubernetes Cluster Operations, Monitoring and Observability
  • 2957: Build a Data Analytics Platform in Minutes Using Deployment Blueprints
  • APP2677: Meet the Experts: VMware Tanzu Observability by Wavefront
  • VMTN3230: Observe Application internals Holistically
  • VI1448: Take a Modern Approach to Achieve Application Resiliency
  • APP1319: Transforming Customer Experiences with VMware’s App Modernization Platform

Integration with other Tanzu Products

Tanzu Observability is fully integrated within the Tanzu family with OOTB integrations with:

Kubernetes Monitoring in vRealize Operations

Tanzu Observability provides “Kubernetes Observability” and OOTB integrations with RedHat OpenShift, Azure Kubernetes Service (AKS), Amazon EKS and Google GKE for example.

Tanzu Observability Kubernetes Monitoring

vRealize Operations (vROps) is also able to monitor multiple Kubernetes environments like VMware Tanzu Kubernetes Grid, RedHat OpenShift, Amazon EKS, Azure AKS or Google GKE. That is made possible with the vROps Management Pack for Kubernetes.

Using the vRealize Operations Management Pack for Kubernetes (needs vROps 8.1 or later), you can monitor, troubleshoot, and optimize capacity management for Kubernetes clusters. Below are some of the additional capabilities that this management pack delivers:

  • Auto-discovery of Tanzu Kubernetes Grid (TKG) or Tanzu Mission Control (TMC) Kubernetes clusters.
  • Complete visualization of Kubernetes cluster topology, including namespaces, clusters, replica sets, nodes, pods, and containers.
  • Performance monitoring for Kubernetes clusters.
  • Out-of-the-box dashboards for Kubernetes constructs, which include inventory and configuration.
  • Multiple alerts to monitor the Kubernetes clusters.
  • Mapping Kubernetes nodes with virtual machine objects.
  • Report generation for capacity, configuration, and inventory metrics for clusters or pods.

vRealize Operations K8s Monitoring

Note: Kubernetes monitoring is available in vRealize Operations Advanced.

There is also a Prometheus integration, that enables vRealize Operations Manager to retrieve metrics directly from Prometheus:


Note: vRealize Operations can also integrate with your existing application performance management systems. vROps offers integrations with App Dynamics, DataDog, Dynatrace and New Relic.

Conclusion

There are different options available within the VMware Tanzu and vRealize portfolios when it comes to Kubernetes operations, monitoring and observability.

Depending on your current needs and toolset you’ll have different options and integration possibilities. 

VMware’s portfolio gives you the choice to use open-source software like Grafana/Prometheus, leverage an existing vRealize Operations deployment or to get an enterprise-grade observability and analytics platform like Tanzu Observability.

If you are looking for an end-to-end monitoring stack aka 360-degree visibility for your K8s environments and clouds, VMware Tanzu and the vRealize Suite give you the following products:

  1. Applications – Tanzu Observability
  2. Kubernetes Cluster – Tanzu Observability, vRealize Operations, vRealize Network Insight, vRealize Log Insight
  3. Network Layer – vRealize Operations, vRealize Network Insight, vRealize Log Insight
  4. Virtualization Layer – vRealize Operations, vRealize Network Insight, vRealize Log Insight

 

VMworld 2021 – Summary of VMware Projects

On day 1 of VMworld 2021, we heard and saw a lot of super exciting announcements. I believe everyone is excited about all the news and innovations VMware has presented so far.

I’m not going to summarize all the news from day 1 or day 2 but thought it might be helpful to have an overview of all the VMware projects that have been mentioned during the general session and solution keynotes.

Project Cascade

VMware Project Cascade

Project Cascade will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud – available through an open command line interface (CLI), APIs, or a GUI dashboard.  Project Cascade will be built on an open foundation, with the open-sourced VM Operator as the first milestone delivery for Project Cascade that enables VM services on VMware Cloud.

VMworld 2021 session: Solution Keynote: The VMware Multi-Cloud Computing Infrastructure Strategy of 2021 [MCL3217]

Project Capitola

VMware Project Capitola

Project Capitola is a software-defined memory implementation that will aggregate tiers of different memory types such as DRAM, PMEM, NVMe and other future technologies in a cost-effective manner, to deliver a uniform consumption model that is transparent to applications.

VMworld 2021 session: Introducing VMware Project Capitola: Unbounding the ‘Memory Bound’ [MCL1453] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Ensemble

VMware Project Ensemble

Project Ensemble integrates and automates multi-cloud management with vRealize. This means that all the different VMware cloud management capabilities—self-service, elasticity, metering, and more—are in one place. You can access all the data, analytics, and workflows to easily manage your cloud deployments at scale.

VMworld 2021 session: Introducing Project Ensemble Tech Preview [MCL1301]

Project Arctic

VMware Project Arctic

Project Arctic is “the next evolution of vSphere” and is about bringing your own hardware while taking advantage of VMware Cloud offerings to enable a hybrid cloud experience. Arctic natively integrates cloud connectivity into vSphere and establishes hybrid cloud as the default operating model.

VMworld 2021 session: What’s New in vSphere [APP1205] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Monterey

VMware Project Monterey

Project Monterey was announced in the VMworld 2020 keynote. It is about SmartNICs that will redefine the data center with decoupled control and data planes for management, networking, storage and security for VMware ESXi hosts and bare-metal systems.

VMworld 2021 session: 10 Things You Need to Know About Project Monterey [MCL1833] and How vSphere Is Redefining Infrastructure For Running Apps In the Multi-Cloud Era [MCL2500]

Project Iris

I don’t remember anymore which session mentioned Project Iris but it is about the following:

Project Iris discovers and analyzes an organization’s full app portfolio; recommends which apps to rehost, replatform, or refactor; and enables customers to adapt their own transformation journey for each app, line of business, or data center.

Project Pacific

Project Pacific was announced at VMworld 2019. It is about re-architecting vSphere to integrate and embed Kubernetes, and is known as “vSphere with Tanzu” (or TKGS) today. In other words, Project Pacific transformed vSphere into a Kubernetes-native platform with a Kubernetes control plane integrated directly into ESXi and vCenter. Pacific is part of the Tanzu portfolio.

VMworld 2019 session: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future [HBI4937BE]

Project Santa Cruz

VMware Project Santa Cruz

Project Santa Cruz is a new integrated offering from VMware that adds edge compute and SD-WAN together to give you a secure, scalable, zero touch edge run time at all your edge locations. It connects your edge sites to centralized management planes for both your networking team and your cloud native infrastructure team. This solution is OCI compatible: if your app runs in a container, it can run on Santa Cruz.

VMworld 2021 session: Solution Keynote: What’s Next? A Look inside VMware’s Innovation Engine [VI3091]

Project Dawn Patrol

Project Dawn Patrol

So far, Project Dawn Patrol was only mentioned during the general session. “It will give you full visibility with a map of all your cloud assets and their dependencies”, Dormain Drewitz said.

VMworld 2021 session: General Session: Accelerating Innovation, Strategies for Winning Across Clouds and Apps [GEN3103]

Project Radium

VMware Project Radium

Last year VMware introduced vSphere Bitfusion, which allows shared access to a pool of GPUs over a network. Project Radium expands the feature set of Bitfusion to other architectures and will support AMD, Graphcore, Intel, Nvidia and other hardware vendors for AI/ML workloads.

VMworld 2021 session: Project Radium: Bringing Multi-Architecture compute to AI/ML workloads [VI1297]

Project IDEM

IDEM has been described as an “easy to use management automation technology”.

VMworld 2021 session: Solution Keynote: What’s Next? A Look inside VMware’s Innovation Engine [VI3091] and Next-Generation SaltStack: What Idem Brings to SaltStack [VI1865]

Please comment below or let me know via Twitter or LinkedIn if I missed a new or relevant VMware project. 😉

Must Watch VMworld Multi-Cloud Sessions

I recently wrote a short blog about some of the sessions I recommend to customers, partners and friends.

If you would like to know more about the VMware multi-cloud strategy and vision, have a look at some of the sessions below:

VMworld 2021 Must Watch Sessions

 

VMworld 2021 – My Content Catalog and Session Recommendation

VMworld 2021 is going to happen from October 6-7, 2021 (EMEA). This year you can expect many sessions and presentations about the options you have when combining different products, helping you to reduce complexity, provide more automation and therefore create less overhead.

Let me share my 5 personal favorite picks and also 5 recommended sessions based on the conversations I had with multiple customers this year.

My 5 Personal Picks

10 Things You Need to Know About Project Monterey [MCL1833]

Project Monterey was announced in the VMworld 2020 keynote. There has been tremendous work done since then. Hear Niels Hagoort and Sudhansu Jain talking about SmartNICs and how they will redefine the data center with decoupled control and data planes – for ESXi hosts and bare-metal systems. They are going to cover and demo the overall architecture and use cases!

Upskill Your Workforce with Augmented and Virtual Reality and VMware [VI1596]

Learn from Matt Coppinger how augmented reality (AR) and virtual reality (VR) are transforming employee productivity, and how these solutions can be deployed and managed using VMware technologies. Matt is going to cover the top enterprise use cases for AR/VR as well as the challenges you might face deploying these emerging technologies. Are you interested in how to architect and configure VMware technologies to deploy and manage the latest AR/VR technology, applications and content? If yes, then this session is also for you.

Addressing Malware and Advanced Threats in the Network [SEC2027] (Tech+ Pass Only)

I am very interested in learning more about cybersecurity. With Chad Skipper, VMware has an expert who can give insights on how the Network Detection and Response (NDR) capabilities of NSX Advanced Threat Prevention provide visibility, detection and prevention of advanced threats.

60 Minutes of Non-Uniform Memory Access (NUMA) 3rd Edition [MCL1853]

Learn more about NUMA from Frank Denneman. You are going to learn more about the underlying configuration of a virtual machine and discover the connection between the General-Purpose Graphics Processing Unit (GPGPU) and the NUMA node. You will also understand how your knowledge of NUMA concepts in your cluster can help developers, by aligning the Kubernetes nodes to the physical infrastructure with the help of the VM Service.

Mount a Robust Defense in Depth Strategy Against Ransomware [SEC1287]

Are you interested to learn more about how to protect, detect, respond to and recover from cybersecurity attacks across all technology stacks, regardless of their purpose or location? Learn more from Amanda Blevins about the VMware solutions for end users, private clouds, public clouds and modern applications.

5 Recommended Sessions based on Customer Conversations

Cryptographic Agility: Preparing for Quantum Safety and Future Transition [VI1505]

A lot of work is needed to better understand cryptographic agility and how we can address and manage the expected challenges that come with quantum computing. Hear VMware’s engineers from the Advanced Technology Group talking about the requirements of crypto agility and VMware’s recent research work on post-quantum cryptography in the VMware Unified Access Gateway (UAG) project.

Edge Computing in the VMware Office of the CTO: Innovations on the Horizon [VI2484]

Let Chris Wolf give you some insight into VMware’s strategic direction in support of edge computing. He is going to talk about solutions that will drive down costs while accelerating the velocity and agility in which new apps and services can be delivered to the edge.

Delivering a Continuous Stream of More Secure Containers on Kubernetes [APP2574]

In this session you will see how you can use two capabilities of VMware Tanzu Advanced, Tanzu Build Service and Tanzu Application Catalog, to feed a continuous stream of patched and compliant containers into your continuous delivery (CD) system. A must-attend session delivered by David Zendzian, the VMware Tanzu Global Field CISO.

A Modern Firewall For any Cloud and any Workload [SEC2688]

VMware NSX firewall reimagines East-West security by using a distributed, software-based approach to attach security policies to every workload in any cloud. Chris Kruegel gives you insights on how to stop lateral movement with advanced threat prevention (ATP) capabilities via IDS/IPS, sandboxing, NTA and NDR.

A Practical Approach for End-to-End Zero Trust [SEC2733]

Hear the VMware CTOs Shawn Bass, Pere Monclus and Scott Lundgren talking about a zero trust approach. Shawn and the others will discuss specific capabilities that will enable customers to achieve a zero trust architecture that is aligned to the NIST guidance and covers secure access for users as well as secure access to workloads.

Enjoy VMworld 2021! 🙂

 

The Rise of VMware Tanzu Service Mesh

My last article focused on application modernization and data portability in a multi-cloud world. I explained the value of the VMware Tanzu portfolio by mentioning a consistent infrastructure and consistent application platform approach, which ultimately delivers a consistent developer experience. I also dedicated a short section about Tanzu Service Mesh, which is only one part of the unified Tanzu control plane (besides Tanzu Mission Control and Tanzu Observability) for multiple Kubernetes clusters and clouds.

When you hear or see someone writing about TSM, you very soon get to the point where the so-called “Global Namespaces” (GNS) are mentioned, which have the magic power to stitch together hybrid applications that run in multiple clouds.

Believe me when I say that Tanzu Service Mesh (TSM) is rising and becoming the next superstar of the VMware portfolio. I think Joe Baguley would agree here. 😀

Namespaces

Before we start talking about Tanzu Service Mesh and the magical power of Global Namespaces, let us have a look at the term “Namespaces” first.

Kubernetes Namespace

Namespaces give you a way to organize clusters into virtually carved-out sub-clusters, which can be helpful when different teams, tenants or projects share the same Kubernetes cluster. This form of namespace provides a method to better share resources, because it ensures fair allocation of these resources with the right permissions.

So, using namespaces gives you a level of isolation, so that developers never affect other project teams. Policies allow you to configure compute resources by defining resource quotas for CPU or memory utilization. This also protects the performance of a specific namespace, its resources (pods, services etc.) and the Kubernetes cluster in general.
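As a minimal sketch of what such a quota looks like in plain Kubernetes (the namespace name and limits are illustrative, not taken from any specific environment):

```yaml
# Illustrative namespace with a resource quota capping the total CPU and
# memory that all pods in the namespace may request or consume.
apiVersion: v1
kind: Namespace
metadata:
  name: team-blue
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-blue-quota
  namespace: team-blue
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Once applied, any pod created in the namespace must declare resource requests and limits, and the scheduler rejects workloads that would exceed the quota.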

Although namespaces are separate from each other, they can communicate with each other. Network policies can be configured to create isolated and non-isolated pods. For example, a network policy can allow or deny all traffic coming from other namespaces.
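A hedged example of such a network policy in standard Kubernetes (namespace name is illustrative): this denies ingress from other namespaces while still allowing pods within the same namespace to talk to each other.

```yaml
# Illustrative policy: block all ingress traffic originating outside this
# namespace; an empty podSelector in "from" matches only same-namespace pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: team-blue
spec:
  podSelector: {}          # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}  # allow traffic only from pods in the same namespace
```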

Ellei Mei explained this in a very easy way in her article after Project Pacific had been made public in September 2019:

Think of a farmer who divides their field (cluster + cluster resources) into fenced-off smaller fields (namespaces) for different herds of animals. The cows in one fenced field, horses in another, sheep in another, etc. The farmer would be like operations defining these namespaces, and the animals would be like developer teams, allowed to do whatever they do within the boundaries they are allocated.

vSphere Namespace

The first time I heard of Kubernetes or vSphere Namespaces was in fact at VMworld 2019 in Barcelona. VMware then presented a new app-focused management concept. This concept described a way to model modern applications and all their parts, and we call this a vSphere Namespace today.

With Project Pacific (today known as vSphere with Tanzu or Tanzu Kubernetes Grid), VMware went one step further and extended the Kubernetes Namespace by adding more options for compute resource allocation, vMotion, encryption, high availability, backup & restore, and snapshots.

Rather than having to deal with each namespace and its containers, vSphere Namespaces (sometimes also called “guardrails”) can draw a line around the whole application and its services, including virtual machines.

Namespaces as the unit of management

With the re-architecture of vSphere and the integration of Kubernetes as its control plane, namespaces can be seen as the new unit of management.

Imagine that you have thousands of VMs in your vCenter inventory that you need to deal with. After you group those VMs into their logical applications, you may only have to deal with dozens of namespaces.

If you need to turn on encryption for an application, you can just click a button on the namespace in vCenter and it does it for you. You don’t need to deal with individual VMs anymore.

vSphere Virtual Machine Service

With the vSphere 7 Update 2a release, VMware provided the “VM Service” that enables Kubernetes-native provisioning and management of VMs.

For many organizations, legacy applications do not become modern overnight; they become hybrid first before they are completely modernized. This means we have a combination of containers and virtual machines forming the application, not containers only. I also call this a hybrid application architecture in front of my customers. For example, you may have a containerized application that uses a database hosted in a separate VM.

So, developers can use the existing Kubernetes API and a declarative approach to create VMs. No need to open a ticket anymore to request a virtual machine. We talk self-service here.
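As a rough sketch of what this declarative approach looks like with the VM Operator API that backs the VM Service (the class, image and storage class names below are placeholders, and exact fields can vary by release):

```yaml
# Hedged example: a developer requests a VM through the Kubernetes API
# instead of opening a ticket. Values are illustrative placeholders.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: db-vm
  namespace: team-blue
spec:
  className: best-effort-small     # VM class published by the vSphere admin
  imageName: ubuntu-20.04          # image from the associated content library
  powerState: poweredOn
  storageClass: vsan-default-policy
```

Applying this manifest with `kubectl apply -f` is all a developer needs to do; the platform provisions and powers on the VM alongside the containers in the same namespace.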

Tanzu Mission Control – Namespace Management

Tanzu Mission Control (TMC) is a VMware Cloud (SaaS) service that provides a single control point for multiple teams to remove the complexities of managing Kubernetes clusters across multiple clouds.

One of the ways to organize and view your Kubernetes resources with TMC is by the creation of “Workspaces”.

Workspaces allow you to organize your namespaces into logical groups across clusters, which helps to simplify management by applying policies at a group level. For example, you could apply an access policy to an entire group of clusters (from multiple clouds) rather than creating separate policies for each individual cluster.

Think about backup and restore for a moment. TMC and the concept of workspaces allow you to back up and restore data resources in your Kubernetes clusters on a namespace level.

Management and operations with a new application view!

FYI, VMware announced the integration of Tanzu Mission Control and Tanzu Service Mesh in December 2020.

Service Mesh

A lot of vendors including VMware realized that the network is the fabric that brings microservices together, which in the end form the application. With modernized or partially modernized apps, different Kubernetes offerings and a multi-cloud environment, we will find the reality of hybrid applications which sometimes run in multiple clouds. 

This is the moment when you have to think about the connectivity and communication between your app’s microservices.

One of the main ideas and features behind a service mesh was to provide service-to-service communication for distributed applications running in multiple Kubernetes clusters hosted in different private or public clouds.

The number of Kubernetes service meshes has rapidly increased over the last few years, and service mesh has gotten a lot of hype. No wonder there are different service mesh offerings around:

  • Istio
  • Linkerd
  • Consul
  • AWS App Mesh
  • OpenShift Service Mesh by Red Hat
  • Open Service Mesh AKS add-on (currently preview on Azure)

Istio is probably the most famous one on this list. For me, it is definitely the one my customers look at and talk about the most.

Service mesh brings a new level of connectivity between services. With service mesh, we inject a proxy in front of each service; in Istio, for example, this is done using a “sidecar” within the pod.

Istio’s architecture is divided into a data plane based on Envoy (the sidecar) and a control plane that manages the proxies. With Istio, you inject the proxies into all the Kubernetes pods in the mesh.

As you can see in the image, the proxy sits in front of each microservice and all communication passes through it. When a proxy talks to another proxy, we talk about a service mesh. Proxies also handle traffic management, errors and failures (retries) and collect metrics for observability purposes.
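In upstream Istio, for example, this sidecar injection is typically switched on per namespace with a label, after which newly created pods automatically receive the Envoy proxy container (the namespace name here is illustrative):

```yaml
# Standard upstream Istio behavior: labeling a namespace with
# istio-injection=enabled makes the webhook inject an Envoy sidecar
# into every pod subsequently created in that namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```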

Challenges with Service Mesh

The thing with service mesh is that, while everyone thinks it sounds great, it brings new challenges of its own.

The installation and configuration of Istio is not that easy and takes time. Besides that, Istio is typically tied to a single Kubernetes cluster and therefore a single Istio data plane – and organizations usually prefer to keep their Kubernetes clusters independent from each other. This leaves us with security and policies tied to a Kubernetes cluster or cloud vendor, which leaves us with silos.

Istio supports a so-called multi-cluster deployment with one service mesh stretched across Kubernetes clusters, but you’ll end up with a stretched Istio control plane, which eliminates the independence of each cluster.

So, a lot of customers also talk about better and easier manageability without dependencies between clouds and different Kubernetes clusters from different vendors.

That’s the moment when Tanzu Service Mesh becomes very interesting. 🙂

Tanzu Service Mesh (formerly known as NSX Service Mesh)

Tanzu Service Mesh, built on VMware NSX, is an offering that delivers an enterprise-grade service mesh on top of a VMware-administered Istio distribution.

When onboarding a new cluster on Tanzu Service Mesh, the service deploys a curated version of Istio signed and supported by VMware. This Istio deployment is the same as the upstream Istio in every way, but it also includes an agent that communicates with the Tanzu Service Mesh global control plane. Istio installation is not the most intuitive, but the onboarding process of Tanzu Service Mesh simplifies the process significantly.

Overview of Tanzu Service Mesh

The big difference and the value that comes with Tanzu Service Mesh (TSM) is its ability to support cross-cluster and cross-cloud use cases via Global Namespaces.

Global Namespaces (GNS)

Yep, another kind of a namespace, but the most exciting one! 🙂

A Global Namespace is a unique concept in Tanzu Service Mesh that connects the resources and workloads forming an application into a virtual unit. Each GNS is an isolated domain that provides automatic service discovery and manages the following functions that are part of it, no matter where they are located:

  • Identity. Each global namespace has its own certificate authority (CA) that provisions identities for the resources inside that global namespace.
  • Discovery (DNS). The global namespace controls how one resource can locate another and provides a registry.
  • Connectivity. The global namespace defines how communication can be established between resources and how traffic within the global namespace and external to the global namespace is routed between resources.
  • Security. The global namespace manages security for its resources. In particular, the global namespace can enforce that all traffic between the resources is encrypted using Mutual Transport Layer Security authentication (mTLS).
  • Observability. Tanzu Service Mesh aggregates telemetry data, such as metrics for services, clusters, and nodes, inside the global namespace.
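Tanzu Service Mesh manages the mTLS enforcement for you at the GNS level; for comparison, enforcing mTLS for a single namespace in upstream Istio looks roughly like this (the namespace name is illustrative):

```yaml
# Upstream Istio equivalent of the GNS security function: require mTLS
# for all workload-to-workload traffic in the "demo" namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo
spec:
  mtls:
    mode: STRICT
```

The point of the GNS is that you declare this intent once and it applies across every cluster and cloud in the global namespace, instead of per cluster.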

Use Cases

The following diagram represents the global namespace concept and other pieces in a high-level architectural view. The components of one application are distributed in two different Kubernetes clusters: one of them is on-premises and the other in a public cloud. The Global Namespace creates a logical view of these application components and provides a set of basic services for the components.

Global Namespaces

If we take application continuity as another example for a use case, we would deploy an app in more than one cluster and possibly in a remote region for disaster recovery (DR), with a load balancer between the locations to direct traffic to both clusters. This would be an active-active scenario. With Tanzu Service Mesh, you could group the clusters into a Global Namespace and program it to automatically redirect traffic in case of a failure. 

In addition to the use case and support for multi-zone and multi-region high availability and disaster recovery, you can also provide resiliency with automated scaling based on defined Service-Level Objectives (SLO) for multi-cloud apps.

VMware Modern Apps Connectivity Solution  

In May 2021 VMware introduced a new solution that brings together the capabilities of Tanzu Service Mesh and NSX Advanced Load Balancer (NSX ALB, formerly Avi Networks) – not only for containers but also for VMs. While Istio’s Envoy only operates on layer 7, VMware provides layer 4 to layer 7 services with NSX (part of TSM) and NSX ALB, which includes L4 load balancing, ingress controllers, GSLB, WAF and end-to-end service visibility. 

This solution speeds the path to app modernization with connectivity and better security across hybrid environments and hybrid app architectures.

Multiple disjointed products, no end-to-end observability

Summary

One thing I can say for sure: The future for Tanzu Service Mesh is bright!

Many customers are looking for ways for offloading security (encryption, authentication, authorization) from an application to a service mesh.

One great example and use case from the financial services industry is crypto agility, where a “crypto service mesh” (a specialized service mesh) could be part of a new architecture, which provides quantum-safe certificates.

And when we offload encryption, calculation, authentication etc., then we may have other use cases for SmartNICs and Project Monterey.

To learn more about service mesh and the capabilities of Tanzu Service Mesh, I can recommend Service Mesh for Dummies, written by Niran Even-Chen, Oren Penso and Susan Wu.

Thank you for reading!