The Foundation for Generative AI in the Enterprise

During the last multi-cloud briefing on July 10th, VMware talked about generative AI (GenAI) for the enterprise and how VMware is democratizing access to the power of artificial intelligence (AI) by enabling enterprises to build and serve in-house AI models that are compact and cost-efficient while addressing the need for compliance, privacy, and data security.

We can expect more information and announcements at VMware Explore 2023 in Las Vegas, but the company's focus will shift: becoming a multi-cloud enabler and building the digital foundation of the future.

After the last multi-cloud briefing, customers and partners approached me to ask what VMware is going to sell in the future. The answer to that was: nothing.

As always, VMware wants to stay ahead of the game and prepare for what is to come. So, whenever customers are ready to build a GenAI platform, VMware is ready to deliver one. VMware Cloud Foundation and multi-cloud are going to be the foundation for generative AI because they ensure maximum choice and flexibility in where a customer chooses to build, run, and consume their AI models.

Additionally, VMware partners with recognized leaders in the AI space such as NVIDIA and Intel.

NVIDIA AI Enterprise for VMware (NVAIE)

NVIDIA AI Enterprise refers to a suite of products, technologies, and solutions offered by NVIDIA, specifically tailored for enterprise applications of artificial intelligence:

NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software, optimized, certified, and supported by NVIDIA to run in virtualized data centers with VMware vSphere® with Tanzu® and VMware Cloud Foundation™ with Tanzu on NVIDIA-Certified Systems™. It includes key enabling technologies from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.

VMware + NVIDIA AI-Ready Platform

If you want to read more about it in detail, Frank Denneman has started a blog series about machine learning and NVAIE.

Generative AI (GenAI)

There is a generative AI boom that presents new opportunities and challenges, and it has the potential to revolutionize how people and companies work in the future. GenAI, a type of artificial intelligence, can be used to create new products and services, images, audio, text, videos, and application code, and to automate tasks, for example.

Amid this hype, leading service providers and organizations are trying to secure pole position.

This year’s keynote at VMware Explore US 2023 is about “Taking a Cloud-Smart Approach to Harness the Power of Generative AI”:

Join the VMware Explore 2023 General Session to learn how industry leaders are embracing a cloud-smart approach to harness the power of generative AI as they tap into data residing on-premises, at the edge, and across multiple clouds. VMware CEO Raghu Raghuram, President Sumit Dhawan, and a host of other speakers will dive into how VMware and its partners help enterprises build, train and run AI models while addressing the core challenges of risk and cost. You’ll hear from the key players charting the next course of enterprise tech innovation.

VMware AI Labs (VAIL)

Have you seen the latest job openings at VMware and some announcements on LinkedIn? VMware has transformed its research and innovation team into “VMware AI Labs”. This shows the company's plan and commitment to becoming the leading provider and preferred partner for organizations and their AI/ML initiatives.

VMware AI Labs

Looking at the open “Machine Learning Engineer | VMware AI Labs” job opportunity, one will find the following information:

VMware AI Labs focuses on building differentiated technologies in AI, Generative AI, and adjacent systems. Advanced development (xLabs) efforts focus on near-term goals to advance VMware’s relevance in AI and Generative AI.

Artificial Intelligence at VMware Explore US 2023

Browsing through the content catalog, I found the following AI-related sessions:

  • Technology Innovation Showcase [K2906LV] by Kit Colbert and Chris Wolf – Dive deep into VMware’s products and solutions to discover ways to succeed in today’s multi-cloud world and the rise of AI. Experience demonstrations of innovations across apps, cloud, devices, edge and security. Discover unique perspectives on what it means to thrive in today’s world and be prepared for tomorrow.
  • 100x Your Engineering Throughput via AI Tools [VIB1744LV] by Hüseyin Dursun and Steve Liang – The increasing popularity of generative AI and large language models (LLMs) has the potential to increase engineering throughput if used wisely. This session will share practices we have been aiming to enable inside VMware and how they can be replicated by customers and partners. Like any other major shift, there must be the right degree of coverage assurance for potential intellectual property-related issues while taking full advantage of what LLMs have to offer. The session will share learnings and help the audience get a faster start on their journey of AI-driven product and application development.
  • The AI R-Evolution, why it will change the way we Work, Learn and Engineer [VIB2637LV] by Chris Gully and John Arrasjid – Are you curious about running AI workloads or how AI is changing the infrastructure game? This session will explore the evolution of AI in workloads and in the cloud supporting them. We will discuss current solutions and share our insight into the challenges that exist. How do rules tied to ethics, governance, freedoms and research get influenced and applied? Do the three (now four) laws of robotics apply when AI is personified? How is AI being integrated into infrastructure technologies to provide more resilient and self-healing environments that will support traditional and newer workloads such as AI, machine learning, and deep learning? We will cover people, process and technology in this session.
  • What’s New with VMware + NVIDIA AI-Ready Enterprise Platform [CEIB3051LV] by Justin Murray and Frank Denneman – NVIDIA and VMware have partnered to democratize AI/ML for all enterprises. VMware+NVIDIA AI-Ready Enterprise Platform delivers best-in-class AI/ML software, NVIDIA AI Enterprise, optimized and certified for the industry’s leading enterprise workload platforms, VMware Cloud Foundation & VMware vSphere. Join this talk to hear Justin Murray, Frank Denneman, and an NVIDIA speaker and learn more about this VMware & NVIDIA AI initiative. Watch out for updates to this abstract for some new announcements.
  • AI Powers New Use Cases with VMware Data Products [MAPB2795LV] by Ivan Novick and Ian Pytlarz – 2023 ushered in rapid development in AI, with the improvements in large language models surprising everyone. AI models can be trained to understand the meaning of natural language text as well as unstructured data, such as images, audio and video. Alone, AI can be used to solve new problems. But when combined with traditional big data analytics technologies and open source software, companies can rapidly deploy high ROI applications powered by neural networks and AI, and help lower cost and grow revenue in their business while staying competitive. In this talk, we will discuss new use cases that were not possible last year, as well as architectures and strategies to rapidly build out the capabilities with VMware Data Solutions and the VMware Application Catalog.
  • Data Science Deep Dive in Anywhere Workspace, and What AI Means for EUC [EUSB2527LV] by Johan van Amersfoort and Hayden Davis – VMware Anywhere Workspace has been leveraging data science for years. And with our autonomous workspace vision, data science is getting even more important. First, we will look under the hood of Anywhere Workspace to learn how we leverage machine learning and AI. Then, with generative AI and large language models making stunning advancements recently, we will discuss ideas about how these might affect the future of employee experience, security, and IT modernization.
  • Integrated MLOps – Accelerating AI-Powered Finance with VMware [INDB2221LV] by Paul Nothard and Yuval Zukerman – AI and machine learning (ML) are hot topics but fraught with danger in regulated industries. Talking with financial services chief risk officers, we understand the concerns our customers have regarding the control of data, the recreation of ML data sets, and most importantly, how AI decisions have been made to demonstrate a lack of bias and clear business decisioning. In our session, you will hear how VMware’s industry solution team, working with partners can help you navigate this danger and hopefully sleep better at night. The panel will be comprised of experts from VMware, our partners, and customer(s).

Conclusion

I guess we have to wait and see what VMware reveals at Explore US at the end of August 2023. I am excited to be in Las Vegas this year, and hopefully I will find the time to summarize all the major announcements for you – like I did last year:

Supercloud – A Hybrid Multi-Cloud

I thought it was time to finally write a piece about superclouds. Call it supercloud, the new multi-cloud, a hybrid multi-cloud, cross-cloud, or a metacloud. New terms with the same meaning. I may be biased, but I am convinced that VMware is in the pole position for this new architecture and approach.

Let me also tell you this: superclouds are nothing new. Some of you believe that the idea of a supercloud is something new, something modern. Some of you may also think that cross-cloud services, workload mobility, application portability, and data gravity are new, complex topics of the “modern world” that need to be discussed or solved in 2023 and beyond. Guess what: most of these challenges and ideas have existed for more than 10 years already!

Cloud-First is not cool anymore

There is clear evidence that a cloud-first approach is no longer cool or ideal. Do you remember about a dozen years ago, when analysts believed that local data centers were going to disappear and that the IT landscape would consist only of public clouds, aka hyperscalers? Have a look at this timeline:

VMware and Public Clouds Timeline

We can clearly see when public clouds like AWS, Google Cloud, and Microsoft Azure appeared on the scene. A few years later, the world realized that the future is hybrid or multi-cloud. In 2019, AWS launched “Outposts”; Microsoft made Azure Arc and its on-premises Kubernetes offering available only a few years later.

Google, AWS, and Microsoft changed their messaging from “we are the best, we are the only cloud” to “okay, the future is multi-cloud, we also have something for you now”. Consistent infrastructure and consistent operations became almost everyone’s marketing slogan.

As you can also see above, VMware announced its hybrid cloud offering “VMware Cloud on AWS” in 2016; initial availability came a year later, and it has been generally available since 2018.

From Internet to Interclouds

Before someone coined the term “supercloud”, people were talking about the need for an “intercloud”. In 2010, Vint Cerf, the so-called “Father of the Internet”, shared his opinions and predictions on the future of cloud computing. He talked about the potential need for, and importance of, interconnecting different clouds.

Cerf already understood about 13 years ago that there is a need for an intercloud, because users should be able to move data and workloads from one cloud to another (e.g., from AWS to Azure to GCP). Back then, he guessed that the intercloud problem could be solved around 2015.

We’re at the same point now in 2010 as we were in ’73 with internet.

In short, Vint Cerf understood that the future is multi-cloud and that interoperability standards are key.

There is also a document that delivers proof that NIST had a working group (IEEE P2302) trying to develop the “Standard for Intercloud Interoperability and Federation (SIIF)” around 2011. What did the proposal look like back then? I found this YouTube video a few years ago with the following sketch:

Intercloud 2012

Workload Mobility and Application Portability

As we can see above, VM or workload mobility was already part of this high-level architecture from the IEEE working group. I also found a paper from NIST called “Cloud Computing Standards Roadmap” dated July 2013 with very interesting sections:

Cloud platforms should make it possible to securely and efficiently move data in, out, and among cloud providers and to make it possible to port applications from one cloud platform to another. Data may be transient or persistent, structured or unstructured and may be stored in a file system, cache, relational or non-relational database. Cloud interoperability means that data can be processed by different services on different cloud systems through common specifications. Cloud portability means that data can be moved from one cloud system to another and that applications can be ported and run on different cloud systems at an acceptable cost.

Note: VMware HCX has been available since 2018 and is still the easiest and probably the most cost-efficient way to migrate workloads from one cloud to another.

It is all about the money

Imagine it is March 2014 and you read the following announcement: Cisco is going big – it wants to spend $1 billion on the creation of an intercloud.

Yes, that really happened. Details can be found in the New York Times Archive. The New York Times even mentioned at the end of their article that “it’s clear that cloud computing has become a very big money game”.

In Cisco’s announcement, money had also been mentioned:

Of course, we believe this is going to be good for business. We expect to expand the addressable cloud market for Cisco and our partners from $22Bn to $88Bn between 2013-2017.

In 2016, Cisco retired its intercloud offering because AWS and Microsoft were, and still are, very dominant. AWS posted $12.2 billion in sales for 2016; Microsoft ended up at almost $3 billion in revenue with Azure.

Remember Cisco’s estimate of the “addressable cloud market”? In 2018, Gartner put worldwide public cloud spend for 2017 at $145B. For 2023, Gartner forecast a cloud spend of almost $600 billion.

Data Gravity and Egress Costs

Another topic I want to highlight is “data gravity” coined by Dave McCrory in 2010:

Consider Data as if it were a Planet or other object with sufficient mass. As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet. As the mass or density increases, so does the strength of gravitational pull. As things get closer to the mass, they accelerate toward the mass at an increasingly faster velocity. Relating this analogy to Data is what is pictured below.

Put data gravity together with egress costs, and one realizes that they limit any mobility and portability discussion:

Source: https://medium.com/@alexandre_43174/the-surprising-truth-about-cloud-egress-costs-d1be3f70d001

By the way, what happened to “economies of scale”?
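To make the egress argument tangible, here is a small sketch. The per-GB rates and provider names are hypothetical placeholders, not actual pricing; real egress tiers vary by region, volume, and destination:

```python
# Illustrative only: flat per-GB internet egress rates are hypothetical
# placeholders, not actual provider pricing.
HYPOTHETICAL_EGRESS_RATES_PER_GB = {
    "cloud_a": 0.09,
    "cloud_b": 0.087,
    "cloud_c": 0.12,
}

def egress_cost(provider: str, data_gb: float) -> float:
    """Estimate the cost of moving data_gb out of a provider at a flat rate."""
    rate = HYPOTHETICAL_EGRESS_RATES_PER_GB[provider]
    return round(data_gb * rate, 2)

# Moving a 500 TB data set out of "cloud_a" at a hypothetical $0.09/GB:
print(egress_cost("cloud_a", 500 * 1024))  # 46080.0
```

Even with toy numbers, repatriating half a petabyte lands in the tens of thousands of dollars for the transfer alone, which is exactly why data gravity keeps workloads where their data already lives.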

The Cloud Paradox

As you should understand by now, topics like costs, lock-in, and failed expectations (technical and commercial) have been discussed for more than a decade. That is why I highlighted NIST’s sentence above: cloud portability means that data can be moved from one cloud system to another, and that applications can be ported and run on different cloud systems at an acceptable cost.

Acceptable cost.

While the (public) cloud seems to be the right choice for some companies, we now see other scenarios popping up more often: reverse cloud migrations (also sometimes called repatriation).

I have customers who tell me that the exact same VM with the exact same business logic costs five to seven times more after they moved it from their private cloud to a public cloud.

Let’s park that and cover the “true costs of cloud” another time. 😀

Public Cloud Services Spend

Looking at Vantage’s report, we can see the following top 10 services on AWS, Azure and GCP ranked by the share of costs:

If they are right and the numbers hold true for most enterprises, it means that customers spend most of their money on virtual machines (IaaS), databases, and storage.

What does Gartner say?

Let’s have a look at the most recent forecast called “Worldwide Public Cloud End-User Spending to Reach Nearly $600 Billion in 2023” from April 2023:

Gartner April 2023 Public Cloud Spend Forecast

All segments of the cloud market are expected to see growth in 2023. Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth in 2023 at 30.9%, followed by platform-as-a-service (PaaS) at 24.1%.

Conclusion

If most companies spend around 30% of their budget on virtual machines, and Gartner predicts that IaaS will still see higher growth than SaaS or PaaS, a supercloud architecture for IaaS would make a lot of sense. You would have the same technology format, could use the same networking and security policies and existing skills, and would benefit from many other advantages as well.

Looking at the VMware Cloud approach, which allows you to run VMware’s software-defined data center (SDDC) stack on AWS, Azure, Google, and many other public clouds, customers could create a seamless hybrid multi-cloud architecture – using the same technology across clouds.

Other VMware products that fall under the supercloud category would be Tanzu Application Platform (TAP), the Aria Suite, and Tanzu for Kubernetes Operations (TKO) which belong to VMware’s Cross-Cloud Services portfolio.

Final Words

I think it is important to understand that we are still in the early days of multi-cloud (or the use of multiple clouds).

Customers get confused because it took them years to deploy or move new or existing apps to the public cloud. Now, analysts and vendors talk about cloud exit strategies, reverse cloud migrations, repatriations, exploding cloud costs, and so on.

Yes, a supercloud is about a hybrid multi-cloud architecture and a standardized design for building apps and platforms across clouds. But the most important capability, in my opinion, is that it makes your IT landscape future-ready at different levels, with different abstraction layers.

VMware Cloud Foundation 5.0 – Technical Overview

Update: Please have a look at the VMware Cloud Foundation 5.1 Technical Overview.

This technical overview supersedes the previous version, which was based on VMware Cloud Foundation 4.5, and covers all capabilities and enhancements that were delivered with VCF 5.0.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) that is made for modernizing data centers and deploying modern container-based applications. VCF is based on different components like vSphere (compute), vSAN (storage), NSX (networking), and some parts of the Aria Suite (formerly vRealize Suite). The idea of VCF follows a standardized, automated, and validated approach that simplifies the management of all the needed software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) is changing with each VCF release. With VCF 5.0 the following components and software versions are included:

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation 5 Overview

What happened to the Tanzu entitlements?

With the release of VCF 5.0, VMware plans to retire the perpetual licensing for VMware Cloud Foundation in Q3 2023.

Around the same time, we can expect that VCF will only be sold as part of the “Cloud Packs” (connected and disconnected):

VCF Cloud Pack 

As already mentioned here, customers also no longer have the option to buy “Tanzu Standard”, and existing Tanzu Standard customers can “upgrade” to “Tanzu Kubernetes Grid” (TKG) and Tanzu Mission Control (add-on).

There are several options available. Please contact your VMware representative.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means that to start with a standard architecture, you need the requirements (and money) for at least seven physical nodes.
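The node arithmetic above can be expressed as a small helper. The function name and structure are my own, for illustration; the minimums come from the requirements just described:

```python
# Minimum host counts as described above:
MGMT_DOMAIN_MIN_HOSTS = 4  # management WLD (also the consolidated-architecture minimum)
VI_DOMAIN_MIN_HOSTS = 3    # each VI workload domain

def minimum_hosts(architecture: str, vi_domains: int = 1) -> int:
    """Return the minimum number of physical nodes for a VCF deployment.

    'consolidated' runs management and customer workloads together;
    'standard' adds at least one separate VI workload domain.
    """
    if architecture == "consolidated":
        return MGMT_DOMAIN_MIN_HOSTS
    if architecture == "standard":
        return MGMT_DOMAIN_MIN_HOSTS + vi_domains * VI_DOMAIN_MIN_HOSTS
    raise ValueError(f"unknown architecture: {architecture}")

print(minimum_hosts("standard"))      # 7
print(minimum_hosts("consolidated"))  # 4
```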

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain for creating workload domains, provisioning additional virtual infrastructure, and performing lifecycle management of all software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning and decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T, and possibly some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in knowing how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Note: Please have a look at “VCF Holodeck” if you would like to create a smaller “sandbox” for testing or training purposes.

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 5.0 FAQ for more information about VMware Cloud Foundation.
VMware Tanzu Licensing – What’s New?

Last year, VMware gave the Tanzu portfolio a fairly good facelift with all the announcements from VMware Explore 2022. It is clear to me that VMware focuses on multi-cluster and multi-cloud Kubernetes management capabilities (Tanzu for Kubernetes Operations) and a superior developer experience with any Kubernetes on any cloud (Tanzu Application Platform). VMware embraces native public clouds, so it was very exciting for many customers when the company announced the lifecycle management of Amazon Elastic Kubernetes Service (EKS) clusters – the direct provisioning and management of EKS clusters with Tanzu Mission Control. But what happened in the last 6 to 9 months since VMware Explore US and Europe? And how do I get parts of the VMware Tanzu portfolio nowadays?

Tanzu Licensing

Let us start with licensing. In October 2022, VMware made it clear that it did not want to move forward with the Tanzu Basic and Advanced editions; only Tanzu Standard was left. VMware replaced Tanzu Basic with “Tanzu Kubernetes Grid” (TKG), which comes with the following components:

  • vSphere capabilities / K8s Runtime
  • K8s Cluster Lifecycle Management – Cluster API
  • Image Registry – Harbor
  • Container Networking – Antrea/Calico
  • Load Balancing – NSX Advanced Load Balancer
  • Ingress Controller – Contour
  • Observability – Fluent Bit, Prometheus, Grafana
  • Operating System – Photon OS, Ubuntu, bring-your-own node image
  • Data Protection – Velero

Note: Nothing is official yet, but according to this article intended for partners, VMware is going to announce the Tanzu Standard EOA (End of Availability) soon:

…containing updated information on Tanzu Standard entering end of availability (EOA) and the new Tanzu Kubernetes Operations and Tanzu Application Platform partner resources.

Looking at the “Tanzu Explainer” and its changelog from the 5th of May, one can find the following: “Updated to reflect new Tanzu for Kubernetes Operations SKUs“.

Tanzu for Kubernetes Operations Bundles

The Tanzu Explainer on Tech Zone lists the following new bundles/packages for Tanzu for Kubernetes Operations (TKO):

  1. Tanzu for Kubernetes Operations Foundation includes Tanzu Mission Control Advanced and Tanzu Service Mesh Advanced. Two add-on SKUs are available: one adds Antrea Advanced and Aria Operations for Applications; the other adds these plus NSX Advanced Load Balancer Enterprise. Tanzu Kubernetes Grid is not included in this bundle.
  2. Tanzu for Kubernetes Operations includes Tanzu Kubernetes Grid, Tanzu Mission Control Advanced, Tanzu Service Mesh Advanced, Antrea Advanced, and Aria Operations for Applications.
  3. Tanzu for Kubernetes Operations with NSX Advanced Load Balancer includes Tanzu Kubernetes Grid, Tanzu Mission Control Advanced, Tanzu Service Mesh Advanced, Antrea Advanced, Aria Operations for Applications, and NSX Advanced Load Balancer Enterprise.

Note: Since Tanzu Mission Control Standard (TMC) was only sold as part of the Tanzu Standard Edition, we see VMware moving forward with TMC Advanced only. Which is good! But TMC Essentials still comes with vSphere+ and VMC on AWS.

Tanzu Entitlements with vSphere and VMware Cloud Foundation Editions

What about vSphere and VMware Cloud Foundation (VCF)? Let me give you an overview here as well:

  • vSphere+ Standard – No Tanzu entitlements included
  • vSphere+ – Includes TKG and TMC Essentials
  • vSphere Enterprise+ with TKG – Includes TKG
  • VMware Cloud Foundation – All VCF editions have Tanzu Standard included

Note: We do not know yet what the Tanzu Standard EOA means for the Tanzu entitlements with VCF. We will need to wait for guidance.

VMware Cloud Packs

In April 2023, VMware introduced new bundles called VMware Cloud Packs, and they come in four different flavours:

  1. Compute with Advanced Automation. vSphere+ and Aria Universal Suite Advanced
  2. HCI. vSphere+, vSAN+ Advanced and Aria Universal Suite Standard
  3. HCI with Advanced Automation. vSphere+, vSAN+ Advanced and Aria Universal Suite Advanced
  4. VMware Cloud Foundation. vSphere+, vSAN+ Enterprise, NSX Enterprise Plus, SDDC Manager, Aria Universal Suite Enterprise, Aria Operations for Networks Enterprise add-on

In addition to these four Cloud Pack offerings, customers can get the following add-ons:

  • Data Protection & Disaster Recovery
  • Network Detection and Response
  • Tanzu Mission Control
  • Ransomware Recovery
  • Advanced Load Balancer
  • Workload and Endpoint Security
  • Intrusion Detection and Prevention
  • VDI/Desktops

Note: As you can see, all new Cloud Packs have TKG included, and TMC is an add-on. vCenter Standard is included with connected and disconnected subscriptions.

Important: Please note as well that the individual components of the bundles cannot be upgraded independently. For example, Aria Universal Suite Standard as part of the HCI Cloud Pack cannot be upgraded to Aria Universal Suite Enterprise.

    Conclusion

    VMware is clearly moving in the right direction: they want to simplify their portfolio and improve how customers can consume and subscribe to services. As always, it is going to take a while until they have figured out which bundles and product versions make sense for most customers. Be patient. 🙂


    What does VMware Cloud Disaster Recovery have in common with Dell PowerProtect?


    It was at VMware Explore Europe 2022 when I ran into a colleague from Dell who told me about “transparent snapshots” and mentioned that their solution has something in common with VMware Cloud Disaster Recovery (VCDR). After doing some research, I figured out that he was talking about the Light Weight Delta (LWD) protocol.

    Snapshots

    A snapshot captures the state of a system or virtual machine (VM) at a particular point in time and should not be considered a backup. The data of a snapshot includes all files that form a virtual machine – disks, memory, and other devices like virtual network interface cards (vNICs). To create or delete a snapshot of a VM, the VM needs to be “stunned” (its I/Os are quiesced).

    I would say it is common knowledge that a higher number of snapshots negatively impacts the I/O performance of a virtual machine. Creating snapshots results in a snapshot hierarchy with parent-to-child relationships. Every snapshot creates a delta .vmdk file and redirects all subsequent writes to this delta disk file.
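The delta-disk mechanism described above can be sketched in a few lines. This is a minimal illustration, not VMware code: each snapshot adds a delta layer, new writes land only in the newest layer, and reads walk the chain from the newest delta down to the base disk – which is why long snapshot chains hurt read performance.

```python
# Minimal sketch of a snapshot chain (illustrative, not VMware code).
# Blocks are modeled as dict keys; each layer is one delta ".vmdk".

class DiskChain:
    def __init__(self):
        self.layers = [{}]            # layers[0] is the base disk

    def snapshot(self):
        self.layers.append({})        # new empty delta; writes now go here

    def write(self, block, data):
        self.layers[-1][block] = data # redirected to the active delta

    def read(self, block):
        # walk the chain from the newest delta down to the base disk
        for layer in reversed(self.layers):
            if block in layer:
                return layer[block]
        return None

disk = DiskChain()
disk.write(0, "base")
disk.snapshot()                       # delta created
disk.write(0, "changed")              # write lands in the delta
print(disk.read(0))                   # "changed" (newest layer wins)
print(disk.layers[0][0])              # "base" (base disk untouched)
```

Note how the base disk keeps its original content: that untouched parent state is exactly what consolidation has to merge back later.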

    VMware vSphere Storage APIs for Data Protection

    Currently, a lot of backup solutions use the “VMware vSphere Storage APIs for Data Protection” (VADP), which were introduced with vSphere 4.0 in 2009. A backup product using VADP can back up VMs from a central backup server or virtual machine without requiring any backup agents. In other words, backup solutions using VADP create snapshots that are used to create backups based on the changed blocks of a disk (Changed Block Tracking, aka CBT). This delta is then written to a secondary site or storage, and the snapshot is removed afterwards.
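The CBT-style incremental flow can be sketched as follows. This is a hedged illustration of the idea only – the function names are hypothetical and this is not the real VADP API: determine which blocks changed since the last backup, then ship only that delta to secondary storage.

```python
# Illustrative sketch of CBT-style incremental backup (not the VADP API).
# A "disk" is a dict of {block: data}; the backup store holds the last copy.

def changed_blocks(disk, last_backup):
    """Return only the blocks whose content differs from the last backup."""
    return {b: d for b, d in disk.items() if last_backup.get(b) != d}

def incremental_backup(disk, backup_store):
    delta = changed_blocks(disk, backup_store)
    backup_store.update(delta)        # ship only the delta to secondary storage
    return delta

disk = {0: "A", 1: "B", 2: "C"}
store = {}
incremental_backup(disk, store)       # first run copies everything
disk[1] = "B2"                        # one block changes after the backup
delta = incremental_backup(disk, store)
print(delta)                          # {1: "B2"} – only the changed block moved
```

The point of CBT is the second run: instead of re-reading the whole disk, only the tracked changes cross the wire.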

    Deleting a snapshot consolidates the changes between snapshots and previous disk states. Then it writes all the data from the delta disk that contains the information about the deleted snapshot to the parent disk. When you delete the base parent snapshot, all changes merge with the base virtual machine disk.

    To delete a snapshot, a large amount of information must be read and written to a disk. This process can reduce the virtual machine performance until the consolidation is complete.
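The consolidation cost described above can be made concrete with a small sketch (illustrative only, with hypothetical names): every block recorded in the delta must be read and written back into the parent disk, so the I/O cost grows with the amount of change captured since the snapshot was taken.

```python
# Sketch of snapshot deletion/consolidation: the delta's blocks are
# merged back into the parent disk, which is the I/O-heavy step.

def consolidate(parent, delta):
    io_ops = 0
    for block, data in delta.items():
        parent[block] = data          # write the delta block into the parent
        io_ops += 2                   # one read from delta, one write to parent
    delta.clear()                     # the delta disk is removed afterwards
    return io_ops

base = {0: "base", 1: "base"}
delta = {0: "new"}                    # changes captured since the snapshot
ops = consolidate(base, delta)
print(base)                           # {0: "new", 1: "base"}
```

A long-lived snapshot accumulates a large delta, which is exactly why deleting it can degrade VM performance until the merge completes.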

    VMware Cloud Disaster Recovery (VCDR)

    In 2020, VMware announced the general availability of VMware Cloud Disaster Recovery based on technology from their Datrium acquisition. This new solution extended the current VMware disaster recovery (DR) solutions like VMware Site Recovery, Site Recovery Manager, and Cloud Provider DR solutions.

    VMware Cloud Disaster Recovery is a VMware-delivered disaster recovery as a service (DRaaS) offering that protects on-premises vSphere and VMware Cloud on AWS workloads to VMware Cloud on AWS from both disasters and ransomware attacks. It efficiently replicates VMs to a Scale-out Cloud File System (SCFS) that can store hundreds of recovery points with recovery point objectives (RPOs) as low as 30 minutes. This enables recovery for a wide variety of disasters including ransomware. Virtual machines are recovered to a software-defined data center (SDDC) running in VMware Cloud on AWS. VMware Cloud Disaster Recovery also offers fail-back capabilities to bring your workloads back to their original location after the disaster is remediated.

    VMware Cloud DR Architecture

    Note: Currently, VCDR is only available as an add-on feature to VMware Cloud on AWS. The support for Azure VMware Solution is expected to come next.

    To me, VCDR is one of the best solutions from the whole VMware portfolio.

    High-Frequency Snapshots (HFS)

    One of the differentiators and game-changers is the so-called high-frequency snapshot, which is based on the Light Weight Delta (LWD) technology that VMware developed. HFS allows customers to schedule recurring snapshots every 30 minutes, meaning that customers can get a recovery point objective (RPO) of 30 minutes!

    To enable and use high-frequency snapshots, your environment must be running on vSphere 7.0 U3 or higher.

    With HFS and LWD, there is no Changed Block Tracking (CBT), no VADP, and no VM stun. This results in better performance when maintaining these deltas.

    Transparent Snapshots by Dell EMC PowerProtect Data Manager (PPDM)

    At VMworld 2021, Dell Technologies presented a session called “Protect Your Virtual Infrastructure with Drastically Less Disruption [SEC2764S]” which was about “transparent snapshots” – image backups with near-zero impact on virtual machines, without the need to pause the VM during the backup process. No more backup proxies, no more agents.

    Dell Transparent Snapshot Architecture

    As with HFS and VCDR, your environment needs to run on vSphere 7.0 U3 or higher.

    How does it work?

    PowerProtect Data Manager transparent snapshots use the vSphere APIs for I/O Filtering (VAIO) framework. The transparent snapshots data mover (TSDM) is deployed in the VMware ESXi infrastructure through a PowerProtect Data Manager VIB. This deployment creates consistent VM backup copies and writes the copies to the protection storage (a PowerProtect appliance).

    Once this VIB (the Data Protection Daemon (DPD), which is part of the VMware ESXi 7.0 U3 and later images) has been installed on the ESXi host, it tracks the delta changes in memory and then transfers them directly to the protection storage.
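The in-memory tracking approach can be sketched like this. All names here are hypothetical – this is not Dell's or VMware's API, just the concept: a filter in the I/O path marks dirty blocks as writes pass through, so the backup can later ship only those blocks, with no snapshot, no CBT scan, and no VM stun.

```python
# Conceptual sketch of in-memory delta tracking in the I/O path
# (hypothetical names; not the TSDM/VAIO API).

class IOFilter:
    def __init__(self, disk):
        self.disk = disk
        self.dirty = set()            # changed blocks, tracked in memory only

    def write(self, block, data):
        self.disk[block] = data       # the write passes straight through
        self.dirty.add(block)         # the delta is recorded as a side effect

    def flush_delta(self, protection_storage):
        # transfer only the dirty blocks directly to protection storage
        for block in sorted(self.dirty):
            protection_storage[block] = self.disk[block]
        self.dirty.clear()

disk = {0: "a", 1: "b"}
filt = IOFilter(disk)
filt.write(0, "a2")                   # tracked without stunning the "VM"
target = {}
filt.flush_delta(target)
print(target)                         # {0: "a2"}
```

Because the delta already exists in memory when the backup starts, there is no snapshot to create or consolidate afterwards.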

    VMware Data Protection Daemon

    Note: PPDM also provides image backup and restore support for VMware Cloud on AWS and Azure VMware Solution, but requires VADP.

    Light Weight Delta (LWD)

    It seems that LWD was developed by VMware, but there is no publicly available information out there yet. I only found this screenshot as part of this Dell article:

    VMware Light Weight Delta

    It also seems that Dell is/was the first partner that could leverage the LWD protocol exclusively, but I am sure it will be made available to other VMware partners as well.

    10 More Things You Didn’t Know About vSphere+


    A few months ago I wrote the article 10 Things You Didn’t Know About vSphere+, which gives you a good overview of vSphere+ and VCF+, plus some information about licensing. A few things have changed or been added since then, and I would like to share some of that information with you.

    1) vSphere+ Standard Edition

    Some customers only need the feature set of vSphere Standard but were very interested in having the benefits that come with the (VMware) cloud connectivity. VMware listened to its customers and introduced vSphere+ Standard back in December 2022. What is included?

    • vSphere Standard features
    • vCenter Standard (unlimited number of deployments)
    • Admin Services (Cloud Console)

    2) vSAN+ Standard and Advanced Edition

    To mirror the vSAN perpetual license editions, VMware released vSAN+ Standard and vSAN+ Advanced in December 2022 as well.

    3) Grace Period when moving from perpetual to subscription licensing

    Customers need to move their existing perpetual licenses within 90 days to vSphere+/vSAN+, see here.

    If Customer receives its entitlement to vSphere+ or vSAN+ through a VMware subscription upgrade program, then Customer must, within 90 days after purchase of the entitlement, relinquish its entitlements to any relevant vSphere or vSAN on-premises perpetual licenses (as applicable) that were exchanged through the subscription upgrade program (“Exchanged Licenses”).

    5) What if I don’t renew my vSphere+/vSAN+ subscription?

    You will be out of compliance, but your environment will still work. However, you will no longer receive support from VMware Global Support during that time.

    6) Which data is transmitted to VMware Cloud?

    According to this article, the following data is transmitted:

    • vCenter Server Inventory (transmission frequency: 24h)
    • Log Data (transmission frequency: continuous)
    • Performance Data (transmission frequency: 5min)
    • Consumption Data (transmission frequency: 15min)
    • Feature Usage (transmission frequency: 5min)
    • Entitlement (transmission frequency: as necessary)

    7) Aria Universal Suite & vSphere+ (vCloud Suite+)

    The subscription version of vCloud Suite is vCloud Suite+ (vCS+). vCS+ also comes in three editions: Standard, Advanced, and Enterprise.

    vCloud Suite+ Editions 2023

    8) What about VMware Horizon and vSphere+?

    If you are using vSphere (for Desktop) that came as a bundle with VMware Horizon, then vSphere cannot be upgraded to vSphere+. Consult the product interoperability matrix for more information. If you are using Horizon as a standalone product on top of vSphere+, I don’t see any issues.

    9) What are vSphere+ add-on services?

    Currently, vSphere+ comes with a centralized cloud console that provides consolidated management of all vSphere+ deployments. Customers also get the Cloud Consumption Interface (CCI) and Tanzu Mission Control Essentials as part of vSphere+.

    Add-On #1: Aria Operations

    vSphere+ vROps Add-On

    Powered by Aria Operations (formerly known as vRealize Operations), vSphere+ provides an overview of the resource usage of all the clusters associated with the vCenter Server instances that are connected to your vCenter Cloud Gateway(s). You can monitor and analyze details such as hosts, cores, VMs, and remaining capacity on each cluster. You can also get a view of the number of days remaining until the cluster reaches its usable capacity.

    Add-On #2: VMware Cloud Disaster Recovery (VCDR)

    vSphere+ VCDR Add-On

    You can protect VMs and manage their protection status directly from the VMware Cloud Console if you have a VCDR subscription.

    Future Add-Ons

    Without making any commitment or knowing the vSphere+ roadmap, it seems that VMware is going to bring parts of the VMware Data Services portfolio as an add-on service. More information can be found here.

    10) Counting Cores for vSphere+ and vSAN+ Licensing

    VMware has created a tool to identify the number of core licenses required to upgrade an existing vSphere/vSAN deployment to vSphere+/vSAN+. William Lam has created two blog posts that should help you use the script: