What does VMware Cloud Disaster Recovery have in common with Dell PowerProtect?


It was at VMware Explore Europe 2022 when I ran into a colleague from Dell who told me about “transparent snapshots” and mentioned that their solution has something in common with VMware Cloud Disaster Recovery (VCDR). After doing some research, I figured out that he was talking about the Light Weight Delta (LWD) protocol.

Snapshots

Snapshots are states of a system or virtual machine (VM) at a particular point in time and should not be considered a backup. The data of a snapshot includes all files that form a virtual machine – disks, memory, and other devices such as virtual network interface cards (vNICs). To create or delete a snapshot of a VM, the VM needs to be “stunned” (its I/O quiesced).

I would say it is common knowledge that a higher number of snapshots negatively impacts the I/O performance of a virtual machine. Creating snapshots results in a snapshot hierarchy with parent-to-child relationships. Every snapshot creates a delta .vmdk file and redirects all subsequent writes to this delta disk file.

VMware vSphere Storage APIs for Data Protection

Currently, a lot of backup solutions use the “VMware vSphere Storage APIs for Data Protection” (VADP), which were introduced with vSphere 4.0, released in 2009. A backup product using VADP can back up VMs from a central backup server or virtual machine without requiring any backup agents. In other words, backup solutions using VADP create snapshots and then build backups based on the changed blocks of a disk (Changed Block Tracking, aka CBT). This delta is then written to a secondary site or storage target, and the snapshot is removed afterward.
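To make this flow a bit more concrete, below is a minimal pyVmomi sketch (not taken from any specific backup product) that enables Changed Block Tracking on a VM, takes a snapshot, queries the changed disk areas, and removes the snapshot again. The vCenter address, credentials, and the VM name are placeholders.

```python
# Minimal pyVmomi sketch of the VADP/CBT flow: enable Changed Block Tracking,
# take a snapshot, query which disk areas changed, then delete the snapshot.
# vCenter address, credentials, and the VM name "demo-vm" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name using a container view
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "demo-vm")

# 1) Enable CBT so vSphere starts recording which blocks change
if not vm.config.changeTrackingEnabled:
    WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(changeTrackingEnabled=True)))

# 2) Create a snapshot -- this is the short "stun" moment described above
WaitForTask(vm.CreateSnapshot_Task(name="backup", description="VADP-style snapshot",
                                   memory=False, quiesce=True))
snapshot = vm.snapshot.currentSnapshot

# 3) Ask vSphere which disk areas changed. "*" returns all allocated areas;
#    a real backup product would pass the changeId it recorded from its previous
#    run to receive only the delta since the last backup.
disk = next(d for d in snapshot.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
changes = vm.QueryChangedDiskAreas(snapshot=snapshot, deviceKey=disk.key,
                                   startOffset=0, changeId="*")
for area in changes.changedArea:
    print(f"changed area: offset={area.start} length={area.length}")

# 4) Remove the snapshot, which triggers the consolidation described below
WaitForTask(snapshot.RemoveSnapshot_Task(removeChildren=False))
Disconnect(si)
```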

Deleting a snapshot consolidates the changes between snapshots and previous disk states. Then it writes all the data from the delta disk that contains the information about the deleted snapshot to the parent disk. When you delete the base parent snapshot, all changes merge with the base virtual machine disk.

To delete a snapshot, a large amount of information must be read and written to a disk. This process can reduce the virtual machine performance until the consolidation is complete.

VMware Cloud Disaster Recovery (VCDR)

In 2020, VMware announced the general availability of VMware Cloud Disaster Recovery based on technology from their Datrium acquisition. This new solution extended the current VMware disaster recovery (DR) solutions like VMware Site Recovery, Site Recovery Manager, and Cloud Provider DR solutions.

VMware Cloud Disaster Recovery is a VMware-delivered disaster recovery as a service (DRaaS) offering that protects on-premises vSphere and VMware Cloud on AWS workloads to VMware Cloud on AWS from both disasters and ransomware attacks. It efficiently replicates VMs to a Scale-out Cloud File System (SCFS) that can store hundreds of recovery points with recovery point objectives (RPOs) as low as 30 minutes. This enables recovery for a wide variety of disasters including ransomware. Virtual machines are recovered to a software-defined data center (SDDC) running in VMware Cloud on AWS. VMware Cloud Disaster Recovery also offers fail-back capabilities to bring your workloads back to their original location after the disaster is remediated.

VMware Cloud DR Architecture

Note: Currently, VCDR is only available as an add-on feature to VMware Cloud on AWS. The support for Azure VMware Solution is expected to come next.

To me, VCDR is one of the best solutions from the whole VMware portfolio.

High-Frequency Snapshots (HFS)

One of the differentiators and game-changers is the so-called high-frequency snapshot, which is based on the Light Weight Delta (LWD) technology that VMware developed. Using HFS allows customers to schedule recurring snapshots every 30 minutes, meaning customers can achieve a recovery point objective (RPO) of 30 minutes!

To enable and use high-frequency snapshots, your environment must be running on vSphere 7.0 U3 or higher.

With HFS and LWD, there is no Changed Block Tracking (CBT), no VADP, and no VM stun. This results in better performance when maintaining these deltas.

Transparent Snapshots by Dell EMC PowerProtect Data Manager (PPDM)

At VMworld 2021, Dell Technologies presented a session called “Protect Your Virtual Infrastructure with Drastically Less Disruption [SEC2764S]” which was about “transparent snapshots” – image backups with near-zero impact on virtual machines, without the need to pause the VM during the backup process. No more backup proxies, no more agents.

Dell Transparent Snapshot Architecture

As with HFS and VCDR, your environment needs to run on vSphere 7.0 U3 and higher.

How does it work?

PowerProtect Data Manager transparent snapshots use the vSphere APIs for I/O Filtering (VAIO) framework. The transparent snapshots data mover (TSDM) is deployed in the VMware ESXi infrastructure through a PowerProtect Data Manager VIB. This deployment creates consistent VM backup copies and writes the copies to the protection storage (PowerProtect appliance).

Once this VIB has been installed on the ESXi host, it works with the Data Protection Daemon (DPD), which is part of the VMware ESXi 7.0 U3 (and later) image, to track the delta changes in memory and then transfer them directly to the protection storage.

VMware Data Protection Daemon

Note: PPDM also provides image backup and restore support for VMware Cloud on AWS and Azure VMware Solution, but requires VADP.

Light Weight Delta (LWD)

It seems that LWD was developed by VMware, but there is no publicly available information about it yet. I only found this screenshot as part of this Dell article:

VMware Light Weight Delta

It also seems that Dell was the first vendor able to leverage the LWD protocol exclusively, but I am sure it will be made available to other VMware partners as well.

VMware Cloud on Equinix Metal – The New Intercloud?


It was in November 2022 that VMware and Equinix announced an expanded partnership to deliver new infrastructure and multi-cloud services. Called VMware Cloud on Equinix Metal, this solution combines VMware Cloud Infrastructure-as-a-Service (IaaS) with Equinix Metal Hardware-as-a-Service (HWaaS), sold independently of each other: the SDDC (software-defined data center) stack is sold by VMware, and the hardware is sold by Equinix. Looking at this partnership and solution, one could say that Equinix might become “the” intercloud in this multi-cloud era.

What is VMware Cloud on Equinix Metal (VMC-E)?

VMC-E combines VMware’s managed and supported cloud IaaS with Equinix’s bare-metal-as-a-service (BMaaS) offering. This gives enterprises the advantage of running this cloud offering almost anywhere in the world. Another benefit is that VMC-E will be available in over 30 of the most interconnected global Equinix locations, connected to all the major public clouds and networks (Equinix Fabric).

Equinix Multi-Cloud App

What is Equinix Fabric?

This service allows organizations to connect to other Equinix customers and other internet resources like service providers:

  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud
  • Oracle Cloud
  • Alibaba Cloud
  • IBM Cloud
  • and many more

For me, Equinix Fabric is an interesting way to interconnect different VMware-based Clouds like VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine, Alibaba Cloud VMware Solution, or Oracle Cloud VMware Solution.

VMC-E for multi-cloud apps?

A lot of enterprises are not “cloud-first” anymore; they have become “cloud-smart”. They put the right apps in the right cloud for the right reasons.

VMware Cloud-Smart

VMC-E has the potential to become a true multi-cloud enabler by letting VMware and Equinix customers move their applications to an ideal place. Imagine lifting and shifting a legacy application to VMC-E. This application then sits in the middle of all major clouds and customers can use different services and components for the same application. This is my definition of a multi-cloud app.

Multi-Cloud App on VMC-E

What are the use cases?

VMware and Equinix mention distributed environments and mission-critical applications that rely on high-performance network bandwidth and low latency, such as smart cities, video analytics, game development, VDI, real-time financial market trading, retail POS, IoT, and machine learning.

Which hosts are available?

VMware Cloud on Equinix Metal comes with multiple host configurations that can be found here. It is not yet clear which host type(s) will be available at the initial launch, but the tech preview on YouTube shows the “n3.xlarge.x86” instance type.
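If you want to browse the available Equinix Metal instance types yourself, the public Equinix Metal API can list them. Here is a rough Python sketch; the v1 /plans endpoint and X-Auth-Token header reflect the public Equinix Metal API as I understand it, the token is a placeholder, and the response field names should be verified against the current API reference.

```python
# Rough sketch: list Equinix Metal server plans (e.g. n3.xlarge.x86) via the
# public Equinix Metal v1 API. The token is a placeholder and the response
# field names ("plans", "slug", "name") should be verified against the docs.
import requests

API = "https://api.equinix.com/metal/v1"
TOKEN = "<your-equinix-metal-api-token>"  # placeholder

resp = requests.get(f"{API}/plans", headers={"X-Auth-Token": TOKEN}, timeout=30)
resp.raise_for_status()

for plan in resp.json().get("plans", []):
    # "slug" is the short instance-type name, e.g. n3.xlarge.x86
    print(plan.get("slug"), "-", plan.get("name"))
```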

Tech Preview VMware Cloud on Equinix Metal - YouTube

How can I get VMC-E?

VMC-E is currently in an early access phase for selected customers in H1 2023.

Tech Preview VMware Cloud on Equinix Metal

Where can I get more information?

To learn more and to participate in the early access program for VMware Cloud on Equinix Metal, please email your interest to  .

10 More Things You Didn’t Know About vSphere+


A few months ago I wrote the article 10 Things You Didn’t Know About vSphere+, which gives you a good overview of vSphere+ and VCF+, and some information about licensing. A few things have changed and been added since then and I would like to share some of the information with you.

1) vSphere+ Standard Edition

Some customers only need the feature set of vSphere Standard but are very interested in the benefits that come with (VMware) cloud connectivity. VMware listened to its customers and introduced vSphere+ Standard back in December 2022. What is included?

  • vSphere Standard features
  • vCenter Standard (unlimited number of deployments)
  • Admin Services (Cloud Console)

2) vSAN+ Standard and Advanced Edition

To mirror the vSAN perpetual license editions, VMware released vSAN+ Standard and vSAN+ Advanced in December 2022 as well.

3) Grace Period when moving from perpetual to subscription licensing

Customers need to move their existing perpetual licenses to vSphere+/vSAN+ within 90 days; see here.

If Customer receives its entitlement to vSphere+ or vSAN+ through a VMware subscription upgrade program, then Customer must, within 90 days after purchase of the entitlement, relinquish its entitlements to any relevant vSphere or vSAN on-premises perpetual licenses (as applicable) that were exchanged through the subscription upgrade program (“Exchanged Licenses”).

5) What if I don’t renew my vSphere+/vSAN+ subscription?

You will be out of compliance, but your environment will continue to work. However, you will no longer receive support from VMware Global Support during that time.

6) Which data is transmitted to VMware Cloud?

According to this article, the following data is transmitted:

  • vCenter Server Inventory (transmission frequency: 24h)
  • Log Data (transmission frequency: continuous)
  • Performance Data (transmission frequency: 5min)
  • Consumption Data (transmission frequency: 15min)
  • Feature Usage (transmission frequency: 5min)
  • Entitlement (transmission frequency: as necessary)

7) Aria Universal Suite & vSphere+ (vCloud Suite+)

The subscription version of vCloud Suite is vCloud Suite+ (vCS+). vCS+ also comes in three editions: Standard, Advanced, and Enterprise.

vCloud Suite+ Editions 2023

8) What about VMware Horizon and vSphere+?

If you are using vSphere (for Desktop) that came as a bundle with VMware Horizon, then vSphere cannot be upgraded to vSphere+. Consult the product interoperability matrix for more information. If you are using Horizon as a standalone product on top of vSphere+, I don’t see any issues.

9) What are vSphere+ add-on services?

Currently, vSphere+ comes with a centralized cloud console that provides consolidated management of all vSphere+ deployments. Customers also get the Cloud Consumption Interface (CCI) and Tanzu Mission Control Essentials as part of vSphere+.

Add-On #1: Aria Operations

vSphere+ vROps Add-On

Powered by Aria Operations (formerly known as vRealize Operations), vSphere+ provides an overview of the resource usage of all the clusters associated with the vCenter Server instances that are connected to your vCenter Cloud Gateway(s). You can monitor and analyze details such as hosts, cores, VMs, and remaining capacity on each cluster. You can also get a view of the number of days remaining until the cluster reaches its usable capacity.

Add-On #2: VMware Cloud Disaster Recovery (VCDR)

vSphere+ VCDR Add-On

You can protect VMs and manage their protection status directly from the VMware Cloud Console if you have a VCDR subscription.

Future Add-Ons

Without making any commitment or knowing the vSphere+ roadmap, it seems that VMware is going to bring parts of the VMware Data Services portfolio as add-on services. More information can be found here.

10) Counting Cores for vSphere+ and vSAN+ Licensing

VMware has created a tool to identify the number of core licenses that are required to upgrade an existing vSphere/vSAN deployment to vSphere+/vSAN+. William Lam has created two blog posts that should help you use the script:
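If you only want a quick ballpark figure before running the official script, a small pyVmomi sketch like the one below can sum the physical cores per host and apply the per-CPU licensing minimum. The 16-cores-per-CPU minimum, the vCenter address, and the credentials are assumptions/placeholders; use William Lam's script for authoritative numbers.

```python
# Rough estimate of vSphere+ subscription cores: sum physical cores per host and
# apply an assumed 16-cores-per-CPU minimum. Connection details are placeholders;
# use the official counting script for authoritative numbers.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
total = 0
for host in view.view:
    cpu = host.hardware.cpuInfo
    cores_per_cpu = cpu.numCpuCores // cpu.numCpuPackages
    licensed = max(cores_per_cpu, 16) * cpu.numCpuPackages  # assumed 16-core minimum per CPU
    print(f"{host.name}: {cpu.numCpuPackages} CPUs x {cores_per_cpu} cores -> {licensed} licensed cores")
    total += licensed

print(f"Estimated core licenses required: {total}")
Disconnect(si)
```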

 

VMware vSphere – The Enterprise Data Platform


The world is creating and consuming more data than ever. There are multiple reasons that can explain this trend. Data creates the foundation for many digital products and services. And we read more and more about companies that want or need to keep their data on-premises because of reasons like data proximity, performance, data privacy, data sovereignty, data security, and predictable cost control. We also know that the edge is growing much faster than large data centers. These and other factors are the reasons why CIOs and decision-makers are now focusing on data more than ever before.

We live in a digital era where data is one of the most valuable assets. The whole economy from the government to local companies would not be able to function without data. Hence, it makes sense to structure and analyze the data, so a company’s data infrastructure becomes a profit center and is not just seen as a cost center anymore.

Data Sprawl

A lot of enterprises are confronted with the so-called data sprawl. Data sprawl means that an organization’s data is stored on and consumed by different devices and operating systems in different locations. There are cases where the consumers and the IT teams are not sure anymore where some of the data is stored and how it should be accessed. This is a huge risk and results in a loss of security and productivity.

Since the discussions about sovereign clouds and data sovereignty have started, it has never been more important where a company’s data resides, and where and how one can consume that data.

Enterprises have started to follow a cloud-smart approach: they put the right application and its data in the right cloud, for the right reasons. In other words, they now think twice about where and how they store their data.

What databases are popular?

When talking to developers and IT teams, I mostly hear the following names (in no particular order):

  • Oracle
  • MSSQL
  • MySQL
  • PostgreSQL

I think it is fair to say that a lot of customers are looking for alternatives to reduce spending on expensive databases and database management systems (DBMS). It seems that Postgres and MySQL have earned a lot of popularity over the past years, while Oracle is still considered one of the best databases on the market – even though it is also seen as one of the most expensive and least liked solutions. But I also hear other solutions like MongoDB, MariaDB, and Redis mentioned in more and more discussions.

DBaaS and Public Cloud Characteristics

It is nothing new: Developers are looking for a public-cloud-like experience for their on-premises deployments. They want an easy and smooth self-service experience without the need for opening tickets and waiting for several days to get their database up and running. And we also know that open-source and freedom of choice are becoming more important to companies and their developers. Some of the main drivers here are costs and vendor lock-in.

IT teams, on the other side, want to provide security and compliance, more standardization around versions and types, and an easy way to back up and restore databases. But the truth is that a lot of companies struggle to provide this kind of Database-as-a-Service (DBaaS) experience to their developers.

The idea and expectation of DBaaS is to reduce management and operational effort, with the possibility to easily scale databases up and down. The difference between a public cloud DBaaS offering and your on-premises data center infrastructure is the underlying physical and virtual platform.

On-premises, it could theoretically be any hardware, but VMware vSphere is still the most used virtualization platform for an enterprise’s data (center) infrastructure.

VMware vSphere and Data

VMware shared that telemetry from its customer base shows that almost 25% of VMware workloads are data workloads (databases, data warehouses, big data analytics, data queueing, and caching), and it looks like MS SQL Server still has the biggest share of all databases hosted on-premises.

VMware is also seeing high double-digit growth (approx. 70-90%) for MySQL and steady growth for PostgreSQL. Rank 4 is probably Redis, followed by MongoDB.

VMware Data Solutions

VMware Data Solutions, formerly known as Tanzu Data Services, is a powerful part of the entire VMware portfolio and consists of:

  • VMware GemFire – Fast, consistent data for web-scaling concurrent requests fulfills the promise of highly responsive applications.
  • VMware RabbitMQ – A fast, dependable enterprise message broker that provides reliable communication among servers, apps, and devices.
  • VMware Greenplum – A massively parallel processing database based on open-source Postgres, enabling data warehousing, aggregation, AI/ML, and extreme query speed.
  • VMware SQL – VMware’s open-source SQL database offering (Postgres & MySQL), a relational database service providing cost-efficient and flexible deployments on demand and at scale. Available on any cloud, anywhere.
  • VMware Data Services Manager – Reduce operational costs and increase developer agility with VMware Data Services Manager, the modern platform to manage and consume databases on vSphere.

VMware Data Services Manager and VMware SQL

VMware SQL allows customers to deploy curated versions of PostgreSQL and MySQL, and Data Services Manager (DSM) is the solution that enables customers to create the DBaaS experience their developers are looking for.

VMware DSM Personas

Data Services Manager has the following key features:

  • Provisioning – Provision different configurations of databases (MySQL, Postgres, and SQL Server) with either freely configurable or pre-defined sizing of compute and memory resources, depending on user permissions
  • Backup & Restore – Backup, Transactional log, Point in Time Recovery (PiTR), on-demand or as scheduled
  • Scaling – Modify instances depending on usage (scale up, scale down, disk extension)
  • Replication – Replicate (Cold/Hot or Read Replicas) across managed zones
  • Monitoring – Monitor database engine, vSphere infrastructure, networking, and more.

…and supports the following components and versions (with DSM v1.4):

  • MySQL 8.0.30
  • Postgres 10.23.0, 11.18.0, 12.13.0, 13.9.0
  • MSSQL Server 2019 (Standard, Developer, Enterprise Edition)

Companies with a lot of databases now have at least one way to manage, control, and secure Postgres, MySQL, and MSSQL database instances from a centralized tool that can be accessed via the UI or API.

Project Moneta

VMware’s vision is to become the cloud platform of choice. What started with compute, storage, and network continues with data: making it as easy to consume as the rest of their software-defined data center stack.

VMware has started with DSM and sees Moneta, which is still an R&D project, as the next evolution. The focus of Moneta is to bring better self-service and programmatic consumption capabilities (e.g., integration with GitHub).

Project Moneta will provide native integration with vSphere+ and the Cloud Consumption Interface (CCI). While nothing is official yet, I think of it as a vSphere+ and VMware Cloud add-on service that would provide data infrastructure capabilities. 

Final Words

If your developers want to use PostgreSQL, MySQL, and MSSQL, and if your IT struggles to deploy, manage, secure, and back up those databases, then DSM with VMware SQL can help. Both solutions are also perfectly suited for disconnected use cases or air-gapped environments.

Note: The DB engines are certified, tested and supported by VMware.

Share Your Opinion – Cross-Cloud Mobility and Application Portability


Do you have an opinion about cross-cloud mobility and application portability? If yes, what about it is important to you? How do you intend to achieve this kind of cloud operating model? Is it about flexibility or more about a cloud-exit strategy? Just because we can, does it mean we should? Will it ever become a reality? These are just some of the questions I am looking to answer.

Contact me via michael.rebmann@cloud13.ch. You can also reach me on LinkedIn.

I am writing a book about this topic and looking for cloud architects and decision-makers who would like to sit down with me via Zoom or MS Teams to discuss the challenges of multi-cloud and how to achieve workload mobility or application/data portability. I just started interviewing chief architects, CTOs and cloud architects from VMware, partners, customers and public cloud providers (like Microsoft, AWS and Google) as part of my research.

The questions below led me to the book idea.

What is Cross-Cloud Mobility and Application Portability about? 

Cross-cloud mobility refers to the ability of an organization to move its applications and workloads between different cloud computing environments. This is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers, such as access to a wider range of services and features, and the ability to negotiate better terms and pricing.

To achieve cross-cloud mobility, organizations need to use technologies and approaches that are compatible with multiple cloud environments. This often involves using open standards and APIs, as well as adopting a microservices architecture and containerization, which make it easier to move applications and workloads between different clouds.

Another key aspect of cross-cloud mobility is the ability to migrate data between different clouds without losing any of its quality or integrity. This requires the use of robust data migration tools and processes, as well as careful planning and testing to ensure that the migrated data is complete and accurate.

In addition to the technical challenges of achieving cross-cloud mobility, there are also organizational and business considerations. For example, organizations need to carefully evaluate their use of different cloud providers, and ensure that they have the necessary contracts and agreements in place to allow for the movement of applications and workloads between those providers.

Overall, cross-cloud mobility is an important capability for organizations that want to take advantage of the benefits of using multiple cloud providers. By using the right technologies and approaches, organizations can easily and securely move their applications (application portability) and workloads between different clouds, and take advantage of the flexibility and scalability of the cloud.

What is a Cloud-Exit Strategy?

A cloud-exit strategy is a plan for transitioning an organization’s applications and workloads away from a cloud computing environment. This can be necessary for a variety of reasons, such as when an organization wants to switch to a different cloud provider, when it wants to bring its applications and data back in-house, or when it simply no longer needs to use the cloud. A cloud-exit strategy typically includes several key components, such as:

  1. Identifying the specific applications and workloads that will be transitioned away from the cloud, and determining the timeline for the transition.
  2. Developing a plan for migrating the data and applications from the cloud to the new environment, including any necessary data migration tools and processes.
  3. Testing the migration process to ensure that it is successful and that the migrated applications and data are functioning properly.
  4. Implementing any necessary changes to the organization’s network and infrastructure to support the migrated applications and data.
  5. Ensuring that the organization has a clear understanding of the costs and risks associated with the transition, and that it has a plan in place to mitigate those risks.

By having a well-defined cloud-exit strategy, organizations can ensure that they are able to smoothly and successfully transition away from a cloud computing environment when the time comes.

What is a Cloud-Native Application?

A cloud-native application is a type of application that is designed to take advantage of the unique features and characteristics of cloud computing environments. This typically includes using scalable, distributed, and highly available components, as well as leveraging the underlying infrastructure of the cloud to deliver a highly performant and resilient application. Cloud-native applications are typically built using a microservices architecture, which allows for flexibility and scalability, and are often deployed using containers to make them portable across different cloud environments.

Does Cloud-Native mean an application needs to perform equally well on any cloud?

No, being cloud-native does not necessarily mean that an application will perform equally well on any cloud. While cloud-native applications are designed to be portable and scalable, the specific cloud environment in which they are deployed can still have a significant impact on their performance and behavior.

For example, some cloud providers may offer specific services or features that can be leveraged by a cloud-native application to improve its performance, while others may not. Additionally, the underlying infrastructure of different cloud environments can vary, which can affect the performance and availability of a cloud-native application. As a result, it is important for developers to carefully consider the specific cloud environment in which their cloud-native application will be deployed, and to optimize its performance for that environment.

How can you avoid a cloud lock-in?

A cloud lock-in refers to a situation where an organization becomes dependent on a particular cloud provider and is unable to easily switch to a different provider without incurring significant costs or disruptions. To avoid a cloud lock-in, organizations can take several steps, such as:

  1. Choosing a cloud provider that offers tools and services that make it easy to migrate to a different provider, such as data migration tools and APIs for integrating with other cloud services.
  2. Adopting a multi-cloud strategy, where the organization uses multiple cloud providers for different workloads or applications, rather than relying on a single provider.
  3. Ensuring that the organization’s applications and data are portable, by using open standards and technologies that are supported by multiple cloud providers.
  4. Regularly evaluating the organization’s use of cloud services and the contracts with its cloud provider, to ensure that it is getting the best value and flexibility.
  5. Developing a cloud governance strategy that includes processes and policies for managing the organization’s use of cloud services, and ensuring that they align with the organization’s overall business goals and objectives.

By taking these steps, organizations can avoid becoming overly dependent on a single cloud provider and maintain the flexibility to switch to a different provider if needed.

Final Words

Multi-Cloud is very complex and has different layers like compute, storage, network, security, monitoring and observability, operations, and cost management. Add topics like open-source software, databases, Kubernetes, developer experience, and automation to the mix, and we will most probably have enough to discuss. 🙂

Looking forward to hearing from you! 

VMware Cloud Foundation – A Technical Overview (based on VCF 4.5)


 

Update: Please follow this link to get to the updated version with VCF 5.0.

This technical overview supersedes this version, which was based on VMware Cloud Foundation 4.3, and now covers all capabilities and enhancements that were delivered with VCF 4.5.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) that is made for modernizing data centers and deploying modern container-based applications. VCF is based on different components like vSphere (compute), vSAN (storage), NSX (networking), and some parts of the Aria Suite (formerly vRealize Suite). The idea of VCF follows a standardized, automated, and validated approach that simplifies the management of all the needed software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

Tanzu Standard edition is included in the VMware Cloud Foundation with Tanzu Standard, Advanced, and Enterprise editions.

Note: The VMware Cloud Foundation Starter, Standard, Advanced and Enterprise editions do NOT include Tanzu Standard.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. With VCF 4.5, the following components and software versions are included:

  • VMware SDDC Manager 4.5
  • vSphere 7.0 Update 3g
  • vCenter Server 7.0 Update 3h
  • vSAN 7.0 Update 3g
  • NSX-T 3.2.1.2
  • VMware Workspace ONE Access 3.3.6
  • vRealize Log Insight 8.8.2
  • vRealize Operations 8.8.2
  • vRealize Automation 8.8.2
  • (vRealize Network Insight)

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation Components

What is VMware Cloud Foundation+ (VCF+)?

With the launch of VMware Cloud Foundation (VCF) 4.5 in early October 2022, VCF introduced new consumption and licensing models.

VCF+ is the next cloud-connected SaaS product offering, which builds on vSphere+ and vSAN+. VCF+ delivers cloud connectivity to centralize management and a new consumption-based OPEX model to consume VMware Cloud services.

VMware Cloud Foundation Consumption Models

VCF+ components are cloud entitled, metered, and billed. There are no license keys in VCF+. Once the customer is onboarded to VCF+, the components are entitled from the cloud and periodically metered and billed.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX (term license)
  • SDDC Manager
  • Aria Universal Suite (formerly vRealize Cloud Universal aka vRCU)
  • Tanzu Standard
  • vCenter (included as part of vSphere+)

Note: In a given VCF+ instance, you can only have VCF+ licensing, you cannot mix VCF-S (term) and VCF perpetual licenses with VCF+.

What are other VCF subscription offerings?

VMware Cloud Foundation Subscription (VCF-S) is an on-premises (disconnected) term subscription offer that is available as a standalone offer using a physical-core metric and term subscription license keys.

VMware Cloud Foundation Subscription TLSS

You can also purchase VCF+ and VCF-S licenses as part of the VMware Cloud Universal program.

Note: You can mix VCF-S with perpetual license keys as long as you use the same license type (one or the other) for a given workload domain.

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means, to start with a standard architecture, you need to have the requirements (and money) to start with at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain. It is used for creating workload domains, provisioning additional virtual infrastructure, and lifecycle management of all the software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning or decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration
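
Most of these operations are also exposed through the SDDC Manager public API. Below is a small sketch that authenticates and lists the workload domains; the hostname and credentials are placeholders, and the endpoints and response fields follow the documented VCF API, so verify them against your release.

```python
# Small sketch: authenticate against the SDDC Manager API and list workload
# domains. Hostname/credentials are placeholders; endpoints (/v1/tokens,
# /v1/domains) and response fields should be checked against your VCF release.
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"
CREDS = {"username": "administrator@vsphere.local", "password": "********"}

# 1) Request an access token
token = requests.post(f"{SDDC_MANAGER}/v1/tokens", json=CREDS, verify=False, timeout=30)
token.raise_for_status()
access_token = token.json()["accessToken"]

# 2) List workload domains (management domain + VI workload domains)
domains = requests.get(f"{SDDC_MANAGER}/v1/domains",
                       headers={"Authorization": f"Bearer {access_token}"},
                       verify=False, timeout=30)
domains.raise_for_status()

for d in domains.json().get("elements", []):
    print(d.get("name"), "-", d.get("type"), "-", d.get("status"))
```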

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter), vSAN, SDDC Manager, NSX-T, and optionally some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware has different workload and network security offerings to complement VCF.

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 4.5 FAQ for more information about VMware Cloud Foundation.