VMware Cloud Foundation 5.1 – Technical Overview

This technical overview supersedes the previous version, which was based on VMware Cloud Foundation 5.0, and covers all capabilities and enhancements that were delivered with VCF 5.1.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) made for modernizing data centers and deploying modern container-based applications. VCF is built on components such as vSphere (compute), vSAN (storage), NSX (networking), and parts of the Aria Suite (formerly vRealize Suite). VCF follows a standardized, automated, and validated approach that simplifies the management of all the required software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

What software is being delivered in VMware Cloud Foundation?

Update February 16th, 2024: Please have a look at this article to understand the current VCF licensing. I will publish an updated version of this blog as soon as VMware Cloud Foundation 5.2 has been released.

The BoM (bill of materials) changes with each VCF release. With VCF 5.1, the following components and software versions are included:

 

Software Component              | Version       | Date        | Build Number
--------------------------------|---------------|-------------|-------------
Cloud Builder VM                | 5.1           | 07 NOV 2023 | 22688368
SDDC Manager                    | 5.1           | 07 NOV 2023 | 22688368
VMware vCenter Server Appliance | 8.0 Update 2a | 26 OCT 2023 | 22617221
VMware ESXi                     | 8.0 Update 2  | 21 SEP 2023 | 22380479
VMware vSAN Witness Appliance   | 8.0 Update 2  | 21 SEP 2023 | 22385739
VMware NSX                      | 4.1.2.1       | 07 NOV 2023 | 22667789
VMware Aria Suite Lifecycle     | 8.14          | 19 OCT 2023 | 22630473

  • VMware vSAN is included in the VMware ESXi bundle.
  • You can use VMware Aria Suite Lifecycle to deploy VMware Aria Automation, VMware Aria Operations, VMware Aria Operations for Logs, and Workspace ONE Access. VMware Aria Suite Lifecycle determines which versions of these products are compatible and only allows you to install/upgrade to supported versions.
  • VMware Aria Operations for Logs content packs are installed when you deploy VMware Aria Operations for Logs.
  • The VMware Aria Operations management pack is installed when you deploy VMware Aria Operations.
  • You can access the latest versions of the content packs for VMware Aria Operations for Logs from the VMware Solution Exchange and the VMware Aria Operations for Logs in-product marketplace store.

What’s new with VCF 5.1?

Important changes mentioned in the release notes:

  • Support for vSAN ESA. vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object data services, and native, high-performance snapshots.
    VCF 5.1 vSAN ESA
  • vSphere Distributed Services Engine for Ready Nodes. AMD Pensando and NVIDIA BlueField-2 DPUs are now supported. Offloading the vSphere Distributed Switch (VDS) and NSX network and security functions to the hardware provides significant performance improvements for low-latency and high-bandwidth applications. NSX distributed firewall processing is also offloaded from the server CPUs to the network silicon.
  • Mixed-mode Support for Workload Domains. A VCF instance can exist in a mixed BoM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.
    VCF 5.1 Mixed Mode
  • Support for mixed license deployment. A combination of keyed and keyless licenses can be used within the same VCF instance.
  • VMware vRealize rebranding. VMware recently renamed vRealize Suite of products to VMware Aria Suite. See the Aria Naming Updates blog post for more details.
  • Increased GPU scale. VMware Cloud Foundation 5.1 increases the number of GPU devices that can be configured per VM to 16.
    VCF 5.1 GPU Scale

What are the VMware Cloud Foundation components?

To manage the logical infrastructure in the private cloud, VMware Cloud Foundation augments the VMware virtualization and management components with VMware Cloud Builder and VMware Cloud Foundation SDDC Manager.

VMware Cloud Builder

VMware Cloud Builder automates the deployment of the software-defined stack, creating the first software-defined unit, known as the management domain.
SDDC Manager

SDDC Manager automates the entire system life cycle, from configuration and provisioning to upgrades and patching (including host firmware), and simplifies day-to-day management and operations. From this interface, the virtual infrastructure administrator or cloud administrator can provision new private cloud resources, monitor changes to the logical infrastructure, and manage life cycle and other operational activities.
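
Besides the UI, SDDC Manager also exposes a REST API that can drive the same operations. The sketch below only builds the HTTP requests for token-based authentication and for listing workload domains; the hostname is a placeholder, and the endpoint paths (`/v1/tokens`, `/v1/domains`) reflect the public VCF API as I understand it, so treat the details as assumptions and verify them against the official API reference:

```python
import json
import urllib.request

SDDC_MANAGER = "https://sddc-manager.example.com"  # placeholder hostname

def token_request(base_url: str, username: str, password: str) -> urllib.request.Request:
    # POST /v1/tokens exchanges credentials for an API access token
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/tokens",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def domains_request(base_url: str, access_token: str) -> urllib.request.Request:
    # GET /v1/domains lists the management and VI workload domains
    return urllib.request.Request(
        f"{base_url}/v1/domains",
        headers={"Authorization": f"Bearer {access_token}"},
    )
```

Actually sending the requests (for example with urllib.request.urlopen) is left out here, since it requires a reachable SDDC Manager instance whose certificate is trusted.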

VMware Cloud Foundation SDDC Manager Dashboard

vSphere

vSphere uses virtualization to transform individual data centers into aggregated computing infrastructures that include CPU, storage, and networking resources. VMware vSphere manages these infrastructures as a unified operating environment and provides you with the tools to administer the data centers that participate in that environment.

The two core components of vSphere are ESXi and vCenter Server. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources.

vSAN

vSAN aggregates local or direct-attached data storage devices to create a single storage pool that is shared across all hosts in the vSAN cluster. Using vSAN removes the need for external shared storage, and simplifies storage configuration and virtual machine provisioning. Built-in policies allow for flexibility in data availability.

NSX

NSX is focused on providing networking, security, automation, and operational simplicity for emerging application frameworks and architectures that have heterogeneous endpoint environments and technology stacks. NSX supports cloud-native applications, bare-metal workloads, multi-hypervisor environments, public clouds, and multiple clouds.
vSphere with Tanzu

By using the integration between VMware Tanzu and VMware Cloud Foundation, you can deploy and operate the compute, networking, and storage infrastructure for vSphere with Tanzu, also called Workload Management. vSphere with Tanzu transforms vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Tanzu provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.
VMware Aria Suite

VMware Cloud Foundation supports automated deployment of VMware Aria Suite Lifecycle. You can then deploy and manage the life cycle of Workspace ONE Access and the VMware Aria Suite products (VMware Aria Operations for Logs, VMware Aria Automation, and VMware Aria Operations) by using VMware Aria Suite Lifecycle.

VMware Aria Suite is a purpose-built management solution for the heterogeneous data center and the hybrid cloud. It is designed to deliver and manage infrastructure and applications to increase business agility while maintaining IT control. It provides the most comprehensive management stack for private and public clouds, multiple hypervisors, and physical infrastructure.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Management Domain

The management domain is created during the bring-up process by VMware Cloud Builder and contains the VMware Cloud Foundation management components as follows:

  • Minimum four ESXi hosts
  • An instance of vCenter Server
  • A three-node NSX Manager cluster
  • SDDC Manager
  • vSAN datastore
  • One or more vSphere clusters, each of which can scale up to the vSphere maximum of 64 hosts

VI Workload Domains

You create VI workload domains to run customer workloads. For each VI workload domain, you can choose the storage option – vSAN, NFS, vVols, or VMFS on FC.

VMware Cloud Foundation Storage Options

A VI workload domain consists of one or more vSphere clusters. Each cluster starts with a minimum of three hosts and can scale up to the vSphere maximum of 64 hosts. SDDC Manager automates the creation of the VI workload domain and the underlying vSphere clusters.

For the first VI workload domain in your environment, SDDC Manager deploys a vCenter Server instance and a three-node NSX Manager cluster in the management domain. For each subsequent VI workload domain, SDDC Manager deploys an additional vCenter Server instance. New VI workload domains can share the same NSX Manager cluster with an existing VI workload domain, or you can deploy a new NSX Manager cluster. VI workload domains cannot use the management domain’s NSX Manager cluster.

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.
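
The node minimums above can be captured in a small, purely illustrative sizing helper (this is not an official sizing tool; it only encodes the rules stated in this section):

```python
def minimum_hosts(architecture: str, vi_domains: int = 1) -> int:
    """Minimum physical hosts for a VCF deployment, per the rules above.

    The management domain needs at least four hosts, and each VI
    workload domain adds at least three more.
    """
    if architecture == "consolidated":
        # Management and customer workloads share the four-node domain
        return 4
    if architecture == "standard":
        return 4 + 3 * vi_domains
    raise ValueError(f"unknown architecture: {architecture!r}")
```

For a standard architecture with one VI workload domain, this yields the seven physical nodes mentioned later in this post.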

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.
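
As a quick illustration of these limits, a hypothetical validation helper could look like this (the storage labels are just names I chose for this sketch):

```python
def remote_cluster_supported(storage: str, hosts: int) -> bool:
    """Check a VCF Remote Cluster host count against the limits above.

    vSAN requires 3-4 hosts; NFS, vVols, or Fibre Channel principal
    storage allows 2-4 hosts.
    """
    limits = {
        "vsan": (3, 4),
        "nfs": (2, 4),
        "vvols": (2, 4),
        "fc": (2, 4),
    }
    low, high = limits[storage]
    return low <= hosts <= high
```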

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter Server), vSAN, SDDC Manager, NSX, and optionally some components of the Aria Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

Important: HCI Mesh can be configured with vSAN OSA or ESA. HCI Mesh is not supported between a mix of vSAN OSA and ESA clusters.
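
This compatibility rule can be sketched as a trivial check (the "osa"/"esa" labels are illustrative, and this covers only the architecture-mixing rule, not all HCI Mesh prerequisites):

```python
def hci_mesh_compatible(client_arch: str, server_arch: str) -> bool:
    """Both clusters must run the same vSAN architecture (OSA or ESA)."""
    return client_arch == server_arch and client_arch in {"osa", "esa"}
```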

Does VMware Cloud Foundation support vSAN Max?

At the time of writing, no.

How is VMware Cloud Foundation licensed?

Currently, VCF is sold as part of VMware Cloud editions.

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Note: Please have a look at “VCF Holodeck” if you would like to create a smaller “sandbox” for testing or training purposes.

VCF Holodeck Toolkit 

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation FAQ for more information.

VMware Cloud Foundation – A Technical Overview (based on VCF 4.5)

 

Update: Please follow this link to get to the updated version with VCF 5.0.

This technical overview supersedes the previous version, which was based on VMware Cloud Foundation 4.3, and covers all capabilities and enhancements that were delivered with VCF 4.5.

What is VMware Cloud Foundation (VCF)?

VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) made for modernizing data centers and deploying modern container-based applications. VCF is built on components such as vSphere (compute), vSAN (storage), NSX (networking), and parts of the Aria Suite (formerly vRealize Suite). VCF follows a standardized, automated, and validated approach that simplifies the management of all the required software-defined infrastructure resources.

This stack provides customers with consistent infrastructure and operations in a cloud operating model that can be deployed on-premises, at the edge, or in the public cloud.

Tanzu Standard Edition is included in VMware Cloud Foundation with Tanzu Standard, Advanced, and Enterprise editions.

Note: The VMware Cloud Foundation Starter, Standard, Advanced and Enterprise editions do NOT include Tanzu Standard.

What software is being delivered in VMware Cloud Foundation?

The BoM (bill of materials) changes with each VCF release. With VCF 4.5, the following components and software versions are included:

  • VMware SDDC Manager 4.5
  • vSphere 7.0 Update 3g
  • vCenter Server 7.0 Update 3h
  • vSAN 7.0 Update 3g
  • NSX-T 3.2.1.2
  • VMware Workspace ONE Access 3.3.6
  • vRealize Log Insight 8.8.2
  • vRealize Operations 8.8.2
  • vRealize Automation 8.8.2
  • (vRealize Network Insight)

Note: Only one vCenter Server license is required for all vCenter Servers deployed in a VMware Cloud Foundation system.

VMware Cloud Foundation Components

What is VMware Cloud Foundation+ (VCF+)?

With the launch of VMware Cloud Foundation (VCF) 4.5 in early October 2022, VCF introduced new consumption and licensing models.

VCF+ is the next cloud-connected SaaS product offering, which builds on vSphere+ and vSAN+. VCF+ delivers cloud connectivity to centralize management and a new consumption-based OPEX model to consume VMware Cloud services.

VMware Cloud Foundation Consumption Models

VCF+ components are cloud entitled, metered, and billed. There are no license keys in VCF+. Once the customer is onboarded to VCF+, the components are entitled from the cloud and periodically metered and billed.

VMware Cloud Foundation+

The following components are included in VCF+:

  • vSphere+
  • vSAN+
  • NSX (term license)
  • SDDC Manager
  • Aria Universal Suite (formerly vRealize Cloud Universal aka vRCU)
  • Tanzu Standard
  • vCenter (included as part of vSphere+)

Note: In a given VCF+ instance, you can only have VCF+ licensing; you cannot mix VCF-S (term) and VCF perpetual licenses with VCF+.

What are other VCF subscription offerings?

VMware Cloud Foundation Subscription (VCF-S) is an on-premises (disconnected) term subscription offer that is available as a standalone VCF-S offer using physical core metrics and term subscription license keys.

VMware Cloud Foundation Subscription TLSS

You can also purchase VCF+ and VCF-S licenses as part of the VMware Cloud Universal program.

Note: You can mix VCF-S and perpetual license keys in an instance as long as you use the same key type (one or the other) within a workload domain.

Which VMware Cloud Foundation editions are available?

A VCF comparison matrix can be found here.

VMware Cloud Foundation Architecture

VCF is made for greenfield deployments (brownfield not supported) and supports two different architecture models:

  • Standard Architecture
  • Consolidated Architecture

VMware Cloud Foundation Deployment Options

The standard architecture separates management workloads and lets them run on a dedicated management workload domain. Customer workloads are deployed on a separate virtual infrastructure workload domain (VI workload domain). Each workload domain is managed by a separate vCenter Server instance, which allows autonomous licensing and lifecycle management.

VMware Cloud Foundation Single Site Deployment

Note: The standard architecture is the recommended model because it separates management workloads from customer workloads.

Customers with a small environment (or a PoC) can start with a consolidated architecture. This allows you to run customer and management workloads together on the same workload domain (WLD).

Note: The management workload domain’s default cluster datastore must use vSAN. Other WLDs can use vSAN, NFS, FC, and vVols for the principal storage.

VMware Cloud Foundation Storage Options

What is a vSAN Stretched Cluster?

vSAN stretched clusters extend a vSAN cluster from a single site to two sites for a higher level of availability and inter-site load balancing.

VMware Cloud Foundation Stretched Cluster

Does VCF provide flexible workload domain sizing?

Yes, that’s possible. You can license the WLDs based on your needs and use the editions that make the most sense depending on your use cases.

VMware Cloud Foundation Flexible Licensing

How many physical nodes are required to deploy VMware Cloud Foundation?

A minimum of four physical nodes is required to start in a consolidated architecture or to build your management workload domain. Four nodes are required to ensure that the environment can tolerate a failure while another node is being updated.

VI workload domains require a minimum of three nodes.

This means that to start with a standard architecture, you need the requirements (and budget) for at least seven physical nodes.

What are the minimum hardware requirements?

These minimum specs have been listed for the management WLD since VCF 4.0 (September 2020):

VMware Cloud Foundation Hardware Requirements

Can I mix vSAN ReadyNodes and Dell EMC VxRail deployments?

No. This is not possible.

What about edge/remote use cases?

If you would like to deploy VMware Cloud Foundation workload domains at a remote site, you can deploy so-called “VCF Remote Clusters”. These remote workload domains are managed by the VCF instance at the central site, and you can perform the same full-stack lifecycle management for the remote sites from the central SDDC Manager.

VMware Cloud Foundation Remote Cluster

Prerequisites to deploy remote clusters can be found here.

Note: If vSAN is used, VCF only supports a minimum of 3 nodes and a maximum of 4 nodes per VCF Remote Cluster. If NFS, vVols, or Fibre Channel is used as principal storage, then VCF supports a minimum of 2 and a maximum of 4 nodes.

Important: Remote clusters and remote workload domains are not supported when VCF+ is enabled.

Does VCF support HCI Mesh?

Yes. VMware Cloud Foundation 4.2 and later supports sharing remote datastores with HCI Mesh for VI workload domains.

HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.

Note: At this time, HCI Mesh is not supported with VCF ROBO.

What is SDDC Manager?

SDDC Manager is a preconfigured virtual appliance that is deployed in the management workload domain. It is used for creating workload domains, provisioning additional virtual infrastructure, and managing the life cycle of all the software-defined data center (SDDC) management components.

VMware Cloud Foundation SDDC Manager

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

  • Commissioning or decommissioning ESXi hosts
  • Deployment of workload domains
  • Extension of clusters in the management and workload domains with ESXi hosts
  • Adding clusters to the management domain and workload domains
  • Support for network pools for host configuration in a workload domain
  • Storage of product licenses
  • Deployment of vRealize Suite components
  • Lifecycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager components
  • Certificate management
  • Password management and rotation
  • NSX-T Edge cluster deployment in the management domain and workload domains
  • Backup configuration

VMware Cloud Foundation SDDC Manager Dashboard

How many resources does the VCF management WLD need during the bring-up process?

We know that VCF includes vSphere (ESXi and vCenter Server), vSAN, SDDC Manager, NSX-T, and optionally some components of the vRealize Suite. The following table should give you an idea of what the resource requirements look like to get VCF up and running:

VMware Cloud Foundation Resource Requirements

If you are interested in how many resources the Aria Suite (formerly vRealize Suite) will consume in the management workload domain, have a look at this table:

VMware Cloud Foundation Resource Requirements vRealize

How can I migrate my workloads from a non-VCF environment to a new VCF deployment?

VMware HCX provides a path to modernize from a legacy data center architecture by migrating to VMware Cloud Foundation.

VMware Cloud Foundation HCX

What is NSX Advanced Load Balancer?

NSX Advanced Load Balancer (NSX ALB), formerly known as Avi, is a solution that provides advanced load balancing capabilities for VMware Cloud Foundation.

Which security add-ons are available with VMware Cloud Foundation?

VMware has different workload and network security offerings to complement VCF.

Can I get VCF as a managed service offering?

Yes, this is possible. Please have a look at Data Center as a Service based on VMware Cloud Foundation.

Can I install VCF in my home lab?

Yes, you can. With the VLC Lab Constructor, you can deploy an automated VCF instance in a nested configuration. There is also a Slack VLC community for support.

VCF Lab Constructor

Where can I find more information about VCF?

Please consult the VMware Cloud Foundation 4.5 FAQ for more information about VMware Cloud Foundation.

 

 

 

VMware Horizon – Raspberry Pi 4 with Stratodesk NoTouch OS

After I wrote the article “Raspberry Pi 4 – The Ultimate Thin Client?”, I was asked on Twitter to write about the Raspi in combination with Stratodesk’s NoTouch OS. I have no hands-on experience with this operating system, but I am currently helping a partner who is doing a proof of concept with a customer. The customer uses AMD-based thin clients for their tests, and one important criterion is Skype for Business. As you may know from my previous article, Skype for Business (SfB) is not running with the Horizon Client on TLXOS. The supported Horizon Client features include Blast Extreme, USB redirection, and H.264 decoding.

And I think I know now why. It’s not the Horizon Client on TLXOS, but the Raspi’s CPU architecture. In the VMware Docs for the Horizon Client for Linux 5.1 (most recent at the time of writing) it’s clearly stated that:

Real-Time Audio-Video is supported on x86 and x64 devices. This feature is not supported on ARM processors. The client system must meet the following minimum hardware requirements.

So, if I want to test all features of the Horizon Client, then I have to use my Intel NUC Skull Canyon. I’m still going to test the user experience with NoTouch OS, but the RTAV with SfB is off the table with this device.

Horizon Test Environment

I’m going to use VMware’s TestDrive to access a vGPU-enabled Windows 10 desktop from the EMEA region. This Windows 10 1709 desktop is equipped with a Xeon Gold 6140 CPU and an Nvidia Tesla V100 card.

Raspberry Pi 4 Setup

There is no special manual needed to set up a Raspberry Pi. Just unbox and install it in a case, if you ordered one. Here are some general instructions: https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up

Install NoTouch OS on the Raspberry Pi 4

Format the SD card, because TLXOS is currently installed on it. On Windows, open the “Disk Management” tool to delete the volumes on the SD card.

After the deletion it should look like this.

Register for a free trial to download the installer “Stratodesk-NoTouchOS-DiskImage-2.40.5587-EEs-k419-armhf-190808.zip – NoTouch OS – Standard Edition k419 (Raspberry Pi 3 and 4) – Disk Image Installer”.

Uncompress the ZIP archive

Double-click on “FlashSDcard.cmd” and check in the appearing “Win32 Disk Imager” window that the drive letter points to your SD card (in my case “F”). When you are sure, click “Write” and wait for the operation to complete.

After the write has completed successfully, remove the SD card and put it into the Raspberry Pi. Boot it up and let’s see.

Wizard Step 1 – Location and Keyboard

Wizard Step 2 – Create a connection (for Horizon View)

Wizard Step 3 – Admin Password and EULA

Wizard – Configuration stored and the Horizon Desktop icon appears.

After a reboot, try to connect to your Horizon environment by double-clicking the icon on the desktop.

Works fine – my TestDrive desktop appears

I quickly wanted to test audio and video, but the video was very laggy and there was no audio at all. As with TLXOS, I couldn’t find a way to minimize the Horizon session to get back to my NoTouch desktop. After checking the Blast settings in the Horizon Client, I could see that H.264 decoding is not allowed by default.

Before we connect back to the desktop, we need to fix the audio problem as well. In the start menu you can access the system configuration, where you have to enter a password first.

After accessing the “Audio” settings, I had to change the “Standard audio device” to “Analog” and allow the other settings now marked with “On”.

I tried to save the config change, but this resulted in an error. I decided to reboot the OS.

Checked the settings again – yes, they were saved. Finally, I could move on to the first test with YouTube.

Testing

 

1) User Experience with YouTube

As a first test I’m using the same Avengers 4K trailer on YouTube.

AVENGERS 4 ENDGAME: 8 Minute Trailers (4K ULTRA HD) NEW 2019: https://www.youtube.com/watch?v=FVFPRstvlvk

Result: Video good, audio unusable

2) TestDrive – Nvidia Faceworks

Result: Good performance (same as with TLXOS)

3) TestDrive – eDrawings Racecar Animation

Result: Good performance (same as with TLXOS)

4) TestDrive – Nvidia “A New Dawn”

Result: Video animation good, audio unusable

5) FishGL

Result: Good performance (same as with TLXOS)

NoTouch OS – VMware Horizon Audio Problems

The good thing about NoTouch OS is that it gives you more configuration and diagnostic options. One of them is “play test sound”:

This tells us that the problem only exists in the Horizon VDI session. What happens if I change my analog speakers to USB and test it again?

Result: Good performance (same as with TLXOS)

NoTouch OS – Configuration Options for the Horizon View

I have to admit that Stratodesk’s NoTouch OS is way more mature than TLXOS. With TLXOS I had the feeling that the configuration options are very limited, and a big disadvantage was that you could only configure one application or connection. Meaning you could only use Horizon or a web browser, for example.

With NoTouch OS this is really different. You can configure Horizon, Citrix, RDP, Chromium etc. and place all the icons on the desktop or in the start menu.

Maybe I was not familiar enough with TLXOS, or it’s not very intuitive, but NoTouch OS gives me a rich set of options to configure the Horizon Client or my Horizon session.

Conclusion

Compared to TLXOS, I have to admit that Stratodesk’s NoTouch OS is the better option. You have way more options to configure the thin client (the operating system, in the end) and the Horizon Client. In addition, you are also allowed to configure more than one application or connection, which is limited to only one with ThinLinX (TLXOS).

And according to a current customer, who is performing a Horizon PoC, the management software from Stratodesk is also awesome.

If you are looking for an enterprise-ready operating system for thin clients, then NoTouch OS is the better choice for sure. I can confirm that Stratodesk is correctly installing our Horizon Client for Linux in their image, including all the necessary libraries and dependencies!

The only thing you have to keep in mind is the limited feature set with a Raspberry Pi. Skype for Business in optimized mode is currently not supported. This means you have to go with a thin client that is based on an Intel or AMD CPU architecture.

Raspberry Pi 4 – The Ultimate Thin Client?

Everyone is talking about the new Raspberry Pi 4 and asking themselves if it’s the new ultimate and cheap thin client. So far, I haven’t seen any customer here in Switzerland using a Pi with VMware Horizon. And to be honest, I have no hands-on experience with Raspberry Pis yet and want to know if someone in pre-sales like me could easily order, install, configure, and use one as a thin client. My questions were:

  • How much would it cost me in CHF to have a nice thin client?
  • What kind of operating system (OS) is or needs to be installed?
  • Is this OS supported for the VMware Horizon Client?
  • If not, do I need to get something like the Stratodesk NoTouch OS?
  • If yes, how easy is it to install the Horizon Client for Linux?
  • How would the user experience be for a normal office worker?
  • Is it possible to use graphics and play YouTube videos?

First, let’s check what I ordered on pi-shop.ch:

  • Raspberry Pi 4 Model B/4GB – CHF 62.90
  • KKSB Raspberry Pi 4 Case – CHF 22.90
  • 32GB MicroSD Card (Class10) – CHF 16.90
  • Micro-HDMI to Standard HDMI (A/M) 1m cable – CHF 10.90
  • Power: Official Power Supply 15W – CHF 19.40
  • Keyboard/Mouse: Already available in my home lab

Total cost in CHF: 133.00
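As a sanity check, the parts total can be summed with standard shell tools (prices copied from the list above):

```shell
# Sum the part prices (CHF) from the shopping list above
total=$(printf '%s\n' 62.90 22.90 16.90 10.90 19.40 \
  | awk '{ sum += $1 } END { printf "%.2f", sum }')
echo "CHF $total"   # prints: CHF 133.00
```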

Raspberry Pi 4 Model B Specs

I ordered the Raspberry Pi 4 Model B/4GB with the following hardware specifications:

  • CPU – Broadcom BCM2711, quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
  • RAM – 4GB LPDDR4
  • WLAN – 2.4 GHz and 5.0 GHz IEEE 802.11b/g/n/ac wireless
  • Gigabit Ethernet
  • USB – 2x USB 3.0, 2x USB 2.0
  • Video – 2 × micro HDMI ports (up to 4Kp60 supported)
  • Multimedia – H.265 (4Kp60 decode), H.264 (1080p60 decode, 1080p30 encode)

With this powerful hardware I expect no problems and would assume that even video playback and graphics are not an issue. But let’s figure that out later.

Horizon Client for Linux

The support for the Raspberry Pi came with Horizon Client 4.6 for Linux:

Horizon Client for Linux now supports the Raspberry Pi 3 Model B devices that are installed with ThinLinx Operating System (TLXOS) or Stratodesk NoTouch Operating System. The supported Horizon Client features include Blast Extreme, USB redirection, and H.264 decoding.

And the current Horizon Client 5.1 still only mentions support for the Raspberry Pi 3 with the same supported feature set:

Horizon Client for Linux 5.1 is supported on Raspberry Pi 3 Model B devices that are installed with ThinLinx Operating System (TLXOS) or Stratodesk NoTouch Operating System. The supported Horizon Client features include Blast Extreme, USB redirection, and H.264 decoding.

Hm, nothing has changed so far. While writing this article I’ll try to figure out whether official support for the Pi 4 is coming soon and why TLXOS and NoTouch OS are the only supported operating systems so far, because I saw on Twitter and on the Forbes website that people are waiting for Ubuntu MATE for their Raspberry Pis.

And I found a tweet from August 6, 2019, from the ThinLinX account with the following information:

ThinLinX has just released TLXOS 4.7.0 for the Raspberry Pi 4 with dual screen support. The same image runs on the entire Raspberry Pi range from the RPi2 onward. TLXOS 4.7.0 supports VMware Horizon Blast, Citrix HDX, RDP/RemoteFX, Digital Signage and IoT.

Raspberry Pi and Horizon Client 4.6 for Linux

The next question came up: has anyone already tested ThinLinX OS with a Raspberry Pi 3 or 4?

A few people have probably tried it already, but so far only one guy from the UK has blogged about this combination, on his blog vMustard.

He wrote a guide covering how to install TLXOS and the TMS management software, how to configure TLXOS, and how to install the Horizon Client for Linux. His information definitely helped me get started.

Horizon Test Environment

I’m going to use VMware’s TestDrive to access a vGPU-enabled Windows 10 desktop from the EMEA region. This Windows 10 1709 desktop is equipped with a Xeon Gold 6140 CPU and an Nvidia Tesla V100 card. I tried to get a card from Nvidia to perform the tests in my home lab, but they had already given away all the cards they had. So, the tests in my home lab have to wait a few weeks or months. 🙂

Workspace ONE UEM and TLXOS

And once I have installed TLXOS and can connect to a Horizon desktop, would it be possible to install the Intelligent Hub and enroll the device in my Workspace ONE UEM sandbox environment? Is this possible and supported?

Checking VMware Docs and the Workspace ONE UEM product documentation, the following information can be found:

The flexibility of the Linux operating system makes it a preferred platform for a wide range of uses, including notebooks, Raspberry Pi devices, and other IoT-capable devices. With Workspace ONE UEM, you can build on the flexibility and ubiquity of Linux devices and integrate them with your other mobile platforms in a central location for mobile device management.

Hm, would my new thin client be supported or not? The only requirements mentioned are:

  • You can enroll devices running any version and any configuration of Linux running on either x86_64 or ARM7 architecture into Workspace ONE UEM
  • You can enroll Linux devices in any Workspace ONE UEM version from 1903 onward
  • You must deploy the Workspace ONE Intelligent Hub for Linux v1.0

As you can see above, the new Raspberry Pi 4 is based on ARMv8. I asked our product management whether the RPi4 with TLXOS is supported and received the following answer:

As for WS1 UEM support for Linux, we do support ARM and won’t have a problem running on a Pi4, but we are still early stages for the product

As the Linux management capabilities of Workspace ONE UEM are still very limited, I’m going to wait another four to six months before performing some tests. TLXOS comes with its own management software anyway, and customers would probably prefer another Linux distribution like Ubuntu MATE.
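If you want to check which architecture your device reports before trying to enroll it, `uname -m` tells you. The small helper below maps the output to the requirements quoted above; it’s only an illustrative sketch, not anything official from VMware:

```shell
# Map `uname -m` output to the Workspace ONE UEM architecture requirement.
# Illustrative helper only -- the actual support statement comes from the docs above.
uem_arch_check() {
  case "$1" in
    x86_64)  echo "x86_64 - listed as supported" ;;
    armv7l)  echo "ARMv7 (32-bit) - listed as supported" ;;
    aarch64) echo "ARMv8 (64-bit) - ask product management, as I did" ;;
    *)       echo "unknown architecture: $1" ;;
  esac
}

uem_arch_check "$(uname -m)"
```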

Raspberry Pi 4 Setup

There is no special manual needed to set up a Raspberry Pi: just unbox it and, if you ordered one, install it in a case. Here are some general instructions: https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up

Install ThinLinX OS on the Raspberry Pi 4

Download the most recent installer for ThinLinX OS (TLXOS) for a Raspberry Pi: http://thinlinx.com/download/

1_TLXOS_RaspberryPi4_SDcard_Installer

Insert your microSD card into your PC, launch the “TLXOS Raspberry Pi SD Card Installer” (in my case “tlxos_rpi-4.7.0.exe”), and press “Yes” if you are prepared to write the image to the SD card.

3_TLXOS 4.7.0 for Raspberry Pi (v2 and v3)

After the image extraction, a “Win32 Disk Imager” window will appear. Make sure to choose the correct drive letter for the SD card (in my case “G”) and click “Write”.

4_Win32_Disk_Imager

If everything went fine, you should get a notification that the write was successful.

5_TLXOS-Complete
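If you don’t have a Windows machine at hand, the raw TLXOS image can also be written from Linux with `dd`. This is just a sketch with placeholder names: the image file and especially the target device (`/dev/sdX`) must be adapted, and you should verify the device with `lsblk` first, because writing to the wrong disk destroys its data.

```shell
# Write a TLXOS image to an SD card from Linux (alternative to the Windows installer).
# IMG and DEV below are placeholders -- check the real device path with `lsblk` first!
write_image() {
  img="$1"
  dev="$2"
  if [ ! -b "$dev" ]; then
    echo "refusing to write: $dev is not a block device" >&2
    return 1
  fi
  # bs=4M speeds up the copy; conv=fsync flushes everything before dd exits
  sudo dd if="$img" of="$dev" bs=4M conv=fsync status=progress
}

# Usage (placeholders): write_image "tlxos_rpi-4.7.0.img" "/dev/sdX"
```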

Now put the SD card into the Pi, connect the USB-C power cable, micro-HDMI cable, keyboard and mouse.

And then let’s see if the Pi can boot from the SD card.


It seems that TLXOS booted up just fine and that a “30 Day Free Trial” is included.

8_TLXOS_30d_FreeTrial

A few minutes later, TLXOS wrote something to the disk and rebooted. Then the Chromium browser appeared. This means we don’t need to install TMS for our tests, unless you would like to test the management of a TLXOS device.

I couldn’t find any menu in TLXOS, so I closed the browser and got access to a menu where I could apparently configure things.

10_Chromium_closed_menu_appears

Install Horizon Client for Linux on TLXOS

After clicking “Configure”, I browsed through the tabs and found the option to configure the Horizon Client under “Application”. It seems the client is now included in TLXOS, which was not the case in the past. Nice!

11_TLXOS_Configure_VMwareBlast

Note:

When a TLXOS device boots, if configured correctly it will automatically connect to a Remote Server using the specified connection Mode. Up to 16 different connection Modes can be configured.

I entered the “Server” address and clicked “Save Settings”, which opened the Horizon Client automatically, where I only had to enter my username and password (because I hadn’t configured “Auto Login”).

Voilà, my vGPU-powered Windows 10 desktop from VMware TestDrive appeared.
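On a regular Linux client the same connection could also be scripted through the Horizon Client’s `vmware-view` command-line options (`--serverURL`, `--userName` and `--desktopName` are documented flags). The helper below only assembles the command line without launching anything, and the server, user, and desktop names are made-up examples:

```shell
# Assemble a vmware-view command line (does not launch anything by itself).
# horizon.example.com, jdoe and Win10-vGPU are hypothetical example values.
build_view_cmd() {
  printf 'vmware-view --serverURL=%s --userName=%s --desktopName=%s' "$1" "$2" "$3"
}

build_view_cmd "horizon.example.com" "jdoe" "Win10-vGPU"
```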

As a first step I opened the VMware Horizon Performance Tracker and the Remote Display Analyzer (RD Analyzer), which both confirmed that the active encoder is “NVIDIA NvEnc H264“. This means that GPU-based H.264 encoding on the server and H.264 decoding in the Horizon Client on TLXOS (with Blast) should work fine.

To confirm this, I logged out of the desktop and checked the Horizon Client settings. Yes, H.264 decoding was allowed (the default).

15_TLXOS_HorizonClient_H264_allowed

After disallowing H.264 decoding I could see the difference in the Horizon Performance Tracker: the active encoder changed to “adaptive”. Let’s allow H.264 again for my tests!

Testing

 

1) User Experience with YouTube

As a first test of the user experience with the Raspberry Pi 4 as a thin client, and to check how the H.264 decoding performs, I decided to watch this trailer:

AVENGERS 4 ENDGAME: 8 Minute Trailers (4K ULTRA HD) NEW 2019: https://www.youtube.com/watch?v=FVFPRstvlvk

I had to compress the video to be able to upload and embed it here. Note that I was watching the 4K trailer in full-screen mode, and the video and audio were not choppy but smooth, I would say! I got around 21 to 23 fps. That’s very impressive, isn’t it?

For the next few tests I’m going to use what TestDrive offers me:

2) TestDrive – Nvidia Faceworks

3) TestDrive – eDrawings Racecar Animation

4) TestDrive – Nvidia “A New Dawn”

5) TestDrive – Google Earth

6) FishGL

Conclusion

Well, what are the important criteria a thin client needs to fulfill? Is it:

  • (Very) small form factor
  • Management software – easy to manage
  • Secure (Patching/Updating, Two Factor Authentication, Smartcard Authentication)
  • Longevity – future proof
  • Enough ports for peripherals (e.g. Dualview Support)
  • Low price
  • Low power consumption

It always depends on the use cases, right? If Unified Communications is important to you or your customer, then you need to go with Stratodesk’s NoTouch OS, or you have to buy another device and use a different OS. But if you are looking for a good and cheap device like the Raspberry Pi 4, then multimedia, (ultra) HD video streaming and office application use cases are no problem.

My opinion? There are a lot of use cases for these small devices, and not only in end-user computing. It’s easy for me to say that the Raspi has a bright future!

With the current TLXOS and the Horizon Client features supported so far, I wouldn’t call this setup “enterprise ready”, because the installation of TLXOS needs to be done manually, unless you can get it pre-installed on an SD card. Most customers rely on Unified Communications today and use Skype for Business and other collaboration tools, which is not possible yet according to the Horizon Client release notes. But as soon as the Horizon Client (for Linux) in TLXOS gains more features, the Raspberry Pi is going to take a piece of the cake, and the current thin client market will have to live in fear. 😀

The biggest plus of a Raspberry Pi as a thin client is definitely the very small form factor combined with the available ports and the low price (TLXOS license not included). You can connect two high-resolution monitors, a network cable, keyboard, mouse and a headset without any problem. If you buy Pis in bulk as a customer, I claim that the price is very, very hard to beat. And if a Pi has a hardware defect, just plug the SD card into another Pi and your user can work again within a few minutes. If a VESA mount is mandatory for you, buy a VESA-compatible case. By the way, this is my KKSB case:

What is missing in the end? Some Horizon Client features, and perhaps an automated initial OS deployment method. I imagine that IT teams of small and medium-sized companies could be very interested in a solution like this, because a Raspberry Pi 4 as a thin client already ROCKS!

    VCAP7-DTM Design Exam Passed

    On 21 October I took my first shot at the VCAP7-DTM Design exam and failed, as you already know from this article. Today I am happy to share that I finally passed the exam! 🙂

    What did I do with the information and notes about my weaknesses from the last exam score report? I read a lot of additional VMware documents and guides about:

    • Integrating Airwatch and VMware Identity Manager (vIDM)
    • Cloud Pod Architecture
    • PCoIP/Blast Display Protocol
    • VMware Identity Manager
    • vSAN 6.2 Essentials from Cormac Hogan and Duncan Epping
    • Horizon Apps (RDSH Pools)
    • Database Requirements
    • Firewall Ports
    • vRealize Operations for Horizon
    • Composer
    • Horizon Security
    • App Volumes & ThinApp
    • Workspace ONE Architecture (SaaS & on-premises)
    • Unified Access Gateway
    • VDI Design Guide from Johan van Amersfoort

    Today, I got a few different questions during the exam, but reading more PDFs about the above-mentioned topics helped me pass, it seems. In addition to that, I attended a Digital Workspace Livefire Architecture & Design training, which is available for VMware employees and partners. The focus of this training was not only on designing a Horizon architecture, but also on VMware’s EUC design methodology.

    If you have the option to attend classroom trainings, then I would recommend the following:

    There were two things I struggled with during the exam: some questions were not clear enough, so I had to make assumptions about what was meant, and the exam is based on Horizon 7.2 and other older product versions of the Horizon suite:

    • VMware Identity Manager 2.8
    • App Volumes 2.12
    • User Environment Manager 9.1
    • ThinApp 5.1
    • Unified Access Gateway 2.9
    • vSAN 6.2
    • vSphere 6.5
    • vRealize Operations 6.4
    • Mirage 5.x

    But maybe it’s only me, since I have almost no hands-on experience with Horizon, none with Workspace ONE, and I’ve only been with VMware for 7 months now. 🙂

    It is time for an update, and VMware has already announced that a new version of the design exam, called VCAP7-DTM 2019, will be published next year.

    What about VCIX7-DTM?

    In part 2 of my VCAP7-DTM Design exam blog series I mentioned this:

    Since no VCAP7-DTM Deploy exam is available and it’s not clear yet when this exam will be published, you only need the VCAP7-DTM Design certification to earn the VCIX7-DTM status. I have got this information from VMware certification.

    This information was not correct, sorry. VMware certification retracted their statement and clarified that, as long as no VCAP7-DTM Deploy exam is available, you need to pass the VCAP6-DTM Deploy exam to earn the VCIX7-DTM badge.

    I don’t know yet if I want to pursue the VCIX7-DTM certification and will think about it when the deploy exam for Horizon 7 is available.

    What’s next?

    Hm… I am going to spend more time with my family again and will use some of my three weeks of vacation to assemble and install my new home lab.

    Then I also have a few ideas for topics to write about, like:

    • Multi-Domain and Trust with Horizon 7.x
    • Linux VDI Basics with Horizon 7.x
    • SD-WAN for Horizon 7.x
    • NSX Load Balancing for Horizon 7.x

    These are only a few items from my list, but let’s see if I really find the time to write a few articles.

    Regarding certification, I think I’ll continue with these exams:

    This has no priority for now and can wait until next year! Or… I could try the VDP-DW 2018 exam since I’m on vacation. Let’s see 😀

    New Supermicro Home Lab

    For a few years I’ve been using three Intel NUC Skull Canyon (NUC6i7KYK) mini PCs for my home lab. Each NUC is equipped with the following:

    • 6th Gen Intel i7-6770HQ processor with Intel Iris Pro graphics
    • 2x 16GB Kingston Value RAM DDR4-2133
    • 2x 500GB Samsung 960 EVO NVMe M.2
    • 1x Transcend JetFlash 710S USB boot device

    These small computers were nice in terms of space, but they are limited to 32GB RAM, have only one network interface, and no separate management interface.

    This was enough and acceptable when I worked with XenServer, used local storage, and just had to validate XenDesktop/XenApp configurations and designs during my time as a Citrix consultant.

    When I started to replace XenServer with ESXi and created a 3-node vSAN cluster for my first Horizon 7 environment, everything ran fine at the beginning. But after a while I had strange issues with vMotions, OS installations, and VCSA or ESXi upgrades.

    So, I thought it was time to build a “real” home lab and was looking for ideas. After doing some research and talking to my colleague Erik Bussink, it was clear to me that I had to build my compute nodes on a Supermicro mainboard. As you may know, the Skull Canyons are not that cheap, so I will continue using them for my domain controller VMs, vSAN witness, vCenter Server appliance, etc.

    Yes, my new home lab is going to be a 2-node vSAN cluster.

    Motherboard

    I found two Supermicro X11SPM-TF motherboards at a reduced price, because people had ordered and never used them. This was my chance and a “sign” that I had to buy the parts for my new home lab NOW! Let’s pretend it’s my Christmas gift. 😀

    The key features for me?

    Chassis

    I went for the Fractal Design Node 804 because it offers enough space for the hardware and good cooling. And I like the square form factor, which allows me to stack them.

    CPU

    I need a decent number of cores in my system to run tests and have enough performance in general. I will mainly run Workspace ONE and Horizon workloads (multi-site architectures) in my lab, but this will change in the future. So I have chosen the 8-core Intel Xeon Silver 4110 processor with 2.10 GHz.

    Memory

    RAM was always the limiting factor with my NUCs. I will reuse two of them and start with two 32GB 2666 MHz Kingston Server Premier modules per ESXi host (64GB total per host). If memory prices drop and I need more capacity, I can easily expand the system.

    Boot Device

    A Samsung 860 EVO Basic 250GB, which is way too much for ESXi, but the price is low and I could use the disk for something else (e.g. for a new PC) if needed.

    Caching Device for vSAN

    I will remove one Samsung 960 EVO 500GB M.2 from each NUC and use them for the vSAN caching tier. Each NUC will still have one 960 EVO 500GB left to be used as local storage.

    Capacity Device for vSAN

    Samsung 860 Evo Basic 1TB.

    Network

    Currently, my home network only consists of Ubiquiti network devices with 1GbE interfaces.

    So I ordered the Ubiquiti 10G 16-port switch, which comes with four 1/10 Gigabit RJ45 ports, so no SFP modules are needed for now. Maybe in the future 😀

    This is the home lab configuration I ordered, and all parts should arrive by the end of November 2018.

    What do you think about this setup?

    Your feedback is very welcome!