
Raspberry Pi 4 – The Ultimate Thin Client?

Everyone is talking about the new Raspberry Pi 4 and asking themselves if it’s the new ultimate and cheap thin client. So far, I haven’t seen any customer here in Switzerland using a Pi with VMware Horizon. And to be honest, I have no hands-on experience with Raspberry Pis yet and want to know whether someone in pre-sales like me could easily order, install, configure and use one as a thin client. My questions were:

  • How much would it cost me in CHF to have a nice thin client?
  • What kind of operating system (OS) is or needs to be installed?
  • Is this OS supported for the VMware Horizon Client?
  • If not, do I need to get something like the Stratodesk NoTouch OS?
  • If yes, how easy is it to install the Horizon Client for Linux?
  • How would the user experience be for a normal office worker?
  • Is it possible to use graphics and play YouTube videos?

First, let’s check what I ordered on pi-shop.ch:

  • Raspberry Pi 4 Model B/4GB – CHF 62.90
  • KKSB Raspberry Pi 4 Case – CHF 22.90
  • 32GB MicroSD Card (Class10) – CHF 16.90
  • Micro-HDMI to Standard HDMI (A/M) 1m cable – CHF 10.90
  • Power: Official Power Supply 15W – CHF 19.40
  • Keyboard/Mouse: Already available in my home lab

Total cost in CHF: 133.00
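
To double-check the total, the parts list above can be summed up in a few lines of Python (prices in CHF as listed):

```python
# Quick sanity check of the parts list above (prices in CHF)
parts = {
    "Raspberry Pi 4 Model B/4GB": 62.90,
    "KKSB Raspberry Pi 4 Case": 22.90,
    "32GB MicroSD Card (Class10)": 16.90,
    "Micro-HDMI to HDMI cable (1m)": 10.90,
    "Official Power Supply 15W": 19.40,
}
total = round(sum(parts.values()), 2)
print(f"Total: CHF {total:.2f}")  # Total: CHF 133.00
```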

Raspberry Pi 4 Model B Specs

I ordered the Raspberry Pi 4 Model B/4GB with the following hardware specifications:

  • CPU – Broadcom BCM2711, quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
  • RAM – 4GB LPDDR4
  • WLAN – 2.4 GHz and 5.0 GHz IEEE 802.11b/g/n/ac wireless
  • Gigabit Ethernet
  • USB – 2x USB 3.0, 2x USB 2.0
  • Video – 2 × micro HDMI ports (up to 4Kp60 supported)
  • Multimedia – H.265 (4Kp60 decode), H.264 (1080p60 decode, 1080p30 encode)

With this powerful hardware I expect no problems and would assume that even playing videos and using graphics is not an issue. But let’s figure that out later.

Horizon Client for Linux

The support for the Raspberry Pi came with Horizon Client 4.6 for Linux:

Horizon Client for Linux now supports the Raspberry Pi 3 Model B devices that are installed with ThinLinx Operating System (TLXOS) or Stratodesk NoTouch Operating System. The supported Horizon Client features include Blast Extreme, USB redirection, and H.264 decoding.

And the current Horizon Client 5.1 still only mentions the support for Raspberry Pi 3 with the same supported feature set:

Horizon Client for Linux 5.1 is supported on Raspberry Pi 3 Model B devices that are installed with ThinLinx Operating System (TLXOS) or Stratodesk NoTouch Operating System. The supported Horizon Client features include Blast Extreme, USB redirection, and H.264 decoding.

Hm, nothing has changed so far. While writing this article I’ll try to figure out whether official support for the Pi 4 is coming soon and why ThinLinX and Stratodesk are the only supported OS options so far, because I saw on Twitter and on the Forbes website that people are waiting for Ubuntu MATE for their Raspberry Pis.

And I found a tweet from August 6, 2019, from the ThinLinX account with the following information:

ThinLinX has just released TLXOS 4.7.0 for the Raspberry Pi 4 with dual screen support. The same image runs on the entire Raspberry Pi range from the RPi2 onward. TLXOS 4.7.0 supports VMware Horizon Blast, Citrix HDX, RDP/RemoteFX, Digital Signage and IoT.

Raspberry Pi and Horizon Client 4.6 for Linux

The next question came up – are there already people out there who have tested ThinLinX OS on a Raspberry Pi 3/4?

Probably a few people have tried it already, but so far only one guy from the UK has blogged about this combination on his blog vMustard.

He wrote a guide covering how to install TLXOS and the TMS management software, how to configure TLXOS and how to install the Horizon Client for Linux. His information certainly helps me get started.

Horizon Test Environment

I’m going to use VMware’s TestDrive to access a vGPU-enabled Windows 10 desktop from the EMEA region. This Windows 10 1709 desktop is equipped with a Xeon Gold 6140 CPU and an Nvidia Tesla V100 card. I tried to get a card from Nvidia to perform the tests in my home lab, but they had already given away all the cards they had. So, the tests in my home lab have to wait a few weeks or months. 🙂

Workspace ONE UEM and TLXOS

And once I have TLXOS installed and can connect to a Horizon desktop, would it be possible to install the Intelligent Hub and enroll the device in my Workspace ONE UEM sandbox environment? Is this supported at all?

Checking VMware Docs and the Workspace ONE UEM product documentation, the following information can be found:

The flexibility of the Linux operating system makes it a preferred platform for a wide range of uses, including notebooks, Raspberry Pi devices, and other IoT-capable devices. With Workspace ONE UEM, you can build on the flexibility and ubiquity of Linux devices and integrate them with your other mobile platforms in a central location for mobile device management.

Hm, would my new thin client be supported or not? The only requirements mentioned are:

  • You can enroll devices running any version and any configuration of Linux running on either x86_64 or ARM7 architecture into Workspace ONE UEM
  • You can enroll Linux devices in any Workspace ONE UEM version from 1903 onward
  • You must deploy the Workspace ONE Intelligent Hub for Linux v1.0

As you can see above, the new Raspberry Pi 4 is based on ARMv8. I asked our product management whether the RPi4 with TLXOS is supported and received the following answer:

As for WS1 UEM support for Linux, we do support ARM and won’t have a problem running on a Pi4, but we are still early stages for the product

As the Linux management capabilities of Workspace ONE UEM are still very limited, I’m going to wait another four to six months before performing any tests. Besides, TLXOS comes with its own management software anyway, and customers would probably prefer another Linux distribution like Ubuntu MATE.

Raspberry Pi 4 Setup

There is no special manual needed to set up a Raspberry Pi. Just unbox it and install it in a case, if you ordered one. Here are some general instructions: https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up

Install ThinLinX OS on the Raspberry Pi 4

Download the most recent installer for ThinLinX OS (TLXOS) for a Raspberry Pi: http://thinlinx.com/download/

1_TLXOS_RaspberryPi4_SDcard_Installer

Insert your microSD card into your PC, launch the “TLXOS Raspberry Pi SD Card Installer” (in my case “tlxos_rpi-4.7.0.exe”) and press “Yes” when you are ready to write the image to the SD card.

3_TLXOS 4.7.0 for Raspberry Pi (v2 and v3)

After the image extraction a “Win32 Disk Imager” window will appear. Make sure to choose the correct drive letter for the SD card (in my case “G”). Click “Write”.

4_Win32_Disk_Imager

If everything went fine you should get a notification that the write was successful.

5_TLXOS-Complete

Now put the SD card into the Pi and connect the USB-C power cable, micro-HDMI cable, keyboard and mouse.

And then let’s see if the Pi can boot from the SD card.


It seems that TLXOS booted up just fine and that a “30 Day Free Trial” is included.

8_TLXOS_30d_FreeTrial

A few minutes later TLXOS wrote something to the disk and rebooted. Then the Chromium browser appeared. This means we don’t need to install the TMS for our tests, unless you would like to test the management of a TLXOS device.

I couldn’t find any menu in TLXOS, so I closed the browser and got access to a menu where I can apparently configure things.

10_Chromium_closed_menu_appears

Install Horizon Client for Linux on TLXOS

After clicking “Configure” I browsed through the tabs and, under “Application”, found the option to configure the Horizon Client. It seems that the client is now included in TLXOS, which was not the case in the past. Nice!

11_TLXOS_Configure_VMwareBlast

Note:

When a TLXOS device boots, if configured correctly it will automatically connect to a Remote Server using the specified connection Mode. Up to 16 different connection Modes can be configured.

I had just entered the “Server” and clicked on “Save Settings”, which automatically opened the Horizon Client, where I only had to enter my username and password (because I hadn’t configured “Auto Login”).

Voilà, my vGPU-powered Windows 10 desktop from VMware TestDrive appeared.

As a first step I opened the VMware Horizon Performance Tracker and the Remote Display Analyzer (RD Analyzer), which both confirmed that the active encoder is “NVIDIA NvEnc H264”. This means that the GPU-based H.264 encoding on the server and the H.264 decoding on TLXOS with the Horizon Client (with Blast) should work fine.

To confirm this, I logged out of the desktop and checked the Horizon Client settings. Yes, H.264 decoding was allowed (the default).

15_TLXOS_HorizonClient_H264_allowed

After disallowing the H.264 decoding I could see the difference in the Horizon Performance Tracker.

The active encoder changed to “adaptive”. Let’s allow H.264 again for my tests!

Testing

 

1) User Experience with YouTube

As a first test of the user experience with the Raspberry Pi 4 as a thin client, and to check how the H.264 decoding performs, I decided to watch this trailer:

AVENGERS 4 ENDGAME: 8 Minute Trailers (4K ULTRA HD) NEW 2019: https://www.youtube.com/watch?v=FVFPRstvlvk

I had to compress the video to be able to upload and embed it here. The important part: I was watching the 4K trailer in full-screen mode, and the video and audio were not choppy but smooth, I would say! I got around 21 to 23 fps. That’s very impressive, isn’t it?

For the next few tests I’m going to use what TestDrive offers me:

2) TestDrive – Nvidia Faceworks

3) TestDrive – eDrawings Racecar Animation

4) TestDrive – Nvidia “A New Dawn”

5) TestDrive – Google Earth

6) FishGL

Conclusion

Well, what are the important criteria that a thin client needs to fulfill? Is it:

  • (Very) small form factor
  • Management software – easy to manage
  • Secure (Patching/Updating, Two Factor Authentication, Smartcard Authentication)
  • Longevity – future proof
  • Enough ports for peripherals (e.g. Dualview Support)
  • Low price
  • Low power consumption

It always depends on the use cases, right? If Unified Communications is important to you or your customer, then you need to go with Stratodesk’s NoTouch OS or buy another device and use a different OS. But if you are looking for a good and cheap device like the Raspberry Pi 4, then multimedia, (ultra) HD video streaming and office application use cases are no problem.

My opinion? There are a lot of use cases for these small devices, not only in end-user computing, and it’s easy for me to say that the Raspi has a bright future!

With the current TLXOS and the Horizon Client features supported so far, I wouldn’t call this setup “enterprise ready”, because the installation of TLXOS needs to be done manually, unless you can get it pre-installed on an SD card. Most customers rely on Unified Communications today and are using Skype for Business and other collaboration tools, which is not possible yet according to the Horizon Client release notes. But as soon as the Horizon Client (for Linux) in TLXOS gets more features, the Raspberry Pi is going to take a piece of the cake, and the current thin client market will have to live in fear. 😀

The biggest plus of a Raspberry Pi as a thin client is definitely the very small form factor combined with the available ports and the low price (TLXOS license not included). You can connect two high-resolution monitors, a network cable, keyboard, mouse and a headset without any problem. If you buy the Pi in bulk as a customer, then I claim that the price is very, very hard to beat. And if a Pi has a hardware defect, just plug the SD card into another Pi and your user can work again within a few minutes. If a VESA mount is mandatory for you, buy a VESA case. By the way, this is my KKSB case:

What is missing in the end? Some Horizon Client features, and maybe the manual initial OS deployment method. I imagine that the IT teams of small and medium-sized companies could be very interested in a solution like this, because a Raspberry Pi 4 as a thin client already ROCKS!

    Horizon and Workspace ONE Architecture for 250k Users Part 1

    Disclaimer: This article is based on my own thoughts and experience and may not reflect a real-world design for a Horizon/Workspace ONE architecture of this size. The blog series focuses only on the Horizon or Workspace ONE infrastructure part and does not consider other criteria like CPU/RAM usage, IOPS, amount of applications, use cases and so on. Please contact your partner or VMware’s Professional Services Organization (PSO) for a consulting engagement.

    To my knowledge there is no Horizon implementation of this size at the moment of writing. This topic, the architecture and the necessary number of VMs in the data center, has always been important to me since I moved from Citrix Consulting to a VMware pre-sales role. I always asked myself how VMware Horizon scales when there are more than just 10’000 users.

    250’000 users is the current maximum for VMware Horizon 7.8, and the goal is to figure out how many Horizon infrastructure servers (Connection Servers, App Volumes Managers (AVM), vCenter servers and Unified Access Gateway (UAG) appliances) are needed, and how many pods should be configured and federated with the Cloud Pod Architecture (CPA) feature.

    I will create my own architecture, meaning that I use the sizing and recommendation guides and design a Horizon 7 environment based on my current knowledge, experience and assumption.

    After that I’ll feed the Digital Workspace Designer tool with the necessary information and let this tool create an architecture, which I then compare with my design.

    Scenario

    This is the scenario I defined and will use for the sizing:  

    Users: 250’000
    Data Centers: 1 (to keep it simple)
    Internal Users: 248’000
    Remote Users: 2’000
    Concurrency Internal Users: 80% (198’400 users)
    Concurrency Remote Users: 50% (1’000 users)
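
    The concurrent session counts in the scenario above can be reproduced with a quick calculation:

```python
# Concurrent session count derived from the scenario above
internal_users, remote_users = 248_000, 2_000
concurrent_internal = round(internal_users * 0.80)  # 80% concurrency
concurrent_remote = round(remote_users * 0.50)      # 50% concurrency
total_concurrent = concurrent_internal + concurrent_remote
print(total_concurrent)  # 199400
```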

    Horizon Sizing Limits & Recommendations

    This article is based on the current release of VMware Horizon 7 with the following sizing limits and recommendations:

    Horizon version: 7.8
    Max. number of active sessions in a Cloud Pod Architecture pod federation: 250’000
    Active connections per pod: 10’000 VMs max for VDI (8’000 tested for instant clones)
    Max. number of Connection Servers per pod: 7
    Active sessions per Connection Server: 2’000
    Max. number of VMs per vCenter: 10’000
    Max. connections per UAG: 2’000 
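
    Based on these limits, a rough back-of-the-envelope sizing looks like this (the +1 UAG for redundancy is my own assumption, not an official rule):

```python
from math import ceil

concurrent_sessions = 199_400  # from the scenario above
remote_sessions = 1_000

pods = ceil(concurrent_sessions / 10_000)               # 10'000 sessions per pod
connection_servers = ceil(concurrent_sessions / 2_000)  # 2'000 sessions each
cs_per_pod = ceil(connection_servers / pods)            # must stay <= 7 per pod
uags = ceil(remote_sessions / 2_000) + 1                # +1 for redundancy (assumption)

print(pods, connection_servers, cs_per_pod, uags)  # 20 100 5 2
```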

    The Digital Workspace Designer lists the following Horizon Maximums:

     

    Horizon Maximums Digital Workspace Designer

    Please read my short article if you are not familiar with the Horizon Block and Pod Architecture.

    Note: The App Volumes sizing limits and recommendations have been updated recently and no longer follow the old rule of thumb that an App Volumes Manager can only handle 1’000 sessions. The new recommendations are based on a “concurrent logins per second” login rate:

    New App Volumes Limits Recommendations

     

    Architecture Comparison VDI

    Please find below my decisions and the one made by the Digital Workspace Designer (DWD) tool:

    Horizon Item | My Decision | DWD Tool | Notes
    Number of Users (concurrent) | 199'400 | 199'400 |
    Number of Pods required | 20 | 20 |
    Number of Desktop Blocks (one per vCenter) | 100 | 100 |
    Number of Management Blocks (one per pod) | 20 | 20 |
    Connection Servers required | 100 | 100 |
    App Volumes Manager Servers | 80 | 20 | 24+1 AVMs for every 2,500 users
    vRealize Operations for Horizon | n/a | 22 | I have no experience with vROps sizing
    Unified Access Gateway required | 2 | 2 |
    vCenter servers (to manage clusters) | 20 | 100 | Since Horizon 7.7 there is support for spanning vCenters across multiple pods (bound to the limits of vCenter)

    Architecture Comparison RDSH

    Please find below my decisions* and the one made by the Digital Workspace Designer (DWD) tool:

    Horizon Item | My Decision | DWD Tool | Notes
    Number of Users (concurrent) | 199'400 | 199'400 |
    Number of Pods required | 20 | 20 |
    Number of Desktop Blocks (one per vCenter) | 20 | 40 | 1 block per pod since we are limited by 10k sessions per pod, but only have 333 RDSH per pod
    Number of Management Blocks (one per pod) | 20 | 20 |
    Connection Servers required | 100 | 100 |
    App Volumes Manager Servers | 14 | 20 | 24+1 AVMs for every 2,500 users/logins (in this case RDSH VMs (6'647 RDSH totally))
    vRealize Operations for Horizon | n/a | 22 | I have no experience with vROps sizing
    Unified Access Gateway required | 2 | 2 |
    vCenter servers (to manage resource clusters) | 4 | 40 | Since Horizon 7.7 there is support for spanning vCenters across multiple pods (bound to the limits of vCenter)

    *Max. 30 users per RDSH
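
    With a maximum of 30 users per RDS host, the RDSH numbers above can be reproduced like this:

```python
from math import ceil

concurrent_sessions = 199_400
users_per_rdsh = 30  # *Max. 30 users per RDSH
pods = 20            # bound by the 10'000 sessions per pod limit

rdsh_hosts = ceil(concurrent_sessions / users_per_rdsh)
rdsh_per_pod = ceil(rdsh_hosts / pods)
print(rdsh_hosts, rdsh_per_pod)  # 6647 333
```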

    Conclusion

    VDI

    You can see in the table for VDI that I have different numbers for “App Volumes Manager Servers” and “vCenter servers (to manage clusters)”. For the number of AVM servers I used the new recommendations you already saw above. Before Horizon 7.7, the block and pod architecture consisted of one vCenter server per block:

    Horizon Pod vCenter traditional

    That’s why, I assume, the DWD recommends 100 vCenter servers for the resource clusters. In my case I would only use 20 vCenter servers (yes, it increases the failure domain), because Horizon 7.7 and above allows you to span one vCenter across multiple pods while respecting the limit of 10’000 VMs per vCenter. So my assumption here, even though the image below does not show it, is that it should be possible and supported to use one vCenter server per pod:

    Horizon Pod Single vCenter

    RDSH

    If you consult the reference architecture and the recommendations for VMware Horizon, you could think that one important piece of information is missing:

    The details for a correct sizing and the required architecture for RDSH!

    We know that each Horizon pod can handle 10’000 sessions, which means 10’000 VDI desktops (VMs) if you use VDI. But for RDSH we need fewer VMs – in this case only 6’647.

    So, the number of pods does not change because of the “sessions per pod” limitation. But there is no official requirement when it comes to resource blocks per pod, or to having one Connection Server for every 2’000 VMs or sessions as with VDI, to minimize the impact of a resource block failure. In my opinion this is not needed here. Otherwise you would bloat the required Horizon infrastructure servers, which increases operational and maintenance efforts and obviously also the costs.

    But where are the 40 resource blocks of the DWD tool coming from? Is it because the recommendation is to have at least two blocks per pod to minimize the impact of a resource block failure? If so, it would make sense, because in my calculation you would have 9’971 RDSH user sessions per pod/block, and with the DWD calculation only 4’986 (half) per resource block.

    *Update 28/07/2019*
    I have been informed by Graeme Gordon from technical marketing that the 40 resource blocks and vCenters are coming from here:

    App Volumes vCenters per Pod

    I didn’t see that, because I expected that we can go higher if it’s an RDSH-only implementation.

    App Volumes and RDSH

    The biggest difference when we compare the needed architecture for VDI and RDSH is the number of recommended App Volumes Manager servers. Because “concurrent logins at a one per second login rate” for the AVM sizing was not clear to me, I asked our technical marketing for clarification and received the following answer:

    With RDSH we assign AppStacks to the computer objects rather than to the user. This means the AppStack attachment and filter driver virtualization process happens when the VM is booted. There is still a bit of activity when a user authenticates to the RDS host (assignment validation), but it’s considerably less than the attachment process for a typical VDI user assignment.

    Because of this difference, the 1/second/AVM doesn’t really apply for RDSH only implementations.

    With this background I’m doing the math with 6’647 logins, neglecting the assignment validation activity, and this brings me to a number of only 4 AVMs to serve the 6’647 RDS hosts.
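
    If I read the updated recommendation as roughly one AVM per 2’500 logins plus one for redundancy (my own interpretation, not an official formula), the math works out as follows:

```python
from math import ceil

rdsh_hosts = 6_647                   # with RDSH, logins happen at VM boot, not per user
avms = ceil(rdsh_hosts / 2_500) + 1  # assumed rule: 1 AVM per 2'500 logins, plus 1 spare
print(avms)  # 4
```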

    Disclaimer

    Please be reminded again that these are only calculations to get an idea of how many servers/VMs of each Horizon component are needed for a 250k-user (~200k CCU) installation. I didn’t consider any disaster recovery requirements, which means that my calculations recommend the minimum number of servers required for a VDI- or RDSH-based Horizon implementation.

    Workspace ONE UEM – Data Security, Data Privacy and Data Collection

    A lot of businesses are getting more and more interested in a Unified Endpoint Management solution like Workspace ONE UEM. While EMM is pretty clear to everyone, UEM is far from that status. During meetings with customers about Workspace ONE we often hear concerns about “the cloud” and the data that is sent to it.

    Since information about data privacy, data security and data collection regarding Workspace ONE is not easy to gather, I decided to make it available here.

    This topic is very important, because more businesses are now open to talking about cloud and hybrid solutions like Workspace ONE, where the management backend is managed by VMware and only a few components need to be installed on-premises in your own data center:

    Workspace ONE UEM SaaS Architecture

    With the release of Workspace ONE UEM 1904, VMware started to publish “SaaS only releases”. Before this announcement, an on-premises customer would get the on-prem installers three to four weeks after a new SaaS release had been made available. That’s why it’s clear that a lot more customers have the same questions and requests when it comes to a cloud-based solution.

    Of course, as we strive to bring you more cloud services at a faster pace, we will continue to add value with innovations in both our On-Premises and cloud offerings.

    As a result, we are making a change to how we deliver Workspace ONE UEM beginning with Workspace ONE UEM Console 1904, which will be SaaS only release.

    Which data is collected from users and devices? Who has access to this data?

    • By default, the solution only collects information necessary to manage the device, such as device status, compliance information, OS, etc.; the solution may collect (if configured by the administrator), or users may input, data considered to be sensitive
    • The solution collects a limited set of personal data, which includes user first and last name, username, email address, and phone number for user activation and management. These fields can be encrypted at rest in the solution database (AES 256). Customers may collect additional data points per the following matrix (as configured by the customer administrator): https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/1904/UEM_Managing_Devices/GUID-AWT-DATA-COLLECT-MATRIX.html
      • VMware automatically collects certain information when you use or access Online Properties (“VMware websites, online advertisements or marketing emails “) or mobile apps. This information does not necessarily reveal your identity directly but may include information about the specific device you are using, such as the hardware model, operating system version, web-browser software (such as Firefox, Safari, or Internet Explorer) and your Internet Protocol (IP) address/MAC address/device identifier. We also automatically collect and store certain information in server logs such as: statistics on your activities on the Online Properties or mobile apps; information about how you came to and used the Online Property or mobile app; your IP address; device type and unique device identification numbers, device event information (such as crashes, system activity and hardware settings, browser type, browser language, the date and time of your request and referral URL), broad geographic location (e.g. country or city-level location) and other technical data collected through cookies, pixel tags and other similar technologies that uniquely identify your browser. Please refer to the VMware Privacy Notice for additional information.
    • VMware manages access to the SaaS environment while customers manage administrative and end-user access through the solution console
      • Access to the SaaS environment is technically enforced according to role, the principle of least privileges and separation of duties
      • Customers manage access entitlements for administrative and end users
    • VMware defines customer data related to the solution and/or hosted service in the VMware Data Processing Addendum
    • Data Sub-Processors can be found here

    Is it possible to prevent data collection of specific information?

    • Customer administrators use granular controls to configure what data is collected from users and what collected data is viewable by admins within the Workspace ONE console. Use granular role-based access controls to restrict the depth of device management information and features available to each administrative console user.
    • For Workspace ONE UEM configure Collect and Display, Collect Do Not Display, and Do Not Collect settings for user data:
      • GPS Data
      • Carrier/Country Code
      • Roaming Status
      • Cellular Data Usage
      • Call Usage
      • SMS Usage
      • Device Phone Number
      • Personal Application
      • Unmanaged Profiles
      • Public IP Address
    • Customer administrators can choose whether or not to display the following user information:
      https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/1904/UEM_Managing_Devices/GUID-AWT-CONFIGUREPRIVACYSETTINGS.html
      • First Name
      • Last Name
      • Phone Number
      • Email Accounts
      • Username

     Is the data in the cloud encrypted?

    • Yes – Certificate private keys, client cookie data and tokens are encrypted in the solution database with a derived AES 256-bit symmetric encryption with an IV.
      • Customers can enable encryption at rest for user first name, last name, email and phone number
      • We do not store AD/LDAP passwords in our database
    • VMware Content Locker, VMware Boxer and VMware AirWatch App Wrapping solutions use AES 256-bit encryption to secure data on mobile devices
    • Data between the web console (management console and Self Service Portal) and device is encrypted using HTTPS and is not decrypted at any point along the path
      • VMware leverages a 2048-bit key in the SaaS environment
      • An application server controls communication between the web console and the database to limit the potential for malicious actions through SQL injection or invalid input: No direct calls are made to the database
    • All sensitive interactions between AirWatch nodes (AirWatch hosting servers and the VMware Enterprise Systems Connector), between VMware AirWatch Agent and the AirWatch solution are accomplished using message level encryption. For these message level interactions, the AirWatch Cloud uses 2048-bit RSA asymmetric key encryption using digital certificates.
    • We encrypt AD/LDAP credentials on the device via AES 256-bit and store them in the device keychain (internal memory)

    I hope this short article helps everyone get the information they require for a Workspace ONE UEM SaaS project. I shared the same information with several customers from different businesses, and so far all legal departments have accepted the statements and moved forward with their Workspace ONE UEM projects. 🙂

    Horizon on VMC on AWS Basics

    VMC on AWS

    In Switzerland, where we have a lot of small to medium-sized companies, the demand for a cloud solution is increasing. Customers are not yet ready to put all their servers and data into the cloud, so they go for a hybrid cloud strategy.

    And this has become even easier since VMware’s offering VMware Cloud on AWS (VMC on AWS) exists. This service, powered by VMware Cloud Foundation (VCF), brings VMware’s SDDC stack to the AWS cloud and runs the compute, storage and network products (vSphere, vSAN, NSX) on dedicated bare-metal AWS hardware.

    VMC on AWS

    If you would like to try this offering, there is the option of a Single Host SDDC, which is a time-bound starter configuration limited to 30 days. After 30 days your Single Host SDDC will be deleted, and all data will be lost as well. If you scale up to a 3-host SDDC, you retain all your data and your SDDC is no longer time bound.

    Availability

    This fairly new service is already available in 13 global regions and has seen 200+ released features since its launch. VMC on AWS is available almost everywhere – in the US and Asia Pacific, for example – and in Europe the service is hosted in Frankfurt, London, Paris and Ireland.

    Use Cases

    It’s not hard to guess the use cases for a service like this. If you are building up a new IT infrastructure and don’t want to run your own data center or purchase any servers, then you might want to consider VMC on AWS. Another project could be expanding your market into a new geography and extending your footprint into the cloud, based on a VMware-consistent and enterprise-grade environment in the AWS cloud.

    A few customers are also finding a new way to easily deliver business continuity with VMware Site Recovery, taking advantage of VMC on AWS to deliver a robust Disaster Recovery as a Service (DRaaS) option.

    Another reason could be that your on-premises data center is in danger because of bad weather and you want to migrate all your workloads to another region.

    Or you just want to quickly build a dev/test environment or do a PoC of a specific solution or application (e.g. VMware Horizon).

    Elastic DRS

    In my opinion, EDRS is one of the best reasons to go for VMC on AWS. EDRS allows you to get the capacity you need in minutes to meet temporary or unplanned demand. You have the possibility to scale out and scale in depending on the generated recommendation.

    A scale-out recommendation is generated when any of CPU, memory, or storage utilization remains consistently above thresholds. For example, if storage utilization goes above 75% but memory and CPU utilization remain below their respective thresholds, a scale-out recommendation is generated.

     

     A scale-in recommendation is generated when CPU, memory, and storage utilization all remain consistently below thresholds.

    This is interesting if your desktop pool is creating more instant clones and, for example, memory utilization rises above the threshold. There is also a safety check included in the algorithm, which runs every 5 minutes, to give the cluster time to cool off between changes.
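
    The EDRS recommendation logic described above can be sketched roughly like this (the threshold values are purely illustrative, not the actual VMC on AWS defaults):

```python
# Simplified sketch of the EDRS recommendation logic described above.
# Threshold values are illustrative only, not the real VMC on AWS defaults.
SCALE_OUT = {"cpu": 0.90, "memory": 0.80, "storage": 0.75}
SCALE_IN = {"cpu": 0.50, "memory": 0.50, "storage": 0.20}

def edrs_recommendation(utilization: dict) -> str:
    # Scale out if ANY resource is consistently above its high threshold
    if any(utilization[r] > SCALE_OUT[r] for r in SCALE_OUT):
        return "scale-out"
    # Scale in only if ALL resources are below their low thresholds
    if all(utilization[r] < SCALE_IN[r] for r in SCALE_IN):
        return "scale-in"
    return "no-change"

# Example from the text: storage above threshold, CPU and memory below
print(edrs_recommendation({"cpu": 0.40, "memory": 0.60, "storage": 0.80}))  # scale-out
```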

    If you check the EDRS settings you have the option for the “Best Performance” or “Lowest Cost” policy. More information can be found here.

    Horizon on VMC on AWS

    For customers who are already familiar with a Horizon 7 on-premises deployment, Horizon on VMC on AWS lets you leverage the same architecture and the familiar tools. The only difference now is that vSphere is outsourced.

    Use Cases

    Horizon can be deployed on VMware Cloud on AWS for different scenarios. You could have the same reasons as before – data center expansion or a disaster recovery site in the cloud. But the main reason why a customer goes for Horizon on VMC on AWS is flexibility combined with application locality.

    Horizon 7 on VMC on AWS

    We have customers who have operated an on-premises infrastructure for years, and suddenly they are open to a cloud infrastructure. Because the SDDC stack in the cloud is the same as in the private cloud, the migration can be done very easily. You can even use the same management tools as before.

    Minimum SDDC Size

    The minimum number of hosts required per SDDC on VMware Cloud on AWS for production use is 3 nodes (hosts). For testing purposes, a 1-node SDDC is also available. However, since a single node does not support HA, it’s not recommended for production use.

    Cloud Pod Architecture for Hybrid Cloud

    If you are familiar with the pod and block architecture you can start to create your architecture design. This hasn’t changed for the offering on VMC on AWS but there is a slight difference:

    • Each pod consists of a single SDDC
    • Each SDDC only has a single vCenter server
    • Each Horizon pod on VMC on AWS consists of a single block

    Each SDDC only has one compute gateway, which limits the connections to ~2’000 VMs or user sessions. This means that the actual limit per pod on VMC on AWS is ~2’000 sessions as well. Once the number of compute gateways per SDDC can be increased, Horizon 7 on VMC on AWS will have scalability comparable to an on-premises installation.

    You can deploy a hybrid cloud environment when you use the Cloud Pod Architecture to interconnect your on-premises and Horizon pods on VMC on AWS. You can also stretch CPA across pods in two or more VMware Cloud on AWS data centers with the same flexibility to entitle your users to one or multiple pods as desired.
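    With the ~2’000-session limit per SDDC in mind, a quick back-of-the-envelope calculation shows how many interconnected pods a given user population would need. The limit is the approximation quoted above, not an official sizing figure:

```python
import math

SESSIONS_PER_POD = 2000  # approximate per-SDDC limit quoted above

def pods_needed(total_sessions: int) -> int:
    """Minimum number of VMC on AWS pods (SDDCs) for a given session count."""
    return math.ceil(total_sessions / SESSIONS_PER_POD)

# e.g. 6'500 concurrent sessions would need 4 pods, joined via
# Cloud Pod Architecture and entitled to the users as desired.
print(pods_needed(6500))  # 4
```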

    Supported Features

    The deployment of Horizon 7 on VMC on AWS started with Horizon 7.5, but there was no feature parity at that time. With the release of Horizon 7.7 and App Volumes 2.15 we finally got the requested feature parity. This means that since Horizon 7.7 we can use Instant Clones, App Volumes and UEM. At the time of writing the vGPU feature is not available yet, but VMware is working on it with Amazon. With the release of Horizon 7.8, a pool on VMware Cloud on AWS is now capable of using multiple network segments, allowing you to use fewer pools and/or smaller scopes. Please consult this KB for the currently supported features.

    Use AWS Native Services

    When you set up the Horizon 7 environment in VMware Cloud on AWS you have to install and configure the following components:

    • Active Directory
    • DNS 
    • Horizon Connection Servers
    • DHCP
    • etc.

    If you are deploying Horizon 7 in a hybrid cloud environment by linking the on-premises pod with the VMC on AWS pod, you must prepare the on-premises Microsoft Active Directory (AD) to access the AD on VMware Cloud on AWS.

    My recommendation: Use the AWS native services if possible 🙂

    AWS Directory Services

    AWS Managed Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO).

    Amazon Relational Database Service

    Amazon RDS is available on several database instance types – optimized for memory, performance or I/O – and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

    This service allows you to quickly set up a SQL Express (not recommended for production) or regular SQL Server instance which can be used for the Horizon Event DB or App Volumes.
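    As a sketch, the parameters for such an RDS SQL Server instance hosting the Horizon Event DB could look like this with boto3. All identifiers, sizes and credentials below are placeholders I made up; `sqlserver-ex` would be the Express engine, which, as noted, is not recommended for production:

```python
# Sketch: RDS parameters for a SQL Server instance hosting the Horizon Event DB.
# Every name, size and credential here is a hypothetical placeholder.
event_db_params = {
    "DBInstanceIdentifier": "horizon-event-db",
    "Engine": "sqlserver-se",         # Standard Edition ("sqlserver-ex" = Express)
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,          # GiB
    "MasterUsername": "horizonadmin",
    "MasterUserPassword": "CHANGE-ME",
    "LicenseModel": "license-included",
}

# With AWS credentials configured, the actual provisioning call would be:
#   import boto3
#   boto3.client("rds").create_db_instance(**event_db_params)
print(event_db_params["DBInstanceIdentifier"])
```

    Afterwards you would point the Horizon Connection Server’s event database settings (or App Volumes) at the instance endpoint that RDS returns.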

    Amazon FSx for Windows File Server

    Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. Built on Windows Server, Amazon FSx provides shared file storage with the compatibility and features that your Windows-based applications rely on, including full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).

    At the time of writing, the FSx service has not yet been officially tested and qualified for User Environment Manager (UEM), but that’s no problem – technically it works totally fine.

    Amazon Route 53

    The connectivity to data centers in the cloud can be a challenge. You need to manage the external namespace to give users access to their desktop in the cloud (or on-prem). For a multi-site architecture the solution is always Global Server Load Balancing (GSLB), but how is this done when you cannot install your physical appliance anymore (in your VMC on AWS SDDC)?

    The answer is easy: Leverage Amazon Route 53!

    Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. 

    Check Andrew Morgan’s blog article if you need more information about Route 53.
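    A GSLB-style failover between an on-premises Unified Access Gateway and one in the VMC SDDC can be sketched as a Route 53 failover record set. The hostname, IP addresses and health check ID below are invented for illustration:

```python
# Sketch: Route 53 failover routing for "desktop.example.com" -
# PRIMARY points to the on-prem UAG, SECONDARY to the UAG in the VMC SDDC.
# All values below are hypothetical placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "desktop.example.com",
                "Type": "A",
                "SetIdentifier": "onprem-uag",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                # Route 53 fails over to SECONDARY when this check is unhealthy
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "desktop.example.com",
                "Type": "A",
                "SetIdentifier": "vmc-uag",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "198.51.100.20"}],
            },
        },
    ]
}

# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZEXAMPLE", ChangeBatch=change_batch)
primary = change_batch["Changes"][0]["ResourceRecordSet"]
print(primary["Failover"])
```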

    Horizon on VMC on AWS rocks! 🙂

     

    vSAN Basics for a Virtual Desktop Infrastructure with VMware Horizon

    As an EUC architect you need fundamental knowledge about VMware’s SDDC stack and this time I would like to share some more basics about VMware vSAN for VMware Horizon.

    In part 5 of my VCAP7-DTM Design exam series I already posted some YouTube videos about vSAN in case you prefer videos instead of reading. To further prove my vSAN knowledge I decided to take the vSAN Specialist exam, which focuses on version 6.6.

    To extend my vSAN skills and to prep myself for this certification I have bought the VMware vSAN 6.7 U1 Deep Dive book which is available on Amazon.

    vSAN 6.7 U1 Deep Dive

    vSAN Basics – Facts and Requirements

    Out in the field not every EUC guy has enough basic knowledge about vSAN, so I want to provide some facts about this technology here. This is not an article about all the background information and detailed stuff you can do with vSAN, but it should help you get a basic understanding. If you need more details about vSAN I highly recommend the vSAN 6.7 U1 Deep Dive book and the content available on storagehub.vmware.com.

    • The vSAN cluster requires at least one flash device and capacity device (magnetic or flash)
    • A minimum of three hosts is required unless you go for a two-node configuration (requires a witness appliance)
    • Each host participating in the vSAN cluster requires a vSAN enabled VMkernel port
    • Hybrid configurations require a minimum of one 1GbE NIC, 10GbE is recommended by VMware
    • All-Flash configurations require a minimum of one 10GbE NIC
    • vSAN can use RAID-1 (mirroring) and RAID-5/6 (erasure coding) for the VM storage policies
    • RAID-1 is used for performance reasons, erasure coding is used for capacity reasons
    • Disk groups require one flash device for the cache tier and one or more flash/magnetic devices for the capacity tier
    • There can be only one cache device per disk group
    • Hybrid configuration – The SSD cache is used for read and write (70/30)
    • All-Flash configuration – The SSD cache is used 100% as a write cache
    • Since version 6.6 there is no multicast requirement anymore
    • vSAN supports IPv4 and IPv6
    • vSphere HA needs to be disabled before vSAN can be enabled and configured
    • The raw capacity of a vSAN datastore is calculated by the number of capacity devices multiplied by the number of ESXi hosts (e.g. 5 x 2TB x 6 hosts = 60 TB raw)
    • Deduplication and compression are only available in all-flash configurations
    • vSAN stores VM data in objects (VM home, swap, VMDK, snapshots)
    • The witness does not store any VM specific data, only metadata
    • vSAN provides data at rest encryption which is a cluster-wide feature
    • vSAN integrates with CBRC (host memory read cache) which is mostly used for VMware Horizon
    • By default, the default VM storage policy is assigned to a VM
    • Each stretched cluster must have its own witness host (no additional vSAN license needed)
    • Fault domains are mostly described with the term “rack awareness”
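    The raw-capacity rule from the list above is easy to verify with a few lines; the disk counts and sizes are just the example values from the bullet point:

```python
def vsan_raw_capacity_tb(capacity_devices_per_host: int,
                         device_size_tb: float,
                         hosts: int) -> float:
    """Raw vSAN datastore capacity: capacity devices x device size x hosts.

    Note: this is RAW capacity - usable capacity depends on the storage
    policy (e.g. RAID-1 mirroring halves it), slack space and overhead.
    """
    return capacity_devices_per_host * device_size_tb * hosts

# Example from the list above: 5 devices x 2 TB x 6 hosts = 60 TB raw
print(vsan_raw_capacity_tb(5, 2, 6))
```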

    vSAN for VMware Horizon

    The following information can be found in the VMware Docs for Horizon:

    When you use vSAN, Horizon 7 defines virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles, which you can modify. Storage is provisioned and automatically configured according to the assigned policies. The default policies that are created during desktop pool creation depend on the type of pool you create.

    This means that Horizon will create storage policies when a desktop pool gets created. To get more information I will provision a floating Windows 10 instant clone desktop pool. Before I do that, let’s first have a look at the policies which will appear in vCenter depending on the pool type:

    Since I’m going to create a floating instant clone desktop pool, I assume that I should see some of the storage policies marked in yellow.

    Instant Clones

    First of all we need to take a quick look again at instant clones. I only cover instant clones since it’s the provisioning method recommended by VMware. As we can learn from this VMware blog post, you can massively reduce the time for a desktop to be provisioned (compared to View Composer Linked Clones).

    VMware Instant Clones

    The big advantage of the instant clone technology (vmFork) is the in-memory cloning technique of a running parent VM.

    The following table summarizes the types of VMs used or created during the instant-cloning process:

    Instant Cloning VMs
    Source: VMWARE HORIZON 7 INSTANT-CLONE DESKTOPS AND RDSH SERVERS 

    Horizon Default Storage Policies

    To add a desktop pool I have created my master image first and took a snapshot of it. In my case the VM is called “dummyVM_blog” and has the “vSAN Default Storage Policy” assigned.

    How does it go from here when I create the floating Windows 10 instant clone desktop pool?

    Instant Clone Technology

    The second step in the process is where the instant-clone engine uses the master VM snapshot to create one template VM. This template VM is linked to the master VM. My template VM automatically got the following storage policy assigned:

    The third step is where the replica VM gets created with the usage of the template VM. The replica VM is a thin-provisioned full clone of the internal template VM. The replica VM shares a read disk with the instant-clone VMs after they are created. I only have the vSAN datastore available and one replica VM is created per datastore. The replica VM automatically got the following storage policy assigned:

    The fourth step involves the snapshot of the replica VM which is used to create one running parent VM per ESXi host per datastore. The parent VM automatically got the following storage policies assigned:

    Afterwards, the running parent VM is used to create the instant clone, but the instant clone will be linked to the replica VM and not the running parent VM. This means a parent VM can be deleted without affecting the instant clone. The instant clone automatically got the following storage policies assigned:

    And the complete stack of VMs with the two-node vSAN cluster in my home lab, without any further datastores, looks like this:

    vCenter Resource Pool 

    Now we know the workflow from a master VM to the instant clone and which default storage policies get created and assigned by VMware Horizon. We know from the VMware Docs that FTT=1 and one stripe per object is configured and that there isn’t any difference except for the name. I checked all storage policies in the GUI again and indeed they are all exactly the same. Note this:

    Once these policies are created for the virtual machines, they will never be changed by Horizon 7

    Even though I didn’t use linked clones with a persistent disk, the storage policy PERSISTENT_DISK_<guid> gets created. With instant clones there is no option for a persistent disk yet (you have to use App Volumes with writable volumes), but I think that this will come for instant clones in the future and then we won’t need View Composer anymore. 🙂

    App Volumes Caveat

    Don’t forget this caveat for App Volumes when using a vSAN stretched cluster.

    VMware Mirage – Alternatives

    As some of you know, Mirage was (and still is) a revolutionary technology. Wanova released it in 2011, and in 2012 Mirage became part of VMware.

    VMware Mirage is used by customers for their desktop image management and for backup and recovery requirements.

    VMware Mirage provides next-generation desktop image management for physical desktops and POS devices across distributed environments. Automate backup and recovery and simplify Windows migrations.

    Mirage is and was the solution for certain use cases and solved common desktop challenges. Therefore not all customers are happy that Mirage reaches end of support (EOS) on June 30, 2019. 🙁

    But why is VMware Mirage being removed from support?

    Well, the answer is very simple. Today, the market is heading in two directions – it’s all about the applications and end-user devices (called the Digital Workspace). That’s why customers should move or are somehow forced to move to a Unified Endpoint Management solution which is considered to be “the” Windows desktop management solution of the future. The future of Windows is apparently cloud based and Mirage has not been designed or architected for this.

    What are the alternatives?

    VMware has no successor or product which can replace all of the features and functions Mirage provided, but Workspace ONE is the official alternative solution when it comes to Windows desktop management.

    There are really a lot of use cases and reasons why customers in the past decided to choose Mirage:

    • Reduce Management Complexity (e.g. single management console)
    • Desktop Backup and Recovery (automated and continuous system or user data backup)
    • Image Management (image layering)
    • Patch Management
    • Security & Compliance (auditing and encrypted connections)
    • Simple Desktop OS Migrations (e.g. Windows 7 to Windows 10 migrations)

    VMware Mirage really simplified desktop management and provided a layered approach when it comes to OS and application rollouts. Customers also had the use case where the physical desktop was not always connected to the corporate network, a common challenge IT departments were facing.

    The desktop images are stored in your own data center with secure encrypted access from all endpoints. You can also customize access rights to data and apps.  Even auditing capabilities are available for compliance requirements.
    And the best and most loved feature was the possibility for a full system backup and recovery!

    IT people love Mirage because it was so simple to restore any damaged and lost device to the most recent state (snapshot).

    For branch offices with no IT onsite, Mirage was also the perfect fit. An administrator could distribute updates or Windows images to all remote laptops and PCs without any user interaction – maybe a reboot was required now and then. But that’s all!

    In case of bandwidth problems you could also take advantage of the Branch Reflector technology, which ensured that one endpoint downloads the image updates and then distributes them locally to other computers (peers), which relieved the WAN connection drastically.

    Can WorkspaceONE UEM replace Mirage?

    From a technical perspective my opinion is definitely NO. Workspace ONE does not have the complete feature set of Mirage when it comes to Windows 10 desktop management, although I have to say both are almost congruent.

    I agree that Workspace ONE (WS1) is the logical way to “replace” Mirage, but you have to know this:

    • WS1 cannot manage desktop images for OS deployments. Nowadays, it is expected that a desktop is delivered pre-staged with a Windows 10 OS from the vendor or that your IT department is doing the staging for example with WDS/MDT.
    • WS1 has no backup and recovery function. If you use Dell Factory Provisioning you can go back to a “restore point” where all of your pre-installed and manually installed applications get restored after a device wipe, for example. But if the local hard disk fails and this restore partition is gone, then you have to get your device or hard disk replaced. Without Dell Factory Provisioning this means that IT still has to deploy the desktop image with WDS/MDT.

    For some special use cases it is even necessary to implement VMware Horizon, User Environment Manager, OneDrive for Business etc., but even then WS1 is a good complement since it can also be used for persistent virtual desktops!

    As you can see, a transition from Mirage to WS1 is not so easy, and the few but most important differences are the reasons why customers and IT admins are not so amused about the EOS announcement of VMware Mirage.

    VCP-DW 2018 Exam Experience

    On the 30th November 2018 I passed my VCAP7-DTM Design exam and now I would like to share my VCP-DW 2018 (2V0-761) exam experience with you guys.

    I’m happy to share that I also passed this exam today and I thought it might be helpful to share my exam experience, even though a new VCP-DW 2019 exam will be released on 28th February 2019, since it’s still a pretty new certification and not much information can be found in the vCommunity.

    How did I prepare myself? To be honest, I had almost no hands-on experience and therefore I had to get the most out of the available VMware Workspace ONE documentation. I already had basic knowledge from my daily work as a solution architect, but it was obvious that this was not enough to pass. Most of my basic knowledge I gained from the VMware Workspace ONE: Deploy and Manage [V9.x] course, which was really helpful in this case.

    If you check the exam prep guide you can see that you have to study tons of PDFs and parts of the online documentation. 

    I didn’t check all the links and documents in the exam prep guide, but I can recommend reading these additional docs:

    In my opinion you’ll get a very good understanding of Workspace ONE (UEM and IDM) if you read all the documents above. In addition to the papers I recommend getting some hands-on experience with the Workspace ONE UEM and IDM consoles.

    As a VMware employee I have access to VMware TestDrive where I have a dedicated Workspace ONE UEM sandbox environment. I enrolled an Android, an iOS and two Windows 10 devices and configured a few profiles (payloads). I also deployed the Identity Manager Connector in my homelab to sync my Active Directory accounts with my Identity Manager instance, which also enables the synchronization of my future Horizon resources like applications and desktops.

    I think I spent around two weeks on preparation, including the classroom training at the AirWatch Training Facility in Milton Keynes, UK.

    The exam (version 2018) itself consists of 65 multiple-choice and drag-and-drop questions, and I had 135 minutes to answer them all. If you are prepared and know your stuff, I doubt that you will need more than one hour, but this could change with the new VCP-DW 2019. 🙂

    I’m just happy that I have a second VCP exam in my pocket and now I have to think about the next certification. My scope as solution architect will change a little. In the future I will also cover SDDC (software-defined data center) topics like vSphere, vSAN, NSX, VMware Cloud Foundation, Cloud Assembly and VMC on AWS. That’s why I’m thinking about earning the VCP-DCV 2019 or the TOGAF certification.

    VCAP7-DTM Design Exam Passed

    On 21 October I took my first shot at the VCAP7-DTM Design exam and failed, as you already know from this article. Today I am happy to share that I finally passed the exam! 🙂

    What did I do with the information and notes about my weaknesses from the last exam score report? I read a lot of additional VMware documents and guides about:

    • Integrating Airwatch and VMware Identity Manager (vIDM)
    • Cloud Pod Architecture
    • PCoIP/Blast Display Protocol
    • VMware Identity Manager
    • vSAN 6.2 Essentials from Cormac Hogan and Duncan Epping
    • Horizon Apps (RDSH Pools)
    • Database Requirements
    • Firewall Ports
    • vRealize Operations for Horizon
    • Composer
    • Horizon Security
    • App Volumes & ThinApp
    • Workspace ONE Architecture (SaaS & on-premises)
    • Unified Access Gateway
    • VDI Design Guide from Johan van Amersfoort

    Today, I had a few different questions during the exam but reading more PDFs about the above mentioned topics helped me to pass, as it seems. In addition to that, I attended a Digital Workspace Livefire Architecture & Design training which is available for VMware employees and partners. The focus of this training was not only about designing a Horizon architecture, but also about VMware’s EUC design methodology.

    If you have the option to attend classroom trainings, then I would recommend the following:

    There were two things I struggled with during the exam: sometimes the questions were not clear enough, so I had to make assumptions about what was meant, and the exam is based on Horizon 7.2 and other old product versions of the Horizon suite:

    • VMware Identity Manager 2.8
    • App Volumes 2.12
    • User Environment Manager 9.1
    • ThinApp 5.1
    • Unified Access Gateway 2.9
    • vSAN 6.2
    • vSphere 6.5
    • vRealize Operations 6.4
    • Mirage 5.x

    But maybe it’s only me, since I have almost no hands-on experience with Horizon, none with Workspace ONE, and in addition I have only been with VMware for 7 months now. 🙂

    It is time for an update, and VMware already announced that they are publishing a new design exam version called VCAP7-DTM 2019 next year.

    What about VCIX7-DTM?

     In part 2 of my VCAP7-DTM Design exam blog series I mentioned this:

    Since no VCAP7-DTM Deploy exam is available and it’s not clear yet when this exam will be published, you only need the VCAP7-DTM Design certification to earn the VCIX7-DTM status. I have got this information from VMware certification.

    This information is not correct, sorry. VMware certification pulled their statement back and provided the information that you need to pass the VCAP6-DTM Deploy exam, as long as no VCAP7-DTM Deploy is available, to earn the VCIX7-DTM badge.

    I don’t know yet if I want to pursue the VCIX7-DTM certification and will think about it when the deploy exam for Horizon 7 is available.

    What’s next?

    Hm… I am going to spend more time with my family again and will use some of my 3 weeks of vacation to assemble and install my new home lab.

    Then I also have a few ideas for topics to write about, like:

    • Multi-Domain and Trust with Horizon 7.x
    • Linux VDI Basics with Horizon 7.x
    • SD-WAN for Horizon 7.x
    • NSX Load Balancing for Horizon 7.x

    These are only a few items from my list, but let’s see if I really find the time to write a few articles.

    In regards to certification I think I will continue with these exams:

    This has no priority for now and can wait until next year! Or… I could try the VCP-DW 2018 since I have vacation. Let’s see 😀

    Unified Endpoint Management – The Modern EMM

    I was touring through Switzerland and had the honor to speak at five events of a “Mobility, Workspace & Licensing” roadshow for SMB customers with up to 250 employees. Before I started my presentation I always asked the audience three questions:

    • Who knows what MDM or EMM (Mobile Device Management or Enterprise Mobility Management) is?
    • Have you ever heard of Unified Endpoint Management (UEM)?
    • Does the name Airwatch or Workspace ONE ring any bells?

    This is my way of finding out who is sitting in front of me and how deep I should or can go from a technical perspective. And I was shocked and really surprised how few people raised their hands – only between 1 and 5 persons on average. And the event room was filled with 50 to 60 persons! I don’t know how popular EMM and UEM are in other countries, but I think this is a “Swiss thing” when you work with smaller companies. We need to make people aware that UEM is coming! 🙂

    That’s why I decided to write an article about Enterprise Mobility Management and how it transformed or evolved to the term Unified Endpoint Management.

    The basic idea of Mobile Device Management was to have an asset management solution which provides an overview of the smartphones in a company (at the beginning iPhones were very popular). Enterprises were interested, for example, in disabling Siri and ensuring that corporate mobile devices stayed within policy guidelines. In addition, if you could lock and wipe the devices, you were all set.

    However, business needs and requirements changed and suddenly employees wanted or even demanded access to applications and content. Here we are talking about features like mail client configuration, WiFi certificate configuration, and content and mobile application management (MAM); topics like containerization and identity management – security in general – also became important. So MDM and MAM were now part of Enterprise Mobility Management.

    Vendors like VMware, Citrix, MobileIron and so on wanted to go further and offer the same management and configuration possibilities for operating systems like Windows or Mac OS. If I recall correctly this must have been between 2013 and 2017.

    One of the biggest topics and challenges of this time was the creation of so-called IT silos. There are many ways IT silos get built, but in the device management area it’s easy to give an example. Let’s say that you are working for an enterprise with 3’000 employees and you have to manage devices and operating systems like:

    • PCs & Laptops (Windows OS)
    • MacBooks or Mac OS in general
    • Android & iOS devices
    • Virtual apps & desktops (Windows OS)

    A typical scenario: your IT deploys the Windows OS with SCCM (Configuration Manager), Mac OS devices are not managed or IT uses JAMF or does manual work, an EMM solution covers iOS and Android, and for the VDI or server-based computing (Terminal Server) environment the responsible IT team uses different deployment and management tools. This is an example of how silos get built, and nowadays they prevent IT from moving at the speed of business. VMware’s UEM solution to break up those silos is called Workspace ONE UEM.

    The EMM or mobility market is moving in two directions:

     

    Today, it’s all about the digital workspace – access ANY application, from ANY cloud, from ANY device and ANYTIME.

    People need access to mobile apps, internal apps, SaaS apps and Win32 (legacy) apps. On the other hand, we want to use any device, no matter if it’s a regular fat client, the laptop at home, a wearable or a rugged or IoT device. If you combine “App Access” and “UEM” you get a new direction called “Digital Workspace”. Again, this means that Digital Workspace is just another name for the combined EUC (end-user computing) platform.

    UEM is a term which has been introduced by Gartner as a replacement for the client management tool (CMT) and Enterprise Mobility Management.

    Gartner defines Unified Endpoint Management as a new class of tools which function as a unified management interface – a single pane of glass. UEM should give enterprises the possibility to manage and configure iOS, Android, Mac OS and Windows 10 devices with a single unified console. With this in mind, I would call UEM the modern EMM.

    Modern Management – Windows 10

    Why is Windows 10 suddenly a topic when we talk about UEM? Well, Microsoft has put a lot of effort into their Windows 10 operating system and is providing more and more APIs that allow a richer feature set for the modern management approach – the same experience and approach we already have with mobile device management. Microsoft is seeking to simplify Windows 10 management and I have to say that they have done a fantastic job so far!

    Modern Management, whether with VMware Workspace ONE UEM or a competitor’s product, is nothing other than moving away from network-based deployment to cloud-based deployment.

    Traditional means staging with SCCM for example, applying group policies, deploying software packages and performing Windows Updates on a domain-joined PC.

    Modern means that we have the same out-of-the-box experience (OOBE) with our Windows 10 devices as with an iPhone, for example. We want to unbox the device, perform a basic configuration and start consuming. By consuming I mean installing all the apps I want wherever I am at the moment, whether on a less secure network at home, at a friend’s place, on a beach, on a train or at the airport.

    Modern also means that I receive my policies (GPOs) and basic configuration (WiFi, e-mail, BitLocker etc.) over the air across any network. And my device doesn’t need to be domain-joined (but it can be). Windows Updates can also be configured and deployed directly from Microsoft or still with WSUS.

    Mix Physical and Virtual Desktops with Modern Management

    VMware’s vision and my understanding of modern management means that we can and should be able to manage any persistent desktop even if it’s a virtual machine. During my presentation I told the audience that they could have Windows 10 VMs in their on-premises data center, on AWS, Azure or even on a MacBook.

    This use case has NOT been tested by VMware yet, but what do you think – could we manage the recently announced Windows Virtual Desktops (WVD), which are only available through Microsoft Azure? I hope to give you more information about this as soon as I have spoken to product management.

    But you see where this is going. Modern management offers us new possibilities for certain use cases, and we can on-board contractors or seasonal workers even more easily if no separate VDI/RDSH-based solution is available.

    And let’s assume that in 2018/2019 all newly ordered hardware is pre-staged with the Windows 10 version we ask for. For a virtual persistent desktop this is most certainly not the case, but think again about the Windows 10 offerings from Azure, where Windows 10 is also “pre-staged”.

    Do we need UEM and Modern Management? Are we prepared for it?

    Well, if we go by the definition of UEM, then we already use Unified Endpoint Management, since EMM is part of it – just without the Windows 10 client management part. A survey in Switzerland has shown that only 50% of companies are dealing with this topic. And to be clear: an adoption or implementation of UEM takes several years. Gartner predicts that companies have to start working with UEM within the next three to five years.

    What preparation is needed to move to the new modern cloud-based management approach? There are different options depending on your current situation.

    If you are running on Windows 7 and use Configuration Manager (SCCM) for deployment, you could use Workspace ONE’s AirLift technology to build a co-management setup. But first you need to migrate from Windows 7 to Windows 10 and use SCCM to deploy our Intelligent Hub (formerly known as the AirWatch Agent). Then you’re good to go and can profit from a transition phase until all clients have been migrated. And in the end you can get rid of SCCM completely.

    If you use another tool or install Windows 10 manually, then you just need to install the Intelligent Hub, enroll the device and you’re prepared.

    We can also leverage other features and technologies, like AutoPilot or Dell Factory Provisioning for Workspace ONE, which are not covered in this article.

    Which UEM Solution for your Digital Workspace?

    If you are responsible for modernizing client and device management in your company, keep the following advice in mind. Check your requirements and define a mobility or general IT strategy for your company. Then look for the vendors and solutions which meet your requirements and vision. Ignore who is in the top right of the Gartner Magic Quadrant or which vendor claims to have “the ONE” digital workspace solution. In the end you, your customers and your colleagues must be happy! 🙂 

    In the future I will provide more information about Unified Endpoint Management and Modern Management. We are in the early market phase when it comes to UEM and I’m curious what’s coming within the next one or two years.

    The terms “intelligence” and “analytics” have not been covered yet, and they are very interesting because they are about new features and technology based on artificial intelligence and machine learning. With VMware’s Workspace ONE Intelligence, for example, you get new options for “insights” and “automation”: you have data, can collect it, and can run it through a rules engine (automation). But this is something for another time.

    New Supermicro Home Lab

    For a few years I’ve been using three Intel NUC Skull Canyon (NUC6i7KYK) mini PCs for my home lab. Each NUC is equipped with the following:

    • 6th Gen Intel i7-6770HQ processor with Intel Iris Pro graphics
    • 2x 16GB Kingston Value RAM DDR4-2133
    • 2x 500GB Samsung 960 EVO NVMe M.2
    • 1x Transcend JetFlash 710S USB boot device

    These small computers were nice in terms of space, but they are limited to 32GB RAM, have only one network interface and offer no separate management interface.

    This was enough and acceptable when I worked with XenServer, used local storage, and just had to validate XenDesktop/XenApp configurations and designs during my time as a Citrix consultant.

    When I started to replace XenServer with ESXi and created a 3-node vSAN cluster for my first Horizon 7 environment, everything was running fine at the beginning. But after a while I had strange issues with vMotions, OS installations, and VCSA or ESXi upgrades.

    So, I thought it was time to build a “real” home lab and was looking for ideas. After doing some research and talking to my colleague Erik Bussink, it was clear to me that I had to build my compute nodes on a Supermicro mainboard. As you may know, the Skull Canyons are not that cheap, and therefore I will continue using them for my domain controller VMs, the vSAN witness, the vCenter Server appliance, etc.

    Yes, my new home lab is going to be a 2-node vSAN cluster.

    Motherboard

    I found two Supermicro X11SPM-TF motherboards at a reduced price because people had ordered and never used them. This was my chance and a “sign” that I had to buy the parts for my new home lab NOW! Let’s pretend it’s my Christmas gift. 😀

    The key features for me? Integrated IPMI remote management and two onboard 10GBase-T ports, exactly what the NUCs were missing.

    Chassis

    I went for the Fractal Design Node 804 because it offers enough space for the hardware and good cooling. I also like the square form factor, which allows me to stack the cases.

    CPU

    I need a decent number of cores to run tests and have enough performance in general. I will mainly run Workspace ONE and Horizon workloads (multi-site architectures) in my lab, but this will change in the future. So I have chosen the 8-core Intel Xeon Silver 4110 processor with 2.10 GHz.

    Memory

    RAM was always a limiting factor with my NUCs. I will start with two 32GB 2666 MHz Kingston Server Premier modules for each ESXi host (64GB per host in total). If memory prices come down and I need more capacity, I can easily expand the system.

    Boot Device

    A Samsung 860 EVO Basic 250GB, which is way too much capacity for ESXi, but the price is low and I could repurpose the disk (e.g. for a new PC) if needed.

    Caching Device for vSAN

    I will remove one Samsung 960 EVO 500GB M.2 from each NUC and use them for the vSAN caching tier. Both NUCs will still have one 960 EVO 500GB left to be used as local storage.

    Capacity Device for vSAN

    Samsung 860 Evo Basic 1TB.
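    Claiming these disks for vSAN can later be scripted instead of clicked through. Here is a minimal PowerCLI sketch, assuming a cluster named “HomeLab” and a vCenter at vcsa.lab.local (both illustrative names, as are the device canonical names); in each disk group the 960 EVO NVMe serves as cache and the 860 EVO as capacity:

    ```shell
    # Hedged PowerCLI sketch - server, cluster and device names below are
    # placeholder assumptions, not my actual lab values.
    Connect-VIServer vcsa.lab.local

    # Enable vSAN on the cluster; a 2-node setup additionally requires a
    # witness appliance running outside the cluster (on a NUC in my case).
    Set-Cluster -Cluster "HomeLab" -VsanEnabled:$true -Confirm:$false

    # One disk group per host: 960 EVO NVMe as cache tier,
    # 860 EVO SATA as capacity tier.
    foreach ($esx in Get-Cluster "HomeLab" | Get-VMHost) {
        New-VsanDiskGroup -VMHost $esx `
            -SsdCanonicalName "t10.NVMe____Samsung_960_EVO_500GB" `
            -DataDiskCanonicalName "t10.ATA_____Samsung_860_EVO_1TB"
    }
    ```

    The actual canonical names can be looked up per host with `Get-VMHost | Get-VMHostDisk` or `esxcli storage core device list` before claiming the disks.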

    Network

    Currently, my home network only consists of Ubiquiti network devices with 1GbE interfaces.

    So I ordered the Ubiquiti 16-port 10G switch, which comes with four 1/10 Gigabit RJ45 ports, so no SFP+ modules are needed for now. Maybe in the future 😀

    This is the home lab configuration I ordered, and all parts should arrive by the end of November 2018.

    What do you think about this setup?

    Your feedback is very welcome!