Horizon and Workspace ONE Architecture for 250k Users Part 1

Disclaimer: This article is based on my own thoughts and experience and may not reflect a real-world design for a Horizon/Workspace ONE architecture of this size. The blog series focuses only on the Horizon or Workspace ONE infrastructure part and does not consider other criteria like CPU/RAM usage, IOPS, number of applications, use cases and so on. Please contact your partner or VMware’s Professional Services Organization (PSO) for a consulting engagement.

To my knowledge there is no Horizon implementation of this size at the time of writing. This topic, the architecture and the necessary number of VMs in the data center, has been important to me ever since I moved from Citrix Consulting to a VMware pre-sales role. I have always asked myself how VMware Horizon scales when there are more than 10’000 users.

250’000 users is the current maximum for VMware Horizon 7.8, and the goal is to figure out how many Horizon infrastructure servers – Connection Servers, App Volumes Managers (AVM), vCenter servers and Unified Access Gateway (UAG) appliances – are needed and how many pods should be configured and federated with the Cloud Pod Architecture (CPA) feature.

I will create my own architecture, meaning that I use the sizing and recommendation guides and design a Horizon 7 environment based on my current knowledge, experience and assumptions.

After that I’ll feed the Digital Workspace Designer tool with the necessary information and let this tool create an architecture, which I then compare with my design.

Scenario

This is the scenario I defined and will use for the sizing:  

Users: 250’000
Data Centers: 1 (to keep it simple)
Internal Users: 248’000
Remote Users: 2’000
Concurrency Internal Users: 80% (198’400 users)
Concurrency Remote Users: 50% (1’000 users)

Horizon Sizing Limits & Recommendations

This article is based on the current release of VMware Horizon 7 with the following sizing limits and recommendations:

Horizon version: 7.8
Max. number of active sessions in a Cloud Pod Architecture pod federation: 250’000
Active connections per pod: 10’000 VMs max for VDI (8’000 tested for instant clones)
Max. number of Connection Servers per pod: 7
Active sessions per Connection Server: 2’000
Max. number of VMs per vCenter: 10’000
Max. connections per UAG: 2’000 
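
Before comparing designs, here is a minimal sizing sketch in Python that reproduces the numbers used throughout this post. It only encodes the scenario and the limits listed above; rounding up with ceil and adding one spare UAG (n+1) are my own assumptions, not an official VMware formula.

```python
from math import ceil

# Scenario defined above
internal_users = 248_000
remote_users = 2_000
concurrent_internal = ceil(internal_users * 0.80)            # 198'400
concurrent_remote = ceil(remote_users * 0.50)                # 1'000
concurrent_total = concurrent_internal + concurrent_remote   # 199'400

# Horizon 7.8 limits and recommendations listed above
SESSIONS_PER_POD = 10_000
SESSIONS_PER_CONNECTION_SERVER = 2_000
VMS_PER_VCENTER = 10_000
SESSIONS_PER_UAG = 2_000

pods = ceil(concurrent_total / SESSIONS_PER_POD)                                      # 20
connection_servers = pods * ceil(SESSIONS_PER_POD / SESSIONS_PER_CONNECTION_SERVER)   # 5 per pod -> 100
vcenters_vdi = ceil(concurrent_total / VMS_PER_VCENTER)                               # 20 (one desktop VM per session)
uags = ceil(concurrent_remote / SESSIONS_PER_UAG) + 1                                 # 1 + 1 spare = 2 (n+1 assumption)

print(concurrent_total, pods, connection_servers, vcenters_vdi, uags)
```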

The Digital Workspace Designer lists the following Horizon Maximums:

 

Horizon Maximums Digital Workspace Designer

Please read my short article if you are not familiar with the Horizon Block and Pod Architecture.

Note: The App Volumes sizing limits and recommendations have been updated recently and no longer follow the old rule of thumb that an App Volumes Manager can only handle 1’000 sessions. The new recommendations are based on a “concurrent logins per second” login rate:

New App Volumes Limits Recommendations

 

Architecture Comparison VDI

Please find below my decisions and the ones made by the Digital Workspace Designer (DWD) tool:

Horizon Item | My Decision | DWD Tool | Notes
Number of Users (concurrent) | 199'400 | 199'400 |
Number of Pods required | 20 | 20 |
Number of Desktop Blocks (one per vCenter) | 100 | 100 |
Number of Management Blocks (one per pod) | 20 | 20 |
Connection Servers required | 100 | 100 |
App Volumes Manager Servers | 80 | 202 | 4+1 AVMs for every 2,500 users
vRealize Operations for Horizon | n/a | 22 | I have no experience with vROps sizing
Unified Access Gateway required | 2 | 2 |
vCenter servers (to manage clusters) | 20 | 100 | Since Horizon 7.7 there is support for spanning vCenters across multiple pods (bound to the limits of vCenter)

Architecture Comparison RDSH

Please find below my decisions* and the ones made by the Digital Workspace Designer (DWD) tool:

Horizon Item | My Decision | DWD Tool | Notes
Number of Users (concurrent) | 199'400 | 199'400 |
Number of Pods required | 20 | 20 |
Number of Desktop Blocks (one per vCenter) | 20 | 40 | 1 block per pod since we are limited by 10k sessions per pod, but only have 333 RDSH per pod
Number of Management Blocks (one per pod) | 20 | 20 |
Connection Servers required | 100 | 100 |
App Volumes Manager Servers | 14 | 202 | 4+1 AVMs for every 2,500 users/logins (in this case RDSH VMs, 6'647 RDSH in total)
vRealize Operations for Horizon | n/a | 22 | I have no experience with vROps sizing
Unified Access Gateway required | 2 | 2 |
vCenter servers (to manage resource clusters) | 4 | 40 | Since Horizon 7.7 there is support for spanning vCenters across multiple pods (bound to the limits of vCenter)

*Max. 30 users per RDSH

Conclusion

VDI

You can see in the table for VDI that I have different numbers than the DWD tool for “App Volumes Manager Servers” and “vCenter servers (to manage clusters)”. For the number of AVM servers I used the new recommendations you already saw above. Before Horizon 7.7 the block and pod architecture consisted of one vCenter server per block:

Horizon Pod vCenter traditional

That’s why, I assume, the DWD recommends 100 vCenter servers for the resource clusters. In my case I would use only 20 vCenter servers (yes, it enlarges the failure domain), because Horizon 7.7 and later allows one vCenter to span multiple pods while respecting the limit of 10’000 VMs per vCenter. So my assumption, even though the image below does not show it, is that it should be possible and supported to use one vCenter server per pod:

Horizon Pod Single vCenter

RDSH

If you consult the reference architecture and the recommendations for VMware Horizon, you could think that one important piece of information is missing:

The details for a correct sizing and the required architecture for RDSH!

We know that each Horizon pod can handle 10’000 sessions, which means 10’000 VDI desktops (VMs) if you use VDI. But for RDSH we need fewer VMs – in this case only 6’647.

So the number of pods does not change, because of the “sessions per pod” limitation. But there is no official limit for resource blocks per pod, and the VDI practice of one Connection Server for every 2’000 VMs or sessions, meant to minimize the impact of a resource block failure, is in my opinion not needed here. Otherwise you would bloat the required number of Horizon infrastructure servers, which increases operational and maintenance effort and therefore also the costs.

But where are the 40 resource blocks of the DWD tool coming from? Is it because the recommendation is to have at least two blocks per pod to minimize the impact of a resource block failure? If yes, it would make sense, because in my calculation you would have 9’971 RDSH user sessions per pod/block, and with the DWD calculation only 4’986 (half) per resource block.
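
As a quick sketch of the RDSH math above (the 30 users per RDSH and the rounding are my assumptions, see the note below the table):

```python
from math import ceil

concurrent_sessions = 199_400
users_per_rdsh = 30          # my assumption (*Max. 30 users per RDSH)
pods = 20                    # still driven by the 10'000 sessions-per-pod limit

rdsh_total = ceil(concurrent_sessions / users_per_rdsh)       # 6'647 RDS hosts in total
rdsh_per_pod = ceil(rdsh_total / pods)                        # 333 RDSH per pod
sessions_per_pod = ceil(rdsh_total * users_per_rdsh / pods)   # 9'971 session capacity per pod/block
sessions_per_block_dwd = ceil(sessions_per_pod / 2)           # 4'986 if the DWD splits each pod into two blocks

print(rdsh_total, rdsh_per_pod, sessions_per_pod, sessions_per_block_dwd)
```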

*Update 28/07/2019*
I have been informed by Graeme Gordon from technical marketing that the 40 resource blocks and vCenters are coming from here:

App Volumes vCenters per Pod

I didn’t consider that, because I expect that we can go higher in an RDSH-only implementation.

App Volumes and RDSH

The biggest difference when we compare the needed architecture for VDI and RDSH is the number of recommended App Volumes Manager servers. Because the meaning of “concurrent logins at a one per second login rate” for the AVM sizing was not clear to me, I asked our technical marketing team for clarification and received the following answer:

With RDSH we assign AppStacks to the computer objects rather than to the user. This means the AppStack attachment and filter driver virtualization process happens when the VM is booted. There is still a bit of activity when a user authenticates to the RDS host (assignment validation), but it’s considerably less than the attachment process for a typical VDI user assignment.

Because of this difference, the 1/second/AVM doesn’t really apply for RDSH only implementations.

With this background I do the math with 6’647 logins, neglect the assignment validation activity, and arrive at only 4 AVMs to serve the 6’647 RDS hosts.
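
Here is that calculation as a tiny sketch. The concurrent-login capacity per AVM comes from VMware’s updated recommendations shown earlier, so the value I use below is only an illustrative assumption that happens to reproduce my result of 4 AVMs:

```python
from math import ceil

rdsh_hosts = 6_647        # computer objects = the logins that actually attach AppStacks
logins_per_avm = 2_000    # assumption for illustration; take the real value from VMware's updated guidance

avms_for_rdsh = ceil(rdsh_hosts / logins_per_avm)   # 4 with this assumption
print(avms_for_rdsh)
```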

Disclaimer

Please be reminded again that these are only calculations to get an idea of how many servers/VMs of each Horizon component are needed for a 250k user (~200k CCU) installation. I didn’t consider any disaster recovery requirements, which means the calculations I made represent the minimum number of servers required for a VDI- or RDSH-based Horizon implementation.

Horizon on VMC on AWS Basics

VMC on AWS

In Switzerland, where VMware has a lot of small to medium-sized customers, the demand for cloud solutions is increasing. Many customers are not yet ready to put all their servers and data into the cloud, so they go for a hybrid cloud strategy.

This has become much easier since VMware’s offering VMware Cloud on AWS (VMC on AWS) exists. The service, powered by VMware Cloud Foundation (VCF), brings VMware’s SDDC stack to the AWS cloud and runs the compute, storage and network products (vSphere, vSAN, NSX) on dedicated bare-metal AWS hardware.

VMC on AWS

If you would like to try the offering, you have the option of a Single Host SDDC, which is the time-bound starter configuration and comes with a 30-day limit. After 30 days your Single Host SDDC will be deleted and all data will be lost. If you scale up into a 3-host SDDC you retain all your data and your SDDC is no longer time bound.

Availability

This fairly new service is already available in 13 global regions and has seen 200+ released features since its launch. VMC on AWS is available almost everywhere – in the US and Asia Pacific, for example – and in Europe you find the service hosted in Frankfurt, London, Paris and Ireland.

Use Cases

It’s not hard to guess what the use cases are for a service like this. If you are building up a new IT infrastructure and don’t want to run your own data center or purchase any servers, then you might want to consider VMC on AWS. Another project could be to expand into a new geography and extend your footprint into the cloud based on a VMware-consistent and enterprise-grade environment in the AWS cloud.

A few customers are also finding a new way to easily deliver business continuity with VMware Site Recovery and take advantage of VMC on AWS, which delivers a robust Disaster Recovery as a Service (DRaaS) option.

Another reason could be that your on-premises data center is in danger because of bad weather and you want to migrate all your workloads to another region.

Or you just want to quickly build a dev/test environment or do a PoC of a specific solution or application (e.g. VMware Horizon).

Elastic DRS

In my opinion EDRS is one of the best reasons to go for VMC on AWS. EDRS allows you to get the capacity you need in minutes to meet temporary or unplanned demand. You have the possibility to scale out and scale in depending on the generated recommendation.

A scale-out recommendation is generated when any of CPU, memory, or storage utilization remains consistently above thresholds. For example, if storage utilization goes above 75% but memory and CPU utilization remain below their respective thresholds, a scale-out recommendation is generated.

 

 A scale-in recommendation is generated when CPU, memory, and storage utilization all remain consistently below thresholds.

This is interesting if your desktop pool is creating more instant clones and, for example, memory utilization goes above the threshold. There is also a safety check included in the algorithm, which runs every 5 minutes, to give the cluster time to settle between changes.
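
A minimal sketch of this decision logic as I understand it from the description above. The thresholds are illustrative assumptions (only the 75% storage value is mentioned in the text I quoted), and this is of course not VMware’s actual implementation:

```python
# Illustrative EDRS-style recommendation logic, not VMware's real algorithm.
SCALE_OUT = {"cpu": 0.90, "memory": 0.80, "storage": 0.75}   # assumed high thresholds
SCALE_IN = {"cpu": 0.60, "memory": 0.60, "storage": 0.40}    # assumed low thresholds

def edrs_recommendation(utilization: dict) -> str:
    """utilization = {"cpu": 0.55, "memory": 0.70, "storage": 0.78}, as fractions of cluster capacity."""
    if any(utilization[res] > SCALE_OUT[res] for res in SCALE_OUT):
        return "scale-out"   # any single resource consistently above its threshold
    if all(utilization[res] < SCALE_IN[res] for res in SCALE_IN):
        return "scale-in"    # all resources consistently below their thresholds
    return "no change"

print(edrs_recommendation({"cpu": 0.55, "memory": 0.70, "storage": 0.78}))  # -> scale-out (storage > 75%)
```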

If you check the EDRS settings you have the option for the “Best Performance” or “Lowest Cost” policy. More information can be found here.

Horizon on VMC on AWS

For customers who are already familiar with a Horizon 7 on-premises deployment, Horizon on VMC on AWS lets you leverage the same architecture and the familiar tools. The only difference is that the vSphere infrastructure is now outsourced.

Use Cases

Horizon can be deployed on VMware Cloud on AWS for different scenarios. You could have the same reasons as before – data center expansion or a disaster recovery site in the cloud. But the most common reason why a customer goes for Horizon on VMC on AWS is flexibility combined with application locality.

Horizon 7 on VMC on AWS

There are customers who have operated an on-premises infrastructure for years and are suddenly open to a cloud infrastructure. Because the SDDC stack in the cloud is the same as in the private cloud, the migration can be done very easily. You can even use the same management tools as before.

Minimum SDDC Size

The minimum number of hosts required per SDDC on VMware Cloud on AWS for production use is 3 nodes (hosts). For testing purposes, a 1-node SDDC is also available. However, since a single node does not support HA, it’s not recommended for production use.

Cloud Pod Architecture for Hybrid Cloud

If you are familiar with the pod and block architecture you can start to create your architecture design. This hasn’t changed for the offering on VMC on AWS but there is a slight difference:

  • Each pod consists of a single SDDC
  • Each SDDC only has a single vCenter server
  • A Horizon pod consists of a single block

Horizon 7 Pod on VMC on AWS

Each SDDC only has one compute gateway, which limits the connections to ~2’000 VMs or user sessions. This means that the actual limit per pod on VMC on AWS is ~2’000 sessions as well. Once the number of compute gateways per SDDC can be increased, Horizon 7 on VMC on AWS will offer scalability comparable to an on-premises installation.
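
To put that into numbers, a small sketch (the workload is just an example; ~2’000 is the per-SDDC limit mentioned above):

```python
from math import ceil

sessions_needed = 10_000          # example workload
sessions_per_vmc_pod = 2_000      # ~limit per SDDC today because of the single compute gateway

vmc_pods = ceil(sessions_needed / sessions_per_vmc_pod)   # 5 SDDC-backed pods, federated with CPA
print(vmc_pods)
```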

You can deploy a hybrid cloud environment when you use the Cloud Pod Architecture to interconnect your on-premises and Horizon pods on VMC on AWS. You can also stretch CPA across pods in two or more VMware Cloud on AWS data centers with the same flexibility to entitle your users to one or multiple pods as desired.

Supported Features

The deployment of Horizon 7 on VMC on AWS started with Horizon 7.5, but there was no feature parity at that time. With the release of Horizon 7.7 and App Volumes 2.15 we finally got the requested feature parity. This means that since Horizon 7.7 we can use Instant Clones, App Volumes and UEM. At the time of writing the vGPU feature is not available yet, but VMware is working with Amazon on it. With the release of Horizon 7.8 a pool on VMware Cloud on AWS is now capable of using multiple network segments, allowing you to use fewer pools and/or smaller scopes. Please consult this KB for the currently supported features.

Use AWS Native Services

When you set up the Horizon 7 environment in VMware Cloud on AWS you have to install and configure the following components:

  • Active Directory
  • DNS 
  • Horizon Connection Servers
  • DHCP
  • etc.

If you are deploying Horizon 7 in a hybrid cloud environment by linking the on-premises pod with the VMC on AWS pod, you must prepare the on-premises Microsoft Active Directory (AD) to access the AD on VMware Cloud on AWS.

My recommendation: Use the AWS native services if possible 🙂

AWS Directory Services

AWS Managed Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO).

Amazon Relational Database Service

Amazon RDS is available on several database instance types – optimized for memory, performance or I/O – and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

This service allows you to quickly set up a SQL Server Express (not recommended for production) or regular SQL Server instance, which can be used for the Horizon Event DB or App Volumes.
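
As a sketch of how quickly such a database can be provisioned with boto3, assuming the VPC, subnet group and security groups are already in place; all identifiers, sizes and credentials below are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="eu-central-1")   # assumption: same region as the SDDC

rds.create_db_instance(
    DBInstanceIdentifier="horizon-eventdb",
    Engine="sqlserver-ex",               # SQL Server Express: fine for a lab, not recommended for production
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=50,                 # GiB
    MasterUsername="horizonadmin",
    MasterUserPassword="ChangeMe-1234",  # placeholder
    LicenseModel="license-included",
    PubliclyAccessible=False,
)
```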

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. Built on Windows Server, Amazon FSx provides shared file storage with the compatibility and features that your Windows-based applications rely on, including full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).

At the time of writing I have to mention that the FSx service has not yet been officially tested and qualified for User Environment Manager (UEM), but that’s no problem – technically it works fine.

Amazon Route 53

The connectivity to data centers in the cloud can be a challenge. You need to manage the external namespace to give users access to their desktop in the cloud (or on-prem). For a multi-site architecture the solution is always Global Server Load Balancing (GSLB), but how is this done when you cannot install your physical appliance anymore (in your VMC on AWS SDDC)?

The answer is easy: Leverage Amazon Route 53!

Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. 
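
A hedged boto3 sketch of what such a GSLB-style setup could look like with Route 53: a health check against the primary Unified Access Gateway namespace and a failover record pointing to the VMC on AWS pod as secondary. Hosted zone ID, domain names and IP addresses are placeholders:

```python
import boto3

r53 = boto3.client("route53")

# Health check against the primary UAG/load balancer endpoint (placeholder FQDN and path).
hc = r53.create_health_check(
    CallerReference="uag-dc1-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "uag-dc1.example.com",
        "Port": 443,
        "ResourcePath": "/favicon.ico",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Failover routing: primary record points to the on-prem data center, secondary to the VMC on AWS pod.
r53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",   # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "desktop.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "dc1-primary", "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "198.51.100.10"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "desktop.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "vmc-secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "203.0.113.20"}]}},
    ]},
)
```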

Check Andrew Morgan’s blog article if you need more information about Route 53.

Horizon on VMC on AWS rocks! 🙂

 

vSAN Basics for a Virtual Desktop Infrastructure with VMware Horizon

As an EUC architect you need fundamental knowledge about VMware’s SDDC stack and this time I would like to share some more basics about VMware vSAN for VMware Horizon.

In part 5 of my VCAP7-DTM Design exam series I already posted some YouTube videos about vSAN, in case you prefer videos instead of reading. To further prove my vSAN knowledge I decided to take the vSAN Specialist exam, which focuses on version 6.6.

To extend my vSAN skills and to prep myself for this certification I have bought the VMware vSAN 6.7 U1 Deep Dive book which is available on Amazon.

vSAN 6.7 U1 Deep Dive

vSAN Basics – Facts and Requirements

Out in the field not every EUC person has enough basic knowledge about vSAN, so I want to provide some facts about the technology here. This is not an article about all the background information and details of vSAN, but it should help you get a basic understanding. If you need more details about vSAN I highly recommend the vSAN 6.7 U1 Deep Dive book and the content available on storagehub.vmware.com.

  • The vSAN cluster requires at least one flash device and capacity device (magnetic or flash)
  • A minimum of three hosts is required unless you go for a two-node configuration (which requires a witness appliance)
  • Each host participating in the vSAN cluster requires a vSAN enabled VMkernel port
  • Hybrid configurations require a minimum of one 1GbE NIC, 10GbE is recommended by VMware
  • All-Flash configurations require a minimum of one 10GbE NIC
  • vSAN can use RAID-1 (mirroring) and RAID-5/6 (erasure coding) for the VM storage policies
  • RAID-1 is used for performance reasons, erasure coding is used for capacity reasons
  • Disk groups require one flash device for the cache tier and one or more flash/magnetic devices for the capacity tier
  • There can be only one cache device per disk group
  • Hybrid configuration – The SSD cache is used for read and write (70/30)
  • All-Flash configuration – The SSD cache is used 100% as a write cache
  • Since version 6.6 there is no multicast requirement anymore
  • vSAN supports IPv4 and IPv6
  • vSphere HA needs to be disabled before vSAN can be enabled and configured
  • The raw capacity of a vSAN datastore is calculated by multiplying the number of capacity devices by their size and by the number of ESXi hosts (e.g. 5 x 2TB x 6 hosts = 60 TB raw) – see the sketch after this list
  • Deduplication and compression are only available in all-flash configurations
  • vSAN stores VM data in objects (VM home, swap, VMDK, snapshots)
  • The witness does not store any VM specific data, only metadata
  • vSAN provides data at rest encryption which is a cluster-wide feature
  • vSAN integrates with CBRC (host memory read cache) which is mostly used for VMware Horizon
  • By default, the default VM storage policy is assigned to a VM
  • Each stretched cluster must have its own witness host (no additional vSAN license needed)
  • Fault domains are mostly described with the term “rack awareness”
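
As referenced in the raw-capacity bullet above, a small sketch that applies the calculation and the usual space multipliers for RAID-1 and RAID-5 (slack space, deduplication metadata and other overheads are ignored, so these are rough numbers):

```python
capacity_devices_per_host = 5
device_size_tb = 2
hosts = 6

raw_tb = capacity_devices_per_host * device_size_tb * hosts   # 60 TB raw, as in the example above

# Approximate usable capacity per protection scheme (FTT=1).
usable_raid1_tb = raw_tb / 2        # mirroring: 2x overhead -> 30 TB
usable_raid5_tb = raw_tb * 3 / 4    # erasure coding (RAID-5, 3+1): 1.33x overhead -> 45 TB

print(raw_tb, usable_raid1_tb, usable_raid5_tb)
```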

vSAN for VMware Horizon

The following information can be found in the VMware Docs for Horizon:

When you use vSAN, Horizon 7 defines virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles, which you can modify. Storage is provisioned and automatically configured according to the assigned policies. The default policies that are created during desktop pool creation depend on the type of pool you create.

This means that Horizon will create storage policies when a desktop pool gets created. To get more information I will provision a floating Windows 10 instant clone desktop pool. Before doing that, let’s first have a look at the policies that will appear in vCenter depending on the pool type:

Since I’m going to create a floating instant clone desktop pool, I assume that I should see some of the storage policies marked in yellow.

Instant Clones

First of all we need to take a quick look at instant clones again. I only cover instant clones since it’s the provisioning method recommended by VMware. As you can learn from this VMware blog post, you can massively reduce the time for a desktop to be provisioned (compared to View Composer linked clones).

VMware Instant Clones

The big advantage of the instant clone technology (vmFork) is the in-memory cloning technique of a running parent VM.

The following table summarizes the types of VMs used or created during the instant-cloning process:

Instant Cloning VMs
Source: VMWARE HORIZON 7 INSTANT-CLONE DESKTOPS AND RDSH SERVERS 

Horizon Default Storage Policies

To add a desktop pool I have created my master image first and took a snapshot of it. In my case the VM is called “dummyVM_blog” and has the “vSAN Default Storage Policy” assigned.

How does it go from here when I create the floating Windows 10 instant clone desktop pool?

Instant Clone Technology

The second step in the process is where the instant-clone engine uses the master VM snapshot to create one template VM. This template VM is linked to the master VM. My template VM automatically got the following storage policy assigned:

The third step is where the replica VM gets created using the template VM. The replica VM is a thin-provisioned full clone of the internal template VM. The replica VM shares a read disk with the instant-clone VMs after they are created. I only have the vSAN datastore available, and one replica VM is created per datastore. The replica VM automatically got the following storage policy assigned:

The fourth step involves the snapshot of the replica VM which is used to create one running parent VM per ESXi host per datastore. The parent VM automatically got the following storage policies assigned:

Afterwards, the running parent VM is used to create the instant clone, but the instant clone will be linked to the replica VM and not the running parent VM. This means a parent VM can be deleted without affecting the instant clone. The instant clone automatically got the following storage policies assigned:

And the complete stack of VMs with the two-node vSAN cluster in my home lab, without any further datastores, looks like this:

vCenter Resource Pool 

Now we know the workflow from a master VM to the instant clone and which default storage policies get created and assigned by VMware Horizon. From the VMware Docs we only know that FTT=1 and one stripe per object are configured and that there is no difference between the policies except for the name. I checked all storage policies in the GUI again and indeed they are all exactly the same. Note this:

Once these policies are created for the virtual machines, they will never be changed by Horizon 7

Even though I didn’t use linked clones with a persistent disk, the storage policy PERSISTENT_DISK_<guid> gets created. With instant clones there is no option for a persistent disk yet (you have to use App Volumes with writable volumes), but I think this will come for instant clones in the future and then we won’t need View Composer anymore. 🙂

App Volumes Caveat

Don’t forget this caveat for App Volumes when using a vSAN stretched cluster.

VMware Mirage – Alternatives

As some of you know, Mirage was (and still is) a revolutionary technology when Wanova released it in 2011, and in 2012 Mirage became part of VMware.

VMware Mirage is used by customers for their desktop image management and for backup and recovery requirements.

VMware Mirage provides next-generation desktop image management for physical desktops and POS devices across distributed environments. Automate backup and recovery and simplify Windows migrations.

Mirage is and was the solution for certain use cases and solved common desktop challenges. Therefore not all customers are happy that Mirage reaches end of support (EOS) on June 30, 2019. 🙁

But why is VMware Mirage being removed from support?

Well, the answer is very simple. Today, the market is heading in two directions – it’s all about the applications and end-user devices (called the Digital Workspace). That’s why customers should move or are somehow forced to move to a Unified Endpoint Management solution which is considered to be “the” Windows desktop management solution of the future. The future of Windows is apparently cloud based and Mirage has not been designed or architected for this.

What are the alternatives?

VMware has no successor or product which can replace all of the features and functions Mirage provided, but Workspace ONE is the official alternative solution when it comes to Windows desktop management.

There are really a lot of use cases and reasons why customers in the past decided to choose Mirage:

  • Reduce Management Complexity (e.g. single management console)
  • Desktop Backup and Recovery (automated and continuous system or user data backup)
  • Image Management (image layering)
  • Patch Management
  • Security & Compliance (auditing and encrypted connections)
  • Simple Desktop OS Migrations (e.g. Windows 7 to Windows 10 migrations)

VMware Mirage really simplified desktop management and provided a layered approach to OS and application rollouts. Customers also had the use case where the physical desktop was not always connected to the corporate network, a common challenge IT departments were facing.

The desktop images are stored in your own data center with secure, encrypted access from all endpoints. You can also customize access rights to data and apps. Even auditing capabilities are available for compliance requirements.
And the best and most loved feature was the possibility of a full system backup and recovery!

IT people love Mirage because it is so simple to restore any damaged or lost device to its most recent state (snapshot).

For branch offices with no IT staff onsite, Mirage was also the perfect fit. An administrator can simply distribute updates or Windows images to all remote laptops and PCs without any user interaction – at most a reboot was required now and then. But that’s all!

In case of bandwidth problems you could also take advantage of the Branch Reflector technology, which lets one endpoint download an image update and then distribute it locally to other computers (peers), relieving the WAN connection drastically.

Can Workspace ONE UEM replace Mirage?

From a technical perspective my opinion is definitely NO. Workspace ONE does not have the complete feature set of Mirage when it comes to Windows 10 desktop management, although I have to say the two overlap to a large degree.

I agree that Workspace ONE (WS1) is the logical step or way to “replace” Mirage, but you have to know this:

  • WS1 cannot manage desktop images for OS deployments. Nowadays, it is expected that a desktop is delivered pre-staged with a Windows 10 OS from the vendor or that your IT department is doing the staging for example with WDS/MDT.
  • WS1 has no backup and recovery function. If you use Dell Factory Provisioning you can go back to a “restore point” where all of your pre-installed and manually installed applications get restored after, say, a device wipe. But if the local hard disk fails and this restore partition is gone, then you have to get your device or hard disk replaced. Without Dell Factory Provisioning this means that IT, again, still has to deploy the desktop image with WDS/MDT.

For some special use cases it is even necessary to implement VMware Horizon, User Environment Manager, OneDrive for Business etc, but even then WS1 is a good complement since it can also be used for persistent virtual desktops!

As you can see, a transition from Mirage to WS1 is not that easy, and the few but important differences are the reason why customers and IT admins are not so amused about the EOS announcement of VMware Mirage.

VCP-DW 2018 Exam Experience

On the 30th November 2018 I passed my VCAP7-DTM Design exam and now I would like to share my VCP-DW 2018 (2V0-761) exam experience with you guys.

I’m happy to share that I also passed this exam today, and I thought it might be helpful to share my exam experience, even though a new VCP-DW 2019 exam will be released on 28th February 2019, since it’s still a pretty new certification and not much information can be found in the vCommunity.

How did I prepare myself? To be honest, I had almost no hands-on experience and therefore had to get the most out of the available VMware Workspace ONE documentation. I already had basic knowledge from my daily work as a solution architect, but it was obvious that this would not be enough to pass. Most of my basic knowledge I gained from the VMware Workspace ONE: Deploy and Manage [V9.x] course, which was really helpful in this case.

If you check the exam prep guide you can see that you have to study tons of PDFs and parts of the online documentation. 

I didn’t check all the links and documents in the exam prep guide, but I can recommend reading these additional docs:

In my opinion you’ll get a very good understanding of Workspace ONE (UEM and IDM) if you read all the documents above. In addition to the papers I recommend getting some hands-on experience with the Workspace ONE UEM and IDM consoles.

As a VMware employee I have access to VMware TestDrive, where I have a dedicated Workspace ONE UEM sandbox environment. I enrolled an Android device, an iOS device and two Windows 10 devices and configured a few profiles (payloads). I also deployed the Identity Manager Connector in my home lab to sync my Active Directory accounts with my Identity Manager instance, which also enables the synchronization of my future Horizon resources like applications and desktops.

I think I spent around two weeks on preparation, including the classroom training at the AirWatch training facility in Milton Keynes, UK.

The exam (version 2018) itself consists of 65 multiple-choice and drag-and-drop questions, and I had 135 minutes to answer all of them. If you are prepared and know your stuff, I doubt that you will need more than one hour, but this could change with the new VCP-DW 2019. 🙂

I’m just happy that I have a second VCP exam in my pocket and now I have to think about the next certification. My scope as solution architect will change a little. In the future I will also cover SDDC (software-defined data center) topics like vSphere, vSAN, NSX, VMware Cloud Foundation, Cloud Assembly and VMC on AWS. That’s why I’m thinking of earning the VCP-DCV 2019 or the TOGAF certification.

VCAP7-DTM Design Exam Passed

On 21 October I took my first shot at the VCAP7-DTM Design exam and failed, as you already know from this article. Today I am happy to share that I finally passed the exam! 🙂

What did I do with the information and notes about my weaknesses from the last exam score report? I read a lot of additional VMware documents and guides about:

  • Integrating Airwatch and VMware Identity Manager (vIDM)
  • Cloud Pod Architecture
  • PCoIP/Blast Display Protocol
  • VMware Identity Manager
  • vSAN 6.2 Essentials from Cormac Hogan and Duncan Epping
  • Horizon Apps (RDSH Pools)
  • Database Requirements
  • Firewall Ports
  • vRealize Operations for Horizon
  • Composer
  • Horizon Security
  • App Volumes & ThinApp
  • Workspace ONE Architecture (SaaS & on-premises)
  • Unified Access Gateway
  • VDI Design Guide from Johan van Amersfoort

Today I had a few different questions during the exam, but reading more PDFs about the above-mentioned topics helped me pass, it seems. In addition to that, I attended a Digital Workspace Livefire Architecture & Design training, which is available for VMware employees and partners. The focus of this training was not only on designing a Horizon architecture, but also on VMware’s EUC design methodology.

If you have the option to attend classroom trainings, then I would recommend the following:

There were two things I struggled with during the exam: sometimes the questions were not clear enough, so I had to make assumptions about what they could mean, and the exam is based on Horizon 7.2 and other old product versions of the Horizon suite:

  • VMware Identity Manager 2.8
  • App Volumes 2.12
  • User Environment Manager 9.1
  • ThinApp 5.1
  • Unified Access Gateway 2.9
  • vSAN 6.2
  • vSphere 6.5
  • vRealize Operations 6.4
  • Mirage 5.x

But maybe it’s only me, since I have almost no hands-on experience with Horizon, none with Workspace ONE, and in addition I have only been with VMware for 7 months now. 🙂

It is time for an update, and VMware has already announced that a new design exam version called VCAP7-DTM 2019 will be published next year.

What about VCIX7-DTM?

 In part 2 of my VCAP7-DTM Design exam blog series I mentioned this:

Since no VCAP7-DTM Deploy exam is available and it’s not clear yet when this exam will be published, you only need the VCAP7-DTM Design certification to earn the VCIX7-DTM status. I have got this information from VMware certification.

This information is not correct, sorry. VMware certification retracted their statement and clarified that, as long as no VCAP7-DTM Deploy exam is available, you need to pass the VCAP6-DTM Deploy exam to earn the VCIX7-DTM badge.

I don’t know yet if I want to pursue the VCIX7-DTM certification and will think about it when the deploy exam for Horizon 7 is available.

What’s next?

Hm… I am going to spend more time with my family again and will use some of my 3 weeks of vacation to assemble and install my new home lab.

Then I also have a few ideas for topics to write about, like:

  • Multi-Domain and Trust with Horizon 7.x
  • Linux VDI Basics with Horizon 7.x
  • SD-WAN for Horizon 7.x
  • NSX Load Balancing for Horizon 7.x

These are only a few items from my list, but let’s see if I really find the time to write a few articles.

In regards to certification I think I will continue with these exams:

This has no priority for now and can wait until next year! Or… I could try the VCP-DW 2018 since I have vacation. Let’s see 😀