I was touring through Switzerland and had the honor of speaking at five events of a “Mobility, Workspace & Licensing” roadshow for SMB customers with up to 250 employees. Before starting my presentation, I always asked the audience three questions:
Who knows what MDM or EMM (Mobile Device Management or Enterprise Mobility Management) is?
Have you ever heard of Unified Endpoint Management (UEM)?
Does the name Airwatch or Workspace ONE ring any bells?
This is how I gauge who is sitting in front of me and how deep I should, or can, go from a technical perspective. And I was shocked and really surprised by how few people raised their hands – only between 1 and 5 persons on average. And the event room was filled with 50 to 60 people! I don’t know how popular EMM and UEM are in other countries, but I think this is a “Swiss thing” when you work with smaller companies. We need to make people aware that UEM is coming! 🙂
That’s why I decided to write an article about Enterprise Mobility Management and how it evolved into Unified Endpoint Management.
The basic idea of Mobile Device Management was to have an asset management solution providing an overview of the smartphones in a company (at the beginning, iPhones were very popular). Enterprises were interested, for example, in disabling Siri and ensuring that corporate mobile devices stayed within policy guidelines. In addition, if you could lock and wipe the devices, you were all set.
However, business needs and requirements changed, and suddenly employees wanted or even demanded access to applications and content. Here we are talking about features like mail client configuration, WiFi certificate configuration, and content and mobile application management (MAM); topics like containerization and identity management also became important – security in general. So MDM and MAM now became part of Enterprise Mobility Management.
Vendors like VMware, Citrix, MobileIron and so on wanted to go further and offer the same management and configuration possibilities for operating systems like Windows or Mac OS. If I recall correctly, this must have been between 2013 and 2017.
One of the biggest topics and challenges of this time was the creation of so-called IT silos. There are many reasons why IT silos get built, but in the device management area it’s easy to give an example. Let’s say you are working for an enterprise with 3’000 employees and you have to manage devices and operating systems like:
PCs & Laptops (Windows OS)
MacBooks or Mac OS in general
Android & iOS devices
Virtual apps & desktops (Windows OS)
A typical scenario: your IT is deploying Windows with SCCM (Configuration Manager), Mac OS devices are either unmanaged or handled with JAMF or manual work, an EMM solution covers iOS and Android, and for the VDI or server-based computing (Terminal Server) environment the responsible IT team uses yet different deployment and management tools. This is an example of how silos get built, and nowadays they prevent IT from moving at the speed of business. VMware’s UEM solution to break up those silos is called Workspace ONE UEM.
The EMM or mobility market is moving in two directions:
Today, it’s all about the digital workspace – access ANY application, from ANY cloud, from ANY device and ANYTIME.
People need access to mobile apps, internal apps, SaaS apps and Win32 (legacy) apps. On the other hand, we want to use any device, no matter if it’s a regular fat client, the laptop at home, a wearable, or a rugged or IoT device. If you combine “App Access” and “UEM”, you get a new direction called “Digital Workspace”. In other words, Digital Workspace is just another name for the combined EUC (end-user computing) platform.
UEM is a term introduced by Gartner as a replacement for client management tools (CMT) and Enterprise Mobility Management.
Gartner defines Unified Endpoint Management as a new class of tools which function as a unified management interface – a single pane of glass. UEM should give enterprises the possibility to manage and configure iOS, Android, Mac OS and Windows 10 devices with a single unified console. With this in mind, I would call UEM the modern EMM.
Modern Management – Windows 10
Why is Windows 10 suddenly a topic when we talk about UEM? Well, Microsoft has put a lot of effort into its Windows 10 operating system and is providing more and more APIs that allow a richer feature set for the modern management approach – the same experience and approach we already have with mobile device management. Microsoft is seeking to simplify Windows 10 management, and I have to say they have done a fantastic job so far!
Modern Management, whether with VMware Workspace ONE UEM or with a competitor’s product, is nothing other than moving away from network-based deployment to cloud-based deployment.
Traditional means, for example, staging with SCCM, applying group policies, deploying software packages and performing Windows Updates on a domain-joined PC.
Modern means that we get the same out-of-the-box experience (OOBE) with our Windows 10 devices as with an iPhone, for example. We want to unbox the device, perform a basic configuration and start consuming. By consuming I mean installing all the apps I want, wherever I am at the moment – whether on a less secure network at home, at a friend’s place, on a beach, on a train or at the airport.
Modern also means that I receive my policies (GPOs) and basic configuration (WiFi, e-mail, BitLocker etc.) over-the-air across any network. And my device doesn’t need to be domain-joined (but it can be). Windows Updates can also be configured and deployed directly from Microsoft or still with WSUS.
Mix Physical and Virtual Desktops with Modern Management
VMware’s vision, and my understanding of modern management, is that we can and should be able to manage any persistent desktop, even if it’s a virtual machine. During my presentation I told the audience that they could have Windows 10 VMs in their on-premises data center, on AWS, on Azure or even on a MacBook.
This use case has NOT been tested by VMware yet, but imagine if we could manage the recently announced Windows Virtual Desktops (WVD), which are only available through Microsoft Azure. I hope to give you more information about this as soon as I have spoken to product management.
But you see where this is going. Modern management offers us new possibilities for certain use cases, and we can onboard contractors or seasonal workers even more easily if no separate VDI/RDSH-based solution is available.
And let’s assume that in 2018/2019 all newly ordered hardware is pre-staged with the Windows 10 version we ask for. For a virtual persistent desktop this is most certainly not the case, but think again about the Windows 10 offerings from Azure, where Windows 10 is also “pre-staged”.
Do we need UEM and Modern Management? Are we prepared for it?
Well, if we go by the definition of UEM, then we already use Unified Endpoint Management, since EMM is a part of it – just without the Windows 10 client management part. A survey in Switzerland has shown that only 50% of companies are dealing with this topic. And to be clear: an adoption or implementation of UEM takes several years. Gartner predicts that companies have to start working with UEM within the next three to five years.
What preparation is needed to move to the new modern cloud-based management approach? There are different options depending on your current situation.
If you are running Windows 7 and use Configuration Manager (SCCM) for deployment, you could use Workspace ONE’s AirLift technology to build a co-management setup. But first you need to migrate from Windows 7 to Windows 10 and use SCCM to deploy the Intelligent Hub (formerly known as the AirWatch Agent). Then you’re good to go and can profit from a transition phase until all clients have been migrated. In the end, you can get rid of SCCM completely.
If you use another tool or install Windows 10 manually, then you just need to install the Intelligent Hub and enroll the device, and you’re prepared.
If you are responsible for modernizing client and device management in your company, then keep the following advice in mind. Check your requirements and define a mobility or a general IT strategy for your company. Then look out for the vendors and solutions which meet your requirements and vision. Ignore who is on the top right of the Gartner Magic Quadrant or the vendor who claims to have “the ONE” digital workspace solution. In the end you, your customers and colleagues must be happy! 🙂
In the future I will provide more information about Unified Endpoint Management and Modern Management. We are in the early market phase when it comes to UEM and I’m curious what’s coming within the next one or two years.
The terms “Intelligence” and “Analytics” have not been covered yet. They are very interesting because they are about new features and technology based on artificial intelligence and machine learning. With VMware’s Workspace ONE Intelligence, for example, you get new options for “insights” and “automation”: you have data, can collect it and run it through a rules engine (automation). But this is something for another time.
For a few years I’ve been using three Intel NUC Skull Canyon (NUC6i7KYK) mini PCs for my home lab. Each NUC is equipped with the following:
6th Gen Intel i7-6770HQ processor with Intel Iris Pro graphics
2x 16GB Kingston Value RAM DDR4-2133
2x 500GB Samsung 960 EVO NVMe M.2
1x Transcend JetFlash 710S USB boot device
These small computers were nice in terms of space, but they are limited to 32GB RAM, have only one network interface and no separate management interface.
This was enough and acceptable when I worked with XenServer, used local storage and just had to validate XenDesktop/XenApp configurations and designs during my time as a Citrix consultant.
When I started to replace XenServer with ESXi and created a 3-node vSAN cluster for my first Horizon 7 environment, everything ran fine at the beginning. But after a while I had strange issues with vMotions, OS installations, and VCSA or ESXi upgrades.
So I thought it was time to build a “real” home lab and was looking for ideas. After doing some research and talking to my colleague Erik Bussink, it was clear to me that I had to build my compute nodes on a Supermicro mainboard. As you may know, the Skull Canyons are not that cheap, so I will continue using them for my domain controller VMs, vSAN witness, vCenter Server appliance etc.
I found two Supermicro X11SPM-TF motherboards at a reduced price, because people had ordered and never used them. This was my chance and a “sign” that I had to buy the parts for my new home lab NOW! Let’s pretend it’s my Christmas gift. 😀
The key features for me?
768GB RAM limit (not that I would need that much, but better than 32GB)
I went for the Fractal Design Node 804 because it offers me space for the hardware and cooling. And I like the square form factor which allows me to stack them.
I need a decent number of cores in my system to run tests and have enough performance in general. I will mainly run Workspace ONE and Horizon stuff (multi-site architectures) in my lab, but this will change in the future. So I chose the 8-core Intel Xeon Silver 4110 processor with 2.10 GHz.
RAM was always a limiting factor with my NUCs. I will reuse two of them and start with two 32GB 2666 MHz Kingston Server Premier modules for each ESXi host (64GB per host in total). If memory prices drop and I need more capacity, I can easily expand the system.
Samsung 860 EVO Basic 250GB, which is way too much for ESXi, but the price is low and I could use the disk for something else (e.g. a new PC) if needed.
Caching Device for vSAN
I will remove one Samsung 960 EVO 500GB M.2 from each NUC and use them for the vSAN caching tier. Both NUCs will still have one 960 EVO 500GB left to be used as local storage.
Capacity Device for vSAN
Samsung 860 Evo Basic 1TB.
Currently, my home network only consists of Ubiquiti network devices with 1GbE interfaces.
So I ordered the Ubiquiti 10G 16-port switch which comes with four 1/10 Gigabit RJ45 ports – no SFPs needed for now. Maybe in the future 😀
This is the home lab configuration I ordered; all parts should arrive by the end of November 2018.
I failed the VCAP7-DTM Design exam, but I expected it – the first try showed me which topics I need to learn better and where my weaknesses are. Let me tell you about my exam experience.
I arrived on time at the Pearson VUE test center, but they had PC problems, so I first had to wait 30 minutes until I could start the exam. The timer showed me that I had two hours for the 60 questions. Most of the time I was guessing and eliminating the obviously wrong answers, and so I was through 50% of the questions in 50% of the time. If you know a little bit more than I do and you work, or have worked, with all the products on a daily basis, I would say the exam is a piece of cake!
Nevertheless, I answered all 60 questions 15 minutes before the timer ended, but I didn’t review any of them, because I knew I still wouldn’t have better or correct answers. This may sound like I failed with a score of 0, but no: I had 252 of the 300 points needed, which is a sign for me that I just need to improve my weak spots and the topics I didn’t check during my preparation time.
Today I’m travelling to VMware AirWatch in Milton Keynes (UK) for my VMware Workspace ONE: Deploy and Manage [V9.x] training, which starts tomorrow. And I have to prepare a presentation for a roadshow with five events where I will be the speaker of a 30-minute slot. This means no time for studying yet.
But I’m lucky that I still got a seat at the Digital Workspace Livefire Architecture & Design training taking place in three weeks. This will be the last part of my preparation for the retake, which I have planned for 23 November 2018. But first I have to wait for my new exam voucher. 🙂
I cannot tell you which topics/technologies or questions were asked during the exam, but I can assure you that I didn’t expect some of the questions – they were just craaaaazy or about veeeery old stuff.
This is also one of my problems: you have to study things which are no longer valid for today’s product versions or implementations. In a few cases the configuration limits or some parts of an architecture have changed.
So, I read the exam blueprint again and checked some of the attached URLs and document links. In my opinion, you should know the following products and versions for the exam:
VMware Identity Manager 2.8
App Volumes 2.12
User Environment Manager 9.1
Unified Access Gateway 2.9
vRealize Operations 6.4
So, this was my exam experience of the VCAP7-DTM Design exam and my advice afterwards. It is totally okay to fail, because it helps you see whether you were not well enough prepared or simply went too early for your first shot.
My last advice: use the note board for the difficult answers and topics you have no clue about. If you have enough time, have reviewed your answers and are ready to end the exam, memorize all your notes. In case you didn’t pass, you now have the notes in your mind and can transfer them to your personal notebook. This is totally legal and really helpful! 🙂
Good luck to you if you take the exam. I now have another four weeks to fill the gaps. 🙂 We’ll see if I pass or not.
My last article was about the Horizon reference architecture and four weeks have already passed since then. My VCAP7-DTM Design exam is scheduled for October 18 – that’s in five days!
I haven’t opened my books in the last three weeks, because I think it’s important to take a break and get some distance from your books and documents – it allows you to understand things better and faster and to see connections you haven’t seen before. Another reason was my pregnant wife, who delivered our beautiful daughter on October 4! 🙂
I started from scratch and re-read all my training material and PDF documents.
To design a Horizon 7 environment you have to follow a process: work out a VMware EUC solution that meets the customer’s requirements, follow the VMware design guidelines and use the reference architectures while considering customer constraints. It is very important that all of the customer’s business drivers and objectives are clearly defined. Then you start to gather and analyze the business and application requirements and document the design requirements, assumptions, risks and constraints. For example, when you talk about technical requirements with your customer, the following categories should be covered:
Virtualization infrastructure and data center hardware
With the information from the assessment phase, the design work can begin: you create the conceptual design before heading over to the logical design. Advice: minimize risks and keep things simple!
Horizon Logical Design
The logical design (high-level design) follows the conceptual design and defines how to arrange components and features. It is also useful for understanding and evaluating the infrastructure design. The easiest and most common way to create a logical design is the use of architecture layers. Each layer contains one or more components and has functional and technical interdependencies:
Application deployment and type (cloud-based, locally installed, enterprise apps etc.)
Use cases and type of user
Scalability and multi-site
Desktop types and OS
Compute, network and storage
Network and storage
Cluster and resources
Internal and external
Authentication and authorization
A Horizon logical design could look like this:
If you need to write down use cases and their attributes, here is an example:
Time of use
Horizon Block and Pod Design
In part 4 I covered how to use a repeatable and scalable approach to design a large-scale Horizon environment.
Horizon Component Design
To have a complete design you must define the number and configuration of the Horizon components required for your environment. You have to include certain design recommendations and design the configuration of the Horizon components for your use cases. These are some required infrastructure components:
VMware Identity Manager
Load Balancing for resiliency and scale
Connection to Active Directory
SaaS-based implementation recommended
Approx. 100’000 users per virtual appliance
Up to 10’000 virtual machines per vCenter
Recommendation: 2’000 desktops per vCenter
Dedicated vCenter Server instance per resource block
Up to 2’000 sessions per Connection Server (4’000 tested limit)
Install at least one Replica Server for redundancy
Max. 7 Connection Servers per pod
Max. 10’000 sessions per pod recommended
Cloud Pod Architecture
Max. 175 Connection Servers
Max. 120’000 sessions
Max. 5 sites
View Composer needed?
Security Server (not recommended anymore, use UAG)
Should not be member of AD domain
Should be hardened Windows server (placed in DMZ)
1:1 mapping with Connection Servers
Unified Access Gateway (UAG)
Virtual appliance (placed in DMZ) based on Linux (Photon OS)
Scale-out is independent of Connection Server
Does not need to be paired with a single Connection Server
Know the key firewall considerations for Horizon 7
Bandwidth requirements for different types of users
WAN considerations (e.g. latency, WAN optimization)
Optimization/Policies for display protocols (LAN/WAN)
vSphere networking requirements
Separate networks for management, VMs, vMotion etc.
Use vSphere Distributed Switch
Secure your desktops (lockdown, GPOs, UEM)
Use secure client connections (secure gateways/tunnel)
Use Unified Access Gateway for remote access (use three NICs)
View Security Server (if needed)
User authentication method from internal and external
Two Factor Authentication for external connections
Restrict access (tags, AD groups)
Use NSX for micro segmentation
Install signed SSL certificates
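Several of the sizing limits listed above (2’000 sessions per Connection Server, max. 7 Connection Servers per pod, max. 10’000 sessions per pod) lend themselves to a quick back-of-the-envelope calculation. The following Python sketch is my own illustration, not an official VMware tool; the constants simply mirror the recommendations above, so always validate against the current configuration maximums:

```python
import math

# Recommended Horizon 7 sizing limits (from the component design notes above).
SESSIONS_PER_CONNECTION_SERVER = 2000   # recommended (4'000 is the tested limit)
MAX_CONNECTION_SERVERS_PER_POD = 7
MAX_SESSIONS_PER_POD = 10000            # recommended per-pod ceiling

def size_pod(total_sessions: int) -> dict:
    """Estimate pods and Connection Servers (with n+1 redundancy) for a session count."""
    pods = math.ceil(total_sessions / MAX_SESSIONS_PER_POD)
    sessions_per_pod = math.ceil(total_sessions / pods)
    # n+1: one extra Connection Server per pod for redundancy,
    # capped at the recommended maximum of 7 per pod.
    servers_per_pod = min(
        math.ceil(sessions_per_pod / SESSIONS_PER_CONNECTION_SERVER) + 1,
        MAX_CONNECTION_SERVERS_PER_POD,
    )
    return {"pods": pods, "sessions_per_pod": sessions_per_pod,
            "connection_servers_per_pod": servers_per_pod}

print(size_pod(12000))
# → {'pods': 2, 'sessions_per_pod': 6000, 'connection_servers_per_pod': 4}
```

So 12’000 sessions would split into two pods of 6’000 sessions each, with three active Connection Servers plus one spare per pod.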
The objective of a Horizon implementation is to support users better than the physical solution did. Session management is one aspect of this. The configuration and settings applied to sessions and client devices are essential for a smooth user experience.
This is my recommendation. Within the last 8 weeks I’ve effectively studied 5 weeks for the exam. I have worked with Horizon products for approximately 4 months, in a pre-sales role, not as a consultant. I will update you after the exam on whether experience combined with studying was enough to pass! 🙂
Did I forget anything? Let me know! Jump to part 12
I only focus on the component design part since I already covered topics like use cases, business drivers, design methodology etc.
A successful deployment depends on good planning and a very good understanding of the platform. The core elements include Connection Server, Composer, Horizon Agent and Horizon Client. Part 4 to part 9 cover the Horizon 7 component design and also provide more information on the following components.
VMware Identity Manager (VIDM) can be implemented on-premises or in the cloud, a SaaS-based implementation. If you decide to go with the SaaS implementation, a VIDM connector needs to be installed on-prem to synchronize accounts from Active Directory to the VIDM service in the cloud.
If cloud is not an option for you, you still have the possibility of an on-prem deployment using the Linux-based virtual appliance. There is also a Windows-based installer available, which is included in the VMware Enterprise Systems Connector. VMware’s reference architecture is based on the Linux appliance.
Syncing resources such as Active Directory and Horizon 7 can be done either by using a separate VMware Identity Manager Connector or by using the built-in connector of an on-premises VMware Identity Manager VM. The separate connector can run inside the LAN in outbound-only connection mode, meaning the connector receives no incoming connections from the DMZ.
VIDM comes with an embedded PostgreSQL database, but it’s recommended to use an external database server for production deployments.
For high availability, based on your requirements, at least two VIDM appliances should be deployed behind a load balancer. After you have deployed your first appliance, you simply clone it and assign a new hostname and a new IP address.
As you may know from part 8, App Volumes has two functions. The first is the delivery of applications for VDI and RDSH. The second is the provisioning of writable volumes to capture user-installed applications and the user profile.
For high availability, always use at least two App Volumes Managers which are load-balanced.
AppStacks are very read-intensive; hence, you should place AppStacks on storage that is optimized for read operations. Writable volumes should be placed on storage sized for random IOPS (50/50 read/write). The reference architecture uses vSAN to provide a single highly available datastore.
For the SQL database it is recommended to use an AlwaysOn Availability Group.
User Environment Manager
When User Environment Manager design decisions need to be made, you have to think about user profiles (mandatory, roaming, local) and folder redirection. As already described in part 9, VMware’s recommendation is to use mandatory profiles and folder redirection. Use appendix B if you need help configuring the mandatory profile.
The first key design consideration is using DFS-R to provide high availability for the configuration and user shares. Note: connect the management console only to the hub member when making changes. DFS-R will replicate those changes to the spoke members.
In part 6 I mentioned that a UAG is typically deployed within the DMZ.
UAG appliances are deployed in front of the Horizon 7 Connection Servers and sit behind a load balancer. The Unified Access Gateway also runs the Content Gateway as part of the AirWatch (Workspace ONE UEM) service.
You have two sizing options during the appliance deployment:
Standard (2 vCPU, 4GB RAM, 2’000 Horizon server connections, 10’000 AirWatch service connections)
Large (4 vCPU, 16GB RAM, 2’000 Horizon server connections, 50’000 AirWatch service connections)
As you can see, the big difference here is the estimated number of AirWatch service connections per appliance. In production you would deploy dedicated UAG appliances for each service. Example:
2 standard-size UAG appliances for 2’000 Horizon 7 sessions (n+1)
3 large-size UAG appliances for 50’000 devices using the Content Gateway and Per-App Tunnel, which gives us a total of 100’000 connections. The third appliance is for high availability (n+1)
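The n+1 arithmetic behind this example can be sketched in a few lines of Python. This is just my illustration of the calculation, not an official sizing tool; the capacity figures are the ones from the two sizing options listed above:

```python
import math

# Per-appliance capacities taken from the UAG sizing options above.
UAG_CAPACITY = {
    "standard": {"horizon_sessions": 2000, "airwatch_connections": 10000},
    "large":    {"horizon_sessions": 2000, "airwatch_connections": 50000},
}

def uag_count(load: int, size: str, service: str) -> int:
    """Appliances needed for a given load, including one spare (n+1)."""
    per_appliance = UAG_CAPACITY[size][service]
    return math.ceil(load / per_appliance) + 1

print(uag_count(2000, "standard", "horizon_sessions"))     # → 2 (1 active + 1 spare)
print(uag_count(100000, "large", "airwatch_connections"))  # → 3 (2 active + 1 spare)
```

This reproduces the example: two standard appliances for 2’000 Horizon sessions, and three large appliances for 100’000 AirWatch service connections.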
vSphere and Physical Environment
The software-defined data center (SDDC) is the foundation that runs all infrastructure servers and components. The products and the licensing for the foundation are outside of the Horizon 7 product (except vSAN), but are required to deliver a complete solution.
And in my opinion this is what makes the whole solution so brilliant. Even though I work for VMware, I would never claim from the outset that Horizon is better than XA/XD. That was also my stance when I worked as a consultant for Citrix, before I joined VMware in May 2018.
It depends on the requirements and use cases which need to be satisfied. Those are the most important things when you choose a vendor or a specific technology. Our goal is to make the customer happy! 🙂
But I would say that VMware Horizon including WorkspaceONE is very hard to beat if you use the complete stack! But that’s another topic.
The vSphere infrastructure in the reference architecture includes vSAN and NSX. In part 5 I covered the basics of vSAN, but I think I may need to write a short overview of NSX and how you can use it with Horizon.
vSAN provides hyper-converged storage optimized for virtual machines without the need for an external SAN or NAS. This means that the physical servers not only provide the compute and memory resources but also storage, in a modular fashion. You can use vSAN for the management and resource blocks: follow a hybrid approach for the management resources and use all-flash vSAN for the Horizon resources.
I will not cover the vSphere design, but it’s important to understand that all components are operating redundantly and that you have enough physical resources to meet the requirements.
A general recommendation is to use at least 10 GbE connections, to separate each traffic type (management, VM traffic, vSAN, vMotion) and to make sure that each of them has sufficient bandwidth.
NSX for vSphere
NSX provides several network-based services and performs several security functions within a Horizon 7 implementation:
Protects VDI infrastructure
Protects desktop pool VM communication with applications
Provides user-based access control (user-level identity-based micro-segmentation)
If you want to use NSX, you have to think about an NSX infrastructure design, as the NSX platform adds new components (e.g. NSX Manager) and new possibilities (distributed firewall and identity firewall).
The most important design consideration for Horizon 7 is the concept of micro-segmentation. In the case of Horizon 7, NSX can block desktop-to-desktop communications, which are normally not needed or recommended. Each VM can now be its own perimeter and this desktop isolation prevents threats from spreading:
The Horizon 7 reference architecture is probably the best document to prepare yourself for the VCAP7-DTM exam. What do the currently VCAP7-DTM certified people say? What else needs to be covered? Jump to part 11
This is the 9th part of my VCAP7-DTM Design exam series. In part 8 I covered the creation of an application architecture design for Horizon 7. Let’s have a look at the last part of the exam blueprint, which is about session management and client devices:
Section 8 – Incorporate Endpoints into a Horizon Design
Objective 8.1 – Incorporate Session Connectivity Requirements in a Horizon End Point Design
Objective 8.2 – Incorporate Management Requirements in a Horizon End Point Client Design
Objective 8.3 – Incorporate Security Requirements in a Horizon End Point Design
In a Windows environment several types of user profiles are available:
A user profile includes user-specific data and application settings, which allows users to have a consistent experience regardless of which desktop they log in to.
As a general leading practice, it is recommended to redirect as much user data as possible to a network share. But in a Windows environment, administrators have often experienced issues with roaming profiles. From my experience, a smaller profile causes less trouble, and it’s worth spending time on a proper profile management strategy and configuration.
VMware User Environment Manager
VMware’s solution for profile management is called User Environment Manager (UEM), which is part of the Just-in-Time Management Platform (JMP). JMP is composed of Instant Clone technology for fast desktop provisioning, App Volumes for real-time application delivery, and User Environment Manager for profile and session management.
When I worked with Citrix products, the recommendation was to use Citrix UPM (roaming profile) and configure folder redirections via GPO.
One of the things I learned when I joined VMware is the different approach when it comes to profile management. VMware recommends mandatory profiles combined with the dynamic configuration capabilities of UEM:
User Environment Manager manages user and Windows settings and dynamically configures the desktop. For example, it can create drive and printer mappings, file type associations, and shortcuts. User Environment Manager can also manage and provide shortcuts to applications such as ThinApp to users.
This is Microsoft’s definition of a mandatory user profile:
A mandatory user profile is a special type of pre-configured roaming user profile that administrators can use to specify settings for users. With mandatory user profiles, a user can modify his or her desktop, but the changes are not saved when the user logs off. The next time the user logs on, the mandatory user profile created by the administrator is downloaded.
Very important to know when using UEM with mandatory profiles: only the settings you have defined in UEM are kept across your sessions. Settings that you didn’t configure with UEM are not preserved and are discarded after logout. This is called personalization.
Once you have configured your mandatory profile, UEM takes care of the rest:
Personalization (e.g. configuration files for Windows settings)
Application Configuration Management (initial settings for applications)
User Environment Settings (printer/drive mappings, environment variables, shortcuts etc.)
Dynamic configuration based on conditions (user, location, client device etc.)
Identify the customer’s client device characteristics and compare them with the requirements. Depending on the requirements, you have the following client device options:
Tablets and smartphones
Fat Clients (the traditional PCs or laptops including Mac)
For each device type, a different Horizon Client (depending on the OS) is available for download.
As already mentioned earlier in this series, Blast should be the primary protocol for your Horizon sessions. If you have endpoints where a Horizon Client cannot be used or installed, you still have the HTML access option.
Configuration of Smart Policies is done in the UEM console. Some of the settings you previously configured via Group Policies can now be done in UEM. I’m talking about configuration based on conditions like client location, launch tag or pool name. It’s also possible to set your own personal View client properties:
With Smart Policies, administrators have granular control of a user’s desktop experience. A number of key Horizon 7 features can be dynamically enabled, disabled, or controlled based not only on who the user is, but on the many different variables available through Horizon 7: client device, IP address, pool name, and so on.
Example: Based on the client device used you can set different settings for USB redirection, clipboard and bandwidth profile.
Smart Policies can be enforced and evaluated at login/logout, at reconnect/disconnect and at defined refresh intervals. This allows IT to maintain endpoint and session security even if the user changes the network, the endpoint or both.
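To make the condition-based logic more concrete, here is a purely illustrative sketch of how such a policy evaluation could resolve per-session settings. The condition names, pool names and policy values are hypothetical – the real evaluation happens inside the UEM agent, not in code you write:

```python
# Illustrative sketch only: shows the *idea* of condition-based Smart Policies.
# Attribute names ("location", "pool_name") and values are made up for this example.

def resolve_policy(session):
    """Return Horizon feature settings based on session attributes."""
    # Baseline for trusted, internal sessions.
    policy = {"usb_redirection": True, "clipboard": "both", "bandwidth_profile": "LAN"}
    # External clients (coming in through the UAG) get locked down harder.
    if session.get("location") == "external":
        policy.update({"usb_redirection": False,
                       "clipboard": "disabled",
                       "bandwidth_profile": "WAN"})
    # A specific pool can override settings regardless of location.
    if session.get("pool_name") == "kiosk-pool":
        policy["clipboard"] = "disabled"
    return policy

internal = resolve_policy({"location": "internal", "pool_name": "office-pool"})
external = resolve_policy({"location": "external", "pool_name": "office-pool"})
```

Because Smart Policies are re-evaluated at reconnect, the same user moving from the office to an external network would automatically drop from the `internal` to the `external` settings.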
These are the basics about session management and client devices. We have now covered all sections of the exam blueprint.
I know the basics of a Horizon 7 implementation, but I need to gain more technical knowledge about each product. As a Solution Architect I have a customer-facing pre-sales role and in general no hands-on experience. For a consultant who works with the Horizon suite on a daily basis, I’m sure the VCAP-DTM Design exam would be a piece of cake. 🙂
Over the next weeks I will read a lot of the PDFs (reference architectures and admin guides) mentioned in the exam blueprint, covering:
Horizon 7.2 (including Mirage, ThinApp, UAG)
App Volumes 2.12
Because I have a fairly big home office and love whiteboards, I decided to order whiteboard sheets which stick to the walls by static charge. This should help me to note down important stuff. 😀
I have six weeks left to prepare! Let’s do this! 🙂 Jump to part 10
This is the 8th part of my VCAP7-DTM Design exam series. In part 7 I covered the creation of a physical design for Horizon desktop and pools. Now we take a look at section 7 of the blueprint, the creation of an application architecture design for Horizon 7:
Section 7 – Incorporate Application Services into a Horizon Physical Design
Objective 7.1: Design Application Integration and/or Delivery System(s) using Horizon Application Tools
Objective 7.2: Design Active Directory to Facilitate Application Assignment
Objective 7.3: Design and Size RDS Application Pools and Farms
Objective 7.4: Create Application Architecture Design
Objective 7.5: Design Application Integration and/or Delivery System(s) using Horizon Workspace One
The purpose of implementing VMware Horizon 7 is to deliver virtualized applications and/or desktops to end users. There are different methods of application delivery, and the right choice depends on many factors. The delivery method can have a major impact on the user experience.
End users want the “fat client experience” – they want speed, performance and ease of use. IT has to define and find a balance between user experience and security, and these opposing goals of IT and end users can be a challenge.
Today, people don’t want to wait for anything. They want to use, consume, be independent and have all the permissions they need to download and/or install applications – they just want to do their job. In this case, for example, a self-service portal with workflows could provide the necessary flexibility and security. But what about application performance and delivery?
One of the biggest challenges during a VDI project is legacy applications, which IT still has to manage in 2018. And sometimes the customer makes its money with those legacy applications. If their performance suffers or these applications stop working, so does the business.
With Horizon 7 you have different options for app delivery:
Manually installed applications in the master image or in the virtual desktop
Delivery using ThinApp, App Volumes or RDSH (RDS application pool)
Each method has advantages, disadvantages and a different way of management. In most of the cases you will find a mix of these application delivery methods, but it depends on your use cases which ones you are going to choose.
I expect you know the features and technology of ThinApp and App Volumes, so I won’t explain them further. Just think about flexibility and management. I assume you don’t want to end up with 10 different master images which you have to maintain separately and modify once or twice a week. In general, Office applications and Adobe Reader are installed in the base image and the other applications are delivered by App Volumes. If you need a “secure browser” (sandboxed browser) environment, then ThinApp is the right solution. Maybe you have the same application but in different versions? Then it depends on the use case and requirement – your options are manual installation, delivery with App Volumes, or ThinApp. Make yourself familiar with all these methods and also study the multi-site reference guide of each product.
Note: Sometimes it’s hard to know all the features of a specific product, but reading and understanding the release notes can save your life. Example: ThinApp 5.2.3 only supports Firefox version 50.1 and nothing else. Maybe you can install and deploy Firefox 52.9 and it works, but it is not officially supported by VMware. And then, when you want to upgrade to 60.1, suddenly the compilation with ThinApp doesn’t work anymore, even though it did with 52.9, which was also not supported.
Had you read and understood this requirement beforehand, you or your customer wouldn’t have a problem now.
Imagine you provide secure browsing with Firefox delivered by ThinApp in a high-security environment. When a new, more secure Firefox version supported by Mozilla gets published, you cannot deliver this browser anymore. What do you do now? Do you have enough time to find, design and test another solution?
ThinApp, App Volumes and RDSH have unique characteristics that can improve the user experience and decrease resource utilization. Evaluate each solution and use the appropriate one for your design.
This is all I have to say about application delivery without going too deep. Do your homework and know what you need! Next time we take a look at section 8, which is about session management and client devices.
This is the 7th part of my VCAP7-DTM Design exam series. In part 6 I covered the creation of a physical network design for Horizon 7. This time we take a look at section 6 of the blueprint, the creation of a physical design for Horizon desktop and pools:
Section 6 – Create a Physical Design for Horizon Desktops and Pools
Objective 6.1 – Design Virtual and Physical Image Masters
Objective 6.2 – Optimize Desktop Images, OS Services and Applications for a Horizon Design
Objective 6.3 – Incorporate Desktop Pools into a Horizon Design
Objective 6.4 – Incorporate RDS Pools into a Horizon Design
The desktops your customer provides must satisfy the use case requirements to ensure a good user experience and user acceptance. To provide desktops with Horizon you have to create so-called desktop pools. VMware has a few recommendations and leading practices for the configuration and optimization of a Horizon desktop. These will help you to enhance the overall scalability and performance of a Horizon implementation.
The desktop build process would look like this:
Creation of the target VM
Installation of the guest OS
Installation of VMware Tools
Image optimization
Installation of globally used applications and Horizon Agent
Creation of VM template
If you understand the customer’s use cases, you will understand what kind of desktops are needed to meet the requirements. The configuration of the desktop VM varies for each pool. The differences between them are often resource allocations like disk size, installed applications, memory or even the operating system.
For most use cases VMware recommends assigning only two vCPUs unless it’s proven and really a requirement to have more CPU power.
Consider RAM reservation settings and keep in mind that high memory settings require more disk space, as the VM swap file and the Windows pagefile sizes are related to these settings.
Globally used applications like MS Office or Adobe Reader should be installed within the desktop image. All other applications are delivered with App Volumes, if possible.
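The relationship between memory reservations and disk consumption mentioned above is easy to underestimate at pool scale. As a rough illustration (the numbers below are example values, not VMware sizing figures), ESXi creates a per-VM swap file sized at configured memory minus the reservation:

```python
# Sketch of the ESXi .vswp sizing rule: swap file = configured memory - reservation.
# Pool sizes and memory values below are illustrative examples only.

def vm_swap_file_gb(configured_mem_gb, reservation_gb):
    """Size of the per-VM swap file ESXi creates at power-on."""
    return max(configured_mem_gb - reservation_gb, 0)

def pool_swap_overhead_gb(desktops, configured_mem_gb, reservation_gb):
    """Total datastore space consumed by swap files across a desktop pool."""
    return desktops * vm_swap_file_gb(configured_mem_gb, reservation_gb)

# 500 desktops with 4 GB RAM and no reservation -> 2 TB of swap files on disk.
no_res = pool_swap_overhead_gb(500, 4, 0)
# Fully reserving the memory eliminates the swap file overhead entirely.
full_res = pool_swap_overhead_gb(500, 4, 4)
```

This is one reason full memory reservations are sometimes used for VDI: they trade host memory overcommitment for a smaller storage footprint.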
VMware recommends optimizing the guest operating system of a desktop image to positively affect the performance of a Horizon desktop.
Use VMware OS Optimization Tool (OSOT) to optimize your Windows desktops and server images. It is a great tool and will help you to disable OS components you don’t need and could help to enhance the overall scalability and performance. Make sure you know the optimizations you apply and what settings are changed to avoid any bad user experience or unexpected behaviour of your desktops.
If you are using Windows 10 for example, also make sure that you remove all unneeded native apps.
You can create desktop pools to give users remote access to virtual machine-based desktops. You can choose VMware PC-over-IP (PCoIP) or VMware Blast to provide remote access to users.
There are two main types of virtual desktop pools. Automated desktop pools use a vCenter Server virtual machine template or snapshot to create a pool of identical virtual machines. Manual desktop pools are a collection of existing vCenter Server virtual machines, physical computers, or third-party virtual machines. In automated or manual pools, each machine is available for one user to access remotely at a time.
With Horizon 7.5, a single instance (pod) is limited to 10’000 desktops, and if the planned deployment exceeds this limit, you must use the Cloud Pod Architecture (CPA) feature. With CPA you can link together up to 25 pods across up to ten geographically distant sites to provide one big desktop environment with apps and desktops for up to 200’000 sessions. See VMware Horizon 7 sizing limits and recommendations.
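A quick back-of-the-envelope check against these limits can be sketched like this (the limits are the ones quoted above for Horizon 7.5; anything beyond one pod implies CPA):

```python
# Sizing check against the Horizon 7.5 limits quoted in the text.
POD_DESKTOP_LIMIT = 10_000   # maximum desktops per pod
CPA_MAX_PODS = 25            # maximum pods linked via Cloud Pod Architecture
CPA_MAX_SESSIONS = 200_000   # maximum sessions across a CPA federation

def pods_needed(desktops):
    """Minimum number of pods for the planned desktop count (ceiling division)."""
    return -(-desktops // POD_DESKTOP_LIMIT)

def fits_in_cpa(desktops):
    """True if the deployment stays within the CPA maximums."""
    return pods_needed(desktops) <= CPA_MAX_PODS and desktops <= CPA_MAX_SESSIONS

small = pods_needed(8_000)    # fits in a single pod, no CPA needed
large = pods_needed(36_000)   # needs CPA with at least 4 linked pods
```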
In a Horizon design you must state the use cases and use desktop pools which are the logical containers that represent each use case (desktop type, application set, access mode etc.).
With VMware Horizon it is also possible to provide hosted applications with the integration of Remote Desktop Session Hosts (RDSH) based on Microsoft Remote Desktop Services (RDS).
An RDS desktop pool is associated with a farm, which is nothing more than a group of RDS hosts. Each RDS host is a Windows Server that can host multiple RDS desktop sessions.
The Horizon 7.5 handbook is a really great source for this part and I will allow myself to copy and paste some parts of it. 🙂
There are two options for a desktop assignment:
Each user is assigned a particular remote desktop and returns to the same desktop at each login. Dedicated-assignment pools require a one-to-one desktop-to-user relationship. For example, a pool of 100 desktops is needed for a group of 100 users.
Using floating-assignment pools also allows you to create a pool of desktops that can be used by shifts of users. For example, a pool of 100 desktops could be used by 300 users if they worked in shifts of 100 users at a time. The remote desktop is optionally deleted and re-created after each use, offering a highly controlled environment.
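The shift example above illustrates why floating assignment scales on concurrency while dedicated assignment scales on named users. As a sketch (shift counts and user numbers are just the example values from the text):

```python
import math

# Floating pools are sized for peak concurrency; dedicated pools for every named user.

def floating_pool_size(total_users, shifts):
    """Desktops needed when users work in non-overlapping shifts."""
    return math.ceil(total_users / shifts)

def dedicated_pool_size(total_users):
    """Dedicated assignment: strict one-to-one desktop-to-user mapping."""
    return total_users

# 300 users working in 3 shifts of 100:
floating = floating_pool_size(300, 3)    # 100 desktops are enough
dedicated = dedicated_pool_size(300)     # 300 desktops would be required
```

The difference (here 200 desktops) is what drives the management, resource and potential licensing savings of floating assignment mentioned below.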
This means that a floating assignment is recommended because it decouples the user from a specific desktop and provides management and resource efficiency. This obviously could also reduce the licensing costs.
Dedicated desktop assignments are useful or required where users have applications or data that they install and keep on a specific desktop. A dedicated desktop can be assigned (fixed) to a specific user in advance, or automatically at the first logon, when the next unused desktop is assigned.
Full Clones, Linked Clones or Instant Clones?
One of the most important questions for your design is whether users need a stateful or stateless desktop to work with. If the user has a stateful desktop, you have to think about the data which needs to be included in a backup (e.g. user profile or application data).
If you provide stateless desktop images you face other challenges. What happens to a user’s profile or data? Should it be saved and be available in the next session?
Stateless desktop images
Also known as nonpersistent desktops, stateless architectures have many advantages, such as being easier to support and having lower storage costs. Other benefits include a limited need to back up the virtual machines and easier, less expensive disaster recovery and business continuity options.
Stateful desktop images
Also known as persistent desktops, these images might require traditional image management techniques. Stateful images can have low storage costs in conjunction with certain storage system technologies. Backup and recovery technologies such as VMware Site Recovery Manager are important when considering strategies for backup, disaster recovery, and business continuity.
There are two ways to create stateless (non-persistent) desktop images in Horizon 7:
You can create floating assignment pools or dedicated assignment pools of instant clone virtual machines. Folder redirection and roaming profiles can optionally be used to store user data.
You can use View Composer to create floating or dedicated assignment pools of linked clone virtual machines. Folder redirection and roaming profiles can optionally be used to store user data or configure persistent disks to persist user data.
There are several ways to create stateful (persistent) desktop images in Horizon 7:
You can create full clones or full virtual machines. Some storage vendors have cost-effective storage solutions for full clones. These vendors often have their own best practices and provisioning utilities. Using one of these vendors might require that you create a manual dedicated-assignment pool.
You can create pools of instant-clone or linked-clone virtual machines and use App Volumes user writable volumes to attach user data and user-installed apps.
Whether you use stateless or stateful desktops depends on the specific type of worker.
There could be a lot more to tell you about when creating desktop pools, but those details can be found on Tech Zone and the available PDFs and Youtube videos.
The next time we take a look at “Section 7 – Incorporate Application Services into a Horizon Physical Design”.
This is the sixth part of my VCAP7-DTM Design exam series. In part 5 I covered the creation of a physical design for Horizon storage. This time we take a look at section 5 of the blueprint, the creation of a physical network design for Horizon:
Section 5 – Create a Physical Design for Horizon Networking
Objective 5.1 – Plan and Design Network Requirements for Horizon solutions (including Mirage and Workspace One)
Objective 5.2 – Design Network and Security Components Based on Capacity and Availability Requirements
Objective 5.3 – Evaluate GPO and Display Protocol Tuning Options Based on Bandwidth and Connection Limits
Networking is also very important and exciting when creating a Horizon architecture, and a lot of questions come up when I think about Horizon, network access and devices:
What does the ISP infrastructure look like?
Do we have redundant internet uplinks?
Bandwidth in the data center?
How is the connection between Horizon client and agent?
ESXi host network interfaces?
Do we have mobile workers using WLAN?
I once had a customer with a really nice and modern data center infrastructure, but their firewalls didn’t provide enough throughput. Do your homework, know what the routing and switching looks like, and check every component’s limits.
Besides our VDI traffic, what about management, vMotion and vSAN traffic? Do we have enough network interfaces and bandwidth? For management traffic, 1Gbit interfaces are normally sufficient. But vMotion and vSAN traffic should have redundant 10Gbit connections and be on different subnets/VLANs.
Overview of the Network Architecture
In most network architectures two firewalls exist to create the DMZ.
The Unified Access Gateway (UAG) appliances are placed in the DMZ. UAG can perform authentication or pass a connection to the Connection Server for AD authentication.
Unauthenticated sessions are dropped at the Unified Access Gateway appliance and only authenticated sessions are allowed to connect to the internal resources.
UAG appliances in the DMZ communicate with the Connection Server instances inside the corporate firewalls and ensure that only the desired remote apps and desktop sessions can enter the corporate data center on behalf of this strongly authenticated user.
Inside the corporate firewall you install and configure at least two Connection Server instances. Their configuration data is stored in an embedded LDAP directory (AD LDS) and is replicated among all members of the group.
The session bandwidth used between the Horizon client and agent depends highly on the session configuration. For display traffic, many elements can affect network bandwidth, such as the protocol used, monitor resolution, frames per second, graphically intense applications or videos, and image and video quality settings.
Because the effects of each configuration can vary widely, it’s recommended to monitor the session bandwidth consumption as part of a pilot. Try to figure out the bandwidth requirements for each use case.
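Once a pilot has produced per-session averages, turning them into an aggregate uplink requirement is simple arithmetic. A minimal sketch (the per-session figure and headroom factor below are placeholders you would replace with your own pilot measurements, not VMware sizing values):

```python
# Aggregate display-protocol bandwidth from pilot measurements.
# The 250 kbit/s per-session figure and 25% headroom are illustrative assumptions.

def concurrent_bandwidth_mbps(sessions, avg_kbps_per_session, headroom=1.25):
    """Aggregate bandwidth in Mbit/s, with headroom for peaks above the average."""
    return sessions * avg_kbps_per_session * headroom / 1000

# 200 concurrent task workers measured at ~250 kbit/s average in the pilot:
office = concurrent_bandwidth_mbps(200, 250)
```

Doing this per use case (task worker, knowledge worker, video-heavy users) gives you a defensible bandwidth requirement per site instead of a single guessed number.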
I would say that Blast Extreme is the way to go, because it has been optimized for mobile devices and can intelligently switch between UDP and TCP (Adaptive Transport). PCoIP was developed by Teradici, but Blast is VMware’s own creation, which is why I think Blast will be “the future”, while RDP can still be used as a fallback for some special scenarios.
Display Protocol Tuning Options
I will not cover this topic in depth or explain how you can configure the maximum bandwidth for PCoIP via GPO. There are several options to decrease or increase the session bandwidth used:
Nowadays, most client devices are connected at 1Gbps. On LAN connections the user experience is perfect most of the time. But how is it with WAN connections, where latencies could be between 50 and 200ms? Do you apply Quality of Service (QoS) policies to prioritize Horizon traffic?
WAN optimization is one of the keywords when talking about WAN connections and is valuable for TCP-based protocols which require many handshakes between client and server, such as RDP.
PCoIP is UDP-based, which is why everyone in the past said you should prefer this protocol for connections with higher latencies, so that no WAN optimization or acceleration would be needed.
Inside the corporate network you would then use RDP because your network is stable – or did you leave this choice to the user?
With Blast Extreme, Adaptive Transport automatically detects higher latencies and switches between TCP and UDP if needed. Higher latencies can also occur with mobile devices working on WiFi networks.
In my opinion there are almost no reasons anymore to use anything other than Blast, because it’s also more network-efficient than PCoIP.
Use separate networks for vSphere management, VM connectivity, vMotion and vSAN traffic. Make sure you have redundancy across different physical adapters (NIC, PCI slot) and devices (switches, routers, firewalls). Consider the use of a vSphere Distributed Switch (vDS) to reduce management overhead and provide a richer feature set. Maybe NSX could be interesting for micro-segmentation.
Load balancing is a very important component of a Horizon architecture. The primary purpose of load balancing is to optimize performance by evenly distributing client sessions across all available Connection Server instances. The same is valid for UAG appliances, Identity Manager or App Volumes Manager. NSX comes with a virtual load balancer, but F5 and NetScaler are also fine.
Depending on your customer’s requirements and needs, the network design is another key part of removing single points of failure.
In part 7 we will figure out how we have to design Horizon desktops and pools.
This is the fifth part of my VCAP7-DTM Design exam series. In part 4 I covered the creation of a physical design for vSphere and Horizon components. This time we take a look at section 4 of the blueprint, the creation of a physical design for Horizon storage:
Section 4 – Create a Physical Design for Horizon Storage
Objective 4.1 – Create and Optimize a Physical Design for Horizon Infrastructure Storage
Objective 4.2 – Create and Optimize a Physical Design for View Pool Storage
Objective 4.3 – Create and Optimize a Physical Storage Design for Applications
Objective 4.4 – Create and Optimize a Tiered Physical Horizon Storage Design
Objective 4.5 – Integrate Virtual SAN into a Horizon Design
This article is not a comparison between HCI and a traditional storage architecture, nor a discussion of whether you build hosts yourself or buy Dell EMC’s VxRail or any other vSAN ReadyNode.
Since it is VMware’s strategy to push vSAN and get away from traditional storage, I only cover vSAN. For my VCDX design I will also move away from traditional storage and use vSAN – it’s also my customer’s strategy. The price for flash storage is decreasing constantly and makes a hybrid vSAN architecture less attractive – at least for our use cases.
In general, the storage design of a Horizon implementation is very critical. You have to think about capacity, growth, data/object placement, disaster recovery, the kind of SSD disks and so on. But in my opinion, HCI with vSAN makes your life a lot easier and simplifies the storage deployment.
If you fail to correctly size the storage and I/O capacity, your customer’s user experience will suffer or the deployment of new desktops will no longer be possible. So storage performance and sizing are vital for the satisfaction of your customers and their users!
All-Flash or Hybrid Architecture
The first thing you have to figure out and define is the vSAN platform you are going to deploy – an All-Flash or a hybrid architecture. An All-Flash vSAN configuration aims at delivering very high IOPS with low latencies. In an All-Flash configuration you also use two different grades of flash disks: lower-capacity, higher-endurance devices for the caching tier and more cost-effective, higher-capacity disks for the capacity tier.
There is no read cache in an All-Flash configuration, as all data is read directly from the capacity tier. Because you aim for extremely high IOPS, make sure you provide a dedicated 10Gb network for the vSAN traffic.
You can enable the deduplication and compression setting (not available when using a hybrid vSAN) in the vSAN cluster to reduce redundant copies of blocks within the same disk group to one copy and to compress the blocks after they have been deduplicated.
Erasure coding (RAID 5/6, only available with All-Flash) provides the same level of redundancy as mirroring, but with a reduced capacity requirement. In general, erasure coding means breaking data into multiple pieces and spreading them across multiple devices, while adding parity data so the original data can be reconstructed in the event data gets corrupted or lost.
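The capacity savings of erasure coding over mirroring can be made concrete with the standard vSAN efficiency ratios (RAID-1 with FTT=1 writes two full copies, RAID-5 writes 3 data + 1 parity, RAID-6 writes 4 data + 2 parity). A small sketch, with an example raw capacity of my choosing:

```python
# Approximate usable capacity for common vSAN storage schemes.
# Efficiency fractions reflect the copy/parity layout of each scheme;
# real-world results also depend on slack space, dedup/compression, etc.

def usable_capacity_gb(raw_gb, scheme):
    efficiency = {
        "raid1_ftt1": 1 / 2,  # mirroring: 2 full copies
        "raid5_ftt1": 3 / 4,  # erasure coding: 3 data + 1 parity
        "raid6_ftt2": 2 / 3,  # erasure coding: 4 data + 2 parity
    }
    return raw_gb * efficiency[scheme]

raw = 40_000  # example: 40 TB raw capacity in the cluster
mirrored = usable_capacity_gb(raw, "raid1_ftt1")  # 20 TB usable
erasure = usable_capacity_gb(raw, "raid5_ftt1")   # 30 TB usable
```

Same failure tolerance, 50% more usable capacity – that is the whole appeal of RAID-5 erasure coding for All-Flash VDI clusters.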
When using vSAN without further adjustments, your virtual desktops and infrastructure servers use the default vSAN storage policy. For infrastructure servers this might be okay, but for our desktops we need to create new policies. Cormac Hogan has very good material about Horizon and vSAN storage policies.
The Number of Failures to Tolerate defines the number of host, disk or network failures a storage object can tolerate. This number of Failures to Tolerate (FTT) has the greatest impact on your capacity in a vSAN cluster. Based on your configured availability requirements for a VM, the settings in the policy can lead to a higher consumption on the vSAN datastore (more copies of your data). For “n” failures tolerated, n+1 copies of the object are created and 2n+1 hosts are required.
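The n+1 copies and 2n+1 hosts rule from the paragraph above can be written down directly. A minimal sketch for RAID-1 mirroring (witness component overhead is deliberately ignored to keep the numbers simple):

```python
# The FTT rule for vSAN RAID-1 mirroring: n+1 replicas, 2n+1 hosts minimum.

def ftt_requirements(n_failures):
    """Replicas created and minimum hosts required for FTT = n."""
    return {"replicas": n_failures + 1, "min_hosts": 2 * n_failures + 1}

def raw_capacity_needed_gb(vm_usable_gb, n_failures):
    """Raw vSAN capacity consumed by a mirrored object (witness overhead ignored)."""
    return vm_usable_gb * (n_failures + 1)

ftt1 = ftt_requirements(1)             # 2 replicas, at least 3 hosts
raw = raw_capacity_needed_gb(100, 1)   # a 100 GB VM consumes 200 GB raw
```

This is why FTT has the greatest impact on capacity: every increment of n adds a full extra copy of every protected object.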
Consider configuring FTT = 0 for the OS disk of linked-clone floating pools or if you use full-clone non-persistent desktops. If vSAN experiences a failure, only non-persistent data will be lost.
I hope this information was helpful even though we didn’t go too deep. If you need to know more about vSAN, you’ll find tons of documents and other blogs about this technology.
In part 6 I’ll try to give you more information about the design for a Horizon network.