
Horizon on VMC on AWS Basics


In Switzerland, where we have a lot of small to medium-sized companies, the demand for cloud solutions is increasing. Customers are not yet ready to put all their servers and data into the cloud, so they go for a hybrid cloud strategy.

This has become even easier and more compelling since VMware's offering VMware Cloud on AWS (VMC on AWS) exists. This service, powered by VMware Cloud Foundation (VCF), brings VMware's SDDC stack to the AWS cloud and runs the compute, storage and network products (vSphere, vSAN, NSX) on dedicated bare-metal AWS hardware.

VMC on AWS

If you would like to try this offering, you have the option of a Single Host SDDC, which is the time-bound starter configuration and comes with a limitation of 30 days. After 30 days your Single Host SDDC will be deleted and all data will be lost. If you scale up to a 3-host SDDC, you retain all your data and your SDDC is no longer time-bound.

Availability

This fairly new service is already available in 13 global regions and has had 200+ features released since its launch. VMC on AWS is available almost everywhere – in the US and Asia Pacific for example – and in Europe we find the service hosted in Frankfurt, London, Paris and Ireland.

Use Cases

It’s not hard to guess what the use cases are for a service like this. If you are building up a new IT infrastructure and don’t want to run your own data center or purchase any servers, then you might want to consider VMC on AWS. Another project could be to expand into a new geography and extend your footprint into the cloud based on a VMware-consistent and enterprise-grade environment in the AWS cloud.

Customers are also finding a new way to easily deliver business continuity with VMware Site Recovery on top of VMC on AWS, which gives them a robust Disaster Recovery as a Service (DRaaS) option.

Another reason could be that your on-premises data center is at risk because of severe weather and you want to migrate all your workloads to another region.

Or you just want to quickly build a dev/test environment or do a PoC of a specific solution or application (e.g. VMware Horizon).

Elastic DRS

In my opinion EDRS is one of the best reasons to go for VMC on AWS. EDRS allows you to get the capacity you need in minutes to meet temporary or unplanned demand. The cluster can scale out and scale in depending on the generated recommendation.

A scale-out recommendation is generated when any of CPU, memory, or storage utilization remains consistently above thresholds. For example, if storage utilization goes above 75% but memory and CPU utilization remain below their respective thresholds, a scale-out recommendation is generated.

 

A scale-in recommendation is generated when CPU, memory, and storage utilization all remain consistently below thresholds.

This is interesting if your desktop pool is creating more instant clones and, for example, memory utilization goes above the threshold. There is also a safety check included in the algorithm, which runs every 5 minutes, to give the cluster time to settle between changes.
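
To make the recommendation logic above a bit more tangible, here is a minimal Python sketch of it. The threshold values and function names are my own illustrative assumptions; the real algorithm runs as part of the VMC on AWS service and evaluates utilization over time, not a single sample.

```python
# Illustrative sketch of the EDRS recommendation logic described above.
# Threshold values and names are assumptions for demonstration only.
SCALE_OUT_THRESHOLDS = {"cpu": 0.90, "memory": 0.80, "storage": 0.75}
SCALE_IN_THRESHOLDS = {"cpu": 0.50, "memory": 0.50, "storage": 0.20}

def recommend(utilization):
    """Return a recommendation for one check interval (the check runs every ~5 minutes).

    utilization example: {"cpu": 0.65, "memory": 0.85, "storage": 0.40}
    """
    # Scale out if ANY resource is above its high threshold ...
    if any(utilization[r] > SCALE_OUT_THRESHOLDS[r] for r in utilization):
        return "scale-out"
    # ... scale in only if ALL resources are below their low thresholds.
    if all(utilization[r] < SCALE_IN_THRESHOLDS[r] for r in utilization):
        return "scale-in"
    return "no-change"

print(recommend({"cpu": 0.65, "memory": 0.85, "storage": 0.40}))  # -> scale-out (memory above threshold)
```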

If you check the EDRS settings you have the option for the “Best Performance” or “Lowest Cost” policy. More information can be found here.

Horizon on VMC on AWS

For customers who are already familiar with a Horizon 7 on-premises deployment, Horizon on VMC on AWS lets you leverage the same architecture and the familiar tools. The main difference is that the vSphere infrastructure is now hosted in the cloud.

Use Cases

Horizon can be deployed on VMware Cloud on AWS for different scenarios. You could have the same reasons as before – data center expansion or a disaster recovery site in the cloud. But the most common reason why a customer goes for Horizon on VMC on AWS is flexibility combined with application locality.

Horizon 7 on VMC on AWS

We have customers who have been operating an on-premises infrastructure for years and are suddenly open to a cloud infrastructure. Because the SDDC stack in the cloud is the same as in the private cloud, the migration can be done very easily. You can even use the same management tools as before.

Minimum SDDC Size

The minimum number of hosts required per SDDC on VMware Cloud on AWS for production use is 3 nodes (hosts). For testing purposes, a 1-node SDDC is also available. However, since a single node does not support HA, it’s not recommended for production use.

Cloud Pod Architecture for Hybrid Cloud

If you are familiar with the pod and block architecture, you can start to create your architecture design. This hasn’t changed for the offering on VMC on AWS, but there are a few differences:

  • Each pod consists of a single SDDC
  • Each SDDC only has a single vCenter server
  • A Horizon pod consists of a single block

Horizon 7 Pod on VMC on AWS

Each SDDC only has one compute gateway, which limits the connections to ~2’000 VMs or user sessions. This means that the actual limit per pod on VMC on AWS is ~2’000 sessions as well. Once the number of compute gateways per SDDC can be increased, Horizon 7 on VMC on AWS will offer scalability comparable to an on-premises installation.

You can deploy a hybrid cloud environment when you use the Cloud Pod Architecture to interconnect your on-premises and Horizon pods on VMC on AWS. You can also stretch CPA across pods in two or more VMware Cloud on AWS data centers with the same flexibility to entitle your users to one or multiple pods as desired.

Supported Features

The deployment of Horizon 7 on VMC on AWS started with Horizon 7.5, but there was no feature parity at that time. With the release of Horizon 7.7 and App Volumes 2.15 we finally got the requested feature parity. This means that since Horizon 7.7 we can use Instant Clones, App Volumes and UEM. At the time of writing the vGPU feature is not available yet, but VMware is working with Amazon on it. With the release of Horizon 7.8, a pool on VMware Cloud on AWS is now capable of using multiple network segments, allowing you to use fewer pools and/or smaller scopes. Please consult this KB for the currently supported features.

Use AWS Native Services

When you set up the Horizon 7 environment in VMware Cloud on AWS you have to install and configure the following components:

  • Active Directory
  • DNS 
  • Horizon Connection Servers
  • DHCP
  • etc.

If you are deploying Horizon 7 in a hybrid cloud environment by linking the on-premises pod with the VMC on AWS pod, you must prepare the on-premises Microsoft Active Directory (AD) to access the AD on VMware Cloud on AWS.

My recommendation: Use the AWS native services if possible 🙂

AWS Directory Services

AWS Managed Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO).
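
As a hedged example of how this could look in practice, here is a minimal boto3 sketch that creates an AWS Managed Microsoft AD directory. All identifiers (domain name, password, VPC and subnet IDs) are placeholders, not values from a real deployment.

```python
# Minimal sketch: create an AWS Managed Microsoft AD directory with boto3.
# All identifiers below are placeholders.
import boto3

ds = boto3.client("ds", region_name="eu-central-1")

response = ds.create_microsoft_ad(
    Name="corp.example.com",      # fully qualified domain name (placeholder)
    ShortName="CORP",             # NetBIOS name
    Password="REPLACE-with-a-strong-Admin-password",
    Edition="Standard",           # or "Enterprise"
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        # Two subnets in different Availability Zones are required
        "SubnetIds": ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    },
)
print("Directory ID:", response["DirectoryId"])
```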

Amazon Relational Database Service

Amazon RDS is available on several database instance types – optimized for memory, performance or I/O – and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

This service allows you to quickly set up a SQL Express (not recommended for production) or regular SQL Server instance which can be used for the Horizon Event DB or App Volumes.
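
For illustration, here is a minimal boto3 sketch that provisions a small SQL Server Express instance on RDS, for example for a lab Horizon Event DB. Instance class, storage size, names and network settings are placeholder assumptions, and Express is, as mentioned, not recommended for production.

```python
# Minimal sketch: create a SQL Server Express instance on Amazon RDS with boto3.
# Identifiers, sizes and network settings are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-central-1")

rds.create_db_instance(
    DBInstanceIdentifier="horizon-eventdb",
    Engine="sqlserver-ex",                 # SQL Server Express edition
    DBInstanceClass="db.t3.small",
    AllocatedStorage=20,                   # GiB
    MasterUsername="horizonadmin",
    MasterUserPassword="REPLACE-with-a-strong-password",
    DBSubnetGroupName="my-db-subnet-group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,
)
```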

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. Built on Windows Server, Amazon FSx provides shared file storage with the compatibility and features that your Windows-based applications rely on, including full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).

At the time of writing, I have to mention that the FSx service has not yet been officially tested and qualified for User Environment Manager (UEM), but that’s no problem. Technically it works totally fine.
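
If you want to script this as well, the following boto3 sketch shows how an FSx for Windows File Server file system joined to the managed AD could be created, for example to host UEM configuration and profile shares. All IDs, sizes and the throughput value are placeholder assumptions.

```python
# Minimal sketch: create an Amazon FSx for Windows File Server file system
# joined to the AWS Managed Microsoft AD. All IDs and sizes are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="eu-central-1")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                      # GiB
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # the managed AD from above
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 16,             # MB/s
    },
)
```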

Amazon Route 53

The connectivity to data centers in the cloud can be a challenge. You need to manage the external namespace to give users access to their desktop in the cloud (or on-prem). For a multi-site architecture the solution is always Global Server Load Balancing (GSLB), but how is this done when you cannot install your physical appliance anymore (in your VMC on AWS SDDC)?

The answer is easy: Leverage Amazon Route 53!

Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. 
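
To sketch what such a GSLB-style setup could look like, here is a hedged boto3 example: one health check per site plus a failover record, so users are sent to the healthy Unified Access Gateway endpoint. The hosted zone ID, domain names, probe path and IP addresses are placeholder assumptions.

```python
# Minimal sketch: Route 53 health check + failover routing for a Horizon
# entry point (e.g. Unified Access Gateway). All values are placeholders.
import boto3

r53 = boto3.client("route53")

# Health check against the primary site's UAG / load balancer
hc = r53.create_health_check(
    CallerReference="uag-primary-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "uag-primary.example.com",
        "Port": 443,
        "ResourcePath": "/favicon.ico",   # assumed probe path
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary failover record; a second record with Failover="SECONDARY"
# would point at the other site (on-prem or another SDDC).
r53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "desktop.example.com",
                "Type": "A",
                "SetIdentifier": "primary-site",
                "Failover": "PRIMARY",
                "TTL": 60,
                "HealthCheckId": hc["HealthCheck"]["Id"],
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```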

Check Andrew Morgan’s blog article if you need more information about Route 53.

Horizon on VMC on AWS rocks! 🙂

 

vSAN Basics for a Virtual Desktop Infrastructure with VMware Horizon

As an EUC architect you need fundamental knowledge about VMware’s SDDC stack and this time I would like to share some more basics about VMware vSAN for VMware Horizon.

In part 5 of my VCAP7-DTM Design exam series I already posted some YouTube videos about vSAN in case you prefer videos over reading. To further prove my vSAN knowledge I decided to take the vSAN Specialist exam, which focuses on version 6.6.

To extend my vSAN skills and to prep myself for this certification I have bought the VMware vSAN 6.7 U1 Deep Dive book which is available on Amazon.

vSAN 6.7 U1 Deep Dive

vSAN Basics – Facts and Requirements

Out in the field, not every EUC guy has enough basic knowledge about vSAN, so I want to provide some facts about this technology here. This is not an article about all the background information and the detailed things you can do with vSAN, but it should help you get a basic understanding. If you need more details about vSAN, I highly recommend the vSAN 6.7 U1 Deep Dive book and the content available on storagehub.vmware.com.

  • The vSAN cluster requires at least one flash device and one capacity device (magnetic or flash)
  • A minimum of three hosts is required unless you go for a two-node configuration (requires a witness appliance)
  • Each host participating in the vSAN cluster requires a vSAN enabled VMkernel port
  • Hybrid configurations require a minimum of one 1GbE NIC, 10GbE is recommended by VMware
  • All-Flash configurations require a minimum of one 10GbE NIC
  • vSAN can use RAID-1 (mirroring) and RAID-5/6 (erasure coding) for the VM storage policies
  • RAID-1 is used for performance reasons, erasure coding is used for capacity reasons
  • Disk groups require one flash device for the cache tier and one or more flash/magnetic devices for the capacity tier
  • There can be only one cache device per disk group
  • Hybrid configuration – The SSD cache is used for read and write (70/30)
  • All-Flash configuration – The SSD cache is used 100% as a write cache
  • Since version 6.6 there is no multicast requirement anymore
  • vSAN supports IPv4 and IPv6
  • vSphere HA needs to be disabled before vSAN can be enabled and configured
  • The raw capacity of a vSAN datastore is calculated by multiplying the number of capacity devices per host by their size and by the number of ESXi hosts (e.g. 5 x 2TB x 6 hosts = 60 TB raw; see the sketch after this list)
  • Deduplication and compression are only available in all-flash configurations
  • vSAN stores VM data in objects (VM home, swap, VMDK, snapshots)
  • The witness does not store any VM specific data, only metadata
  • vSAN provides data at rest encryption which is a cluster-wide feature
  • vSAN integrates with CBRC (host memory read cache) which is mostly used for VMware Horizon
  • Unless you assign a different policy, the vSAN Default Storage Policy is applied to a VM
  • Each stretched cluster must have its own witness host (no additional vSAN license needed)
  • Fault domains are mostly described with the term “rack awareness”
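
As a quick illustration of the raw-capacity calculation mentioned in the list, here is a small Python sketch that also shows the approximate usable capacity under the two protection schemes. The 2x (RAID-1, FTT=1) and ~1.33x (RAID-5) overhead factors are the standard vSAN values; slack space, deduplication and compression are ignored here.

```python
# Raw capacity = capacity devices per host x device size x number of hosts.
def raw_capacity_tb(devices_per_host, device_size_tb, hosts):
    return devices_per_host * device_size_tb * hosts

raw = raw_capacity_tb(5, 2, 6)   # 5 x 2 TB x 6 hosts
print(raw)                       # 60.0 TB raw
print(raw / 2)                   # ~30 TB usable with RAID-1 mirroring (FTT=1)
print(raw / (4 / 3))             # ~45 TB usable with RAID-5 erasure coding (FTT=1)
```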

vSAN for VMware Horizon

The following information can be found in the VMware Docs for Horizon:

When you use vSAN, Horizon 7 defines virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles, which you can modify. Storage is provisioned and automatically configured according to the assigned policies. The default policies that are created during desktop pool creation depend on the type of pool you create.

This means that Horizon will create storage policies when a desktop pool gets created. To get more information I will provision a floating Windows 10 instant clone desktop pool. Before doing that, let’s first have a look at the policies which will appear in vCenter depending on the pool type:

Since I’m going to create a floating instant clone desktop pool, I assume that I should see some of the storage policies marked in yellow.

Instant Clones

First of all we need to take a quick look at instant clones again. I only cover instant clones since it’s VMware’s recommended provisioning method. As we can learn from this VMware blog post, you can massively reduce the time needed to provision a desktop (compared to View Composer Linked Clones).

VMware Instant Clones

The big advantage of the instant clone technology (vmFork) is the in-memory cloning technique of a running parent VM.

The following table summarizes the types of VMs used or created during the instant-cloning process:

Instant Cloning VMs
Source: VMWARE HORIZON 7 INSTANT-CLONE DESKTOPS AND RDSH SERVERS 

Horizon Default Storage Policies

To add a desktop pool I have created my master image first and took a snapshot of it. In my case the VM is called “dummyVM_blog” and has the “vSAN Default Storage Policy” assigned.

How does it go from here when I create the floating Windows 10 instant clone desktop pool?

Instant Clone Technology

The second step in the process is where the instant-clone engine uses the master VM snapshot to create one template VM. This template VM is linked to the master VM. My template VM automatically got the following storage policy assigned:

The third step is where the replica VM gets created from the template VM. The replica VM is a thin-provisioned full clone of the internal template VM. The replica VM shares a read disk with the instant-clone VMs after they are created. I only have the vSAN datastore available, and one replica VM is created per datastore. The replica VM automatically got the following storage policy assigned:

The fourth step involves the snapshot of the replica VM which is used to create one running parent VM per ESXi host per datastore. The parent VM automatically got the following storage policies assigned:

Afterwards, the running parent VM is used to create the instant clone, but the instant clone will be linked to the replica VM and not the running parent VM. This means a parent VM can be deleted without affecting the instant clone. The instant clone automatically got the following storage policies assigned:

And the complete stack of VMs with the two-node vSAN cluster in my home lab, without any further datastores, looks like this:

vCenter Resource Pool 

Now we know the workflow from a master VM to the instant clone and which default storage policies get created and assigned by VMware Horizon. We know from the VMware Docs that FTT=1 and one stripe per object is configured and that there isn’t any difference except for the name. I checked all storage policies in the GUI again and indeed they are all exactly the same. Note this:

Once these policies are created for the virtual machines, they will never be changed by Horizon 7

Even though I didn’t use linked clones with a persistent disk, the storage policy PERSISTENT_DISK_<guid> gets created. With instant clones there is no option for a persistent disk yet (you have to use App Volumes with writable volumes), but I think this will come for instant clones in the future and then we also won’t need View Composer anymore. 🙂

App Volumes Caveat

Don’t forget this caveat for App Volumes when using a vSAN stretched cluster.

VMware Mirage – Alternatives

As some of you know, Mirage was (and still is) a revolutionary technology. Wanova released it in 2011, and in 2012 Mirage became part of VMware.

VMware Mirage is used by customers for their desktop image management and for backup and recovery requirements.

VMware Mirage provides next-generation desktop image management for physical desktops and POS devices across distributed environments. Automate backup and recovery and simplify Windows migrations.

Mirage is and was the solution for certain use cases and solved common desktop challenges. Therefore not all customers are happy that Mirage reaches end of support (EOS) on June 30, 2019. 🙁

But why is VMware Mirage being removed from support?

Well, the answer is very simple. Today, the market is heading in two directions – it’s all about the applications and end-user devices (called the Digital Workspace). That’s why customers should move or are somehow forced to move to a Unified Endpoint Management solution which is considered to be “the” Windows desktop management solution of the future. The future of Windows is apparently cloud based and Mirage has not been designed or architected for this.

What are the alternatives?

VMware has no successor or product which can replace all of the features and functions Mirage provided, but Workspace ONE is the official alternative solution when it comes to Windows desktop management. 

There are really a lot of use cases and reasons why customers in the past decided to choose Mirage:

  • Reduce Management Complexity (e.g. single management console)
  • Desktop Backup and Recovery (automated and continuous system or user data backup)
  • Image Management (image layering)
  • Patch Management
  • Security & Compliance (auditing and encrypted connections)
  • Simple Desktop OS Migrations (e.g. Windows 7 to Windows 10 migrations)

VMware Mirage really simplified desktop management and provided a layered approach to OS and application rollouts. Customers also had use cases where the physical desktops were not always connected to the corporate network, which is a common challenge IT departments were facing.

The desktop images are stored in your own data center with secure encrypted access from all endpoints. You can also customize access rights to data and apps.  Even auditing capabilities are available for compliance requirements.
And the best and most loved feature was the possibility for a full system backup and recovery!

IT people loved Mirage because it was so simple to restore any damaged or lost device to the most recent state (snapshot).

For branch offices with no IT staff onsite, Mirage was also the perfect fit. An administrator could distribute updates or Windows images to all remote laptops and PCs without any user interaction – maybe a reboot was required now and then. But that’s all!

In case of bandwidth problems you could also take advantage of the Branch Reflector technology, which ensured that one endpoint downloaded the image updates and then distributed them locally to other computers (peers), relieving the WAN connection drastically.

Can Workspace ONE UEM replace Mirage?

From a technical perspective, my opinion is definitely NO. Workspace ONE does not have the complete feature set of Mirage when it comes to Windows 10 desktop management, although I have to say both are almost congruent.

I agree that Workspace ONE (WS1) is the logical step or way to “replace” Mirage, but you have to know this:

  • WS1 cannot manage desktop images for OS deployments. Nowadays, it is expected that a desktop is delivered pre-staged with a Windows 10 OS from the vendor or that your IT department is doing the staging for example with WDS/MDT.
  • WS1 has no backup and recovery function. If you use Dell Factory Provisioning, you can go back to a “restore point” where all of your pre-installed and manually installed applications get restored after a device wipe, for example. But if the local hard disk fails and this restore partition is gone, you have to get your device or hard disk replaced. Without Dell Factory Provisioning this means that IT still has to deploy the desktop image with WDS/MDT.

For some special use cases it is even necessary to implement VMware Horizon, User Environment Manager, OneDrive for Business etc., but even then WS1 is a good complement since it can also be used for persistent virtual desktops!

As you can see, a transition from Mirage to WS1 is not so easy, and the few but most important differences are the reason why customers and IT admins are not so amused about the EOS announcement of VMware Mirage.

VCP-DW 2018 Exam Experience

On the 30th November 2018 I passed my VCAP7-DTM Design exam and now I would like to share my VCP-DW 2018 (2V0-761) exam experience with you guys.

I’m happy to share that I also passed this exam today, and I thought it might be helpful to share my exam experience (even though a new VCP-DW 2019 exam will be released on 28th February 2019), since it’s still a pretty new certification and not much information can be found in the vCommunity.

How did I prepare myself? To be honest, I had almost no hands-on experience and therefore had to get the most out of the available VMware Workspace ONE documentation. I already had basic knowledge from my daily work as a solution architect, but it was obvious that this would not be enough to pass. Most of my basic knowledge I gained from the VMware Workspace ONE: Deploy and Manage [V9.x] course, which was really helpful in this case.

If you check the exam prep guide you can see that you have to study tons of PDFs and parts of the online documentation. 

I didn’t check all the links and documents in the exam prep guide, but I can recommend reading these additional docs:

In my opinion you’ll get a very good understanding of Workspace ONE (UEM and IDM) if you read all the documents above. In addition to the papers, I recommend getting some hands-on experience with the Workspace ONE UEM and IDM consoles.

As a VMware employee I have access to VMware TestDrive, where I have a dedicated Workspace ONE UEM sandbox environment. I enrolled an Android, an iOS and two Windows 10 devices and configured a few profiles (payloads). I also deployed the Identity Manager Connector in my home lab to sync my Active Directory accounts with my Identity Manager instance, which also enables the synchronization of my future Horizon resources like applications and desktops.

I think I spent around two weeks on preparation, including the classroom training at the AirWatch Training Facility in Milton Keynes, UK.

The exam (version 2018) itself consists of 65 multiple-choice and drag-and-drop questions, and I had 135 minutes to answer them all. If you are prepared and know your stuff, I doubt that you will need more than one hour, but this could change with the new VCP-DW 2019. 🙂

I’m just happy that I have a second VCP exam in my pocket, and now I have to think about the next certification. My scope as a solution architect will change a little. In the future I will also cover SDDC (software-defined data center) topics like vSphere, vSAN, NSX, VMware Cloud Foundation, Cloud Assembly and VMC on AWS. That’s why I’m thinking about earning the VCP-DCV 2019 or the TOGAF certification.

VCAP7-DTM Design Exam Passed

On 21 October I took my first shot at the VCAP7-DTM Design exam and failed, as you already know from this article. Today I am happy to share that I finally passed the exam! 🙂

What did I do with the information and notes about my weaknesses from the last exam score report? I read a lot of additional VMware documents and guides about:

  • Integrating Airwatch and VMware Identity Manager (vIDM)
  • Cloud Pod Architecture
  • PCoIP/Blast Display Protocol
  • VMware Identity Manager
  • vSAN 6.2 Essentials from Cormac Hogan and Duncan Epping
  • Horizon Apps (RDSH Pools)
  • Database Requirements
  • Firewall Ports
  • vRealize Operations for Horizon
  • Composer
  • Horizon Security
  • App Volumes & ThinApp
  • Workspace ONE Architecture (SaaS & on-premises)
  • Unified Access Gateway
  • VDI Design Guide from Johan van Amersfoort

Today I had a few different questions during the exam, but reading more PDFs about the above-mentioned topics helped me to pass, as it seems. In addition to that, I attended a Digital Workspace Livefire Architecture & Design training, which is available for VMware employees and partners. The focus of this training was not only on designing a Horizon architecture, but also on VMware’s EUC design methodology.

If you have the option to attend classroom trainings, then I would recommend the following:

There were two things I struggled with during the exam: sometimes the questions were not clear enough, so I had to make assumptions about what they could mean, and the exam is based on Horizon 7.2 and other old product versions of the Horizon suite:

  • VMware Identity Manager 2.8
  • App Volumes 2.12
  • User Environment Manager 9.1
  • ThinApp 5.1
  • Unified Access Gateway 2.9
  • vSAN 6.2
  • vSphere 6.5
  • vRealize Operations 6.4
  • Mirage 5.x

But maybe it’s only me, since I have almost no hands-on experience with Horizon, none with Workspace ONE, and in addition to that I have only been with VMware for 7 months now. 🙂

It is time for an update, and VMware has already announced that they are publishing a new version of the design exam, called VCAP7-DTM 2019, next year.

What about VCIX7-DTM?

 In part 2 of my VCAP7-DTM Design exam blog series I mentioned this:

Since no VCAP7-DTM Deploy exam is available and it’s not clear yet when this exam will be published, you only need the VCAP7-DTM Design certification to earn the VCIX7-DTM status. I have got this information from VMware certification.

This information is not correct, sorry. VMware certification retracted their statement and clarified that, as long as no VCAP7-DTM Deploy exam is available, you need to pass the VCAP6-DTM Deploy exam to earn the VCIX7-DTM badge.

I don’t know yet if I want to pursue the VCIX7-DTM certification and will think about it when the deploy exam for Horizon 7 is available.

What’s next?

Hm… I am going to spend more time with my family again and will use some of my 3 weeks of vacation to assemble and install my new home lab.

Then I also have a few ideas for topics to write about, like:

  • Multi-Domain and Trust with Horizon 7.x
  • Linux VDI Basics with Horizon 7.x
  • SD-WAN for Horizon 7.x
  • NSX Load Balancing for Horizon 7.x

These are only a few items from my list, but let’s see if I really find the time to write a few articles.

In regards to certification, I think I’ll continue with these exams:

This has no priority for now and can wait until next year! Or… I could try the VCP-DW 2018 since I have vacation. Let’s see 😀

VCAP7-DTM Design Exam, Part 12

I failed the VCAP7-DTM Design exam, but I expected it, and the first try showed me what I need to learn better and where my weaknesses are. Let me tell you about my exam experience.

I arrived on time at the PearsonVUE test center, but they had PC problems, so I first had to wait 30 minutes until I could start the exam. The timer showed that I had two hours for the 60 questions. Most of the time I was guessing and eliminating the obviously wrong answers, and so I was through 50% of the questions in 50% of the time. If you know a little bit more than I do and work/worked with all the products on a daily basis, I would say the exam is a piece of cake!

Nevertheless, I answered all 60 questions 15 minutes before the timer ended, but I didn’t review any of them, because I knew that I still wouldn’t come up with better or correct answers. This may sound like I failed with a score of 0, but no: I had 252 of the 300 needed points, which is a sign that I just need to improve my weak spots and the topics I didn’t check during my preparation time.

Today I’m going to travel to VMware AirWatch in Milton Keynes (UK) for my VMware Workspace ONE: Deploy and Manage [V9.x] training, which starts tomorrow. And I have to prepare a presentation for a roadshow with five events where I will be the speaker in a 30-minute slot. This means no time for studying yet.

But I’m lucky that I still got a seat at the Digital Workspace Livefire Architecture & Design training taking place in three weeks. This will be the last part of my preparation for the retake, which I have planned for 23rd November 2018. But first I have to wait for my new exam voucher. 🙂

I cannot tell you which topics/technologies or questions were asked during the exam, but I can assure you that I didn’t expect some of the questions – they were just craaaaazy or about veeeery old stuff.

This is also one of my problems. You have to study things which are not valid anymore for today’s product versions or implementations. In a few cases the configuration limits or some parts of an architecture have changed.

So, I read the exam blueprint again and checked some of the attached URLs and document links again. In my opinion, you should know the following products and versions for the exam:

  • Horizon 7.2
  • VMware Identity Manager 2.8
  • App Volumes 2.12
  • User Environment Manager 9.1
  • ThinApp 5.1
  • Unified Access Gateway 2.9
  • vSAN 6.2
  • vSphere 6.5
  • vRealize Operations 6.4
  • Mirage 5.x

So, this was my exam experience of the VCAP7-DTM Design exam and my advice afterwards. It is totally okay to fail, because it will just help you if you are not prepared well enough or went too early for your first shot.

My last advice: Use the note board for the difficult answers and topics you have no clue about. If you have enough time, have reviewed your answers and are ready to end the exam, memorize all your notes. Just in case you didn’t pass, you now have the notes in your mind and can transfer them to your personal notebook. This is totally legal and really helpful! 🙂

Good luck to you if you take the exam. I have another four weeks now to fill the gaps. 🙂 Let’s see if I pass or not.