Implementing Failover Clustering on Windows Server 2019

This topic shows how to create a failover cluster on Windows Server 2019 by using the Failover Cluster Manager snap-in. Windows Server 2019 ships with so many new features that it's tough to keep track of them all. Here we will focus on failover clustering, which has grown into a high-availability platform that helps us protect mission-critical applications such as Microsoft SQL Server and various other Windows services and applications. Failover clustering is part of the foundation for the Dynamic Datacenter and technologies such as live migration. The new features in Windows Server 2019 that relate directly to failover clustering are:

  • Cluster sets
  • Azure-aware clusters
  • Cross-domain cluster migration
  • USB witness
  • Cluster infrastructure improvements
  • Cluster Aware Updating supports Storage Spaces Direct
  • File share witness enhancements
  • Cluster hardening
  • Failover Cluster no longer uses NTLM authentication

If you want to learn more about each one, you can check this link.

Let's move on to the scenario for this topic: implementing failover clustering on Windows Server 2019. Below is the diagram that will be used during this implementation.

From the diagram above we have 3 servers:

  1. VM-WIN-STO01 – used as the iSCSI storage server
  2. VM-WIN-NODE01 – used as cluster node 1
  3. VM-WIN-NODE02 – used as cluster node 2

 

Let's begin implementing failover clustering on Windows Server 2019, step by step.

  1. Install the iSCSI Target Server feature on the storage server VM-WIN-STO01 (a PowerShell equivalent is sketched after this list).
  2. From Server Roles (1) > File and Storage Services > File and iSCSI Services, select iSCSI Target Server (2), click Next twice, and then Finish to install the role.
  3. Wait for the installation to finish, then go to Server Manager > File and Storage Services (3) > iSCSI (4) and click the link to create an iSCSI virtual disk (5).
  4. In my case, I have an additional disk, S:\, that will be used for the new iSCSI virtual disks. On the same tab, specify the path where the virtual disks will be created. You can leave the default and the disk will be created under your main drive (in my case S:\iSCSIVirtualDisk), but I prefer a custom location, directly in S:\.
  5. Specify the iSCSI virtual disk name.
  6. Next, specify the virtual disk size and select the desired option: fixed, dynamically expanding, or differencing.
  7. The next tab assigns the iSCSI target on our storage server. In my case there is no existing target, so click Next to go to the Target Name and Access tab and enter the target name, as shown below:
  8. The next tab is Access Servers (7); the access servers are the nodes that need to be added (8). There are four identifier types to choose from when adding a new one (9); I prefer IP Address. After entering the value, click OK (10).
  9. The next tab is for authentication of the access servers; for my demo, I'll skip this.
  10. The Confirmation tab summarizes our configuration. If everything looks OK, click the Create button.
  11. I will create another four iSCSI virtual disks with the same settings as iSCSIvolume01.
  12. Now all of these disks need to be connected to the nodes. The following configuration is performed on the nodes, starting in my scenario with VM-WIN-NODE01.
  13. From Server Manager > Tools, open iSCSI Initiator and click Yes to start the iSCSI service (see the initiator sketch after this list for a scripted alternative).
  14. On the Discovery tab (11), click the Discover Portal button (12), add the IP address or DNS name of the target (the storage server), and click OK.
  15. Go back to the Targets tab (15) and click the Connect button (16); the status of the target should change to Connected.
  16. The next step is to bring all the disks online on the same server, VM-WIN-NODE01. Open Disk Management by typing diskmgmt.msc, right-click each disk, and click Online.
  17. Initialize the disks by right-clicking a disk; you can do this for all the disks at the same time.
  18. Select all the disks, choose the partition style, MBR or GPT, depending on your requirements, and click OK.
  19. Create a new volume on every disk that was initialized in the previous step (the disk preparation is also sketched in PowerShell after this list).
  20. Now we need to repeat the iSCSI initiator configuration (steps 13–16) on the 2nd node, in my case VM-WIN-NODE02: open iSCSI Initiator to add the target, rescan the disks in Disk Management, and then just bring the disks online.
  21. Now all you have to do is install Failover Clustering on both nodes (a one-line PowerShell equivalent is sketched after this list).
  22. Go to Server Manager > Add Roles and Features.
  23. Go to the Features tab (17) and select Failover Clustering (18).
  24. Next, confirm the selected feature and install it.
  25. After installing Failover Clustering, open it from Server Manager > Tools > Failover Cluster Manager.
  26. Before creating the cluster, the most important step is to validate the configuration. This is done by clicking Validate Configuration (21), as per the image below.
  27. In the wizard that opens, select the nodes as per the image below.
  28. The next tab lets you choose between running all tests or only selected tests. Microsoft recommends running all the tests, and of course I recommend the same, to avoid a misconfiguration.
  29. Next is the confirmation, where you can see the servers that were added on the first tab and the list of tests that will be performed.
  30. After validation, check the report for any errors and save it for further reference.
  31. The next step after validation is to create the cluster. Click Create Cluster in the Actions menu on the right (the matching New-Cluster command is sketched after this list).
  32. On the Select Servers tab, add the nodes (23), in my case VM-WIN-NODE01 and VM-WIN-NODE02, and click the Add button (24).
  33. The next tab is the configuration of the cluster name (25) and the assignment of an IP address for your cluster. It is very important that the address you assign is available on your network; if you use DHCP, make sure to reserve it. I chose 10.1.1.15 for my cluster.
  34. All of these configurations are performed on VM-WIN-NODE01. The next tab is the confirmation of your wizard, which includes the option Add all eligible storage to the cluster. Best practice is to uncheck this box and add the disks manually later.
  35. After creating the cluster, as per the image below, we have five sections under the cluster: Roles, Nodes, Storage, Networks, and Cluster Events. As I mentioned in step 34, best practice is to add the storage after creating the cluster, so navigate to the cluster's Storage section and right-click Disks > Add Disk (28).
  36. From this wizard, add the desired disks and click OK. If you look closely, the disks are assigned to NODE02, because they were last brought online on NODE02.
  37. A role can be added by right-clicking Roles > Configure Role (29), as per the image below (a brief PowerShell example follows the list as well).
  38. The High Availability Wizard offers many roles that can be installed on this cluster.
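
For reference, here is a rough PowerShell equivalent of steps 1–11 (the iSCSI Target Server setup on VM-WIN-STO01). The target name "ClusterTarget", the 10 GB size, and the node addresses 10.1.1.11 and 10.1.1.12 are assumptions for illustration; substitute your own values.

```powershell
# Run on VM-WIN-STO01. Assumes the S:\ volume already exists and that
# 10.1.1.11 / 10.1.1.12 are the cluster nodes (example addresses).

# Steps 1-2: install the iSCSI Target Server role service
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools

# Steps 7-8: create a target and allow both nodes as initiators (by IP address)
New-IscsiServerTarget -TargetName "ClusterTarget" `
    -InitiatorIds @("IPAddress:10.1.1.11", "IPAddress:10.1.1.12")

# Steps 3-6 and 11: create five virtual disks in S:\ and map them to the target
# (add -UseFixed to New-IscsiVirtualDisk if you want fixed-size disks)
1..5 | ForEach-Object {
    $path = "S:\iSCSIvolume0$_.vhdx"
    New-IscsiVirtualDisk -Path $path -SizeBytes 10GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" -Path $path
}
```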
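
A similar sketch covers the iSCSI initiator side (steps 13–15 and 20) and is run on each node. The address 10.1.1.10 is a placeholder for the storage server.

```powershell
# Run on VM-WIN-NODE01 and VM-WIN-NODE02.

# Step 13: start the iSCSI initiator service and make it start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Step 14: discover the target portal (the storage server)
New-IscsiTargetPortal -TargetPortalAddress "10.1.1.10"

# Step 15: connect to the discovered target and make the connection persistent
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```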
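
The disk preparation in steps 16–19 can also be scripted. This sketch runs on VM-WIN-NODE01 only and assumes the only iSCSI disks visible to the node are the five shared disks; review what Get-Disk returns before formatting anything.

```powershell
# Run on VM-WIN-NODE01 only; on VM-WIN-NODE02 it is enough to connect the
# target and bring the disks online (they are already initialized and formatted).

Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' } | ForEach-Object {
    # Steps 16-18: bring the disk online, clear read-only, initialize as GPT
    Set-Disk -Number $_.Number -IsOffline $false
    Set-Disk -Number $_.Number -IsReadOnly $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT -ErrorAction SilentlyContinue

    # Step 19: create a single NTFS volume using all available space
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```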
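
Steps 22–24, 26–30, and 31–34 map to three cmdlets from the FailoverClusters tooling. The cluster name CLUSTER01 is an example; 10.1.1.15 is the address used in this article.

```powershell
# Steps 22-24: install the Failover Clustering feature on both nodes
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools `
    -ComputerName VM-WIN-NODE01
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools `
    -ComputerName VM-WIN-NODE02

# Steps 26-30: run the full validation suite and review the HTML report it produces
Test-Cluster -Node VM-WIN-NODE01, VM-WIN-NODE02

# Steps 31-34: create the cluster with a static address and without
# automatically adding all eligible storage (-NoStorage)
New-Cluster -Name CLUSTER01 -Node VM-WIN-NODE01, VM-WIN-NODE02 `
    -StaticAddress 10.1.1.15 -NoStorage
```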
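
Finally, a sketch of steps 35–38: adding the shared disks to the cluster and configuring a role. The file server role in the comment is only one example of what the High Availability Wizard offers; the role name FS01, the disk name, and the 10.1.1.16 address are hypothetical.

```powershell
# Steps 35-36: add every disk that is visible to all nodes to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Steps 37-38 (example only): a clustered file server role could be added like this
# Add-ClusterFileServerRole -Name FS01 -Storage "Cluster Disk 1" -StaticAddress 10.1.1.16
```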

Let me know if this article was useful for you. If you encountered any issues or difficulties during the process, let me know in a comment or ask a question here.

 
