Saturday, 24 July 2021

vSphere HA | Requirements | Admission Control | General Introduction

Hello my dear readers, greetings!

It has been quite a long time since my last post; I was engaged in my training deliveries and couldn't spare the time to write.

Let's start our topic discussion!

vSphere HA is normally recognized by the restart of VMs on the surviving hosts in a vSphere cluster.

We normally enable vSphere HA on a vCenter Server cluster object, and it is helpful in different situations, such as
  1. ESXi host hardware issues
  2. Network disconnection among ESXi hosts in a cluster
  3. Shared storage connectivity or availability issues on ESXi hosts
  4. Planned maintenance of ESXi hosts
How does it work?

Despite its name (HA = High Availability), vSphere HA does not keep VMs running without interruption; it restarts VMs on surviving hosts that can accommodate the VMs' resource requirements, as shown in the picture below.



For example, if an ESXi host develops a hardware problem and stops working, its VMs become unavailable. The (interrupted) VMs are then taken care of by the other available ESXi hosts in the same cluster, which power them on by accessing the same shared datastore.

These failures could be host hardware faults, network interruptions, storage inaccessibility, and so on.

So we have to fulfill some important requirements for vSphere HA. Let's discuss them.

The basic high-level requirements are as below
  1. vCenter Server (vpxd)
  2. Fault Domain Manager (FDM, local to every host)
  3. hostd (local to every host)
Let's break these requirements into understandable pieces

Hardware Requirements

  • Minimum 1 shared datastore; 2 shared datastores recommended
  • Minimum 2 and maximum 64 ESXi hosts in a cluster
  • Minimum 1 Ethernet network with a static IP address per host; 2 Ethernet networks with static IP addresses (and multiple gateways) recommended
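As a quick sanity check, the hardware requirements above can be expressed as a small validation helper. This is only an illustrative sketch; the function name and data structure are my own and are not part of any VMware tooling.

```python
# Hypothetical prerequisite check for the hardware requirements listed above.
# The numbers (2-64 hosts, >= 1 shared datastore, static IPs) come from this
# post; the input format is an assumption for illustration only.

def check_ha_prereqs(hosts, shared_datastores):
    """Return a list of problems; an empty list means the cluster looks OK.

    hosts: list of dicts like {"name": ..., "static_ip": bool}
    shared_datastores: number of datastores visible to every host
    """
    problems = []
    if not (2 <= len(hosts) <= 64):
        problems.append("cluster must have between 2 and 64 ESXi hosts")
    if shared_datastores < 1:
        problems.append("at least 1 shared datastore is required")
    elif shared_datastores < 2:
        problems.append("warning: 2 shared datastores are recommended")
    for h in hosts:
        if not h.get("static_ip"):
            problems.append(f"host {h['name']} needs a static IP address")
    return problems
```

For example, a two-host cluster with two shared datastores and static management IPs passes cleanly, while a single-host cluster does not.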

Software Requirements 

  • vCenter Server, to create the cluster object
  • 1 management network common to all ESXi hosts in the cluster
  • vSphere HA enabled on the cluster object
  • Minimum vSphere Essentials Plus kit license, or a single Standard license
Talking about the high-level requirements, vCenter Server is required to create the cluster object and to push the FDM agents to the ESXi hosts that are part of the cluster as member hosts.

The FDM agent is actually a service that runs locally on each ESXi host in a cluster on which the vSphere HA feature is enabled. FDM is the one taking care of all HA-related actions, such as
  • HA logging
  • VM restarts on surviving hosts
  • Election of the master node in a cluster
  • Management of all vSphere HA requirements
The FDM service talks directly to the "hostd" service on each ESXi host.

The basic purpose of "hostd" is to create, delete, start, restart, and shut down VMs; in fact, all the necessary actions an ESXi host takes against its VMs are handled by "hostd".

vSphere HA Anatomy

When you enable vSphere HA on a cluster, the members of the cluster are divided into two basic roles
  1. Master Node
  2. Slave / Subordinate Nodes
There is only one master node in a vSphere HA cluster, and the rest are slave / subordinate nodes. The total size of a vSphere cluster can go up to 64 nodes (1 master and 63 slave nodes) in vSphere 6.5 / 6.7 / 7.

The master node is responsible for:
  • restarting the VMs of failed hosts on the available surviving (slave / subordinate) hosts
  • evenly dividing the workload of restarted VMs across the surviving hosts
  • informing vCenter Server about the current status of the vSphere HA cluster
  • keeping track of heartbeats from the slave nodes, over either the network or the datastores

How does the master node know which VMs need to be restarted on the surviving hosts?

There is a file named "Protected List" located on all shared datastores; it can be accessed by, and is held (locked) by, the master node in the cluster.

This file contains information about the virtual machines running on their respective hosts.

Another file, named "Power-on", is located on the shared datastores and is accessible by all nodes in the cluster, including the master node. Its purpose is to maintain a timestamp, updated every 5 minutes by each host, marking the connectivity of all hosts so that network isolation can be detected.

The significance of the "Power-on" file is to let the master node distinguish a merely network-isolated host from a failed one. When a host disappears from the Ethernet heartbeat channel, the master node looks at the timestamp that host last wrote to this file over the alternative heartbeat channel (the datastores); a timestamp within the 5-minute update window means the host is still alive but isolated from the network.
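The decision logic described above can be sketched as a small function. This is a conceptual model only, not actual FDM internals: the 15-second network timeout and 5-minute "Power-on" update interval follow this post, and the function and parameter names are illustrative assumptions.

```python
# Sketch of how the master node might distinguish a network-isolated host
# from a failed one, using the datastore heartbeat described above.
# Timings follow this post; names are illustrative, not FDM internals.

NETWORK_TIMEOUT_S = 15        # master declares the network heartbeat lost
POWERON_UPDATE_S = 5 * 60     # hosts refresh the "Power-on" timestamp

def classify_host(now, last_net_heartbeat, last_poweron_stamp):
    """Classify a slave node from the master node's point of view.

    All three arguments are times in seconds on a common clock.
    """
    if now - last_net_heartbeat <= NETWORK_TIMEOUT_S:
        return "connected"            # normal Ethernet heartbeat
    if now - last_poweron_stamp <= POWERON_UPDATE_S:
        return "network-isolated"     # alive on the datastore channel only
    return "failed"                   # no heartbeat on either channel
```

A host that missed its network heartbeat but wrote to the "Power-on" file two minutes ago is classified as network-isolated rather than failed, which is exactly the distinction the file exists to make.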

The minimum number of alternative heartbeat sources (in the form of accessible datastores) is two. It is highly recommended to choose the heartbeat datastores manually instead of letting vCenter Server choose them automatically for you.

(Design Tip)

Design your vSphere HA network with redundant Ethernet gateways and keep your shared storage network (fabric) physically separate. In case of a network disaster, your vSphere design can then survive or mitigate the situation.

How do different nodes respond to HA failure scenarios?

Master Node:
The master node is responsible for restarting a failed host's VMs on the surviving hosts and for updating the "Protected List" file across all the datastores it can access.
  • If the master node itself suffers a failure (hardware issue, network isolation, etc.), the VMs running on it are evenly distributed among the surviving hosts right after the election process. What is the election process?
    • All the slave nodes in the cluster send heartbeats to each other and to the master node, and wait for the master node's heartbeat.
    • If the slave nodes do not receive the master node's heartbeat for 15 seconds, they consider it dead.
    • A slave node then initiates a special broadcast known as election traffic, which all the slave nodes sense, and they elect the next master node from among themselves.
  • This election process runs for the next 15 seconds, right after the slave nodes have waited 15 seconds for the master node's heartbeat.
  • Right after the election process, the newly elected master node takes over the "Protected List" file and initiates the (initial) placement of the affected VMs that were running on the failed master; this takeover takes roughly another 15 seconds.
Conclusion:
When the master node fails, it takes around 45 seconds before the virtual machines are restarted on the surviving hosts.
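The 45-second figure breaks down into three 15-second phases, which can be laid out as a small timeline sketch. The phase names and the fixed 15-second values are taken from the description above; this is only an illustration of the arithmetic, not HA code.

```python
# Timeline of a master-node failure, per the three 15-second phases above.
# Values and phase names follow this post; this is illustrative arithmetic.

HEARTBEAT_WAIT_S = 15   # slaves wait for the missing master heartbeat
ELECTION_S = 15         # election traffic selects the new master
TAKEOVER_S = 15         # new master grabs "Protected List", places VMs

def restart_delay_after_master_failure():
    """Return (total delay in seconds, list of (phase, duration))."""
    phases = [
        ("wait for master heartbeat", HEARTBEAT_WAIT_S),
        ("election among slave nodes", ELECTION_S),
        ("takeover and initial placement", TAKEOVER_S),
    ]
    return sum(t for _, t in phases), phases
```

Summing the three phases gives the 45 seconds stated in the conclusion.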

Slave nodes:
These are the nodes that take instructions from the master node to restart the affected (failed host's) virtual machines.
  • If a slave node suffers a failure (hardware issue and/or network isolation), the master node takes responsibility for restarting that host's VMs on the available hosts in the cluster.
  • Within 15 seconds, the master node makes its decision and evenly distributes the VMs among the surviving hosts in the cluster.
Conclusion:
When a slave node fails, it takes about 15 seconds before its VMs are restarted on the surviving hosts.
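The "evenly distribute" step above can be sketched as a simple round-robin placement. Real HA placement also weighs CPU and memory reservations and admission control; this sketch, with names of my own choosing, only illustrates the even-spread idea.

```python
# Sketch of evenly distributing a failed host's VMs across surviving hosts,
# as described above. A round-robin spread; real HA placement also considers
# resource reservations, so this is illustrative only.

def distribute_vms(failed_vms, surviving_hosts):
    """Assign each VM to a surviving host in round-robin order."""
    placement = {host: [] for host in surviving_hosts}
    for i, vm in enumerate(failed_vms):
        host = surviving_hosts[i % len(surviving_hosts)]
        placement[host].append(vm)
    return placement
```

With three failed VMs and two surviving hosts, one host receives two VMs and the other receives one, which is as even as the spread can get.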

About Network Isolation
In this state, the affected host (or hosts) cannot contact its gateway, and the master node cannot contact the isolated hosts over the network. That is why we choose the datastore heartbeat as an alternative to the Ethernet heartbeat.

This kind of isolation hurts far more if the Ethernet design and redundant shared storage accessibility have not been taken care of.

Better Ethernet-network designs
Choosing a better, physically separated topology for vSphere HA always helps a lot, as you can see in the picture below.



The picture above, which depicts the recommended approach for system traffic isolation, shows that physical isolation of system traffic can be achieved by provisioning separate virtual switches for each type of system traffic.

Although in this picture I have placed 2 types of system traffic on the same virtual switch, this illustrates that you can combine different system traffic types or keep them logically separate as well.

Another important aspect I want to draw your attention to is redundancy, from the most basic component (the vmnic) up to the physical switches. This approach can also lessen the impact of any network-level disaster.

Note: You can use the same DNS zone instead of separate DNS zones for each network, as shown in the picture above.

Logical (Isolation) network
In this scenario, you can work with as few vmnics (physical network adapters) as you have available, which is especially relevant with blade chassis. You separate the system traffic (Management, vMotion, vSAN, FT, replication, etc.) logically using VLANs.

Note: A better network design even saves you from disasters like shared storage unavailability, which results in problems such as APD (All Paths Down).


To be continued (Stay Tuned...!)

 
