
Tuesday, 25 June 2024

vSphere 8 Administration (Skillset Enhancement Series) Episode#4

 Introduction to vCenter Server Appliance

vCenter Server Appliance (vCSA) is the management tool that simplifies administration and lifecycle management of

  1. ESXi hosts
  2. Virtual Machines
  3. Other Management Services (like NSX, vSAN, VMware Aria, vSphere 8 with Tanzu etc.)

Internal Architecture

vCenter Server Appliance took its current form with vSphere 6.5 (late 2016), when VMware adopted Photon OS (VMware's own container-optimized Linux distribution) as the appliance's operating system. The appliance is made up of 3 major parts; let's discuss them:

  1. OS (Photon OS)
  2. PostgreSQL (vPostgres)
  3. vCenter Server Services

Note that you cannot deploy vCenter Server Appliance on bare metal (as you could with vCenter Server for Windows), but you can deploy it on an ESXi host as a VM.

In the beginning, vCSA shipped with 2 GUI interfaces

  1. vSphere Web Client 
  2. vSphere Client

But with vSphere 7 and later, only the vSphere Client remains; it is simpler and, unlike the old Web Client, does not depend on the Adobe Flash plugin.

So, now, let's talk about the vCenter Server Appliance application services and their capabilities. vCenter Server Appliance is now a single VM hosting multiple services, with some configuration changes to its architecture as well.

We will discuss these updates and changes in more detail, one by one. So, let's start with

SSO

vCenter Server Single Sign-On (SSO) is a crucial component of vSphere, providing authentication services to the various VMware products in a vSphere environment. Here are its primary capabilities and features:

  1. Single Authentication source for VMware products
  2. Integration with LDAP Servers (AD) or Open LDAP using SAML
  3. Role based access and control of vSphere environment.
  4. Up to 15 vCenter Server instances can be managed using a single SSO domain
SSO is the AAA component backed by the internal vCenter Server directory service "vmdir", which is why we always advise against reusing your Active Directory domain name when defining the SSO domain during vCenter Server installation.
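The naming advice above can be captured as a small pre-installation sanity check. This is a hypothetical helper for illustration only, not part of any VMware tooling; the domain names are placeholders:

```python
def validate_sso_domain(sso_domain: str, ad_domains: list[str]) -> bool:
    """Return True if the proposed SSO domain does not collide with any
    Active Directory domain name (case-insensitive), per the advice above."""
    return sso_domain.lower() not in (d.lower() for d in ad_domains)

# The conventional "vsphere.local" is safe next to an AD domain "corp.example.com"
assert validate_sso_domain("vsphere.local", ["corp.example.com"])
# Reusing the AD domain name as the SSO domain would be rejected
assert not validate_sso_domain("corp.example.com", ["corp.example.com"])
```

Running a check like this before the installer ever prompts for the SSO domain avoids a mistake that cannot be corrected without redeploying the appliance.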

vmdir is a service that behaves much like Microsoft Active Directory's multi-master replication when you use Enhanced Linked Mode (ELM) across vCSA instances.
ELM can only be configured during the installation of a new vCSA instance. When you install the second instance, the installer asks whether to create a new "SSO Domain" or join an "Existing" one; choose the existing one, as shown below.



Once replication is established between the two instances, ELM connects the vCSA instances to one another to share inventory objects based on RBAC.

Certificate Authority (VMCA)

In order to be more independent and use VMware's own certificate authority for VMware platform products, we no longer need to maintain third-party CAs at all. vCenter Server itself can act as a certificate authority to issue and renew certificates for VMware platform products such as ESXi hosts, the VMware Aria family, and vCSA itself.

Web Services

vCenter Server Appliance is equipped with a GUI (the vSphere Client) to access its interfaces. There are 2 different types of interfaces offered by the appliance:

  1. vSphere Client - for datacenter administration (default port: 443) - can be changed in vCenter Server's general settings.
  2. VMware Appliance Management Interface (VAMI) - for management of the appliance itself (port: 5480)
We reach the admin interface via the vCSA URL ("https://vcsa-fqdn:443/ui") and the VAMI interface via "https://vcsa-fqdn:5480". Both interfaces have their own significance; it depends on what you actually want to do.

For example, if you want to do day-2 administration of the ESXi hosts and VMs in the datacenter, you always go with the admin interface. But if you want to make appliance-level changes, such as changing the appliance password or IP address, you need the appliance's own interface, known as VAMI.
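The two entry points can be summarized in a small sketch that builds both URLs from the appliance FQDN (the FQDN used here is a placeholder; the defaults match the ports the appliance ships with):

```python
def vcsa_urls(fqdn: str, ui_port: int = 443, vami_port: int = 5480) -> dict:
    """Build the two vCSA entry points described above: the vSphere Client
    (datacenter administration) and VAMI (appliance management).
    The UI port can be changed in vCenter's general settings."""
    # Browsers imply 443 for https, so omit it from the UI URL in that case
    ui = f"https://{fqdn}/ui" if ui_port == 443 else f"https://{fqdn}:{ui_port}/ui"
    return {"vsphere_client": ui, "vami": f"https://{fqdn}:{vami_port}"}

urls = vcsa_urls("vcsa.lab.local")
# → {'vsphere_client': 'https://vcsa.lab.local/ui', 'vami': 'https://vcsa.lab.local:5480'}
```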

License Service

This service holds information about installed and assigned licenses for ESXi hosts and other solutions like NSX, vSAN, and vCenter Server itself. It provides common license inventory and management capabilities to all vCenter Server systems within the Single Sign-On domain.

Postgres DB

A bundled version of the VMware distribution of the PostgreSQL database for vSphere and vCloud Hybrid Services. It is used to hold SEAT data and the vCenter Server configuration. SEAT stands for Statistics, Events, Alarms and Tasks, whereas the vCenter Server configuration covers clusters, vDS, ESXi hosts, and other inventory and configuration information.

When you back up your vCSA, it asks whether to back up SEAT and configuration data, or configuration data only. This is the configuration information that you back up and restore when needed.

Its maximum capacity as of vSphere 8 is up to 62 TB, which is ample for retaining logs over a long period.

Lifecycle Manager (vLCM)

vSphere Lifecycle Manager, previously known as Update Manager, is a service that handles lifecycle management of ESXi hosts and VMware Tools to maintain compliance and software patching. It is not limited to ESXi itself: hardware drivers can also be updated or deployed through this service.

Administrators can update existing ESXi hosts by downloading updates directly from VMware, or indirectly through manual imports using FTP or file servers; they can also build bundled ESXi images and push those images to bare-metal servers.


vCenter Server Services

This is the collection of various distributed services that vCSA offers, like
  1. DRS
  2. vMotion
  3. Cluster Services
  4. vSphere HA
  5. vCSA HA
Other services

There are some other services, most of which are disabled by default and need to be enabled when required. These include:

Dump collector Service

The vCenter Server support tool. You can configure ESXi to save the VMkernel memory to a network server, rather than to a disk, when the system encounters a critical failure. The vSphere ESXi Dump Collector collects such memory dumps over the network.

Auto-Deploy Service

The vCenter Server support tool that can provision hundreds of physical hosts with ESXi software. You can specify the image to deploy and the hosts to provision with the image. Optionally, you can specify host profiles to apply to the hosts, and a vCenter Server location (folder or cluster) for each host.

Syslog Collector Service

A central location where logs collected from ESXi hosts, vCSA, and other VMware products are retained for longer periods. Depending on company compliance policies, you can dedicate a vCSA as a syslog collector server to act as a centralized log repository; examples here could be banks or telcos.

From version 8 onward this service is enabled by default, but you still need to configure it. It can be integrated with vRealize Log Insight (now VMware Aria Operations for Logs) for troubleshooting, or with vRealize Operations (now VMware Aria Operations) for monitoring and analytics.

You can configure the syslog collector using the VAMI interface; you then need to configure the other applications to send their logs to it.
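ESXi expects its remote log target as a URI of the form protocol://host:port, where the protocol is udp, tcp, or ssl. A minimal sketch of composing and validating that value (the collector hostname is a placeholder):

```python
def syslog_loghost(host: str, protocol: str = "udp", port: int = 514) -> str:
    """Compose the loghost URI an ESXi host expects for remote syslog,
    e.g. udp://collector:514 or ssl://collector:1514."""
    if protocol not in ("udp", "tcp", "ssl"):
        raise ValueError(f"unsupported syslog protocol: {protocol}")
    return f"{protocol}://{host}:{port}"

assert syslog_loghost("vcsa.lab.local") == "udp://vcsa.lab.local:514"
assert syslog_loghost("vcsa.lab.local", "ssl", 1514) == "ssl://vcsa.lab.local:1514"
```

The resulting string is what you would paste into the host's remote syslog setting (or pass to its command-line syslog configuration).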

So, this was a brief introduction to vCenter Server Appliance, but this is not all. We shall continue and dig deeper to understand the role of vCSA in combination with the ESXi host as a hypervisor. Stay tuned...

For detailed explanation with demonstration please visit my Channel as well 😊

or you can directly watch the relevant video here







Thursday, 20 June 2024

vSphere 8 Administration (Skillset Enhancement Series) Episode#2

ESXi host interfaces and their use cases

In this skill-up series, we now cover some advanced DCUI options of an ESXi host, such as the "Troubleshooting Mode Options" shown in the picture below.

You can enable or disable the local ESXi Shell or SSH, with a shell timeout configured in minutes. The maximum is 1440 minutes, and 0 means the timeout is disabled.

Moreover, you can also set the DCUI idle timeout in minutes, with the same range as mentioned above.
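The timeout rules above are easy to get wrong from the DCUI, so here is a small sketch of how the value is interpreted (an illustration of the documented 0-1440 range, not VMware code):

```python
def validate_shell_timeout(minutes: int) -> str:
    """Interpret an ESXi shell/DCUI timeout value as described above:
    0 disables the timeout, and 1..1440 minutes is the allowed range."""
    if not 0 <= minutes <= 1440:
        raise ValueError("timeout must be between 0 and 1440 minutes")
    return "disabled" if minutes == 0 else f"sessions time out after {minutes} min"

assert validate_shell_timeout(0) == "disabled"
assert validate_shell_timeout(30) == "sessions time out after 30 min"
```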

Other than the above options, you can restart the Management Agents, which run locally on every ESXi host. These agents/services are "hostd" and "vpxa". But be very careful: if you are connected via SSH, a remote shell, or vCenter Server, the ESXi host may be disconnected.

Other than the DCUI, there are more interfaces you can use to access an ESXi host, either via the command line or through a graphical user interface:
  1. ESXi Shell (Local command line shell)
  2. SSH (Remote command line shell)
  3. PowerCLI (VMware's PowerShell modules, run from an administrator's workstation)
  4. vSphere Host Client (GUI offered by ESXi host individually)
  5. vSphere Client (GUI offered by vCenter Server)
The picture below illustrates some of these interfaces and their connectivity.


Some of the points above are demonstrated in a video that you should watch to understand this topic more easily.

Just click the thumbnail or the link below.


Stay tuned for more in-depth topics and a steady way to equip yourself with vSphere 8 day-2 administration.







Tuesday, 18 June 2024

vSphere 8.0 Administration (Skillset Enhancement Series) Episode#1

 Understanding and Installing ESXi Host (vSphere Version 8.0)

What is ESXi?

VMware ESXi is a type-1 hypervisor that enables you to run multiple virtual machines (VMs) on a single physical server. As part of VMware's vSphere suite, ESXi is known for its efficiency and minimal footprint, making it a popular choice for enterprise virtualization.

Key Features of ESXi 8

  • Improved Performance and Scalability: Enhanced support for modern hardware with increased resource limits.
  • Security Enhancements: Improved security features, including secure boot and TPM 2.0 support.
  • Simplified Management: Streamlined management interfaces and improved automation capabilities.
  • Enhanced Networking: Advanced networking features to support modern data center needs.

Prerequisites for Installing ESXi 8

  • Hardware Compatibility: Ensure your hardware is on the VMware Compatibility Guide (VCG).
  • BIOS/UEFI Settings: Enable virtualization technology (VT-x/AMD-V) and Data Execution Prevention (DEP).
  • Storage Requirements: At least 8 GB of storage for the ESXi installation.
  • Network: A compatible network interface card (NIC) is required.
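The prerequisite checklist above can be sketched as a small pre-flight function. The dictionary keys here are illustrative labels for your own inventory data, not a VMware API:

```python
def check_esxi8_prereqs(host: dict) -> list:
    """Return the list of unmet prerequisites from the checklist above;
    an empty list means the host looks ready for an ESXi 8 install."""
    problems = []
    if not host.get("on_vcg"):
        problems.append("hardware not on the VMware Compatibility Guide")
    if not host.get("vt_enabled"):
        problems.append("VT-x/AMD-V not enabled in BIOS/UEFI")
    if host.get("boot_storage_gb", 0) < 8:
        problems.append("less than 8 GB of installation storage")
    if host.get("nics", 0) < 1:
        problems.append("no compatible NIC present")
    return problems

ok_host = {"on_vcg": True, "vt_enabled": True, "boot_storage_gb": 32, "nics": 2}
assert check_esxi8_prereqs(ok_host) == []
```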

Steps to Install ESXi 8

1. Download the ESXi 8 ISO

  •  Go to the [VMware Downloads page](https://my.vmware.com/web/vmware/downloads) and log in with your VMware account.
  • Navigate to the ESXi 8 download section and download the ISO image.

 2. Create a Bootable USB Drive

  • Use software like Rufus (Windows) or UNetbootin (Linux/Mac) to create a bootable USB drive from the ISO image.

3. Boot from the USB Drive

  • Insert the bootable USB drive into your server and power it on.
  • Access the BIOS/UEFI settings (usually by pressing a key like F2, F10, DEL during startup) and set the USB drive as the primary boot device.
  • Save changes and reboot.

4. Begin Installation

  • Once the server boots from the USB drive, the ESXi installer will start.
  • Follow the on-screen instructions:
  • Press **Enter** to begin the installation.
  • Read and accept the End User License Agreement (EULA).
  • Select the disk to install ESXi (ensure this is the correct disk as it will be formatted).

 5. Configure ESXi

  • After selecting the disk, the installer will scan for available network adapters.
  • Set a root password when prompted.

 6. Complete the Installation

  • Review the installation settings and press **F11** to start the installation.
  • Once the installation is complete, remove the USB drive and reboot the server.

7. Initial Setup and Configuration

  • After rebooting, you will see the Direct Console User Interface (DCUI).
  • Press **F2** to customize the system and log in with the root password.
  • Configure network settings (IP address, DNS, hostname) as required.

Post-Installation Steps

1. Accessing the ESXi Host via Web Client

  • Open a web browser and navigate to `https://<ESXi_host_IP>/ui`.
  • Log in with the root credentials.

2. Configuring Datastores

  • In the web client, go to **Storage** and configure datastores as needed.

3. Setting Up Virtual Machines

  • Navigate to the **Virtual Machines** section and follow the wizard to create new VMs.

Common Issues and Troubleshooting

  • Network Connectivity Issues: Ensure that the NIC is compatible and correctly configured.
  • Storage Problems: Verify that the storage device is recognized and supported by ESXi.
  • Performance Issues: Check resource allocation and ensure that the server hardware meets the requirements for ESXi 8.

Additional Resources

  • Do check the detailed lecture on this topic, with a hands-on demo, on YouTube.

 


By following these steps, you should be able to successfully install and configure ESXi 8 on your server, allowing you to take full advantage of its virtualization capabilities.

Saturday, 24 July 2021

vSphere HA | Requirements | Admission Control | General Introduction

 Hello my dear readers, Greetings!

It has been quite a long time; I was engaged in my training deliveries, which is why I couldn't spare time to write a blog post.

Let's start our topic Discussions!

vSphere HA is normally recognized by the restart of VMs on the surviving hosts in a vSphere cluster.

We enable vSphere HA on a vCenter Server cluster object, and it is helpful in different situations like

  1. ESXi host Hardware issues 
  2. Network disconnectivity among ESXi hosts in a cluster
  3. Shared Storage connectivity or unavailability issues with ESXi hosts
  4. Planned maintenance of ESXi hosts
How does it work?

vSphere HA, despite its name (HA = High Availability), does not keep VMs continuously running; it restarts VMs on surviving hosts that can accommodate the VMs' requirements, as shown in the picture below.



For example, if an ESXi host has a hardware problem that stops it from working, the VMs on it become unavailable. The interrupted VMs are then taken care of by the other available ESXi hosts in the same cluster, which power them on by accessing the same shared datastore.

These failures could be host hardware faults, network interruptions, storage inaccessibility, etc.

So we have to fulfill some important requirements for vSphere HA. Let's discuss them.

The basic high-level requirements are as below
  1. vCenter Server (vpxd)
  2. Fault Domain Management (FDM-local to every host)
  3. Hostd (local to every host)
Let's break these requirements into understandable pieces

Hardware Requirements

  • Minimum 1 shared datastore; recommended 2 shared datastores
  • Minimum 2 and maximum 64 ESXi hosts in a cluster
  • Minimum 1 Ethernet network with a static IP address per host; recommended 2 Ethernet networks with static IP addresses (and multiple gateways)

Software Requirements 

  • vCenter Server - To create cluster object
  • 1 Management network must be common among all ESXi hosts in the Cluster
  • Enable vSphere HA on the cluster object
  • Minimum vSphere Essentials Plus kit license, or a single vCenter Server Standard license
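The minimums in the two lists above can be captured in a quick design-review sketch (an illustration of the stated limits, not a VMware tool):

```python
def check_ha_cluster(hosts: int, shared_datastores: int, mgmt_networks: int) -> list:
    """Check a cluster design against the vSphere HA minimums listed above:
    2-64 ESXi hosts, at least 1 shared datastore (2 recommended), and at
    least 1 common management network (2 recommended)."""
    issues = []
    if not 2 <= hosts <= 64:
        issues.append("cluster needs between 2 and 64 ESXi hosts")
    if shared_datastores < 1:
        issues.append("at least 1 shared datastore is required (2 recommended)")
    if mgmt_networks < 1:
        issues.append("a common management network is required (2 recommended)")
    return issues

assert check_ha_cluster(hosts=3, shared_datastores=2, mgmt_networks=2) == []
assert check_ha_cluster(hosts=1, shared_datastores=0, mgmt_networks=1) != []
```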
Talking about the high-level requirements, vCenter Server is required to create the cluster object and to push FDM agents to the ESXi hosts that join the cluster as member hosts.

The FDM agent is a service that runs locally inside each ESXi host in a cluster with vSphere HA enabled. FDM takes care of all HA-related actions, such as:
  • HA logging
  • VM restarts on surviving hosts
  • Selection of the master node in a cluster
  • Management of all vSphere HA requirements
FDM service talks directly to "hostd" service of each ESXi host. 

The basic purpose of "hostd" is to create/delete/start/restart/shut down VMs; in fact, all the host-side actions against VMs are taken care of by "hostd".

vSphere HA Anatomy

When you enable vSphere HA on a cluster, the members of the cluster are divided into two roles:
  1. Master Node
  2. Slave / Subordinate Nodes
There is only one master node in a vSphere HA cluster; the rest are slave/subordinate nodes. The total size of a vSphere cluster can go up to 64 nodes (1 master and 63 slave nodes) in vSphere 6.5/6.7/7.

The master node is responsible for:
  • Restarting VMs on the available surviving (slave/subordinate) hosts
  • Evenly dividing the restart workload amongst the surviving hosts
  • Informing vCenter Server about the current status of the vSphere HA cluster
  • Keeping track of heartbeats from the slave nodes, either over the network or via datastores

How does the master node know which VMs need to be restarted on surviving hosts?

There is a file named "protectedlist", located on the shared datastores; it is accessed and held by the master node in the cluster.

This file contains information about the virtual machines running on their respective hosts.

Another file, named "poweron", is located on the shared datastores and is accessible by all nodes, including the master. Each host updates a timestamp in this file every 5 minutes, marking its connectivity so that network isolation can be detected.

The significance of the "poweron" file is to let the master node distinguish a network-isolated host from a dead one: if a host has disappeared from the Ethernet network but its timestamp is still being refreshed, the master knows the host is alive and merely isolated, because the datastores act as an alternative heartbeat channel.

The minimum number of alternative heartbeat sources (in the form of accessible datastores) is two. It is highly recommended to choose the heartbeat datastores manually instead of letting vCSA choose them automatically for you.
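The timestamp logic described above can be modeled in a few lines. This is a conceptual sketch of the freshness check, not FDM's actual implementation:

```python
def host_alive_via_datastore(last_update_epoch: float, now_epoch: float,
                             interval_s: int = 300) -> bool:
    """Model the "poweron"-file check described above: a host that keeps
    refreshing its timestamp (every 5 minutes = 300 s) on a heartbeat
    datastore is network-isolated but alive, not dead."""
    return (now_epoch - last_update_epoch) <= interval_s

# Timestamp refreshed 2 minutes ago -> host is alive (merely isolated)
assert host_alive_via_datastore(last_update_epoch=1000.0, now_epoch=1120.0)
# No refresh for 10 minutes -> treat the host as failed
assert not host_alive_via_datastore(last_update_epoch=1000.0, now_epoch=1600.0)
```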

(Design Tip)

Design your vSphere HA network with redundant Ethernet gateways, and keep your shared storage network (fabric) physically separate. In case of a network disaster, your vSphere design can survive or mitigate the situation.

How do different nodes respond to HA failure scenarios?

Master Node:
The master node is responsible for restarting a failed host's VMs on the surviving hosts and updating the "protectedlist" file across the datastores it can access.
  • If the master node suffers a failure (a hardware issue, network isolation, etc.), the VMs running on it are evenly distributed amongst the surviving hosts right after an election process. What is the election process?
    • All the slave nodes in the cluster send heartbeats to each other and to the master node, and wait for the master node's heartbeat.
    • If the slave nodes do not receive the master node's heartbeat for 15 seconds, they consider it dead.
    • The slave nodes then initiate a special broadcast, known as election traffic, which all slave nodes sense, and they elect the next master node amongst themselves.
  • This election process runs for the next 15 seconds, right after the slave nodes have waited 15 seconds for the master node's heartbeat.
  • Right after the election process (which takes another 15 seconds), the newly elected master node takes over the "protectedlist" file and initiates (initial placement of) the affected VMs that were running on the failed master node.
Conclusion:
Master node takes around 45 seconds to restart the virtual machines on surviving hosts.

Slave nodes:
These are the nodes that take instructions from the master node to restart the affected (failed host's) virtual machines.
  • If a slave node suffers a failure (hardware and/or network isolation), the master node takes responsibility for restarting its VMs on the available hosts in the cluster.
  • Within 15 seconds, the master node takes the decision and evenly distributes the VMs amongst the surviving hosts in the cluster.
Conclusion:
A failed slave node's VMs take around 15 seconds to be restarted amongst the surviving hosts.
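The two timelines above can be summed up in a short sketch, using the 15-second intervals described in this post as rough approximations:

```python
def ha_restart_delay(failed_role: str, heartbeat_wait_s: int = 15,
                     election_s: int = 15, takeover_s: int = 15) -> int:
    """Approximate the restart delays described above: a slave failure is
    handled by the master in ~15 s, while a master failure adds the
    heartbeat wait and the election before the new master can act (~45 s)."""
    if failed_role == "slave":
        return heartbeat_wait_s
    if failed_role == "master":
        return heartbeat_wait_s + election_s + takeover_s
    raise ValueError("role must be 'master' or 'slave'")

assert ha_restart_delay("slave") == 15
assert ha_restart_delay("master") == 45
```

This is why a master-node failure is the worst case for restart latency: the cluster must first notice the loss and elect a replacement before any VM placement begins.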

About Network Isolation
In this state, the affected host (or hosts) cannot contact their gateways, and the master node cannot contact the isolated hosts. That's the reason we choose an alternative to Ethernet in the form of datastore heartbeats.

This kind of isolation hurts more if we have not taken care of the Ethernet design along with redundant shared storage accessibility.

Better Ethernet-network designs
Choosing a better, physically separated topology for vSphere HA always helps a lot, as you can see in the picture below.



The picture above, which depicts the recommended approach for system traffic isolation, clearly shows that physical isolation of system traffic can be achieved by creating separate virtual switches for separate kinds of system traffic.

Though in this picture I have placed two separate traffic types on the same virtual switch, which shows that you can also combine different system traffic types or keep them logically separate.

Another important aspect I want to draw your attention to is redundancy, from the most basic component (the vmnic) up to the physical switches. This approach can also lessen the impact of any network-level disaster.

Note: You can use the same DNS instead of separate DNS zones for each network, as shown in the picture above.

Logical (Isolation) network
In this scenario, you can use as few physical NICs (vmnics) as are available, especially in the case of a blade chassis, and separate system traffic (Management, vMotion, vSAN, FT, Replication, etc.) logically using VLANs.

Note: A better network design even saves you from disasters like shared storage unavailability, which results in problems like APD (All Paths Down).


To be continued (Stay Tuned...!)

 
