VirtuBytes

Bytes of virtualization with bits of other technology.

Identify NIC Driver and Firmware Versions with ESXCLI

Quick byte today – Last week, we discussed an HPE advisory affecting certain network adapters on VMware hosts. The advisory pertained to specific firmware and driver versions. If you need to identify or verify that kind of network card information, you can pull the data with ESXCLI commands. In this post, we will get a list of installed NICs as well as individual driver and firmware versions.

First, let’s get a list of the installed NICs. To do so, SSH to the pertinent host and run the esxcli network nic list command. Here we can see a record of devices and general information.

esxcli network nic list

Now that we have a list of the installed NICs, we can pull detailed configuration information. Run the esxcli network nic get command, specifying the name of the NIC you want to inspect.

esxcli network nic get -n vmnic0
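
The output includes a Driver Info section with the driver name, driver version, and firmware version. If you want to sweep every NIC on a host in one pass, a small loop like the one below should do it (a rough sketch for the ESXi shell – it assumes the NIC name is the first column of esxcli network nic list and that the output has two header lines, so verify against your host):

# List driver and firmware details for every vmnic on the host
for nic in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
  echo "=== $nic ==="
  esxcli network nic get -n $nic | grep -iE 'driver|version'
done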

Continue reading

HPE Network Adapter Replacement on VMware Hosts

Last week, HPE released a customer advisory pertaining to an issue with network adapters that have been upgraded from certain custom ISOs or bundles. The advisory states that HPE ProLiant servers running VMware ESXi 5.5, 6.0, or 6.5 experience network issues after installing or upgrading QLogic drivers to version 2.713.30 from the July Service Pack for ProLiant or the July HPE Custom ESXi Image. The noted issues include network disconnections or the network adapter being absent from the OS or RBSU. HPE elaborates that installation of the affected driver renders the firmware image on the network adapter inoperable. Once this happens, the network adapter must be replaced; HPE Support or a reseller can assist with the replacement. If you do replace the network adapter(s), the server needs to be booted with SPP 2016.10.0 (or later) and the adapter firmware updated to ensure the failure does not occur again.
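
If you want to check whether a host has already picked up the affected driver, listing the installed VIBs and filtering for the QLogic packages is a quick sanity check (the exact VIB names vary by adapter model, so treat the filter pattern as an assumption and adjust as needed):

# Show any installed VIBs whose name or vendor mentions QLogic
esxcli software vib list | grep -i ql

Cross-referencing the driver version reported by esxcli network nic get -n vmnicX against 2.713.30 works as well.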

Custom ESXi Image and Service Pack for ProLiant Updates

If you are unable to locate an HPE custom ESXi image for ESXi 6.5 or 6.5 U1, that is because they have already been removed from VMware's download resources. VMware notes that the images will be replaced by October 6th, 2017. HPE has also removed the original July Service Pack for ProLiant and replaced it with SPP 2017.07.2. If you leverage VMware Update Manager in your environment, the affected directories have also been pulled from HPE's vibsdepot.

Continue reading

EMC Replace Disk – Copy to Hotspare

Occasionally, VNX/CX administrators need to copy data between disks manually. Whether you need to replace an online disk, invoke a proactive copy to hotspare, or simply copy data from an online disk to an unbound disk, Navisphere CLI commands can get the job done.

NOTE – I think this goes without saying, but if you have any doubts about the process, contact EMC Support. Errors when copying disk data can have disastrous effects.

In this tutorial, we will be copying data from an online disk to an unbound disk. We will be utilizing Navisphere CLI from the command line to invoke the process. If you have not already, install Navisphere CLI. It is also possible to run Navisphere commands from the array’s control station via SSH.

Let’s start with a breakdown of the syntax.
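
Before getting into the copy command itself, it helps to know the general shape of a NaviSecCLI call. As a rough template (the host address, credentials, and scope below are placeholders – substitute your storage processor's IP and an account with appropriate privileges):

naviseccli -h <SP_IP_address> -user <username> -password <password> -scope 0 <command> [options]

The -h flag targets one of the array's storage processors, and -scope 0 indicates a global account.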

Continue reading

How to Install EMC Navisphere CLI

Navisphere CLI is a command line interface tool provided by EMC for accessing and managing VNX or CLARiiON storage arrays. Administrators can issue commands for general array management, including storage provisioning, status, and configuration control. Navisphere CLI can also be leveraged to automate management tasks through scripts or batch files. It is supported on EMC VNX, CX, CX3, CX4, and AX4-5 storage systems. Starting with VNX OE 5.31, Classic CLI (NaviCLI) is no longer supported; Secure CLI (NaviSecCLI) commands are required instead.

Per EMC's recommendation, the Navisphere CLI version should match the Block software version of the storage system. For instance, if the Block side is running 05.33.x, use Navisphere CLI 7.33.x or later.
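
Once the install is complete, a quick way to confirm the CLI can reach the array – and to read back the Block revision you need to match – is to query a storage processor with getagent. The IP address and credentials below are placeholders; the Revision field in the output should reflect the array's Block software version:

naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 getagent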

Continue reading

Configure vCenter High Availability

A great feature introduced in vSphere 6.5 is vCenter High Availability (VCHA). If you are unfamiliar with vCenter High Availability, it is an active-passive architecture that safeguards your vCenter Server Appliance from host, hardware, or application failures.

How does it work?

The VCHA deployment is composed of three nodes: active, passive, and witness. The active node is just that: the active vCenter Server Appliance instance. This node has two interfaces: a standard management interface (eth0) and an HA network interface (eth1). VCHA replicates data from the active node to the passive node over the HA network. The active vCenter database is replicated synchronously, and the file system is replicated asynchronously. The passive node also has two interfaces; however, its eth0, which carries the identical FQDN, IP, and MAC address, is dormant. This interface becomes active in the event of a failover. The final piece of the VCHA architecture is the witness node. The witness provides quorum services, informing the cluster which node should be active at any given time. All three nodes must be online and functioning for a healthy cluster.

In a failover event, eth0 of the passive node is brought online. A gratuitous ARP is performed to notify the network that the passive node has become active and has taken over the FQDN, IP, and MAC address. After failover, if the failed node can be brought back online, replication will resume. If it cannot, the node can be removed from inventory and redeployed.
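
To make the gratuitous ARP step concrete: it is just an unsolicited ARP announcement that refreshes the ARP caches of neighboring devices with the newly active node's MAC address. VCHA handles this automatically, but purely as an illustration, the equivalent on a generic Linux host would look something like this (interface name and IP are hypothetical):

# Broadcast three unsolicited ARP announcements for 192.168.1.50 on eth0
arping -U -I eth0 -c 3 192.168.1.50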

Continue reading


© 2017 VirtuBytes
