VirtuBytes

Bytes of virtualization with bits of other technology.


Identify NIC Driver and Firmware Versions with ESXCLI

Quick byte today – last week, we discussed an HPE advisory affecting certain network adapters on VMware hosts. The advisory pertained to specific firmware and driver versions. If you need to identify or verify that kind of network card information, you can pull the data with ESXCLI commands. In this post, we will list the installed NICs and then pull the driver and firmware versions for individual adapters.

First, let’s get a list of the installed NICs. To do so, SSH to the host in question and run the esxcli network nic list command. The output shows each device along with general information such as driver, link status, and MAC address.

esxcli network nic list

ESXCLI NIC List

Now that we have a list of the installed NICs, we can pull detailed configuration information. Run the esxcli network nic get command, specifying the name of the NIC you want to inspect.

esxcli network nic get -n vmnic0
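The get output includes a Driver Info section that reports the driver name, driver version, and firmware version for the adapter. If you only want those fields, you can filter the output; a quick sketch (the exact field labels can vary by driver):

# Show only the driver/firmware-related lines for vmnic0
esxcli network nic get -n vmnic0 | grep -iE 'driver|firmware|version'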

Continue reading

HPE Network Adapter Replacement on VMware Hosts

Last week, HPE released a customer advisory pertaining to an issue with network adapters that have been upgraded from certain custom ISOs or bundles. The advisory states that HPE ProLiant servers running VMware ESXi 5.5, 6.0, or 6.5 experience network issues after installing or upgrading QLogic drivers to version 2.713.30 from the July Service Pack for ProLiant or the July HPE Custom ESXi Image. The issues manifest as network disconnections or the adapter being absent from the OS or RBSU. HPE elaborates that installing the affected driver renders the firmware image on the network adapter inoperable. Once this happens, the network adapter must be replaced; HPE Support or a reseller can assist with the replacement. If you do replace the network adapter(s), the server needs to be booted with SPP 2016.10.0 (or later) and the adapter firmware updated to ensure the failure does not occur again.
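Before the firmware is damaged, you can check whether a host is running the affected driver version. A minimal sketch using ESXCLI (the VIB name for your QLogic driver may differ, so this greps for the version string itself):

# Look for the affected 2.713.30 driver version among installed VIBs
esxcli software vib list | grep 2.713.30

# Cross-check the driver and firmware versions reported by a specific adapter
esxcli network nic get -n vmnic0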

Custom ESXi Image and Service Pack for ProLiant Updates

If you are unable to locate an HPE custom ESXi image for ESXi 6.5 or 6.5 U1, that is because the images have been removed from VMware's download site. VMware notes that the image will be replaced by October 6th, 2017. HPE has also pulled the original July Service Pack for ProLiant and replaced it with SPP 2017.07.2. If you leverage VMware Update Manager in your environment, note that the affected directories have been removed from HPE's vibsdepot as well.
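If you are unsure which image a host was built from, ESXCLI can report the host's image profile, which should reflect an HPE custom image name where one was used:

# Display the image profile the host was installed or last updated with
esxcli software profile get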

Continue reading

Configure vCenter High Availability

A great feature that was introduced in vSphere 6.5 was the ability to implement vCenter High Availability (VCHA). If you are unfamiliar with vCenter High Availability, it is an active-passive architecture that safeguards your vCenter Server Appliance from host, hardware, or application failures.

How does it work?

The VCHA deployment is composed of three nodes: active, passive, and witness. The active node is just that, the active vCenter Server Appliance instance. This node has two interfaces: a standard management interface (eth0) and an HA network interface (eth1). VCHA replicates data from the active node to the passive node over the HA network; the vCenter database is replicated synchronously and the file system asynchronously. The passive node also has two interfaces; however, its eth0, which carries an identical FQDN, IP, and MAC address, is dormant. This interface becomes active in the event of a failover. The final piece of the VCHA architecture is the witness node. The witness provides quorum services, informing the cluster which node should be active at any given time. All three nodes must be online and functioning for the cluster to be healthy.

In a failover event, eth0 of the passive node is brought online. A gratuitous ARP is performed to notify the network that the passive node has become active and taken over the FQDN, IP, and MAC address. After failover, if the failed node can be brought back online, replication resumes. If the original node cannot be recovered, it can be removed from inventory and redeployed.
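VCHA performs the gratuitous ARP on its own; nothing needs to be done manually. For illustration only, here is what sending such an announcement looks like with the Linux arping utility from iputils (the interface name and IP are placeholders):

# Broadcast three gratuitous ARP announcements for 192.0.2.10 via eth0
arping -A -c 3 -I eth0 192.0.2.10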

Continue reading

Connect vCenter to vRealize Orchestrator 7.x

Last month, we discussed how to install VMware vRealize Orchestrator (vRO) 7.x. Once vRO is installed, you can begin utilizing Orchestrator plug-ins. Orchestrator plug-ins allow you to access and interact with external applications through workflows. Natively, the vRealize Orchestrator appliance deploys with a set of standard plug-ins. It is also possible to develop custom plug-ins with Orchestrator’s open standards.

The vCenter connection plug-in is a good place to begin your Orchestrator journey. In order to access objects and run workflows against your vSphere environment, the plug-in needs to be configured with a connection to your vCenter Server instance.

Configure Connection to vCenter Server

Log into the vRealize Orchestrator Client. From the Workflows tab, locate the Add a vCenter Instance workflow under Library > vCenter > Configuration. Right-click the workflow and select Start workflow.
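Workflows can also be started programmatically. As a rough sketch, assuming vRO's REST API is listening on the default port 8281 with basic authentication (the hostname, credentials, and workflow ID below are placeholders):

# Look up the workflow ID by name
curl -k -u vcoadmin 'https://vro.example.com:8281/vco/api/workflows?conditions=name=Add%20a%20vCenter%20Instance'

# Start an execution of the workflow (empty parameter list for brevity)
curl -k -u vcoadmin -X POST -H 'Content-Type: application/json' \
  -d '{"parameters":[]}' \
  'https://vro.example.com:8281/vco/api/workflows/<workflow-id>/executions'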

Continue reading

Install Windows Server 2016 on VMware

Historically, adoption of major server releases is slow. With Windows Server 2016 approaching one year since release, more organizations are gearing up to deploy the operating system in their environments. That being the case, it seemed appropriate to walk through an install of Microsoft Windows Server 2016 as a guest OS in a vSphere 6.5 environment.

NOTE – Server 2016 is fully supported on ESXi 5.5 and later, per the VMware Compatibility Guide.
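If you would rather script the empty VM shell before stepping through the installer, the open-source govc CLI can create one. A hedged sketch, assuming GOVC_URL and credentials are set in your environment; windows9Server64Guest is the vSphere 6.5 guest ID for Server 2016, and the names and sizes are placeholders:

# Create a powered-off VM shell typed for Windows Server 2016
govc vm.create -on=false -c 2 -m 4096 -g windows9Server64Guest -net "VM Network" -disk 40GB win2016-01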

Continue reading


© 2017 VirtuBytes
