All VMware hosts run a service for logging system information. This service, vmsyslogd, logs messages from the VMkernel and other system components for auditing and diagnostic purposes. By default, the logs are directed to a local scratch location or ramdisk. The scratch space is created automatically during ESXi installation in the form of a 4 GB FAT16 local scratch partition. If no suitable local storage is available, the host stores the logs on a ramdisk, which does not persist across reboots. That being the case, many admins choose to send these logs to a persistent datastore or remote logging server for retention.
Configuring the log location can be done in a variety of ways. In this post, we will focus on vSphere Web Client and ESXi Shell.
Manually Configure ESXi Syslog – vSphere Web Client
For convenience's sake, the Web Client is a popular way to set the log location. In the Web Client, the Syslog.global.logdir key controls the syslog location. By default, the value is /scratch/log, which references the local scratch partition created during installation (or the ramdisk). To change the syslog location, navigate to Advanced System Settings under the host Configuration tab. Edit the Syslog.global.logdir value to specify the new log path. Format the value as [datastore]/logdir. Example: [DevDS]/LOGS/Dev
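If you prefer the ESXi Shell, the same setting can be changed with ESXCLI. A minimal sketch, assuming a datastore named DevDS mounted at /vmfs/volumes/DevDS (adjust the path for your environment):

```shell
# Check the current syslog configuration, including the active log directory.
esxcli system syslog config get

# Point syslog at a persistent datastore path (hypothetical datastore "DevDS").
esxcli system syslog config set --logdir=/vmfs/volumes/DevDS/LOGS/Dev

# Reload vmsyslogd so the new location takes effect.
esxcli system syslog reload
```

After the reload, new log files should begin appearing under the specified directory.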
Quick byte today – Last week, we discussed an HPE advisory affecting certain network adapters on VMware hosts. The advisory pertained to specific firmware and driver versions. If you need to identify or verify such network card information, it is possible to pull that data via ESXCLI commands. In this post, we will get a list of installed NICs as well as individual driver and firmware versions.
First, let’s get a list of the installed NICs. To do so, SSH to the pertinent host and run the esxcli network nic list command. Here we can see a record of devices and general information.
esxcli network nic list
Now that we have a list of the installed NICs, we can pull detailed configuration information. Run the esxcli network nic get command, specifying the name of the NIC in question.
esxcli network nic get -n vmnic0
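To check every NIC at once, the two commands can be combined in a small loop. This is a hedged sketch; the awk field position assumes the default esxcli network nic list output layout, so verify it against your host before relying on it:

```shell
# Print driver and firmware details for each installed vmnic.
# NR>2 skips the header and separator rows of "esxcli network nic list".
for nic in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
  echo "== ${nic} =="
  esxcli network nic get -n "${nic}" | grep -E 'Driver:|Firmware Version|Version:'
done
```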
Last week, HPE released a customer advisory pertaining to an issue with network adapters that have been upgraded from certain custom ISOs or bundles. The advisory states that HPE ProLiant servers running VMware ESXi 5.5, 6.0, or 6.5 are experiencing network issues after installing or upgrading QLogic drivers to version 2.713.30 from the July Service Pack for ProLiant or the July HPE Custom ESXi Image. The noted issues include network disconnections or the network adapter being absent in the OS or RBSU. HPE elaborates that installing the affected driver renders the firmware image on the network adapter inoperable. Once this happens, the network adapter must be replaced; HPE Support or a reseller can aid with the replacement. If you do replace the network adapter(s), boot the server with SPP 2016.10.0 (or later) and update the adapter firmware to ensure the failure does not recur.
Custom ESXi Image and Service Pack for Proliant Updates
If you are unable to locate an HPE custom ESXi image for ESXi 6.5 or 6.5 U1, that is because they have already been removed from VMware's download resources. VMware notes that the image will be replaced by October 6th, 2017. HPE has also removed the original July Service Pack for ProLiant and replaced it with SPP 2017.07.2. If you leverage VMware Update Manager in your environment, note that the affected directories have also been pulled from HPE's vibsdepot.
Occasionally, VNX/CX administrators need to copy data between disks manually. Whether you need to replace an online disk, invoke a proactive copy to hot spare, or just copy data from an online disk to an unbound disk, Navisphere CLI commands can get the job done.
NOTE – I think this goes without saying, but if you have any doubts about the process, contact EMC Support. Errors when copying disk data can have disastrous effects.
In this tutorial, we will be copying data from an online disk to an unbound disk. We will be utilizing Navisphere CLI from the command line to invoke the process. If you have not already, install Navisphere CLI. It is also possible to run Navisphere commands from the array’s control station via SSH.
Let’s start with a breakdown of the syntax.
Navisphere CLI is a command line interface tool provided by EMC for accessing and managing VNX or Clariion storage arrays. Administrators can issue commands for general array management, including storage provisioning, status, and configuration control. Navisphere CLI can also be leveraged for automating management tasks through scripts or batch files. Navisphere CLI is supported on EMC VNX, CX, CX3, CX4, and AX4-5 storage systems. Starting with VNX OE 5.31, Classic CLI (NaviCLI) is no longer supported; Secure CLI (NaviSecCLI) commands are now required.
As per EMC recommendation, the Navisphere CLI version should be matched with the Block Software version of the storage system. For instance, if the Block side version is 05.33.x, use Navisphere CLI 7.33.x or higher.
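As a hedged sketch of what the Secure CLI invocation can look like: the SP address, credentials, scope, and Bus_Enclosure_Disk IDs below are placeholders, so confirm the exact copytodisk syntax against your array's Navisphere CLI reference (and EMC Support, per the note above) before running anything.

```shell
# Verify the state of the source and destination disks first (B_E_D disk IDs are placeholders).
naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 getdisk 0_0_4 -state
naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 getdisk 0_1_10 -state

# Copy data from the online disk (0_0_4) to the unbound disk (0_1_10).
naviseccli -h 10.0.0.50 -user sysadmin -password sysadmin -scope 0 copytodisk 0_0_4 0_1_10
```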