Here is an issue on the more obscure side; nevertheless, it may prove beneficial to someone, somewhere, at some point. The issue pertains to a SQL database being marked Suspect after a Zerto failover. To adequately explain the cause, we will first look at Microsoft SQL Server best practices for both VMware and Zerto.
Let’s start with VMware best practices. Assuming the back-end storage is spinning disk and the virtual disks reside on VMFS volumes, VMware recommends separating SQL Server files: the SQL Server binaries, data files (.mdf), transaction logs (.ldf), and tempdb files are each placed on separate VMDKs. Because SQL Server accesses these files with different I/O patterns, separating them minimizes disk head movement and limits I/O contention, thus optimizing storage performance. The disk configuration in the affected environment looked like this:
In the past, I’ve discussed a few of my favorite features in Zerto; however, a recent feature that flew under my radar was support for host remediation using VMware vSphere Update Manager. If you utilize Zerto in your VMware environment, you may have run into an issue using maintenance mode on hosts with Virtual Replication Appliances (VRAs).
First, a quick background. Zerto VRAs are virtual appliances deployed to handle the continuous replication between protected and recovery sites. These VRAs are typically installed on each ESXi host to compress and transport replication data across your WAN.
Versions prior to 5.0 required VRAs to be manually shut down for a host to successfully enter maintenance mode. As you can probably tell, this manual shutdown process posed an issue for admins who were automating maintenance mode tasks across their environments, notably remediation through Update Manager.
To address this, Zerto incorporated support for host remediation in release 5.0 U1. Now, when a host is put into maintenance mode, Zerto monitors the host’s workload to ensure all machines have been migrated or powered down. Once the workload fits the maintenance criteria, Zerto signals the VRA to shut down, allowing the host to enter maintenance mode. Conversely, once the host exits maintenance mode, Zerto automates powering the VRA back on. As machines are migrated back to the host through further remediation, the powered-on VRA is ready to handle their replication.
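To make the logic concrete, here is a toy sketch (not Zerto’s actual code) of the condition described above: the VRA can shut down once every VM on the host, other than the VRA itself, has been migrated away or powered off. The VRA name and the VM dictionaries are illustrative assumptions.

```python
def vra_can_shut_down(host_vms, vra_name="Z-VRA-esx01"):
    """Return True once every VM on the host, apart from the VRA itself,
    has been migrated away (i.e. is no longer in the list) or powered off.
    This approximates the condition Zerto waits for before signalling the
    VRA to shut down so the host can enter maintenance mode."""
    return all(
        vm["name"] == vra_name or vm["power_state"] == "poweredOff"
        for vm in host_vms
    )
```

Once this condition holds, the only running VM left on the host is the VRA, which Zerto can then power down safely.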
Enabling this feature is simple and can be done directly from the Zerto Virtual Manager.
Update – Big thanks to vBrownBag for guest posting this article on their blog!
A few weeks back, we discussed Zerto’s ability to perform point-in-time file-level restores directly from the replication journal. However, what if the data you need from 30 minutes ago isn’t readily compiled into a file, or requires manual intervention to produce, such as a database backup or a PST export of Exchange mailbox items? Introducing Zerto Failover Tests for data recovery.
A Zerto Failover Test can spin up a virtual machine from a specific point in time on an isolated network. From there, data can be manually compiled and exported.
Although the failover test process is more step-intensive than a File Restore, it still addresses a vital need for many organizations: the ability to restore application data from a specific moment. Compared to traditional backups, the ability to restore data from seconds before corruption or loss is critical.
In this walkthrough, we will focus on a database backup and recovery. To do so, we will perform a failover test to an isolated network, back up the database, and finally extract the SQL backup file to our production location. Because this particular test network is completely isolated, we will use VMware PowerCLI to extract the backup file.
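As a rough sketch of the two moving parts in that workflow, the helpers below compose the in-guest T-SQL backup statement and the PowerCLI extraction command. The database, VM, path, and account names are placeholders, and Copy-VMGuestFile’s credential handling is simplified for illustration; the cmdlet works through VMware Tools, which is why it can reach a VM sitting on a fully isolated test network.

```python
def backup_tsql(db_name, guest_path):
    """Build the T-SQL to back up a database inside the failover-test VM.
    COPY_ONLY keeps this test backup out of the production backup chain."""
    return (f"BACKUP DATABASE [{db_name}] "
            f"TO DISK = N'{guest_path}' WITH COPY_ONLY, INIT;")

def copy_vmguestfile_cmd(vm_name, guest_path, local_path, guest_user):
    """Build the PowerCLI command to pull the backup file out of the
    isolated guest via VMware Tools (no guest networking required)."""
    return (f"Copy-VMGuestFile -Source '{guest_path}' "
            f"-Destination '{local_path}' -VM '{vm_name}' "
            f"-GuestToLocal -GuestUser '{guest_user}'")
```

For example, `backup_tsql("SalesDB", r"C:\Temp\SalesDB.bak")` produces the statement to run inside the test VM, and `copy_vmguestfile_cmd("SQL01-test", r"C:\Temp\SalesDB.bak", r"D:\Restores\", "administrator")` produces the command to run from a PowerCLI session on the production side.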
Today, Zerto released an Upgrade Bulletin regarding possible data corruption when Zerto incurs I/O errors. The I/O errors in question occur when a customer’s storage array controller becomes unresponsive or fails, typically during an upgrade in an environment that leverages highly available storage controllers. When a controller fails and ownership transfers between controllers, there is a brief window in which Zerto incurs these I/O errors. The errors appear different from standard I/O errors and can be misinterpreted by Zerto; if that misinterpretation happens, data corruption could occur on protected virtual machines on the original controller. The issue was originally seen by Nutanix customers, but it is suspected the errors may affect other environments as well. Because Zerto cannot identify every array this will affect, they are advising customers to upgrade to ZVR 5.0 U2 or 4.5 U5. Zerto also stresses that the issue does not occur under normal operating conditions, only during controller failures/failovers.
As per Zerto, customers should:
- Go to MyZerto Support Portal
- Review ZVR 5.0 U2 Release Notes
- Install ZVR 5.0 U2
- Contact Zerto Support with any questions
Today I wanted to walk through a Zerto journal file restore. This feature has been available since version 4.5 and fills a crucial need in many organizations. The recovery process allows you to restore a file or folder using any checkpoint in the journal. Compared to traditional nightly backups, this can be a game changer when it comes to data loss. With Zerto RPOs typically measured in seconds, you can granularly restore data back to the moment right before corruption or loss.
The process is super simple. Initiate the File Restore from the Zerto Virtual Manager (ZVM). After you select the point in time for the restore, the disk is mounted on the recovery ZVM, and files and folders can be downloaded through your browser and restored.
One item I want to elaborate on is the downloading and restoring of the files or folders. As mentioned above, when restoring from the ZVM, your files will be downloaded to the local machine you initiate the download from. The download lands in the user’s Downloads folder, which is fine if you don’t mind manually moving the files to the correct location. However, if you want more control over where the files go, you can recover them straight from the mounted disk. The disk containing the restore data is mounted on the recovery-side ZVM; once mounted, you can access the drive through File Explorer and copy the data wherever you please.
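If you script that copy instead of using File Explorer, a minimal sketch looks like the following. It assumes only that the restore disk is mounted at some path on the recovery ZVM (for example a drive letter like `E:\`); the function name and parameters are hypothetical.

```python
import pathlib
import shutil

def restore_from_mount(mount_root, relative_path, dest_dir):
    """Copy one restored file from the disk Zerto mounts on the recovery
    ZVM (e.g. a drive letter such as E:\\) to a destination of your
    choosing, instead of pulling it down through the browser."""
    src = pathlib.Path(mount_root) / relative_path
    dest = pathlib.Path(dest_dir) / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # copy2 preserves file timestamps
    return dest
```

This gives you full control over the target location and keeps the file’s original timestamps, which the browser download path does not.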
Let’s walk through the process. Log into your ZVM and click the Actions tab from the bottom pane.