In this blog post I discuss the features of VMware VAAI and how they help customers increase the consolidation ratio in their environments.
Offloading tasks to the storage array was one of the great features released in vSphere 4.1, and with vSphere 5 many more capabilities were added to help customers get even more value out of it.
To summarize, the figure below (which I found while researching the topic) shows where the integration takes place.
vStorage APIs for Array Integration (VAAI) is a feature introduced in ESX/ESXi 4.1 that provides hardware acceleration functionality. It enables your host to offload specific virtual machine and storage management operations to compliant storage hardware. With the storage hardware's assistance, your host performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
VMware first started to work on VAAI way back in 2008. VAAI was initially implemented as vendor specific commands in vSphere 4.1. However, VMware and its partners worked on standardizing these commands to the extent that all of VAAI (including the new thin provisioning additions in vSphere 5) are based on T10 standards. The amount VMware has contributed to standards in the short amount of time between vSphere 4.1 and 5 is non-trivial and unprecedented, as one can clearly see from the functionality it supports (hardware accelerated locking, Virtual Machine cloning, Storage vMotion, thin provisioning, space reclamation, etc).
Walking through the features requires going back to vSphere 4.1, where VAAI was first introduced. Before that, here are the basic requirements:
- ESX/ESXi 4.1 or later
- Storage arrays that support storage-based hardware acceleration.
- ESX/ESXi 4.1 does not support hardware acceleration with NAS storage devices.
- Support for NAS storage devices is introduced in ESXi 5.x.
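Before relying on any of this, it is worth confirming that the array actually advertises VAAI support to the host. On ESXi 5.x this can be checked per device from the CLI (device names in the output are of course specific to your host; in the vSphere Client the same information appears in the "Hardware Acceleration" column):

```shell
# Show per-device VAAI (hardware acceleration) support status on ESXi 5.x.
# "VAAI Status: supported" means the array claims the primitives.
esxcli storage core device vaai status get
```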
Now let us talk features.
VAAI in vSphere 4.1 offered three primitives:
1. Full copy enables the storage array to make full copies of data within the array without having the ESX host read and write the data.
2. Block zeroing enables storage arrays to zero out a large number of blocks, speeding up provisioning of virtual machines.
3. Hardware-assisted locking provides an alternative means of protecting the metadata of VMFS cluster file systems, thereby improving the scalability of large ESX server farms sharing a datastore.
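To see the block-zeroing primitive in action, one illustrative exercise (the datastore and VM paths below are made up for this example) is to create an eagerzeroedthick disk, which forces every block to be zeroed up front; on a VAAI-capable array that zeroing is offloaded instead of the host streaming zeros over the fabric:

```shell
# Create a 10 GB eagerzeroedthick VMDK; on a VAAI array the zeroing is
# offloaded to the storage, so creation completes noticeably faster.
# Datastore and folder names are examples only.
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testvm-data.vmdk
```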
But now let us identify what was missing from the above three features.
- When VMs are deleted or migrated from a datastore, the array is not informed that these blocks are now free. This leads to array management tools reporting a much higher space consumption than is actually the case.
- What happens when I run out of space (OOS) on my datastore? In vSphere 4.1, an OOS condition could impact all VMs on the thin-provisioned datastore.
- NAS Support was Missing.
So in vSphere 5.0 the following features were added:
• vSphere® Thin Provisioning, enabling the reclamation of unused space and monitoring of space usage for thin-provisioned LUNs
• Hardware acceleration for NAS
• SCSI standardization by T10 compliancy for full copy, block zeroing and hardware-assisted locking
So now let us run through the "what ifs" again for vSphere 5.0.
- If a Thin Provisioned datastore reaches 100%, only those VMs which require extra blocks of storage space are paused, while VMs on the datastore that do not need additional space continue to run.
- A new VAAI primitive (using the SCSI UNMAP command) allows an ESXi host to tell the storage array that space previously occupied by a VM (whether deleted or migrated to another datastore) can be reclaimed. This allows an array to correctly report space consumption of a Thin Provisioned datastore, and allows customers to correctly monitor and forecast new storage requirements.
- A warning is now raised and surfaced in vCenter via VAAI if a Thin Provisioned datastore reaches 75% utilization, as per the screenshot below. This allows an admin to proactively add more storage, extend the datastore, or Storage vMotion some VMs off of it to avoid OOS conditions.
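As a sketch of how the reclamation primitive is driven manually (the datastore path is an example; this is the ESXi 5.0 U1 method, and later releases moved to `esxcli storage vmfs unmap`):

```shell
# Reclaim dead space on a thin-provisioned VMFS datastore (ESXi 5.0 U1):
# vmkfstools -y issues SCSI UNMAP for up to the given percentage of
# free space. It must be run from the datastore's root directory.
cd /vmfs/volumes/datastore1
vmkfstools -y 60
```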
For NAS support, the following features were added:
- Full File Clone – Similar to the “Full Copy” Hardware Acceleration Primitive provided for block arrays, this primitive enables virtual disks to be cloned by the NAS device.
- Native Snapshot Support – Allows creation of VM snapshots to be offloaded to the array.
- Extended Statistics – Enables visibility into space usage on NAS datastores, especially useful for Thin Provisioning.
- Reserve Space – Enables creation of thick virtual disk files on NAS whereas previously the only supported VMDK type that could be created on NAS was thin.
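Unlike the block primitives, the NAS primitives are not native T10 SCSI commands; they require a vendor-supplied plugin installed on the host. One way to check for such a plugin (the grep pattern is just a guess, since package names vary by vendor):

```shell
# List installed VIBs and look for a NAS VAAI plugin; the exact
# package name depends on the storage vendor.
esxcli software vib list | grep -i vaai
```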
How do I know if VAAI is enabled?
- In the vSphere Client inventory panel, select the host.
- Click the Configuration tab, and click Advanced Settings under Software.
- Check that these options are set to 1 (enabled):
  DataMover.HardwareAcceleratedMove
  DataMover.HardwareAcceleratedInit
  VMFS3.HardwareAcceleratedLocking
  Note: These options are enabled by default.
You can also check from the CLI. ESX/ESXi 4.1 syntax:
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
ESXi 5.x syntax:
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
Sample output:
Int Value: 1 <-- set to 1 if enabled
Default Int Value: 1
Min Value: 0
Max Value: 1
Default String Value:
Description: Enable hardware accelerated VMFS locking (requires compliant hardware)
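The same options can be toggled from the CLI, which is occasionally useful for troubleshooting (ESXi 5.x syntax; passing 0 instead of 1 disables a primitive):

```shell
# Explicitly enable the three VAAI-related advanced options (ESXi 5.x).
esxcli system settings advanced set --int-value 1 --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced set --int-value 1 --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set --int-value 1 --option /VMFS3/HardwareAcceleratedLocking
```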
Even with VAAI enabled, hardware offload will not be used, and the host reverts to the software data mover, in the following cases:
- The source and destination VMFS volumes have different block sizes
- The source file type is RDM and the destination file type is non-RDM (regular file)
- The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
- The source or destination VMDK is any sort of sparse or hosted format
- Cloning a Virtual Machine that has snapshots (or doing a View replica or recompose), since this process involves consolidating the snapshots into the virtual disks of the target Virtual Machine.
- The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
- The VMFS datastore has multiple LUNs/extents spread across different arrays
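To confirm whether a given operation was offloaded or fell back to the software data mover, the VAAI counters in esxtop are handy (interactive steps below; field-group letters can differ between builds, so treat them as approximate):

```shell
# In esxtop: press 'u' for the disk-device view, then 'f' to select
# fields and toggle the VAAISTATS group. Counters such as CLONE_F and
# ATSF show failed/reverted offload attempts.
esxtop
```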
So the end picture, shown below, is what happens when VAAI is enabled.