Storage checks

This article is part of the VMware vSphere Health Check PowerPack. More info can be found here: VMware vSphere Health Check PowerPack

VMs per datastore

This check shows the number of VMs per datastore. When there are more than 25 VMs on a datastore, a warning is displayed in the “checks” column. The limit of 25 VMs is only an indication: depending on your environment, performance issues may only arise at a much higher number of VMs per datastore, or already at a much lower number. Usually 20-25 VMs per datastore is used as a rule of thumb. The real maximum number of VMs that is suitable for your specific environment is determined by your SAN and by the number of SCSI reservations the VMs generate. SCSI reservations only apply to block-based storage, not to NFS.

Reading tip: “Analyzing SCSI Reservation conflicts on VMware Infrastructure 3.x and vSphere 4.x”
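
For reference, a check along these lines can be reproduced with a small PowerCLI sketch (an illustration of the idea, not the PowerPack’s actual script; the threshold of 25 VMs is the rule of thumb mentioned above):

# Count the VMs on each datastore and flag datastores above the rule-of-thumb limit
$limit = 25
Get-Datastore | ForEach-Object {
    $vmCount = @(Get-VM -Datastore $_).Count
    [PSCustomObject]@{
        Datastore = $_.Name
        NumVMs    = $vmCount
        Check     = if ($vmCount -gt $limit) { "Warning: more than $limit VMs" } else { "OK" }
    }
}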

Datastore and LUN info

This is just a handy list that maps LUN IDs, LUN names and datastores to each other for easy reference.
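
If you want to build a similar mapping yourself, a minimal PowerCLI sketch (assuming VMFS datastores backed by a single extent) could look like this:

# Map each VMFS datastore to the canonical name of the LUN backing its first extent
Get-Datastore | Where-Object { $_.Type -eq "VMFS" } | ForEach-Object {
    [PSCustomObject]@{
        Datastore  = $_.Name
        LunName    = $_.ExtensionData.Info.Vmfs.Extent[0].DiskName   # e.g. naa.60a98000...
        CapacityGB = [math]::Round($_.CapacityGB, 1)
    }
}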

RDM overview

This is just a list of the RDMs used by VMs running on the hosts that are checked. In the early days of ESX it was common to use RDMs instead of VMDKs on a VMFS volume. Nowadays RDMs are only used for specific use cases, and the default should be to give VMs a VMDK on a VMFS datastore.

Reading tip: “Performance Characterization of VMFS and RDM Using a SAN”
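
A comparable RDM overview can be produced with PowerCLI; a sketch (not necessarily the exact script the PowerPack uses):

# List all raw device mappings (virtual and physical compatibility mode) attached to VMs
Get-VM | Get-HardDisk -DiskType RawVirtual, RawPhysical |
    Select-Object @{N = "VM"; E = { $_.Parent.Name }},
                  Name,
                  DiskType,
                  ScsiCanonicalName,
                  @{N = "CapacityGB"; E = { [math]::Round($_.CapacityGB, 1) }}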

Datastore Space info

When a datastore runs out of space, VMs with active snapshots can freeze because they can no longer write changes to the snapshot. Therefore you always want to make sure a certain amount of free space is available. As a rule of thumb, a minimum of 40GB (40960MB) is used. A percentage of free space is not a good measure, since 10% of a 2TB datastore is quite different from 10% of a 500GB datastore.
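
The free space check itself is easy to reproduce; a PowerCLI sketch using the 40GB rule of thumb mentioned above:

# Flag datastores with less than 40GB (40960MB) of free space
$minFreeMB = 40960
Get-Datastore | ForEach-Object {
    [PSCustomObject]@{
        Datastore  = $_.Name
        CapacityGB = [math]::Round($_.CapacityGB, 1)
        FreeGB     = [math]::Round($_.FreeSpaceGB, 1)
        Check      = if ($_.FreeSpaceMB -lt $minFreeMB) { "Warning: low free space" } else { "OK" }
    }
}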

Blocksize Overview

Did you know that the blocksize of a datastore determines the maximum size of a VMDK that can be stored on it? Probably, because it is one of the most frequently asked questions on VCP exams. In the days of ESX 3.x most admins used a 1MB blocksize for their datastores by default and only moved to a 2, 4 or 8MB blocksize if the VMDK size really demanded it. After some discussions on the VMware Community forums and a very good blog post on Yellow-Bricks about blocksizes, most admins have switched to an 8MB blocksize for their VMFS volumes.

Did you also know that copying VMDKs between datastores with different blocksizes is much slower than copying VMDKs between datastores with the same blocksize? There is a good post on this at Yellow-Bricks, which proves and explains why you should keep the blocksize equal across all datastores if possible.

This check shows you the blocksizes of your VMFS datastores.

(PS: Remember that the new VMFS5 only uses a 1MB blocksize for newly created datastores. Datastores upgraded from VMFS3 to VMFS5 keep the blocksize of the pre-upgrade volume.)
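
If you want to query the blocksizes yourself, a minimal PowerCLI sketch that reads the VMFS properties through the vSphere API view looks like this:

# Show the VMFS version and blocksize of each VMFS datastore
Get-Datastore | Where-Object { $_.Type -eq "VMFS" } |
    Select-Object Name,
                  @{N = "VmfsVersion"; E = { $_.ExtensionData.Info.Vmfs.Version }},
                  @{N = "BlockSizeMB"; E = { $_.ExtensionData.Info.Vmfs.BlockSizeMb }}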


Guest disk waste finder

This check gives you an idea of how much space you could regain by resizing your VM disks. The script is based on the formulas in vOptimizer. The reclaim value of 0.8 means that after recovering storage or resizing a disk, 20% free space should still be left for growth. The value in the report shows how much you can reclaim while still maintaining that 20% of free space.
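
The vOptimizer formulas are not reproduced here, but the idea can be sketched in PowerCLI as follows: size each guest disk so that 20% remains free, i.e. divide the used space by the 0.8 reclaim value. The guest disk figures come from VMware Tools, so they are only available for VMs with Tools running; this is an illustration, not the PowerPack’s actual script.

# For each guest disk, estimate how much space could be reclaimed while keeping 20% free
$reclaimValue = 0.8
Get-VM | Where-Object { $_.ExtensionData.Guest.Disk } | ForEach-Object {
    $vm = $_
    $vm.ExtensionData.Guest.Disk | ForEach-Object {
        $usedGB   = ($_.Capacity - $_.FreeSpace) / 1GB
        $targetGB = $usedGB / $reclaimValue          # resized disk that still has 20% free
        [PSCustomObject]@{
            VM         = $vm.Name
            GuestDisk  = $_.DiskPath
            CapacityGB = [math]::Round($_.Capacity / 1GB, 1)
            WasteGB    = [math]::Round(($_.Capacity / 1GB) - $targetGB, 1)
        }
    }
}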

VM Snapshot overview

Snapshots on a VM have a big impact on the performance of that VM, so you should always make sure to delete snapshots as soon as you no longer need them. Even better is to create a vCenter alarm on a snapshot size of, for example, 1MB, so that every snapshot triggers an alarm and keeps reminding you to clean it up quickly.

This check shows you all active snapshots. Unfortunately it cannot filter out the snapshots from your VMware View environment, which are actually required when using linked clones.
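
Listing all active snapshots is a PowerCLI one-liner; a sketch:

# List all snapshots with their creation date and size
Get-VM | Get-Snapshot |
    Select-Object VM, Name, Created,
                  @{N = "SizeGB"; E = { [math]::Round($_.SizeGB, 2) }}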

Orphaned VMDKs

When connecting, disconnecting, adding and removing VMDKs, you can easily lose track of some of them. This check searches for VMDKs that are no longer connected to a VM. To be precise: it can only tell that a VMDK is not connected to a VM that is managed by the vCenter Server we are running the check against. It sometimes happens that multiple vCenter or ESX environments share the same storage, for example when a separate host is used for Oracle VMs to bring down Oracle licensing costs. In that case the “orphaned VMDK” check would find a VMDK that seems to be orphaned but is actually connected to a VM from a different environment. So be careful and check thoroughly whether the VMDK you want to delete really is orphaned.

(Oracle is licensed per CPU and when run in a vSphere cluster, you would have to license every CPU of every ESX host in that cluster. Often, in small environments, an Oracle VM is isolated on a separate host, sometimes even outside the vCenter environment.)
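
As a rough, read-only illustration of the idea (not the PowerPack’s actual implementation), a PowerCLI sketch could compare the VMDKs referenced by the VMs and templates in this vCenter with the virtual disk files found by scanning the datastores. Keep the warning above in mind: a disk reported here may still be in use by another environment sharing the same storage.

# VMDKs referenced by VMs and templates known to this vCenter Server
$inUse = @()
$inUse += Get-VM | Get-HardDisk | ForEach-Object { $_.Filename }
$inUse += Get-Template | Get-HardDisk | ForEach-Object { $_.Filename }

# All virtual disk files found by scanning the datastores (this can take a while)
$onDisk = Get-Datastore | ForEach-Object { Get-HardDisk -Datastore $_ }

# Report the candidates; verify each one manually before deleting anything
$onDisk | Where-Object { $inUse -notcontains $_.Filename } | Select-Object Filename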