VMware vSphere 4.1 released – What’s new?

As many of you expected, the new VMware vSphere version is called VMware vSphere 4.1. The hints have been clear these past few weeks, and today it became official: vSphere 4.1 has been released, and we can finally start blogging about its latest features. Over the next few days I will post a number of articles on the new features, starting with an overview today.

The first thing you will notice is a change in naming:

–          The free ESXi has now been renamed to VMware vSphere Hypervisor (ESXi)

–          VMotion is now officially named vMotion

–          Storage VMotion is now officially named Storage vMotion

Many changes have been made to ESXi:

–          ESXi has become more important than ever: VMware has announced that vSphere 4.1 will be the last major release to include the classic ESX architecture

–          ESXi boot from SAN is now fully supported on FC, hardware iSCSI and FCoE

–          ESXi automated install has been greatly improved, with more scripting possibilities during installation and post-installation

–          Tech Support Mode in ESXi:

  • is now fully supported
  • generates a warning in vCenter when active
  • can be configured to time out automatically after it is enabled
  • can be configured from vCenter
  • is only accessible to authenticated users

–          Total host lockdown mode (disabling both Tech Support Mode and the DCUI) can be enabled via vCenter

–          More commands are available in Tech Support Mode, including vscsiStats, tcpdump-uw and commands for VAAI, networking and VM management

–          VMware Update Manager can now update drivers, CIM providers and modules

An overview of vSphere’s new feature set.

Availability – VMware HA

–          VMware HA diagnostics and HealthCheck continuously report whether your cluster is still healthy and has enough HA slots available

–          DRS is now able to move VMs to guarantee free HA slots

–          A new set of APIs is available for third parties to make applications HA-aware

–          HA cluster limits are now equal to DRS cluster limits, which is 32 hosts per cluster

–          The number of VMs protected by HA per host is now at 320 VMs

–          The maximum number of protected VMs is 3000 VMs per HA cluster
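
The slot accounting behind these numbers can be sketched in a few lines of Python. This is an illustrative simplification, not VMware's admission-control code: the slot-size rule (largest CPU and memory reservation in the cluster) follows the documented default behaviour, but every function name and signature here is an assumption made for the example.

```python
# Simplified sketch of HA slot accounting (illustrative, not VMware code).

def slot_size(vms):
    """vms: list of (cpu_mhz_reservation, mem_mb_reservation) per VM.
    The slot is sized by the largest reservations in the cluster."""
    return (max(cpu for cpu, _ in vms), max(mem for _, mem in vms))

def slots_per_host(host_cpu_mhz, host_mem_mb, slot):
    # The most constrained resource determines the host's slot count.
    return min(host_cpu_mhz // slot[0], host_mem_mb // slot[1])

def cluster_has_capacity(hosts, vms, failover_hosts=1):
    """hosts: list of (cpu_mhz, mem_mb). Conservatively drops the slot
    contribution of the largest `failover_hosts` hosts, then checks
    whether every VM still gets a slot."""
    slot = slot_size(vms)
    per_host = sorted(slots_per_host(cpu, mem, slot) for cpu, mem in hosts)
    usable = sum(per_host[:len(per_host) - failover_hosts])
    return usable >= len(vms)
```

For example, three hosts of 10000 MHz / 32000 MB with ten VMs each reserving 1000 MHz / 4000 MB still fit after losing one host, since 16 slots remain for 10 VMs.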

Availability – vMotion

–          Is now officially renamed from VMotion to vMotion

–          The number of concurrent vMotions is now 4 on a 1 Gbps link and 8 on a 10 Gbps link

–          The speed of the vMotion process has greatly improved, which makes the time needed to evacuate a large host much shorter

–          Enhanced vMotion Compatibility (EVC) has improved: the latest AMD EVC mode can include or exclude 3DNow! instructions, and EVC now handles powered-on VMs much better
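
As a small illustration of these concurrency limits, a scheduler-side check could look like the following sketch (the table and function name are assumptions made for the example, not a VMware API):

```python
# Per-link concurrent vMotion limits in vSphere 4.1, as listed above.
CONCURRENT_VMOTION_LIMIT = {1: 4, 10: 8}  # link speed in Gbps -> max concurrent

def can_start_vmotion(active_vmotions, link_gbps):
    """True if another vMotion may start on a link of the given speed."""
    limit = CONCURRENT_VMOTION_LIMIT.get(link_gbps)
    if limit is None:
        raise ValueError(f"no documented limit for a {link_gbps} Gbps link")
    return active_vmotions < limit
```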


Security – Active Directory integration

–          Active Directory integration for ESX(i) hosts.

–          Members of the “ESX Admins” Active Directory group can be granted access to the ESX console or ESX(i) Tech Support Mode


Scalability – vCenter Server maximums

–          More VMs per cluster and datacenter, and more hosts per vCenter Server and datacenter:

–          32 hosts per cluster (was 32)

–          3000 VMs per cluster (was 1280)

–          1000 hosts per vCenter Server (was 300)

–          15000 registered VMs (was 4500)

–          10000 concurrently powered-on VMs (was 3000)

–          120 concurrently connected vSphere Clients (was 30)

–          500 hosts per virtual datacenter (was 100)

–          5000 VMs per virtual datacenter (was 2500)

vCompute – Host affinity rules

–          Host affinity rules allow you to configure DRS to keep VMs on one or more specific hosts.

–          With a “required” rule, DRS/HA will never violate the rule; an event is generated if it is violated manually. Required rules are only advised for enforcing host-based licensing of ISV applications

–          With a “preferential” rule DRS/HA will violate the rule if necessary for failover or for maintaining availability

–          With host affinity rules you can keep VMs on one chassis or keep them spread across chassis. You can also keep VMs together on one host, for example XenApp servers.
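
The difference between the two enforcement types can be modelled as a simple placement filter. This is an illustrative sketch, not DRS internals; the function and its parameters are assumptions:

```python
# Illustrative placement filter for host affinity rules (not DRS code).

def allowed_hosts(candidate_hosts, rule_hosts, rule_type, emergency=False):
    """candidate_hosts: hosts the VM could run on.
    rule_hosts: hosts named by the affinity rule.
    rule_type: 'required' or 'preferential'.
    emergency: True when failover/availability is at stake."""
    preferred = [h for h in candidate_hosts if h in rule_hosts]
    if rule_type == "required":
        return preferred                 # a required rule is never violated
    if preferred or not emergency:
        return preferred                 # honor a preferential rule when possible
    return list(candidate_hosts)         # violate it only to keep the VM running
```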

vCompute – Distributed Power Management

–          Distributed Power Management can now be enabled and disabled through a scheduled task, which allows you to disable DPM before the busiest time of day starts

–          Disabling DPM will now automatically power on all hosts that are in standby

vCompute – Memory compression

–          Memory compression increases the amount of memory that can be reclaimed on an ESX host.

–          When an ESX host runs short of memory, it first tries to reclaim memory with the ballooning driver. If that does not free enough memory, it starts compressing memory, and only as a last resort, if compression still is not enough, does it swap to disk

–          Memory compression will add 2-3% CPU utilization at host level

–          The time to decompress memory is around 20 microseconds

–          Internally, memory compression uses the gzip algorithm with a few modifications made by the VMware team

–          Overview of the order in which memory reclamation techniques kick in:

  • VMs without vMMU (small pages): Transparent Page Sharing (TPS) -> Ballooning -> Memory Compression -> Swap
  • VMs with vMMU (large pages): no TPS at first, as large pages are broken into small pages under memory pressure -> TPS -> Ballooning -> Memory Compression -> Swap
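
The compression step can be illustrated with Python's built-in deflate (the algorithm behind gzip). The 50% cut-off below, so that two compressed pages fit in one machine page, matches the behaviour documented for ESX memory compression, but the exact codec and bookkeeping here are illustrative assumptions:

```python
import zlib

PAGE_SIZE = 4096  # bytes per guest memory page

def try_compress_page(page):
    """Return the compressed page if it shrinks to at most half a page
    (so two compressed pages fit in one machine page), else None,
    in which case the page would be swapped to disk instead."""
    assert len(page) == PAGE_SIZE
    compressed = zlib.compress(page)  # deflate, as used by gzip
    return compressed if len(compressed) <= PAGE_SIZE // 2 else None
```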

vStorage – Storage I/O Control (SIOC)

–          SIOC calculates datastore latency to identify storage contention

  • Latency is a normalized average across the virtual machines using the datastore
  • I/O size and IOPS are taken into account

–          SIOC enforces fairness when datastore latency crosses a threshold

  • The default threshold is 30 ms
  • Fairness is enforced by limiting the VMs’ access to queue slots

–          The net effect is a redistribution of latency between VMs
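
The bullets above can be condensed into a small model: datastore latency is an IOPS-weighted average across VMs, and once it crosses the 30 ms threshold, each VM's queue-slot allotment is scaled by its disk shares. This is an illustrative sketch of the idea, not VMware's implementation:

```python
# Illustrative model of SIOC contention detection and queue throttling.

SIOC_THRESHOLD_MS = 30.0  # default latency threshold

def datastore_latency_ms(per_vm_stats):
    """per_vm_stats: list of (avg_latency_ms, iops) per VM.
    Returns the IOPS-weighted average latency across the datastore."""
    total_iops = sum(iops for _, iops in per_vm_stats)
    if total_iops == 0:
        return 0.0
    return sum(lat * iops for lat, iops in per_vm_stats) / total_iops

def contention(per_vm_stats):
    return datastore_latency_ms(per_vm_stats) > SIOC_THRESHOLD_MS

def queue_slots(vm_shares, total_shares, device_queue_depth):
    """Under contention, grant queue slots in proportion to disk shares."""
    return max(1, device_queue_depth * vm_shares // total_shares)
```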

vStorage – vStorage API’s for Array Integration (VAAI)

–          VAAI improves performance by offloading operations to efficient array-based implementations instead of performing them on the VMware host. The three primitives are:

  • Full copy: an XCOPY-like function that offloads copy work to the array
  • Block zeroing: speeds up zeroing out blocks or writing repeated content
  • Hardware-assisted locking: an alternative to locking the entire LUN, allowing more VMs on the same LUN

–          These improvements speed up copying, cloning and zeroing out

–          The CPU and network load on the ESX host performing these actions is also reduced to almost zero

–          Currently EMC, Dell, NetApp, IBM, HP and HDS are working with VMware on array support

vNetwork – Network I/O Control

–          Network I/O Control lets you apply limits and shares to your virtual network traffic.

–          With limits you can cap the amount of egress traffic per NIC team.

–          With shares you can set the relative importance of vmnic traffic, just as with CPU or memory shares.

–          Network I/O Control only works on distributed vSwitches and therefore requires an Enterprise Plus license.
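
Share-based partitioning as described above can be sketched as follows. The function is an illustrative assumption and deliberately ignores NIOC details such as redistributing unused bandwidth from capped pools:

```python
# Illustrative share/limit bandwidth split in the spirit of Network I/O Control.

def allocate_bandwidth(link_mbps, pools):
    """pools: dict of traffic type -> (shares, limit_mbps or None).
    Returns dict of traffic type -> allocated egress Mbps."""
    total_shares = sum(shares for shares, _ in pools.values())
    allocation = {}
    for name, (shares, limit) in pools.items():
        fair_share = link_mbps * shares / total_shares
        allocation[name] = min(fair_share, limit) if limit is not None else fair_share
    return allocation
```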

Licensing changes

–          The following list of management products has changed from per CPU licensing to per VM licensing:

  • VMware vCenter Chargeback
  • VMware vCenter SRM (Site Recovery Manager)
  • VMware vCenter CapacityIQ
  • VMware vCenter AppSpeed

–          The biggest changes across the available vSphere editions are:

The feature matrix for the mid-size and enterprise editions breaks down as follows:

–          vSphere Standard and higher – Consolidation: convert physical systems to virtual machines and leverage live migration (vMotion)

–          vSphere Advanced and higher – Availability: enable High Availability (HA) and Fault Tolerance (FT) for applications

–          vSphere Enterprise and higher – Automated resource management: deliver load balancing (DRS), power management (DPM) and live storage migration (Storage vMotion) without manual intervention

–          vSphere Enterprise Plus only – Simplified operations: advanced networking (Distributed Switch) and host configuration templates (Host Profiles) for more OPEX savings
