Thursday, August 30, 2012

New Features of the vSphere Storage Appliance version 5.1

 

This post highlights the new features of the recently announced vSphere Storage Appliance version 5.1. The major enhancements in VSA 5.1 are twofold: the first is to enhance the VSA for the SMB/SME markets; the second is to move into adjacent markets such as ROBO.

Before we start, I want to make a clarification around the required RAID configuration. Initially, VSA 1.0 required a RAID10 configuration on the local storage of each of the ESXi hosts participating in the VSA cluster. This requirement has since been relaxed, and RAID5 & RAID6 are now also supported configurations. More detail can be found here. Let’s move on to the new 5.1 features.

Support for Additional Disk Drives & Expansion Chassis

In VSA 1.0, each ESXi host could have only 4 x 3TB disk drives. In VSA 5.1, we are increasing the number of disks per ESXi host to 8 x 3TB disk drives.

The number of 2TB (or smaller) disks per host has also been increased: 12 disks can now be supported internally in an ESXi host, up from 8 in VSA 1.0. Another major enhancement is support for JBODs (Just a Bunch Of Disks), i.e. disk expansion chassis. An additional 16 disks can now be supported in an expansion chassis attached to an ESXi host, giving a maximum of 28 x 2TB (or smaller) physical disks per host.
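To put those maximums in perspective, here is a quick back-of-the-envelope calculation (a plain Python sketch; the figures are taken straight from the limits above, and are raw capacity before any RAID or VSA mirroring overhead):

```python
# Per-host disk maximums in VSA 5.1, taken from the figures above.
MAX_3TB_DISKS = 8         # 3TB drives, internal only
MAX_2TB_INTERNAL = 12     # 2TB (or smaller) drives inside the host
MAX_2TB_EXPANSION = 16    # additional drives in a JBOD expansion chassis

raw_3tb_config = MAX_3TB_DISKS * 3                           # 24TB raw per host
raw_2tb_config = (MAX_2TB_INTERNAL + MAX_2TB_EXPANSION) * 2  # 28 disks = 56TB raw

print("8 x 3TB internal:    %dTB raw" % raw_3tb_config)
print("28 x 2TB with JBOD:  %dTB raw" % raw_2tb_config)
```

Remember these are raw numbers; the usable figure will be lower once the chosen RAID level (RAID10, RAID5 or RAID6) and the VSA's own mirroring across nodes are taken into account.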

Increase Storage Capacity Online

In VSA 1.0, the cluster storage capacity could not be resized after deployment. VSA 5.1 supports growing the storage capacity online.

There is a new UI enhancement in VSA 5.1 to address this: it allows the VSA shared storage to be increased in size after deployment, as long as there is enough free local storage on all nodes to grow.
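The grow operation is driven from the UI, but the precondition behind it is simple: every node must have the free local storage to match. Here is a minimal sketch of that check in plain Python (the function name and the dict of capacities are my own illustration, not a VSA API):

```python
def can_grow_vsa_storage(free_gb_per_node, growth_gb_per_node):
    """Return True if every node in the VSA cluster has enough free
    local storage for the requested per-node capacity increase."""
    return all(free >= growth_gb_per_node
               for free in free_gb_per_node.values())

# Example: a 3-node cluster where one node is short on space.
nodes = {"esxi-01": 500, "esxi-02": 500, "esxi-03": 200}
print(can_grow_vsa_storage(nodes, 400))   # False - esxi-03 blocks the grow
```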

ROBO Support

This is the most sought-after feature of the VSA 5.1 release. There have been many requests to enable VSA for ROBO (Remote Office/Branch Office) solutions. This involved two development efforts:

  • Allow a single vCenter instance to manage multiple VSA clusters
  • Allow vCenter to reside on a different network subnet to the VSA cluster

Both of these features are now in VSA 5.1. VMware will support up to 150 VSA clusters being managed from a single vCenter server.
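With up to 150 VSA clusters hanging off one vCenter server, it can be useful to enumerate them programmatically. Below is a hedged sketch using pyVmomi, the Python bindings for the vSphere API (the hostname and credentials are placeholders, and the code simply lists every cluster vCenter knows about, VSA or otherwise):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder details for the single vCenter managing the ROBO sites.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    for dc in content.rootFolder.childEntity:
        if not isinstance(dc, vim.Datacenter):
            continue
        for entity in dc.hostFolder.childEntity:
            # Each VSA cluster appears to vCenter as a regular cluster object.
            if isinstance(entity, vim.ClusterComputeResource):
                print("%s / %s: %d hosts" % (dc.name, entity.name,
                                             len(entity.host)))
finally:
    Disconnect(si)
```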

vCenter running on the VSA Cluster

Another popular feature request was to allow the vCenter server to run as a VM on the VSA cluster, something that wasn’t possible in VSA 1.0; vCenter had to be installed somewhere else before a VSA cluster could be deployed. Customers can now deploy vCenter on a local VMFS datastore of one of the ESXi hosts that will participate in the cluster. The cluster can then be created, since the VSA datastore can now be built from a subset of the local VMFS storage rather than requiring all of it, as VSA 1.0 did. Once the shared storage (NFS datastores) is created, vCenter can be migrated onto it.
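That final step, moving vCenter onto the newly created NFS shared storage, is just a Storage vMotion. A minimal pyVmomi sketch of the relocation (the VM and datastore names are hypothetical; in practice you would do this from the vSphere Client):

```python
from pyVmomi import vim

def storage_vmotion(si, vm_name, target_ds_name):
    """Relocate a VM's disks onto another datastore (Storage vMotion)."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
    vm = ds = None
    for obj in view.view:
        if isinstance(obj, vim.VirtualMachine) and obj.name == vm_name:
            vm = obj
        elif isinstance(obj, vim.Datastore) and obj.name == target_ds_name:
            ds = obj
    spec = vim.vm.RelocateSpec(datastore=ds)
    return vm.RelocateVM_Task(spec=spec)   # returns a Task to monitor

# Hypothetical names: the vCenter VM and one of the VSA NFS datastores.
# task = storage_vmotion(si, "vCenter-Server", "VSADs-0")
```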

Brownfield Install of the VSA Cluster

In VSA 1.0, we required a vanilla version of ESXi 5.0 installed on the 2 or 3 nodes (what we called a greenfield installation). VSA 5.1 includes a feature called the automatic brownfield install of the VSA: VSA 5.1 can be installed on ESXi hosts that are already in production and may have network portgroups configured as well as running VMs. One of these running VMs can contain your vCenter server, as discussed previously.
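Before attempting a brownfield install, it is worth taking stock of what is already on the hosts. A purely illustrative pyVmomi sketch (the VSA installer performs its own checks) that lists each host's configured portgroups and powered-on VMs:

```python
from pyVmomi import vim

def brownfield_inventory(si):
    """Print the existing portgroups and running VMs on each ESXi host."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print("Host:", host.name)
        for pg in host.config.network.portgroup:
            print("  portgroup:", pg.spec.name)
        for vm in host.vm:
            if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                print("  running VM:", vm.name)
```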

vSphere 5.1 Specific Enhancements

VSA 5.1 will run on both vSphere 5.1 and vSphere 5.0. Another restriction from VSA 1.0 is also lifted in VSA 5.1: memory overcommit is now supported on VMs running on the VSA. This means that you no longer need to allocate a full complement of memory to each VM running on the VSA.
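In VSA 1.0 every VM on the VSA effectively needed a full memory reservation; with overcommit now supported, that reservation can be relaxed. A hedged pyVmomi sketch of clearing the reservation on a single VM (the `vm` object is assumed to have been looked up as in the earlier sketches):

```python
from pyVmomi import vim

def clear_memory_reservation(vm):
    """Reconfigure a VM so its memory is no longer fully reserved,
    letting the host overcommit memory (supported from VSA 5.1)."""
    alloc = vim.ResourceAllocationInfo(reservation=0)    # 0 MB reserved
    spec = vim.vm.ConfigSpec(memoryAllocation=alloc)
    return vm.ReconfigVM_Task(spec=spec)   # returns a Task to monitor
```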

That completes the list of storage enhancements in the 5.1 version of the vSphere Storage Appliance (VSA). Obviously this is only a brief overview of each of the new features; I will be elaborating on all of them over the coming weeks and months.

more @ Cormac Hogan
