VMworld Buzz: vVolumes, VMware’s game changer for storage

This session certainly didn’t disappoint and turned out to be one of the coolest new technologies VMware is working on with its storage partners. It’s very strange that it wasn’t even mentioned in yesterday’s keynote, or at least billed more prominently.
This is a game changer in storage!


The session started by looking at the issues with the current storage model and traditional storage: there’s no app-level visibility, and multiple apps (VMs) are lumped together in LUNs and get the same service level. There’s no per-VM failover, so you need to use existing replication solutions for applications. The storage arrays have no visibility into the files within a LUN.
So the need is for more granular, file-level management. There’s a mismatch in the granularity of data management between storage systems and vSphere: vSphere and storage currently see and manage different things.
A product management wish list was shown:
  1. Ability for VMware to offload VMDK-level operations to storage systems: snapshot, clone, replication, encryption, de-dupe, thin provisioning, etc.
  2. Build a framework where any current or future array operation can be leveraged with minimal disruption to the vSphere infrastructure.
  3. No disruption to existing VM creation workflows, and highly scalable in both the number of VMDKs and the number of operations.
  4. Scale to 1,000,000s of VM deployments per storage system.
What VMware wants is to be able to natively store VMDKs on a storage array in the spirit of RDMs, but without the hassle or the scale issues. The VM layer would be able to talk directly to the storage layer, and the storage layer would be able to see down to the individual VMDK level.
So, looking into the future, VMware is creating Capacity Pools, a new way to manage capacity assignment. This actually requires a whole new storage system, and VMware has been working with EMC (obviously), NetApp, HP, Dell and IBM.
Capacity Pools are an allocation of physical storage space, and a set of allowed data services on any part of that storage space. They can span storage system chassis, even across datacenters.
A storage admin would create a Capacity Pool with a service policy attached, say 14TB with snapshots and replicas allowed. A VM admin would then be able to carve out VM volumes from the Capacity Pool until it runs out of space; volumes could be primary VMDKs, backup copies, replicas, clones, thin, thick, etc.
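To make the Capacity Pool idea concrete, here’s a minimal sketch of the allocation model described above: a quota of physical space with a set of allowed data services attached, from which VM volumes are carved until the space runs out. Every class and method name here is hypothetical, purely for illustration; this is not a real VMware API.

```python
# Hypothetical model of a Capacity Pool (illustrative only, not a VMware API).
class CapacityPool:
    def __init__(self, capacity_tb, allowed_services):
        self.capacity_tb = capacity_tb              # e.g. 14 (TB)
        self.allowed_services = set(allowed_services)  # e.g. {"snapshot", "replica"}
        self.volumes = []                           # carved volumes: (name, size_tb, kind)

    def used_tb(self):
        return sum(size for _, size, _ in self.volumes)

    def carve(self, name, size_tb, kind="primary"):
        """Carve a VM volume (primary VMDK, replica, clone...) from the pool."""
        if self.used_tb() + size_tb > self.capacity_tb:
            raise ValueError("Capacity Pool exhausted")
        self.volumes.append((name, size_tb, kind))

    def allows(self, service):
        return service in self.allowed_services


# Storage admin creates the pool with a service policy attached...
pool = CapacityPool(14, {"snapshot", "replica"})
# ...then the VM admin carves VM volumes out of it until the space runs out.
pool.carve("web01.vmdk", 2)
pool.carve("web01-replica", 2, kind="replica")
```

The point of the model is that the quota and the service policy live on the pool, not on individual LUNs, so the VM admin can self-serve within the limits the storage admin set.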
To connect to the storage device you would use an IO Demultiplexer, a hugely scalable system whereby you would have a single (load-balanced, obviously) connection from compute clusters to storage systems: a single mount point. No more LUNs and their complexity; in fact this bypasses the whole SAN/NAS debate, which as of today is old news! Whether you connect to the IO Demultiplexer with FC or 10GbE doesn’t matter; the storage is presented in the same way.
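The demultiplexing idea can be sketched as a simple routing table: hosts see one mount point, and per-VM-volume I/O is routed to whichever backend system actually holds that volume. This is a conceptual illustration under my own assumed names, not the actual protocol VMware described.

```python
# Illustrative sketch of the IO Demultiplexer concept (assumed names, not a real protocol).
class IODemultiplexer:
    """One front-end mount point; routes per-VM-volume I/O to backend arrays."""

    def __init__(self):
        self.routes = {}  # vm_volume_id -> backend storage system

    def bind(self, vm_volume_id, backend):
        self.routes[vm_volume_id] = backend

    def submit(self, op, vm_volume_id):
        # Hosts never see LUNs or backends; the transport (FC or 10GbE)
        # on the front end makes no difference to this routing step.
        backend = self.routes[vm_volume_id]
        return f"{op} on {vm_volume_id} -> {backend}"


demux = IODemultiplexer()
demux.bind("web01.vmdk", "array-A")
demux.bind("db01.vmdk", "array-B")
```

The design point is that adding or moving backend arrays changes only the routing table, never the single mount point the compute clusters connect to.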
The current VAAI and VASA systems would be extended to talk to this, providing policy-driven storage and offloading as many storage-related tasks as possible to the storage arrays.
The way this is a game changer is that it drags the storage arrays into the virtualisation (OK, also cloud) universe by truly virtualising storage, rather than the current system, which is a bit of a fudge that fits the old aspects of traditional block storage into a virtual world. I’m a big proponent of NFS, and this combines the usability of NFS at the VMDK level with a whole lot more.
The only questions are… when can we have it, and how difficult will it be for the storage vendors to implement?
A very interesting session with loads more information to come I hope.