Saturday, 14 March 2009
vStorage - My View
To date, the storage management and architectural offerings in VMware have provided sufficient performance and functionality to enable large-scale server consolidation. To go to the next level, where VMware want to ensure every workload can be virtualised, more performance and management components are required, so that when you run your complete server estate in a virtualised world it provides the same functionality and performance as you would have obtained with server workloads operating in the legacy physical world, if not more.
The VDC-OS (Virtual Datacenter OS) strategy is VMware's next-generation plan for taking its current virtualisation offerings under the VI3 branding and specialising them into distinct layers to create what they are calling the "Software Mainframe". Underpinning this is vSphere (formerly ESX). The vStorage service components, which are all currently under development, will offer a whole band of new and exciting technical functionality to fill gaps and improve upon what exists in the current offerings. I will give a brief overview of each of the main offerings that will be available and comment on how they may change, or even complicate, matters in the virtualised world.
Functionality in vStorage
Visibility
Greater visibility will be made available to enable enhanced interaction with both the storage array and the management software via the central datacentre management component, vCenter (formerly VirtualCenter). This will provide holistic diagnostic views of any performance bottlenecks at the virtualisation layer, plus greater insight and trend analysis on expensive operating areas, such as monitoring how much storage you have left for VMs and how long before that storage fills up.
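To illustrate the sort of trend analysis I mean, here is a minimal sketch of my own (not a VMware tool) that takes periodic free-space samples for a datastore and projects how many days are left before it fills up; the sample history at the bottom is made up for the example.

```python
# Minimal sketch of capacity trend analysis: given periodic samples of a
# datastore's free space, estimate how long until it fills up. The sample data
# and the 'samples' structure are assumptions for illustration only.
from datetime import datetime, timedelta

def days_until_full(samples):
    """samples: list of (datetime, free_bytes), oldest first."""
    if len(samples) < 2:
        return None
    t0 = samples[0][0]
    xs = [(t - t0).total_seconds() / 86400.0 for t, _ in samples]  # days
    ys = [free for _, free in samples]
    n = len(xs)
    # Least-squares slope of free space over time (bytes per day).
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    if denom == 0:
        return None
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / denom
    if slope >= 0:
        return None  # free space is not shrinking, no projected fill date
    return ys[-1] / -slope  # days until current free space reaches zero

if __name__ == "__main__":
    now = datetime(2009, 3, 14)
    # Fabricated history: free space shrinking by roughly 10 GiB per day.
    history = [(now - timedelta(days=d), (500 - 10 * (30 - d)) * 2**30)
               for d in range(30, 0, -1)]
    print("Estimated days until datastore is full:", days_until_full(history))
```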
Disk I/O has always been a hot topic with server virtualisation. Large-scale virtualisation deployments utilise SAN storage to host virtual machines, which calls for large amounts of diagnostic information to be surfaced in one place, so that multiple tools are not being used and providing conflicting information. Other problem areas include virtual machine snapshot consumption and the disk usage of those snapshots; with the newer management interaction, features such as snapshots, which when deployed en masse can wreak havoc on the storage management side, will hopefully become usable more effectively without constant concern about when and how you use them.
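As a rough illustration of the snapshot visibility problem, the sketch below walks a datastore mount and totals up snapshot delta disks per VM folder. The mount path and the "-delta.vmdk" naming assumption reflect a typical VMFS layout rather than any official interface, so treat it as a concept only.

```python
# Rough sketch of a snapshot-sprawl check: walk a datastore mount and total up
# snapshot delta disks per VM folder. Path and file-naming convention are
# assumptions about a typical layout, not an official interface.
import os
from collections import defaultdict

def snapshot_usage(datastore_mount):
    """Return {vm_folder: total_delta_bytes} for snapshot delta disks found."""
    usage = defaultdict(int)
    for root, _dirs, files in os.walk(datastore_mount):
        for name in files:
            if name.endswith("-delta.vmdk"):
                vm = os.path.basename(root)
                usage[vm] += os.path.getsize(os.path.join(root, name))
    return usage

if __name__ == "__main__":
    totals = snapshot_usage("/vmfs/volumes/datastore1")  # assumed mount path
    for vm, size in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(f"{vm}: {size / 2**30:.1f} GiB in snapshot deltas")
```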
Linked Clones
This is something more useful in virtual desktop environments and VMware View. The functionality has been in VMware Workstation for some time; the principle is that you have a master "gold" VM and, when you require multiple copies of that base VM, you deploy a linked clone, which uses the base VM's disk and writes differential changes into a snapshot file.
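The copy-on-write idea behind linked clones can be shown with a toy example. This is purely conceptual and nothing like VMware's actual redo-log format: reads fall through to the shared base image unless the clone has written its own copy of a block.

```python
# Toy illustration of the linked-clone principle: many clones share one base
# image, and each clone keeps only its own changed blocks in a private delta.
class BaseImage:
    def __init__(self, blocks):
        self.blocks = blocks  # {block_number: bytes}, shared and read-only

class LinkedClone:
    def __init__(self, base):
        self.base = base
        self.delta = {}  # copy-on-write blocks private to this clone

    def read(self, block_number):
        # Delta takes precedence; otherwise the shared base supplies the data.
        return self.delta.get(block_number, self.base.blocks.get(block_number))

    def write(self, block_number, data):
        # Writes never touch the base image, only this clone's delta.
        self.delta[block_number] = data

gold = BaseImage({0: b"bootloader", 1: b"os files"})
desktop1 = LinkedClone(gold)
desktop2 = LinkedClone(gold)
desktop1.write(1, b"patched os files")
print(desktop1.read(1))  # b'patched os files' (from desktop1's delta)
print(desktop2.read(1))  # b'os files' (still served from the shared base)
```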
This will initially be very good for VDI deployments, and possibly for server landscapes such as a web-facing environment that needs to scale out exceptionally quickly to meet demand from concurrent connections. Brian Madden wrote a good article on the limitations linked clones still have: http://www.brianmadden.com/blogs/brianmadden/archive/2009/02/18/brian-dump-atlantis-computing-hopes-to-solve-the-quot-file-based-quot-versus-quot-block-based-quot-vdi-disk-image-challenge.aspx. It makes a good point on why linked clones are not yet the finished article for simplified deployment.
Thin Provisioning
This feature, thin provisioning of VMDK files, is actually available and turned on by default today when hosting VMs on NFS storage; in vSphere the functionality will be built in for use with VMFS volumes as well.
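Conceptually, a thin-provisioned VMDK promises the guest a full-size disk while only consuming datastore space as blocks are first written. The toy sketch below illustrates that behaviour; the block size and class names are assumptions for the example, not VMware's on-disk format.

```python
# Toy sketch of thin provisioning: the guest sees the full provisioned size,
# but datastore space is only allocated on first write to each block.
BLOCK_SIZE = 1 << 20  # 1 MiB blocks for the example

class ThinDisk:
    def __init__(self, provisioned_bytes):
        self.provisioned_bytes = provisioned_bytes  # size the guest OS sees
        self.allocated = {}  # {block_number: bytes} backed only when written

    def write(self, offset, data):
        block = offset // BLOCK_SIZE
        self.allocated[block] = data  # first write to a block allocates it

    def consumed_bytes(self):
        return len(self.allocated) * BLOCK_SIZE  # space used on the datastore

disk = ThinDisk(provisioned_bytes=40 * 2**30)  # guest sees a 40 GiB disk
disk.write(0, b"mbr")
disk.write(5 * BLOCK_SIZE, b"some file data")
print("Provisioned:", disk.provisioned_bytes // 2**30, "GiB")
print("Consumed:   ", disk.consumed_bytes() // 2**20, "MiB")
```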
With vStorage API interaction and the improved holistic management in vCenter, this will be rather easy to deploy and should, in theory, keep you clear of storage becoming full. However, it will still mean more pre-emptive management of your virtual machine storage volumes if the design is not right and the powerful functionality is not kept under control.
The common issues that arise with thin provisioning in the physical SAN array world will most likely still exist when using it at the vSphere layer. The saving from originally virtualising, say, a physical server with a 72GB disk down to a 15-20GB virtual disk was a cost saving in itself: I didn't need to worry about consumed space, I could store many more virtual machines, and it changed the way server disk partitioning was typically done.
Thin provisioning may be something that adds slightly too much complexity to the management of a virtual environment, as it adds more possibility of VMs becoming unavailable. I'm sure VMware will have mechanisms to ensure services do not fail, so time will tell how this impacts storage costs and efficiency in vSphere. I would probably prefer to have pre-emptive warnings from storage monitors and then grow and plan storage demand manually.
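The sort of pre-emptive warning I have in mind is nothing clever; something along the lines of the sketch below, where the thresholds and figures are made up and would really come from whatever your monitoring reports.

```python
# Sketch of a pre-emptive datastore warning: flag real consumption above a usage
# threshold and flag heavy thin-provisioning over-commitment for manual action.
# Datastore figures and thresholds are illustrative assumptions.
def check_datastore(name, capacity_gb, consumed_gb, provisioned_gb,
                    usage_warn=0.80, overcommit_warn=1.5):
    warnings = []
    if consumed_gb / capacity_gb >= usage_warn:
        warnings.append(f"{name}: {consumed_gb}/{capacity_gb} GB used "
                        f"({consumed_gb / capacity_gb:.0%}), plan growth now")
    if provisioned_gb / capacity_gb >= overcommit_warn:
        warnings.append(f"{name}: {provisioned_gb} GB promised to thin disks "
                        f"against {capacity_gb} GB real capacity")
    return warnings

for w in check_datastore("datastore1", capacity_gb=500,
                         consumed_gb=420, provisioned_gb=900):
    print("WARNING", w)
```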
Interestingly, VMware currently state that thin provisioning is better done at the array level, which is a bit confusing to be honest, as a VMDK is a physical file sitting on a VMFS or NFS volume. I can see the argument for VMFS volumes, though, as array-level thin provisioning lets the underlying storage grow as your VMs grow. Once again, this new offering will need extensive and detailed architectural planning.
Disk Volume Management
As I said in my view on thin provisioning, I'd probably prefer to grow, and even shrink, virtual disk storage and VMFS storage manually once a pre-emptive warning has been raised. New functionality will be made available in vSphere to grow VM disk files and, more beneficially, the VMFS volume itself.
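To show what I mean by manual, warning-driven growth rather than automatic expansion, here is a small sketch; the threshold, increment and LUN ceiling are illustrative assumptions only, and the proposed change would still be applied by hand after review.

```python
# Sketch of warning-driven volume growth: only propose a grow when free space
# drops below a floor, grow in fixed increments, and never beyond what the
# backing LUN can offer. All figures are illustrative assumptions.
def propose_growth(volume_gb, free_gb, lun_max_gb,
                   free_floor_gb=50, increment_gb=100):
    if free_gb >= free_floor_gb:
        return None  # nothing to do, no warning raised
    new_size = min(volume_gb + increment_gb, lun_max_gb)
    if new_size == volume_gb:
        return None  # already at the LUN ceiling, growth must happen elsewhere
    return new_size

proposal = propose_growth(volume_gb=500, free_gb=30, lun_max_gb=750)
if proposal:
    print(f"Warning: low free space. Proposed VMFS grow to {proposal} GB "
          "-- apply manually after review.")
```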
Disk expansion is another confusing one for me. VMware have preached that when scaling and designing a VMware storage architecture you should plan LUN sizes appropriately; call me a sceptic, but growing a VMFS and ending up with inconsistent LUN sizes across your infrastructure may not be the most ideal scenario in a large-scale environment. VMFS growth also impacts the backend physical SAN hosting the VMFS: on a well-designed array the LUNs are optimally set at a RAID level and disk spindle count to provide the best performance to running VMs. Changing the sizing of LUNs for VMFS therefore introduces extra layers of complexity and possible operational issues.
vStorage API
Integral to enabling most of the new features is the vStorage API. This will let storage partners use APIs within the VMware components, providing enhanced storage operations and storage management capability between vSphere and the physical SAN array hosting your virtual datastores.
VMware's current native multipathing (MPIO) driver in ESX 3.5 is only an active/passive offering: failover of the active path is cumbersome, and deploying virtualisation in large-scale environments requires a lot of planning and configuration to ensure that problems such as path thrashing do not occur. The flip side is that this has enabled VMware to maintain a thoroughly tested HCL, which means better all-round support and benchmarking for end users.
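For anyone unfamiliar with the active/passive model, the toy selector below shows the behaviour: one path carries all the I/O and a standby is only promoted when the active one fails. It is a conceptual sketch of my own, not the ESX multipathing interface, and the path names are made up.

```python
# Toy sketch of active/passive path selection: all I/O goes down one active
# path; a standby path is promoted only when the active one fails.
class ActivePassiveSelector:
    def __init__(self, paths):
        self.paths = list(paths)          # e.g. ["vmhba1:0:1", "vmhba2:0:1"]
        self.active = self.paths[0]       # single active path carries all I/O
        self.failed = set()

    def path_for_io(self):
        return self.active

    def report_failure(self, path):
        self.failed.add(path)
        if path == self.active:
            standby = [p for p in self.paths if p not in self.failed]
            if not standby:
                raise RuntimeError("all paths down")
            self.active = standby[0]      # failover: promote the first standby

selector = ActivePassiveSelector(["vmhba1:0:1", "vmhba2:0:1"])
print(selector.path_for_io())             # vmhba1:0:1
selector.report_failure("vmhba1:0:1")
print(selector.path_for_io())             # vmhba2:0:1 after failover
```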
The new basic vStorage offerings will apparently provide increased performance, better failover capability and improved I/O throughput, all of which will allow you to start virtualising much more demanding workloads such as SQL Server or Oracle. The first release of a storage-vendor-designed multipathing driver will come from EMC, effectively bringing EMC PowerPath into the hypervisor stack. This will offload more work to the storage array and away from the hypervisor, freeing up resources for running VMs. Opening up the APIs to storage vendors is a good step for VMware; the current multipathing architecture built by VMware engineering has its limitations, and this will hopefully propel vSphere to another level.
Overall, the vStorage initiative is starting to look rather compelling for enabling people to virtualise any workload with any I/O requirement, and integration with storage vendors had to happen at some point for VMware to keep growing. Those opportunities are now formalised within the vStorage layer of the VDC-OS.