Saturday, 24 January 2009
SAN Migrations and Virtualisation
Chris Evans has posted a great piece discussing the true cost of migrating from one SAN array to a new one: http://storagearchitect.blogspot.com/2009/01/enterprise-computing-migrating-petabyte.html. I'm going to write a bit about this topic and discuss how this type of cost and migration is eased in virtualised worlds.
It's a fact of life that the lovely array you first fell in love with when it was installed all those years ago, along with its spinning disks, eventually stops meeting the NFRs of your application demands. At some point it will need to be upgraded, together with all the important data, to a device that provides the required functionality and features and is still in support with the vendor. I expect service providers experience this quite regularly too, with activity such as changes in group-wide storage deals and the move of customers from one SP to another; another scenario is companies that lease storage reaching the end of that lease and having to refresh. In all these scenarios a portability and migration strategy is a must have; without it your exit plans will, as Chris highlights, incur exceptionally large costs in planning, risk mitigation and everything else that goes with them.
One approach mentioned is the use of storage virtualisation arrays and appliances to pool storage behind hosts; volumes can then be migrated between arrays with next to no downtime. As you may have seen from my various posts, I'm a VMware bloke, so when you say migration of a server and its connected volumes to new storage, a lightbulb appears with the words "Storage VMotion" inside it: http://www.vmware.com/products/vi/storage_vmotion.html. When you look at the Storage VMotion feature in VMware, it offers similar technology to what a USP-V or SVC does at the array level, except it operates within the ESX host and is built right into the management components themselves.
An example: if I need to migrate a VM that resides on a VMFS volume on SAN 1, I can initiate a migration and the VMDK file will quite happily move to any presented LUN on SAN 2. I can then VMotion the VM to a swing ESX host presented on the same fabric as the original host, all with no downtime. Another option, if a shared fabric isn't available as part of the upgrade, is to migrate across a connected iSCSI LUN onto SAN 2 first, then migrate from iSCSI to FC once on SAN 2; iSCSI/NAS would also be a possibility across network links to remote sites. The figure below shows the simplified steps in the whole migration process.
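To make the ordering of those steps concrete, here is a minimal toy model of the swing-host migration in Python. All the names (datastores, hosts, the VM) are hypothetical, and this deliberately models only the sequence of operations, not any real VMware API:

```python
# Toy model of the swing-host migration: first the VMDK moves between
# arrays (Storage VMotion), then the running VM moves between hosts
# (VMotion). Names are made up for illustration.

class VM:
    def __init__(self, name, datastore, host):
        self.name = name
        self.datastore = datastore  # where the VMDK currently lives
        self.host = host            # which ESX host is running the VM

    def storage_vmotion(self, new_datastore):
        """Relocate the VMDK to another datastore while the VM runs."""
        self.datastore = new_datastore

    def vmotion(self, new_host):
        """Move the running VM to another host on the same fabric."""
        self.host = new_host

vm = VM("app01", datastore="SAN1-LUN0", host="esx-original")
vm.storage_vmotion("SAN2-LUN0")  # step 1: VMDK from SAN 1 to SAN 2
vm.vmotion("esx-swing")          # step 2: VM to the swing ESX host
print(vm.datastore, vm.host)
```

The point of the ordering is that neither step requires downtime: the disk moves under a running VM, and the VM moves between hosts that can both see the new LUN.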
This natural decoupling that virtualisation creates between the ESX host and the storage layer can, I expect, still be achieved with in-band storage virtualisation, but Storage VMotion certainly helps reduce overall cost and complexity compared to deploying a virtualised storage array.
Agreed, not every SAN-attached server will be VMware ESX and not every server will be virtualised, but in the UNIX world server virtualisation is also becoming the norm given the size the monoliths are now reaching, and I expect similar capabilities will start to appear in UNIX virtualisation technologies such as IBM's LPARs and LDOM technology from Sun.