Sunday, 10 January 2010

Abstraction, love it or hate it?

So I am an infrastructure guy, surrounded by the massive volume of technology in the industry which operates above the server, storage and networking environments to enable certain goals. Within that technology, some products have a level of abstraction (or virtualisation) to provide a level of "ease", making portability and migration easier between the lower-level and upper-level components, e.g. VMware server virtualisation or SAN virtualisation arrays. We also have lower-level components that we just don't notice, like proprietary volume managers and file systems.

Unfortunately though, despite being full of such glorious technology, the industry is still plagued by the lack of any kind of easy migration and flexible movement capability. By this I mean examples I hear and see about day to day in the industry, such as replicating between storage arrays from different vendors, or moving virtual machines between different hypervisors.
Fortunately, with the above common examples you do have some technical options. For the SAN replication problem, for example, you can use a SAN virtualisation appliance like a NetApp V-Series or an IBM SVC. However, you need one of these appliances at both the source and the target, which for starters is expensive; you also have support issues from the underlying storage array vendor, plus various other potential issues that may arise, all to achieve what is merely copying data from A to B (maybe not quite that simplistic, but I'm a simple guy, remember).

To address cross-hypervisor migration, the server virtualisation industry has the Open Virtualisation Format (OVF). As server virtualisation is more my bag, I will use this example for the rest of this post. With OVF the industry has got together to build a standard for portability of virtual machines. Bear in mind, before you run off shouting EUREKA, that this isn't live migration: it only works for cold migrations, so it still requires downtime, which is a pain in the rear. It does mean, however, that you can in theory move from one virtualisation vendor to another by using export/import capability with OVF.
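To give a flavour of what that standard actually looks like, below is a minimal sketch of an OVF descriptor, the XML file that travels with the exported disk images. The file names and VM name (disk1.vmdk, my-vm) are illustrative, and real descriptors carry far more hardware detail than shown here:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <!-- References: the files shipped alongside this descriptor -->
  <References>
    <File ovf:id="file1" ovf:href="disk1.vmdk"/>
  </References>
  <!-- DiskSection: logical disks backed by the referenced files -->
  <DiskSection>
    <Info>Virtual disk information</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1"
          ovf:capacity="20" ovf:capacityAllocationUnits="byte * 2^30"/>
  </DiskSection>
  <!-- VirtualSystem: the VM itself and its hardware requirements -->
  <VirtualSystem ovf:id="my-vm">
    <Info>A single virtual machine</Info>
    <VirtualHardwareSection>
      <Info>Virtual hardware requirements</Info>
      <!-- CPU, memory, disk and NIC items would be listed here -->
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```

The point of the standard is that any compliant hypervisor should be able to read this envelope and reconstruct the machine, which is what makes cold export/import between vendors possible at all.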

Unfortunately, even with functionality that lets us address the common interop problems in the datacentre, such as replication between different array vendors and migration between hypervisor vendors, we humans are never satisfied. The most common groan I hear is that these solutions never actually provide 100% of the confidence and functionality that the original abstraction layer did; for example, with OVF you can't migrate "a la" vMotion style between hypervisors. They also add large volumes of overhead and support: you need someone to operationally support VMware, someone to operationally support the SAN virtualisation arrays, and so on. Additionally, they can end up actually costing more if you do not do a complete TCO analysis of the solution being acquired to address the problem in the first place.

This leads me to the conclusion, even with my limited knowledge of such topics in IT, that unless large volumes of innovation and collaboration occur, we are never likely to see technology or initiatives where we, the customer or end user, get portability without this overhead penalty. Being the cynic I am, I am seriously starting to think that solutions which address these problems are merely another level of abstraction, pushing the problem higher up the stack for something else to be affected or to deal with; and the increase in layers means, yes you got it, more support and TCO costs. So based on my theory, let's look below at how much abstraction is required to enable OVF functionality, and how this compares to the abstraction of the application environment of yesterday:

As you can see, the cost of gaining OVF is that this functionality adds 2-3 more layers of abstraction in order to work and be fully exploited. Let's not forget the increase in software costs and renewals needed for the vendors to develop and support such features, and let's also not forget that we still have underlying components which are required to reach the goal.

Quilt, it's a nasty thing

To be fair to the industry, and VMware in particular, this portability means tasks that were painful in the legacy world are now a breeze. Before, we faced the lengthy, costly exercise of moving the relevant workload onto another x86 server with re-installation of the OS and application components. We had to plan it much harder, and in actual fact it very rarely happened when the physical tin was EOL; the kit just sat in the datacentre rotting. We also didn't have OS refresh capability.

My concern

So what am I getting at here? One minute I am whinging about abstraction, the next I am praising it; I guess abstraction is a bit like Marmite, you either love it or hate it. But to summarise, my concern is this: the more I hear about new technology that helps me solve another problem, the more I see another level of abstraction being introduced into the stack. To me that means more software purchase costs, more ongoing OPEX costs for that software, more layers of operational complexity, more concerns and arguments with my lovely ISVs over support statements, and all round more TCO on products that already struggle to have a good TCO.

Moving forward, and being so young and naive, would it be unreasonable to hope that the industry vendors look to reduce this overall use of abstraction, and combat a problem in a more practical way that avoids a multitude of abstraction layers? Or alternatively, is there technology that addresses the above problems that I am not aware of but fellow bloggers are? (On the latter, this is not an invitation to tender, so forget spamming me if you are a vendor!)

Additionally, I am not looking to implement such technology (yet), but I do see this as a snowball growing in size as businesses demand a more agile and flexible datacentre environment. So I am interested to know whether you think I am talking absolute rubbish or whether you agree...

Not sure where to start :-)

I think what you are bashing is the concept of "abstraction". What you are really after is the concept of "standards".

If product A is compatible (i.e. standards-based) with product B, you (the end user) win. The reason is obvious.

If A and B are not compatible and you need a product C (abstraction) to hide the differences between the two... you "lose". C means additional costs and additional skills; it means you have to use A's and B's tools anyway (no C can provide the depth of the native tools for specific tasks); and, more importantly, you are just being "locked in" by C (rather than by A or B).
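The lock-in point can be sketched in a few lines of entirely hypothetical Python; none of the class or method names below correspond to a real vendor API. Product C papers over the A/B difference, but every script you write is now coupled to C's interface, and C must special-case each backend it supports:

```python
# Hypothetical sketch of "abstraction vs standards": two incompatible
# products (A, B) and an abstraction layer (C) that hides the difference.

class ArrayA:
    """Vendor A's storage array, with its own native replication call."""
    def replicate_a(self, lun: str) -> str:
        return f"A replicated {lun}"

class ArrayB:
    """Vendor B's array: same job, different incompatible interface."""
    def mirror_b(self, volume: str) -> str:
        return f"B mirrored {volume}"

class VirtualizerC:
    """Product C: one interface over both vendors. Convenient, but your
    tooling now depends on C, and C must know about every backend."""
    def __init__(self, backend):
        self.backend = backend

    def replicate(self, name: str) -> str:
        # C special-cases each backend it supports.
        if isinstance(self.backend, ArrayA):
            return self.backend.replicate_a(name)
        if isinstance(self.backend, ArrayB):
            return self.backend.mirror_b(name)
        raise TypeError("backend not supported by C")

# Operational scripts are written against C, not against A or B:
print(VirtualizerC(ArrayA()).replicate("lun0"))   # A replicated lun0
print(VirtualizerC(ArrayB()).replicate("vol7"))   # B mirrored vol7
```

If A and B simply spoke a common standard, the `VirtualizerC` layer, its cost, and its lock-in would not need to exist at all.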

Your post is complex and would require me to get into more details / options / niches... but I am lazy this afternoon and I'll leave it for another day... ;-)

