Saturday 31 January 2009

VMware - retro edition

Strange but true: look back to the period before I was even a twinkle in my old man's eye, when he himself was a wee nipper, and virtualisation today, when revealed and examined at its software engineering roots, is very similar, if not identical, to the mainframe.

A few tweets on Twitter have reminded me that what I'm doing and preaching about on mediums like this blog is nothing new and, at its roots, nothing fancy. It's nothing revolutionary like, say, the IBM System z era of virtualisation was; it's nothing new in the way that going from punch cards to magnetic tape was. So why is it so big and mainstream today? Looked at negatively, it has just got smaller and easier to do and procure!

I guess the fact is we're human, and however clever our brains are at calculating and designing things, they have their limits and continuously look to the past to see what worked and how it worked, to give us some assurance that it will work again. Boy, did we go through a boom in the era of valve computing and into binary computing, and again going from rolls of magnetic tape to the spinning magnetic disks of today. I guess VMware is doing the same thing now, adding technology such as memory overcommit, shadow page tables and CPU resource scheduling to benefit the running workloads.

Where x86 virtualisation and VMware are today really is a reiteration of what was done on the large mainframes of yesterday, and what is still, scarily, being done today to run payroll systems, airline reservation systems and systems we rely on for critical parts of our day-to-day lives, like the emergency services. Another comparison is IBM GDPS http://www-03.ibm.com/systems/z/advantages/gdps/index.html, which already does across WAN sites what VMware Fault Tolerance will do in ESX 4....

The next step and leap in computing will be tough for x86 virtualisation organisations; they will need to accelerate past the basic fundamentals that System z has laid down as the baseline. As far as I can see they are doing this in some respects. Don't get me wrong, I know sweet FA about mainframes, but within ESX you certainly get portability and flexibility that probably isn't available on big iron, and it's also a lot smaller and more flexible to deploy, with less operational cost than a big monolith of a mainframe.

It will be very interesting to see where mainframe computing is placed within computing in, say, 10-20 years' time. I expect some form of even smaller micro-computing will arise that will shift the model just far enough to make today's x86 computing look exactly the way the mainframe does now by comparison. Let's sincerely hope not; let's hope the boffins come up with something more original and different that revolutionises things the way those early leaps in computing did!

Anyway, rambling over; hopefully we'll see yet more fun, innovative stuff arise in the virtualisation space in the years to come :)


Saturday 24 January 2009

SAN Migrations and Virtualisation

Chris Evans has written a great post discussing the true cost of migrating from one SAN array to a new one: http://storagearchitect.blogspot.com/2009/01/enterprise-computing-migrating-petabyte.html. I am going to write a bit about this topic and discuss how this type of cost and migration is eased in a virtualised world.

It's a fact of life that the lovely array you first fell in love with when it was installed all those years ago, with its associated spinning disks, will eventually no longer meet the NFRs of your application demands and will need to be upgraded to a device that provides the required functionality and features and is still in support with the vendor, along with the all-important data, all at some point in time. I expect it is also something service providers experience quite regularly, with activity such as changes in group-wide storage deals and the move of customers from one SP to another; another scenario is where companies lease storage and have to refresh when that lease ends. For all of these scenarios a portability and migration strategy is a must-have; without it, your exit plans will, as Chris highlights, incur large and sometimes exceptionally large costs in planning, risk mitigation and everything else that goes with it.
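To put some rough numbers on it (my own illustrative figures, not taken from Chris's post), even before the planning and risk work, simply shifting the bits takes a long time at any realistic sustained rate:

# Back-of-envelope sketch: how long does just copying the data take during an
# array-to-array migration? All figures below are assumptions for illustration.

def migration_days(capacity_tb: float, sustained_mb_per_s: float, efficiency: float = 0.7) -> float:
    """Days needed to copy capacity_tb, allowing some headroom for overheads."""
    total_mb = capacity_tb * 1024 * 1024            # TB -> MB
    effective_rate = sustained_mb_per_s * efficiency
    return (total_mb / effective_rate) / 86400      # seconds in a day

# e.g. a petabyte over a copy process that sustains ~400 MB/s end to end
print(round(migration_days(1024, 400), 1), "days")  # roughly 44 days of pure copy time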

Something Chris mentions is the use of storage virtualisation arrays and appliances to pool storage and migrate it between arrays with next to no downtime. As you may have seen from my various posts I'm a VMware bloke, so when you say migration of the server and its connected volumes to new storage, a lightbulb appears with the words "Storage VMotion" inside it. http://www.vmware.com/products/vi/storage_vmotion.html
Storage VMotion in VMware offers similar technology to what a USP-V or SVC does at the array level, except that this sits within the ESX host and is built into the service console management component.

An example: if I need to migrate a VM that resides on a VMFS volume on SAN 1, then when I initiate a migration the VMDK files will quite happily move to any presented LUN on SAN 2, and I can then migrate the VM to a swing ESX host presented on the same fabric as the original host, all with no downtime. Another option would be to migrate across to an iSCSI LUN on SAN 2 if a shared fabric wasn't available as part of the upgrade; once off the original LUN I could then migrate from iSCSI to FC on SAN 2. iSCSI/NAS would also be a possibility across network links to remote sites. The figure below shows the simplified steps in the whole migration process.
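To put the same sequence into a toy sketch (my own simplification, not a real VMware API or tool), the useful property is that at no point is the VM without a running host or an accessible datastore:

# Toy model of the swing-host migration described above; not VMware code, just a
# way to show that disks and host move in separate, non-disruptive steps.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    host: str        # ESX host the VM is running on
    datastore: str   # datastore holding its VMDK files

def storage_vmotion(vm: VM, target_datastore: str) -> None:
    """Relocate the VM's disks while it keeps running on the same host."""
    print(f"{vm.name}: disks {vm.datastore} -> {target_datastore} (stays on {vm.host})")
    vm.datastore = target_datastore

def vmotion(vm: VM, target_host: str) -> None:
    """Move the running VM to another host that can see the same datastore."""
    print(f"{vm.name}: host {vm.host} -> {target_host} (disks stay on {vm.datastore})")
    vm.host = target_host

vm = VM("payroll-web01", host="esx-old-01", datastore="san1-vmfs01")
storage_vmotion(vm, "san2-vmfs01")   # step 1: move the VMDKs to the new array
vmotion(vm, "esx-swing-01")          # step 2: move the VM to the swing host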


This natural decoupling that virtualisation creates between the ESX host and the storage layer can, I expect, still be achieved with in-band storage virtualisation, but Storage VMotion certainly helps reduce overall cost and complexity compared to deploying a virtualised storage array.

Agreed, not every SAN-attached server will be VMware ESX and not every server will be virtualised, but in the UNIX world server virtualisation is also becoming the norm given the size those monoliths are now getting to, and I expect this kind of capability will start to become available in UNIX virtualisation technology such as IBM LPARs and LDOM technology from Sun.

Saturday 17 January 2009

NFS 25 Years old and still going strong

NAS with NFS is a perfect storage medium to run your VMware virtual machines on. It is fully supported by VMware on their HCL and, when benchmarked against FC (http://www.vmware.com/files/pdf/storage_protocol_perf.pdf), comes out roughly 20-25% slower on response time. When you look at the large proportion of consolidated workload being run in a typical VMware environment, i.e. domain controllers, file servers, IIS and so on, that 20-25% figure is nothing to be frightened of if you are willing to push the boundary and move away from the comfort zone that has existed in Fibre Channel for lower-end production workloads and all of your test and dev environments.
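To make that concrete with my own illustrative numbers (the whitepaper reports relative differences, not the absolute figures I've assumed below):

# Illustrative only: turning a relative response-time penalty into absolute terms.
# The 4 ms FC baseline is an assumption, not a number from the VMware whitepaper.
fc_response_ms = 4.0
nfs_penalty = 0.25                                # upper end of the ~20-25% figure
nfs_response_ms = fc_response_ms * (1 + nfs_penalty)

print(f"FC : {fc_response_ms:.1f} ms")
print(f"NFS: {nfs_response_ms:.1f} ms (+{nfs_penalty:.0%})")
# For a domain controller or file server doing modest IOPS, an extra millisecond
# of latency is rarely something the users will ever notice.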

The benefits of using NFS show up when you look at the overall cost of your storage configuration, from the ESX host connectivity requirements through to the actual spinning disk that runs the VMs; in the current climate it starts to look like an exceptionally good infrastructure to use to cut costs on the physical estate. Also emerging, as I have talked about before, are SAN appliances or disk heads which basically use commodity tin to pool connected storage, i.e. Sun Fishworks, EMC Celerra and LeftHand Networks (now HP). These appliances are also making their way into running within a virtual machine environment and consolidating even further.
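As a rough illustration of where the connectivity savings come from (all prices below are placeholders of mine, not vendor quotes):

# Per-host connectivity cost sketch. Every price here is a made-up placeholder to
# show the shape of the comparison, not a real list price.

def fc_cost_per_host(hba_pair=1500, fabric_ports=2, port_cost=800, licence=500):
    """Dual HBAs plus the fabric ports and switch licensing they consume."""
    return hba_pair + fabric_ports * port_cost + licence

def nfs_cost_per_host(extra_gbe_nics=2, nic_cost=100):
    """NFS rides the Ethernet you already have; maybe add a couple of dedicated NICs."""
    return extra_gbe_nics * nic_cost

hosts = 8
print("FC :", fc_cost_per_host() * hosts)   # 28800 with these placeholder prices
print("NFS:", nfs_cost_per_host() * hosts)  # 1600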

The sensible winners this year will be those who don't knee-jerk when designing solutions such as virtualised environments. When you look at the overall architecture of ESX and vCenter, their flexibility and functionality give you the option to move the bar higher once we are out of this mess that the financial geniuses of this world have made for us.

NAS may start to gain respect from architects and techies in the virtualisation space in 2009 because they have no other option, and it will gain respect with the purse-string holders. BUT anyone responsible for proposing solutions needs to make them aware of the caveats and the differences from the technology we could get away with, budget-wise, 1-2 years ago. They also need to raise that, moving towards the end of the year and into 2010, using the great portability options in VMware such as Storage VMotion to good effect by moving back to storage that is faster and cheaper IS possible; it's not just a one-way street.

2009 is going to be a tough year financially for organisations, and I face the same constraints when planning virtualisation needs for my organisation in 2009 as most organisations do for infrastructure in general. I'll be honest, I wouldn't even have bothered to look at NAS a year ago; why would I, when I could quite easily obtain budget to deploy machines on FC and SATA? Something that gives me great comfort, though, is that VMware ESX, properly scaled and designed, is quite capable of providing great performance, and I have great confidence that the VMware labs have made sure this is the case by working with the vendors and doing the hard work for lazy architects like me :) In theory I do the easy bit and make sure the business need still suits the underlying foundations built for me.


Sunday 11 January 2009

EMC Celerra simulator

I wrote last week about the Sun Fishworks VSA, which is openly available to download, play about with, and runs the same code as their unified storage. This first write-up is about my other ongoing testing, with the EMC Celerra Virtual Appliance.

This isn't as easy to download as the Sun one, as you need a Powerlink username; however Chad Sakac over at Virtual Geek has made it available to download at http://virtualgeek.typepad.com/virtual_geek/2008/06/get-yer-celerra.html, and he also has an excellent series on how to deploy and test it in a VMware development environment.

The EMC Celerra family offers either the standard integrated SAN, where the disk storage and NS array controllers are within the same physical unit, or the NS(G) devices implemented as gateway heads, basically used in-band with back-end Symmetrix or CLARiiON storage. I'm not quite sure which one is the most popular on the market; it would be interesting to know what people reading this actually use.

The EMC Celerra simulator is a download for storage bods; it comes with a full licence, with the option to go as far as performing replication. Chad has also wrapped it into OVF format on Virtual Geek, which means you can just download and import it. I downloaded the appliance from Powerlink myself and imported it with VMware Converter.

Functionality-wise I have currently only created some basic LUNs and connected across to the NFS mount, so the next step will be to try some replication to another filer and see how good and simple it is to do.

Compared to the Sun unified appliance I found the EMC slightly less easy to get up and running; however, I would say that reflects how much it has to offer in terms of functionality and flexibility, and the Celerra does have very intuitive GUIs for the likes of me, so it is still very easy to implement.

I'll be testing the two, and possibly even looking at brushing off the NetApp filer VSA at some point, so keep your eyes peeled if you're interested in seeing how things go.


Friday 9 January 2009

The first cloud in 2009

I like this post by Scott Lowe, who is a first-class virtualisation evangelist: http://blog.scottlowe.org/2009/01/09/a-quick-thought-regarding-cloud-computing/

This is exactly one of the valid points that the media and vendors fail to tell CIOs and CTOs about the fluffy, fluffy cloud when they are touting what the cloud can offer and how it will revolutionise the way we do IT today.

OK, it's starting, with initiatives such as http://www.opencloudconsortium.org/index.html, but currently no defined standards exist for how cloud vendors can talk and interact with each other, or for how individual cloud (or, more to the point, hosting) companies define standards within their own supporting stacks, so we are all left guessing. There is also no defined standard to ensure that you, as the customer, can migrate your stack between hosting... sorry, cloud companies when your agreement is up and you've had enough.


All of the infrastructure components that Scott mentions follow standards to communicate and interoperate, and they are all established and mostly commoditised within the datacentre. The vendors all strive to better their rivals' technology, but the baseline standards and building blocks defined within each stack are pretty much stable, which is why people rely on them to run their businesses. Most importantly, they make sure they actually talk to the other components, because if they didn't, I think it's easy to work out what would happen to both the vendor and to us!

If the new emerging cloud industry doesn't start to build the kind of standards around its infrastructure that exist today on standard infrastructure, then I'm afraid it's going absolutely nowhere for a long time until it does. Then again, maybe once cloud does gain these standards and interoperability it will become more clunky and inflexible, and flexibility is one of, if not the, main selling points of the technology.


Thursday 8 January 2009

VCP Integrity

Hi,

As a fellow VCP I felt an opinion was needed on point 6 of this blog post: http://itknowledgeexchange.techtarget.com/virtualization-pro. Hey, my blog views are down this month, so I might get away with slagging it off (although I won't be going that far).

I took my Install & Configure VCP course after doing what I enjoy with any new technology: learning the product in my own time and at my own pace with a VMTN subscription (god, I remember the first P2V and how amazed I was that it worked!). When I reviewed the exam track, my first thought about the exam having the course as a prerequisite was that it was a rip-off, but a split second later I felt like I was about to become part of an elite band.

The fact is that forcing people to actually attend the course at a training school before taking the exam ensures a better breed of clued-up students, which in turn ensures that technical design and architecture is shaped by what has properly been learnt. For a company like VMware this is of paramount importance; it can't rely on people learning ESX out of a book, going into an exam centre to qualify, putting it on their CV and then going off to design virtual infrastructure. VMware has a VAR programme with people qualified to do this, but guess what, they're not cheap... not the price of an exam, that's for sure, and there is only a minimal amount of that resource internally.

Granted, you'd be stupid to employ someone with no practical experience to back up a qualification, but I'm sure people get hired for other skills on their CV initially and are then assigned to the virtualisation project later.

To me the exam is a bonus, to be honest. In the UK I don't think taking the exam after attending a funded course is enforced as much as it is in countries like the US and India, although SIs and consultancies here do have to do it to keep their reseller certifications and so on.

So to summarise: two grand, in the grand scheme of things, is a small price to pay to pass and qualify into what is probably an elite group of 30-40 thousand VCPs worldwide, to get some great networking with peers while on the course, and to get quality time from a great instructor belonging to an even more elite band of VCIs, likely numbering in the single-digit thousands.

Saturday 3 January 2009

4 Minute SAN - SUN Fishworks

Well, the holidays are nearly over and I have had a chance to play with a few virtual storage appliances, one being the EMC Celerra simulator http://www.emc.com/products/family/celerra-family.htm, which is all set up; I just need to configure it and finish off with some LUN creation etc.

The next one to play with is a VSA built by the Sun Fishworks team http://wikis.sun.com/display/FishWorks/Fishworks. That's a bizarre name for a team, although I am sure the teams I've run or worked in have been named worse over the years. The actual official name is "Sun Unified Storage Systems" http://www.sun.com/storage/disk_systems/unified_storage/.

The basis of the Sun USS is that it offers NFS/CIFS and iSCSI protocol connectivity from a commodity Sun physical server, running their new open-source OS, OpenSolaris, under the hood.

Nothing particularly new in the hardware concept, but the underlying filesystem is ZFS, which brings a large number of benefits: the ZFS volume is decoupled from the underlying hardware, more disks can be added to the pool dynamically, volumes scale to sizes some would call unlimited, and snapshots are cheap and plentiful. Not sure what can be done about disk removal, though...
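To illustrate the pooling idea (a toy model of my own, not ZFS code or real zpool commands):

# A toy model of pooled storage: volumes draw from a shared pool, and growing the
# pool just means adding devices. My own illustration, not ZFS itself.

class StoragePool:
    def __init__(self):
        self.devices_gb = []      # capacity of each disk added to the pool
        self.volumes = {}         # volume name -> GB reserved

    def add_disk(self, size_gb: int) -> None:
        """Grow the pool dynamically; existing volumes benefit immediately."""
        self.devices_gb.append(size_gb)

    def free_gb(self) -> int:
        return sum(self.devices_gb) - sum(self.volumes.values())

    def create_volume(self, name: str, size_gb: int) -> None:
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted - add more disks first")
        self.volumes[name] = size_gb

pool = StoragePool()
pool.add_disk(500)
pool.create_volume("vm_datastore", 400)
pool.add_disk(1000)               # grow the pool underneath a live volume
print(pool.free_gb(), "GB free")  # 1100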

On the GUI side, the USS has great diagnostic facilities; with Solaris under the hood, DTrace is available to perform enhanced debugging of the storage device and volumes, as well as the other I/O that affects storage.

This opens up a lot of opportunity if the adoption works for Sun. A lot of people have Solaris with Veritas Volume Manager that they are just itching to get rid of (I hope my account manager doesn't read this), and making this an open standard is great for smaller shops. The USS can, I presume, run on any Solaris tin; now just think of this running on a high-end M-Series...

It's early days of playing and reading up, so excuse the vagueness; I will top this up with more detail sometime during the week, if Sun are still in business ;)

The VSA is easily downloadable from Sun at http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp; I installed it on my Mac with VMware Fusion and it runs "OK" on the standard 1GB of RAM. I will probably piss about with more RAM on my main PC when I get a chance. And, as my subject title says, it takes about 4 minutes to configure!!!!

Happy playing


