Monday, 30 March 2009

Nehalem Day - start of a new tin era?

Well, today marks the day when the core market players HP, Dell and IBM all released new-generation x86 server ranges based on Intel's Nehalem architecture (the same generation as the desktop Core i7). It's probably the first time that chips and server ranges have been built and tailored for virtualisation and its associated workloads, and not the other way round, where VMware is plonked on top of a big beefy multi-core server with loads of Level 3 cache.

Nehalem has changed the game: it is built to cater for virtualised workloads and, most importantly, to work effectively with VMware and, I expect, other hypervisor vendors in future (it kinda helps that VMware have Intel as an investor, though).

Various features within the chip will most certainly bring benefits. Compared to the older generations, examples include: improved memory access, with the memory controller moving onto the CPU die (and QuickPath Interconnect replacing the old front-side bus to the northbridge); flexible VMotion between generations of Intel chips, so we don't need to go and buy whole clusters' worth of new kit; general power reduction; and Hyper-Threading coming back from the dead.

For virtualised environments, the newer architectures bring benefits with Extended Page Tables (EPT, or NPT in the AMD world); this feature was shown on a whitebox prototype way back in a VMworld 2007 keynote. It evolves how the current shadow page table algorithms work to manage guest memory mappings within ESX. Shadow page tables have brought benefits which allowed us to obtain high consolidation ratios with lightly utilised machines in the first phase of VMware maturity in most organisations; the next level of demand, for more intensive workloads, can in most cases be catered for by NPT/EPT. Both Nehalem and AMD's Barcelona/Shanghai offer page table management built into the actual CPU, removing the overhead previously incurred at the hypervisor layer whenever new page mappings were required. Applications and environments with large numbers of processes running within the box, e.g. Citrix/Terminal Server, will benefit from this, but not all environments will benefit as greatly as these.
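
Purely as an illustration of the mechanism (hypothetical names, and single-level dicts standing in for the real multi-level page table walks), here is a Python sketch of the two-stage lookup that EPT/NPT performs in hardware:

    PAGE_SIZE = 4096

    def translate(page_table, addr):
        """Translate an address through one page table."""
        page, offset = divmod(addr, PAGE_SIZE)
        return page_table[page] * PAGE_SIZE + offset

    # Stage 1: guest page table (guest-virtual page -> guest-physical page),
    # maintained by the guest OS exactly as it would be on bare metal.
    guest_pt = {0: 7, 1: 3}

    # Stage 2: nested page table (guest-physical page -> machine page),
    # owned by the hypervisor. With EPT/NPT the CPU walks this itself, so
    # guest page table updates no longer trap into the hypervisor to keep
    # a shadow copy in sync.
    nested_pt = {3: 42, 7: 11}

    guest_virtual = 1 * PAGE_SIZE + 123
    guest_physical = translate(guest_pt, guest_virtual)
    machine_addr = translate(nested_pt, guest_physical)
    print(hex(machine_addr))  # machine address backing the guest access

With shadow page tables the hypervisor had to intercept guest updates and maintain a merged guest-virtual-to-machine table itself; the hardware walk above is what removes that overhead.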

When I was at the Intel stand in Cannes I asked the geezer if Nehalem was eight-core. He said "oh yes, it's eight core, blah blah"; being the skeptic I am I investigated, and it turned out the "eight cores" were four physical cores, each dual-threaded, better known as Hyper-Threading. I seem to remember in the early days Hyper-Threading caused more hassle than it was worth, as neither the OS nor applications could really make effective use of the logical processors. This new generation, I feel, will break the mould: instead of yesterday's era of single-core processors just being logically split and presented to a host OS, we now have a friendlier, more efficient, easier-to-manage layer of indirection called the hypervisor. Couple this with the neat new features in the vSphere technology and it will hopefully work better than previous incarnations did.
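
If you want to check for yourself whether a box is giving you real cores or Hyper-Threaded siblings, a quick sketch using the third-party psutil library (pip install psutil):

    import psutil  # third-party library, not in the standard distribution

    physical = psutil.cpu_count(logical=False)  # actual cores
    logical = psutil.cpu_count(logical=True)    # includes HT siblings

    print(f"{physical} physical cores, {logical} logical processors")
    if physical and logical and logical > physical:
        print(f"SMT/Hyper-Threading is on: {logical // physical} "
              "hardware threads per core")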

I don't believe that continually adding cores is the answer. Look at Sun's UltraSPARC T2 (http://www.sun.com/processors/UltraSPARC-T2/), which works in a similar fashion: it runs multiple threads per core to make use of memory lag, allowing more CPU cycles to be made available to the virtual machines. It will be a shame if IBM buy Sun, as I think they will just EOL this architecture.

Overall, Nehalem's benefits are application/workload dependent, so be very wary of buying it with the intention that it will solve all your problems. Like most things in today's world it's mostly hype and isn't a silver bullet; you may find that going with AMD will benefit you more, as they have offered nested page table support with RVI for a long time now and it is mature.

So... now onto arguing whether it's worth waiting a week or two for this kit, and badgering vendors/resellers to find out prices and lead times!

Sunday, 22 March 2009

UCS again....

There has been an absolutely massive amount of coverage of the new Cisco UCS announcements since Monday this week: large amounts of speculation, such as whether the blades are badged or designed by Sun; large amounts of talk within the storage blogosphere on how being FCoE-only changes the game for connectivity and may shoehorn people into it regardless; and much more.

A few thoughts and questions I've had while reading about it, each with my view and answer:

It looks to me like Cisco will concentrate solely on the x86 market; does this limit their growth and coverage among enterprise customers?

Most probably it will. They have designed blades from scratch which cater for highly dense virtualised VMware environments, something current vendors are only just achieving with today's pre-Nehalem offerings. Maybe Cisco are going along with VMware's claim that ESX/vSphere will be able to run any workload as a valid, agreed strategy. Maybe they see larger organisations moving core applications away from non-agile, big monolithic mainframes and onto commoditised, distributed grid environments to reduce risk and cost, using platforms like Linux or JeOS.

Will architects or designers recommend solutions and roadmaps which include running dual-vendor shops with Cisco blades alongside any current tech? IBM and HP both manufacture RISC and Itanium blade offerings respectively which fit in the same chassis, so it's a tough one to call, and the benefit needs to outweigh the commercial impact and risk.

Tough one. Standardisation reduces operational overhead and associated cost, and all the vendor value-add offerings start to take great effect when using fewer (not one) vendors. Yes, VMware's decoupling reduces issues with the underlying physical services, but you still have an ecosystem within the physical landscape for updating and maintaining the environment from central operational tools. UCS appears to use BMC BladeLogic to provision and manage services; will shops want to run this alongside tools such as Altiris, and have to go through rigorous training or even recruit again?

Also, commercially this could limit your buying power if you silo on Cisco for blades; you may require rackmount servers for certain requirements that UCS cannot meet, e.g. dongles or specialised PCI devices. (BTW, HP do have a PCI blade I/O module.)

Will Cisco, if they do not achieve initial sales growth targets and popularity, have no option but to release MDS/Catalyst modules that are compatible with the core networking/SAN in datacentres today?

Not having real access to any of the technology, it seems to me that Cisco are going to limit themselves if they do not provide connectivity which lets customers transition across to FCoE gradually; most companies probably procured kit before the 2007/8 credit crisis, on what may be a three-to-four-year refresh cycle policy. Cisco already make connectivity options for their new competitors, so it can't be too hard to do this if customer uptake stumbles over it.

Or does the I/O have backward compatibility? I.e. can you connect to conventional Fibre Channel connection points already? Maybe someone can put me straight, as I'm speculating about something I've never had the fortune of seeing or playing with yet.

If most of what I say is right, that's a fair bit of effort to dump your HP/IBM blades and all the expensive chassis components you've procured.

A few other quick ones:

Maybe people can pass comment on the above or email me. I might be talking utter rubbish (as per usual), but I thought that's what having your own blog was for :) Hopefully it's raised some valid points, or matches what others are currently thinking as well.

Thanks


Monday, 16 March 2009

Cisco Blades - Paradigm shift or Flop?

Well, Cisco have finally revealed the substance behind the hype surrounding their entry into the blade server industry today. The offering, codenamed "California", has been presented in a simplified manner, with a clear message that they want to consolidate and remove the problems experienced in computing today and change how organisations provision and deploy. I offer a few thoughts and views here, with some comments to look back on in the future once this stuff reaches mainstream adoption.

Unification is the main name of the game, and it could either propel the underlying connectivity and I/O within the Nexus unified switch range into the mainstream, or the whole thing, full blade package included, could stay an investment on the back burner until the financial crisis finishes. Mainstream adoption will certainly be hard with most organisations on a tightrope budget for what will last, if they are fortunate, until next year.

On the other hand, I'm sure readers of Vmlover will realise that there are potential cost savings and ROI benefits to consolidating your infrastructure with virtualisation and blades even in a financial crisis. If you can raise a business case, or if you have budget left from last year, it's a serious possibility that you can go for next-generation technology such as blades and unified core networking, use it to host your virtualisation environment, and save massive bucks in both the short and long term.

Once blades and virtualisation are deployed you have more agility, quicker response to business demand, and overall cost savings from the reduced operational overhead of deploying the solution.

Whether you feel Cisco has the whole package with their new offering, and whether it can be deployed turnkey without having to worry about your SAN/LAN provisioning, depends on your situation. Maybe the initial installs will be at medium-sized organisations who feel they can jump up a level and go for the full all-in-one package. Not many organisations are fortunate enough to be able to refresh their core networking and server environments; most are locked into previously purchased investments.

Another consideration with this new venture is who's going to sell the solution, and then deploy all this kit and the servers? Typically a solutions provider/reseller will recommend a blade spec, deploy the SAN and LAN configuration on the chassis backplane, install at the customer site and then hand over to the network bods. In this Cisco unified blade world, is the network bod going to do the network and the chassis and then hand over to the virtual/server bod? Maybe this is "the Empire strikes back" for the network/storage bods, who in most organisations have been shoved out of LAN/SAN configuration of blades by new simplified tech such as HP Virtual Connect. Or is the whole deployment model going to push the network/storage bods even further out of the provisioning process? Plug your FCoE in and off you go, server bod...

Cisco versus the rest of the world

I will be interested to see how the vendors who shipped I/O connectivity on current market blades as partners, and are now the competition, muster in the new battle. Someone the size of Cisco is not moving into blade servers to dabble; they want to take on the world like they do with networking and storage products. Competitors IBM and HP have backplane options that go up to 10Gb LAN / 8Gb SAN; will they now start to lose first-class-citizen status for what they want developed with Cisco? Will the other blade vendors struggle with FCoE adoption and partnership, and is this why any new FCoE strategy for blades has been a bit slow recently? Probably not: other people are building switches that support FCoE, so if Cisco play hardball the other blade vendors can take up an alliance with someone like Brocade.

I do like Cisco's pitch and approach, and I like the unified fabric architecture; let's see what the rest of the industry says and does. However, I wouldn't want to dive in and be lumbered with a technology like VFrame was. I expect this will be the attitude across most IT organisations with this latest entrant to the server world, so we'll see.

Saturday, 14 March 2009

vStorage - My View

To date, storage management and the architectural offerings in VMware have provided sufficient performance and functionality to enable large-scale server consolidation. To go to the next level, a world where VMware want to ensure every workload can be virtualised, more performance and management components are required, so that when you are running your complete server estate in a virtualised world it provides the same functionality and performance as you obtained with server workloads in the legacy physical world, if not more.

The VDC-OS (Virtual Datacenter OS) strategy is VMware's next-generation strategy: taking the current virtualisation offerings under the VI3 branding and moving into more specialised layers to create what they are calling the "Software Mainframe". Powering this is vSphere (formerly ESX). The vStorage service components, all currently under development, will offer a whole band of new and exciting technical functionality to fill gaps and improve upon the current offerings. I will give a brief overview of each of the main offerings and comment on how each may change, or even complicate, matters in the virtualised world.

New functionality in vStorage

Visibility

Greater visibility will be made available, enabling enhanced interaction with both the storage array and its management software via the central datacenter management component, vCenter (formerly VirtualCenter). This will provide holistic diagnostic views of any performance bottlenecks at the virtualisation layer, plus greater insight and trend analysis on expensive operating areas, such as monitoring how much storage you have left for VMs and how long before it fills up.

Disk I/O has always been a hot topic with server virtualisation. Large-scale virtualisation deployments use SAN storage to host virtual machines, which calls for large amounts of diagnostic information to be surfaced in one place, so that multiple tools are not being used and providing conflicting information. Other problem areas include virtual machine snapshot consumption and the disk usage of those snapshots; with the newer management interaction, currently great features such as snapshots, which wreak havoc on the storage management side when deployed en masse, will hopefully be usable more effectively without worrying about when and how you use them.
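
Until that visibility arrives you can approximate it yourself. A rough sketch, assuming the datastore is mounted at a path you can read and that snapshot redo logs follow the usual *-delta.vmdk naming:

    import os
    from fnmatch import fnmatch

    DATASTORE = "/vmfs/volumes/datastore1"  # hypothetical mount point

    # Sum the size of snapshot redo logs (*-delta.vmdk) per VM directory.
    usage = {}
    for root, _dirs, files in os.walk(DATASTORE):
        for name in files:
            if fnmatch(name, "*-delta.vmdk"):
                size = os.path.getsize(os.path.join(root, name))
                usage[root] = usage.get(root, 0) + size

    for vm_dir, size in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{size / 2**30:6.2f} GB of snapshot deltas in {vm_dir}")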

Linked Clones

Something more useful in virtual desktop environments and VMware View: this functionality has been within VMware Workstation for some time. The principle is that you have a master "gold" VM, and when you require multiple copies of that base VM you deploy a linked clone, which uses the base VM's disk and writes its differential changes into a snapshot file.
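
A purely conceptual sketch of that copy-on-write behaviour (hypothetical classes, not any VMware on-disk format): reads fall through to the shared base image until the clone has written a block of its own.

    class BaseDisk:
        """The shared, read-only 'gold' image."""
        def __init__(self, blocks):
            self.blocks = blocks  # block number -> data

        def read(self, n):
            return self.blocks[n]

    class LinkedClone:
        """Per-clone delta: writes land here, reads fall through."""
        def __init__(self, base):
            self.base = base
            self.delta = {}  # only blocks this clone has changed

        def read(self, n):
            return self.delta.get(n, self.base.read(n))

        def write(self, n, data):
            self.delta[n] = data  # the base image is never touched

    gold = BaseDisk({0: b"boot", 1: b"os", 2: b"apps"})
    desktop_a = LinkedClone(gold)
    desktop_b = LinkedClone(gold)

    desktop_a.write(2, b"user-a-apps")
    print(desktop_a.read(2))  # b'user-a-apps' -- from its own delta
    print(desktop_b.read(2))  # b'apps'        -- still the shared base

The win is obvious: a hundred desktops share one base image and only store their differences.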

This will initially be very good for VDI deployments, and possibly for server landscapes such as a web-facing environment which needs to scale out exceptionally quickly to meet demand from concurrent connections. Brian Madden wrote a good article on how linked clones do have some limitations (http://www.brianmadden.com/blogs/brianmadden/archive/2009/02/18/brian-dump-atlantis-computing-hopes-to-solve-the-quot-file-based-quot-versus-quot-block-based-quot-vdi-disk-image-challenge.aspx); it makes a good point on why linked clones are not yet the finished article for simplified deployment.

Thin Provisioning

This feature is actually already available, and turned on by default, when hosting VMs on NFS storage: thin provisioning of VMDK files. The functionality will be built into vSphere for use with VMFS volumes as well.
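
The underlying idea is the same as a sparse file: the guest sees its full capacity up front, but real blocks are only allocated as they are written. A small Python illustration of the analogy (the analogy only, not the VMDK format itself):

    import os

    # Create a "10 GB" file that allocates almost nothing: seek past the
    # end and write one byte, leaving a hole the filesystem never backs.
    path = "thin.img"
    size = 10 * 2**30

    with open(path, "wb") as f:
        f.seek(size - 1)
        f.write(b"\0")

    st = os.stat(path)
    print(f"apparent size: {st.st_size / 2**30:.1f} GB")
    # st_blocks is in 512-byte units (POSIX only, not on Windows)
    print(f"actually allocated: {st.st_blocks * 512 / 2**20:.2f} MB")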

With vStorage API interaction and extensive holistic management through the improved vCenter, this will be rather easy to deploy while ensuring you do not have issues with storage becoming full. However, it will still mean more pre-emptive management of your virtual machine storage volumes if the design isn't right and the powerful functionality isn't controlled.

The common issues that arise with thin provisioning in the physical SAN array world will most likely still exist when using it at the vSphere layer. The saving from virtualising, say, a 72GB-disk physical server down to a 15-20GB disk was a cost saving in itself: I didn't need to worry about consumed space, I could store many more virtual machines, and it changed the way server disk partitioning was typically done.

Thin provisioning may be something that adds slightly too much complexity to the management of a virtual environment, as it adds more possibility of VMs becoming unavailable. I'm sure VMware will have mechanisms to ensure services do not fail, so time will tell how this impacts storage costs and efficiency in vSphere. I would probably prefer to have pre-emptive warnings from storage monitors and then grow and plan storage demand manually.
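
That manual approach is easy enough to script. A minimal sketch of the sort of pre-emptive warning I mean, assuming each datastore is visible as a mounted filesystem path:

    import shutil

    DATASTORES = ["/vmfs/volumes/datastore1"]  # hypothetical mount points
    WARN_AT = 0.80  # warn at 80% used, leaving headroom to plan growth

    for path in DATASTORES:
        total, used, free = shutil.disk_usage(path)
        pct = used / total
        status = "WARN" if pct >= WARN_AT else "ok"
        print(f"[{status}] {path}: {pct:.0%} used, "
              f"{free / 2**30:.1f} GB free")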

Interestingly, VMware currently state that thin provisioning is better done at the array level. This is a bit confusing to be honest, since a VMDK is just a physical file on a VMFS or NFS volume. I can see the case for thin provisioning the VMFS volumes themselves, though, as it will enable you to grow them as your VMs grow. Once again, this new offering will need extensive and detailed architectural planning.

Disk Volume Management

As I said in my view on thin provisioning, I'd probably prefer to grow (and even shrink) virtual disk storage and VMFS storage manually, upon a pre-emptive warning being provided. New functionality will be made available in vSphere to grow VM disk files and, more beneficially, the VMFS volume.

Disk expansion is another confusing one for me. VMware have preached that when scaling and designing a VMware storage architecture you plan LUN sizes appropriately. Call me a skeptic, but growing a VMFS and having inconsistent LUN sizes across your infrastructure may not be the most ideal scenario in a large-scale environment. VMFS growth also impacts the backend physical SAN hosting the VMFS: on the more intuitive arrays, LUNs are optimally set at a RAID level and disk spindle count to provide the best performance to running VMs. Changing the sizing of LUNs for VMFS also introduces layers of complexity and possible operational issues.

vStorage API

Integral to enabling most of the new features is the vStorage API. This will let storage partners use APIs within the VMware components, enabling enhanced storage operations and storage management capability between vSphere and the physical SAN array hosting your virtual datastores.

VMware's current native MPIO driver within ESX 3.5 is only an active/passive multipath offering: failover of the active path is cumbersome, and deploying virtualisation in large-scale environments requires large amounts of planning and configuration to ensure that problems such as path thrashing do not occur. The flip side is that it has enabled VMware to test and maintain a thoroughly checked HCL, which means better all-round support and benchmarking for end users.
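
To illustrate the difference in policy (a hypothetical sketch, not the ESX plugin interface): active/passive keeps one preferred path hot and only fails over when it dies, while an active/active policy such as round-robin spreads I/O across every live path.

    import itertools

    paths = ["vmhba1:0:1", "vmhba2:0:1"]  # two paths to the same LUN
    alive = {p: True for p in paths}

    def active_passive():
        """All I/O down the first live path; fail over only on death."""
        for p in paths:
            if alive[p]:
                return p
        raise IOError("all paths dead")

    rr = itertools.cycle(paths)

    def round_robin():
        """Rotate I/O across the paths, skipping any that are dead."""
        for _ in paths:
            p = next(rr)
            if alive[p]:
                return p
        raise IOError("all paths dead")

    print([active_passive() for _ in range(4)])  # same path every time
    print([round_robin() for _ in range(4)])     # alternates the paths
    alive["vmhba1:0:1"] = False                  # simulate a path failure
    print(active_passive())                      # now fails over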

The new base vStorage offerings will apparently provide increased performance, failover capability and improved I/O throughput, all of which will let you start to virtualise much more demanding workloads such as SQL Server or Oracle. The first release of a storage-vendor-designed multipath driver will come from EMC, effectively putting EMC PowerPath into the hypervisor stack. This will offload more to the storage array and away from the hypervisor, giving more resources to running VMs. Opening up APIs to storage vendors is a good step for VMware: the current MPIO architecture built by VMware engineering has its limitations, and this will hopefully propel vSphere to another level.

Overall, the vStorage initiative is starting to look rather compelling for enabling people to virtualise any workload with any I/O requirement. Integration with storage vendors had to happen at some point for VMware to keep growing, and those opportunities are now formalised within the vStorage layer of the VDC-OS.

Tuesday, 10 March 2009

Late VMworld 2009 views

Back into the swing of things after Cannes. I unfortunately picked up a nasty flu at the event; it was going round, as most people I spoke to afterwards caught the bug too.

This year's event in my opinion wasn't as good for informative roadmaps and general deep dives into new technology. I don't know if this is due to a lack of finished development on vSphere, or whether they are now under massive restriction due to stock market rules, but this non-committal attitude really damaged their credibility for me. Maybe I've been too spoilt in the past, but it seems gone are the days of great live keynote demos, like the Fault Tolerance one, in favour of a move into selling the vision; even Microsoft don't do that, it's a technology conference for god's sake.

Paul Maritz went over his strategy for VDC-OS and how they want to provide encapsulation and decoupling when deploying virtual infrastructure in internal and external clouds in future. This isn't anything unknown; Mendel Rosenblum has explained the decoupling of the encapsulated VM files for yonks.

I do like, however, the idea that the VDC-OS is now part of a strategic vision to move VMware away from ESX as the main focal point of virtualisation and towards additional management layering and an orchestrated, service-driven architecture. It's becoming obvious that the hypervisor is now commoditised, and I think VMware accepting this more openly will be better all round.

Some peeps may not like this statement on the VDC-OS strategy, but I really do think VMware want ESX, or soon-to-be vSphere, to just be a black-box appliance humming away in the corner, running the virtual datacentre and enabling the decoupling of services. This may mean the generation of followers who live and breathe ESX, and love how it can be tweaked and played with, become a rarity: not needed, because it's just not something you'll be able to do any more.

The transition in strategy with VDC-OS will be more of a move into focusing on how you leverage capability from the other new offerings in your infrastructure, such as orchestration workflow, and on how additional tools such as CapacityIQ and AppSpeed can be used more effectively across the whole organisation to provide streamlined delivery of service.

Let's hope VMware can deliver what they are proposing, as it's a mammoth task to expect to perform under the current economic situation. I predict that the new developments being rolled into the strategy from VMware's acquisitions will take 6-12 months to mature and give us more than just a development shoehorned into the VMware look and feel.

