Monday, 30 March 2009
Nehalem Day - start of a new tin era?
Nehalem has changed the game: it is built to cater for virtualised workloads and, most importantly, to work effectively with VMware and, I expect, other hypervisor vendors in future (it kind of helps VMware to have Intel as an investor, though).
Various features within the chipset will most certainly benefit virtualisation. Some examples, compared to the older chips:
- Improved memory controller access and speed, thanks to QuickPath being implemented on the CPU die rather than externally on the Northbridge.
- Flexible VMotion migration between generations of Intel chipsets, so we don't need to go and buy massive clusters' worth of kit.
- General power reduction.
- Hyper-Threading comes back from the dead.
For virtualised environments, newer architectures bring benefit with Extended Page Tables (NPT in the AMD world); this feature was shown on a whitebox prototype way back in a keynote at VMworld 2007. It evolves how the current Shadow Page Table algorithms work to manage memory mappings within ESX. SPT has brought benefits which have allowed us to obtain higher consolidation ratios with lightly utilised machines in the first phase of VMware maturity in most organisations; the next level of demand, for more intensive workloads, can be catered for in most cases by NPT/EPT. Both Nehalem and AMD Barcelona/Shanghai offer page table management built into the actual CPU, removing the overhead previously incurred when new page requests had to be handled at the hypervisor layer. Applications and environments which have large numbers of processes running within the box, i.e. Citrix/terminal server, will benefit from this, but not all environments will benefit as greatly as these.
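The difference between the two approaches can be sketched roughly like this. This is purely illustrative Python (not VMware or CPU code): mappings are modelled as dicts, and the point is that shadow paging forces the hypervisor to pre-compose and constantly rebuild a combined table, whereas nested paging lets the hardware walk both tables at translation time.

```python
# Illustrative sketch: shadow page tables vs. hardware nested paging
# (EPT/NPT). Page tables are modelled as plain dicts.

# Guest OS view: guest-virtual -> guest-physical
guest_pt = {0x1000: 0xA000, 0x2000: 0xB000}
# Hypervisor view: guest-physical -> host-physical
host_pt = {0xA000: 0x7000, 0xB000: 0x8000}

def shadow_table(guest_pt, host_pt):
    """Shadow paging: the hypervisor pre-composes both mappings into one
    table. Every guest page-table update must be trapped so this composite
    can be rebuilt -- the overhead that EPT/NPT removes."""
    return {gva: host_pt[gpa] for gva, gpa in guest_pt.items()}

def ept_walk(gva, guest_pt, host_pt):
    """Nested paging: the hardware walks both tables at translation time,
    so the hypervisor no longer intercepts guest page-table writes."""
    return host_pt[guest_pt[gva]]

# Both approaches produce the same final translation; they differ in
# who does the work and when.
shadow = shadow_table(guest_pt, host_pt)
assert shadow[0x1000] == ept_walk(0x1000, guest_pt, host_pt) == 0x7000
```

This also makes it clearer why process-heavy boxes like Citrix/terminal servers benefit most: every new process means new page tables, and under shadow paging each of those triggers hypervisor work.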
When I was at the Intel stand in Cannes I asked the geezer if Nehalem was eight core. He said "oh yes, it's eight core, blah blah", but being the skeptic I am I investigated, and it turned out it was just dual threaded, better known as Hyper-Threaded. I seem to remember in the early days Hyper-Threading caused more hassle than it was worth, as the OS and applications couldn't really make effective use of the logical processors presented to them. I feel this new generation will change the mould: instead of the yesterday era of single-core processors being logically split and presented straight to a host OS, we now have a friendlier, easier-to-manage layer of indirection called the hypervisor. Couple this with neat new features in the vSphere technology and it will hopefully work better than previous incarnations did.
I don't believe that adding ever more cores is the answer. Look at Sun's UltraSPARC T2, which works in a similar fashion (http://www.sun.com/processors/UltraSPARC-T2/): it basically uses multiple threads to hide memory latency, which allows more CPU cycles to be made available to the virtual machines. It will be a shame if IBM buys Sun, as I think they will just EOL this architecture.
Overall, Nehalem's benefits are application/workload dependent, so be very wary of buying it with the intention that it will solve your problems. Like most things in today's world it's mostly hype and isn't a silver bullet; you may find that going with AMD will benefit you instead, as they have offered nested page table support with RVI for a long time now and it is mature.
So... now onto arguing whether it's worth waiting a week or two for this kit, and badgering vendors/resellers to find out prices and lead times!
Sunday, 22 March 2009
A few thoughts and questions that came to mind while reading about this, along with my view and answer:
It looks to me like Cisco will just concentrate on the x86 market; does this limit their growth and coverage in enterprise customers?
Most probably it will, but they have designed blades from scratch which cater for highly dense virtualised VMware environments, something current vendors are only just about achieving with today's pre-Nehalem offerings. Maybe Cisco are going along with VMware's claim that ESX/vSphere will be able to run any workload as a valid agreed strategy. Maybe they see larger organisations moving core applications away from non-agile, big monolithic mainframes and onto commoditised distributed grid environments to reduce risk and cost, using platforms like Linux or JeOS.
Will architects or designers recommend solutions and roadmaps which include running dual-vendor shops with Cisco blades alongside any current tech? IBM and HP both manufacture RISC-architecture and Itanium blade offerings respectively which fit in the same chassis, so it's a tough one to call, and the benefit needs to outweigh the commercial impact and risk.
Tough one. Standardisation reduces operational overhead and associated cost, and all the vendor value-add offerings start to take great effect when using fewer (not one) vendors. Yes, VMware's decoupling reduces issues with the underlying physical services, but you still have an ecosystem within the physical landscape to update and maintain from central operational tools. UCS uses BMC BladeLogic to provision and manage services, it appears; will shops want to run this alongside tools such as Altiris and have to go through rigorous training, or even recruit again?
Commercially this could also limit your buying power if you silo on Cisco for blades; you may require rackmount servers for certain requirements that UCS cannot meet, i.e. dongles or specialised PCI devices. (BTW, HP do have a PCI blade I/O module.)
If Cisco do not achieve initial sales growth targets and popularity, will they have no option but to release MDS/Catalyst modules that are compatible with the core networking/SAN in datacentres today?
Not having true access to any of the technology, it seems to me that Cisco are going to limit themselves if they do not provide connectivity which allows customers to transition across to FCoE; most companies probably procured kit before any credit crisis in 2007/8, on what may be a three-to-four-year refresh cycle policy. They make connectivity options for their new competitors, so it can't be too hard to do this if customer uptake stumbles because of it.
Or does the I/O have backward compatibility? i.e. can you connect to conventional fibre connection points already? Maybe someone can put me straight, as I'm speculating on something I've never had the fortune of seeing/playing with yet.
If most of what I say is right, that's a fair bit of effort to dump your HP/IBM blades and all the expensive chassis components you've procured.
A few other quick ones;
- Will Cisco open up and allow Brocade to build backplane switches?
- Will this work in DMZ environments, or will you need designated blade chassis/backend networking for this?
- Are Cisco aiming to transform Rackmount only shops into Blade?
- Will HP/IBM just jump on the bandwagon and quash Cisco due to their current popularity in the datacentre? How unique is UCS, and is it really good enough to dump a vendor you've used and had relationships with for aeons?
Maybe people can pass comment on the above or email me. I might be talking utter rubbish (as per usual), but I thought that was what your own blog was for :) Hopefully it's raised some valid points, or is what others are currently thinking as well.
Monday, 16 March 2009
Cisco Blades - Paradigm shift or Flop?
Saturday, 14 March 2009
vStorage - My View
The VDC-OS (Virtual Datacenter OS) strategy is VMware's next-generation strategy for taking its current virtualisation offerings under the VI3 branding and moving into more specialised layers to create what they are classing as the "Software Mainframe". Operating this is vSphere (formerly ESX). The vStorage service components, which are all currently under development, will offer a whole band of new and exciting technical functionality to fill gaps and improve upon the current offerings. I will give a brief overview of each of the main offerings which will be available and comment on how this may change, or even complicate, matters in the virtualised world.
Functionality features in vStorage
Greater visibility will be made available to enable enhanced interaction with both the storage array and management software via the central datacenter management component, vCenter (formerly VirtualCenter). This will provide holistic diagnostic views of any performance bottlenecks at the virtualisation layer, plus greater insight and trend analysis on expensive operating areas, such as monitoring how much storage you have left for VMs and how long before it fills up.
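That kind of "how long until it fills up" trending is conceptually simple. Here is a minimal, hypothetical sketch (names and numbers are my own, not vCenter's) of projecting days remaining from daily usage samples:

```python
# Hypothetical sketch of datastore trend analysis: given daily usage
# samples (GB), estimate the average growth rate and project how many
# days remain before the datastore fills.

def days_until_full(samples_gb, capacity_gb):
    """samples_gb: usage readings taken once per day, oldest first."""
    n = len(samples_gb)
    if n < 2:
        raise ValueError("need at least two samples to estimate a trend")
    # Average daily growth across the sample window.
    growth_per_day = (samples_gb[-1] - samples_gb[0]) / (n - 1)
    if growth_per_day <= 0:
        return float("inf")  # usage flat or shrinking: no projected fill date
    return (capacity_gb - samples_gb[-1]) / growth_per_day

# A 500 GB datastore growing ~10 GB/day, currently at 400 GB used:
print(days_until_full([380, 390, 400], 500))  # -> 10.0
```

The real value of vCenter doing this centrally is simply that nobody has to maintain spreadsheets of LUN usage per datastore.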
Disk I/O has always been a hot topic with server virtualisation. Large-scale virtualisation deployments utilise SAN storage to host virtual machines, which calls for large amounts of diagnostic information to be provided so that multiple tools are not being used and giving conflicting information. Other problem areas include virtual machine snapshot consumption and the disk usage of those snapshots; with newer management interaction, currently great solutions such as snapshots, which when deployed en masse wreak havoc on the storage management side, will hopefully be usable more effectively without concern over when and how you utilise them.
Linked clones are something more useful in virtual desktop environments and VMware View. This functionality has been within VMware Workstation for some time: the principle is that you have a master gold VM, and when requiring multiple copies of that base VM you deploy a linked clone, which basically utilises the base VM and writes differential changes into a snapshot file.
This will initially be very good for VDI deployments, and possibly server landscapes such as a web-facing environment which needs to scale out exceptionally quickly to meet demand from concurrent connections. Brian Madden wrote a good article about how linked clones do have some limitations (http://www.brianmadden.com/blogs/brianmadden/archive/2009/02/18/brian-dump-atlantis-computing-hopes-to-solve-the-quot-file-based-quot-versus-quot-block-based-quot-vdi-disk-image-challenge.aspx), and it makes a good point on why linked clones are not yet the finished article for simplified deployment.
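The read/write behaviour of a linked clone can be sketched as a simple copy-on-write overlay. This is purely illustrative Python, not VMware's on-disk format: reads fall through to the shared gold image unless the clone's own delta file has overwritten that block.

```python
# Illustrative sketch of a linked clone: a per-clone delta overlays a
# shared, read-only base image. Blocks are modelled as dict entries.

class LinkedClone:
    def __init__(self, base_blocks):
        self.base = base_blocks   # shared gold image (never modified)
        self.delta = {}           # this clone's differential writes

    def write(self, block_no, data):
        # Writes never touch the base image; they land in the delta file.
        self.delta[block_no] = data

    def read(self, block_no):
        # Delta first, then fall through to the base.
        return self.delta.get(block_no, self.base.get(block_no))

gold = {0: "kernel", 1: "apps"}
clone = LinkedClone(gold)
clone.write(1, "patched apps")
assert clone.read(0) == "kernel"        # unchanged block served from base
assert clone.read(1) == "patched apps"  # changed block served from delta
assert gold[1] == "apps"                # gold image untouched
```

It also makes the limitation obvious: every clone's I/O ultimately lands on the same base image, so the gold VM's datastore becomes a shared hot spot.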
The next feature, which is actually already available and turned on by default when hosting VMs on NFS storage, is thin provisioning of VMDK files; the functionality will be built into vSphere for use with VMFS volumes.
With vStorage API interaction and extensive holistic management through the improved vCenter, this will be rather easy to deploy while avoiding issues with storage becoming full. However, it will still mean more pre-emptive management of your virtual machine storage volumes if the environment is not well designed and the powerful functionality is not kept under control.
Common issues that arise with thin provisioning in the physical SAN array world will most likely still exist when using it at the vSphere layer. Virtualising, say, a 72GB-disk physical server down to a 15-20GB disk was a cost saving in itself: I didn't need to worry about consumed space, I could store many more virtual machines, and it changed the way server disk partitioning was typically done.
Thin provisioning may be something that adds slightly too much complexity to the management of a virtual environment, as it adds more possibility of VMs becoming unavailable. I'm sure VMware will have mechanisms to ensure services do not fail, so time will tell how this impacts storage costs and efficiency in vSphere; I would probably prefer to have pre-emptive warnings from storage monitors and then grow and plan storage demand manually.
Interestingly, VMware currently state that thin provisioning is better done at the array level. This is a bit confusing to be honest, as a VMDK is a physical file upon a VMFS or NFS volume. I can see the case for VMFS volumes, though, as array-level thin provisioning will enable you to grow them as your VMs grow. Once again, this new offering will need extensive and detailed architectural planning.
Disk Volume management
As I said in my view of thin provisioning, I'd probably prefer to grow, and even shrink, virtual disk storage and VMFS storage manually upon a pre-emptive warning being provided. New functionality will be made available in vSphere to grow VM disk files and, more beneficially, the VMFS volume.
Disk expansion is another confusing one to me. VMware have preached that when scaling and designing a VMware storage architecture you should plan LUN sizes appropriately; call me a skeptic, but growing a VMFS and having inconsistent LUN sizes across your infrastructure may not be the most ideal scenario in a large-scale environment. VMFS growth also impacts the backend physical SAN hosting the VMFS: on the more intuitive arrays, LUNs are optimally set at a RAID level and disk spindle count to provide the best performance to running VMs. Changing the sizing of LUNs for VMFS also introduces layers of complexity and possible operational issues.
Integral to enabling most of the new features is the vStorage API. This will let storage partners use APIs within the VMware components to enable enhanced storage operations and storage management capability between vSphere and the physical SAN array hosting your virtual datastores.
VMware's current native MPIO driver within ESX 3.5 is only an active/passive multipath offering: failover of the active path is cumbersome, and deploying virtualisation in large-scale environments requires large amounts of planning and configuration to ensure that problems such as path thrashing do not occur. The flipside is that it has enabled VMware to test and maintain a thoroughly checked HCL, which means better all-round support and benchmarking for end users.
The new basic vStorage offerings will apparently provide increased performance, failover capability and improved I/O throughput, all of which will allow you to start to virtualise much more demanding workloads such as SQL Server or Oracle. The first release of a storage-vendor-designed multipath driver will be from EMC, which will effectively put EMC PowerPath within the hypervisor stack. This will offload more work to the storage array and away from the hypervisor, providing more resources to running VMs. Opening up the API to storage vendors is a good step for VMware; the current MPIO architecture built by VMware engineering has its limitations, and this will hopefully propel vSphere to another level.
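The contrast between the two path-selection behaviours can be sketched very simply. This is an illustrative model only (path names are made up, and real plugins like PowerPath do far more, e.g. load-aware selection): active/passive pins all I/O to one path and only moves on failure, while round-robin spreads I/O across every live path.

```python
# Illustrative sketch: active/passive multipathing (one active path,
# fail over on error) vs. round-robin across all live paths.

class ActivePassive:
    def __init__(self, paths):
        self.paths = list(paths)  # paths[0] is the active path

    def pick(self):
        return self.paths[0]      # all I/O down the single active path

    def fail(self, path):
        self.paths.remove(path)   # failover: the next path becomes active

class RoundRobin:
    def __init__(self, paths):
        self.paths = list(paths)
        self.i = 0

    def pick(self):
        # Spread I/O across every live path instead of saturating one.
        path = self.paths[self.i % len(self.paths)]
        self.i += 1
        return path

ap = ActivePassive(["hba0:lun0", "hba1:lun0"])
assert ap.pick() == "hba0:lun0"
ap.fail("hba0:lun0")
assert ap.pick() == "hba1:lun0"   # standby path takes over

rr = RoundRobin(["hba0:lun0", "hba1:lun0"])
assert [rr.pick(), rr.pick(), rr.pick()] == [
    "hba0:lun0", "hba1:lun0", "hba0:lun0"]
```

The appeal of opening the API is exactly this: the `pick` policy becomes pluggable, so a vendor who knows their array's port layout can make a far smarter choice than the generic driver.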
Overall, the vStorage initiative is starting to look rather compelling for enabling people to virtualise any workload with any I/O requirement. Integration with storage vendors had to happen at some point for VMware to keep growing, and the opportunities are now formalised within the vStorage layer of the VDC-OS.
Tuesday, 10 March 2009
Late VMworld 2009 views