Sunday 1 February 2009

Nexus 1000v

Sheesh, this write-up is one uber-informative schematic, complete with an explanation of how the Cisco 1000v works alongside the new converged physical Nexus switches: http://www.internetworkexpert.org/2009/01/01/nexus-1000v-with-fcoe-cna-and-vmware-esx-40-deployment-diagram/

I like the concept behind the 1000v for virtualised environments. The physical external dependency on FCoE and convergence is something I have not had the opportunity to look at yet, so working through this resource alongside the Nexus 1000V (previewed at VMworld last year) has given me great insight into how the two work in tandem within a virtualised environment to increase capability while consolidating cabling and provisioning times at the same time. Think of it this way: today a typical ESX host running say 15 VMs over NFS, iSCSI and Fibre Channel can have up to 8-10 connections; that changes completely once converged over 10Gb FCoE. Have a look at the diagram I've knocked up to show the conceptual view (it's only rough, go easy).

[Diagram: rough conceptual view of an ESX host converged over 10Gb FCoE]
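To put some very rough numbers on that consolidation, here's a quick Python sketch; the farm size and per-host link counts below are my own illustrative assumptions, not vendor figures:

# Rough cabling comparison for an ESX farm: classic separate
# Ethernet + Fibre Channel fabrics vs a pair of converged 10Gb
# FCoE CNA ports per host. All per-host counts are illustrative
# assumptions only.

HOSTS = 16  # hypothetical ESX farm size

# Classic host: dedicated links per traffic type
classic_per_host = {
    "service_console": 2,
    "vmotion": 2,
    "vm_nfs_iscsi_traffic": 4,  # 1Gb NICs for VM, NFS and iSCSI traffic
    "fibre_channel": 2,         # FC HBA ports
}

# Converged host: everything rides two 10Gb FCoE links
converged_per_host = {"cna_10gb_fcoe": 2}

classic_cables = HOSTS * sum(classic_per_host.values())
converged_cables = HOSTS * sum(converged_per_host.values())

print(f"Classic fabric:   {classic_cables} cables "
      f"({sum(classic_per_host.values())} per host)")
print(f"Converged fabric: {converged_cables} cables "
      f"({sum(converged_per_host.values())} per host)")
print(f"Cables removed:   {classic_cables - converged_cables}")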
Looking at the QLogic QLE8042 CNA specification sheet, it works seamlessly with a standard 4/8Gb Fibre Channel connection and is not solely for use within an FCoE stack. This could mean you can invest in CNAs today and upgrade your external infrastructure quite easily later, without having to buy zillions of HBAs for, say, a whole ESX farm being built, safe in the knowledge that you can move to the converged switches at a later date.
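As a back-of-envelope way of looking at that upgrade path, here's a Python sketch comparing "buy HBAs now, replace later" against "buy CNAs from day one"; the card prices are made-up placeholders purely to show the shape of the sum:

# Upgrade-path comparison: separate HBAs + NICs today, replaced at
# FCoE migration time, vs CNAs (e.g. a QLE8042-class card) that talk
# native 4/8Gb FC now and FCoE later. All prices are hypothetical
# placeholders, not real list prices.

HOSTS = 16

HBA_COST = 800    # hypothetical dual-port FC HBA
NIC_COST = 300    # hypothetical multi-port 1Gb NIC
CNA_COST = 1300   # hypothetical dual-port CNA

# Path A: classic cards now, binned when the fabric converges
path_a = HOSTS * (HBA_COST + NIC_COST) + HOSTS * CNA_COST

# Path B: CNAs now, running against standard FC switches until the
# converged Nexus kit arrives -- no second card purchase needed
path_b = HOSTS * CNA_COST

print(f"Rip-and-replace later: {path_a}")
print(f"CNA from day one:      {path_b}")
print(f"Avoided spend:         {path_a - path_b}")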

There are some interesting comments on the post regarding how this technology is currently not developed with blade technology, such as HP C-Class, in mind. Cisco are going to be announcing blade platforms (http://www.theregister.co.uk/2008/12/11/cisco_blade_servers/), so I can't imagine they would release a product which doesn't work with their next generation of switching technology. Let's face it, they will most likely be releasing backplane technology that reaches greater heights than competitors manage today with the likes of HP Virtual Connect; big blue boxes with exceptionally quick backplanes are, after all, their core business.

Expect the market to open up and be targeted for competition in much the same way it did at the early inception of blades, when various blade switch vendors opened up partnerships with the likes of Nortel and Cisco themselves.

NFS is back...again

Be careful though when architecting connectivity for virtualised environments and the back-end physical stack to support those requirements. Many shops use iSCSI and NFS quite happily today across a 1Gb networking infrastructure, so what advantage does converged technology actually gain them? Arguably it erodes the cost effectiveness already achieved on typical Catalyst and ProCurve range networking (maybe not Nortel, hey). I'll maybe touch on this and the whole iSCSI/FCoE war in another post when I have time.
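For a feel of that trade-off, here's a tiny Python sketch comparing ports against aggregate bandwidth per host; the link counts are assumptions on my part, not a reference design:

# Per-host ports vs aggregate bandwidth: a typical 1Gb iSCSI/NFS
# design against a converged 2x10Gb FCoE design. Link counts are
# illustrative assumptions only.

gige_links, gige_speed = 6, 1   # assumed 1Gb ports for storage + VM traffic
cna_links, cna_speed = 2, 10    # two 10Gb converged ports

gige_bw = gige_links * gige_speed
cna_bw = cna_links * cna_speed

print(f"1Gb design:  {gige_links} ports, {gige_bw} Gb/s aggregate")
print(f"10Gb design: {cna_links} ports, {cna_bw} Gb/s aggregate")
print(f"Bandwidth per cable: {gige_bw / gige_links:.0f} vs {cna_bw / cna_links:.0f} Gb/s")

The point being that the converged design only pays off if you actually need the headroom; if your 1Gb iSCSI/NFS estate is nowhere near saturated, the cheaper fabric is hard to argue against.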

So I guess one question being asked by architects and people who will be investing is how long it will take for the converged initiative to take off and gain popularity; we don't want another InfiniBand now, do we, with its limited usage and adoption. In a credit crunch it is not going to be easy to justify spend on such projects, and it's also not going to be easy to justify ripping out and replacing your full core networking infrastructure to adopt this type of architecture with its limited HCL (only about two entries on the current HCL). So I guess, like most technologies, it will be more heavily adopted in 18-24 months' time, once it has been used by the banks and service providers who can invest and test (with our money anyway ;)).

Update 02.02.09 - Realised I've not gone into too much technical detail on how the 1000v works with VMs in a virtualised environment, so I will delve into the 1000v in a later "part deux" post.

