Tuesday, 25 August 2009

Cloud - Likely to bite you on the arse

I overheard an interesting discussion today between some application guys (who are purely fictional). They were saying they had spoken to another well-respected application bod from another company at a recent networking meet-up, whose organisation is currently using cloud services for application hosting. They then said they would be looking to engage with a cloud provider offering an application-based cloud service currently in beta (also fictional, as this one actually works ;)). One of the fictional guys added that he could see excellent benefits to be reaped from turnkey production environment roll-outs and development environments built up on demand in the "Cloud".

I'm sure this type of conversation alone is enough to put the fear of god into any infrastructure bod. From my standpoint, yes, it's scary to hear this stuff, but the goal of this post is to look at the issues with how cloud is being adopted, not at what the cloud replaces. I am an advocate of cloud, and this type of talk should be embraced in IT departments and between teams. I do worry when I hear this kind of discussion though, as it means cost and IT budget go to waste when they should be invested using the correct methodology and process for implementing new solutions into an organisation. My immediate thought with a conversation like this is that if cloud adoption is approached in this manner it has potentially dangerous consequences for IT departments as a whole. My concern with this type of shortcutting lies with business application peeps being blinded by cloud vendor marketing hype, drawn in like moths to a lamp by providers making their offerings seem relatively simple to adopt for the applications you currently have.

Most infrastructure guys will know that for years we have had to implement and design around poorly written and badly designed application stacks, and this isn't just bespoke apps; it includes COTS and proprietary middleware/databases that we now struggle to virtualise, and from which we only avoid excessive server sprawl by employing strategies such as one app per server, whether virtual or physical. So enter into our lives the cloud provider, offering the chance to change the current deployment process for business applications and services and to bypass the existing infrastructure design authority and the people who keep the engine running today in support of the current bloated app stack. It all seems so easy, doesn't it? It would do to an application bod; they tend to think about and approach IT differently to infrastructure types. This is one of the reasons we struggle to sell server virtualisation to application bods: they want platforms that provide above and beyond, not just "enough" workload. They simply don't understand consolidation, and to be honest, why would they?

If application peeps approach a cloud strategy in the way I have described, it will no doubt end up being great for the cloud providers, but I feel it will end up being bad for the organisations that adopt it in such a blasé manner. My reasons for suggesting this are that we still have many unanswered questions on cloud in the enterprise, and that cloud is still at the bleeding-edge stage of the adoption curve.

Using the cloud without ensuring you have a finely tuned hosted application may actually mean services could (and I say could) end up being more expensive in the longer term, because of the way infrastructure is used today to cover up sloppy software configuration on private infrastructure. If public cloud services are used to host the current "dodgy" bloated apps that consume excessive network and suck CPU cycles, then knock yourself out, but be prepared: if an application is being charged on a pay-per-use, metered basis it needs to be lean and mean. You cannot simply migrate like for like and assume that the app, and the processes and dependencies around it, will deliver the cost savings cloud touts.
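
To put some (entirely made-up) numbers on that, here's a quick back-of-an-envelope sketch in Python; the per-hour and per-GB prices below are purely hypothetical and not any provider's actual rates, the only point being that a metered bill punishes a bloated footprint.

# Back-of-an-envelope comparison of a bloated app vs a tuned app on a
# metered, pay-per-use cloud. All prices and usage figures are made up
# purely to illustrate the point.

HOURS_PER_MONTH = 730

def monthly_cost(instances, price_per_instance_hour, gb_transferred, price_per_gb):
    """Crude monthly bill: compute hours plus metered network traffic."""
    compute = instances * HOURS_PER_MONTH * price_per_instance_hour
    network = gb_transferred * price_per_gb
    return compute + network

# "Lift and shift" of the current bloated stack: one VM per component,
# with chatty traffic between tiers going over the provider's metered network.
bloated = monthly_cost(instances=8, price_per_instance_hour=0.40,
                       gb_transferred=2000, price_per_gb=0.15)

# The same service after the app has been tuned and consolidated.
lean = monthly_cost(instances=3, price_per_instance_hour=0.40,
                    gb_transferred=400, price_per_gb=0.15)

print("Lift and shift : %.2f per month" % bloated)   # ~2636
print("Tuned for cloud: %.2f per month" % lean)      # ~936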

As I've said, this is a basic overview of why I feel the cloud may bite anybody looking to move current application services into it well and truly on the arse. When cloud services are mature and accepted within enterprises they will no doubt offer significant benefits from being hosted in a metered, shrink-and-grow environment, but applications need to change and the way we architect systems will need to change. As you will have found with virtualisation adoption, this is not something that can happen overnight; it could take years.

Which leads me onto the VMware cloud strategy and the fully optimised stacks that may emerge from the SpringSource acquisition... hey, I couldn't have a post without a vendor name drop. Maybe another post at a later point in time ;)

Thursday, 20 August 2009

Tech Review - PHD Virtual esXpress

I posted a few weeks ago about PHD Virtual Technologies' release of esXpress version 3.6. In this post I delve into each new feature and the benefits it can bring to your virtualised backup strategy.

Company Background

PHD Virtual Technologies was formed in 2002 and is based out of Mount Arlington, New Jersey, USA. They have well over 1,600 customers, a base that includes big enterprises such as Siemens, Barnes & Noble and Tyco, and they are extremely popular in the SMB and academia spaces.

Technology Background

PHD provides organisations with a cost-effective and robust backup tool. esXpress is now in its third generation and has been designed to combat the problem of backup in virtualised estates since the ESX 2.x days. The third-generation release provides core functionality and technological advances that give PHD an edge against other players in the space and allow them to compete for enterprise custom on the strength of the technical offering.


PHD uses VBAs (Virtual Backup Appliances) to perform backup and restore to either an RDM disk connected to your ESX hosts, an NFS share or even a VMDK file. VBAs are something PHD has used architecturally within its product set since 2006, and they underpin the benefits for your virtualised estate covered below.

esXpress 3.6 benefits

vSphere 4 Support

3.6 provides full support for vSphere 4 and also has a VC plugin supported in the vCenter client. Additional admin is performed via a web interface, which lets you manage the backup environment from any desktop without having to worry about installing GUIs wherever you go to check and manage backups.

Data Deduplication

Massive reductions in backup space requirements can be achieved using this feature. When VMs are backed up with VCB using conventional backup products such as Backup Exec or even NetBackup, inline dedupe is not natively possible in the underlying backup software; you need an appliance or an additional piece of software to do it, at additional cost. esXpress 3.6 provides inline data deduplication completely free of charge within your licence entitlement. Dedupe in esXpress is claimed to provide a dedupe ratio of up to 25:1.
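
For anyone wondering what dedupe actually does under the covers, here's a minimal sketch of generic block-level deduplication: chop the data into blocks, hash each block and only keep blocks you haven't already stored. To be clear, this illustrates the general technique only, not how PHD have implemented it.

import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products vary block size and hashing

def dedupe(data, store=None):
    """Store only unique blocks; return (block store, ordered recipe of hashes).

    'store' maps block-hash -> block, so a second backup of mostly
    unchanged data adds very little new to it.
    """
    store = {} if store is None else store
    recipe = []  # the ordered list of hashes needed to rebuild 'data'
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        if digest not in store:
            store[digest] = block  # a new, unique block
        recipe.append(digest)
    return store, recipe

# A ~40 MB "VMDK" full of repeating content dedupes down to a handful of blocks.
fake_vmdk = (b"A" * BLOCK_SIZE) * 5000 + (b"B" * BLOCK_SIZE) * 5000
store, recipe = dedupe(fake_vmdk)
print("logical blocks:", len(recipe), "unique blocks stored:", len(store))
# prints: logical blocks: 10000 unique blocks stored: 2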

The real magic in esXpress happens on what PHD call their dedupe appliance. This is built on the PHD SAN appliance, which is available to download separately at http://www.phdvirtual.com/products/virtualization-utilities and can give you the option of shared VMFS across local DAS storage on your ESX hosts without buying a full backend SAN. When using the dedupe appliance and backup method, incremental VM restoration is also possible from deduplicated backups.

esXpress deduplication technology also means you can back up and restore just the delta changes across your VMs, which is extremely beneficial for remotely located VMs. The delta block change capability means you can exploit the WAN to back up across sites and reap the benefits of a completely centralised backup strategy: reduced manpower overheads for changing backup media, the ability to monitor jobs remotely, the improved security of having data stored offsite, a reduction in possible tape vaulting costs, and many other benefits pertinent to your organisation when removing remote infrastructure.
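
Some rough, made-up numbers show why shipping only changed blocks matters on a WAN link (the link speed, VM sizes and daily change rate below are illustrative, not measured figures):

# Illustration of why sending only changed blocks matters over a WAN.
# Link speed, VM size and change rate are made-up numbers.

def transfer_hours(gb_to_send, link_mbps):
    """Hours to push gb_to_send over a link, ignoring protocol overhead."""
    return (gb_to_send * 8 * 1024) / link_mbps / 3600

vm_size_gb = 200          # total size of the remote site's VMs
daily_change_rate = 0.03  # roughly 3% of blocks change per day

print("Full copy nightly  :", round(transfer_hours(vm_size_gb, link_mbps=10), 1), "hours")
print("Changed blocks only:", round(transfer_hours(vm_size_gb * daily_change_rate, 10), 1), "hours")
# Full copy nightly  : 45.5 hours (won't even fit inside a day)
# Changed blocks only: 1.4 hours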

Improved restores and backup streaming

In my opinion restores are the most important aspect of any backup technology to consider; it is no good backing up quickly if the restore activity is long and increases your RTO. File-level restores in VCB-based virtual backup software are never quick, mainly because you have to mount the VMDK and then extract the file; for a 15-20GB file this can take considerable time. esXpress 3.6 has improved multi-user, point-and-click restore functionality from within the esXpress Java GUI, with extraction made possible by a unique executable file format rather than a TAR-like archive.

Total backups per host is always a limiting factor in virtual backup architectures. VMware VCB has a recommendation (setup dependent) of 4-6 concurrent jobs per ESX host, whereas esXpress 3.6 can now run up to 16 concurrent jobs per ESX host, meaning shorter backup windows.
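
A crude bit of arithmetic shows the effect of concurrency on the window; the throughput and data sizes below are made up and assume the storage behind the jobs isn't the bottleneck.

# Rough illustration of why more concurrent jobs per host shortens the
# backup window. Throughput and VM sizes are made-up numbers.

def backup_window_hours(total_gb, mb_per_sec_per_job, concurrent_jobs):
    """Naive window estimate: jobs run flat out in parallel and nothing
    behind them (storage, network) becomes the bottleneck."""
    aggregate_mb_per_sec = mb_per_sec_per_job * concurrent_jobs
    return (total_gb * 1024) / aggregate_mb_per_sec / 3600

total_gb = 2000  # say, 100 VMs averaging 20 GB each
print("4 concurrent jobs :", round(backup_window_hours(total_gb, 40, 4), 1), "hours")   # ~3.6
print("16 concurrent jobs:", round(backup_window_hours(total_gb, 40, 16), 1), "hours")  # ~0.9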

Product Summary

esXpress is a great backup tool and touts some great niche technical features. It is most certainly a good product to use in the SMB space, as there is no requirement to use VCB, and it can also be used within larger virtualisation estates thanks to the concurrency and deduplication benefits.
I think the most attractive component in esXpress is its dedupe capability and the fact you can exploit a side benefit of it for remote backup across your WAN; centralised backup is becoming a very popular strategy in virtualised estates.

Check out esXpress at http://www.phdvirtual.com, where a full 30-day trial is available at http://phdvirtual.com/products/esxpress-virtual-backup. For any VMworld attendees, they are also exhibiting on stand #1502 at VMworld 2009 in San Francisco (http://www.vmworld.com/community/exhibitors/phdvirtual/), so pass by and check them out.


Sunday, 16 August 2009

As if by cloud magic...

In my last post I discussed the Microsoft Azure cloud service and suggested that companies competing in the cloud space, such as VMware, are now diversifying beyond the hypervisor to reach their cloud aspirations. VMware have since gone and done me a favour, making me look like a visionary of extreme intelligence, and bought SpringSource!

I have had all week to digest the views and opinions of various industry analysts and bloggers on the Spring acquisition, so here is my attempt (ARSE COVER DISCLAIMER - I am not a software architect/coder/guru/white-sandals-and-socks wearer, so excuse any rubbish) at predicting where the purchase will lead VMware's current business model and what will evolve from the acquisition. Lastly, I also highlight what the industry needs from any of VMware's cloud offerings.

VMware - Now not just a virtualisation company?

VMware are now kick-starting themselves into providing multi-tiered service offerings, and with the latest acquisition are slowly creeping up the layers of the stack, past just the underlying datacentre virtualisation technology. I feel this natural growth is mainly due to demand from the industry and current customers for more agile cloud-based services and coverage across more of the datacentre.

The SpringSource acquisition enables VMware to move into the PaaS (Platform as a Service) market. This effectively means they can provide organisations with an end-to-end operational stack, starting from the underlying virtual machine workload through to the presentation layer, for running JVM-type workloads and web services. With this framework VMware can provide presentation-layer building blocks alongside technology such as vApp, going further than the solid underlying platform infrastructure already present within the VDC-OS initiative.

Customers can also benefit from the confidence instilled through the partnerships and alliances formed in the datacentre virtualisation boom with the likes of EMC, Cisco, IBM and HP. What will also attract customers is the fact that Spring is open source and common knowledge to most people in development, meaning any future developments and offerings won't be a closed shop like Azure (we hope). Overall, within the cloud space I believe VMware will probably have the competitive edge in attracting the enterprise customer base when pitched against competitors such as Microsoft and Google, mainly due to their core datacentre infrastructure credentials and being respected as the company that established the de facto baselines for capable server virtualisation and consolidation products.

What does this mean for an Infrastructure bod?

Some people may be asking themselves (like me), "Well, VMware have bought these guys; what does that mean for me and my current investments in VMware technology?" From my observation the Spring acquisition isn't going to be a tactical point purchase like Propero, B-hive and Dunes; I see Spring as the key driver for VMware to start providing a sustainable, reputable cloud-based offering for current enterprise customers and for new potential customers who would otherwise be considering Azure and Google App Engine. Looking at the hooks available within Spring, and with Spring.NET, I presume you will even be able to extend and use app components on Microsoft Azure services, which would be a very smart move if it happens; it would mean the flexibility to run certain components on alternative clouds and not be locked into using the complete end-to-end VDC-OS stack.

Below is a self-compiled diagram showing my vision of what architecture within the VMware suite may start to look like. You might think it's a load of rubbish, and if you do then please pass comment, but I feel we will now see cloud being accepted more and more, thanks to VMware having an open framework, the capability for the application to be hosted anywhere, massive scalability options such as bursting workload on demand for extra capacity, and the ability to offer your organisation standard machine building blocks with vApp and virtual appliances.



The future vision

VDC-OS is quite clearly going to be VMware's stack for hosting cloud workloads privately and publicly. To provide openness within the industry, VMware will have the OVF format to ease hosting virtual workloads across multiple carriers, and vApps within the VMware space to define building blocks and provide metadata on the whereabouts of objects.

All of these items, which are being provided today, should ensure that the transition to a cloud architecture is achievable and cost-effective. I sincerely hope it won't mean a complete throw-away of your existing underlying virtual infrastructure, and I would hope you will have ease of portability to other platform providers if you want, or have a need within your enterprise, to do so.

For the VMware strategy to succeed, I think the industry needs some of the following ideas and functionality from cloud with VMware:

  • vApps and virtual appliances growing further with Spring and its turnkey middleware and web service building-block stacks; with the underlying vSphere platform available, this complete end-to-end stack can be turned into a movable, flexible workload across private and public cloud infrastructure. With that I would like to see live migration capability between clouds, and certainly more portability for external cloud usage. Private clouds can be hosted internally via the vCloud APIs; an example could be a company like SAP hosting SAP instances privately, with the hosted workload not limited by a firewall or boundary between them,
  • Federated security between clouds for any running workloads and applications, regardless of which datacentre or cloud hosting partner the platforms and applications sit on. I want security to be easy, and I want to be able to move between cloud providers when I want without penalty or obstacle,
  • Virtual desktop evolving to revolve less around a centrally hosted OS and more around an application-orientated strategy, with the applications hosted on any cloud and accessible via any browser or OS, similar to how Adobe AIR apps work,
  • Burst capability. With intuitive interaction between the app presentation layer and the core VDC-OS, I would hope to see more automated orchestration when, for example, you need to burst to more workload. Using vApp you can essentially achieve this with pre-automated configurations and knowledge of any external dependency held in the metadata, such as a database server that could become constrained if more app-stack workload were added; a rough sketch of the sort of logic I mean follows below.
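
For the burst point above, here is the kind of decision logic I have in mind. None of this is a real vCloud or vApp API; the names, fields and thresholds are purely placeholders for whatever the orchestration layer eventually exposes.

# Very rough sketch of burst orchestration driven by vApp-style metadata.
# None of these functions or fields are real VMware/vCloud API calls.

CPU_BURST_THRESHOLD = 85  # percent; illustrative only

vapp = {
    "name": "web-tier",
    "instances": 4,
    "max_instances": 10,
    # metadata describing external dependencies that could become the
    # new bottleneck if we simply keep adding app-tier workload
    "dependencies": [{"name": "db01", "type": "database", "headroom_pct": 20}],
}

def should_burst(cpu_pct, vapp):
    """Decide whether to add another instance, privately or in a public cloud."""
    if cpu_pct < CPU_BURST_THRESHOLD:
        return False, "within normal load"
    if vapp["instances"] >= vapp["max_instances"]:
        return False, "already at maximum instances"
    for dep in vapp["dependencies"]:
        if dep["headroom_pct"] < 10:
            return False, "dependency %s is constrained" % dep["name"]
    return True, "burst another instance (private or public cloud)"

ok, reason = should_burst(cpu_pct=92, vapp=vapp)
print(ok, "-", reason)   # True - burst another instance (private or public cloud)
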
Other avenues

I am going to go over some of my predicted possibilities in more detail in future posts once the dust settles on the Spring acquisition, but one of the prime benefits I can see is that we may start to see VMware building de facto, affordable middleware/web stacks just as Red Hat do with JBoss. This may additionally mean we move away from the performance and tuning arguments around running Java workloads in VMware virtual machines, away from the issues of ISV compatibility, and away from the expensive licensing constraints that the likes of Oracle impose on using DRS and multiple hosts for our benefit.

Summary

Well, that's enough babbling and digressing for one post. Hopefully I have raised some possible avenues that will arise for organisations from yet another VMware acquisition. This one is certainly an eye opener; it is going to be very interesting to see what the vibe is like at VMworld 2009 regarding the direction VMware are now moving in, and lastly it will be very interesting to see in 6-12 months' time how VMware shape up as a company with such a diverse acquisition.

Wednesday, 5 August 2009

Azure - Microsoft's new baby

I've noticed in the last week or so a bit of a rise in Hyper-V bashing, so I thought it was time to write a post on where I think this whole thing is going longer term, rather than just comparing the technical nuts and bolts of the underlying virtualisation tech.

Looking at how Hyper-V has performed in the analyst popularity stakes since its release, I don't think market share has improved much for Microsoft. Analysts are saying that really only Citrix XenServer is the competitor making single-percentage gains in market share against VMware, and if anything, Uncle Larry at Oracle is likely to gain more uptake of Oracle VM now the Sun acquisition has finally closed.

My view (if you want it) on Microsoft's strategy for Hyper-V is that they are now shifting their concentration to the Azure cloud platform rather than the underlying datacentre server virtualisation platform. My justification is that, in all reality, they see cloud almost as a software layer and an extension of services in your datacentre today, such as Exchange and SQL. Azure, being hosted within Microsoft's datacentres, will no doubt rely on Hyper-V, but to be honest it is the interfaces to those components that Microsoft will concentrate on exploiting; they are not in the game of making money from a hypervisor, which is why it is included with Windows 2008.

Microsoft would never be taken seriously running capacity planner exercises and engagements to work with you on configuring optimal infrastructure platforms for large consolidation ratios, and I feel neither would Gold Certified MS partners (most are VMware resellers anyway). Instead, I feel Microsoft will stick to what they know best and rely on their internal developer teams, who are capable of running such a beast within the application arena. This lets them kill two birds with one stone: concentrate on a strategy similar to Google's whilst gaining a foothold in the cloud service space.

There is speculative rumour that the domain www.office.com has been bought and registered for a new online version of Office, probably arriving when Office 2010 does, and MS also has Exchange 2010 on the roadmap for next year, which is going to be tailored for cloud environments. So if nothing else, keeping the focus on the hypervisor arms race with Hyper-V and flogging a possibly dead horse would risk losing ground on Messrs Page and Brin. Microsoft will no doubt spend less development money on Azure than they would designing hypervisor-related technology; most of the development I expect will come from blueprinting already done on things such as Live Services, MOSS and collaboration tools like MSN Messenger. And you don't necessarily need server virtualisation capability to run a cloud: you can provide ASP-like services with just a fully optimised application stack, which is something Microsoft has a better chance of providing with current portfolio offerings such as MOSS and Exchange and future technology on the horizon, meaning there is no need to focus on the underlying platform.

Cloud computing still has a rather large volume of unanswered questions and is still very much at the bleeding-edge stage, but it is clear that even the likes of VMware are not focusing on the hypervisor platform as much and are having to diversify, concentrating their efforts on the bigger picture of a cloud environment with vCloud and the other core components of VDC-OS. By following this strategy of being mostly an application cloud provider, Microsoft isn't seen as the conquer-all vendor taking on the hypervisor market; they are still classed as a fluffy software provider, but are able to keep on track in competing with the likes of Google. It also means they can resell services through partners and hosting companies, and still ensure that software partners such as Quest can use the APIs they offer to provide niche software beneficial to both parties.

It will be interesting to see if my predictions come true; if nothing else it's worth a stab at guessing, as in today's fast-paced world of cloud it can all be blown away by the wind in minutes :)


Sunday, 2 August 2009

Virtualisation within today's IT Frameworks

When I look at the IT frameworks currently followed within corporate environments, it still appears that people are erratically trying to bolt on the methodology and best practices created by new, game-changing technology. The area that most clearly shows this problem is the effort to bring processes up to date with the explosion caused by introducing virtualisation technology into datacentres.

The higher level issues

Today we have mature IT libraries and frameworks such as ITIL, MOF and many others, which have over the years focused on building operational and technical processes that you can tailor to operate a streamlined infrastructure (in theory). Libraries like ITIL had to start with a baseline somewhere, but with technology growth they need to continuously evolve with changing fads and paradigms. Consider that 3-4 years ago hardly anyone virtualised servers the way they do today, nobody had heard of the cloud let alone looked at using it, nobody outsourced as ridiculously as they try to do today, and we certainly were not under the same financial constraints we all now face due to the credit crunch.


So, enter enterprise server virtualisation into this arena around 2004/5, a technology with one of the biggest impacts on IT since the physical servers it has consolidated, and with many technological advances that have matured over the following five years to the point where it is clearly here for many moons to come. Yet even with this dramatic impact, IT frameworks still lag in providing any focus on how virtualisation technology changes process within datacentre operations. When I google "ITIL and Virtualisation" I get very little to suggest I'm wrong.


Meanwhile, back at the ranch in the datacentre, new technology or not, the operational processes still need to be followed to keep the ship running. Architects and operations guys who know virtualisation inside out, and have seen it mature to where it is today, desperately want process to change so that they are not continuously drowned in legacy process that limits the potential of the core technology they believe in. It isn't just virtualisation that suffers: add game-changing technologies such as Cisco UCS and EMC Symmetrix VMAX, which strip even more process out of the management and operation of the datacentre, and this is going to become more and more of a pain in the arse for you and me.

Yes I know...

Virtualisation and the underlying technology that supports the virtual landscape are deployed as a point solution, as are the example technologies changing the shape of datacentres: you procure them and you can solve common datacentre problems. The virtualisation ecosystem, the software and components you can deploy, can reduce your process by default, with very little need to implement fancy add-ons or interfaces; however, you still need to ensure that this potential is exploited correctly.

Any problem area of a datacentre that has been remediated by virtualisation previously had processes and framework activity expensively designed and structured around it when the infrastructure sat in the physical world. As an analogy, the physical world is rather like turning an oil tanker around in the Panama Canal; virtualisation replaces the oil tanker with something that can move at 1000mph, so you are now turning a hovercraft around in the Panama Canal instead. This speed and agility can be a double-edged sword: your process has to be a lot more streamlined and lean, and you need to be on top of defining processes for operational activity alongside your IT service management teams.

A comparison

An example of the contrast within vSphere is that virtual machine resource upgrades are now possible on the fly, with hot add of CPU/RAM done via the GUI as a soft change. The question is, does this type of thing require the full attention of the CAB? Surely it is low risk: it doesn't require an outage, and it doesn't go wrong (yet). Other examples include run book reductions in a DR scenario; using something like VMware SRM to replicate VMs negates much of this, where previously it would take probably half a department of people and a run book as long as your arm to initiate even a reduced-functionality test.
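
As an illustration of where I'd like that conversation with service management to land, here's a sketch of a pre-agreed "standard change" list; the categories and rules below are mine, invented for the example, not anything out of ITIL or VMware documentation.

# Illustrative only: the sort of pre-agreed "standard change" matrix you
# could negotiate with service management so routine virtual operations
# don't need a full CAB. The entries below are my examples, not ITIL's.

STANDARD_CHANGES = {
    "hot-add vCPU/RAM":        {"outage": False, "rollback": "remove resource / reboot"},
    "VMotion to another host": {"outage": False, "rollback": "VMotion back"},
    "snapshot before change":  {"outage": False, "rollback": "revert snapshot"},
}

def needs_full_cab(change, causes_outage):
    """Pre-approved, no-outage changes go through as standard changes;
    anything else still goes to the CAB."""
    if causes_outage:
        return True
    return change not in STANDARD_CHANGES

print(needs_full_cab("hot-add vCPU/RAM", causes_outage=False))       # False
print(needs_full_cab("storage array firmware", causes_outage=True))  # True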

So the question is: what can you do to transform your processes to align with a technology evolution like this?

People Change


The first place to start is by educating operational teams on what virtualisation actually is and how it changes the game for IT service management. You need to evangelise and show them how the technology works, ensure they see a benefit to their day-to-day job role and, most importantly, ensure they do not feel the new technology is likely to put them and their role at risk.

It's important to highlight any differences in technology and to explain clearly and concisely what has changed. Bull-in-a-china-shop tactics with service management players won't cut it; you need to explain the real basics first. Examples could include:
  • Release Management - Cover how day-to-day service automation and features such as VMotion work first, then work with them to define where you can reshape other process flows, for example the change mechanism for upgrading virtual resources or deploying new VMs via an automated script.
  • Root Cause Analysis/Incident Management - Explain that vSphere offers a full, visible, extensible internal monitoring repository showing what has changed and when, which can remove the need for witch hunts during RCA procedures. It is also extensible into current service management tools such as BMC Remedy, and into HP OpenView and MS SCOM via Veeam nworks.
  • Change Management - Cover things such as how VMotion means you do not need to power off VMs to shut down an ESX host, as the VMs can be migrated, and explain how other technology such as Storage VMotion can even reduce outages on your SAN.
  • Capacity Management - Explain resource pools and the ESX scheduler. This doesn't need to be to VCP depth; keep to a basic overview that virtual machines can be rate limited or given reservations in resource pools.
  • Release Management - Explain that you can take a snapshot before any change is made on a VM for instant rollback, and that, service dependent, this may negate the need for CAB approval by the change being deemed low risk (a rough sketch of this flow follows after the list). Also explain that you can pick up potential bottlenecks when P2V'ing machines by using technology such as VMware AppSpeed.
  • Continuity Management - Explain that the run book can be almost completely dissolved with DR technology such as SRM, and that you won't need to organise outages on bank holiday weekends when you should all be enjoying a pint down the pub!
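
To flesh out the snapshot-and-rollback point from the Release Management bullet, here's a rough sketch of the flow. take_snapshot, apply_change and friends are stand-ins you would wire up to your own tooling, not real vSphere API or PowerCLI calls.

# Rough sketch of the "snapshot, change, verify, roll back" flow.
# The step functions are placeholders passed in by the caller.

def change_with_instant_rollback(vm, take_snapshot, apply_change, verify, revert, remove):
    """Snapshot first so a failed change becomes a revert rather than an outage."""
    snap = take_snapshot(vm, name="pre-change")
    try:
        apply_change(vm)
        if verify(vm):
            remove(vm, snap)   # change is good: tidy up the snapshot
            return "change applied"
        revert(vm, snap)       # verification failed: instant rollback
        return "rolled back"
    except Exception:
        revert(vm, snap)       # change blew up: instant rollback
        return "rolled back after error"

# Toy usage with stubbed-out steps, pretending the change broke something.
result = change_with_instant_rollback(
    vm="app01",
    take_snapshot=lambda vm, name: "snap-001",
    apply_change=lambda vm: None,
    verify=lambda vm: False,
    revert=lambda vm, snap: None,
    remove=lambda vm, snap: None,
)
print(result)   # rolled back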

These are all example areas to kick-start your approach; I could write hundreds of game changers that evolve from these technological advances. As with any new technology, the important thing is to view the processes you need to transform from a different angle. The people responsible for managing the operational processes will, nine times out of ten, want to emulate what they do today "because it works and that's how we have always done it". That's rubbish, I say: make sure you educate, and ensure the newer technology you are designing isn't just being put in as a project process exercise, so you actually get the best TCO out of it.


The Future and media methods

Looking at how we can solve the problem, I believe we will always be constrained in keeping processes up to date with new technology trends until we change how the IT moguls set the governing criteria for adoption into a formal library.

New technology mediums such as wikis and blogs may emerge as the actual methodology medium; subject to obvious approval by the relevant bodies, this is something I believe will become more and more accepted by the pragmatists. Look at how the encyclopedia has evolved: it went from book, to Encarta-type resource, through to where we mostly are today on the internet with wikis. The next logical step has to be moving from classroom-based environments to online Web 2.0 experiences with real-time updates as trends change.

Summary

Some of this you may think is rubbish, and hey, I don't proclaim to be an expert in ITIL. But this is something most people adopting new technology will face, so hopefully I have provided a possible approach to reduce process and deliver what matters most to your business: core quality services and streamlined delivery.

Saturday, 1 August 2009

vSphere Service Console and Partitioning

After doing some work on vSphere builds in the lab, it looks as though you cannot specify the desired size of a VMFS partition when installing vSphere via a manual deployment from CD.

Previous versions of ESX held /boot within the RHEL-formatted disk partitions. vSphere differs: the Service Console is now stored within a VMDK, which can be held either on a local VMFS partition or on a shared-storage VMFS LUN, so the console disk becomes just another VMDK with the flexibility that brings.



From my current observations, when performing a manual install of vSphere from DVD/ISO, it formats all of the remaining spare disk space as VMFS under an extended partition to hold the Service Console VMDK.

The Problem

So why is this a problem? In most environments it won't be; however, if you want the freedom to partition as you wish, and to keep the Service Console on a separate VMFS partition away from VMs on a local VMFS volume, then it is not possible via a manual install.

The Solution

The only way I have been able to achieve manual partitioning is by building a customised kickstart configuration file. vSphere appears to have changed on this front too: in previous versions you created a KS file by downloading it from the ESX web pages. With vSphere, the easiest way I found of building a KS file was to build the ESX host from CD first and then grab the ks.cfg file from /root. I then edited this file with the relevant storage parameters as follows:

part '/boot' --fstype=ext3 --size=1100 --onfirstdisk
part 'none' --fstype=vmkcore --size=110 --onfirstdisk
part 'Storage1' --fstype=vmfs3 --size=15000 --onfirstdisk
virtualdisk 'esxconsole' --size=7804 --onvmfs='Storage1'
part 'swap' --fstype=swap --size=800 --onvirtualdisk='esxconsole'
part '/var/log' --fstype=ext3 --size=2000 --onvirtualdisk='esxconsole'
part '/' --fstype=ext3 --size=5000 --grow --onvirtualdisk='esxconsole'

A word of advice: in all previous incarnations of ESX it was always best practice, when building a host, to either physically disconnect its connectivity to shared storage or remove any zoning to the SAN. This is again something I strongly recommend you do; if you don't, you may hose a complete VMFS LUN.

Hope this helps, and perhaps someone from VMware can shed some light on the benefits of using a VMDK for the Service Console, the obvious ones aside.

