Wednesday, 24 March 2010
VMlover is moving...
Hi all, I am currently in the process of migrating this blog to WordPress on a private host....
Apologies for any delays during the transition; I am hoping the move to WordPress will give you, the reader, an all-round better experience.
Hopefully the domain will transfer in the next 24/48 hours/years
Wednesday, 10 March 2010
Life on the other side
The ex-CEO of Sun Microsystems has just started his blog with a bang....
http://jonathanischwartz.wordpress.com/
A good read, and I'm hoping a book will appear soon...
Monday, 8 March 2010
Now for something Cloud related...
This post is short and sweet....
After another week of Cloud news it strikes me that Cloud is the new black. The industry is full of large software corporations who have made their owners and shareholders extremely rich on 20-30 years of Client-Server business strategy that was a million miles away from Cloud, and most of them are now pinning the future of their business strategy on the Cloud model.
This has started to sum one thing up for me: I think ISVs want to host services in the Cloud so they can do what humans do and learn from their mistakes, making sure those episodes of history never come knocking at their door again, by:
- Having complete, centralised, big-brother monitoring of how the offered software is used, with no stone left unturned on organisational usage of the offered cloud services. This means no more true-ups and no more EAs or GAs operated on a trust basis (a minimal metering sketch follows this list)
- Being able to change Software license models at the drop of a hat (a bit like your gas/electricity bill),
- Designing applications without excessive, long development periods, so that they can get potentially underdeveloped software out to the end user sooner and start to be more lucrative than competitors, with that underdeveloped software able to be amended at later points in time,
- Killing the VAR/SI relationships for quicker POs and to reduce the internal ISV overhead of relationship management
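To make the first two points concrete, here is a minimal sketch (the per-use rates, actions and organisation names are all invented for illustration, not any real ISV's billing system) of how centrally metered usage removes the need for trust-based true-ups and lets the vendor change the pricing model whenever it likes:

```python
from collections import defaultdict

# Hypothetical rates - the vendor can change these "at the drop of a hat".
RATE_PER_REPORT_RUN = 0.02   # charge per report executed
RATE_PER_ACTIVE_USER = 1.50  # charge per distinct user per month

def monthly_bill(usage_events):
    """usage_events: (org, user, action) tuples captured centrally by the hosted service."""
    runs = defaultdict(int)
    users = defaultdict(set)
    for org, user, action in usage_events:
        if action == "report_run":
            runs[org] += 1
        users[org].add(user)
    return {org: runs[org] * RATE_PER_REPORT_RUN + len(users[org]) * RATE_PER_ACTIVE_USER
            for org in users}

events = [
    ("acme", "alice", "report_run"),
    ("acme", "bob", "login"),
    ("globex", "carol", "report_run"),
    ("globex", "carol", "report_run"),
]
print(monthly_bill(events))  # every unit of usage is already counted - nothing left to true up
```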
Tuesday, 2 March 2010
Home NAS - Iomega IX4-200d
I had the great pleasure of getting my hands on a StorCenter IX4-200d monster NAS box, and I can safely say it has been a massively beneficial piece of kit for my day-to-day home storage needs. Here is my review and feedback on what I think is a great product for the home and for anyone wanting to do some good home lab testing.
Previously I ran VMs from a USB drive with Workstation 7 (WS7), with about three individual SATA disks homed in my local PC for other storage such as iTunes, MP3s, movies and pics. This worried me: it wasn't RAIDed, it was certainly starting to run low on free space, and there is only so long you can spend running ESX in a VM on WS7. So I was looking around for a NAS solution that would be cost effective, fit the bill functionality-wise and provide good all-round storage space. Originally I'd dabbled with the lower-end IX2-200d and thought maybe it would suffice, but after a think something told me that while a 1TB NAS (with RAID of course) would do for the first 6-12 months, as my lab grew I would probably end up wanting more space, so the IX4-200d was, yes, initially oversized at 4TB (2.7TB usable with RAID 5), but I know it will keep me more than satisfied for future storage needs.
Key selling points and benefits I've found for the IX4-200d are as follows;
- Small form factor - it looked MASSIVE on the Iomega site and I imagined it arriving like an EMC Celerra would! However, I was pleasantly surprised when I got it out of the box (which was rather small too, I might add) to find it was no more than about 10x10 inches, which is excellent for storing in my small lab :)
- Setup - Setting this up was a breeze; I was literally up and running in about 10 minutes. It comes RAIDed and pre-packaged from the factory (and additionally supports RAID 1, 5 and 10), ready to go. So easy, and to be honest this is the way it should be
- Protocol/Application support - Iomega is an EMC company, and all Iomega StorCenter devices are fully HCL compatible with vSphere/ESX when using iSCSI and NFS, which is great as I use this for my home lab (see the next section on this). Additionally, DLNA support is available so I can stream movies to my DLNA-compatible TV and also my PS3! Again excellent for playing music and browsing my camera photos
- Speed - Time Machine backups ran at about 20-25MB/s, and general VM traffic across NFS is really quick. Copying files to and from the box is more than acceptable speed-wise, and vSphere VMs run, clone and so on quite happily at an acceptable speed
- File migration - A cool feature is being able to connect my original USB disk and copy its contents onto the NAS without pushing the files across the network; I can also access the USB drive over a CIFS share - all very good timesavers.
- LED screen - This is native to the IX4 and not available on the other models; you get a nice little display showing storage volume capacity and other details such as the IP address of the box
As I said, I use this at home for my home lab; I recently purchased an HP ML110 for lab testing and it works very well with the IX4-200d. Coupled with good vSphere kit, the IX is a great piece of kit for consolidating both your home media storage needs and your home lab needs onto one easy-to-use and easy-to-manage NAS. It has block- and file-level storage capability, which is great for playing with both kinds of storage protocol, and overall, when used with 1Gb networking, it is better than any extremely beefed-up PC running Workstation 7. Overall I am very pleased; I am a namby-pamby hands-off architect in my day job, but using this gives me everything I would typically get when using vSphere day to day or in a lab environment back in the office.
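Since I mention the file-level side, here is a minimal sketch of how I could register the IX4's NFS export as a vSphere datastore. It assumes an ESX host where the standard esxcfg-nas tool is available on the service console, and the IP address, export path and datastore label are made-up values for my lab rather than anything the product dictates:

```python
import subprocess

# Assumed lab values - swap in your own NAS IP and the NFS folder created in the Iomega web UI.
NAS_IP = "192.168.1.50"
NFS_EXPORT = "/nfs/vmware"      # hypothetical export path
DATASTORE_LABEL = "ix4-nfs01"   # hypothetical datastore name

def add_nfs_datastore(host, share, label):
    """Register an NFS export as a vSphere datastore using the standard esxcfg-nas tool."""
    subprocess.check_call(["esxcfg-nas", "-a", "-o", host, "-s", share, label])

def list_nfs_datastores():
    """List the current NFS mounts so we can confirm the add worked."""
    subprocess.check_call(["esxcfg-nas", "-l"])

if __name__ == "__main__":
    add_nfs_datastore(NAS_IP, NFS_EXPORT, DATASTORE_LABEL)
    list_nfs_datastores()
```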
Product feedback requests
Now for the bad bit....what the IX4-200D doesn't do for me;
- Mozy support - I use Mozy backup, and it is only supported when the IX is connected to a PC over USB (basically as a big USB drive); you then point the Mozy PC client at the relevant mapped drive. I seriously hope EMC/Iomega release an update soon!
- Copying files between CIFS shares can't be done from the admin web page; I'd like to be able to copy files locally via the Lifeline OS rather than having to mount the volumes on another device
- Being a techy, I wouldn't mind seeing a bit more performance monitoring of the core hardware; things like disk read/write speeds would be a great option, even if not enabled by default
Great resources for IX4-200d
- Gabrie "Gabe" van Zanten has an excellent write-up putting the StorCenter to the test and providing some benchmarking: http://www.gabesvirtualworld.com/?p=909
- Simon "techhead" Seagrave does a great piece on a complete home lab build with IX4-200d as one of the storage options http://www.techhead.co.uk/vmware-esxi-home-lab-why-what-and-how-considerations-when-building-your-own-home-lab
- www.iomega.com of course!
Tuesday, 9 February 2010
Application efficiency in the Cloud
After visiting a few recent CloudCamps and talking to some very bleeding-edge and clever individuals who develop applications and object-based storage for the cloud, I felt compelled to blog my thoughts on the subject.
It appears that the mainstream adoption of public Cloud is starting to change the mindset on how developers write code for web-based applications; certain disciplines in the dev world really are changing how they architect applications. Another news item that perhaps rounds off my thinking is Facebook recently releasing details of new PHP work that they claim cuts CPU overhead by 50% compared with their legacy code.
This is great news: developers who are building applications on compute within the Cloud's abstraction layer seem to be moving away from designing and developing lazy code that was unoptimised, hogged as much infrastructure resource as it could take, didn't scale horizontally because it relied on hardware that scaled up (such as the good old mainframe), and was heavily underutilised whenever it wasn't running its associated task.
So why do I think this is happening? Well, it's simple....COST VISIBILITY! Apologies for the caps, but I feel the direct metered costs that apply instantaneously in any typical public IaaS or PaaS model are driving this optimisation. Early adopters of cloud are starting to realise that they have an incentive to save cost with an optimised platform that doesn't need uber amounts of compute resource and is therefore cheaper.
Additionally, at a more strategic level, I think cost visibility becomes higher the further you go into the Cloud. The diagram below shows how the insourced model of IT has very low cost visibility (in most organisations), moving through outsourced/managed-service IT that is billed to the client on monthly cycles, all the way to Cloud-hosted IT components whose costs are instantaneously visible to consumers.
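To illustrate what that instantaneous visibility looks like, here is a minimal sketch comparing the metered monthly bill of a lazy, over-provisioned app with an optimised one; the hourly rates and instance counts are made-up illustrative figures, not any provider's real pricing:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_bill(instances, hourly_rate):
    """Metered IaaS cost: you pay for every instance-hour, busy or idle."""
    return instances * hourly_rate * HOURS_PER_MONTH

# Made-up figures: a lazy, over-provisioned deployment vs an optimised one.
lazy = monthly_bill(instances=4, hourly_rate=0.40)       # big instances, mostly idle
optimised = monthly_bill(instances=2, hourly_rate=0.10)  # right-sized instances

print("lazy deployment:      %8.2f per month" % lazy)        # 1168.00
print("optimised deployment: %8.2f per month" % optimised)   #  146.00
print("visible saving:       %8.2f per month" % (lazy - optimised))
```

The point is not the numbers themselves but that the bill is visible the moment the meter ticks, which is exactly the pressure that was missing in the insourced world.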
Summary
So that's my summarised thought. You might think I am being naive, but I do believe we may be starting to see the growth of more cost-conscious, optimised applications, for the better of both the app and infrastructure environments, rather than the legacy world where the two have constantly acted like oil on water.
Thursday, 28 January 2010
Cloud Overbooking - Part 2
Following last week's post, which looked at why I think algorithms similar to those used in the airline overbooking model may be needed within a public cloud provider such as EC2, this second part provides some predictions on the similar software and associated modules that may start to arise within the world of Cloud providers.
As stated in Part 1, I work for an airline. That doesn't mean this post will include industry secrets, but what I will provide is a comparison with the technologies used to make the overbooking strategy work.
The cloud revenue calculation thingy
Today the airline industry has various commercially available software technologies that calculate what an airline can make from different seating strategies on certain key flights. Don't ask me how it does it, but the companies that have designed this software are certainly not short of a bob or two, meaning it is very niche, very clever (and it works).
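To give a flavour of the kind of sum this software is doing, here is a minimal sketch of choosing how many seats to sell above physical capacity; the fare, show-up probability and compensation figures are invented for illustration, and real revenue-management systems are far more sophisticated than this:

```python
from math import comb

def expected_profit(tickets_sold, capacity, fare, show_prob, bump_cost):
    """Expected profit when shows ~ Binomial(tickets_sold, show_prob):
    revenue from every ticket sold, minus compensation for anyone bumped off the flight."""
    revenue = tickets_sold * fare
    expected_bumped = 0.0
    for shows in range(capacity + 1, tickets_sold + 1):
        p = comb(tickets_sold, shows) * show_prob**shows * (1 - show_prob)**(tickets_sold - shows)
        expected_bumped += (shows - capacity) * p
    return revenue - bump_cost * expected_bumped

CAPACITY, FARE, SHOW_PROB, BUMP_COST = 180, 120.0, 0.92, 400.0

best = max(range(CAPACITY, CAPACITY + 21),
           key=lambda n: expected_profit(n, CAPACITY, FARE, SHOW_PROB, BUMP_COST))
print("best number of tickets to sell:", best)  # comfortably above the 180 physical seats
```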
Comparing this to the world of public clouds, I think we may see software ecosystems arise just as they have in the reservation and booking worlds. A couple of thoughts on what may or may not emerge within the future state of cloud computing;
- Third parties selling software to public cloud providers to calculate the optimal times or prices to charge customers - or does Amazon already do this?
- If cloud is going to provide the same fluidity and flexibility as, say, your electricity at home, will we see variable seasonal or peak pricing emerge once the Cloud becomes more heavily adopted and resource becomes scarce?
Will we see larger customers gain first-class citizenship in multi-tenant environments and receive higher weighting and priority when resources become scarce, much as airlines treat frequent flyers, while a model also exists where smaller customers are more exposed and at risk, like economy travellers in the overbooking model, paying less for the service but running the risk of being bumped off the core underlying cloud service? I am only speculating here; it is difficult to know what really goes on within public cloud business plans, but it may start to become more apparent as people transition from conventional outsourced models into cloud-based environments.
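Purely as a thought experiment (the tenant names, weights and capacity figures are all invented, and no provider has published how it actually does this), here is a minimal sketch of the kind of tiered allocation that could decide who keeps their resources when a cloud becomes oversubscribed:

```python
def allocate(capacity, requests):
    """requests: (tenant, tier_weight, vms_requested) tuples; higher weight = more 'first class'.
    Serve tenants in weight order, so whoever is left when capacity runs out gets bumped."""
    granted, bumped = {}, {}
    for tenant, weight, vms in sorted(requests, key=lambda r: r[1], reverse=True):
        give = min(vms, capacity)
        capacity -= give
        granted[tenant] = give
        if give < vms:
            bumped[tenant] = vms - give
    return granted, bumped

requests = [
    ("big-enterprise", 10, 400),   # premium tenant, highest weighting
    ("mid-tier-isv",    5, 300),
    ("economy-startup", 1, 200),   # pays least, first to be squeezed
]

granted, bumped = allocate(capacity=800, requests=requests)
print("granted:", granted)  # economy-startup only gets 100 of its 200 VMs
print("bumped: ", bumped)   # {'economy-startup': 100}
```

A real provider would presumably pair this with compensation or service credits for the bumped tenants, just as airlines do for bumped passengers.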
Screen Scrapers
Just another crazy thought that I'll leave you with, completely separate from overbooking, regarding the potential role of screen scrapers in "Cloud commerce". In the reservation world, screen scrapers play havoc with travel industry websites if they are not controlled. In a nutshell, a screen scraper is a third party scraping, say, an airline booking site to scour for the best deal. If not controlled correctly, scrapers play havoc with the underlying e-commerce environment because they consume transactional capacity, and the experience of the real human end user using the website directly suffers. Screen scrapers can work in an airline's favour though: some airlines have agreements with third parties to "scrape", and some airlines have partnerships with third parties who provide indirect services.
So within the world of Cloud services, are we going to see an influx of parties screen scraping big players like EC2 and draining their e-commerce portals? Imagine hundreds upon hundreds of screen scrapers scouring the main portals to see whether EC2 has a good price, suitable AMIs, suitable SLAs (don't laugh) and many other characteristics. It would degrade the service for end users and potentially steer them to competitors......just more crazy thoughts that I'll leave you with.
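On the "if not controlled correctly" point, here is a minimal sketch of the sort of token-bucket throttle a provider's pricing portal could put in front of its catalogue so scrapers don't crowd out real customers; the per-client limits are invented for illustration:

```python
import time

class TokenBucket:
    """Allow a burst of `capacity` requests per client, refilled at `rate` requests per second."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.time()

    def allow(self):
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per client IP

def handle_price_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=1.0, capacity=5))
    if not bucket.allow():
        return "429 Too Many Requests - back off, scraper"
    return "200 OK - here is the current price list"

# A scraper hammering the portal quickly burns through its burst allowance:
for i in range(8):
    print(i, handle_price_request("203.0.113.7"))
```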
That's all folks, until next time
Tuesday, 19 January 2010
Cloud Overbooking - Part 1
Now for something cloud related, as I haven't waffled on about cloud for a while. This two-part series (it got too long for one post) is about oversubscription, or over-allocation, strategy in the public cloud world. In this first part I will use the airline reservation overbooking strategy as an example of where similar algorithms may start to be needed to calculate workload allocation in a typical open public Cloud provider. This post was also supercharged by this excellent post on what the blogosphere classes as the difference between capacity oversubscription and over-capacity models within the Amazon EC2 service.
So ever been bumped up or bumped off?
No, this isn't a question about your mafia status; I am talking about flight bookings. As you may have noticed from the "about me" section, I currently work for an airline, so I will use some of the (small) knowledge I have gained about how the oversubscription model works in our world. It is a well-known fact that the airline industry is one of a number of industries that "overbook" certain flights; see this definition for the full gory detail of how the whole process works behind the scenes, but in a nutshell it is an algorithm used by the travel industry to work towards full capacity on certain flights by taking more upfront purchases than there are seats in the reservation system. Overbooking tends to affect the entry-level economy passenger who is paying less for his seat and is likely to be less of a regular customer, and overbooked passengers are covered by compensation in many shapes and forms, such as a seat on the next available flight or a sum of cash that makes them happy.
Hopefully, having read that brief detail of how the overbooking model works, you can see why I am beginning to think we are going to see an overbooking or oversubscription-type strategy adopted within public Clouds. To justify the comparison: simplistic marketing from public cloud companies states that you can buy a workload in EC2 from the Cloud provider and assume it will provide you with the compute and networking you would get if hosting on premise. Based on that, in a shared multi-tenant public cloud, do you think the same rules could apply to allocation models for cloud workloads?
Rate of change of public cloud a problem?
Public Cloud adoption is happening at a very fast rate. In future I expect public cloud providers such as EC2 to start hitting big problems in accommodating large volumes of customer demand, and I also predict that public cloud is certainly not capable of concurrently serving every single customer that has ever laid eyes on a public cloud virtual machine. Therefore I believe that, to succeed, public Cloud providers are going to need to look seriously at the level of service they can offer and design an algorithm similar to the one airlines have developed within the overbooking model. Remember, you are not always guaranteed the seat on the plane that you want, but most customers are happy to take compensation in return. Interestingly, the compensation from a public cloud provider is not likely to be high if you fail to get the workload you require....
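Purely as a sketch of the idea (the utilisation figures and limits are invented, and this is not how EC2 or anyone else actually plans capacity), a provider could decide how many VM reservations to accept above its physical capacity based on how much of a reserved VM is typically active at any one time, in the same spirit as the airline algorithm:

```python
def max_reservations(physical_vm_slots, avg_active_fraction, safety_margin=0.10):
    """How many reservations can be accepted if, on average, only a fraction of reserved
    VMs are powered on and busy at the same time (keeping back a safety margin)."""
    usable = physical_vm_slots * (1 - safety_margin)
    return int(usable / avg_active_fraction)

def expected_shortfall(reservations, physical_vm_slots, avg_active_fraction):
    """Rough expected slot shortfall if demand runs at the average active fraction."""
    return max(0.0, reservations * avg_active_fraction - physical_vm_slots)

slots = 1000
accepted = max_reservations(slots, avg_active_fraction=0.40)
print("physical slots:        ", slots)
print("reservations accepted: ", accepted)  # 2250 - oversubscribed, like an overbooked flight
print("expected shortfall:    ", expected_shortfall(accepted, slots, 0.40))  # 0.0 on an average day; the spikes are where 'bumping' happens
```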
Summary
I admit that this comparison between Cloud providers and airline reservations is quite a cynical view, but to put it into perspective, my view is that EC2 or any other public cloud provider that struggles to control who can buy a workload and who wants to use a workload is going to hit massive PR and customer-relations problems, just as an airline does when it unfortunately overbooks a flight by 20-30 economy passengers.
In Part Two I delve into various areas and technologies that exist today in the airline reservation world and look at how they may emerge within the world of cloud, as potential problems or as answers to common problems.