My ramblings on the stuff that holds it all together
Using Virtualization to Extend The Hardware Lifecycle
In harder economic times, getting real money to spend on server refreshes is difficult. There are arguments that new kit is more power-efficient and supports higher VM/CPU-core densities, but the reality is that even if you can show a cost saving over time, most current project budgets are at best frozen until the economic uncertainty passes, at worst eliminated.
Although power costs have become increasingly visible because they’ve risen so much over the last 18 months, they are still a hidden cost to many organisations. If you run servers in your offices, where a facilities team picks up the bill, the overall energy savings from virtualization and hardware refresh don’t always get through.
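To make that hidden cost a bit more concrete, here’s a back-of-an-envelope sketch; the wattages, consolidation ratio and electricity price are all illustrative assumptions, not measured figures:

```python
# Rough power-saving estimate for consolidating old servers onto
# fewer virtualization hosts. All figures are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_power_cost(watts, price_per_kwh):
    """Yearly electricity cost for a constant draw of `watts`."""
    return (watts / 1000) * HOURS_PER_YEAR * price_per_kwh

# Assumed figures: ten old servers drawing ~400 W each, consolidated
# onto two newer hosts drawing ~500 W each, power at £0.12/kWh.
before = annual_power_cost(10 * 400, 0.12)
after = annual_power_cost(2 * 500, 0.12)

print(f"Before: £{before:,.0f}/yr, after: £{after:,.0f}/yr, "
      f"saving: £{before - after:,.0f}/yr")
```

Even with modest assumed figures like these, the saving runs to thousands of pounds a year — but it only shows up if someone is actually looking at the electricity bill.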
So, I propose some alternative thinking to ride out the recession: make the kit you have, and can’t get budget to replace, last longer, while delivering a basic disaster recovery or test & development platform (business value) in the meantime.
Breaking the Cycle
In the traditional Wintel world, the server, OS, app and configuration are all tightly integrated. It’s hard to move a Windows install from an HP server to a cheaper Dell server, for example, without a reinstall or at least some in-depth registry surgery. You can use PlateSpin products to do a P2P conversion, but they come at a cost (see the budget point above).
Let’s take an example: you have a Microsoft Windows 2003 server loaded with BizTalk Server and a bunch of custom orchestrations, running on an HP DL380 G2. If the motherboard on that server were to die, could you get a replacement quickly, or at all? Do you have to carry the cost of a Care Pack on that server? And because it’s gone “end of life”, what is the SLA around replacement hardware that becomes increasingly scarce as supplier stocks are used up?
If you can’t get hold of replacement hardware in time, what about restoring to an alternative server that you do have spare, for example a Dell PowerEdge? That type of bare-metal recovery is still not a simple task due to the drivers and OS-level components involved; it’s laden with risk and depends on 3rd-party backup software which you needed to have in place beforehand.
Are your backups and recovery procedures good? Tested last week? Yes, they should be, but are they? Will the new array controller drivers, or old firmware, cause problems with your AV software or management agents, for example?
Virtualization makes this simpler: the hypervisor layer abstracts the complicated bit that you care about (the OS/app configuration, the “workload”) from the underlying hardware, which is essentially a commodity these days; it’s just a “server”.
So, if you virtualize your workload and the underlying hardware dies (for example that old HP DL380 G2), restarting that workload on an alternative piece of hardware like the Dell is very simple: no complicated drivers or OS reinstallation, just start it up and go. If you have shared storage this is even simpler; you might even have had a chance to proactively move workloads away from a failing server using vMotion.
Even if you only run one VM per piece of physical hardware to maintain near-equivalent performance, because you can’t purchase a new, more powerful host (VMware call this containment), you’ve broken the hardware/OS ties and made replacement easier as and when you are able to do so. VMware provide the VMware Converter tool, which is free/cheap; version 4 does almost everything you could ever want in a P2V tool to achieve this virtualization goal. Failing that, PlateSpin PowerConvert is cheap for a one-off conversion.
So, this leads to my point: virtualization can effectively extend the life of your server hardware. If it’s gone out of official vendor support, do you care as much? Because the hypervisor has broken the tight workload/hardware integration, you are less tied to a continual refresh cycle as hardware goes in and out of vendor support. You can almost treat it as disposable: when it dies or has problems, throw it away or cannibalise it for spare parts to keep other similar servers going; it’s just “capacity”.
Shiny New or 2nd Hand?
Another angle on this is that businesses almost always buy new hardware, direct from a reseller or manufacturer, traditionally because it’s best practice and you are less likely to have problems with new kit. The reality is that with virtualization, server hardware is actually pretty flexible, serviceable and, as I hope I’ve demonstrated here, disposable.
For example, look on eBay: there are hundreds of recent 2nd-hand servers and storage arrays on the open market; maybe that’s something to do with the number of companies currently going into administration (hmm).
What’s to stop your department or project from buying some 2nd-hand or liquidated servers? You’ll probably pay a tiny fraction of the “new” price, and as I hope I’ve shown here, if one dies you’ll probably have saved enough money overall to replace it — or you can buy some spares up-front to deal with failures in a controlled way.
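As a hypothetical worked example (the prices below are made-up round numbers purely for illustration, not quotes for any real kit):

```python
# Hypothetical comparison: one new server versus an equivalent
# 2nd-hand server plus two cold spares bought up-front.

new_server = 4000              # assumed list price of a new server
used_server = 400              # assumed 2nd-hand/liquidation price
cold_spares = 2 * used_server  # two spare chassis for controlled failure handling

used_total = used_server + cold_spares
saving = new_server - used_total

print(f"2nd-hand + spares: £{used_total}, saving vs new: £{saving}")
```

Even after buying two whole spare servers as insurance, the 2nd-hand route comes out far ahead on these assumptions — and the spares turn hardware failure from an emergency into routine maintenance.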
This type of commoditisation is where Google really have things sorted – this is exactly the same approach they have taken to their infrastructure, and virtualization is what gets you there now.
Recycle for DR/Dev/Test
Alternatively, if you can show a cost saving through a production kit refresh and are lucky enough to get some budget to buy servers, you can recycle the older kit and use ESXi to set up a lab or a very basic DR facility.
Cannibalise the decommissioned servers to build fewer, loaded-up hosts that can run restored copies of virtual machines in a DR situation. Your organization has already purchased this equipment, so this is a good way to show your management how you are extending the life-cycle of previous hardware “investments”, delivering greater RoI etc. Heck, I’m sure you could get a “green message” out of that as well 🙂
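To sketch the sizing of such a loaded-up DR host: the RAM figures and overcommit ratio below are assumptions for illustration only, and real sizing would also need to account for CPU, storage and network:

```python
import math

# How many cannibalised DR hosts do the restored VMs need, sized on RAM?
host_ram_gb = 32                    # one chassis stuffed with DIMMs from its siblings
vm_ram_gb = [8, 4, 4, 2, 2, 1, 1]   # assumed restored production workloads
overcommit = 1.5                    # modest memory overcommit, acceptable for DR

needed_gb = sum(vm_ram_gb) / overcommit
hosts_needed = math.ceil(needed_gb / host_ram_gb)
print(f"{sum(vm_ram_gb)} GB of VM RAM fits on {hosts_needed} host(s) "
      f"at {overcommit}x overcommit")
```

The point being that a handful of old chassis, stripped for parts and consolidated, can plausibly carry an entire production estate in a degraded-but-running DR state.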
If you are able to do so, you can run this in parallel at an alternative site to the refreshed production system, acting as a DR site; virtualization makes the “workloads” entirely portable across sites, servers and storage.
I do realise that this post is somewhat of a simplification and ignores the power/hosting costs and the new functionality of new hardware, but the reality is that these are still often sunk/invisible costs to many small and medium businesses.
There is still a wide perception that purchased hardware is an investment by a business, rather than the commodity that the IT community regards it as.
An analogy I often use is with company cars and vans: they are well established as depreciating, disposable assets to a business, and more often than not are leased and regularly replaced for this very reason. If you can’t get management to buy into this mindset for IT hardware, virtualization is your only sane solution.
In summary, you can show the powers that be that you can make servers last longer by virtualizing and cannibalising them. This was a lot harder to do before virtualization came along, as it all meant downtime, hands-on work and risk; now it’s just configuration and change.