Hardware is Hard, Software is Easy. Is 2011 the Year of the VSA?
I have done a lot of lab work with Virtual Storage Appliances, mainly because proper shared storage is hard to come by for lab time, so for the last few years I’ve run storage appliances as virtual machines instead.
Vendors that release software versions of their kit, or emulators of it, are high on my list of ones to watch, as IMHO it shows they are looking ahead.
Traditional storage vendors have made a very good living over the last decade selling custom, high-performance silicon – but this comes at a cost: designing custom ASICs and the code that drives them takes time because it involves high-tech fabrication, and even when that fabrication is outsourced it’s expensive and time-consuming.
It’s also harder to “turn the ship” if the market moves, as the vendor has significant resources committed to product development.
Mainframes maintained a similar position and have seen their market share eroded by commodity x86 hardware which, combined with clever software, delivers the same solutions with less hardware-vendor lock-in and typically at a lower cost.
Software is easy – well, relatively easy to change compared to hardware – so R&D cycles can be shorter, more agile and quicker to respond to market changes.
Changes and upgrades to custom chips have development lifecycles measured in years, and once a chip is fabricated and shipped to the masses it’s much harder to fix a problem found in the field. x86, by contrast, builds on a well-used, field-proven architecture, typically adopting a scale-out design over standardised interconnects (InfiniBand/Ethernet) to achieve higher performance – why re-invent the wheel?
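As a toy illustration of that scale-out idea (my own sketch in Python, not any vendor’s placement scheme – the node names, block IDs and `ConsistentHashRing` class are all made up), consistent hashing is one well-known way software spreads blocks across commodity nodes, so you add capacity and performance by adding boxes on ordinary Ethernet rather than by designing faster chips:

```python
# Hypothetical sketch: place storage blocks on commodity nodes with
# consistent hashing. Not any product's algorithm, just the idea.

import bisect
import hashlib

def _hash(key: str) -> int:
    # Any stable, well-spread hash will do for the illustration.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=64):
        self._ring = []                    # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):        # virtual nodes smooth the load
                self._ring.append((_hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    def node_for(self, block_id: str) -> str:
        # First ring entry clockwise from the block's hash, wrapping round.
        idx = bisect.bisect(self._keys, _hash(block_id)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("volume7/block42"))    # deterministic placement
```

The nice property, and the reason schemes like this show up in scale-out storage, is that adding a fourth node to the ring only remaps a small fraction of the blocks – which is what makes incremental growth on commodity kit practical.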
There will always be edge cases where ultra-low-latency interconnects can only be provided over on-die CPU traces – but for general compute, network and storage, as x86 and its ancillary interconnect technologies march ever faster, can equivalent functionality not be achieved with clever software on common hardware rather than raw physics and men in white coats?
As this cycle continues, can storage vendors keep making those margins, responding to customer requirements and staying ahead of the competition while tied to a custom silicon architecture? Or is it more advantageous to move to a commodity platform to deliver their solutions?
Using “clever” software like a hypervisor to abstract commodity x86 hardware means you can push storage functions such as snapshots, cloning and replication higher up the stack, making them less specific to hardware vendor X’s track/cylinder layout or backplane protocol Y.
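To make that concrete, here is a minimal copy-on-write snapshot sketch in Python – an in-memory dict stands in for a block device, and the `Volume` class and block values are entirely hypothetical – just to show that a snapshot can be a pure software construct: copy the block map, share the data, and diverge only on write.

```python
# Toy copy-on-write snapshot. Real VSAs do this against block devices
# and persistent metadata; a dict is just the simplest stand-in.

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block number -> contents

    def snapshot(self):
        # Copy only the block map (metadata); the block contents are
        # shared between the snapshot and the live volume.
        return dict(self.blocks)

    def write(self, block_no, data):
        # On write, the live map points at the new data; any earlier
        # snapshot still references the old block, untouched.
        self.blocks[block_no] = data

    def read(self, block_no):
        return self.blocks.get(block_no)

vol = Volume({0: b"boot", 1: b"data-v1"})
snap = vol.snapshot()              # instant, point-in-time view
vol.write(1, b"data-v2")           # live volume diverges after the write
assert vol.read(1) == b"data-v2"
assert snap[1] == b"data-v1"       # the snapshot still sees the old data
```

Nothing in that logic cares whose disks sit underneath it, which is exactly the point of pushing these functions up the stack.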
Building on x86 also means you can be selective about how you deploy: on bare hardware, or under a hypervisor like ESXi. Both use-cases are equally valid, and the cost to change between the two is minimal in development terms.
EMC are already committed to an x86 scale-out architecture for their platforms for this reason; even if the badge on the outside says EMC, it’s commodity kit with clever software rather than custom firmware running on custom chips. I expect all the competition are weighing up whether being a niche edge-case player or a high-performance general storage player is the better business play.
The open-source community also has some excellent projects in this space which are being spun out into commercial products. Traditional storage vendors, beware!
Virtual Storage Appliances (VSAs) are the next logical step in de-coupling storage services from hardware.
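Replication is a good example of a service that needs no custom backplane once it lives in software. A hedged sketch of synchronous mirroring across commodity nodes – again Python, with my own toy `MirroredVolume` and `DictStore` names, and dicts standing in for what would be iSCSI or NFS targets in practice:

```python
# Toy replication-in-software: mirror each write to N backing stores.
# Illustrative only, not any vendor's replication engine.

class DictStore:
    """Stand-in for a commodity storage node."""
    def __init__(self):
        self.blocks = {}
    def write(self, block_no, data):
        self.blocks[block_no] = data

class MirroredVolume:
    def __init__(self, replicas):
        self.replicas = list(replicas)   # anything with write(block, data)

    def write(self, block_no, data):
        # Synchronous mirroring: the write completes only once every
        # replica has the block. An async design would queue instead.
        for r in self.replicas:
            r.write(block_no, data)

a, b = DictStore(), DictStore()
vol = MirroredVolume([a, b])
vol.write(0, b"hello")
assert a.blocks[0] == b.blocks[0] == b"hello"
```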
Disclosure: I work for VMware, of which EMC is the majority shareholder – however, this isn’t an advert; it’s my opinion and experience.