VMware have an interesting proof-of-concept document posted online here. This is great progress for the platform, and it can only be helped by the close partnership with Cisco that has produced the Nexus 1000V switch.
I’m no networking expert, but to my understanding there are issues with extending Layer 2 networks across multiple physical locations that need to be resolved before this is a safe configuration. Traditional technologies like spanning tree can present challenges for inter-DC flat VLANs, so they need to be designed carefully, perhaps using MPLS as a more suitable inter-DC transport.
The interesting part for me is that this would be nirvana for VMware’s vCloud programme, where services can be migrated on/off-premise to/from 3rd-party providers as required and without downtime. This is doable now, with some downtime, via careful planning and some tools, but this proposition extends vMotion’s zero-downtime migration to vCloud.
As this technology and the relevant VM/storage best practice filter out of VMware and into service providers and customers, this could become a supportable service offering for vCloud Service Providers.
To achieve this you still need storage access from both sites; to me the next logical step is to combine vMotion and FT technologies with some kind of host-based replication or storage virtualization like the DataCore products. This would remove the dependency on a single storage device (and thus a potential SPOF) for vMotion/FT.
Virtualizing/replicating the actual VM storage between different arrays and storage types (EMC—>HP, or even DAS—>EMC), and encapsulating it over standard IP links rather than relying on complicated and proprietary array-based replication and dedicated fibre connectivity, is going to be a key success factor for vCloud. It’s interesting to see all the recent work on formalising FCoE, along with other WAN-capable standards like iSCSI.
Some further reading on how I see “the cloud” evolving at a more practical level here.
Hi Simon, I have also just been reading this article. I think probably the biggest issue companies will have is available bandwidth between DCs.
I would be interested to see what minimum bandwidth is needed to run a successful vMotion.
I wonder if it will be latency or bandwidth that kills it?
Before I had my GbE switch at home I could easily do vMotions between hosts over an old 100Mb hub. 100Mb WAN connections (at least within the UK) aren’t beyond the realms of affordability for many organisations, and 1Gb LES links are also not too bad within the M25.
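To make the bandwidth-vs-latency question a bit more concrete, here’s a rough back-of-envelope model of a pre-copy style memory migration over a constrained link. All the numbers and the stop-copy threshold are illustrative assumptions, not VMware specifications — the point is just that convergence depends on the dirty rate relative to the link, not on link speed alone.

```python
# Back-of-envelope model of an iterative pre-copy migration over a slow link.
# Numbers are illustrative assumptions, not VMware vMotion internals.

def precopy_time(mem_gb, dirty_mb_s, link_mb_s, stop_copy_mb=64, max_rounds=20):
    """Estimate total pre-copy time in seconds, or None if it never converges."""
    remaining_mb = mem_gb * 1024.0
    total_s = 0.0
    for _ in range(max_rounds):
        round_s = remaining_mb / link_mb_s    # time to ship the current dirty set
        total_s += round_s
        remaining_mb = dirty_mb_s * round_s   # pages dirtied during that round
        if remaining_mb <= stop_copy_mb:      # small enough to stop-and-copy
            return total_s + remaining_mb / link_mb_s
    return None  # dirty rate outpaces the link; migration would not converge

# A 4GB guest dirtying 2MB/s converges over a ~12MB/s (roughly 100Mb) link...
print(precopy_time(4, 2, 12))
# ...but a write-heavy guest dirtying 20MB/s on the same link never does.
print(precopy_time(4, 20, 12))
```

So a quiet guest explains why a 100Mb hub was workable at home, while a busy one is where WAN bandwidth (and latency on the final stop-and-copy) starts to bite.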
The article doesn’t make this clear, but I assume it’s still using a single piece of storage which is accessible from both DCs. If VMware could make long-distance vMotion understand array mirroring (or 3rd-party storage virtualization like DataCore etc.), then the actual bandwidth between the two DCs would be slightly less of a concern, as ongoing execution happens from DC-local storage and you rely on the array to ship the data to the secondary node’s storage. This would need some kind of checkpoint/control mechanism which ESX controls – as long as you can sync the LUN/VMFS deltas between the two sites at the storage level with an acceptable latency, vMotion should cope (and I’ve made this work over a 100Mb hub with ESX 3.5 before now).
This is similar to how EMC implemented long distance/stretched failover clusters with Exchange 2003/SQL – there was a plug-in product that handled the checkpointing of replication and reporting to the host.
Maybe not within the current release of the product, unless the new vSphere SATP/PSP modules can implement this? – VMware – feature request? 😉
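The checkpoint/control handshake described above could be sketched roughly like this. Everything here is hypothetical — `ReplicaLink`, the latency budget, and the flow are assumptions standing in for whatever an array or storage-virtualization layer would actually expose — but it shows the shape of the decision: drain the outstanding deltas, and only switch execution if the drain fits inside the latency the migration can tolerate.

```python
# Hypothetical sketch of a checkpoint/switch-over handshake in which the
# hypervisor coordinates with storage-level replication before completing
# a long-distance migration. Not a real VMware or DataCore API.
import time

class ReplicaLink:
    """Stand-in for an array/storage-virtualization replication session."""
    def __init__(self, apply_lag_s):
        self.apply_lag_s = apply_lag_s   # time for the remote array to drain deltas

    def flush_deltas(self):
        time.sleep(self.apply_lag_s)     # block until the remote copy is in sync
        return True

def checkpointed_switchover(link, latency_budget_s=5.0):
    """Quiesce, sync storage, and switch execution only if the delta drain
    fits inside the latency the migration can tolerate."""
    start = time.monotonic()
    in_sync = link.flush_deltas()        # ship outstanding LUN/VMFS deltas
    elapsed = time.monotonic() - start
    if in_sync and elapsed <= latency_budget_s:
        return "switch to secondary DC"  # execution resumes on DC-local storage
    return "abort: replication lag exceeds migration tolerance"

print(checkpointed_switchover(ReplicaLink(apply_lag_s=0.1)))
```

The interesting design question is who owns the abort decision – the host, as with the EMC plug-in approach, or the storage layer itself via something like the SATP/PSP modules.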