Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Category Archives: Solid State

Running a VM from a RAM Disk


I posted earlier about some of my experiments with the FusionIO solid-state storage card, SSDs and the feature I spotted in StarWind to create a virtual disk from RAM. These are the quick results I see when running a Windows 2003 R2 virtual machine from a RAM disk.

This is done by creating a RAM disk on a Windows 2008 x64 machine running the StarWind vSAN software. The physical machine is an HP ML110 G5 with 8Gb of RAM.

[Screenshots: creating the RAM disk in StarWind]
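
As an aside, the concept behind a RAM disk itself is very simple: a block device whose sectors live entirely in memory. The snippet below is not StarWind's implementation, just a minimal Python sketch of the idea, enough to show why reads and writes never touch a spindle.

# Minimal sketch of the RAM-disk concept (not StarWind's implementation):
# a "disk" whose blocks live entirely in memory, addressed by LBA.

class RamDisk:
    def __init__(self, size_bytes, block_size=512):
        self.block_size = block_size
        self.block_count = size_bytes // block_size
        self._buf = bytearray(self.block_count * block_size)  # backing store is just RAM

    def read(self, lba, count=1):
        start = lba * self.block_size
        return bytes(self._buf[start:start + count * self.block_size])

    def write(self, lba, data):
        start = lba * self.block_size
        self._buf[start:start + len(data)] = data

# A 6Gb RAM disk like the one in this test would be RamDisk(6 * 1024**3),
# which obviously needs a 64-bit host with that much RAM free.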

In this test I allocated a 6Gb RAM disk on the vSAN host and shared it out via iSCSI to a vSphere 4 host running on an HP ML115 G5, where it shows up as a normal LUN; vSphere is unaware that it is actually physical RAM on another host rather than normal spinning disk (virtualization/abstraction :)).

[Screenshot: the RAM disk presented as an iSCSI LUN to the vSphere host]

I deployed a single Windows 2003 R2 virtual machine onto the 6Gb LUN via the usual process, with thin provisioning enabled.

[Screenshot: the Windows 2003 R2 VM deployed to the 6Gb LUN]

The topology for this test looks like the following:

[Diagram: test topology]

As with previous tests, and based on Eric's work, I used HD Tune Pro (trial) to get some disk access statistics; during the test the iSCSI traffic used c.50% of the bandwidth on the physical box running the StarWind software.

[Screenshot: iSCSI traffic on the StarWind host during the test]
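
HD Tune is essentially timing large sequential reads; if you wanted a rough equivalent from inside the guest, something like the sketch below would do (the device path and read sizes are assumptions, adjust for your own setup). Bear in mind that a single 1Gb/s iSCSI link tops out at roughly 125MB/s in theory, so c.50% utilisation equates to around 60MB/s of storage traffic.

# Very rough stand-in for HD Tune's sequential read test, run inside the guest.
# The device path and sizes below are assumptions - adjust for your own setup.
import os, time

DEVICE = r"\\.\PhysicalDrive0"   # hypothetical: first disk of the Windows guest (needs admin rights)
CHUNK = 1024 * 1024              # 1MB per read
TOTAL = 512 * CHUNK              # read 512MB in total

fd = os.open(DEVICE, os.O_RDONLY | getattr(os, "O_BINARY", 0))
try:
    start = time.time()
    done = 0
    while done < TOTAL:
        data = os.read(fd, CHUNK)
        if not data:
            break
        done += len(data)
    elapsed = time.time() - start
finally:
    os.close(fd)

print("sequential read: %.1f MB/s" % (done / (1024.0 ** 2) / elapsed))
# For reference: a single 1Gb/s iSCSI link is ~125MB/s theoretical maximum,
# so ~50% utilisation is roughly 60MB/s of storage traffic.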

These are the results, which you can compare with Eric's, Simon Seagrave's and my own FusionIO results.

[Screenshots: HD Tune Pro read test results]

So far I have only done read-speed testing, as write testing requires some extra virtual disks; I'll get this done in the coming weeks.

There is no escaping the fact that physical RAM is still expensive in the quantities needed to meet normal VM storage requirements.  There are commercially available hardware SAN products that use this sort of concept, like the RAMSAN and FusionIO, but this is definitely the way the industry is going.

Thin provisioning, linked clones and automated storage tiering (like EMC FAST) are going to be key to delivering this level of performance whilst keeping costs low by minimising physical storage consumption, at least until RAM/SSD prices reach current spinning-disk levels.

These results go to show how this software concept could be scaled up and combined with commodity Nehalem blades or servers, which are capable of supporting several hundred Gb of RAM, to build a bespoke high-performance storage solution that is likely to cost less than a dedicated commercial solid-state SAN product.
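
To put some very rough numbers on that, here's a back-of-the-envelope sizing sketch; every figure in it (RAM per node, OS/StarWind overhead, target capacity) is a hypothetical assumption rather than a real quote, so plug in your own.

# Back-of-the-envelope sizing for a RAM-backed tier built from commodity servers.
# Every figure below is a hypothetical assumption, not a quote.

ram_per_node_gb = 144        # assumed maximum RAM in a Nehalem-class blade/server
os_overhead_gb = 16          # assumed headroom for Windows 2008 + the StarWind software
target_capacity_tb = 2       # assumed size of the fast tier you want to build

usable_per_node_gb = ram_per_node_gb - os_overhead_gb
nodes_needed = -(-(target_capacity_tb * 1024) // usable_per_node_gb)   # ceiling division

print("%d nodes for a %dTB RAM-backed tier (%dGB usable per node)"
      % (nodes_needed, target_capacity_tb, usable_per_node_gb))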

In the real world it’s unlikely that you would want to take this bespoke approach unless you have some very specific requirements, as the trade-off is that a bespoke solution is likely to have a higher ongoing complexity/management cost and is probably less reliable/supportable. I did it “just because I could”; your mileage may vary 🙂

Solid State SAN, Storage vMotion and VMware – HSM for your VMs


You’ve been able to buy solid-state SAN technology like the Tera-RAMSAN from TMS, which gives you up to 1Tb of storage presented over 4Gb/s Fibre Channel or InfiniBand at 10Gb/s… With the cost of flash storage dropping, it’s soon going to fall into the realms of affordability (from memory, a 1Tb SSD SAN was about £250k a year ago, so I’d assume that’s maybe £150k now; I’d be happy to see current pricing if anyone has it though).

If you were able to combine this with a set of ESX hosts dual-connected to the RAMSAN and to traditional equipment (like an HP EVA or EMC Clariion) over an FC or iSCSI fabric, then you could possibly leverage the new Storage vMotion features included in ESX 3.5 to achieve a second level of performance and load levelling for a VM farm.

[Diagram: proposed storage architecture]

It’s pretty common knowledge that you can use vMotion and the DRS features to effectively load-level or average VM CPU and memory load across a number of VMware nodes within a cluster.

Using the infrastructure discussed above could add a second tier of load balancing, without downtime, to a DRS cluster. If a VM needs more disk throughput or is suffering from latency you could move it from FC SCSI or even FATA disks to the more expensive solid-state storage tier (and back again); this ensures you are making the best use of fast, expensive storage versus cheap, slow commodity storage.
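
The decision logic for that second tier is not complicated; the sketch below is purely illustrative, with made-up latency thresholds and a hypothetical move_vm() callback standing in for however you script the actual Storage vMotion.

# Illustrative tiering logic only: the thresholds, the stats source and move_vm()
# are hypothetical stand-ins for a scripted Storage vMotion mechanism.

LATENCY_PROMOTE_MS = 20   # assumed: promote VMs seeing worse latency than this
LATENCY_DEMOTE_MS = 5     # assumed: demote VMs comfortably under this

def rebalance(vm_stats, move_vm):
    """vm_stats maps a VM name to {"latency_ms": float, "tier": "ssd" or "fc"}."""
    for vm, stats in vm_stats.items():
        if stats["tier"] == "fc" and stats["latency_ms"] > LATENCY_PROMOTE_MS:
            move_vm(vm, "ssd")   # Storage vMotion up to the RAMSAN/solid-state LUNs
        elif stats["tier"] == "ssd" and stats["latency_ms"] < LATENCY_DEMOTE_MS:
            move_vm(vm, "fc")    # free up expensive space for busier VMs

# Example:
# rebalance({"sql01": {"latency_ms": 35, "tier": "fc"}},
#           lambda vm, tier: print("%s -> %s" % (vm, tier)))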

Even if VirtualCenter doesn’t have a native API for exposing this type of functionality, or criteria for it in the DRS configuration, you could leverage the plug-in or scripting architecture to use a manager of managers (or here) to map this across an enterprise and across multiple hypervisors (Sun, Xen, Hyper-V).

I can also see EMC integrating flash storage into the array itself; it would be even better if you could transparently migrate LUNs to/from different arrays and disk storage without having to touch ESX at all.

Note: this is just a theory, I’ve not actually tried it, but I am hoping to get some eval kit and do a proof of concept…