Running a VM from a RAM Disk
I posted earlier about some of my experiments with the FusionIO solid-state storage card, SSDs and the feature I spotted in StarWind to create a virtual disk from RAM – these are the quick results I see when running a Windows 2003 R2 virtual machine from a RAM disk.
This is done by creating a RAM disk on a Windows 2008 x64 machine running the StarWind vSAN software. The physical machine is an HP ML110 G5 with 8GB of RAM.
In this test I allocated a 6GB RAM disk on the vSAN host and shared it out via iSCSI to a vSphere 4 host running on an HP ML115 G5, where it shows up as a normal LUN; vSphere is unaware that it is actually physical RAM on a host elsewhere rather than a normal spinning disk (virtualization/abstraction :)).
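As an aside, if you want to sanity-check that the RAM-disk LUN is reachable before pointing vSphere at it, you can log into it from any spare Windows box using the built-in Microsoft iSCSI initiator CLI – a rough sketch only, with placeholders rather than my actual addresses:
iscsicli QAddTargetPortal <IP of the vSAN host>
iscsicli ListTargets
iscsicli QLoginTarget <target IQN reported by ListTargets>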
I deployed a single Windows 2003 R2 virtual machine onto the 6GB LUN via the usual process, with thin provisioning enabled.
The topology for this test looks like the following:
As with previous tests, and based on Eric’s work, I used HD Tune Pro (trial) to get some disk access statistics; during the test the iSCSI traffic used around 50% of the network bandwidth on the physical box running the StarWind software.
These are the results, which you can compare to Eric’s, Simon Seagrave’s and my FusionIO results.
So far I have only done read speed testing, as write-testing requires some extra virtual disks – I’ll get this done in the coming weeks.
There is no escaping the fact that physical RAM is still expensive in the quantities needed to meet normal VM storage requirements. There are commercially available hardware SAN products that use this sort of concept, like the RAMSAN and FusionIO, but this is definitely the direction the industry is heading.
Thin provisioning, linked clones and automated storage tiering (like EMC FAST) are going to be key to delivering this level of performance whilst keeping costs low by minimising physical storage consumption until RAM/SSD prices reach current spinning-disk levels.
These results go to show how this software concept could be scaled up and combined with commodity Nehalem blades or servers, which are capable of supporting several hundred GB of RAM, to build a bespoke high-performance storage solution that is likely to cost less than a dedicated commercial solid-state SAN product.
In the real world it’s unlikely that you would want to take this bespoke approach unless you have some very specific requirements, as the trade-off is that a bespoke solution is likely to have a higher ongoing complexity/management cost and is probably less reliable/supportable. I did it “just because I could”; your mileage may vary 🙂
Comparing the I/O Performance of 2 or More Virtual Machines with SSD, SATA & IOmeter
I’m currently doing some work on SSD storage and virtual machines, so I need an easy way of comparing I/O performance between a couple of virtual machines, each backed by a different type of storage.
I normally use IOmeter for this kind of work, but generally only in a standalone manner – i.e. I run IOmeter inside a single VM guest and get statistics on the console etc.
With a bit of a read of the manual I quickly realised IOmeter was capable of so much more! (amazing things, manuals :)).
Note: you should download IOmeter from the SourceForge project page and not from iometer.org, which seems to host an older, non-maintained build.
You can run a central console – the IOmeter Windows GUI application – and then add any number of “managers”, which are the machines doing the actual benchmarking activities (disk thrashing etc.).
The use of the term manager is a bit confusing, as you would think the “manager” is the machine running the IOmeter console; in fact, each VM or physical server you want to load-test is known as a manager, which in turn runs a number of workers that carry out the I/O tasks you specify and report the results back to a central console (the IOmeter GUI application shown above).
Each VM that you want to test runs the dynamo.exe command with some switches to point it at an appropriate IOmeter console to report results.
For reference:
On the logging machine, run IOmeter.exe.
On each VM (or indeed physical machine) that you want to benchmark at the same time, run the dynamo.exe command with the following switches:
dynamo.exe /i <IP of machine running IOmeter.exe> /n <display name of this machine – can be anything> /m <IP address or hostname of this machine>
In my case:
dynamo.exe /i 192.168.66.11 /n SATA-VM /m 192.168.66.153
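Since you need to run dynamo on every VM under test with its own name and address, I find it handy to drop the command into a small batch file on each guest – a minimal sketch, using the same example values as above (adjust per VM; the file name is just a suggestion):
@echo off
rem start-dynamo.cmd – example only: points this guest at the IOmeter console
set IOMETER_HOST=192.168.66.11
set MANAGER_NAME=SATA-VM
set MANAGER_IP=192.168.66.153
dynamo.exe /i %IOMETER_HOST% /n %MANAGER_NAME% /m %MANAGER_IP%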
You will then see output similar to the following:
The IOmeter console will now show all the managers you have logged on – in my case I have one VM backed to a SATA disk and one VM backed to an SSD disk.
I can now assign some disk targets and access specifications to each worker and hit Start to make it “do stuff which I can measure” 🙂 For more info on how to do this, see the rather comprehensive IOmeter manual.
If you want to watch in real time, click the Results Display tab and move the update frequency slider down to as few seconds as possible.
If you want to compare figures from multiple managers (VMs) against each other, you can just drag and drop them onto the results tab.
Then choose the metric you want to compare from the boxes – which don’t look like normal drop-down elements, so you probably didn’t notice them.
You can now compare the throughput of both machines in real time, next to each other – in this instance the SSD-backed VM achieves less throughput than the SATA-backed VM (more on this consumer-grade SSD in a later post).
Depending on the options you chose when starting the test run, the results may have been logged to a CSV file for later analysis.
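If you did log to a CSV, a quick way to pull out just the per-manager (i.e. per-VM) summary rows is something like the following – this assumes results.csv is whatever filename you chose, and that the per-manager totals sit on lines whose first column is MANAGER, which seems to be how the current builds lay the file out:
findstr /B /C:"MANAGER" results.csv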
Hope that helps get you going – if you want to use this approach to benchmark your storage array with a standard set of representative IOmeter loads, see these VMware Communities threads:
http://communities.vmware.com/thread/197844
http://communities.vmware.com/thread/73745
From a quick scan of the threads, this file seems to be the baseline everyone is measuring against:
http://www.mez.co.uk/OpenPerformanceTest.icf
To use the above file you need to open it with IOmeter, then start up the VMs that you want to benchmark, as described earlier in this post.
You will need to manually assign a disk target to each worker once you have opened the .icf file in IOmeter, unless you set the targets in the .icf file beforehand.
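As an aside, IOmeter can also be driven from the command line if you want to script repeat runs rather than clicking Start each time – roughly along these lines, where /c loads a config file and /r names the results log (check the manual for your build; the file names are just examples, and dynamo still needs to be running on each VM):
iometer.exe /c OpenPerformanceTest.icf /r OpenPerformanceTest-results.csv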
This is the test whilst running, with the display adjusted to show interesting figures – note that the standard test contains a number of different iterations and access profiles; this is just showing averages since the start of the test, not final figures.
This screenshot shows the final results of the run, and the verdict is: overall, consumer-grade SSD sucks when compared against a single 7.2k RPM 1TB SATA drive plugged into an OpenFiler 🙂 I still have some analysis to do on that one – and it’s not quite that simple, as there are a number of different tests run as part of the sequence, some of which are better suited to SSDs.
More posts on SSD & SATA performance for your lab to follow in the coming weeks – stay tuned.