My ramblings on the stuff that holds it all together
Running VMs from a FusionIO Solid State Storage Card and Consumer-grade SSD
Following on from Eric’s post on running VMs from SSDs at this page, and my previous experiments using SSDs to run VMs, I thought I would post my initial (non-scientific) findings from the FusionIO card that I have been loaned.
FusionIO make solid state storage cards packaged as PCIe x8 expansion cards. They use multi-level cell (MLC) NAND storage to create amazingly high-speed direct-attached storage. The Duo640 device I am working with is the mid-range offering; at the higher end, forthcoming versions can support up to 1TB/sec throughput. The Woz is also their chief scientist 🙂
In my test rig I am using it with the Starwind software on Windows 2008 R2 to share the FusionIO card over iSCSI to a couple of vSphere 4 hosts. In this initial test I’m just using a single GbE NIC in both the server and the vSphere client. As you’ll see from the screenshot below, several concurrent VM cloning sessions can actually max out the GbE connection in these hosts, so in this case the NIC was the bottleneck and there is plenty more performance to be had from the card at higher levels of concurrency.
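As a rough back-of-the-envelope check on why a single GbE NIC becomes the bottleneck long before the card does, the arithmetic below sketches the usable iSCSI payload rate of one GbE link; the ~6% protocol-overhead figure is an assumption for illustration, not a measured value.

```shell
# Theoretical payload ceiling of a single GbE link carrying iSCSI traffic
line_rate_bits=1000000000           # 1 Gb/s line rate
raw_bytes=$((line_rate_bits / 8))   # 125,000,000 B/s = 125 MB/s raw
# Ethernet + TCP/IP + iSCSI framing typically eats a mid-single-digit
# percentage; assume ~6% overhead here for a ballpark figure
usable=$((raw_bytes * 94 / 100))    # ~117 MB/s of usable throughput
echo "raw: ${raw_bytes} B/s, usable (approx): ${usable} B/s"
```

Anything the card can sustain beyond roughly that figure simply queues behind the NIC, which matches what the cloning test showed.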
The FusionIO Duo card comes with 2 banks of 320GB of flash (640GB in total). In my initial configuration it isn’t set up to RAID across the 2 banks, but that is possible to improve performance and fault-tolerance.
The FusionIO card doesn’t yet have drivers for vSphere (they are working on them), so you can’t access it directly from an ESX host yet; instead I am connecting to it via the Starwind iSCSI target software.
One issue I found with my Starwind configuration is that the Starwind software isn’t able to see the FusionIO card as raw block storage in the way it can with normal direct-attached storage (SSD/SATA HDD etc.), although the card is visible to Windows Disk Manager as a normal disk. To get it to work I had to format and mount the FusionIO “disks” as NTFS volumes under Windows 2008 R2 Disk Manager, then create virtual disk files on those volumes using the Virtual Disk feature of the Starwind software; these are then accessible both directly to the Windows 2008 host and to my ESX hosts via iSCSI.
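The Windows side of that workaround can be sketched as a DiskPart script. This is an illustrative fragment only: the disk numbers, drive letters, and labels are assumptions, so check “list disk” in DiskPart (or Disk Manager) for the actual device numbers on your own system before running anything like it.

```
rem -- DiskPart script (Windows 2008 R2): format the two FusionIO banks
rem -- as NTFS and mount them; disks 1 and 2 are assumed, verify first
rem -- run with: diskpart /s fusionio-prep.txt
select disk 1
clean
create partition primary
format fs=ntfs quick label="fio-bank-0"
assign letter=F
select disk 2
clean
create partition primary
format fs=ntfs quick label="fio-bank-1"
assign letter=G
rem -- then, in the Starwind console, create an image-file (virtual disk)
rem -- device on each new volume and export it as an iSCSI target
```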
So, based on the same software that Eric used in his post, these are the out-of-the-box numbers the FusionIO card gets. There is still significant scope for fine-tuning to increase performance, but it’s pretty impressive.
FusionIO (non-RAID configuration) inside a VM over iSCSI
FusionIO (non-RAID configuration) – direct attached to a Windows 2008 R2 host
Consumer-grade SSD – direct attached to a Windows 2008 R2 host
This is using the following SSD, which I purchased last year for under £200.
The FusionIO cards aren’t cheap storage; they sit in the thousands-of-pounds price range, but they are FAST! With solid state storage pricing coming down in 2010, and combined with iSCSI target software like Starwind, they are an excellent way to build a very high-performance solid state SAN from DAS technology without enterprise SSD SAN pricing or FC/network interconnects.
When vSphere drivers become available I can see some excellent 2-node/peer-to-peer replicating cluster/vSAN configurations using either Starwind or HP Lefthand Networks vSAN VM appliances, as shown below; removing the dependency on a single shared storage array is a great design goal.
Disclosure: FusionIO and their UK distributor have kindly lent me a 640GB Duo card to work with. I have received no financial compensation, nor have they imposed any copy approval or conditions on what I write about their device – it’s just great and I’m that impressed.