I encountered this problem in my lab. I have the following configuration physically installed on an HP MicroServer for testing (though I will probably move it into a VM later on):
1 x 8GB USB flash drive holding the boot OS
And the following, configured into a single volume accessed over NFSv3 (see this post for how to do that):
1 x 64GB SSD drive as a cache
4 x 160GB 7.2k RPM SATA disks as a RAID volume in a raidz1 configuration
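In ZFS terms, a layout like this maps onto a raidz1 pool with the SSD attached as an L2ARC (cache) device. A minimal sketch of the equivalent commands is below; the device names are illustrative only (the pool name "fast" matches the volume name used later in this post), and on NexentaStor you would normally do this through the web UI instead.

```shell
# Create a raidz1 pool from the four SATA disks (device names are
# examples -- use `format` or the NexentaStor UI to find the real ones)
zpool create fast raidz1 c1d0 c1d1 c2d0 c2d1

# Attach the SSD as an L2ARC read-cache device
zpool add fast cache c3d1

# Share the resulting filesystem over NFS
zfs set sharenfs=on fast
```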
A quick benchmark using IOmeter showed that it was being outperformed on all I/O types by my Iomega IX4-200d, which is odd, as my NexentaStor config should be using an SSD as cache for the volume, making it faster than the IX4. So I decided to investigate.
If you look in Data Management / Data Sets and then click on your volume, you can see how much I/O is going to each individual disk in the volume.
In my case the SSD c3d1 had no I/O at all – and if you click on the name of the volume shown in green (in my case it’s called “fast”) you are shown the status of the physical disks.
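The same per-disk view is also available from a shell, if you prefer the command line to the web UI; assuming the volume is called "fast":

```shell
# Show per-vdev I/O statistics for the pool, refreshed every 5 seconds.
# Once the cache is warm, the L2ARC device should show read activity;
# a cache device sitting at zero is a sign something is wrong.
zpool iostat -v fast 5
```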
So, looking at the following screen: Houston, we have a problem. My SSD is showing as faulted (but no errors are recorded either), so I need to investigate why (and hope it’s still under warranty – if it has actually failed, this will be the 2nd time this SSD has been replaced!)
Attempts to manually online the disk return no error, but don’t work either, so I’m not entirely sure what happened there. I did have to shut down the box and move it, so I re-seated all the connectors, but it still wouldn’t let me re-enable the disk.
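For reference, the shell-level equivalent of what I was attempting looks something like this (pool and device names match my setup; yours will differ):

```shell
# Check the pool and the state of the cache vdev (e.g. FAULTED)
zpool status -v fast

# Attempt to bring the cache device back online
zpool online fast c3d1

# Clear any logged errors against that device and retry
zpool clear fast c3d1
```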
Worth noting that even with this fault the volume remained online, just without the cache enabled, so I was able to Storage vMotion off all the VMs and delete and re-create the volume (this time I re-created it without any mirroring, for maximum performance).
Once I had Storage vMotioned the test VM back (again, no downtime – good old ESX!) I ran some more IOmeter tests, and performance looked a lot better (see below).
I’ll be posting some proper benchmarks later on, but for now it was interesting to see how much better it could perform than my IX4 (although remember there is no data-protection/RAIDZ overhead, so a disk fault will destroy this LUN – good enough for my home lab though, and I plan to rsync the contents off to the IX4 as a backup).
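The rsync backup could be as simple as a nightly one-way copy to an NFS mount of the IX4. A rough sketch, where both paths are hypothetical examples for my setup:

```shell
# One-way mirror of the unprotected LUN onto the IX4 (paths are
# examples only). --delete keeps the backup an exact copy; drop it
# if you want deleted files to survive on the IX4.
rsync -av --delete /volumes/fast/ /mnt/ix4-backup/fast/
```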
Fingers crossed this isn’t a fault with my SSD… time will tell!
Since you are flush with USB flash drives (thumb drives) you might try using these as L2ARC (cache) devices. The write speed is terrible and the read speed not great, but the read I/O is lower-latency than your spinning rust. The L2ARC was designed for “read optimized” storage, and your thumb drives are on the low end of that definition.
Use the “fancy” stuff for ZIL acceleration or production L2ARC, but in the lab the USB flash drive provides some pretty good numbers as L2ARC – especially where 5200-5400 RPM disks are being used in RAIDz1/2/3 applications. You’ve also discovered that the L2ARC device drops out harmlessly upon failure. The ZFS system simply reverts to the actual disk for the correct blocks…
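A sketch of Colin’s suggestion, with illustrative device names for the USB sticks:

```shell
# Add two USB sticks as additional L2ARC devices (names are examples;
# check `zpool status` or the NexentaStor UI for the real ones)
zpool add fast cache c4t0d0 c5t0d0

# Cache devices drop out harmlessly on failure, and can be removed
# from the pool at any time without affecting the data:
zpool remove fast c4t0d0
```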
Thanks Colin – will give that a go, have 4 x empty USB slots in the front and lots of spare thumb drives 🙂
looks like the SSD is marked as faulty again this morning 😦
Cool thing, keep going! I also bought an HP MicroServer recently and am now playing around with ESXi and Nexenta. Currently Nexenta is on a different server with a small Xeon, where it is very fast (Gbit LAN is the bottleneck).
From your set-up picture, it looks like your SSD drive is connected to the ODD port of the motherboard. This port is set as legacy IDE mode. Could it be the cause of your problem?
Brice – that’s a very interesting point, the only place to plug in an SSD on the Microserver is where the CD drive would be connected – will investigate further and see if there are any other settings..
I have been playing around with the MicroServers quite a bit lately. In my test setup I am using 8GB RAM and 4 x 2TB Seagate LP drives, and I installed a CF reader in the ODD bay to be used for boot.
I sprang for the iLO card as well. And an additional NIC.
Have you thought of booting from an SSD PCIe card in the spare slot? They are running pretty cheap and you wouldn’t need much space.
Or you could buy a PCIe SATA card and run the SSD from there. There are also mini-SAS add-on cards if you wanted to switch your ports up. You may be able to get “real” RAID out of it. 🙂
I have been playing with CF, either booting over USB or SATA via all kinds of cool gadgety connectors. No speed tests yet though. We will see what happens when I get an appliance on there.
In speaking with Nexenta, and other folks working on similar technologies, they are all thinking big, fat VMs with lots of memory. I am actually trying to cut down to 4GB ram and see just what I can do with ESXi all in one box. Asking for over a gig for my VSA is too much! The EMC Lifeline VM runs well at under 512MB.
Also looking at the new Gluster Appliance recently released.
We need small JeOS VMs!
Ha yeah I think the problem with Nexentastor is that it uses the RAM as a cache, then off to the SSD then off to the spinning rust; hence the RAM requirement – I would assume the Celerra VSA has a similar model using RAM for caching.
I have 8GB in mine and will be ESX’ing it at some point in the future; boot from USB works surprisingly well too 🙂