Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Category Archives: Nexentastor

Nexentastor CE performance not as good as expected with SSD cache? Check it is actually working

I encountered this problem in my lab – I have the following configuration physically installed on an HP MicroServer for testing (though I will probably move it into a VM later on):

1 x 8GB USB flash drive holding the boot OS

And the following configured into a single volume, accessed over NFSv3 (see this post for how to do that)

1 x 64GB SSD drive as a cache

4 x 160GB 7.2k RPM SATA disks for the data volume, in a raidz1 configuration

A quick benchmark using Iometer showed it being outperformed on all I/O types by my Iomega IX4-200d, which is odd: my Nexentastor config should be using an SSD as a cache for the volume, making it faster than the IX4. So I decided to investigate.

If you look in Data Management / Data Sets and then click on your volume, you can see how much I/O is going to each individual disk in the volume.

[screenshot: per-disk I/O for the volume]
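
You can see the same per-disk breakdown from the command line, too – NexentaStor is OpenSolaris-based, so the standard ZFS tools are available if you can drop from the management console to a shell. A minimal sketch, assuming a volume named "fast" (yours will differ):

    # show per-device read/write ops and bandwidth for the "fast" volume,
    # refreshing every 5 seconds; the SSD appears in a separate "cache"
    # section at the bottom of the output
    zpool iostat -v fast 5

If the cache device is working, its read column should be busy once the working set has warmed up.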

In my case the SSD (c3d1) had no I/O at all. If you click on the name of the volume, shown in green (mine is called “fast”), you are shown the status of the physical disks.

Looking at the following screen: Houston, we have a problem. My SSD is showing as faulted (though no errors are recorded against it), so I need to investigate why – and hope it’s still under warranty; if it has actually failed this will be the second time this SSD has been replaced!

[screenshot: physical disk status, with the SSD marked as faulted]

Attempts to manually online the disk return no error, but don’t work either, so I’m not entirely sure what happened there. I did have to shut down the box and move it, so I re-seated all the connectors, but it still wouldn’t let me re-enable the disk.
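
For reference, this is roughly the equivalent from the shell – the pool and device names here are from my box, so check zpool status for yours:

    # check pool health; the L2ARC SSD shows up under "cache" as FAULTED
    zpool status -v fast

    # try to bring the device back online and clear any recorded errors
    zpool online fast c3d1
    zpool clear fast c3d1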

Worth noting that even with this fault the volume remained online, just without the cache enabled – the L2ARC only holds copies of data that already lives on the pool, so losing it costs performance, not data. That meant I was able to Storage vMotion off all the VMs and then delete and re-create the volume (this time without any redundancy, for maximum performance).

Once I had Storage vMotioned the test VM back (again with no downtime – good old ESX!), I ran some more Iometer tests and performance looked a lot better (see below).

[screenshot: Iometer results after re-creating the volume]

I’ll be posting some proper benchmarks later on, but for now it was interesting to see how much better it could perform than my IX4. Remember, though, that there is no data-protection/RAIDZ overhead, so a single disk fault will destroy this LUN – good enough for my home lab, and I plan to rsync the contents off to the IX4 as a backup.
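
For the backup side, a minimal rsync sketch of what I have in mind – the paths and hostname below are made up for illustration (on NexentaStor, folders live under /volumes/<volume>):

    # one-way copy of the VM folder to the IX4, preserving attributes;
    # --delete keeps the backup an exact mirror of the source
    rsync -av --delete /volumes/fast/vmstore/ root@ix4:/nfs/backup/vmstore/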

Fingers crossed this isn’t a fault with my SSD… time will tell!

Nexentastor, When 1GB just isn’t enough

I have been trying to get my Nexentastor SSD/SATA hybrid NAS working this last week, and I’ve found that the web UI sometimes grinds to a halt for me. I couldn’t quickly find a UNIX ‘top’ equivalent, but the diagnostic reports you can generate from the setup menu command line did indicate that it was short of RAM.
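
For what it’s worth, OpenSolaris does have a top equivalent – prstat – and there is a kernel memory breakdown too, assuming you can get to a raw shell on the appliance rather than just the setup menu:

    # top equivalent: processes sorted by resident memory, updating live
    prstat -s rss

    # kernel memory breakdown, including how much RAM ZFS is holding
    # (needs root)
    echo ::memstat | mdb -k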

The HP MicroServer I am using shipped with 1GB of RAM, and normally that would be fine for a file-server/NAS, but Nexentastor does a fair bit more and is based on OpenSolaris rather than a stripped-down Linux or BSD. The eval guide says 768MB is enough for testing, 2GB is better and 4GB is ideal, so I was already pushing my luck with 1GB for any real use.

So I bit the bullet and ordered 8GB of RAM for the server, which is the maximum you can install. Ironically this cost the same as I paid for the whole MicroServer in the first place (after the cash-back deal), but that reflects the fact it only has two memory slots, so I had to opt for the more expensive 4GB modules.

I went for 8GB because at some point I will probably re-run my experiments under ESXi and deploy this host as part of my management cluster for the vTARDIS.cloud.

I am also booting the OS from a USB flash drive. I had several 2GB units, but it wouldn’t install to them as they didn’t have quite enough space, so I’m using an 8GB flash drive to hold the OS. This isn’t the most performant drive either, so any swapping will be further impacted by the USB speed.

I’m pleased to report that the 8GB RAM upgrade has resolved all the problems with navigating the UI, and it should also yield further I/O performance, as the Nexentastor software uses the extra RAM as cache (the ARC) alongside the SSD (the L2ARC) – there is a good explanation of that on this blog post.
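
If you want to confirm the extra RAM is actually being used, the ARC and L2ARC counters are exposed through kstat – again from a shell; the statistic names below are the standard ZFS ones:

    # current ARC size in bytes
    kstat -p zfs:0:arcstats:size

    # L2ARC size plus hit/miss counters for the SSD cache
    kstat -p zfs:0:arcstats | egrep 'l2_size|l2_hits|l2_misses'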

I’m going to post my I/O benchmarking once I have some further wrinkles ironed out. In the meantime there is an excellent post here with some example benchmarks running Nexentastor in a VM on a slightly more powerful HP ML110 server.

Building a Fast and Cheap NAS for your vSphere Home Lab with Nexentastor

My home lab is always expanding and evolving – no sooner have I started writing up the vTARDIS.cloud configuration than something shiny and new catches my eye! Fear not, I will be publishing the vTARDIS configuration notes over the next two months. In the meantime I have noticed that my IX4-200d NAS has been bogging down a bit recently, which I attribute to the number of VMs I am running across the 5 physical (and up to 50 virtual) ESXi hosts.

The IX4 is great and very useful for protecting my photos and providing general media storage, but I suspect it uses 5.4k RPM disks, and in its RAID5 configuration it performs OK – but I feel the need, the need for speed :)

With the sub-£100 HP MicroServer deal that is on at the moment, I spotted an opportunity to combine it with some recycled hardware into a fast NAS box using some new software: NexentaStor Community Edition. I’ve used OpenFiler and the Celerra VSA a lot in the past, but this has some pretty intriguing features.

Nexentastor allows you to use an SSD as a cache and provides a type of software RAID using Sun’s ZFS technology – you can read a good guide to configuring it inside a VM in this excellent post.

I already have a number of ML115 and ML110 servers, which all boot from 160GB 7.2k RPM SATA disks, and most of the time they do nothing – so an idea was born: I will switch my home lab to boot from 2GB USB sticks (of which I have a plentiful supply) and re-use those fast SATA disks in the HP MicroServer for shared, fast VM storage.

I also have a spare 64GB SSD from my original vTARDIS experiments, which I am planning to re-use as the cache within the MicroServer.

So, the configuration looks like this:

[diagram: MicroServer NAS configuration]

Because I want maximum performance and don’t particularly care about data protection for this NAS, I’m just going to stripe data across all the SATA disks, and I hope the SSD will provide a highly performant front-end read cache for the VMs stored on it (if I understand how it works correctly).
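
In ZFS terms that is simply a pool with no redundancy plus a cache vdev. A sketch of the equivalent command – the pool name and device names are placeholders for illustration, and I’ll actually build it through the NexentaStor web UI:

    # stripe across the four SATA disks (no redundancy – a single disk
    # failure destroys the pool) and attach the SSD as an L2ARC read cache
    zpool create fast c1d0 c1d1 c2d0 c2d1 cache c3d1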

Most of the VMs it will be storing are disposable or easily re-buildable, but I can configure rsync copies between it and my IX4 for anything I want to keep safe (or maybe just use one of the handy NFR licenses Veeam are giving out).

I did consider putting ESXi on the HP MicroServer and running Nexentastor as a VM (which is supported), but I haven’t yet put any more RAM in the MicroServer – I may do this in future and add it to my existing management cluster.

I’ll post up some benchmarks when I’m done.