Following on from my recent blog posts about the various ways to configure ML115 G5 servers to run ESX, I thought I would do some further experimenting on some older hardware that I have.
I have a Dell D620 laptop with a dual-core CPU and 4GB of RAM which is no longer my day-to-day machine; because of the success I had with SSD drives, I installed a 64GB SSD in this machine.
I followed these instructions to install ESXi 4 Update 1 to a USB Lego-brick flash drive (a freebie from EMC a while ago, and one that plays nicely to my Lego geekdom). I can then boot my laptop from this USB flash drive to run ESXi.
I am surprised to say it worked first time – it booted fully and even supports the on-board NIC!
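In case the linked instructions disappear: the commonly used approach at the time was to extract the embedded disk image from the ESXi installer ISO and write it straight to the USB stick from a Linux box. A rough sketch only – the ISO name, extracted path and /dev/sdX device below are placeholders that vary by build, so double-check everything before you dd:

```shell
# Write the ESXi 4 embedded image to a USB stick from a Linux machine.
# WARNING: dd will destroy whatever is on the target device.
mount -o loop ESXi-4.0.0-update01.iso /mnt/iso     # ISO filename is a placeholder
tar xzf /mnt/iso/image.tgz -C /tmp                 # image.tgz contains the .dd image
bunzip2 /tmp/usr/lib/vmware/installer/VMware-VMvisor-big-*.dd.bz2
dd if=/tmp/usr/lib/vmware/installer/VMware-VMvisor-big-*.dd of=/dev/sdX bs=1M
```

Then just set the laptop's BIOS to boot from USB.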
So, there you go – another low-cost ESXi server for your home lab that even comes with its own hot-swappable built-in battery UPS 🙂
The on-board SATA disk controller was also detected out of the box.
A quick look on eBay and D620’s are going for about £250, handy!
Here is a screenshot of the laptop running a nested copy of ESXi. Interestingly, I also told the VM it had 8GB of RAM, when the laptop only has 4GB of physical RAM.
If you have a home lab setup or want to get going with learning VMware's new vSphere product, you will need an x64-capable machine to run it on. It does also run under VMware Workstation – even supporting nested VMs and physical-ESX-to-virtual-ESX vMotion! Unfortunately it won't run on my trusty old HP D530 desktops, which I've used to run ESX 3.5 over the last year or so.
My lab setup uses a couple of HP ML110 servers; they are low-cost and pretty capable boxes – for example, they both have 8GB of RAM and cost me less than £350 each with RAM and disks (although I've added storage from my spares pile).
If you are in the UK, Servers Plus have some great deals on HP ML-series servers, which are great for home lab setups – see some of Techhead's posts on his findings with the ML1xx range here.
Linkage to the Servers Plus £199 + VAT servers here (www.serversplus.com) – if you tell them vinf.net or techhead.co.uk sent you, they may cut you a deal on delivery as they have done in the past (no promises, as I've not had a chance to speak to them).
A note of caution: if you are looking to try out the cool FT features of vSphere you will need to purchase specific CPUs, which may be more expensive – there is a good list of compatible CPUs on Eric's blog here, and some more reading here.
Check before you buy – you can look up the manufacturer's part code to see which CPU each model has, or check with the supplier.
The CPU in my dual-core Xeon ML110 G5 is not compatible with FT 😦
But it does look like the AMD quad-cores may be compatible – check first though, don't take my word for it, I HAVE NOT TRIED IT, although I would like to if someone wants to donate one 🙂
UPDATE: the ML110 G5 with the AMD quad-core CPU IS VMware FT compatible – see link here for more details; I am ordering one now!
If you are interested, here are some performance charts from my home lab running vSphere RC on an HP ML110 with 8GB RAM and 2 x 160GB SATA HDDs, whilst doing various load tests of Exchange 2007 and Windows 2008 with up to 500 concurrent heavy-profile users. These stats are not particularly scientific, but they give you an idea of what these boxes can do – I've been more than happy with mine, and I would recommend you get some for your lab.
These are some general screen grabs; note there are lots of warnings showing – this is what happens when you thin-provision all your VMs and then one fills up rapidly, making the VMFS volume itself run out of space. You have been warned!
I'm running 15 VMs on one ML110; the 2nd box only has 1 VM on it, as I wanted to see how far I could push one box – I've not found a real limit yet! It runs a mix of Windows 2003/2008 virtual machines and doesn't generally break a sweat. Note the provisioned vs. used space columns – Thin Provisioning 🙂 – and I'm also over-subscribing the RAM significantly.
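As a quick back-of-an-envelope check on how far a box like this is over-subscribed, you can total up the provisioned figures from the vSphere client's columns. The per-VM numbers below are invented purely for illustration – substitute your own:

```shell
# Toy overcommit sums -- all figures are invented for illustration.
phys_ram_gb=8       # physical RAM in the host
datastore_gb=160    # size of the VMFS volume

# per-VM: "provisioned_vram_gb:provisioned_disk_gb"
vram_total=0; disk_total=0
for pair in "1:40" "4:160" "4:100" "2:60"; do
  vram_total=$((vram_total + ${pair%%:*}))
  disk_total=$((disk_total + ${pair##*:}))
done

echo "vRAM provisioned: ${vram_total}GB against ${phys_ram_gb}GB physical"
echo "Disk provisioned: ${disk_total}GB against a ${datastore_gb}GB VMFS volume"
```

With thin provisioning, the disk figure only bites when the VMs actually fill their disks – which is exactly how the warnings above came about.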
A note to remember: check the duplex settings on the NICs handling your vMotion traffic.
My updated clustered ESX test lab is progressing (more posts on that in the next week or so)… and I’m kind of limited in that I only have an old 24-port 100Mb Cisco hub for the networking at the moment.
vMotion warns about the switch speed as a possible issue.
I had my Service Console/vMotion NIC forced to 100/full, and when I first tried it, vMotion took 2 hours to get to 10%. I changed the NIC to auto-negotiate whilst the task was running, and it completed in a couple of seconds without breaking the vMotion task, dropping only 1 ping to the VM I moved.
Cool – it's not production or doing a lot of workload, but it's useful to know that despite the warning it will work even if you've only got an old hub for your networking. It's also worth remembering that duplex mismatches can literally add hours or even days onto network transfers.
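On ESX, the service console's esxcfg-nics command is the quickest way to check and change this – the vmnic number below is just an example, so run the list command first to find yours:

```shell
esxcfg-nics -l                      # list NICs with their current speed/duplex
esxcfg-nics -s 100 -d full vmnic1   # force 100/full (the setting that bit me)
esxcfg-nics -a vmnic1               # back to auto-negotiate -- safest with a hub
```

The same settings are also exposed per-NIC in the vSphere client under the host's Network Adapters configuration.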
The following screen dump is from an HP DL380 G5 server that runs all the core infrastructure under VMware Server (the free one) for a friend's company, which I admin sometimes.
It is housed in some co-lo space and runs the average range of Windows servers used by a small but global business: Exchange, SQL, Windows 2003 Terminal Services.
As a result of some planned (but not very well communicated!) power maintenance, the whole building lost power earlier today; when it was restored, I grabbed the following screenshot as the 15 or so virtual machines automatically booted.
Interestingly, all the VMs had been configured to auto-start with the host OS, meaning there wasn't any manual intervention required, even though it was a totally dirty shutdown for both the host and guest OSes (no UPS, as the building and suite are supposed to have redundant power feeds to each rack – in this instance the planned maintenance was on the building wiring, so it required taking down all power feeds for a 5-yearly inspection).
There are no startup-delay settings in the free version of VMware Server, so they all start at the same time. It is interesting to note the following points:
The blue line that makes a rapid drop is the pages/second counter, and the 2nd big drop (green) is the disk queue length. The highlighted (white) line is the overall %CPU time; note the sample interval was 15 seconds on this Perfmon capture.
After it had settled down, I took the following screenshot – it hardly breaks a sweat during its working day. There are usually 10-15 concurrent users on this system from around the world (access provisioned via an SSL VPN device) and a pretty heavily used Exchange mail system.
The box is an HP DL380 G5 with 2 x quad-core CPUs (8 cores in total) and 16GB of RAM; it has 8 x 146GB 15k HDDs in a single RAID 5 set plus a hot spare. It was purchased in early 2007 and cost c.£8,000 (UK prices).
It runs Windows 2003 Enterprise Edition x64 with VMware Server 1.0.2 (yes, it's an old build… but if it ain't broke..) and they have purchased multiple W2K3 Enterprise Edition licences to take advantage of the virtualisation use rights to cover the installed virtual OSes.
It's been in place for a year and hardly ever has to be touched; it's rock-solid, and the company have noticed several marked improvements since they P2V'd their old servers onto this platform, as follows:
Hopefully this goes to show that the free version of VMware's server products can work almost as well if budget is a big concern. ESX would definitely give some better features and make backup easier; they are considering upgrading and combining it with something like Veeam Backup to handle failover/backup.
Techhead has posted a nice article on his ML110 ESX test server, nice alternative to my D530 approach, he’s got a few more disks than I have.
I've not done anything with my home ESX server this week as I've been busy with work, so this will be interesting – it's been powered up all the time with all the VMs spinning, but not doing very much.
Whilst running this set of VMs… (the CPU stats for VMEX01 and VMEX02 are a bit skewed, as I added this bit after the original post and they are both running SETI@home – hence the increased CPU).
So, nothing interesting to see here – but it might be worth bearing in mind for some kind of sizing estimate; this is a single-core CPU (HT enabled) PC with 4GB RAM and a single 500GB SATA disk.
Hopefully I will get some time this week to load up SETI@Home or Folding@Home and see what that does 🙂 It should be a good test to see how well the hypervisor manages CPU timesharing between guests.