
Category Archives: ESX under ESX

8-Node ESXi Cluster Running 60 Virtual Machines – All Running from a Single £500 Physical Server

 

I am currently presenting a follow-up to my previous vTARDIS session for the London VMware Users Group, where I demonstrated a 2-node ESX cluster on cheap PC-grade hardware (the HP ML115 G5).

The goal of this build is to create a system you can use for VCP- and VCDX-type study without spending thousands on normal production-type hardware (see the slides at the end of this page for more on why this is useful). TechHead and I have a series of joint postings in the pipeline covering how to configure the environment and the best hardware to use.

As a bit of a tangent, I have been seeing how complex an environment I can get out of a single server (which I have dubbed v.T.A.R.D.I.S: Nano Edition) using virtualized ESXi hosts. The goals were:

  • Distributed vSwitch and/or Cisco Nexus 1000V
  • Cluster with HA/DRS enabled
  • Large number of virtual machines
  • Single cheap server solution
  • No external hardware networking (all internal vSwitch/dvSwitch traffic)

The main stumbling block I ran into with the previous build was the performance of the SATA hard disks I was using. SCSI was out of my budget, and SATA soon gets bogged down with concurrent requests, which makes it slow, so I started to investigate solid-state storage (previous posts here).

By keeping the virtual machine configurations light and using thin provisioning I hoped to squeeze a lot of virtual machines onto a single disk; my previous findings suggest that cheaper consumer-grade SSDs can support a massive number of IOPS when compared to SATA (Eric Sloof has a similar post on this here).

So, I voted with my credit card and purchased one of these from Amazon. It wasn’t “cheap” at c.£200, but it lets me scale my environment bigger than I could previously manage – which means less power, cost, CO2 and all the other usual arguments you use to convince yourself that a gadget is REQUIRED.

So the configuration I ended up with is as follows:

1 x HP ML115 G5, 8GB RAM, 144GB SATA HDD – c.£300 (see here, though with more RAM added)
1 x 128GB Kingston 2.5” SSDNow V-Series SSD – c.£205

I installed ESX 4 Update 1 (“classic”) on the physical hardware, then installed 8 x ESXi 4 Update 1 instances as virtual machines inside that ESX installation.

[image]

This diagram shows the physical server’s network configuration

[image]

In order for the virtualized ESXi instances to talk to each other, you need to update the security settings on the physical host’s vSwitch (and only that vSwitch) to allow promiscuous mode, as shown below:

[image]
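
If you prefer to script that change, here is a rough sketch using pyvmomi (VMware’s Python SDK – a modern tool for illustration only; the hostname, credentials and vSwitch name are placeholders for my lab, not anything from the original build):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect straight to the physical host (lab only: ignore the self-signed cert).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esx-physical.lab", user="root", pwd="password", sslContext=ctx)
    try:
        host = si.content.searchIndex.FindByDnsName(dnsName="esx-physical.lab", vmSearch=False)
        net_sys = host.configManager.networkSystem
        # Flip the security policy on the existing vSwitch to allow promiscuous mode.
        for vswitch in net_sys.networkInfo.vswitch:
            if vswitch.name == "vSwitch0":
                spec = vswitch.spec
                spec.policy.security.allowPromiscuous = True
                net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
    finally:
        Disconnect(si)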

This diagram shows the virtual network configuration within each virtualized ESXi VM, with the vSwitch and dvSwitch configurations side-by-side.

[image]

I then built a Windows 2008 R2 virtual machine running vCenter 4 Update 1 and added all the ESXi hosts to it for management.

I clustered all the virtual ESXi instances into a single DRS/HA cluster, turning off admission control as we will be heavily oversubscribing the resources of the cluster and this is just a lab/PoC setup.
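
That step was done through the vSphere Client, but for repeatability here is a hypothetical pyvmomi sketch of the same configuration (the datacenter and cluster names are made up):

    from pyVmomi import vim

    def create_lab_cluster(si, datacenter_name="LabDC", cluster_name="vTARDIS"):
        # Find the datacenter by name under the root folder.
        dc = next(c for c in si.content.rootFolder.childEntity
                  if isinstance(c, vim.Datacenter) and c.name == datacenter_name)
        spec = vim.cluster.ConfigSpecEx()
        # HA on, but no admission control - we are massively oversubscribing.
        spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True, admissionControlEnabled=False)
        # DRS on with the default automation level.
        spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True)
        return dc.hostFolder.CreateClusterEx(name=cluster_name, spec=spec)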

[image]

Cluster summary – 8 x virtualized ESXi instances. Note the heavy RAM oversubscription: this server only has 8GB of physical RAM, but the cluster thinks it has nearly 64GB.

[images]

I then built an OpenFiler virtual machine and hooked it up to the internal vSwitch so that the virtualized ESXi VMs can access it via iSCSI. It has a virtual disk on the SSD presenting a 30GB VMFS volume over iSCSI to the virtual cluster nodes (and all the iSCSI traffic is essentially in-memory, as there is no physical networking for it to traverse).

[image]
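
Pointing each virtualized ESXi host at the OpenFiler target is the same few clicks on every node, so it is another candidate for scripting. A sketch of the idea (the OpenFiler IP and the HBA discovery are assumptions for illustration):

    from pyVmomi import vim

    def add_iscsi_target(host, openfiler_ip="10.0.0.10"):
        storage = host.configManager.storageSystem
        storage.UpdateSoftwareInternetScsiEnabled(enabled=True)  # enable the SW iSCSI initiator
        # Find the software iSCSI HBA and give it the OpenFiler as a send target.
        for hba in storage.storageDeviceInfo.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                target = vim.host.InternetScsiHba.SendTarget(address=openfiler_ip, port=3260)
                storage.UpdateInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])
                storage.RescanHba(hbaDevice=hba.device)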

Each virtualized ESXi node then runs a number of nested virtual machines (VMs running inside VMs).

In order to get nested virtual machines to work, you need to enable this setting on each virtualized ESXi host (the nested VMs themselves don’t need any special configuration):

[image]

Once this was done and all my ESXi nodes were running and settled down, I ran a script to build out a whole bunch of nested virtual machines across my 8-node cluster. The VMs aren’t anything special – each has 512MB allocated to it and won’t actually boot past the BIOS, because my goal here is just to simulate a large number of virtual machines and their configuration within vCenter rather than run an actual workload. Remember, this is a single-server configuration and you can’t override the laws of physics: there are only really 8GB of RAM and 4 CPU cores available. The sketch below shows the idea.
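
My original script isn’t reproduced here; as an illustration, a pyvmomi equivalent would look something like this (the VM naming, datastore and guest type are placeholders – the guest type is irrelevant as no OS is ever installed):

    from pyVmomi import vim

    def create_shell_vms(cluster, datastore_name="iSCSI-SSD", count=60):
        pool = cluster.resourcePool
        folder = cluster.parent.parent.vmFolder  # the datacenter's VM folder

        for i in range(count):
            config = vim.vm.ConfigSpec(
                name="nested-vm-{:02d}".format(i + 1),
                memoryMB=512,      # tiny - they only need to sit at the BIOS
                numCPUs=1,
                guestId="winNetStandardGuest",  # arbitrary; these shells never boot an OS
                files=vim.vm.FileInfo(vmPathName="[{}]".format(datastore_name)),
            )
            folder.CreateVM_Task(config=config, pool=pool)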

Each of the virtual machines was connected to a dvSwitch for VM traffic – which you can see here in action (the dvUplink is actually a virtual NIC on the ESXi host).

[images]

I power up the virtual machines in batches of 10 to avoid swamping the host, and the SSD is holding up very well against the I/O.
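
The batching is trivial to script too – a hypothetical sketch of the power-on loop, in the same vein as the build-out sketch above:

    from pyVim.task import WaitForTask

    def power_on_in_batches(vms, batch_size=10):
        for i in range(0, len(vms), batch_size):
            tasks = [vm.PowerOnVM_Task() for vm in vms[i:i + batch_size]]
            for task in tasks:
                try:
                    WaitForTask(task)  # let each batch settle before starting the next
                except Exception:
                    pass  # carry on even if an individual power-on fails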

With all 60 of the nested VMs and the virtualized ESXi instances loaded, these are the load stats:

[image]

I left it idling overnight; these are the performance charts for the physical host. The big spike at 15:00 was the script running to deploy the 60 virtual machines.

[image]

Disk Latency

[image]

Physical memory consumption – still a way to go to get it to 8GB – who says oversubscription has no use? 🙂

[images]

So, in conclusion – this shows that you can host a large number of virtual machines in a lab setup. This obviously isn’t of much use in a production environment, because as soon as those 60 VMs actually start doing something they will consume real memory and CPU, and you will run out of raw resources.

The key to making this usable is the solid-state disk. In my previous experiments I found SATA disks just got soaked under load, causing things like access to the VMFS to fail (see this post for more details).

Whilst not a production solution, this sort of setup is ideal for VCP/VCDX study, as it allows you to play with all the enterprise-level features like the dvSwitch and DRS/HA that really need more than just a couple of hosts and VMs to understand properly. For example, you can power off one of the virtual ESXi nodes to simulate a host failure and invoke the HA response; similarly, you can disconnect the virtual NIC from the ESXi VM to simulate the host-isolation response.
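
Even that failure testing can be driven from the outer layer with a few lines – a sketch that hard-powers-off one of the ESXi VMs so HA sees a dead host (the VM name is a placeholder):

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def kill_virtual_host(si, esxi_vm_name="esxi-node-03"):
        # Find the virtualized ESXi host's VM in the outer inventory by name.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == esxi_vm_name)
        view.DestroyView()
        WaitForTask(vm.PowerOffVM_Task())  # hard off - HA should restart its VMs elsewhere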

Whilst this post has focused on non-production/lab scenarios, this setup could be used to test VMware patch releases for production services if you are short on hardware – you can quite happily run Update Manager in this solution.

If you run this lab at home it’s also very power-efficient and quiet. There are no external cables or switches other than a crossover cable to a laptop to run the VI Client and administer it; you could comfortably have it in your house without it bothering anyone – and with an SSD there is no hard-disk noise under load either 🙂

Thin provisioning also makes good use of an SSD in this situation, as this screenshot of the 30GB virtual VMFS volume shows.

[image]

The only thing you won’t be able to play around with seriously in this environment is the new VMware FT feature. It is possible to enable it using the information in this post and learn how to enable/disable it, but it won’t remain stable: the secondary VM will lose sync with the primary after a while, as FT doesn’t seem to work very well on a nested VM. If you need to use FT, for now you’ll need at least 2 physical FT-capable servers (as shown in the original vTARDIS demo).

If you are wondering how noisy it is at power-up/down, TechHead has a video on YouTube showing the scary-sounding start-up noise, and how quiet it gets once the fan control kicks in.

ML115 G5 Start-up Noise

Having completed my VCP3 and VCP4, I’m on the path to my VCDX; next up is the Enterprise exam, so this lab is going to be key to my study when the vSphere exams are released.


Installing ESXi on a Laptop

 

Following on from my recent blog posts about the various ways to configure ML115 G5 servers to run ESX, I thought I would do some further experimenting on some older hardware that I have.

I have a Dell D620 laptop with a dual-core CPU and 4GB of RAM which is no longer my day-to-day machine; because of the success I had with SSD drives, I installed a 64GB SSD in it.

I followed these instructions to install ESXi 4 Update 1 onto a USB Lego-brick flash drive (a freebie from EMC a while ago that plays nicely to my Lego geekdom). I can then boot my laptop from this USB flash drive to run ESXi.

[image]

I am surprised to say it worked first time – it booted fully and even supports the on-board NIC!

[images]

So, there you go – another low-cost ESXi server for your home lab that even comes with its own hot-swappable built-in battery UPS 🙂

The on-board SATA disk controller was also detected out of the box.

[image]

A quick look on eBay shows D620s going for about £250 – handy!

Here is a screenshot of the laptop running a nested copy of ESXi; interestingly, I also told the VM it had 8GB of RAM when the laptop only has 4GB of physical RAM.

[image]

vSphere ESXi as a VM – VMkernel Traffic Not Working

 

In the lab I am currently working with, I have a set of vSphere 4 ESXi installations running as virtual machines and configured in an HA cluster. This is a great setup for testing VM patches and general ops procedures, or for learning about VMware HA/DRS/FT etc. (this lab is running on a pair of ML115 G5 servers but would work equally well on just one).

[image]

Everything installed OK and I can ping the virtual ESX servers from the vCenter host that manages the cluster (the warning triangle means there is no management-network redundancy – I can live with that in this lab).

All ESX hosts (physical and virtual) are connected via iSCSI to a machine running OpenFiler, and the storage networking works OK. However, when I configured the vMotion & FT private networks between the virtual ESX hosts, I could not ping the vMotion/FT IP addresses using vmkping, indicating a communication problem. Normally this would be a VLAN or routing issue, but in this instance all the NICs and IP addresses for my lab reside on a flat 10.0.0.0/8 network (it’s not production, just a lab).

[images]

After some digging I came across this post on running full ESX as a VM, and noted the section on setting the vSwitch to promiscuous mode, so I tried that on the vSwitch of the physical ESX host that the two ESXi VMs were running on:

[image]

And now the two virtual ESXi nodes can communicate via vmkping:

[image]

Problem solved and I can now vMotion nested VMs between each virtual ESX host – very clever!

vSphere – How to Enable FT for a Nested VM

 

As in my previous post, I am working on a lab with virtual ESX 4 servers in it. I can vMotion VMs from a physical vSphere cluster into the virtual vSphere cluster perfectly, and performance is very good (just one dropped ping in my testing).

One of the physical hosts belongs to www.techhead.co.uk, which he has kindly lent for this joint experiment – see his posts here, here and here on running vSphere on these HP ML115 G5 servers and their FT compatibility. We have some joint postings in the pipeline on guest performance with complicated apps like SQL & Exchange when protected via FT, so keep your eyes peeled.

As the physical ESX hosts themselves are FT-compatible, I thought I’d see if I could enable FT for a VM running inside a virtual ESX server cluster – so, a VM running inside a hypervisor, inside another hypervisor..!

[images]

Out of the box, unfortunately not, as it gives the following error message 😦

Power On virtual machine

Record/Replay is not supported on this CPU for this guest operating system. You may have an incompatible CPU, you may have specified the wrong guest operating system type, or you may have conflicting options set in your config file. See the online help for a list of supported guest operating systems, CPUs and associated config options. Unable to enter fault tolerance mode.

To work around this you can set the following advanced (and likely totally unsupported) options to enable FT on the nested VM – the default is/was false (thanks to the comment on this post for the replay.allowBTOnly = TRUE setting!):

[images]
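
If you’d rather not edit settings by hand, the same reconfiguration can be pushed through the API – a hypothetical pyvmomi sketch (the VM should be powered off first, and this is every bit as unsupported):

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def allow_ft_on_nested_vm(vm):
        # Inject the replay.allowBTOnly advanced option into the VM's configuration
        # (any other option shown in the screenshots would be added the same way).
        spec = vim.vm.ConfigSpec(
            extraConfig=[vim.option.OptionValue(key="replay.allowBTOnly", value="TRUE")]
        )
        WaitForTask(vm.ReconfigVM_Task(spec=spec))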

And here it is – the nested VM running, with FT enabled:

[image]

Very nice

Later on you can see some warnings about hosts getting a bit behind. I also had some initial problems getting FT to bring up the secondary VM properly – the UI said it was restarting and got stuck there. I dropped the virtual ESXi hosts down to a single vCPU rather than two and it worked OK from then on; I decided to do this because the virtual ESXi nodes were coming up reporting 2 x quad-core CPUs whilst the physical host only has 1 x quad-core CPU, so I guess that was causing some confusion.

At this point both of my virtual ESXi hosts were on the same physical vSphere server, and I seemed to have problems with the secondary getting behind (the vLockstep interval).

In this instance my nested VM is running an x86 Windows 2003 unattended setup.

[images]

I vMotioned one of the virtual ESXi hosts to the second physical vSphere server (very cool in itself) and it seemed to be better for a while; I assume there was some CPU contention from the nested VM.

[images]

However, in the end it flagged up similar errors; I assume this is due to the overhead of running a VM inside a hypervisor, inside another hypervisor 🙂 Still, this is a lab setup, and it will prove very useful if you have to learn about this stuff or experiment with different configurations.

This is probably totally unsupported – use at your own risk – but it does work well enough to play about with in the lab.